Human-in-the-Loop Security for AI Agents
Not every agent action should be auto-allowed or auto-denied. Human approval workflows bridge the gap between full autonomy and full lockdown.
Kvlar v0.3.0 ships human approval webhooks, a Python SDK, health check endpoints, and graceful shutdown — the building blocks for production deployments.
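To make the approval-webhook idea concrete, here is a minimal sketch of a receiver that parses a pending tool call and returns an allow/deny decision. The payload shape, field names, and `handle_approval_request` helper are illustrative assumptions, not Kvlar's actual webhook schema.

```python
import json

def handle_approval_request(raw_body, approver):
    """Parse a (hypothetical) approval webhook payload and return a decision.

    `approver` is any callable taking (tool, args) and returning True/False,
    e.g. a function that pings a human in chat and waits for a click.
    """
    event = json.loads(raw_body)  # assumed fields: id, tool, args
    approved = approver(event["tool"], event.get("args", {}))
    return json.dumps({"id": event["id"], "decision": "allow" if approved else "deny"})

# Example policy: a human stand-in that only approves read-only tools.
body = json.dumps({"id": "req-1", "tool": "read_file", "args": {"path": "/etc/hosts"}})
response = handle_approval_request(body, lambda tool, args: tool == "read_file")
```

The point of the pattern: the proxy blocks on the webhook response, so the human (or a stand-in policy like the lambda above) becomes part of the tool-call path rather than an after-the-fact auditor.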
AI agents are gaining powerful capabilities — but who controls what they do? Kvlar is an open-source policy engine that brings runtime security to AI agent tool calls.
Edit your security policies and see changes instantly — no proxy restart needed. Kvlar v0.2.0 ships filesystem watching, atomic policy swaps, and graceful error handling.
The Model Context Protocol gives AI agents powerful tool access — but ships with no security layer. Here's why that's a problem and what you can do about it.
Security policies for AI agents should be testable, version-controlled, and reviewed in PRs — just like application code. Here's how to build a testing practice around agent security policies.
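Treating policies as code means they get unit tests that run in CI on every PR. The sketch below shows the shape of such a test: the policy format and `evaluate` helper are invented for illustration and are not Kvlar's actual policy syntax.

```python
# Hypothetical sketch: a policy as version-controlled data, plus the tests
# that would be reviewed in the same PR as any policy change.

def evaluate(policy, tool_call):
    """Return 'allow', 'deny', or 'ask' for a tool call under a policy."""
    for rule in policy["rules"]:
        if tool_call["tool"] == rule["tool"]:
            return rule["action"]
    return policy.get("default", "deny")  # fail closed by default

POLICY = {
    "default": "deny",
    "rules": [
        {"tool": "read_file", "action": "allow"},
        {"tool": "delete_file", "action": "ask"},  # escalate to a human
    ],
}

# Assertions encode the security intent, so a bad edit fails CI, not production:
assert evaluate(POLICY, {"tool": "read_file"}) == "allow"
assert evaluate(POLICY, {"tool": "delete_file"}) == "ask"
assert evaluate(POLICY, {"tool": "exec_shell"}) == "deny"  # falls through to default
```

The final assertion is the one that matters most in practice: any tool the policy doesn't mention must fall through to deny, and a test pins that behavior down.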