🐊 Policy Controller for Kubernetes
Updated Mar 24, 2026 · Go
AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering for autonomous AI agents. Covers all 10 of the OWASP Agentic Top 10.
Runtime policy enforcement for AI agents. Cryptographic audit trail, human-in-the-loop approvals, kill switch. Zero code changes.
API that leverages Clair to scan Docker Registries and Kubernetes Clusters for vulnerabilities
Governance gateway for AI agents — bounded, auditable, session-aware control with MCP proxy, shell proxy & HTTP API. Works with Cursor, Claude Code, Codex, and any MCP-compatible agent.
The antivirus for OpenClaw — approve dangerous actions, scan skills, block secret leaks, and keep humans in control.
ClawLess — A serverless browser-based runtime for Claw AI Agents powered by WebContainers
INTERCEPT / Policy as Code Auditing
[DEPRECATED] Moved to microsoft/agent-governance-toolkit
Open-source firewall for AI agents. Policy engine that audits and controls what OpenClaw, Claude Code, Cursor, Codex, and any AI tool can do on your machine.
RBAC/ABAC/ReBAC policy engine for Python with policy sets, condition DSL, and hot reload
The STAPL policy language for tree-structured, attribute-based access control policies
The control layer for AI agents. Intercept enforces hard limits on every MCP tool call before execution. Rate limits, spend caps, access controls. Open source.
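The pattern this description hints at — checking every tool call against hard limits before it executes — can be sketched generically. This is an illustrative sketch only, with hypothetical names throughout; it is not the actual API of Intercept or any other project listed here:

```python
# Illustrative pre-execution guard for agent tool calls.
# All names are hypothetical, not any listed project's real API.

class PolicyViolation(Exception):
    """Raised when a tool call would exceed a configured limit."""

class ToolCallGuard:
    """Wraps tool functions and enforces hard limits before each call."""

    def __init__(self, max_calls: int, spend_cap: float):
        self.max_calls = max_calls   # rate limit: total calls allowed
        self.spend_cap = spend_cap   # spend cap: total cost allowed
        self.calls = 0
        self.spent = 0.0

    def guard(self, tool, cost: float = 0.0):
        def wrapped(*args, **kwargs):
            # Checks run BEFORE the tool executes, so a violating
            # call is blocked rather than rolled back.
            if self.calls >= self.max_calls:
                raise PolicyViolation("rate limit exceeded")
            if self.spent + cost > self.spend_cap:
                raise PolicyViolation("spend cap exceeded")
            self.calls += 1
            self.spent += cost
            return tool(*args, **kwargs)
        return wrapped

guard = ToolCallGuard(max_calls=100, spend_cap=5.00)
search = guard.guard(lambda q: f"results for {q!r}", cost=0.01)
search("weather")  # under both limits, so the wrapped tool runs
```

The key design point shared by tools in this category is that the guard sits between the agent and the tool, so enforcement needs no changes to the tool itself.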
Implementation of OASIS XACML 2.0 & 3.0 specification in Java programming language
ReleaseGuard is an open-source artifact policy engine and hardening suite. It scans, transforms, obfuscates, attests, and verifies release artifacts before they ship across every build ecosystem.
How to build your own policy engine
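At its core, the kind of engine these projects implement reduces to matching a request against an ordered list of rules and returning the first decision, with a deny-by-default fallback. A minimal sketch of that idea (hypothetical names; not any listed project's API):

```python
# Minimal first-match policy engine, for illustration only.
from dataclasses import dataclass
from typing import Callable

Request = dict  # e.g. {"action": "shell.exec", "user": "alice"}

@dataclass
class Rule:
    effect: str        # "allow" or "deny"
    action: str        # action to match; "*" matches any action
    condition: Callable[[Request], bool] = lambda r: True

    def matches(self, req: Request) -> bool:
        return self.action in ("*", req.get("action")) and self.condition(req)

@dataclass
class PolicyEngine:
    rules: list[Rule]
    default_effect: str = "deny"  # deny-by-default is the safe baseline

    def evaluate(self, req: Request) -> str:
        for rule in self.rules:   # first matching rule wins
            if rule.matches(req):
                return rule.effect
        return self.default_effect

engine = PolicyEngine(rules=[
    Rule("deny", "shell.exec", condition=lambda r: r.get("user") != "admin"),
    Rule("allow", "shell.exec"),
    Rule("allow", "http.get"),
])

print(engine.evaluate({"action": "http.get", "user": "bob"}))    # allow
print(engine.evaluate({"action": "shell.exec", "user": "bob"}))  # deny
print(engine.evaluate({"action": "db.drop", "user": "admin"}))   # deny (default)
```

Real engines in this list layer RBAC/ABAC attributes, condition DSLs, combining algorithms, and hot reload on top of this same evaluate-rules-in-order core.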
An in-cluster templating controller. Manage, mutate, and validate resources using webhooks and reconciliation. Backed by Jsonnet.
AI got hands. This is the leash. Policy, audit, kill switch for any AI agent with access to your accounts.
🔪 Open-source safety firewall for AI agents. Intercepts tool calls before they execute, enforces YAML policies, and kills dangerous operations in real-time. Works with OpenAI, Anthropic, LangChain, and MCP. She doesn't guard. She kills.