Our team has spent the past two years deploying AI agents in enterprise environments, across 60+ deployments. The same governance problems kept recurring: how do you certify reliability, enforce policy, route and orchestrate context, monitor behavior, and manage agent identity, all without bolting together a pile of disconnected tools?
So we built a unified stack and open-sourced it under the Cohorte AI GitHub organization:
- TrustGate — black-box AI reliability certification via self-consistency sampling and conformal calibration
- Guardrails — declarative YAML policy engine for AI agent guardrails
- Context Router — intelligent context routing engine for AI agents
- Context Kubernetes — declarative orchestration of enterprise knowledge for agentic AI systems
- Agent Monitor — governance-first observability with kill switches
- Agent Auth — agent-specific identity and access management for AI agents
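TrustGate's actual certification pipeline lives in its repo; purely as a rough sketch of the general technique its description names (self-consistency sampling plus split-conformal calibration), here is a minimal, hypothetical version. Every function name here is illustrative, not TrustGate's API: sample an answer several times, score agreement with the majority, and calibrate a certification threshold on held-out examples where the majority answer was known to be correct.

```python
import math
from collections import Counter

def agreement_score(samples):
    """Fraction of sampled answers that agree with the majority answer."""
    counts = Counter(samples)
    top = counts.most_common(1)[0][1]
    return top / len(samples)

def conformal_threshold(cal_scores, alpha=0.1):
    """Split-conformal quantile over nonconformity scores (1 - agreement),
    computed from calibration runs where the majority answer was correct.
    With miscoverage level alpha, certified answers are correct with
    probability >= 1 - alpha under exchangeability."""
    n = len(cal_scores)
    nonconformity = sorted(1.0 - s for s in cal_scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return nonconformity[k]

def certify(samples, q_hat):
    """Certify the majority answer iff its nonconformity is within q_hat."""
    return (1.0 - agreement_score(samples)) <= q_hat

# Agreement scores from calibration prompts whose majority answer was correct:
q_hat = conformal_threshold([1.0, 0.9, 0.8, 0.7], alpha=0.25)
print(certify(["A", "A", "A", "B"], q_hat))  # high agreement: certified
print(certify(["A", "B", "C", "D"], q_hat))  # no consensus: refused
```

The point of the conformal step is that the threshold is not a hand-tuned magic number: it is the smallest cutoff that achieves the target coverage on the calibration set, which is what makes the certification "black-box" with respect to the underlying model.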
All of it is open source, Python-based, and released under Apache 2.0. The architecture is documented in the free playbook The Enterprise Agentic Platform:
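The real Guardrails schema is defined in its repo; purely as an illustration of what a declarative policy engine does, here is a hypothetical policy (shown as the Python dict a YAML file would load into) and a minimal evaluator. The rule names and fields are invented for this sketch:

```python
# Hypothetical policy, mirroring the shape of a declarative YAML file
# (the real Guardrails schema lives in the Cohorte-ai repo).
POLICY = {
    "rules": [
        {"name": "no-pii", "deny_if_contains": ["ssn", "credit_card"]},
        {"name": "max-output", "max_chars": 2000},
    ]
}

def evaluate(policy, text):
    """Return the names of all rules the text violates."""
    violations = []
    for rule in policy["rules"]:
        terms = rule.get("deny_if_contains", [])
        if any(term in text.lower() for term in terms):
            violations.append(rule["name"])
        if "max_chars" in rule and len(text) > rule["max_chars"]:
            violations.append(rule["name"])
    return violations

print(evaluate(POLICY, "customer ssn attached"))  # flags the no-pii rule
```

The appeal of the declarative approach is that policies are data, not code: security teams can review and version a YAML file without reading the agent's source.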
Book: https://www.cohorte.co/playbooks/the-enterprise-agentic-platform
GitHub org: https://github.com/Cohorte-ai
The research behind the stack is also public. Charafeddine Mouzouni has published three papers on the underlying problems we kept running into in practice: exploitation surfaces in LLM agents, reliability certification for AI agents, and routing dynamics in Mixture-of-Experts systems.
Comments URL: https://news.ycombinator.com/item?id=47860859
Points: 2
# Comments: 0