We built an open-source proxy that enforces LLM agent rules at the API layer – 700 GitHub stars

Cross-posting here because this problem affects everyone building with AI agents.

Prompt-based guardrails fail. The model follows your system prompt in a demo, then ignores the rules once the context grows or the agent chains multiple steps.

We built Caliber - an open-source proxy that reads your rules from plain markdown and enforces them at the API layer, not in the prompt. Every call. Provider-agnostic.
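To make the idea concrete, here's a minimal sketch of API-layer rule enforcement in Python. This is an illustration of the general pattern, not Caliber's actual code: the rule format, function names, and matching logic are all hypothetical.

```python
# Hypothetical sketch: enforce markdown-defined rules on every outgoing
# LLM API request, independent of the prompt. Not Caliber's implementation.
import re

def load_rules(markdown_text: str) -> list[str]:
    """Treat each '- ' bullet in the markdown as a forbidden pattern (assumed format)."""
    rules = []
    for line in markdown_text.splitlines():
        line = line.strip()
        if line.startswith("- "):
            rules.append(line[2:])
    return rules

def enforce(request_body: str, rules: list[str]) -> tuple[bool, list[str]]:
    """Check a serialized API request against every rule, on every call."""
    violations = [r for r in rules if re.search(r, request_body, re.IGNORECASE)]
    return (len(violations) == 0, violations)

rules_md = """
# Agent rules
- rm -rf
- DROP TABLE
"""
rules = load_rules(rules_md)
ok, hits = enforce('{"messages":[{"content":"please run rm -rf /tmp"}]}', rules)
# ok is False; hits contains the matched rule, so the proxy can block the call
```

Because the check sits in the proxy rather than the prompt, it runs deterministically on every request regardless of how long the context gets.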

Just hit 700 GitHub stars ⭐ and nearly 100 forks - the reception from devs building with AI has been amazing.

Repo: https://github.com/caliber-ai-org/ai-setup

Would love:

- Feedback on the approach

- Feature requests from people building AI agents

- Anyone who wants to contribute to the project

Building this open-source for the community.

submitted by /u/Substantial-Cost-429
