I've been working on Manifest, an open-source AI cost optimization tool. The idea is simple: instead of sending every request to the same expensive model, it routes each one to the cheapest model that can handle it. Simple question → cheap model. Complex coding task → heavier model.

Many people are already paying for subscriptions (ChatGPT Plus, GitHub Copilot, Ollama Cloud Pro, etc.) but still pay separately for API access on top of that, so we added direct subscription support: just connect your existing plan and route across all of its models.

Curious about this community: how do you handle your AI costs? Do you stick with one provider, use multiple, or have you tried any routing/optimization setup? Manifest is free, runs locally, MIT license.
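For anyone wondering what "routes each one to the cheapest model that can handle it" means in practice, here's a minimal sketch of complexity-based routing. To be clear, this is not Manifest's actual code; the model names, prices, and the scoring heuristic are all placeholders I made up for illustration:

```python
# Illustrative sketch of cost-based routing. NOT Manifest's implementation:
# model names and the complexity heuristic are placeholders.

CHEAP_MODEL = "small-model"   # hypothetical cheap tier
HEAVY_MODEL = "large-model"   # hypothetical expensive tier

def estimate_complexity(prompt: str) -> float:
    """Crude heuristic: longer prompts and coding-related keywords score higher."""
    score = min(len(prompt) / 2000, 1.0)
    if any(kw in prompt.lower() for kw in ("refactor", "debug", "implement", "stack trace")):
        score += 0.5
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.4) -> str:
    """Send simple requests to the cheap model, complex ones to the heavy one."""
    return HEAVY_MODEL if estimate_complexity(prompt) >= threshold else CHEAP_MODEL
```

A real router would score on more signals than prompt length (task type, required context window, tool use) and would weigh per-token prices, but the shape is the same: classify the request, then pick the cheapest model above the required capability bar.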