Spent the last few months talking to people who run heavy questions through chat (investments, career moves, technical decisions). I kept hearing the same three complaints: chat threads get lost in the weeds, comparing models across multiple tabs is tedious, and deep research reports are too long to read and unreliable.

So we built a canvas mode. You ask one question. Three agent instances (normally chat 5.5, opus 4.7, and gemini 3.1) investigate different angles in parallel: different framings, different evidence bases. Then they actually debate each other. You watch the disagreement and steer.

Test question we ran: "Will the AI bubble pop in 2026?" The models disagreed in interesting ways, and I think the debate covered depth you don't normally get from a single LLM. You can see the results for yourself, and you can check it out here.

Would love for you all to try it. Drop your hardest question in the comments and I'll run the canvas and share the verdict back. Or try it yourself (free, credits on us, and you can reach out for more).