I am researching AI infrastructure and would value the perspective of someone close to enterprise AI deployment.
At a high level, here is the pattern we keep seeing: as enterprises move from copilots to autonomous or semi-autonomous agents, those agents increasingly need to take actions across APIs, internal systems, memory, and external services. The question is whether existing security, identity, observability, and governance tools are sufficient once agents start acting, or whether enterprises need a dedicated control layer that governs what agents are allowed to do at runtime.
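To make "action-level control" concrete, here is a minimal sketch of what such a runtime control layer might look like: a policy check that sits between the agent and its tools and allows or denies each action before dispatch. All names here (`ActionPolicy`, `execute`, the tool identifiers) are hypothetical illustrations, not any real product's API.

```python
# Hypothetical sketch: a policy layer between an agent and its tools that
# allows or denies each action at runtime. Names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Action:
    agent_id: str
    tool: str            # e.g. "crm.read", "email.send"
    params: dict = field(default_factory=dict)


class ActionPolicy:
    """Allow-list of permitted tools per agent identity."""

    def __init__(self, allowed: dict[str, set[str]]):
        self.allowed = allowed  # agent_id -> set of permitted tool names

    def check(self, action: Action) -> bool:
        return action.tool in self.allowed.get(action.agent_id, set())


def execute(action: Action, policy: ActionPolicy) -> str:
    # Every action passes through the policy check; deny by default.
    if not policy.check(action):
        raise PermissionError(f"{action.agent_id} may not call {action.tool}")
    # ...dispatch to the real tool here, and audit-log the decision either way.
    return f"executed {action.tool}"


policy = ActionPolicy({"support-agent": {"crm.read", "ticket.update"}})
print(execute(Action("support-agent", "ticket.update", {"id": 42}), policy))
# an attempt to call "email.send" would raise PermissionError instead
```

Real deployments would obviously need richer constraints (parameter-level rules, rate limits, approval workflows, audit trails), but even this toy version shows why the control point is the action itself rather than a static credential.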
The main questions I am trying to answer are:
- Where are you today with agent deployment? Are you already deploying autonomous or semi-autonomous agents into production, or are you still mostly experimenting, and what is currently holding back broader rollout?
- To what extent are control, governance, auditability, or policy enforcement limiting production deployment? Is this already a real blocker for you, or still more of a future concern?
- When agents interact with APIs, memory, or external tools, how are permissions and constraints handled today? Do you see action-level control as something you would expect to buy as core infrastructure?
- How would you categorize a product like this in your stack? Would you compare it more to security infrastructure, IAM, observability, sandboxing, or something else?
- Which team would most likely own this internally, and from what budget? Would it sit with AI platform, engineering, security, identity, compliance, or a business unit?
- What product setup would you find most credible: embedded in an agent framework, offered by a hyperscaler, or independent and cross-platform? What proof or pilot would you need to see before taking it seriously?
- What would make this a must-have for you rather than a nice-to-have? And what would make you dismiss the category altogether?
If anyone has input on any of these, I would appreciate your insights.