I wrote a survey of how AI capabilities are migrating into the database layer, and found at least four architecturally distinct categories: vector databases (embedding similarity), ML-in-database (train-then-predict in SQL), LLM-augmented (route to LLM per query), and predictive databases (Bayesian inference at query time, no model lifecycle).
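To make the last distinction concrete: here's a minimal toy sketch (not Aito's actual algorithm) of what "Bayesian inference at query time, no model lifecycle" means. Nothing is trained or stored ahead of time; a naive-Bayes estimate is computed directly over the raw rows when the query arrives:

```python
rows = [
    {"product": "milk", "weekday": "sat", "club_member": True},
    {"product": "milk", "weekday": "sun", "club_member": True},
    {"product": "beer", "weekday": "sat", "club_member": False},
    {"product": "beer", "weekday": "fri", "club_member": False},
    {"product": "milk", "weekday": "sat", "club_member": True},
]

def predict(rows, target, given, smoothing=1.0):
    """Estimate P(target = v | given) with naive Bayes over raw rows.

    No fitted model exists: counts are taken at query time, which is
    also why latency scales with dataset size.
    """
    scores = {}
    values = {r[target] for r in rows}
    for v in values:
        matching = [r for r in rows if r[target] == v]
        # Laplace-smoothed prior P(target = v)
        p = (len(matching) + smoothing) / (len(rows) + smoothing * len(values))
        # multiply in smoothed per-feature likelihoods P(feat = val | target = v)
        for feat, val in given.items():
            hits = sum(1 for r in matching if r[feat] == val)
            p *= (hits + smoothing) / (len(matching) + 2 * smoothing)
        scores[v] = p
    total = sum(scores.values())
    return {v: p / total for v, p in scores.items()}

probs = predict(rows, "product", {"weekday": "sat", "club_member": True})
```

Contrast with the ML-in-database category, where the same prediction would require an explicit train step producing a model object that must then be versioned and refreshed.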
The post covers how inference actually works in each category, with architecture diagrams and a comparison table. It also discusses what the taxonomy leaves out: feature stores, AutoML platforms, and AI-autonomous databases like Oracle 26ai.
Disclosure: I'm the co-founder of Aito, which falls in the predictive database category. The comparison includes where our approach falls short (latency scales with dataset size, and we have the smallest ecosystem).
https://aito.ai/blog/the-ai-database-landscape-in-2026-where-does-structured-prediction-fit/
Curious whether people think this taxonomy holds up, or if there's a fifth category I'm missing.