LLMOps: The Future of MLOps for Generative AI
Operationalizing Generative AI at scale depends on reducing model training, selection, and deployment costs, while ensuring AI fairness. Introducing LLMOps.