Couchbase, a cloud database platform, has introduced vector search as a new feature in its Capella™ Database-as-a-Service (DBaaS) and Couchbase Server. The feature lets AI applications run on premises, across clouds, and on mobile and IoT devices at the edge. The platform also adds LangChain and LlamaIndex support for increased developer productivity. As a multipurpose database platform, Couchbase reduces architectural complexity, allowing organizations to build trustworthy adaptive applications more quickly and easily.

Businesses are racing to build hyper-personalized, high-performing, adaptive applications powered by generative AI that deliver exceptional experiences to their end users. Everyday use cases include chatbots, recommendation systems, and semantic search. For example, a customer shopping for shoes that complement a particular outfit can narrow their online product search by uploading a photo of the outfit to a mobile application, along with a brand name, customer rating, price range, and availability in a specific geographical area. This single interaction with an adaptive application involves a hybrid search spanning vectors, text, numerical ranges, operational inventory queries, and geospatial matching.
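
A hybrid query of this kind might look like the following sketch, which pairs a vector similarity clause with text and numeric-range predicates in a single request through the Couchbase Python SDK's search API. The index name (products-index), the field names (image_embedding, brand, rating, price), and the precomputed photo embedding are hypothetical placeholders, not details from the announcement.

```python
# A minimal sketch of a hybrid search, assuming a Search index named
# "products-index" over hypothetical fields: image_embedding (vector),
# brand (text), rating and price (numeric). Requires couchbase>=4.2.1.
import couchbase.search as search
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, SearchOptions
from couchbase.vector_search import VectorQuery, VectorSearch

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
scope = cluster.bucket("catalog").scope("inventory")

# Embedding of the uploaded outfit photo, produced by an embedding model
# elsewhere in the application (3-dim vector here only for brevity).
photo_embedding = [0.12, -0.45, 0.33]

# Conventional predicates: brand text match plus rating and price ranges.
filters = search.ConjunctionQuery(
    search.MatchQuery("acme", field="brand"),
    search.NumericRangeQuery(min=4.0, field="rating"),
    search.NumericRangeQuery(min=50.0, max=150.0, field="price"),
)

# Attach a nearest-neighbor vector clause to the same request, so one
# index serves the whole hybrid query.
request = search.SearchRequest.create(filters).with_vector_search(
    VectorSearch.from_vector_query(
        VectorQuery.create("image_embedding", photo_embedding)))

result = scope.search("products-index", request, SearchOptions(limit=10))
for row in result.rows():
    print(row.id, row.score)
```

Because the vector clause and the conventional predicates travel in one request against one index, there is no fan-out across separate engines whose results must then be merged.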

As more organizations build intelligence into applications that converse with large language models (LLMs), semantic search capabilities powered by vector search — and augmented by retrieval-augmented generation (RAG) — are critical to taming hallucinations and improving response accuracy. While vector-only databases aim to solve the challenges of processing and storing data for LLMs, having multiple standalone solutions adds complexity to the enterprise IT stack and slows application performance. Couchbase’s multipurpose capabilities eliminate that friction and deliver a simplified architecture to improve the accuracy of LLM results. Couchbase also makes it easier and faster for developers to build such applications with a single SQL++ query against the vector index, removing the need for multiple indexes or products.
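
As an illustration of the single-query claim, the sketch below issues one SQL++ statement that combines a vector clause with a text match and an operational price filter through the SEARCH() function. It assumes, rather than documents, Couchbase Server 7.6's search request syntax with a knn clause, a Search index named products-index, and the same hypothetical field layout as above.

```python
# A minimal sketch of one SQL++ statement mixing vector, text, and
# operational predicates, assuming Server 7.6+ and a Search index named
# "products-index". Field names and the $qvec embedding are placeholders.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, QueryOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))

statement = """
SELECT META(p).id, p.name, p.price
FROM catalog.inventory.products AS p
WHERE p.price BETWEEN 50 AND 150
  AND SEARCH(p, {
        "query": {"match": "running shoes", "field": "description"},
        "knn": [{"field": "image_embedding", "vector": $qvec, "k": 8}]
      }, {"index": "products-index"})
"""

result = cluster.query(
    statement,
    QueryOptions(named_parameters={"qvec": [0.12, -0.45, 0.33]}))
for row in result:
    print(row)
```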

Couchbase’s recent announcement of its columnar service and vector search gives customers a unique approach that delivers cost-efficiency and reduced complexity. By consolidating workloads in one cloud database platform, Couchbase makes it easier for development teams to build trustworthy, adaptive applications that run anywhere, from cloud to edge. With vector search as a feature across all Couchbase products, customers gain:

  • Similarity and hybrid search, combining text, vector, range, and geospatial search capabilities.
  • RAG to make AI-powered applications more accurate, safe, and timely.
  • Enhanced performance, because all search patterns can be served from a single index, lowering response latency.

In line with its AI strategy, Couchbase is extending its AI partner ecosystem with LangChain and LlamaIndex support to further boost developer productivity. The LangChain integration provides a standard API for conversing with a broad library of LLMs, and the LlamaIndex integration will give developers more choices of LLMs when building adaptive applications. These ecosystem integrations will accelerate query prompt assembly, improve response validation, and facilitate RAG applications.
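
For a sense of what the LangChain side could look like, the sketch below wires Couchbase in as a LangChain vector store and retrieves context for a RAG prompt. It assumes the langchain-couchbase integration package exposes a CouchbaseVectorStore with roughly this constructor; the bucket, scope, collection, and index names, and the choice of OpenAIEmbeddings, are illustrative placeholders.

```python
# A minimal RAG-retrieval sketch, assuming the langchain-couchbase package
# provides CouchbaseVectorStore with this constructor; all names below are
# hypothetical placeholders, not part of the announcement.
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions
from langchain_couchbase.vectorstores import CouchbaseVectorStore
from langchain_openai import OpenAIEmbeddings

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))

vector_store = CouchbaseVectorStore(
    cluster=cluster,
    bucket_name="catalog",
    scope_name="inventory",
    collection_name="products",
    embedding=OpenAIEmbeddings(),  # any LangChain embedding model would do
    index_name="products-index",
)

# Retrieve supplementary context for a RAG prompt: the retriever feeds the
# top matches to whichever LLM the application converses with via LangChain.
retriever = vector_store.as_retriever(search_kwargs={"k": 4})
docs = retriever.invoke("lightweight trail-running shoes")
for doc in docs:
    print(doc.page_content)
```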

“Retrieval has become the predominant way to combine data with LLMs,” said Harrison Chase, CEO and co-founder of LangChain. “Many LLM-driven applications demand user-specific data beyond the model’s training dataset, relying on robust databases to feed in supplementary data and context from different sources. Our integration with Couchbase provides customers another powerful database option for vector store so they can more easily build AI applications.”

These new capabilities are expected to be available in the first quarter of Couchbase’s fiscal year 2025 in Capella and Couchbase Server and in beta for mobile and edge.