Introduction
The retrieval layer with built-in memory. See how Cortex personalizes your LLM apps, improves answer quality, and gives your users a better experience.
Meet Cortex
Building an AI app shouldn’t mean reinventing memory, search, and personalization from scratch.
We were tired of hacking together brittle pipelines—embedding tools, vector databases, ranking tweaks, caching layers—just to make AI apps barely feel intelligent. Why wasn’t there a Stripe-like SDK for memory and retrieval? So we built it.
Cortex is plug-and-play memory infrastructure that powers intelligent, context-aware retrieval for any AI app or agent, whether you’re building a customer support bot, a research copilot, or an internal knowledge assistant.
Key Features
- Dynamic retrieval and querying that surface the most relevant context for each request
- Built-in long-term memory that evolves with every user interaction
- Personalization hooks for user preferences, intent, and history
- Developer-first SDK with flexible APIs and fine-grained controls
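To make this concrete, here is a rough sketch of what a single retrieval call with fine-grained controls could look like. The endpoint path, request fields (`top_k`, `rerank`, `filters`), and response shape are placeholders for illustration, not the documented API; the API Reference is authoritative.

```typescript
// Hypothetical sketch only: the endpoint path, request fields, and response
// shape below are illustrative assumptions, not Cortex's documented API.
interface QueryHit {
  id: string;
  text: string;
  score: number;
}

async function queryCortex(question: string, userId: string): Promise<QueryHit[]> {
  const res = await fetch("https://api.cortex.example/v1/query", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CORTEX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      query: question,                     // free-text query to find context for
      user_id: userId,                     // scopes retrieval to this user's memory
      top_k: 5,                            // number of passages to return
      rerank: true,                        // re-rank candidates before returning
      filters: { source: "support-docs" }, // narrow the search space
    }),
  });
  if (!res.ok) throw new Error(`Cortex query failed: ${res.status}`);
  const body = await res.json();
  return body.hits as QueryHit[];
}
```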
Not Just Memory. Human Intelligence.
Cortex doesn’t just fetch documents. It learns, adapts, and gets smarter with time. Imagine giving your AI apps the ability to:
- Recall what users searched before
- Re-rank answers based on user behavior
- Offer personalized responses from evolving context
- Grow more useful with every conversation
It’s like giving your AI a second brain with human-level search and long-term memory.
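As a rough illustration of that memory loop (the `/v1/memories` endpoint and payload fields below are placeholders, not the documented API), an app might record each interaction against a user ID so later retrievals for that user can draw on it:

```typescript
// Hypothetical sketch only: the endpoint and payload fields are illustrative
// assumptions about how a per-user memory write could look.
async function rememberInteraction(
  userId: string,
  role: "user" | "assistant",
  text: string,
): Promise<void> {
  const res = await fetch("https://api.cortex.example/v1/memories", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.CORTEX_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      user_id: userId, // memories are stored and retrieved per user
      role,            // who produced the text: the user or the assistant
      text,            // the content to remember for future retrievals
    }),
  });
  if (!res.ok) throw new Error(`Cortex memory write failed: ${res.status}`);
}
```

A later query scoped to the same `user_id` (as in the earlier sketch) can then be re-ranked and personalized against everything stored for that user.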
Getting Started
To begin using Cortex:
- Obtain Your API Key: You’ll need an API key to interact with the API. Please contact us to request one.
- Set Up Your Environment: Follow our Quickstart Guide to configure your first request; a rough sketch of such a request appears below.
- Explore the API: Dive into detailed documentation in our API Reference.
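As a quick preview of what a first request might look like (the endpoint and field names are placeholder assumptions; the Quickstart Guide and API Reference are authoritative):

```typescript
// Hypothetical quickstart sketch: reads the API key from the environment and
// issues one retrieval request. Endpoint and fields are illustrative assumptions.
const apiKey = process.env.CORTEX_API_KEY;
if (!apiKey) throw new Error("Set CORTEX_API_KEY before running this example.");

const response = await fetch("https://api.cortex.example/v1/query", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "How do I reset my password?", top_k: 3 }),
});

console.log(await response.json()); // inspect the returned passages
```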
Feel free to reach out if you have any questions or need assistance getting started.