Quickstart
Before you install anything, take three minutes to understand what Exolvra actually is — what problem it solves, what a running agent looks like, and who it's built for. If it clicks, the next page gets you running.
The short version
Exolvra is a platform for building and running teams of AI agents that actually ship work. Not single-turn chatbots; not LLM calls wrapped in a web UI. A team — multiple specialised agents that pick up assigned work, use tools to execute it, coordinate with each other, and write the results back to a persistent workspace you can audit.
You run it yourself (self-hosted, one dotnet command) or deploy it behind a domain. There’s a dashboard for humans, an API for integrators, and a drop-in widget for embedded chatbots.
The problem it solves
Single-agent chat is fine for ad-hoc questions. It falls apart the moment you want to:
- Give an agent a multi-step job and walk away. You want to hand it an issue, close the tab, and come back to find the work done — not babysit a chat window for twenty turns.
- Delegate across specialists. One agent researches, another writes, another reviews, a fourth publishes. Each does what it’s good at and hands off cleanly.
- Keep a durable record. Every tool call, every intermediate artefact, every decision is recorded, searchable and replayable after the fact.
- Enforce limits. Budget caps, approval gates, sandboxed capabilities, audit trails — the scaffolding you need to trust an agent with real work.
Exolvra is the runtime for that kind of workflow.
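The guardrail idea is simple to sketch. Here is a minimal, illustrative budget gate — the class and method names are hypothetical, not Exolvra's actual API:

```python
# Illustrative sketch only -- the class and method names here are
# hypothetical, not Exolvra's actual API.
class BudgetGate:
    """Blocks further tool calls once an agent's spend hits its cap."""

    def __init__(self, cap_usd: float):
        self.cap_usd = cap_usd
        self.spent_usd = 0.0

    def charge(self, cost_usd: float) -> bool:
        """Record a tool call's cost; refuse it if it would bust the cap."""
        if self.spent_usd + cost_usd > self.cap_usd:
            return False  # caller pauses the agent or asks a human for approval
        self.spent_usd += cost_usd
        return True


gate = BudgetGate(cap_usd=1.00)
print(gate.charge(0.40))  # True  -- within budget
print(gate.charge(0.70))  # False -- would exceed the $1.00 cap
```

The approval-gate and sandboxing pieces follow the same shape: a check that runs before every tool call, with the decision written to the audit trail.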
What a running agent looks like
Conceptually, you give Exolvra a goal:
> Launch the new pricing page by end of quarter.
A Project Manager agent reads the goal, breaks it into discrete issues, and assigns each one to the right specialist:
| Issue | Assigned to | What it does |
|---|---|---|
| Draft pricing page copy | ux-writer | Writes the headline, value props, FAQs |
| Design pricing page mockup | designer | Lays out the page in Figma-style terms; uploads sketches |
| Implement pricing page | frontend-dev | Writes the actual HTML / components |
| Test the pricing page | qa-engineer | Runs through the page and reports issues |
Each specialist picks up its assigned issue, executes tools to do the work (web_search, file_system, data_store, send_email, and so on), streams progress into a timeline, and marks the issue complete. Issues with dependencies wait their turn — when the copy lands, the design starts. When the design lands, implementation kicks off.
All of this happens in the background on a 30-second heartbeat. You can watch it live in the dashboard, ask an agent a direct question in chat at any time, or ignore it entirely and come back to a finished project.
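The dependency gating above fits in a few lines. This is an illustrative model of the scheduling idea, not Exolvra's internals; the issue names mirror the table:

```python
# Illustrative sketch of dependency-gated issue scheduling -- not
# Exolvra's actual implementation.
issues = {
    "copy":      {"deps": [],            "done": False},
    "design":    {"deps": ["copy"],      "done": False},
    "implement": {"deps": ["design"],    "done": False},
    "test":      {"deps": ["implement"], "done": False},
}

def ready(name: str) -> bool:
    """An issue is ready when all of its dependencies are complete."""
    issue = issues[name]
    return not issue["done"] and all(issues[d]["done"] for d in issue["deps"])

def heartbeat() -> list[str]:
    """One tick: pick up every ready issue and (pretend to) complete it."""
    picked = [name for name in issues if ready(name)]
    for name in picked:
        issues[name]["done"] = True  # a real agent would run tools here
    return picked

print(heartbeat())  # ['copy']   -- only the issue with no dependencies starts
print(heartbeat())  # ['design'] -- unblocked once the copy lands
```

In the real system an issue spans many heartbeats while its agent works, but the gating rule is the same: nothing starts until everything it depends on has landed.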
What’s in the box
Out of the box you get:
- A dashboard for managing agents, projects, issues, data, memory, skills, and budgets
- A catalogue of tools agents can use — file system, shell, web search, web fetch, data store, task board, memory, send email, and more
- Provider integrations for Anthropic, OpenAI, Google Gemini, Groq, Mistral, Azure OpenAI, AWS Bedrock, GitHub Copilot, and local Ollama — pick any or mix them
- An OpenAI-compatible API for integrating chatbots into your own apps
- A chat widget you can drop into any website with one script tag
- Channel adapters for Telegram, Discord, Slack, and the terminal
- An MCP library of community Model Context Protocol servers you can install with a click
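Because the API is OpenAI-compatible, any client that speaks the Chat Completions wire format should work against it. A minimal sketch of the request body follows; the base URL and agent name are placeholders for illustration, not documented values:

```python
import json

# Sketch of an OpenAI-compatible chat completions request body.
# The host/port and model name below are assumptions -- check your own
# deployment for the real base URL and the agent you want to address.
BASE_URL = "http://localhost:5000/v1/chat/completions"  # placeholder

payload = {
    "model": "ux-writer",  # an agent name stands in for the model field
    "messages": [
        {"role": "user", "content": "Draft a headline for the pricing page."}
    ],
    "stream": False,
}

body = json.dumps(payload)
print(body)
# POST `body` to BASE_URL with an Authorization header, exactly as you
# would against OpenAI's own endpoint.
```

Existing OpenAI SDKs can usually be pointed at a compatible server by overriding their base URL, so most client code carries over unchanged.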
No database server to install, no Redis, no message queue. A single dotnet process with a SQLite store under your user home directory.
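Since the store is plain SQLite, any SQLite client can open it for inspection. A generic read-only sketch; the exact filename under your home directory is an assumption here, and the schema isn't documented on this page, so this just lists whatever tables the file contains:

```python
import sqlite3

def list_tables(db_path: str) -> list[str]:
    """Return the table names in a SQLite database, opened read-only."""
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        ).fetchall()
        return [name for (name,) in rows]
    finally:
        conn.close()

# e.g. list_tables(os.path.expanduser("~/.exolvra/store.db"))
# -- that path is a guess for illustration; locate the real file
#    under your home directory.
```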
Who it’s for
- Self-hosted tinkerers who want to run a multi-agent lab on their own machine without spinning up infrastructure.
- Small teams that want a dashboard where AI specialists and humans share the same workspace.
- Developers who want to build chatbots backed by a real platform — budgets, moderation, audit logs — rather than shelling out to an OpenAI endpoint directly.
- Integrators embedding AI into their own product through the API or widget, without reinventing the plumbing.
If you’re looking for just a single-turn chatbot playground, this is overkill. If you want a shared workspace where multiple agents collaborate on durable work, keep reading.
What you’ll need
- The .NET 10 SDK — that’s the one hard requirement. Everything else is optional.
- An LLM provider API key (Anthropic, OpenAI, etc.) or a local Ollama install if you don’t want to pay anyone.
- 15 minutes to go from zero to a working agent you can chat with.
That’s it. No account sign-up, no cloud dependency, no phone-home. You own the data, you own the workspace, you own the budget.
Ready?
Head to Installation to set it up, or read Agents & chatbots first if you want to understand the agent model before touching code.
The docs follow a linear arc — Quickstart → Install → First login → Your first agent → Your first project. Read them in order the first time; skip around after.