A desktop application that gives you AI chat, a built-in browser, specialist agents, visual workflows, and persistent memory. No subscriptions. No lock-in. Open source.
You pay monthly for chat interfaces, limited agents, and cloud-locked data. Stop paying rent on software that should be yours.
Chat + limited tools. Cloud only.
Chat + projects. No local option.
Code editor + AI. Developer only.
All of the above. Combined. Free. Yours.
No monthly fees. No usage limits. No premium tiers. Download it, use it, keep it. You only pay your LLM provider directly — at their cost, not ours.
Everything runs natively on your computer. Conversations, files, agent memory — all stored locally, never on anyone's servers. Built-in P2P encrypted chat means even your messages never pass through a server.
Full source code. AGPL-3.0 license. Fork it, modify it, extend it, redistribute it. We don't lock features behind paywalls or artificially limit capability.
Use your own machine as the hub — connect from any device, run any API, build natively. No cloud rental. No middleman. Your workstation is the platform.
No plugins. No extensions. No configuration hell. Install it and everything works.
Full chat interface with markdown, code highlighting, file attachments, and multi-turn conversations. Connect any LLM provider.
Browse the web within the app. Research, pull context, reference documentation — all without switching windows.
Expert presets that transform the AI into domain specialists. Frontend, backend, security, DevOps, game dev, and more.
Each agent is a behavioral configuration that shapes the LLM into a domain-specific specialist. Select one and the AI adapts instantly.
TypeScript, CSS, UI/UX, responsive design, PWAs, Core Web Vitals, component architecture.
Node.js, Python, Go, Rust, APIs, REST, GraphQL, databases, microservices, serverless.
OWASP Top 10, XSS, CSRF, SQLi, JWT, OAuth 2.0, encryption, vulnerability assessment.
Docker, Kubernetes, CI/CD, cloud platforms, Terraform, monitoring, load balancing.
PyTorch, TensorFlow, LLMs, fine-tuning, RAG, embeddings, local model deployment.
Unity, Unreal, Godot, game mechanics, physics, multiplayer, C#, GDScript.
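Conceptually, a specialist agent is a behavioral preset layered onto the base model: a system prompt plus constraints that steer every turn of the conversation. A minimal sketch of that mechanism, assuming nothing about THERION's internals (preset names and prompt text below are illustrative, not the app's actual definitions):

```python
# Illustrative sketch: specialist agents as behavioral presets.
# The preset names and prompts are examples, not THERION's real definitions.

AGENT_PRESETS = {
    "frontend": "You are a frontend specialist. Focus on TypeScript, CSS, "
                "responsive design, and Core Web Vitals.",
    "security": "You are a security specialist. Reason in terms of the "
                "OWASP Top 10, XSS, CSRF, SQLi, and OAuth 2.0.",
}

def build_messages(agent: str, user_prompt: str) -> list[dict]:
    """Prepend the agent's behavioral preset as the system message."""
    return [
        {"role": "system", "content": AGENT_PRESETS[agent]},
        {"role": "user", "content": user_prompt},
    ]

msgs = build_messages("security", "Review this login handler for flaws.")
```

Selecting a different agent swaps the system message, so the same underlying model adapts to a new domain without retraining.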
Every built-in agent is just a starting point. Create your own specialist agents from scratch, fork existing ones and customize them, or let the AI generate an agent tailored to your exact workflow. Define behavior rules, tool access, memory scope, and domain expertise — then save and share them.
Define a new agent with custom instructions, personality, tool permissions, and domain focus. Full control over every parameter.
Take any built-in agent as a template. Adjust its behavior, add rules, restrict tools, or expand its expertise to fit your exact needs.
Describe what you need in plain language and THERION builds the agent for you — behavioral rules, tool configuration, memory setup, all wired correctly.
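Whether built from scratch, forked, or AI-generated, an agent boils down to a declarative definition. A hypothetical example of what such a definition could contain — field names here are illustrative, and THERION's actual schema may differ:

```json
{
  "name": "api-reviewer",
  "based_on": "backend",
  "instructions": "Review REST and GraphQL API designs for consistency and versioning pitfalls.",
  "tools": { "browser": true, "file_read": true, "file_write": false },
  "memory_scope": "project",
  "domains": ["REST", "GraphQL", "API design"]
}
```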
Grab the release for Windows, macOS, or Linux. Double-click. Done. No account required.
Open Settings. Connect Ollama (free, local), Anthropic, OpenRouter, or any compatible provider. You pay the provider directly.
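For the free local route, Ollama serves an OpenAI-compatible endpoint at `http://localhost:11434/v1/chat/completions`, so a provider connection amounts to a base URL plus a model name. A minimal sketch of the request body such a connection sends (the model name is an example — use any model you have pulled locally):

```python
import json

# Sketch of the OpenAI-compatible chat request a local Ollama server accepts.
# The endpoint and payload shape follow Ollama's OpenAI-compat API; the model
# name is an example, not a requirement.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def chat_request(model: str, prompt: str) -> str:
    """Serialize a minimal chat-completion request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })

body = chat_request("llama3.2", "Summarize this repo's README.")
```

Cloud providers like Anthropic or OpenRouter work the same way from the app's side: swap the base URL and supply your own API key, and you pay that provider directly.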
Select a specialist from the home screen. Start a conversation. The AI adapts to the domain instantly.
Native desktop performance. Web technology flexibility. Everything runs locally on your machine.
Your data stays on your machine. No telemetry. No tracking. No cloud dependency. The backend and frontend ship as a single native binary. Portable. Sovereign. Yours.
Download THERION. Inspect the code. Use it for free. If you like what you see, tell someone.