Amplitica was started by engineers who watched too many manufacturing, finance, and SOC teams stuck between “send our data to a US cloud” and “don’t use AI at all.” We built the third option: a complete enterprise AI workspace that runs on your hardware, on open-source LLMs, with no compromises.
These aren’t marketing values. They drive every product decision we make.
Companies need AI they actually own. Our entire stack is designed so the customer can, at any moment, take the docker-compose files, sever the connection, and keep operating.
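A stack you can pick up and run anywhere could be sketched roughly like this. Everything here is illustrative: service names, images, and ports are assumptions for the sake of the sketch, not Amplitica’s actual distribution.

```yaml
# Hypothetical sketch of a self-contained on-prem stack.
# Images, names, and ports are illustrative assumptions.
services:
  llm:
    image: registry.local/open-weights-llm:latest   # open-weights model server
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
  orchestrator:
    image: registry.local/workflow-orchestrator:latest  # LangGraph-based runtime
    environment:
      LLM_URL: http://llm:8000    # traffic stays on the internal compose network
    depends_on:
      - llm
# No external networks, no managed APIs: the stack runs with zero internet egress.
```

The point of the sketch is the shape, not the specifics: everything the product needs lives inside one compose project on the customer’s hardware.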
Every model we ship is open-weights. Every workflow is YAML you can fork. The orchestration layer (LangGraph) is upstream open source. No vendor lock-in by construction.
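A forkable workflow of the kind described might look like the following. The field names and step structure are assumptions made up for illustration, not the actual schema:

```yaml
# Hypothetical workflow definition; keys, agent names, and the
# trigger value are assumptions, not the real Amplitica schema.
name: cad-documentation
trigger: voice_brief            # spoken brief, transcribed by the voice layer
steps:
  - id: parse_brief
    agent: requirements-extractor
  - id: generate_model
    agent: cad-generator
    inputs: [parse_brief]
  - id: erp_sync
    agent: erp-adapter          # e.g. a SAP or Helios connector
    inputs: [generate_model]
```

Because the definition is plain YAML, forking a workflow is an edit and a commit, not a support ticket.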
The fastest way to build a workflow is to describe it. We bet the product on voice-first orchestration before it was obvious. We were right.
The short version of how we got here.
We kept seeing engineering teams spend 40 hours producing CAD documentation that an AI could finish in 4 — but they couldn’t use any cloud LLM, because the drawings were ITAR-adjacent. The gap between “AI is here” and “AI we’re allowed to use” was a chasm.
We wired LangGraph to an on-prem LLaMA 4 running in a single GPU box and picked one workflow to prove out end-to-end: spoken brief → 3D model + 2D drawing + ERP code. The architecture clicked.
We realised the same engine (agents, voice, workflows, chat) applies identically to finance, HR, sales, and IT, and generalised it into a multi-module workspace.
Designed for manufacturing, finance, and security workloads across CZ/SK/DE/AT. Air-gapped reference architecture suited to defence-adjacent environments.
Multi-agent orchestration framework, voice-first builder, 12-module workspace. Built to scale — with the kind of organisations that share the on-prem conviction.
A small senior team based in Prague. Together we own the whole platform end-to-end — from the GPU box on the customer’s rack to the voice-to-workflow scaffolder, the chat surface, the RAG pipeline, and every ERP integration in between.
Talks to every prospect. Believes the right answer to “can you do X?” is usually “yes — here’s how we’d do it on your hardware.”
Architect of the Amplitica platform. A decade in distributed systems before deciding agents are the new microservices. Owns the runtime end-to-end.
Turns every customer document, drawing, and email thread into queryable knowledge. Owns chunking, indexing, retrieval — end-to-end.
Part UX designer, part filmmaker, part front-end engineer. Owns brand voice, product UX, the video story, and the pixel-perfect feel of every screen.
Owns the workflow runtime — from the voice-to-YAML scaffolder all the way down to ERP & CRM adapters into SAP, Helios, and Salesforce.
Keeps the on-prem deployments humming. Owns infrastructure, GPU sizing, observability, and zero-downtime rollouts across customer environments.
On-prem isn’t a feature we bolted on; it’s the entire architecture. Other vendors tack “private deployment” onto a cloud-first product. We did the opposite.
We never ship customer prompts to a managed LLM unless the customer explicitly opts in.
Built in Prague, deployable across the EU. GDPR isn’t a checkbox we tick — it’s the design.
The reference architecture is built around zero internet egress. We validate that path every release.
We stand on the shoulders of giants — and contribute back.
If you’re a European company that needs on-prem AI, or an engineer who wants to build it, drop us a line.