
Agentic OS

The OS for AI teams.

Orchestrator, agent roles (Office · Developer · Researcher · SysAdmin · Security Advisor · …), goal engine with dependencies, runners for website and app builds, skill system, multi-LLM router, user-isolated containers — all in one managed runtime.

  • 🇪🇺 EU-hosted
  • GDPR-native
  • EU-AI-Act-ready
  • Multi-LLM
  • Managed runtime
Multi-LLM Router: Claude · GPT-4o · Mistral · open source

How the OS works

From input to execution — one flow, six layers.

Whether a WhatsApp message, voice memo, photo or coding session: everything starts with an input and flows through the same layers of the Agentic OS. Scroll through the stack.


01 · Input

Every task starts with an input.

WhatsApp message, voice memo, site photo, web form, mobile push, API webhook. Whether app, agentic loop or coding session — the input field is always the beginning. One channel, one event, one task.

  • Text · voice · photo · webhook
  • WhatsApp · mobile · web · voice
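
Whatever the channel, everything is normalized into the same event shape before it enters the stack. A minimal Python sketch of that idea; the `InputEvent` fields and `normalize_whatsapp` helper are illustrative assumptions, not the platform's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class InputEvent:
    """Hypothetical normalized event: every channel funnels into this one shape."""
    channel: str   # "whatsapp", "voice", "web_form", "webhook", ...
    kind: str      # "text", "audio", "photo", "json"
    payload: dict
    event_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def normalize_whatsapp(message: dict) -> InputEvent:
    """One channel, one event, one task: map a raw WhatsApp message to the shared shape."""
    return InputEvent(channel="whatsapp", kind="text",
                      payload={"sender": message["from"], "text": message["body"]})

event = normalize_whatsapp({"from": "+4915112345678", "body": "Order 3 pallets of bricks"})
```

From here on, the orchestrator only ever sees `InputEvent`s, never channel-specific formats.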

02 · Orchestrator + agent team

One task, multiple roles.

The orchestrator receives the task and spins up the matching agent team: Office, Developer, SysAdmin, Researcher, Security Advisor. Roles hand off to each other, debate interim results, escalate questions. Not a single chatbot doing everything alone.

  • Goal engine with dependencies
  • Debate & handoff between roles
  • New roles arrive with new apps
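
The goal engine's "workflows with dependencies" can be pictured as a dependency graph that is executed in topological order. A sketch under assumed goal and role names (the graph, the roles and the mapping are invented for illustration):

```python
from graphlib import TopologicalSorter

# Hypothetical goal graph: each goal lists the goals it depends on.
goals = {
    "read_inbox":      set(),
    "research_prices": set(),
    "draft_offer":     {"research_prices", "read_inbox"},
    "security_review": {"draft_offer"},
    "send_offer":      {"draft_offer", "security_review"},
}

# Assumed role assignment, not the platform's real mapping.
role_for = {
    "read_inbox": "Office", "research_prices": "Researcher",
    "draft_offer": "Office", "security_review": "Security Advisor",
    "send_offer": "Office",
}

# The engine only releases a goal once all of its dependencies are done.
order = list(TopologicalSorter(goals).static_order())
plan = [(goal, role_for[goal]) for goal in order]
```

`security_review` is guaranteed to run before `send_offer`, which is how an escalation between roles becomes a hard ordering constraint rather than a convention.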

03 · Skills + runners

Capabilities and real infrastructure.

Skills give agents access to email, calendar, WhatsApp, ERP, PDF, custom APIs — tenant-scoped and auditable. Runners are real execution environments: website builds, app deploys, CI/CD pipelines. Not toy tools — productive compute.

  • Email · calendar · WhatsApp · Odoo · PDF · ERP · API
  • Website runner · app runner · build + CI/CD
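
"Tenant-scoped and auditable" can be sketched as a registry that checks a grant before every skill call and writes an audit record. Class and method names here are illustrative assumptions:

```python
class SkillRegistry:
    """Hypothetical skill registry: skills are granted per tenant, every call is audited."""

    def __init__(self):
        self._grants = {}    # tenant_id -> set of granted skill names
        self.audit_log = []  # append-only call records

    def grant(self, tenant_id: str, skill: str) -> None:
        self._grants.setdefault(tenant_id, set()).add(skill)

    def call(self, tenant_id: str, skill: str, **kwargs) -> str:
        if skill not in self._grants.get(tenant_id, set()):
            raise PermissionError(f"{tenant_id} has no grant for {skill}")
        self.audit_log.append({"tenant": tenant_id, "skill": skill, "args": kwargs})
        return f"{skill} executed"  # a real skill would hit email/calendar/ERP here

registry = SkillRegistry()
registry.grant("tenant-a", "send_email")
result = registry.call("tenant-a", "send_email", to="kunde@example.com")
```

A tenant without a grant gets a `PermissionError`, never a silent pass-through.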

04 · Multi-LLM router

Best model per task — no lock-in.

Triage with Haiku, reasoning with Sonnet, vision with GPT-4o, routine with Mistral, code with open source. Each task class gets the right model. If a provider fails or rate-limits, the router fails over automatically.

  • Claude · GPT · Mistral · open source · BYOK
  • Automatic fallback on outages
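
The class-to-model mapping above can be pictured as a small routing table; task classes and model labels are illustrative, not the platform's actual configuration:

```python
# Illustrative routing table mirroring the text above: task class -> preferred model.
ROUTES = {
    "triage":    "haiku",
    "reasoning": "sonnet",
    "vision":    "gpt-4o",
    "routine":   "mistral",
    "code":      "open-source-code-model",
}

def route(task_class: str, default: str = "sonnet") -> str:
    """Pick the model for a task class; unknown classes fall back to a safe default."""
    return ROUTES.get(task_class, default)
```

The point is that the caller names a task class, never a vendor, which is what keeps the stack free of lock-in.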

05 · Security Gate

Before any action has external impact.

Payments, outgoing mail, ERP writes, price changes: anything MEDIUM or HIGH risk lands at the Security Gate. A human approves, the audit trail is written. LOW risk (show draft, ask for info) passes through.

  • Risk score · approval queue · audit trail
  • No agent sends blind
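
The gate logic reduces to a small amount of control flow: score the risk, write the audit record, then either execute or park the action in the approval queue. A sketch with invented action names and risk levels:

```python
# Hypothetical risk classification: LOW passes, MEDIUM/HIGH wait for a human.
RISK = {"show_draft": "LOW", "ask_info": "LOW",
        "send_mail": "MEDIUM", "erp_write": "HIGH", "payment": "HIGH"}

approval_queue = []
audit_trail = []

def gate(action: str, payload: dict) -> str:
    level = RISK.get(action, "HIGH")  # unknown actions default to HIGH, not LOW
    audit_trail.append({"action": action, "risk": level})
    if level == "LOW":
        return "executed"
    approval_queue.append({"action": action, "payload": payload})
    return "pending_approval"         # a human must approve before it runs

low = gate("show_draft", {})
high = gate("payment", {"amount": 500})
```

Defaulting unknown actions to HIGH is the conservative choice: an agent can never gain external impact by inventing an action the gate has not classified.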

06 · User container

One isolated runtime per user.

The whole stack above doesn't run in a shared process — it runs in a container that belongs exclusively to the user. Credentials, interim state, tool state — all physically isolated. Managed runtime: we handle provisioning, scaling, health.

  • AES-256-GCM credentials vault · per user
  • True process isolation, not just DB filters
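
The shape of per-user isolation can be sketched in a few lines. This is an in-process stand-in for illustration only; the real system uses OS-level containers and encrypted storage, and every name below is an assumption:

```python
class UserContainer:
    """One runtime per user: credentials and interim state live only here."""

    def __init__(self, user_id: str):
        self.user_id = user_id
        self._vault = {}  # per-user credentials (encrypted at rest in a real system)
        self._state = {}  # interim agent state, never visible to other users

    def store_credential(self, integration: str, secret: str) -> None:
        self._vault[integration] = secret

class Runtime:
    """Managed runtime: provisions one isolated container per user on demand."""

    def __init__(self):
        self._containers = {}

    def container_for(self, user_id: str) -> UserContainer:
        if user_id not in self._containers:
            self._containers[user_id] = UserContainer(user_id)  # provisioning step
        return self._containers[user_id]

rt = Runtime()
rt.container_for("alice").store_credential("imap", "s3cret")
```

"Alice" and "Bob" never share an object, which is the property a shared-process design with database filters cannot give you.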

8 pillars

The foundation — quotable in eight lines.

The deep-dives above show how it looks. Here's the compact recap, so you have the core at a glance.

User-isolated containers

Real process isolation per user — not just tenant separation.

Security Gate

Approval workflows for risky actions. Full auditability.

Credentials Vault

AES-256-GCM, per user, per integration. OAuth refresh built in.

Goal engine

Multi-step workflows with dependencies — not single-shot agents.

Multi-LLM router

Best model per task: Claude, GPT, Mistral, open source. No lock-in.

EU-hosted, GDPR-native

Data processing exclusively in Europe. EU-AI-Act-ready.

Cross-system orchestration

One agent, all systems — email, calendar, ERP, industry software.

Auditable

Every AI team action traceable and documented.

Multi-LLM router

Best model per task — no lock-in.

One model per class of task. The AI team picks the right model per task — by quality, cost, latency or hosting requirement. A provider outage automatically reroutes to the next.

Task · Model family · Why
  • Triage & categorisation · Haiku, small models · cost: 95% of inbox events don't need a large model
  • Decisions & text generation · Sonnet, GPT-4o · quality on reasoning and customer-facing copy
  • Vision (measurements, damage photos, sketches) · vision-capable LLMs · multimodal: text + image in one pass
  • Routine generation, templates · open source (Mistral, Llama) · EU hosting, cost, reproducibility for regulated industries
  • Code, tool use, structured outputs · GPT code models · tool-use performance, JSON-mode stability
  • Long-context recall · Claude (1M-token context) · large surveyor files, ERP dumps without chunking
Active providers in the router
  • Claude
  • GPT-4o / GPT-5
  • Mistral
  • Llama / DeepSeek
  • Haiku
  • + BYOK

Every row has a defined fallback. If a provider fails or rate-limits, the task keeps running — just maybe with 0.4s more latency or 20% more token cost. Not a single task fails on a single vendor.
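
"Every row has a defined fallback" can be sketched as a chain per task class: try each model in order, move on when a provider is down. The `FALLBACKS` table, model names and the `call_provider` hook are illustrative assumptions:

```python
# Illustrative fallback chains per task class.
FALLBACKS = {
    "triage":    ["haiku", "mistral-small", "llama"],
    "reasoning": ["sonnet", "gpt-4o", "mistral-large"],
}

def complete(task_class: str, prompt: str, call_provider) -> str:
    """Try each model in the chain; an outage or rate limit moves on to the next."""
    last_error = None
    for model in FALLBACKS[task_class]:
        try:
            return call_provider(model, prompt)
        except ConnectionError as exc:  # stand-in for outage / rate limiting
            last_error = exc
    raise RuntimeError("all providers in the chain failed") from last_error

# Fake provider where the primary is down; the task still completes.
def flaky(model: str, prompt: str) -> str:
    if model == "sonnet":
        raise ConnectionError("rate limited")
    return f"{model}: ok"

result = complete("reasoning", "summarise the inbox", flaky)
```

The caller never learns that the primary was down; the degradation shows up only as latency or token cost.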

Compliance & trust

European, transparent, verifiable.

Privacy isn't a checkbox. Here's what we commit to and what you can verify.

EU hosting

Compute and storage exclusively in EU data centres. No US cloud in the hot path. Sub-processors documented on the trust page.

GDPR-native

Data processing agreements (DPA) available. Data subject rights (access, deletion, portability) are product features, not compliance theatre.


EU-AI-Act-ready

Each app is classified by EU-AI-Act risk class. Transparency logs and human oversight are built into the platform — not retrofitted.

Audit trail

Every AI team action is logged structurally and searchable. Retention configurable per tenant. Export for external audits.
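
"Logged structurally and searchable" usually means one machine-readable record per action, e.g. a JSON line. A minimal sketch; the field names are assumptions, not the platform's actual schema:

```python
import json
from datetime import datetime, timezone

def audit_entry(tenant: str, actor: str, action: str, risk: str) -> str:
    """Illustrative audit record: one JSON line per AI-team action."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "tenant": tenant,
        "actor": actor,
        "action": action,
        "risk": risk,
    }
    return json.dumps(record, sort_keys=True)

line = audit_entry("tenant-a", "Office", "send_mail", "MEDIUM")
entry = json.loads(line)
```

Because each line is self-describing JSON, retention can be applied per tenant and the whole trail exported verbatim for an external audit.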


Who we are up against

Four camps, one European answer.

The full comparison against legacy enterprise, AI-lab runtimes, consumer EU AI and vertical tools lives on the home page. Here is the short version of why platform buyers come to us.

  • Agent-native from day one — not an agent layer on top of a legacy stack (Salesforce, MS, ServiceNow).
  • App framework above the agent runtime — not just a raw runtime like Claude Managed Agents or OpenAI Operator.
  • Multi-LLM router — no lock-in to a single provider, however large they are.
  • Isolated containers per user — real isolation, not just tenant separation.
  • EU hosting + GDPR-native — structural, not explained after the fact.
  • Open platform — custom apps on equal footing with our example apps.

OpenClaw vs. Linkworld

You know OpenClaw? Linkworld is the business edition.

OpenClaw shows what agentic AI delivers for individuals — 300,000 GitHub stars in months. For companies the governance layer is missing: tenant isolation, approval workflows, GDPR, packaged vertical apps. Linkworld provides exactly that layer.

OpenClaw (open source)
Personal · Self-host
  • Open-source AI agent framework
  • Self-hosted, model-agnostic
  • Personal productivity, small teams
  • Skill marketplace with supply-chain risks flagged by Cisco, Gartner and Trend Micro
  • Single-tenant per installation
  • You carry security, hosting, updates yourself
Linkworld (enterprise layer)
Business · Managed
  • Hosted SaaS, EU-sovereign: the same idea, with 100+ tools included
  • Multi-tenant with isolated user containers
  • Security Gate, approval workflows, AES-256 vault
  • GDPR-native, EU AI Act-ready
  • Packaged vertical apps (field sales, construction, audit) — ready to use
  • Audit logs, BYOK, on-prem option

This is what we build on top.

Three example apps show what the platform is built for. Custom apps are next.