
AI that works inside your regulated perimeter.

Deeplinq deploys on your infrastructure, connects your legacy systems, and leaves your data, your models, and your audit trail inside your control.

In private preview with selected financial institutions across EMEA.

The sovereignty perimeter

A single boundary around the regulated organization. Data, orchestration, models, and governance live inside it. Sources feed in from the left; external providers connect inward from the right through scoped residency channels. Nothing crosses the boundary without being logged, evaluated, and policy-checked.

Architecture diagram of the deeplinq sovereignty perimeter: a rectangular boundary encloses four layers — governance, AI orchestration, customer data, and self-hosted open-weights models. Sources on the left and external cloud providers on the right connect inward through controlled, residency-locked channels.
A single boundary around the regulated organization. Data, orchestration, models, and governance live inside it. Nothing crosses without being logged, evaluated, and policy-checked.

01

Perimeter

A single, customer-controlled boundary. Data, models, orchestration, and governance live inside.

02

Residency channels

Scoped, region-locked egress to external providers. Prompts and completions are not retained.

03

Control flow

Orchestration enforces policy, retrieval, and inference across every model and source.

04

Evidence

Every request produces an immutable, exportable record for compliance and audit requirements.

Why this exists

Your constraints are the starting point.

Regulated institutions cannot ship data to a vendor's cloud. FINMA and CCAF enforce Swiss residency. CSSF, Bank Al-Maghrib, and PDPL frame customer-data processing. HIPAA and GDPR health provisions define clinical-system egress. GxP traces pharma decisions. Solvency II, MAR, and MiFID II dictate what an auditor reconstructs.

Not edge cases. The operating envelope.

Most AI platforms were built for a different customer — one accepting hyperscaler tenancy and data paths the regulator never signed off on. Deeplinq was built for the customer who cannot.

Your orchestration, connectors, agents, model routing, and audit trail sit inside the perimeter you already defend. The platform fits the envelope — not the reverse.

Who we serve

Seven regulated surfaces, one sovereignty posture.

Each surface has its own regulator, its own audit cadence, its own use cases. One platform adapts to all.

  • Private banking

Client files and KYC reviews generate more documentation than any team can read — and banking secrecy (secret bancaire) leaves no room for external processing.

    Platform infrastructure where your team builds client-file and KYC-review agents. They run inside the bank's perimeter and leave a compliance-traceable record.

  • Commercial banking

    KYC, credit ops, customer engagement, and transaction monitoring are measured against central banks, BAM, PDPL, MAR, and MiFID II scope.

    Platform layer across onboarding, credit files, customer correspondence, and market surveillance — where your team deploys AI with national hosting and audit-grade traceability.

  • Healthcare

    Patient data cannot leave the clinical perimeter, yet teams spend hours reconciling records, prior authorizations, and care pathways.

    Unify clinical documents, administrative records, and protocol knowledge inside the hospital's infrastructure — HIPAA and GDPR health provisions respected by design.

  • Pharma

    GxP compliance, clinical data handling, and regulatory submissions demand audit traceability generic AI platforms cannot produce after the fact.

    Bring orchestration into pharma research, quality, and regulatory affairs workflows — model-version pinning, prompt archival, decision traces for GxP scrutiny.

  • Defense

    Classified information cannot touch commercial cloud, and most AI tooling is built on assumptions that collapse when air-gap is mentioned.

    Deploy fully air-gapped on-premise — inference local, no external runtime dependency, residency enforced at the hardware boundary.

  • Public sector

National sovereignty, procurement-compliance frameworks, and citizen-data protection make hyperscaler-backed AI a non-starter at the procurement gate.

    A sovereign AI layer fitting national data-protection regimes, with local hosting and the audit surface oversight bodies expect.

  • Insurance and wealth management

    Solvency II, fiduciary duty, and dense policy documentation squeeze operations between regulatory scrutiny and operational load.

Infrastructure for document-intelligence, policy-review, and claim-processing agents your team builds inside the regulated perimeter — full traceability for actuarial and compliance teams.

What deeplinq does

Five capabilities, one constraint: everything stays inside.

The capability stack doesn't change for regulated institutions. The deployment envelope does.

  • Ingest — without exfiltration

    Read documents, transactions, and system records at source. Extraction, embedding, and indexing stay inside the institution's environment. No training on customer data by default.

  • Connect — across the legacy estate, inside the boundary

    Reach into core banking systems, ERPs, CRMs, and operational databases through connectors that run where the data lives. No external data plane.

  • Orchestrate — with full decision traces

    Route each query to the right model, knowledge source, and system of record. Every routing decision, retrieval, and model call is logged and auditable.

  • Agentize — with human-in-the-loop and audit trace

    Deploy agents that propose, not decide, where the institution wants a human gate. Every action carries prompt, retrieval context, model version, and outcome — exportable for audit.

  • Govern — at the policy layer, not by afterthought

    Enforce access controls at document, record, and field level. Retention, redaction, and residency through the platform — decisions versioned and reviewable.
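The field-level access control described under Govern can be sketched as a redaction check. A minimal illustration in Python, assuming hypothetical record shapes and classification labels (Record, redact_for_role, and the tier names are all invented here, not the platform's API):

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    # A document with per-field sensitivity labels (hypothetical shape).
    fields: dict[str, str]                                # field name -> value
    labels: dict[str, str] = field(default_factory=dict)  # field name -> classification

def redact_for_role(record: Record, clearance: set[str]) -> dict[str, str]:
    """Return every field, masking the values whose classification the
    caller is not cleared for, rather than silently dropping them."""
    visible = {}
    for name, value in record.fields.items():
        label = record.labels.get(name, "internal")  # default tier if unlabeled
        visible[name] = value if label in clearance else "[REDACTED]"
    return visible

client_file = Record(
    fields={"name": "A. Client", "iban": "CH93 0076 0000", "note": "KYC review 2024"},
    labels={"name": "confidential", "iban": "restricted", "note": "internal"},
)

# An analyst cleared for internal and confidential sees the IBAN masked.
print(redact_for_role(client_file, {"internal", "confidential"}))
```

The same check composes with retention and residency rules at the same policy layer; the point is that every visibility decision is an explicit, reviewable function of the classification.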

Platform architecture

One layered system, one perimeter. Connectors ingest, RAG engine indexes, LLM router orchestrates, agents execute — all governed end-to-end inside your perimeter.

Platform architecture diagram: a four-layer horizontal stack. The Connector Hub ingests enterprise data, the RAG Engine indexes and retrieves context, the LLM Router orchestrates across cloud and open-weights models, and the Agent Orchestrator runs autonomous agents under policy, evaluation, and observability. A governance band spans all four layers.
Four layers inside a single perimeter. Governance cuts across every layer; every call is policy-checked before it crosses a boundary.

How it deploys

Four deployment modes. One discipline: your data stays where it should.

  • On-premise

    Deeplinq runs entirely inside the institution's data centre. Models, vector stores, orchestration, and logs live on infrastructure the CISO already controls.

    Fit signal: private banking, defense, healthcare with strict residency, public-sector.

  • National or private cloud

    Deeplinq deploys inside a sovereign cloud region or private-tenant environment chosen by the institution. Residency and data paths defined by the institution's contract, not deeplinq.

    Fit signal: commercial banking under central-bank obligations, insurance, wealth management, sovereign-cloud healthcare.

  • Air-gapped

    No external network path. Inference, retrieval, and orchestration entirely on local infrastructure. Model updates and audit exports via controlled physical transfer.

    Fit signal: defense, classified public-sector workloads, high-security research.

  • Hybrid with a sovereignty boundary

    Sensitive data and regulated workloads stay on-premise or on national cloud; non-sensitive workloads extend where the institution chooses. Sovereignty boundary enforced at the platform layer.

    Fit signal: universal banks with mixed workloads, multinational healthcare, pharma combining internal and external research.

Deployment journey

12 weeks, 4 phases, built jointly. From customer-side preparation to in-production operations, every phase is mapped — including the prerequisites on your side.

Deployment-journey diagram: Phase 00, Preparation, sits on the customer side before kick-off; three deeplinq-side phases follow — Foundation (W0–W4), First workflows (W4–W8), Scale and handover (W8–W12). A continuous bar below shows sovereignty, evidence, and practitioner enablement operating across all 12 weeks.
Preparation sits with the customer. Foundation, first workflows, scale and handover sit with deeplinq. Sovereignty, evidence, and practitioner enablement operate across all 12 weeks.

Why model choice matters

Lock-in on the intelligence layer is an audit risk.

A regulated institution that depends on a single model provider inherits that provider's roadmap, data posture, and regulatory exposure. "The vendor's model produced it" is not an answer a regulator accepts.

Deeplinq treats model choice as a compliance decision. Institutions run proprietary frontier models, open-weights models on local inference, or both — selected by task, sensitivity, and regulator expectation. Model versions are pinned so an auditor can reconstruct a decision months later. Models can be replaced without rewriting the platform.

Where a hyperscaler assistant or CRM-native copilot forces tenancy and a single provider, deeplinq keeps the model behind an interface the institution controls.

Currently supported

Cloud APIs (with data residency)

  • OpenAI
  • Anthropic
  • Mistral
  • Google

Open-weights (self-hosted)

  • Llama
  • Qwen
  • Mistral open
  • Gemma
  • Falcon

Model agnosticism

The customer organization defines what's sensitive; deeplinq routes every workload to the right provider — cloud APIs with residency controls for non-sensitive workloads, open-weights models inside your perimeter for everything else.

Model-agnosticism diagram: customer workloads flow into the deeplinq orchestration layer, which routes each request, based on policy, either to cloud APIs with residency controls (OpenAI, Anthropic, Mistral, Google) or to open-weights models self-hosted inside the customer perimeter (Llama, Qwen, Mistral OS, Gemma, Falcon).
The institution classifies. The orchestration layer routes. The model executes — inside or outside the perimeter as policy prescribes.
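That classify-then-route flow can be sketched in a few lines. The tier labels and zone names follow the diagram above; the function and its signature are assumptions for illustration, not platform code:

```python
# Sensitivity tiers as in the diagram: public, internal, confidential, restricted.
SENSITIVE_TIERS = {"confidential", "restricted"}

def route(sensitivity: str, policy_allows_egress: bool = True) -> str:
    """Route one workload under policy: confidential and restricted data
    never leaves the perimeter; non-sensitive tiers may cross to a
    residency-locked cloud API when policy permits egress."""
    if sensitivity in SENSITIVE_TIERS or not policy_allows_egress:
        return "self-hosted"  # Zone B: open-weights inside the perimeter, no egress
    return "cloud-api"        # Zone A: residency-locked external channel

print(route("restricted"))                           # self-hosted
print(route("public"))                               # cloud-api
print(route("public", policy_allows_egress=False))   # self-hosted
```

Real routing would also weigh cost, latency, and per-task model fit, but the sovereignty decision is the one that must be deterministic and logged per request.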

Supported model providers

9 providers across Cloud APIs and open-weights. Choose what fits your sovereignty posture.

Provider diagram: two rows with uniform visual weight — cloud APIs with residency controls (OpenAI, Anthropic, Mistral, Google) and open-weights run inside your perimeter (Llama, Qwen, Mistral OS, Gemma, Falcon). No provider is featured over another.

How we map to your frameworks

The frameworks your regulator cares about, addressed inside the platform.

What deeplinq does for each framework today. Coverage-in-progress is marked explicitly. Nothing aspirational.

GDPR (EU)
Residency enforced by deployment location. Right-to-erasure at the knowledge-graph layer. No personal-data processing outside the perimeter by default.
EU AI Act
High-risk AI controls via logging, human oversight gates, model documentation. Alignment in progress, targeting 2026.
DORA (EU)
Operational resilience via ICT-perimeter deployment, incident logging, third-party dependency minimisation.
MiFID II / MAR
Transaction surveillance, research-augmentation source attribution, audit reconstruction via decision-trace exports.
Solvency II
Prudential workflow via document intelligence and actuarial unification inside the regulated perimeter. On request per insurance carrier.
ACPR / CNIL (France)
French supervisory expectations via national hosting, data-processing records, DPO-ready audit exports.
FINMA / CCAF (Switzerland)
Swiss residency, secret bancaire posture, supervisory-review readiness via on-premise and Swiss-cloud deployment.
CSSF (Luxembourg)
Luxembourg requirements via regional deployment and outsourcing-arrangement documentation.
Bank Al-Maghrib / Loi 09-08 (Morocco)
National hosting, Moroccan residency, central-bank reporting via local deployment.
PDPL (UAE)
UAE residency and data-protection via regional deployment. Central Bank UAE alignment on request.
HIPAA (US)
PHI stays inside the healthcare provider's perimeter. BAA-compatible deployment. US-region hosting on request.
GDPR health provisions (EU)
Special-category health data within the hospital perimeter — consent and access logs exposed to the DPO.
GxP (pharma)
Audit traceability, model-version pinning, prompt/response archival for regulatory-submission and quality workflows.
SOC 2
Type II audit underway, targeting 2026.
National sovereign-cloud requirements
Deployment on certified sovereign-cloud providers via the national or private-cloud mode. Accreditations vary by country.

The evidence layer

What the auditor receives.

A regulatory review is not won on claims. It is won on evidence. Deeplinq treats the evidence layer as first-class.

Every prompt, retrieval, model call, and agent action is archived with full decision trace — inputs, retrieved context, model version, parameters, reasoning path where exposed, human approval status where a gate applies.

Access audited at user, role, and document level. Policy changes versioned. Model-version pins prevent silent substitution.

When a regulator asks for an evidence bundle, the institution exports a structured archive. No reconstruction project. No vendor intermediary.

For banking prospects

Banking deserves its own conversation.

The platform is horizontal, but banking density justifies a dedicated view — KYC, credit ops, market surveillance, customer engagement; integration footprint across Temenos, Finastra, Avaloq, Murex, SAP, Oracle.

If you are arriving from a banking mandate, the dedicated page goes deeper.

Start a conversation. Not a sales process.

A working session with our team on your regulatory envelope, your deployment constraints, and the use cases that matter to your institution. Pragmatic, technical, short.