Where do you host inference models (on-prem, EU cloud, U.S.-based)?
June 3, 2025
Our default setups use EU-based cloud providers (e.g. AWS Frankfurt, Azure EU, or Hetzner) to meet GDPR and data-residency requirements.
For projects involving sensitive or regulated data, we offer on-premise or private-cloud inference with open-weight models (e.g. LLaMA, Mistral) served from Dockerized GPU nodes.
U.S.-based APIs (e.g. OpenAI, Anthropic) are used only when data exposure is low-risk and the client has given explicit consent.
This tiered approach lets us balance performance, cost, and compliance against each project's needs.
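As a rough illustration of the on-premise option, a single Dockerized GPU node can expose an open-weight model behind an OpenAI-compatible API so that no request data leaves the host. The sketch below uses vLLM's public server image; the model path, port, and volume mount are illustrative assumptions, not a description of any specific client deployment.

```shell
# Hypothetical single-node setup (paths and names are examples only):
# serve a locally stored Mistral checkpoint from an on-prem GPU host.
docker run --gpus all \
  -v /srv/models/mistral-7b:/models \   # model weights stay on local disk
  -p 8000:8000 \                        # API reachable only inside the private network
  vllm/vllm-openai:latest \
  --model /models --host 0.0.0.0 --port 8000
```

Applications then point their OpenAI-compatible client at `http://<node>:8000/v1` instead of a U.S.-based endpoint, which is what makes the compliance trade-off above a per-project configuration choice rather than an architectural one.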