AI Compliance

ADITO's AI platform is self-hosted, runs in Germany, and does not share data with external AI providers. This page documents how the platform handles security, data, transparency, and ethics.


1. Security & data privacy

Infrastructure

The model runs on ADITO Cloud servers in Germany. No inference requests leave ADITO infrastructure. ADITO Cloud is ISO 27001 certified.

Data isolation

Each customer environment has its own API key and isolated network path. Prompt content is not accessible across customer environments. Requests are processed independently with no cross-tenant session state or conversation history.
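The isolation model above can be sketched in a few lines: the API key resolves to exactly one tenant, and each request is handled with request-local state only. The mapping and handler names below are hypothetical illustrations, not ADITO internals.

```python
# Illustrative sketch of per-tenant isolation: one API key maps to one tenant,
# and every request is processed independently with fresh, request-local state.
# API_KEY_TO_TENANT and handle_request are hypothetical, not part of the ADITO API.

API_KEY_TO_TENANT = {"key-alpha": "tenant-alpha", "key-beta": "tenant-beta"}

def handle_request(api_key: str, prompt: str) -> dict:
    tenant = API_KEY_TO_TENANT.get(api_key)
    if tenant is None:
        raise PermissionError("unknown API key")
    # Nothing persists between calls, so no conversation history or
    # session state can leak across tenant boundaries.
    return {"tenant": tenant, "answer": f"processed: {prompt}"}

a = handle_request("key-alpha", "hello")
b = handle_request("key-beta", "hello")
```

Because no state survives a call, two tenants sending identical prompts share nothing beyond the stateless model itself.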

Data protection

  • All data processing is subject to ADITO's Data Processing Agreement (DPA) and applicable EU law.
  • Customer data is not sent to external AI providers, cloud services, or third parties for AI inference.

Contractual framework

Implementation partners using ADITO AI tools are covered by the Zusatzvereinbarung zum Implementierungs-Partner-Vertrag zur Nutzung von ADITO KI-Tools, which defines their obligations when deploying ADITO AI features in customer projects.

Failure modes

AI language models sometimes produce incorrect outputs. To reduce the impact:

  • AI outputs require explicit human confirmation before record changes or actions execute.
  • Where structured output is required, the API supports schema-constrained responses to reduce malformed data.
  • Known edge cases are documented in the API reference.
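The confirmation-gate pattern from the first bullet can be sketched as follows. This is a minimal illustration, not ADITO code; `ProposedChange` and `apply_if_confirmed` are hypothetical names.

```python
from dataclasses import dataclass

# Sketch of a human-confirmation gate: an AI output is only a draft, and
# nothing is written to a record until a person explicitly approves it.
# All names here are hypothetical, not part of the ADITO API.

@dataclass
class ProposedChange:
    record_id: str
    field: str
    new_value: str

def apply_if_confirmed(change: ProposedChange, confirmed: bool, store: dict) -> bool:
    """Apply an AI-proposed change only after explicit human confirmation."""
    if not confirmed:
        return False  # rejected drafts are discarded, never executed
    store.setdefault(change.record_id, {})[change.field] = change.new_value
    return True

records: dict = {}
draft = ProposedChange("contact-42", "email", "jane@example.com")
apply_if_confirmed(draft, confirmed=False, store=records)  # no write happens
apply_if_confirmed(draft, confirmed=True, store=records)   # write happens
```

The same gate works for action execution: the AI proposes, and a user or an auditable configured process confirms.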

2. Data processing & storage

Data flow

When an application makes an AI request:

  1. The application sends a prompt (text and optional context data) to the ADITO AI API.
  2. The request goes to the self-hosted model running on ADITO Cloud infrastructure in Germany.
  3. The model processes the input in memory and returns a response.
  4. No prompt content or response data leaves ADITO Cloud infrastructure.

What is stored

Data type | Stored? | Where
Prompt content | Yes, 90 days | ADITO Cloud, Germany — retained solely to diagnose technical issues
Model response content | Yes, 90 days | ADITO Cloud, Germany — retained solely to diagnose technical issues
API access logs (timestamps, usage data, API key ID) | Yes | ADITO Cloud, Germany — operational and security purposes only
Anonymous usage metadata (error types, latency, feature usage) | Yes | ADITO Cloud, Germany — product monitoring only; no user-identifiable content
CRM data referenced in prompts | As part of prompt | Stored as part of the prompt log (see above); not stored separately and not accessible outside of that log

Monitoring

Prompts and responses are stored securely in ADITO Cloud to diagnose technical issues. They are not accessed unless a specific support case requires it, and only engineers directly responsible for the AI platform have access. Anonymous operational metrics — error rates, response latency, feature usage — are also collected to catch problems early. These metrics don't identify users or customers.

Data retention

  • Prompt content and model responses are stored for 90 days and then permanently deleted. They are not shared with third parties.
  • Stored data is never used to train, fine-tune, or benchmark the underlying model.
  • Customers can request access to or deletion of stored data through ADITO's standard data subject processes.
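The 90-day window can be expressed as a simple expiry check; this is an illustrative sketch of the policy, not the actual deletion job.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 90  # prompt/response logs are permanently deleted after 90 days

def is_expired(stored_at: datetime, now: datetime) -> bool:
    """True once a stored prompt/response has passed the retention window."""
    return now - stored_at > timedelta(days=RETENTION_DAYS)

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
fresh = datetime(2024, 5, 1, tzinfo=timezone.utc)  # 31 days old: still kept
stale = datetime(2024, 1, 1, tzinfo=timezone.utc)  # 152 days old: deleted
```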

3. Transparency

ADITO uses a pre-trained open-source model rather than training its own. Data preprocessing, feature engineering, model creation, and evaluation were done by the model's developers, not by ADITO. Their process is publicly documented; anyone can read it. ADITO's transparency focus is on how the model is deployed and used, not on how it was built.

  • The model's weights and architecture are publicly available. There is no black box in the data path.
  • The model runs on ADITO-operated servers in Germany, with no reliance on a third-party AI service.
  • The full API specification is in the ADITO-LLM API Reference.
  • Capabilities and known limitations are in the Capabilities section.
  • Customer prompts are not used to train, fine-tune, or evaluate the model.

4. Ethics & human oversight

The model assists users and applications; humans make the decisions.

  • AI outputs are suggestions or drafts. Record changes and action execution require explicit input from a user or a configured, auditable process.
  • The platform is built around European fundamental rights: human dignity, freedom, democracy, equality, the rule of law, pluralism, non-discrimination, tolerance, justice, solidarity, and gender equality.
  • ADITO is bound by the German Grundgesetz and the UN Universal Declaration of Human Rights. The platform will not be used against democracy, human rights, or fundamental freedoms.
  • The platform operates under EU law, including GDPR and the EU AI Act.
  • AI-assisted workflows don't systematically disadvantage any group of users, customers, or contacts.
  • ADITO tracks AI safety standards and EU AI Act requirements and updates its practices as they evolve.

5. Bias & fairness

ADITO does not train its own models. It uses a pre-trained open-source large language model, which means no customer data enters the training process.

  • Biases in the model come from its original pre-training data, which the model's developers document publicly.
  • Because the model's architecture and training methodology are public, the broader AI community can audit and report on bias independently — which wouldn't be possible with a proprietary model.
  • ADITO reviews AI output quality regularly through trained staff. Findings are documented on a set schedule regardless of outcome.
  • Like all large language models, this one may reflect biases from its training data. Users should review AI outputs before acting on them, especially in sensitive contexts.