Version: 1.6

Security

This page provides an overview of Privatemode's security properties. If user privacy and data protection are non-negotiable for you, the Privatemode API is the right choice.

Security properties

The Privatemode API delivers robust security features:

  • Confidentiality: By design, all your prompts and replies remain private, meaning they're accessible only to you. The Privatemode API leverages confidential computing to enforce end-to-end encryption from your application, through the GenAI inference process, and back to you; a usage sketch follows below.

  • Integrity and authenticity of the GenAI service: Strong isolation within hardware-enforced Confidential Computing Environments (CCEs), combined with integrity and authenticity verification of the entire API infrastructure and code, ensures thorough protection against tampering and malicious manipulation.

Details on how these security goals are achieved can be found in our Architecture section.
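
To make the confidentiality property concrete, the following is a minimal sketch of how a client might call the service. It assumes a locally running Privatemode proxy that exposes an OpenAI-compatible chat-completions endpoint on localhost:8080; the address, endpoint path, and model name are illustrative assumptions, not the definitive interface. The idea is that the proxy encrypts the prompt and verifies the remote environment before anything leaves your machine.

```python
import requests

# Illustrative setup: a Privatemode proxy is assumed to run locally and expose
# an OpenAI-compatible API. It encrypts the prompt and verifies the remote
# Confidential Computing Environment before forwarding anything.
PROXY_URL = "http://localhost:8080/v1/chat/completions"  # assumed address

def ask(prompt: str) -> str:
    """Send a prompt through the local proxy; plaintext stays on this machine."""
    response = requests.post(
        PROXY_URL,
        json={
            "model": "latest",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Summarize the security properties of confidential computing."))
```

From the application's perspective, this looks like any other GenAI API call; the encryption and attestation happen transparently in the proxy.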

Key benefits

Compared to conventional GenAI APIs, the Privatemode API offers strong privacy and data protection without compromising on inference capabilities. Below, we detail which parties typically have access to your data with conventional GenAI services and how the Privatemode API is different.

Different roles in resolving a GenAI API call

[Figure: sketch of the entities involved in resolving a GenAI API call]

To help you understand who can typically access user data in conventional GenAI API services, we give an overview of the usual parties in the supply chain and explain why they often have access to your data.

In most GenAI API services, the following four relevant entities are involved and have direct or indirect access to certain types of sensitive data:

  • The infrastructure provider: Provides the compute infrastructure to run the model and inference code, such as AWS or CoreWeave.
  • The platform provider: Supplies the software environment that runs the AI model, such as Hugging Face.
  • The model provider: Develops and/or supplies the actual AI model, such as Mistral or Anthropic.
  • The service provider: Integrates all components and offers the SaaS to the end user.

In many scenarios, one organization holds several of these roles at the same time. The following table gives three examples.

API          Service provider  Platform provider  Model provider               Infrastructure provider
OpenAI GPT   OpenAI            OpenAI             OpenAI                       Microsoft Azure
HuggingFace  HuggingFace       HuggingFace        Cohere, Mistral, and others  AWS, GCP, and others
Privatemode  Edgeless Systems  vLLM               Meta                         Microsoft Azure

In the case of the well-known OpenAI GPT API, OpenAI is the service provider, the platform provider, and the model provider, while Microsoft Azure provides the infrastructure.

HuggingFace offers an inference API that lets the user choose between AI models. Here, HuggingFace acts as both the service provider and the platform provider.

The Privatemode API is provided by us (Edgeless Systems). The service runs on Microsoft Azure and uses the open-source framework vLLM to serve a Meta AI model.

Common privacy threats of conventional GenAI APIs

Let's examine how these entities can access relevant data within widespread GenAI services like the OpenAI API and the HuggingFace API.

The infrastructure provider is highly privileged and controls hardware components and system software like the hypervisor. With this control, the infrastructure provider can typically access all data that's being processed. In the case of a GenAI API service, this includes the user data and the AI model.

On top of the infrastructure runs the software provided by the platform provider. This software has access to both the AI model and the user data. The software may leak data through implementation mistakes, logging interfaces, remote-access capabilities, or even backdoors.

The service provider typically has privileged access to the platform software and to the software (e.g., a web frontend) that receives user data. Correspondingly, the service provider can access both the AI model and the user data. In particular, the service provider may decide to re-train or fine-tune the AI model using the user data. This is often a concern among users, as it may leak data to other users through the AI model's answers. For example, such a case has been reported for ChatGPT.

In the simplest case, the model provider only supplies the raw weights (i.e., numbers) that make up the AI model. In this case, the model provider can't access user data, directly or indirectly. However, if the model provider also ships additional software, user data may leak in the same ways discussed for the platform provider.

How the Privatemode API is different

In contrast to other GenAI API services, the Privatemode API thoroughly protects against data access by these four parties when resolving an API call. No one can access your data—not the infrastructure provider, the platform provider, the model provider, or us as the service provider.
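
This guarantee rests on remote attestation: before any data is released, the client side verifies that the remote CCE runs exactly the expected, untampered code. The sketch below illustrates the comparison step conceptually; the measurement names and reference values are made up for illustration and don't reflect Privatemode's actual attestation protocol.

```python
import hashlib
import hmac

# Illustrative reference values: in a real deployment, these would be
# published measurements of the approved infrastructure and inference code.
EXPECTED_MEASUREMENTS = {
    "inference_code": hashlib.sha256(b"approved inference build").hexdigest(),
}

def verify_attestation(component: str, reported_measurement: str) -> bool:
    """Compare a CCE-reported measurement against the expected reference.

    Only if every measurement matches does the client release (encrypted)
    data to the service. hmac.compare_digest avoids timing side channels.
    """
    expected = EXPECTED_MEASUREMENTS.get(component)
    if expected is None:
        return False  # unknown component: refuse by default
    return hmac.compare_digest(expected, reported_measurement)

# Example: a tampered build produces a different measurement and is rejected.
tampered = hashlib.sha256(b"tampered inference build").hexdigest()
assert not verify_attestation("inference_code", tampered)
```

If any measurement deviates from the expected reference, no data is sent; this is how tampering by any of the four parties is ruled out before an API call is resolved.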