Version: 1.7

Quickstart

The Privatemode API provides a GenAI inference service designed with end-to-end encryption and privacy preservation at its core. Setting it up is straightforward and shouldn't take more than 10 minutes.

1. Install Docker

Follow the instructions to install Docker.
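You can confirm that Docker is installed and the daemon is running before continuing:

docker --version   # prints the installed Docker version
docker info        # fails if the Docker daemon isn't running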

2. Run the proxy

The Privatemode API comes with its own proxy. The privatemode-proxy takes care of client-side encryption and verifies the integrity and identity of the entire service using remote attestation. Use the following command to run the proxy:

docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>
tip

Instead of using Docker, you may run the native binary on Linux.

This opens an endpoint on your host on port 8080. This guide assumes that you run and use the proxy on your local machine. Alternatively, you can run it on another machine and configure TLS encryption.
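To verify that the proxy is up and reachable, you can query it before sending a prompt. A minimal check, assuming the proxy forwards the OpenAI-compatible /v1/models endpoint:

# Lists the models available through the proxy (assumes /v1/models is forwarded).
curl localhost:8080/v1/models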

3. Send prompts

Now you're all set to use the API. The proxy handles all the security (and confidential computing) intricacies for you. Start by sending your first prompt:

Example request

curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "latest",
    "messages": [
      {
        "role": "user",
        "content": "Hello Privatemode!"
      }
    ]
  }'

Example response

    "id": "chat-c87bdd75d1394dcc886556de3db5d0c9",
"object": "chat.completion",
"created": 1727429032,
"model": "latest",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello. I'm here to help you in any way I can.",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"usage": {
"prompt_tokens": 34,
"total_tokens": 49,
"completion_tokens": 15
},
"prompt_logprobs": null
}

The request performs the following steps:

  1. Construct a prompt request following the OpenAI Chat API specification.
  2. Send the prompt request to the privatemode-proxy. The proxy handles end-to-end encryption and verifies the integrity of the Privatemode backend that serves the endpoint.
  3. Receive and print the response.
info

Privatemode doesn't use any OpenAI services. It only adheres to the same interface definitions to provide a great development experience and ensure easy code portability.
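Because the endpoint follows the OpenAI Chat Completions interface, you can also request a streaming response. A minimal sketch, assuming the backend supports the standard stream parameter:

# Assumes the standard OpenAI "stream" parameter is supported by the backend.
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "latest",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write one sentence about confidential computing."}
    ]
  }'

With streaming enabled, the response arrives as server-sent events, each carrying a chunk of the completion.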