Proxy and API quickstart
Use this quickstart if you want to call the API from your own code or product, or use third-party integrations such as coding assistants.
1. Sign in to the portal
Open the Privatemode portal and sign in.
The portal is where you manage:
- organizations
- access keys
- usage
- billing
2. Create or select an organization
Rate limits, usage, and billing are managed at the organization level.
If this is your first time using Privatemode:
- Create an organization
- Select it in the portal
- Continue with access key creation
For more details, see the Overview and Organizations pages.
3. Create an access key
In the portal, go to Access keys and create a new access key.
Save the key securely when it's shown to you. You will need it to authenticate the Privatemode proxy and client applications.
For best practices, see Access keys.
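Rather than hard-coding the key, many setups read it from an environment variable. A minimal sketch, assuming a variable named `PRIVATEMODE_API_KEY` (the name is illustrative, not mandated by Privatemode):

```python
import os

def load_access_key(var: str = "PRIVATEMODE_API_KEY") -> str:
    """Read the Privatemode access key from the environment.

    Fails loudly instead of letting requests go out with a missing key.
    """
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"Set {var} to your Privatemode access key")
    return key
```

You can then pass the key to the proxy with `--apiKey "$PRIVATEMODE_API_KEY"` instead of pasting it into the command line.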
4. Install Docker
Follow the instructions to install Docker.
On Windows, the easiest way is to run the proxy inside the Windows Subsystem for Linux (WSL) with the networking mode set to mirrored. Open "WSL Settings" and go to "Networking" to set the networking mode.
5. Run the proxy
The Privatemode API comes with its own proxy. The Privatemode proxy takes care of client-side encryption and verifies the integrity and identity of the entire service using remote attestation. Use the following command to run the proxy:
```shell
docker run -p 8080:8080 \
  ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest \
  --apiKey <your-access-key>
```
Instead of using Docker, you may run the native binary on Linux.
This exposes an endpoint on port 8080 of your host. This guide assumes you run and use the proxy on your local machine. Alternatively, you can run it on another machine and configure TLS encryption.
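Before wiring up clients, you can verify that something is listening on the proxy port with a plain TCP probe. A minimal sketch; the host and port mirror the `docker run` command above:

```python
import socket

def proxy_reachable(host: str = "localhost", port: int = 8080,
                    timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the proxy endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False`, check that the container is running (`docker ps`) and that the `-p 8080:8080` port mapping is in place.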
6. Send prompts
Now you're all set to use the API. The proxy handles all the security and confidential computing details for you. Start by sending your first prompt:
- Bash
- Python
- JavaScript
Example request
```bash
#!/usr/bin/env bash
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "kimi-k2.5",
    "messages": [
      {
        "role": "user",
        "content": "Hello Privatemode!"
      }
    ]
  }'
```
Example response
```json
{
  "id": "chatcmpl-proxy_021e5a68-0346-432b-a0f0-7b516ab0c9a7_0",
  "object": "chat.completion",
  "created": 1776086356,
  "model": "kimi-k2.5",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": " Hello! I'm Kimi, ...",
        "refusal": null,
        "annotations": null,
        "audio": null,
        "function_call": null,
        "tool_calls": [],
        "reasoning": " The user greeted me ... ",
        "reasoning_content": " The user greeted me ... "
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": 163586,
      "token_ids": null
    }
  ],
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "prompt_tokens": 30,
    "total_tokens": 420,
    "completion_tokens": 390,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null,
  "prompt_token_ids": null,
  "kv_transfer_params": null
}
```
Example request
```python
import openai

client = openai.OpenAI(
    api_key="placeholder",  # Already set in the proxy, but needs to be non-empty here
    base_url="http://localhost:8080/v1",  # Adjust as necessary
)

response = client.chat.completions.create(
    model="kimi-k2.5",
    messages=[
        {"role": "user", "content": "Hello Privatemode!"},
    ],
)

print(response.choices[0].message.content)
```
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
Example request
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: 'placeholder', // Already set in the proxy, but needs to be non-empty here
  baseURL: 'http://localhost:8080/v1', // Adjust as necessary
});

const response = await client.chat.completions.create({
  model: 'kimi-k2.5',
  messages: [{ role: 'user', content: 'Hello Privatemode!' }],
});

console.log(response.choices[0].message.content);
```
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
The code performs the following steps:
- Construct a prompt request following the OpenAI Chat API specification.
- Send the prompt request to the Privatemode proxy. The proxy handles end-to-end encryption and verifies the integrity of the Privatemode backend that serves the endpoint.
- Receive and print the response.
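Because responses follow the OpenAI chat-completion shape, pulling out the assistant's reply and token counts is plain dictionary access. A sketch against a trimmed-down response like the one shown earlier:

```python
import json

# A trimmed-down example response in the OpenAI chat-completion shape.
response_json = """
{
  "model": "kimi-k2.5",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hello!"},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 30, "completion_tokens": 390, "total_tokens": 420}
}
"""

def summarize(raw: str) -> tuple[str, int]:
    """Return the first choice's content and the total token count."""
    data = json.loads(raw)
    content = data["choices"][0]["message"]["content"]
    total = data["usage"]["total_tokens"]
    return content, total

content, total = summarize(response_json)
print(content, total)  # Hello! 420
```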
Privatemode doesn't use any OpenAI services. It only adheres to the same interface definitions to provide a great development experience and ensure easy code portability.
Next steps
- Learn about proxy configuration
- Explore available models
- Read the API overview
- Set up coding assistants
- Manage keys, usage, and billing in the portal