Quickstart
The Privatemode API provides a GenAI inference service built around end-to-end encryption and privacy preservation. Setting it up is straightforward and shouldn't take more than 10 minutes.
1. Install Docker
Follow the instructions to install Docker.
2. Run the proxy
The Privatemode API comes with its own proxy. The privatemode-proxy handles client-side encryption and uses remote attestation to verify the integrity and identity of the entire service. Run the proxy with the following command:
docker run -p 8080:8080 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your-api-key>
Instead of using Docker, you may run the native binary on Linux.
This exposes the proxy's endpoint on port 8080 of your host. This guide assumes that you run and use the proxy on your local machine. Alternatively, you can run it on another machine and configure TLS encryption.
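If you prefer to manage the container declaratively, the docker run command above translates to a Docker Compose file along these lines (a sketch; the service name and file layout are your choice):

```yaml
services:
  privatemode-proxy:
    image: ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest
    ports:
      - "8080:8080"
    command: ["--apiKey", "<your-api-key>"]
```

Start it with `docker compose up -d`.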
3. Send prompts
Now you're all set to use the API. The proxy handles all the security (and confidential computing) intricacies for you. Start by sending your first prompt:
- Bash
- Python
- JavaScript
Example request
curl localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "latest",
    "messages": [
      {
        "role": "user",
        "content": "Hello Privatemode!"
      }
    ]
  }'
Example response
"id": "chat-c87bdd75d1394dcc886556de3db5d0c9",
"object": "chat.completion",
"created": 1727429032,
"model": "latest",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello. I'm here to help you in any way I can.",
"tool_calls": []
},
"logprobs": null,
"finish_reason": "stop",
"stop_reason": null
}
],
"usage": {
"prompt_tokens": 34,
"total_tokens": 49,
"completion_tokens": 15
},
"prompt_logprobs": null
}
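In scripts, you often want just the reply text rather than the full JSON. Assuming jq is installed, the reply can be extracted from a response shaped like the one above:

```shell
# Sample response, shortened to the fields we need (same structure as the example above)
response='{"choices":[{"message":{"content":"Hello from Privatemode."}}]}'

# Pull out the assistant's reply with jq
reply=$(echo "$response" | jq -r '.choices[0].message.content')
echo "$reply"
```

The same filter works when piping the curl output directly, e.g. `curl -s ... | jq -r '.choices[0].message.content'`.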
Example request
import requests
import json

# A wrapper class for the API for convenience
class Privatemode:
    def __init__(self, url, port):
        self.endpoint = f"http://{url}:{port}/v1/chat/completions"  # Adjust path if necessary
        self.model = "latest"

    def run(self, prompt):
        # JSON payload with the necessary parameters
        payload = {
            "model": self.model,
            "messages": [{
                "role": "user",
                "content": prompt
            }],
        }
        headers = {"Content-Type": "application/json"}

        # Sending the request to your proxy
        response = requests.post(self.endpoint, headers=headers, data=json.dumps(payload))

        # Error handling in case of a bad response
        if response.status_code != 200:
            raise Exception(f"Error {response.status_code}: {response.text}")

        # Return the response JSON
        return response.json()

# Example usage
if __name__ == "__main__":
    # Initialize the wrapper with the URL and port of the proxy
    model = Privatemode(url="localhost", port=8080)

    # Run a prompt through the API
    try:
        response = model.run("Hello Privatemode")
        print(response.get('choices')[0].get('message').get('content'))  # Print the API response
    except Exception as e:
        print(f"An error occurred: {e}")
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
Example request
import fetch from "node-fetch"; // On Node.js 18+, the built-in fetch can be used instead

// A wrapper class for the API for convenience
class Privatemode {
  constructor(url, port) {
    this.endpoint = `http://${url}:${port}/v1/chat/completions`; // Adjust path if necessary
    this.model = "latest";
  }

  async run(prompt) {
    // JSON payload with the necessary parameters
    const payload = {
      model: this.model,
      messages: [
        {
          role: "user",
          content: prompt,
        },
      ],
    };
    const headers = {
      "Content-Type": "application/json",
    };

    try {
      // Sending the request to your proxy
      const response = await fetch(this.endpoint, {
        method: "POST",
        headers: headers,
        body: JSON.stringify(payload),
      });

      // Error handling in case of a bad response
      if (!response.ok) {
        throw new Error(`Error ${response.status}: ${response.statusText}`);
      }

      // Return the response JSON
      const data = await response.json();
      return data;
    } catch (error) {
      // Handle errors
      throw new Error(`Request failed: ${error.message}`);
    }
  }
}

// Example usage
(async () => {
  // Initialize the wrapper with the URL and port of the proxy
  const model = new Privatemode("localhost", 8080);

  // Run a prompt through the API
  try {
    const response = await model.run("Hello Privatemode");
    console.log(response.choices[0].message.content); // Print the API response
  } catch (error) {
    console.log(`An error occurred: ${error.message}`);
  }
})();
Example response
It's nice to meet you. Is there something I can help you with or would you like to chat?
The code performs the following steps:
- Construct a prompt request following the OpenAI Chat API specification.
- Send the prompt request to the privatemode-proxy. The proxy handles end-to-end encryption and verifies the integrity of the Privatemode backend that serves the endpoint.
- Receive and print the response.
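The request and response shapes used in these steps can be captured in two small helpers, which is handy for unit tests or for swapping HTTP clients later (the function names here are illustrative, not part of the API):

```python
def chat_request_body(prompt: str, model: str = "latest") -> dict:
    """Build a Chat Completions request body following the OpenAI API specification."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_reply(response: dict) -> str:
    """Return the assistant's message text from a Chat Completions response."""
    return response["choices"][0]["message"]["content"]
```

Keeping payload construction and response parsing separate from the transport code makes it easy to switch between requests, fetch, or an OpenAI-compatible client library without touching the message format.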
Privatemode doesn't use any OpenAI services. It only adheres to the same interface definitions to provide a great development experience and ensure easy code portability.