Version: 1.12

Privatemode as a secure backend for PrivateGPT

PrivateGPT is an easy-to-use framework for securely running context-aware AI applications locally. Combined with Privatemode, you can offload computationally intensive inference to our servers while ensuring that all of your data stays private at all times.

Since Privatemode serves an OpenAI-compatible API, it can interface with PrivateGPT running in openailike mode.
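To illustrate what "OpenAI-compatible" means here, the sketch below builds a standard OpenAI-style chat completion request aimed at the proxy. The base URL and the `latest` model name are assumptions matching the setup steps that follow; any OpenAI-compatible client could POST this payload.

```python
import json

# Assumed address of the privatemode-proxy started in the setup guide below.
PROXY_BASE = "http://localhost:8000/v1"

def chat_request(prompt, model="latest"):
    """Return the endpoint URL and JSON body for an OpenAI-style chat completion."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return f"{PROXY_BASE}/chat/completions", json.dumps(body)

url, payload = chat_request("Hello")
# Sending the request is omitted here; PrivateGPT's openailike mode issues
# equivalent calls against the proxy on your behalf.
```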

Set-up guide

First, start your privatemode-proxy:

docker run -p 8000:8000 ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest --apiKey <your_api_key> --port 8000
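As a quick smoke test, you can query the proxy's OpenAI-compatible models endpoint. This sketch assumes the proxy from the command above is listening on localhost:8000; the fallback message covers the case where it is not yet reachable.

```shell
# Check that the proxy answers on its OpenAI-compatible API.
PROXY_URL="http://localhost:8000/v1/models"
curl -s "$PROXY_URL" || echo "proxy not reachable at $PROXY_URL"
```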

Next, follow the PrivateGPT installation instructions to install the required dependencies. Run the following command in the checked-out PrivateGPT repository to install the dependencies required for the default configuration of openailike mode:

poetry install --extras "ui embeddings-huggingface llms-openai-like vector-stores-qdrant"

Update the configuration in settings-vllm.yaml to use the latest model for inference:

openai:
  model: latest
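If your proxy is not reachable at PrivateGPT's default endpoint address, you may also need to point the openai section at it. This is a sketch assuming the field names used in PrivateGPT's settings files (verify the keys against your copy of settings-vllm.yaml):

```yaml
openai:
  api_base: http://localhost:8000/v1  # address of your privatemode-proxy
  model: latest
```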

You can now run PrivateGPT using the settings-vllm.yaml profile:

PGPT_PROFILES=vllm make run

Go to http://localhost:8001/ to access the deployment through the Gradio UI.