
Client

On the client side, the user interacts with the privatemode-proxy, which serves as the API endpoint for all inference requests to the LLM. Ideally, the proxy is deployed on the user's machine or within a secure and trusted network.
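
To make the interaction concrete, the following Go sketch sends a chat completion request to a locally running privatemode-proxy. The host, port, endpoint path, and model name are assumptions for illustration; adjust them to match your deployment.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumption: the privatemode-proxy listens on localhost:8080 and
	// exposes an OpenAI-compatible chat completions endpoint. The model
	// name is a placeholder.
	body := []byte(`{
		"model": "example-model",
		"messages": [{"role": "user", "content": "Hello!"}]
	}`)

	resp, err := http.Post("http://localhost:8080/v1/chat/completions",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```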

Privatemode-proxy

The client-side privatemode-proxy acts as the trust anchor of the Privatemode API. Ensuring its integrity and authenticity during setup is crucial for maintaining the overall security of the system.

The privatemode-proxy performs three main tasks:

  1. Attesting the server side: The privatemode-proxy verifies the attestation service using remote attestation. This process indirectly confirms that the attestation service

    • properly verifies all AI workers.
    • facilitates secure key exchanges.

    In essence, this step ensures the integrity and authenticity of the Privatemode API's server side (a simplified measurement check is sketched after this list).

  2. Encrypting outgoing prompts and decrypting incoming responses: Upon successful attestation, the privatemode-proxy exchanges a secret key with the AI worker via the attestation service. This key enables end-to-end encryption between the privatemode-proxy and the confidential computing environment of the AI worker, ensuring private communication (see the encryption sketch after this list).

  3. Adding authorization to inference requests: The privatemode-proxy is configured with an authorization token, which it automatically adds to all inference requests to authenticate and authorize them (see the authorization sketch after this list).
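
The measurement check at the heart of step 1 can be illustrated as follows. `verifyMeasurement` and the placeholder values are hypothetical: real remote attestation also validates the report's signature chain, the issuing hardware, and freshness, none of which is shown here.

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// verifyMeasurement compares a measurement taken from an attestation
// report against an expected reference value in constant time. This is
// only the final comparison step of a real attestation flow.
func verifyMeasurement(reported, expected []byte) bool {
	return subtle.ConstantTimeCompare(reported, expected) == 1
}

func main() {
	// Placeholder values standing in for a reference measurement and a
	// measurement extracted from an attestation report.
	expected := sha256.Sum256([]byte("expected launch measurement"))
	reported := sha256.Sum256([]byte("expected launch measurement"))
	fmt.Println("measurement ok:", verifyMeasurement(reported[:], expected[:]))
}
```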
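
Step 2 boils down to authenticated symmetric encryption with the exchanged secret. The sketch below uses AES-GCM as an illustrative choice; the actual cipher and key-exchange mechanics of the Privatemode API are not specified here, and the locally generated key merely stands in for the secret exchanged via the attestation service.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

func main() {
	// Stand-in for the 256-bit secret exchanged with the AI worker.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		panic(err)
	}

	block, err := aes.NewCipher(key)
	if err != nil {
		panic(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		panic(err)
	}

	// Encrypt an outgoing prompt under a fresh random nonce.
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		panic(err)
	}
	ciphertext := gcm.Seal(nil, nonce, []byte("What is confidential computing?"), nil)

	// Decrypt, as the worker would for prompts (or the proxy for responses).
	plaintext, err := gcm.Open(nil, nonce, ciphertext, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(plaintext))
}
```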
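
Step 3 can be pictured as a thin wrapper around the HTTP client that attaches the configured token to every outgoing request. The bearer scheme and header name in this sketch are assumptions, not the documented wire format.

```go
package main

import (
	"fmt"
	"net/http"
)

// authTransport is a hypothetical RoundTripper that adds an authorization
// token to every request passing through it.
type authTransport struct {
	token string
	base  http.RoundTripper
}

func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	// Clone before mutating: RoundTrippers must not modify the caller's request.
	r := req.Clone(req.Context())
	r.Header.Set("Authorization", "Bearer "+t.token)
	return t.base.RoundTrip(r)
}

func main() {
	client := &http.Client{Transport: &authTransport{
		token: "example-token", // placeholder; set during proxy configuration
		base:  http.DefaultTransport,
	}}
	_ = client // use this client for inference requests as in the first sketch
	fmt.Println("client configured with bearer token")
}
```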

By performing these tasks, the privatemode-proxy ensures secure and trustworthy interactions between the client and the AI infrastructure.