
Client side

On the client side, the user interacts with the privatemode-proxy, which serves as the API endpoint for all inference requests to the LLM. Ideally, the proxy runs on the user's machine or within a secure, trusted network.
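Because the proxy exposes a regular API endpoint, client code looks like an ordinary HTTP inference call. The following is a minimal sketch; the local address (`localhost:8080`), the model name, and the OpenAI-style chat-completions path are assumptions for illustration and may differ in your deployment:

```python
import json
import urllib.request

# Assumed address of the locally running privatemode-proxy.
PROXY_URL = "http://localhost:8080/v1/chat/completions"

def build_request(prompt: str, model: str) -> urllib.request.Request:
    """Construct a chat-completion request aimed at the local proxy.

    The proxy handles attestation, encryption, and authorization
    transparently, so the client code needs no special logic.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        PROXY_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Hello", "example-model")
```

Sending `req` (e.g. with `urllib.request.urlopen`) would reach the proxy, which then encrypts the prompt before forwarding it to the server side.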

Proxy

The client-side privatemode-proxy acts as the trust anchor of the Privatemode API. Ensuring its integrity and authenticity during setup is crucial for maintaining the overall security of the system.

The proxy performs three main tasks:

  1. Attesting the server side: The proxy verifies the attestation service using remote attestation. This process indirectly confirms that the attestation service

    • properly verifies all AI workers.
    • facilitates secure key exchanges.

    In essence, this step ensures the integrity and authenticity of Privatemode API's server side.

  2. Encrypting outgoing prompts and decrypting incoming replies: Upon successful attestation, the proxy exchanges a secret key with the AI worker via the attestation service. This key enables end-to-end encryption between the proxy and the confidential computing environment of the AI worker, ensuring private communication.

  3. Adding authorization to inference requests: During configuration, the proxy is set up with an authorization token. It automatically adds this token to all inference requests to authenticate and authorize them.
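The three tasks above can be sketched as a single pipeline. Everything here is hypothetical and for illustration only: attestation and key exchange are mocked, and the XOR "cipher" is not real cryptography; it only marks where end-to-end encryption sits in the flow:

```python
import hashlib
import os

def attest_server() -> bool:
    """Mocked remote attestation (task 1). A real proxy verifies the
    attestation service's hardware evidence here."""
    return True

def exchange_key() -> bytes:
    """Mocked key exchange. After successful attestation, the proxy and
    the AI worker agree on a secret key via the attestation service."""
    return os.urandom(32)

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Illustrative symmetric cipher (SHA-256 keystream + XOR).
    NOT real cryptography; applying it twice recovers the input."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def send_prompt(prompt: str, token: str) -> dict:
    """Sketch of the proxy pipeline: attest, exchange a key, encrypt the
    outgoing prompt (task 2), and attach the authorization token (task 3)."""
    if not attest_server():
        raise RuntimeError("attestation failed; refusing to send the prompt")
    key = exchange_key()
    request = {
        "headers": {"Authorization": f"Bearer {token}"},
        "body": xor_keystream(key, prompt.encode()),
    }
    # The same keystream decrypts an incoming reply symmetrically.
    assert xor_keystream(key, request["body"]).decode() == prompt
    return request

req = send_prompt("What is confidential computing?", "example-token")
```

The ordering matters: no key is exchanged, and hence nothing is sent, unless attestation succeeds first.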

By performing these tasks, the proxy ensures secure and trustworthy interactions between the client and the AI infrastructure.