Release notes

v1.8.0

  • Privatemode now runs on Contrast. This changes how the privatemode-proxy verifies the deployment. The changes to attestation are described in the documentation.
  • Fixed an app issue where prompts sent before initialization would return errors. The app now waits for initialization to complete before responding.
  • Fixed an incorrect Content-Type header on the /v1/models endpoint, changing it from text/plain to application/json to resolve Web UI compatibility issues (see the check after this list).
  • Persist app logs (see Logging).
  • Add a system prompt with knowledge about Privatemode AI to the app. This allows users to ask basic questions about the security of the service.
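
As a quick check of the Content-Type fix, the following minimal sketch queries the models endpoint through a locally running privatemode-proxy. The address and port are assumptions for illustration; adjust them to your deployment.

```python
import urllib.request

# Query the models endpoint via a local privatemode-proxy.
# "localhost:8080" is an assumed address; use your proxy's actual endpoint.
with urllib.request.urlopen("http://localhost:8080/v1/models") as resp:
    # As of v1.8.0 this should print "application/json" (was "text/plain").
    print(resp.headers.get("Content-Type"))
    print(resp.read().decode())
```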

v1.7.0

  • The deprecated model parameter hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 was removed. Make sure you have updated the model parameter (see the v1.5.0 release notes).
  • As part of the rename to Privatemode, the domains used by the privatemode-proxy (previously continuum-proxy) have changed to api.privatemode.ai, secret.privatemode.ai, and contrast.privatemode.ai.
  • The privatemode-proxy flag --asEndpoint was renamed to --ssEndpoint (secret-service) to reflect the new architecture (see the sketch after this list).
  • The privatemode-proxy flag --disableUpdate was removed (see deprecation notice in v1.5).
  • The secret-service (was attestation-service) is now available on port 443 (was 3000).
  • The context window size is temporarily reduced from 130,000 tokens to 70,000 tokens to increase capacity during an internal migration.
  • The desktop app log directory was updated from $CFG_DIR/EdgelessSystems/continuum to $CFG_DIR/EdgelessSystems/privatemode. See transparency log.
  • The desktop app now opens external links from the chat in the browser.
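
A minimal sketch of launching the proxy with the renamed flag, assuming a Docker-based setup; the image tag, published port, and endpoint value are illustrative assumptions, not verified defaults.

```python
import subprocess

# Launch the privatemode-proxy with the renamed flag (--asEndpoint -> --ssEndpoint).
# Image tag, port mapping, and endpoint value are assumptions for illustration.
subprocess.run([
    "docker", "run", "-p", "8080:8080",
    "ghcr.io/edgelesssys/privatemode/privatemode-proxy:v1.7.0",
    "--ssEndpoint", "secret.privatemode.ai:443",  # secret-service now on port 443
], check=True)
```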

v1.6.0

  • The product is renamed to Privatemode. Older proxy versions remain compatible with the API.
  • The proxy container location has changed to ghcr.io/edgelesssys/privatemode/privatemode-proxy. See proxy configuration.
  • The manifest log directory has changed from continuum-manifests to manifests. See proxy configuration.

v1.5.0

Warning: Please update the model parameter in your request body. The old parameter (hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4) is outdated, and support will be dropped in the next minor release. If you always want to use the latest model, use the new model parameter (latest). For more information, see Example Prompting and the request sketch below.
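
A rough sketch of the updated request body, assuming a privatemode-proxy listening locally (address and port are placeholders):

```python
import json
import urllib.request

# Chat completion request using the "latest" model alias instead of the
# deprecated hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 parameter.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed proxy address
    data=json.dumps({
        "model": "latest",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())
```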

  • Upgrade to the Llama 3.3 70B model (ibnzterrell/Meta-Llama-3.3-70B-Instruct-AWQ-INT4) for improved quality.
  • Upgrade vLLM to v0.6.6.
  • The --disableUpdate flag is deprecated. Providing a manifest file via --manifestPath automatically disables the update behavior (see the sketch after this list). Refer to Manifest management for more details.
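
A sketch of pinning a manifest, which now implicitly disables updates; the mount path and image reference are assumptions for illustration (the image was still named continuum-proxy at this release).

```python
import subprocess

# Run the proxy with a pinned manifest; providing --manifestPath disables
# the update behavior, so --disableUpdate is no longer needed.
# Mount path and image reference are illustrative assumptions.
subprocess.run([
    "docker", "run",
    "-v", "./manifest.json:/manifest.json",
    "ghcr.io/edgelesssys/privatemode/privatemode-proxy:latest",
    "--manifestPath", "/manifest.json",
], check=True)
```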

v1.4.0

  • Major rewrite of the documentation.
  • Support token-based billing via Stripe.
  • Fixes a bug so that errors are returned as type text/event-stream when requested by the client.

v1.3.1

  • Improve stability for cases where the AMD Key Distribution Service is unavailable.

v1.3.0

  • Internal changes to license management.

v1.2.2

  • Fixes a bug in streaming requests that made optional parameters required when stream_options: {"include_usage": true} wasn't set (see the sketch below).
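
For context, a minimal request-body sketch; before v1.2.2, omitting stream_options made otherwise optional parameters mandatory (the model name is a placeholder):

```python
# Streaming request body that triggered the bug before v1.2.2.
body = {
    "model": "latest",  # placeholder model name
    "messages": [{"role": "user", "content": "Hi"}],
    "stream": True,
    # Before v1.2.2, leaving out the line below made otherwise optional
    # parameters required; since the fix, it can safely be omitted.
    # "stream_options": {"include_usage": True},
}
```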

v1.2.0

  • Add arm64 support for the continuum-proxy. Find information on how to use it in the Continuum-proxy section.
  • Token tracking is now automatically enabled for streaming requests by transparently setting include_usage in the stream_options (see the sketch after this list).
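
A sketch of reading token counts from a streaming response, assuming the OpenAI-style server-sent-events format and a local proxy address (both assumptions):

```python
import json
import urllib.request

# Streaming request; since v1.2.0 the proxy sets stream_options.include_usage
# itself, so the final chunk should carry token usage without client changes.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",  # assumed proxy address
    data=json.dumps({
        "model": "latest",  # placeholder model name
        "messages": [{"role": "user", "content": "Hi"}],
        "stream": True,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for raw in resp:
        line = raw.decode().strip()
        if not line.startswith("data: ") or line == "data: [DONE]":
            continue
        chunk = json.loads(line[len("data: "):])
        if chunk.get("usage"):  # token counts arrive on the final chunk
            print(chunk["usage"])
```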

v1.1.0

  • Increase peak performance by more than 40% through improved request scheduling.
  • Increase performance by about 6% through a vLLM upgrade to v0.6.1.