Version: 1.29

Models API

Use the Privatemode model API to get a list of currently available models. The API is compatible with the OpenAI list models API. To list models, send your requests to the Privatemode proxy.

List models

You can get a list of all available models using the models endpoint.

GET /v1/models

This endpoint lists all currently available models.

Returns

The response is a list of model objects.

Example request

#!/usr/bin/env bash

curl localhost:8080/v1/models

Example response

{
  "object": "list",
  "data": [
    {
      "id": "llama-3.3-70b",
      "object": "model",
      "tasks": [
        "generate",
        "tool_calling"
      ]
    },
    {
      "id": "gemma-3-27b",
      "object": "model",
      "tasks": [
        "generate",
        "vision"
      ]
    },
    {
      "id": "multilingual-e5-large",
      "object": "model",
      "tasks": [
        "embed"
      ]
    }
  ]
}
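Each entry in the `data` array pairs a model ID with the tasks it supports. As a minimal sketch, the example payload above can be parsed like this in Python (in practice you would read the body from the HTTP response instead of a hard-coded string):

```python
import json

# Example payload as returned by GET /v1/models (copied from the response above).
response_body = """
{
  "object": "list",
  "data": [
    {"id": "llama-3.3-70b", "object": "model", "tasks": ["generate", "tool_calling"]},
    {"id": "gemma-3-27b", "object": "model", "tasks": ["generate", "vision"]},
    {"id": "multilingual-e5-large", "object": "model", "tasks": ["embed"]}
  ]
}
"""

# Extract the list of model objects and collect their IDs.
models = json.loads(response_body)["data"]
model_ids = [m["id"] for m in models]
print(model_ids)  # ['llama-3.3-70b', 'gemma-3-27b', 'multilingual-e5-large']
```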
Note

Starting with v1.28.0, shortened model IDs (without prefixes and suffixes) were introduced. For backward compatibility, the original full model IDs are still supported for now.

As a result, the /models endpoint may return both the shortened and full model IDs for older models. This duplication doesn't affect the behavior of any other endpoints.

Supported model tasks

The response field tasks provides a list of all tasks a model supports:

  • embed: Create vector representations (embeddings) of input text.
  • generate: Generate text completions or chat responses from prompts.
  • tool_calling: Invoke function calls or tools (such as retrieval-augmented generation or plugins).
  • vision: Accept image inputs alongside text prompts.

Note that tasks isn't part of the OpenAI API spec.
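Because tasks is a non-standard field, generic OpenAI client libraries may not expose it directly, so one option is to filter the raw JSON yourself. A sketch under that assumption, reusing the example response from above (the helper name `models_for_task` is hypothetical, not part of the API):

```python
import json

# Same example payload as the /v1/models response shown earlier.
response_body = json.dumps({
    "object": "list",
    "data": [
        {"id": "llama-3.3-70b", "object": "model", "tasks": ["generate", "tool_calling"]},
        {"id": "gemma-3-27b", "object": "model", "tasks": ["generate", "vision"]},
        {"id": "multilingual-e5-large", "object": "model", "tasks": ["embed"]},
    ],
})

def models_for_task(body: str, task: str) -> list[str]:
    """Return the IDs of all models whose tasks list contains `task`."""
    return [m["id"] for m in json.loads(body)["data"] if task in m["tasks"]]

print(models_for_task(response_body, "generate"))  # ['llama-3.3-70b', 'gemma-3-27b']
print(models_for_task(response_body, "embed"))     # ['multilingual-e5-large']
```

This keeps the selection logic independent of any particular client library: the only assumption is the response shape documented above.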