LLM API

Chat Completions

Spice provides an OpenAI-compatible chat completions API at https://data.spiceai.io/v1/chat/completions. Authorize requests to the endpoint with an App API key.

The App requires a configured and deployed model to respond to chat completion requests.

For more information about using chat completions, refer to the OpenAI documentation.
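Because the endpoint is OpenAI compatible, an OpenAI SDK client can typically be pointed at it. Below is a minimal sketch using the OpenAI Python SDK, assuming the App API key is supplied as the X-API-KEY header via default_headers; the model name is a placeholder for whatever model the App has deployed.

```python
# A minimal sketch, assuming the OpenAI Python SDK (openai >= 1.0) and a
# deployed model named "my-model" (hypothetical). The App API key is also
# sent as the X-API-KEY header, which is how the endpoint authorizes requests.
from openai import OpenAI

client = OpenAI(
    base_url="https://data.spiceai.io/v1",
    api_key="YOUR_APP_API_KEY",  # placeholder
    default_headers={"X-API-KEY": "YOUR_APP_API_KEY"},
)

response = client.chat.completions.create(
    model="my-model",  # hypothetical; use your deployed model's name
    messages=[{"role": "user", "content": "Summarize yesterday's sales."}],
)
print(response.choices[0].message.content)
```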

Create Chat Completion

POST /v1/chat/completions

Creates a model response for the given chat conversation.

Authorizations
X-API-KEY (string, required)
Body
any (optional)
Responses
200

Chat completion generated successfully

application/json
Response: any
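
The request body is untyped here ("any"), so a request that follows the OpenAI chat completions schema is a reasonable sketch. The snippet below uses the Python requests library; the model name and message content are illustrative.

```python
# A minimal sketch of a raw request, assuming the body follows the OpenAI
# chat completions schema; the model name is hypothetical.
import requests

resp = requests.post(
    "https://data.spiceai.io/v1/chat/completions",
    headers={
        "X-API-KEY": "YOUR_APP_API_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "my-model",  # hypothetical; use your deployed model
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```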

Text-to-SQL (NSQL)

POST /v1/nsql

Generate, and optionally execute, a SQL query from a natural language (NSQL) request.

This endpoint generates a SQL query from a natural language query (NSQL) using the specified model, and executes it unless the Accept header is set to application/sql. When stream is true, the response is streamed as Server-Sent Events (SSE).

Authorizations
X-API-KEY (string, required)
Header parameters
Accept (string, required)

The format of the response: one of 'application/json' (default), 'application/vnd.spiceai.nsql.v1+json', 'application/sql', 'text/csv', or 'text/plain'. 'application/sql' returns only the SQL query generated by the model.

Body
any (optional)
Responses
200

SQL query executed successfully
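
The sketch below calls the endpoint and requests only the generated SQL. The body is typed as "any" in the spec, so the "query" and "model" field names are assumptions made for illustration.

```python
# A minimal sketch, assuming the request body carries the natural-language
# question in a "query" field and an optional "model" field (both field names
# are assumptions; the spec above types the body as "any").
import requests

resp = requests.post(
    "https://data.spiceai.io/v1/nsql",
    headers={
        "X-API-KEY": "YOUR_APP_API_KEY",
        # "application/sql" returns only the generated SQL; use
        # "application/json" (the default) to also execute the query.
        "Accept": "application/sql",
        "Content-Type": "application/json",
    },
    json={
        "query": "How many orders were placed last week?",  # assumed field name
        "model": "my-model",  # assumed field name; hypothetical model
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.text)  # the generated SQL statement
```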


List Models

GET /v1/models

List all models available in the runtime, both machine learning and language models.

Authorizations
X-API-KEY (string, required)
Query parameters
format (any, optional)

The format of the response (e.g., json or csv).

status (boolean, optional)

If true, includes the status of each model in the response.

metadata_fields (string, optional)

A comma-separated list of metadata fields to include in the response (e.g., supports_responses_api).

Responses
200

List of models in JSON format

Response: any
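
A sketch of listing models with the documented query parameters. The response shape is not specified here, so the result is simply printed.

```python
# A minimal sketch of listing models; query parameter values follow the
# documented options (format, status, metadata_fields).
import requests

resp = requests.get(
    "https://data.spiceai.io/v1/models",
    headers={"X-API-KEY": "YOUR_APP_API_KEY"},
    params={
        "format": "json",
        "status": "true",
        "metadata_fields": "supports_responses_api",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```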
