LLM API
Chat Completions
Responses:
200: Chat completion generated successfully
404: The specified model was not found
500: An internal server error occurred while processing the chat completion
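A minimal sketch of calling the chat completions endpoint. The base URL, port, `/v1/chat/completions` path, and model name are assumptions for illustration; adjust them for your deployment. The request is built but not sent, so no running server is required.

```python
import json
import urllib.request

# Assumed base URL for a local deployment; not specified in the text above.
BASE_URL = "http://localhost:8090/v1"

def build_chat_request(model, messages):
    """Build (but do not send) an OpenAI-style chat completion request."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=BASE_URL + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# A request naming a model the server does not know produces the
# 404 "model was not found" response described above.
req = build_chat_request(
    "my_model",
    [{"role": "user", "content": "Summarize yesterday's sales."}],
)
print(req.full_url)
print(req.get_method())
```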
NSQL (SQL Generation)
The format of the response: one of 'application/json' (default), 'application/vnd.spiceai.nsql.v1+json', 'application/sql', 'text/csv', or 'text/plain'. 'application/sql' returns only the SQL query generated by the model.
Responses:
200: SQL query executed successfully
400: Invalid request parameters
500: Internal server error
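The response-format list above reads like standard content negotiation, so this sketch selects the format via the Accept header. The base URL and the `/v1/nsql` path are assumptions inferred from the 'application/vnd.spiceai.nsql.v1+json' content type; the request body shape is also illustrative only.

```python
import json
import urllib.request

def build_nsql_request(query, response_format="application/json"):
    """Build a SQL-generation request, choosing the response format
    via the Accept header. 'application/sql' asks for only the
    generated SQL query, per the format list above."""
    body = json.dumps({"query": query}).encode("utf-8")  # assumed body shape
    return urllib.request.Request(
        url="http://localhost:8090/v1/nsql",  # assumed path and host
        data=body,
        headers={"Content-Type": "application/json", "Accept": response_format},
        method="POST",
    )

req = build_nsql_request("total sales by region", response_format="application/sql")
print(req.get_header("Accept"))
```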
Models
Query parameters:
- The format of the response (e.g., json or csv).
- If true, includes the status of each model in the response.
- A comma-separated list of metadata fields to include in the response (e.g., supports_responses_api).
Responses:
200: List of models in JSON format
500: Internal server error occurred while processing models
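A sketch of building the models listing URL. The parameter names `format` and `status` are assumptions inferred from the descriptions above (the text gives descriptions, not names), and the base URL and `/v1/models` path are likewise illustrative.

```python
import urllib.parse

def models_url(base="http://localhost:8090/v1", **params):
    """Build the models listing URL with optional query parameters.
    Parameter names here are assumptions; check the server's
    reference documentation for the exact names."""
    query = urllib.parse.urlencode(params)
    return base + "/models" + ("?" + query if query else "")

url = models_url(format="json", status="true")
print(url)
```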