# OpenAI

Instructions for using language models hosted on OpenAI or compatible services with Spice.

To use a language model hosted on OpenAI (or an OpenAI-compatible service), specify the `openai` path in the `from` field. For a specific model, include it as the model ID in the `from` field (see the example below). The default model is `gpt-4o-mini`.
## Configuration
### `from`

The `from` field takes the form `openai:model_id`, where `model_id` is the model ID of the OpenAI model. Valid model IDs are listed in the `{endpoint}/v1/models` API response.
Example:
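A minimal `spicepod.yaml` model entry (the model ID and secret name shown are illustrative):

```yaml
models:
  - from: openai:gpt-4o
    name: openai
    params:
      # Example secret name; use whatever secret your spicepod is configured with.
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }
```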
### `name`

The model name. This is used as the model ID within Spice and in Spice's endpoints (e.g. `https://data.spiceai.io/v1/models`). It can be set to the same value as the model ID in the `from` field.
### `params`

| Param | Description | Default |
| ----- | ----------- | ------- |
| `endpoint` | The OpenAI API base endpoint. Can be overridden to use a compatible provider (e.g. NVIDIA NIM). | `https://api.openai.com/v1` |
| `tools` | Which tools should be made available to the model. | - |
| `system_prompt` | An additional system prompt used for all chat completions to this model. | - |
| `openai_api_key` | The OpenAI API key. | - |
| `openai_org_id` | The OpenAI organization ID. | - |
| `openai_project_id` | The OpenAI project ID. | - |
| `openai_temperature` | The default temperature to use for chat completions. | - |
| `openai_response_format` | The response format for chat completions (see OpenAI's `response_format` parameter). | - |
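For example, overriding several of these params at once (a sketch; the prompt, temperature, and secret name are illustrative values):

```yaml
models:
  - from: openai:gpt-4o-mini
    name: openai
    params:
      endpoint: https://api.openai.com/v1
      system_prompt: You are a helpful assistant.
      openai_api_key: ${ secrets:SPICE_OPENAI_API_KEY }  # example secret name
      openai_temperature: 0.2
```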
See Large Language Models for additional configuration options.
## Supported OpenAI-Compatible Providers

Spice supports several OpenAI-compatible providers. Specify the appropriate `endpoint` in the `params` section.
### Azure OpenAI

Follow the Azure AI Models instructions.
### Groq

Groq provides OpenAI-compatible endpoints. Use the following configuration:
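A sketch using Groq's OpenAI-compatible base URL (the model ID and secret name are illustrative):

```yaml
models:
  - from: openai:llama-3.1-8b-instant  # any model ID available on Groq
    name: groq
    params:
      endpoint: https://api.groq.com/openai/v1
      openai_api_key: ${ secrets:SPICE_GROQ_API_KEY }  # example secret name
```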
### NVIDIA NIM

NVIDIA NIM models expose OpenAI-compatible endpoints. Use the following configuration:
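A sketch assuming a locally deployed NIM microservice; the host, port, and model ID depend on your deployment:

```yaml
models:
  - from: openai:meta/llama-3.1-8b-instruct  # a model served by your NIM deployment
    name: nim
    params:
      endpoint: http://localhost:8000/v1  # assumed local NIM address
```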
See the Spice cookbook for an example of setting up NVIDIA NIM with Spice.
### Parasail

Parasail also offers OpenAI-compatible endpoints. Use the following configuration:
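A sketch; the base URL below is an assumption, so verify the current endpoint and available model IDs in Parasail's documentation:

```yaml
models:
  - from: openai:parasail-model-id  # replace with a model ID from Parasail
    name: parasail
    params:
      endpoint: https://api.parasail.io/v1  # assumed; confirm with Parasail docs
      openai_api_key: ${ secrets:SPICE_PARASAIL_API_KEY }  # example secret name
```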
Refer to the respective provider documentation for more details on available models and configurations.