Welcome to Spice.ai Cloud
Welcome to the Spice.ai Cloud Platform documentation!
The Spice.ai Cloud Platform is an AI application and agent cloud: an AI-backend-as-a-service comprising composable, ready-to-use AI and agent building blocks, including high-speed SQL query, LLM inference, Vector Search, and RAG, built on cloud-scale, managed Spice.ai OSS.
This documentation pertains to the Spice.ai Cloud Platform.
For documentation on the self-hostable Spice.ai OSS Project, please visit docs.spiceai.org.
With the Spice.ai Cloud Platform, powered by Spice.ai OSS, you can:
Query and accelerate data: Run high-performance SQL queries across multiple data sources, with results optimized for AI applications and agents (see the SQL sketch after this list).
Use AI Models: Perform large language model (LLM) inference with major providers including OpenAI, Anthropic, and xAI (Grok) for chat, completion, and generative AI workflows (see the chat completion sketch after this list).
Collaborate on Spicepods: Share, fork, and manage datasets, models, embeddings, evals, and tools in a collaborative, community-driven hub indexed by spicerack.org.
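As a minimal sketch of what a SQL query against the platform could look like over HTTP (the endpoint URL, auth header, and dataset name below are assumptions for illustration; see the getting started guide for the exact values for your account):

```python
import requests

# Assumed endpoint and auth header for illustration; consult the
# Spice.ai Cloud API reference for the exact values.
SPICE_SQL_URL = "https://data.spiceai.io/v1/sql"
API_KEY = "your-api-key"

def run_sql(query: str) -> list[dict]:
    """POST a SQL query as plain text and return the rows as JSON."""
    response = requests.post(
        SPICE_SQL_URL,
        data=query,
        headers={"Content-Type": "text/plain", "X-API-Key": API_KEY},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# "taxi_trips" is a hypothetical dataset name for this example.
for row in run_sql("SELECT * FROM taxi_trips LIMIT 3"):
    print(row)
```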
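Similarly, a minimal sketch of LLM inference, assuming the platform exposes an OpenAI-compatible chat completions endpoint (the base URL and model name are placeholders, not confirmed values; use the values configured for your Spicepod):

```python
from openai import OpenAI

# Assumed OpenAI-compatible endpoint; base URL and API key are
# placeholders for illustration only.
client = OpenAI(
    base_url="https://data.spiceai.io/v1",
    api_key="your-api-key",
)

completion = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model configured in your Spicepod
    messages=[
        {"role": "user", "content": "Summarize the latest taxi_trips data."}
    ],
)
print(completion.choices[0].message.content)
```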
Use Cases
Fast, virtualized data views: Build specialized “small data” warehouses to serve fast, virtualized views across large datasets for applications, APIs, dashboards, and analytics.
Performance and reliability: Manage replicas of hot data, cache SQL queries and AI results, and load-balance AI services to improve resiliency and scalability.
Production-grade AI workflows: Use Spice.ai Cloud as a data and AI proxy for secure, monitored, and compliant production environments, complete with advanced observability and performance management.
Take it for a spin by starting with the getting started guide.
Feel free to ask the team any questions in Discord.