View AI usage and performance for your app.
The builder uses AI models for chat, code generation, and edits. Usage is tied to your account; you can connect your own API keys in Settings so the builder uses your quota and preferred providers.
Configure API keys and model providers in Settings → Integrations. Your keys are stored securely and used only for your sessions.
OpenAI
GPT and embedding models. Add your API key in Settings → Integrations so the builder uses your quota and preferred models.
Anthropic
Claude models. Add your API key in Settings → Integrations to use Claude in the builder.
Groq
Fast inference. Add your API key in Settings → Integrations to use Groq models in the builder.
OpenRouter
Many models through one API. Add your key in Settings → Integrations to use OpenRouter in the builder, or set the server environment variable OPENROUTER_API_KEY so Motivd can fall back to OpenRouter after Groq (default model: xiaomi/mimo-v2-pro).
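The fallback order above (Groq first, then OpenRouter when its server key is set) can be sketched as follows. This is an illustrative sketch only: the `GROQ_API_KEY` variable name and the function itself are assumptions, not Motivd's actual implementation; only `OPENROUTER_API_KEY` and the default model come from this page.

```python
import os

# Default model used for the OpenRouter fallback, per the docs above.
OPENROUTER_DEFAULT_MODEL = "xiaomi/mimo-v2-pro"

def pick_fallback_provider(env):
    """Return (provider, default_model) for the first configured provider.

    Groq is preferred when its key is present; otherwise OpenRouter is
    used if OPENROUTER_API_KEY is set; otherwise no fallback applies.
    """
    if env.get("GROQ_API_KEY"):  # assumed variable name for illustration
        return ("groq", None)    # Groq's own default model applies
    if env.get("OPENROUTER_API_KEY"):
        return ("openrouter", OPENROUTER_DEFAULT_MODEL)
    return (None, None)

# Inspect the current process environment.
provider, model = pick_fallback_provider(dict(os.environ))
```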
Mistral
Mistral and Mixtral models. Add your API key in Settings → Integrations to use them in the builder.
Together
Open and custom models. Add your API key in Settings → Integrations to use Together in the builder.
DeepSeek
DeepSeek chat and embedding models. Add your API key in Settings → Integrations to use them in the builder.
Cohere
Command and embed models. Add your API key in Settings → Integrations to use Cohere in the builder.
xAI (Grok)
Grok models. Add your API key in Settings → Integrations to use xAI in the builder.
Fireworks
Fast inference and open models. Add your API key in Settings → Integrations to use Fireworks in the builder.
Google (Gemini)
Gemini models. Add your API key in Settings → Integrations to use Google AI in the builder.
Azure OpenAI
OpenAI models on Azure. Add your endpoint and key in Settings → Integrations to use Azure OpenAI in the builder.
Custom OpenAI-compatible
Any OpenAI-compatible endpoint. Add base URL and API key in Settings → Integrations to use your own inference service.
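OpenAI-compatible services share the same `POST /v1/chat/completions` request shape, so the call your endpoint must accept can be sketched like this. The base URL, key, and model name below are placeholders; nothing is actually sent.

```python
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble the url, headers, and JSON body for an OpenAI-compatible
    /v1/chat/completions call. This only builds the request; it does not
    send it."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Placeholder values; substitute the base URL, key, and model you
# entered in Settings -> Integrations.
url, headers, body = build_chat_request(
    "https://inference.example.com", "YOUR_API_KEY", "my-model", "Hello"
)
```

Because the path and auth header are fixed by the OpenAI convention, any service that accepts this shape should work as a custom provider.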
Perplexity
Web-grounded AI with citations. Add your API key in Settings → Integrations to use Perplexity models in the builder.
Replicate
Run open-source models via API. Add your token in Settings → Integrations to use Replicate's OpenAI-compatible proxy in the builder.
Anyscale
Scalable Ray and LLM endpoints. Add your API key in Settings → Integrations to use Anyscale in the builder.
Hugging Face
Inference API and hosted models. Add your token in Settings → Integrations to use Hugging Face in the builder.
fal.ai
Fast inference and image models. Add your API key in Settings → Integrations to use fal.ai in the builder.
SiliconFlow
Efficient LLM inference and APIs. Add your API key in Settings → Integrations to use SiliconFlow in the builder.
OctoAI
Optimized inference for open and custom models. Add your token in Settings → Integrations to use OctoAI in the builder.
NVIDIA NIM
NVIDIA inference microservices. Add your API key in Settings → Integrations to use NIM models in the builder.
Moonshot AI (Kimi)
Kimi and Moonshot chat models via an OpenAI-compatible API. Add your API key in Settings → Integrations to run Moonshot models in the builder.
Baseten
Deploy and serve open and custom models with OpenAI-compatible inference. Add your API key in Settings → Integrations to use Baseten endpoints in the builder.