Motivd

AI

View AI usage and performance for your app.

The builder uses AI models for chat, code generation, and edits. Usage is tied to your account; you can connect your own API keys in Settings so the builder uses your quota and preferred providers.

Configure API keys and model providers in Settings → Integrations. Your keys are stored securely and used only for your sessions.

Open Settings → Integrations

Related connectors

OpenAI

GPT and embedding models. Add your API key in Settings → Integrations so the builder uses your quota and preferred models.

View connector

Anthropic

Claude models. Add your API key in Settings → Integrations to use Claude in the builder.

View connector

Groq

Fast inference. Add your API key in Settings → Integrations to use Groq models in the builder.

View connector

OpenRouter

Many models through one API. Add your key in Settings → Integrations to use OpenRouter in the builder, or set the OPENROUTER_API_KEY environment variable on the server so Motivd can fall back to OpenRouter after Groq (default model: xiaomi/mimo-v2-pro).
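A minimal deployment sketch of the server-side fallback described above. The variable name and default model come from this page; the key value is a placeholder, and how (or whether) the default model can be overridden is not specified here:

```shell
# Server environment for a Motivd deployment (key value is a placeholder).
# With this set, the builder can fall back to OpenRouter after Groq,
# using the documented default model xiaomi/mimo-v2-pro.
export OPENROUTER_API_KEY="sk-or-v1-your-key-here"
```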

View connector

Mistral

Mistral and Mixtral models. Add your API key in Settings → Integrations to use them in the builder.

View connector

Together

Open and custom models. Add your API key in Settings → Integrations to use Together in the builder.

View connector

DeepSeek

DeepSeek chat and embedding models. Add your API key in Settings → Integrations to use them in the builder.

View connector

Cohere

Command and embed models. Add your API key in Settings → Integrations to use Cohere in the builder.

View connector

xAI (Grok)

Grok models. Add your API key in Settings → Integrations to use xAI in the builder.

View connector

Fireworks

Fast inference and open models. Add your API key in Settings → Integrations to use Fireworks in the builder.

View connector

Google (Gemini)

Gemini models. Add your API key in Settings → Integrations to use Google AI in the builder.

View connector

Azure OpenAI

OpenAI models on Azure. Add your endpoint and key in Settings → Integrations to use Azure OpenAI in the builder.

View connector

Custom OpenAI-compatible

Any OpenAI-compatible endpoint. Add base URL and API key in Settings → Integrations to use your own inference service.
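To make "OpenAI-compatible" concrete, here is a sketch of the request shape such an endpoint must accept: a POST to the /v1/chat/completions route with a bearer token and a JSON body in the OpenAI Chat Completions wire format. The base URL, key, and model name below are placeholders, and this builds the request without sending it:

```python
import json

def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Build (url, headers, body) for an OpenAI-compatible chat completion call.

    Any service exposing the /v1/chat/completions route is addressed the same
    way; only the base URL, API key, and model name differ per provider.
    """
    url = f"{base_url.rstrip('/')}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

# Placeholder values only -- not a real endpoint, key, or model:
url, headers, body = build_chat_request(
    "https://my-inference.example.com", "sk-placeholder", "my-model", "Hello"
)
```

This is exactly why one base URL plus one key is enough to configure the connector: the route and payload are fixed by the format, so the builder only needs to know where to send them.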

View connector

Perplexity

Web-grounded AI with citations. Add your API key in Settings → Integrations to use Perplexity models in the builder.

View connector

Replicate

Run open-source models via API. Add your token in Settings → Integrations to use Replicate's OpenAI-compatible proxy in the builder.

View connector

Anyscale

Scalable Ray and LLM endpoints. Add your API key in Settings → Integrations to use Anyscale in the builder.

View connector

Hugging Face

Inference API and hosted models. Add your token in Settings → Integrations to use Hugging Face in the builder.

View connector

fal.ai

Fast inference and image models. Add your API key in Settings → Integrations to use fal.ai in the builder.

View connector

SiliconFlow

Efficient LLM inference and APIs. Add your API key in Settings → Integrations to use SiliconFlow in the builder.

View connector

OctoAI

Optimized inference for open and custom models. Add your token in Settings → Integrations to use OctoAI in the builder.

View connector

NVIDIA NIM

NVIDIA inference microservices. Add your API key in Settings → Integrations to use NIM models in the builder.

View connector

Moonshot AI (Kimi)

Kimi and Moonshot chat models via an OpenAI-compatible API. Add your API key in Settings → Integrations to run Moonshot models in the builder.

View connector

Baseten

Deploy and serve open and custom models with OpenAI-compatible inference. Add your API key in Settings → Integrations to use Baseten endpoints in the builder.

View connector