MCP (Model Context Protocol) is an open protocol introduced by Anthropic that allows AI models (such as Claude, GPT, etc.) to call external tools through standardized interfaces. With the Luma MCP Server provided by AceData Cloud, you can generate AI videos directly from AI clients such as Claude Desktop, VS Code, and Cursor.

Feature Overview

Luma MCP Server provides the following core functionalities:
  • Text to Video Generation — Generate high-quality videos from text prompts
  • Image to Video Generation — Generate videos starting or ending with images
  • Video Continuation — Continue generating from the last frame of an existing video
  • Multiple Aspect Ratios — Supports various ratios such as 16:9, 9:16, 1:1, etc.
  • Visual Enhancement — Optional visual quality enhancement feature
  • Task Querying — Monitor generation progress and obtain results

Prerequisites

Before use, you need to obtain an AceData Cloud API Token:
  1. Register or log in to the AceData Cloud platform
  2. Go to the Luma Videos API page
  3. Click “Acquire” to get the API Token (first-time applicants receive free credits)

Installation Configuration

Method 1: Install via pip

pip install mcp-luma

Method 2: Source Installation

git clone https://github.com/AceDataCloud/MCPLuma.git
cd MCPLuma
pip install -e .
Once installed, you can use the mcp-luma command to start the service.

Using in Claude Desktop

Edit the Claude Desktop configuration file:
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
Add the following configuration:
{
  "mcpServers": {
    "luma": {
      "command": "mcp-luma",
      "env": {
        "ACEDATACLOUD_API_TOKEN": "your API Token"
      }
    }
  }
}
If using uvx (no need to install the package in advance):
{
  "mcpServers": {
    "luma": {
      "command": "uvx",
      "args": ["mcp-luma"],
      "env": {
        "ACEDATACLOUD_API_TOKEN": "your API Token"
      }
    }
  }
}
After saving the configuration, restart Claude Desktop to use Luma-related tools in the conversation.
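If you prefer to script the step above instead of editing JSON by hand, the merge can be sketched in Python. This is a minimal sketch, not part of the official tooling: the config paths come from the list above, while the helper names (`claude_config_path`, `add_luma_server`) are hypothetical.

```python
import os
import platform
from pathlib import Path


def claude_config_path() -> Path:
    """Resolve the Claude Desktop config path for the current OS."""
    if platform.system() == "Darwin":
        return Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"
    # Windows: %APPDATA%\Claude\claude_desktop_config.json
    return Path(os.environ.get("APPDATA", "")) / "Claude" / "claude_desktop_config.json"


def add_luma_server(config: dict, token: str) -> dict:
    """Merge the luma entry into an existing config without clobbering other servers."""
    servers = config.setdefault("mcpServers", {})
    servers["luma"] = {
        "command": "mcp-luma",
        "env": {"ACEDATACLOUD_API_TOKEN": token},
    }
    return config
```

You would then load the file with `json.loads`, pass the result through `add_luma_server`, and write it back with `json.dumps(config, indent=2)` before restarting Claude Desktop.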

Using in VS Code / Cursor

Create .vscode/mcp.json in the project root directory:
{
  "servers": {
    "luma": {
      "command": "mcp-luma",
      "env": {
        "ACEDATACLOUD_API_TOKEN": "your API Token"
      }
    }
  }
}
Or use uvx:
{
  "servers": {
    "luma": {
      "command": "uvx",
      "args": ["mcp-luma"],
      "env": {
        "ACEDATACLOUD_API_TOKEN": "your API Token"
      }
    }
  }
}

Available Tools List

  • luma_generate_video — Generate a video from text prompts
  • luma_generate_video_from_image — Generate a video from an image
  • luma_extend_video — Continue an existing video
  • luma_extend_video_from_url — Continue from a video specified by URL
  • luma_get_task — Query the status of a single task
  • luma_get_tasks_batch — Query the status of multiple tasks in batch
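Under the hood, an MCP client invokes each of these tools with a JSON-RPC 2.0 `tools/call` request, as defined by the MCP specification. The sketch below builds such a request for `luma_generate_video` (a real tool name from the table); the argument names `prompt` and `aspect_ratio` are assumptions for illustration, not confirmed parameters of the Luma MCP Server.

```python
import json


def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request as an MCP client would send it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Hypothetical arguments -- parameter names are illustrative only.
request = make_tool_call(1, "luma_generate_video", {
    "prompt": "a sunset by the sea",
    "aspect_ratio": "16:9",
})
```

In practice you never construct these messages yourself: the AI client translates your natural-language request into the appropriate tool call.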

Usage Examples

After configuration, you can invoke these tools from the AI client using natural language, for example:
  • “Help me generate a video of a sunset by the sea”
  • “Use this photo as the first frame to generate a 5-second video”
  • “Continue this video and extend it further”
  • “Generate a vertical video with a 9:16 aspect ratio”

More Information