# EvoLink.AI

## Docs

- [Get Credits Usage](https://docs.evolink.ai/en/api-manual/account-management/get-credits.md): Query the current user and token credit balance and usage information
- [Qwen Voice Design](https://docs.evolink.ai/en/api-manual/audio-series/qwen-tts/qwen-voice-design.md): Create a custom voice profile from a text description and receive the voice name and a preview audio clip - [Qwen3 TTS VD](/en/api-manual/audio-series/qwen-tts/qwen3-tts-vd) speech synthesis **must use a voice created by this API** — system built-in voices are not supported - Asynchronous processi…
- [Qwen3 TTS VD](https://docs.evolink.ai/en/api-manual/audio-series/qwen-tts/qwen3-tts-vd.md): Convert text to speech audio; **must use a custom voice created with [Qwen Voice Design](/en/api-manual/audio-series/qwen-tts/qwen-voice-design)** — system built-in voices are not supported - Workflow: call `qwen-voice-design` to create a voice → obtain the `voice` name → pass it to the `voice` pa…
- [Suno Music Generation Beta](https://docs.evolink.ai/en/api-manual/audio-series/suno/suno-music-generation.md): Suno AI music generation model, supports generating complete music based on text descriptions or lyrics - Supports custom mode (fine control over style, title, lyrics) and simple mode (AI auto-generation) - Supports [Persona](/en/api-manual/audio-series/suno/suno-persona-creation) for reusable voc…
- [Suno Persona Creation Beta](https://docs.evolink.ai/en/api-manual/audio-series/suno/suno-persona-creation.md): Extract reusable Persona (vocal/style characteristics) from completed Suno music generation tasks - After successful creation, a `persona_id` is returned, which can be applied in subsequent [Suno Music Generation](/en/api-manual/audio-series/suno/suno-music-generation) via `persona_id` and `person…
- [Base64 File Upload](https://docs.evolink.ai/en/api-manual/file-series/upload-base64.md): Supports Base64 encoding and Data URL formats - Automatically identifies file types and categorizes storage - Returns accessible file URLs and download links - Files will expire after 72 hours - Current user quota is limited. Uploads will fail when quota is exhausted. Please save locally if persis…
- [File Stream Upload](https://docs.evolink.ai/en/api-manual/file-series/upload-stream.md): Upload files using multipart/form-data format - Supports both underscore and camelCase parameter naming - Suitable for uploading local files directly - Files will expire after 72 hours - Current user quota is limited. Uploads will fail when quota is exhausted. Please save locally if persistent sto…
- [URL File Upload](https://docs.evolink.ai/en/api-manual/file-series/upload-url.md): Upload by providing a remote file URL - System will automatically download and store the remote file - Suitable for migrating files from other servers - Files will expire after 72 hours - Current user quota is limited. Uploads will fail when quota is exhausted. Please save locally if persistent st…
- [GPT-4O Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/gpt-4o/gpt-4o-image-generation.md): GPT-4o (gpt-4o) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [GPT Image 1.5 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/gpt-image-1.5/gpt-image-1.5-image-generation.md): GPT Image 1.5 (gpt-image-1.5) model supports text-to-image, image-to-image, and image editing modes - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [GPT Image 1.5 Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/gpt-image-1.5/gpt-image-1.5-lite-image-generation.md): GPT Image 1.5 (gpt-image-1.5-beta) model supports text-to-image, image-to-image, and image editing modes - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promp…
- [GPT Image 1 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/gpt-image-1/gpt-image-1-image-generation.md): GPT Image 1 (gpt-image-1) model supports text-to-image, image-to-image, and image editing modes - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [GPT Image 2 Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/gpt-image-2/gpt-image-2-beta-image-generation.md): GPT Image 2 (gpt-image-2-beta) model supports text-to-image, image-to-image, and image editing modes - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [GPT Image 2 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/gpt-image-2/gpt-image-2-image-generation.md): GPT Image 2 (gpt-image-2) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them prom…
- [Midjourney V7 Prompt Parameter Guide](https://docs.evolink.ai/en/api-manual/image-series/midjourney/midjourney-v7-prompt-guide.md): All available parameters for the Midjourney V7 model in prompts, including value ranges, defaults, dependencies, and conflicts
- [Midjourney V7 Canvas Edit](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-edit.md): Reposition generated images on the canvas and fill blank areas with AI - Suitable for adjusting composition, expanding scenes, etc. - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail)
- [Midjourney V7 Enhance](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-enhance.md)
- [Midjourney V7 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-image-generate.md): Midjourney V7 model supports generating high-quality images via natural language prompts, 4 images per generation - Supports text-to-image and image-to-image (reference image URLs in prompt) - Supports all V7 native parameter syntax (e.g. --ar, --s, --c), see [Prompt Parameter Guide](/en/api-manua…
- [Midjourney V7 Inpaint](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-inpaint.md)
- [Midjourney V7 Outpaint](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-outpaint.md)
- [Midjourney V7 Pan](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-pan.md)
- [Midjourney V7 Remix](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-remix.md): Re-create previously generated images with new prompts - Change content or style while preserving the original image structure - Difference from variation: remix requires a prompt to reinterpret the original - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-ma…
- [Midjourney V7 Remove Background](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-remove-bg.md): Automatically remove image background and generate transparent images - The simplest model, only requires one input image - Does not depend on a source task, directly pass in image URL - Does not support prompt, speed or other parameters - Async processing mode, use the returned task ID to [query…
- [Midjourney V7 Retexture](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-retexture.md): Change image texture and style while preserving original structure - Does not depend on a source task, directly pass in image URL - Retextured tasks only support upscale as follow-up - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail)
- [Midjourney V7 Upload Paint](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-upload-paint.md): Upload images for advanced canvas editing, supporting mask area specification and position adjustment - Similar to mj-v7-edit, but does not depend on existing tasks, directly pass in images - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-…
- [Midjourney V7 Upscale](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-upscale.md): Upscale generated images to higher resolution - Supports two modes: standard and creative - Already upscaled images cannot be upscaled again (will return 403 error) - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail)
- [Midjourney V7 Variation](https://docs.evolink.ai/en/api-manual/image-series/midjourney/mj-v7-variation.md): Based on completed mj-v7 series tasks, generate style variants with differences for the specified image - Supports optional prompt; if provided, content is modified along with the variation - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-…
- [Nanobanana-2 Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/nanobanana/nanobanana-2-beta-image-generate.md): Nano Banana 2 Beta (nano-banana-2-beta) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please…
- [Nanobanana 2 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/nanobanana/nanobanana-2-image-generate.md): Nano Banana 2 (gemini-3.1-flash-image-preview) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours,…
- [Nanobanana Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/nanobanana/nanobanana-image-generate.md): Nano Banana (nano-banana-beta) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them…
- [Nanobanana Pro Image Generation Beta](https://docs.evolink.ai/en/api-manual/image-series/nanobanana/nanobanana-pro-beta-image-generate.md): Nano Banana Pro Beta (nano-banana-pro-beta) model supports text-to-image, image-to-image, image editing and other generation modes, cost-effective - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid f…
- [Nanobanana Pro Image Generation](https://docs.evolink.ai/en/api-manual/image-series/nanobanana/nanobanana-pro-image-generate.md): Nano Banana Pro (gemini-3-pro-image-preview) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, pl…
- [Qwen Image Edit](https://docs.evolink.ai/en/api-manual/image-series/qwen/qwen-image-edit.md): Qwen (qwen-image-edit) model supports image editing, image-to-image and other modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [Qwen Image Edit Plus](https://docs.evolink.ai/en/api-manual/image-series/qwen/qwen-image-edit-plus.md): Qwen (qwen-image-edit-plus) model supports image editing, image-to-image and other modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [Seedream-4.0 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/seedream/seedream-4.0-image-generate.md): Seedream 4.0 (doubao-seedream-4.0) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save…
- [Seedream 4.5 Image Generation](https://docs.evolink.ai/en/api-manual/image-series/seedream/seedream-4.5-image-generate.md): Seedream 4.5 (doubao-seedream-4.5) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save…
- [Seedream 5.0 Lite Image Generation](https://docs.evolink.ai/en/api-manual/image-series/seedream/seedream-5.0-lite-image-generate.md): Seedream 5.0 Lite (doubao-seedream-5.0-lite) model supports text-to-image, image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, pl…
- [Wan2.5 Image to Image](https://docs.evolink.ai/en/api-manual/image-series/wan2.5/wan2.5-image-to-image.md): WAN2.5 (wan2.5-image-to-image) model supports image-to-image, image editing and other generation modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [Wan2.5 Text to Image](https://docs.evolink.ai/en/api-manual/image-series/wan2.5/wan2.5-text-to-image.md): WAN2.5 (wan2.5-text-to-image) model supports text-to-image generation mode - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [Z Image Turbo Image Generation](https://docs.evolink.ai/en/api-manual/image-series/z-image-turbo/z-image-turbo-image-generate.md): Z Image Turbo is an ultra-fast text-to-image generation model with exceptional quality - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated image links are valid for 24 hours, please save them promptly
- [Claude - Messages API](https://docs.evolink.ai/en/api-manual/language-series/claude/claude-messages-api.md): Send a structured list of input messages with text and/or image content, and the model will generate the next message in the conversation - The Messages API can be used for either single queries or stateless multi-turn conversations
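Every asynchronous endpoint above follows the same pattern: the create call returns a task ID, and the client polls the task-detail endpoint until the task reaches a terminal state (remembering that result links expire, e.g. generated image URLs after 24 hours). The following is a minimal sketch of that polling loop; the fetch function, status values, and timeouts are assumptions for illustration, not the documented API shape — substitute your actual task-detail request and the statuses from the task-management pages.

```python
import time
from typing import Callable

def poll_task(get_task_detail: Callable[[str], dict], task_id: str,
              interval_s: float = 2.0, timeout_s: float = 300.0) -> dict:
    """Poll an async task until it reaches a terminal status.

    `get_task_detail` is any callable that fetches the task record for
    `task_id` — e.g. a thin wrapper around the get-task-detail endpoint
    (the "completed"/"failed" status names here are assumptions).
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        task = get_task_detail(task_id)
        if task.get("status") in ("completed", "failed"):
            return task  # terminal state: stop polling
        time.sleep(interval_s)  # still processing: wait and retry
    raise TimeoutError(f"task {task_id} did not finish within {timeout_s}s")
```

Because the fetcher is injected, the same loop works unchanged for image, audio, and video tasks; only the wrapped HTTP call differs.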
- [DeepSeek V4 - OpenAI-Compatible API](https://docs.evolink.ai/en/api-manual/language-series/deepseek-v4/deepseek-v4-chat.md): Call the DeepSeek V4 model using the OpenAI Chat Completions protocol - Supports two models: `deepseek-v4-flash` (fast general-purpose) and `deepseek-v4-pro` (deep reasoning) - **Plain text conversation**: Single- or multi-turn contextual dialogue with 1M ultra-long context - **System prompts**: C…
- [DeepSeek V4 - Anthropic-Compatible API](https://docs.evolink.ai/en/api-manual/language-series/deepseek-v4/deepseek-v4-messages.md): Invoke DeepSeek V4 models using the Anthropic Messages protocol - Supports `deepseek-v4-flash` / `deepseek-v4-pro` - Request / response structures aligned with the Anthropic API - **Plain text conversation** (image / document content types are not yet supported) - **System prompts**: Passed via th…
- [DeepSeek - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/deepseek/deepseek-reference.md): Call DeepSeek models using OpenAI SDK format - Synchronous processing mode, real-time response - Supports `deepseek-chat` (general conversation) and `deepseek-reasoner` (deep reasoning) models - **Text Chat**: Single or multi-turn contextual conversation - **System Prompts**: Customize AI role and…
- [Doubao Seed 2.0 - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-quickstart.md): Call Doubao Seed 2.0 series models using OpenAI SDK format - Synchronous processing mode, returns chat content in real time - Minimal parameters for quick start - Supported models: `doubao-seed-2.0-pro`, `doubao-seed-2.0-lite`, `doubao-seed-2.0-mini`, `doubao-seed-2.0-code` - 💡 Need more featur…
- [Doubao Seed 2.0 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-reference.md): Call Doubao Seed 2.0 series models using OpenAI SDK format - Synchronous processing mode, real-time response - **Text Chat**: Single or multi-turn contextual conversation - **System Prompts**: Customize AI role and behavior - **Multimodal Input**: Supports text + image + video mixed input - **Deep…
- [Doubao Seed 2.0 Responses API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-responses-quickstart.md): Use Responses API format to call Doubao Seed 2.0 series models - Supports server-side context storage, enabling multi-turn conversations via `previous_response_id` - Minimal parameters, quick start - Supported models: `doubao-seed-2.0-pro`, `doubao-seed-2.0-lite`, `doubao-seed-2.0-mini`, `doubao-s…
- [Doubao Seed 2.0 Responses API - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-responses-reference.md): Use Responses API format to call Doubao Seed 2.0 series models - **Server-side context storage**: Implement multi-turn conversations via `previous_response_id`, no need to manually pass conversation history - **Multimodal input**: Supports text + image + video + file (PDF) mixed input - **Deep thi…
- [EvoLink Auto - Claude Format](https://docs.evolink.ai/en/api-manual/language-series/evolink-auto/evolink-auto-claude.md): Intelligent routing using Anthropic Messages API format
- [EvoLink Auto - Gemini Format](https://docs.evolink.ai/en/api-manual/language-series/evolink-auto/evolink-auto-gemini.md): Intelligent routing using Google Generative AI format
- [EvoLink Auto - Smart Model Routing](https://docs.evolink.ai/en/api-manual/language-series/evolink-auto/evolink-auto-quickstart.md): The system automatically selects the most suitable model to process the request
- [EvoLink Moderation 1.0 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/evolink-moderation-1.0/evolink-moderation-1.0-api.md): Synchronous endpoint that detects harmful content across 13 dimensions for the input **text** and/or **image** - The response returns a unified summary via the `evolink_summary` field with `risk_level` / `flagged` / `violations` / `max_score` / `max_category`
- [Gemini 2.5 Flash Lite - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash-lite/native-api/native-api-quickstart.md): Call Gemini-2.5-flash-lite model using Google Native API format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 2.5 Flash Lite - Native API - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash-lite/native-api/native-api-reference.md): Call gemini-2.5-flash-lite model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal inpu…
- [Gemini 2.5 Flash Lite - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash-lite/openai-sdk/openai-sdk-quickstart.md): Call gemini-2.5-flash-lite model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 2.5 Flash Lite - OpenAI SDK - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash-lite/openai-sdk/openai-sdk-reference.md): Call gemini-2.5-flash-lite model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI…
- [Gemini 2.5 Flash - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash/native-api/native-api-quickstart.md): Call Gemini-2.5-flash model using Google Native API format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 2.5 Flash - Native API - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash/native-api/native-api-reference.md): Call gemini-2.5-flash model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal input**:…
- [Gemini 2.5 Flash - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash/openai-sdk/openai-sdk-quickstart.md): Call gemini-2.5-flash model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 2.5 Flash - OpenAI SDK - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-flash/openai-sdk/openai-sdk-reference.md): Call gemini-2.5-flash model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI role…
- [Gemini 2.5 Pro - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-pro/native-api/native-api-quickstart.md): Call Gemini-2.5-pro model using Google Native API format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 2.5 Pro - Native API - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-pro/native-api/native-api-reference.md): Call Gemini-2.5-pro model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal input**: Su…
- [Gemini 2.5 Pro - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-pro/openai-sdk/openai-sdk-quickstart.md): Call Gemini-2.5-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 2.5 Pro - OpenAI SDK - API Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-2.5-pro/openai-sdk/openai-sdk-reference.md): Call Gemini-2.5-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI role an…
- [Gemini 3.0 Flash - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-flash/native-api/native-api-quickstart.md): Use Google Native API format to call gemini-3.0-flash model - Synchronous processing mode, real-time response - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 3.0 Flash - Native API - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-flash/native-api/native-api-reference.md): Call gemini-3-flash-preview model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal inp…
- [Gemini 3.0 Flash - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-flash/openai-sdk/openai-sdk-quickstart.md): Call Gemini-3.0-flash model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 3.0 Flash - OpenAI SDK - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-flash/openai-sdk/openai-sdk-reference.md): Call Gemini-3.0-flash model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI role…
- [Gemini 3.0 Pro - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-pro/native-api/native-api-quickstart.md): Use Google Native API format to call Gemini-3.0-pro model - Synchronous processing mode, real-time response - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 3.0 Pro - Native API - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-pro/native-api/native-api-reference.md): Call Gemini-3-pro-preview model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal input…
- [Gemini 3.0 Pro - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-pro/openai-sdk/openai-sdk-quickstart.md): Call Gemini-3.0-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 3.0 Pro - OpenAI SDK - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.0-pro/openai-sdk/openai-sdk-reference.md): Call Gemini-3.0-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI role an…
- [Gemini 3.1 Flash Lite - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-flash-lite-preview/native-api/native-api-quickstart.md): Use Google Native API format to call Gemini-3.1-flash-lite-preview model - Synchronous processing mode, real-time response - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 3.1 Flash Lite - Native API - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-flash-lite-preview/native-api/native-api-reference.md): Call Gemini-3.1-flash-lite-preview model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimo…
- [Gemini 3.1 Flash Lite - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-flash-lite-preview/openai-sdk/openai-sdk-quickstart.md): Call Gemini-3.1-flash-lite-preview model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 3.1 Flash Lite - OpenAI SDK - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-flash-lite-preview/openai-sdk/openai-sdk-reference.md): Call Gemini-3.1-flash-lite-preview model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Custo…
- [Gemini 3.1 Pro Customtools - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro-preview-customtools/native-api/native-api-quickstart.md): Use Google Native API format to call Gemini-3.1-pro-customtools model - Synchronous processing mode, real-time response - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 3.1 Pro Customtools - Native API - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro-preview-customtools/native-api/native-api-reference.md): Call Gemini-3-pro-preview model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal input…
- [Gemini 3.1 Pro Customtools - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro-preview-customtools/openai-sdk/openai-sdk-quickstart.md): Call Gemini-3.1-pro-customtools model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 3.1 Pro Customtools - OpenAI SDK - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro-preview-customtools/openai-sdk/openai-sdk-reference.md): Call Gemini-3.1-pro-customtools model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customiz…
- [Gemini 3.1 Pro - Native API - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro/native-api/native-api-quickstart.md): Use Google Native API format to call Gemini-3.1-pro model - Synchronous processing mode, real-time response - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./native-api-reference)
- [Gemini 3.1 Pro - Native API - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro/native-api/native-api-reference.md): Call Gemini-3-pro-preview model using Google Native API format - Can use synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **Multimodal input…
- [Gemini 3.1 Pro - OpenAI SDK - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro/openai-sdk/openai-sdk-quickstart.md): Call Gemini-3.1-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - Minimal parameters for quick start - 💡 Need more features? Check [Full API Reference](./openai-sdk-reference)
- [Gemini 3.1 Pro - OpenAI SDK - Full Reference](https://docs.evolink.ai/en/api-manual/language-series/gemini-3.1-pro/openai-sdk/openai-sdk-reference.md): Call Gemini-3.1-pro model using OpenAI SDK format - Synchronous processing mode, returns conversation content in real-time - **Plain text conversation**: Single-turn or multi-turn contextual dialogue, see simple_text and multi_turn examples in code samples - **System prompt**: Customize AI role an…
- [GPT-5.1 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/gpt-5-1/gpt-5-1-api.md): Use OpenAI SDK format to call GPT-5.1 series models - Synchronous processing mode, real-time response - **Available models**: gpt-5.1 (base), gpt-5.1-chat (optimized for conversation), gpt-5.1-thinking (with reasoning output) - **Text conversation**: Single or multi-turn contextual dialogue - **Sy…
- [GPT-5.2 - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.2/gpt-5.2-quickstart.md): Use OpenAI SDK format to call GPT-5.2 model - Synchronous processing mode, real-time response - Minimal parameters, quick start - Need more features? Check out [Complete API Reference](./gpt-5.2-reference)
- [GPT-5.2 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.2/gpt-5.2-reference.md): Use OpenAI SDK format to call GPT-5.2 model - Synchronous processing mode, real-time response - **Text conversation**: Single or multi-turn contextual dialogue - **System prompts**: Customize AI role and behavior - **Multimodal input**: Supports text + image mixed input - Quick start? Check out [Q…
- [GPT-5.4 - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.4/gpt-5.4-quickstart.md): Use OpenAI SDK format to call GPT-5.4 model - Synchronous processing mode, real-time response - Minimal parameters, quick start - Need more features? Check out [Complete API Reference](./gpt-5.4-reference)
- [GPT-5.4 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.4/gpt-5.4-reference.md): Use OpenAI SDK format to call GPT-5.4 model - Synchronous processing mode, real-time response - **Text conversation**: Single or multi-turn contextual dialogue - **System prompts**: Customize AI role and behavior - **Multimodal input**: Supports text + image mixed input - Quick start? Check out [Q…
- [GPT-5.5 - Quick Start](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.5/gpt-5.5-quickstart.md): Use OpenAI SDK format to call GPT-5.5 model - Synchronous processing mode, real-time response - Minimal parameters, quick start - Need more features? Check out [Complete API Reference](./gpt-5.5-reference)
- [GPT-5.5 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/gpt-5.5/gpt-5.5-reference.md): Use OpenAI SDK format to call GPT-5.5 model - Synchronous processing mode, real-time response - **Text conversation**: Single or multi-turn contextual dialogue - **System prompts**: Customize AI role and behavior - **Multimodal input**: Supports text + image mixed input - Quick start?
Check out [Q… - [Kimi K2 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/kimi-k2/Kimi-K2-api.md): - Use OpenAI SDK format to call Kimi-K2 model - Synchronous processing mode, real-time response - **Text conversation**: Single or multi-turn contextual dialogue, see simple_text and multi_turn examples - **System prompts**: Customize AI role and behavior, see system_prompt example - **Multimodal in… - [MiniMax-M2.5 - Complete API Reference](https://docs.evolink.ai/en/api-manual/language-series/minimax-m2.5/minimax-m2.5-api.md): - Use OpenAI SDK format to call MiniMax-M2.5 model - Synchronous processing mode, real-time response - **Text conversation**: Single or multi-turn contextual dialogue - **System prompts**: Customize AI role and behavior - [Error Codes Reference](https://docs.evolink.ai/en/api-manual/task-management/error-codes.md): Complete list of task error codes and troubleshooting guide - [Query Task Status](https://docs.evolink.ai/en/api-manual/task-management/get-task-detail.md): Query the status, progress, and result information of asynchronous tasks by task ID - [Grok Imagine Image to Video Beta](https://docs.evolink.ai/en/api-manual/video-series/grok/grok-imagine-image-to-video.md): - Grok Imagine (grok-imagine-image-to-video-beta) model supports image-to-video mode - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Grok Imagine Text to Video Beta](https://docs.evolink.ai/en/api-manual/video-series/grok/grok-imagine-text-to-video.md): - Grok Imagine (grok-imagine-text-to-video-beta) model supports text-to-video mode - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Hailuo-02 Video 
Generation](https://docs.evolink.ai/en/api-manual/video-series/hailuo/hailuo-02-video-generate.md): - Hailuo 02 (MiniMax-Hailuo-02) supports T2V (Text-to-Video), I2V (Image-to-Video) and FLF (First-Last-Frame) modes - Auto mode detection: 0 images=T2V, 1 image=I2V, 2 images=FLF - Full-featured model, supports 512P resolution (I2V mode only) - Supports 15 camera motion commands like `[Pan left]`, `… - [Hailuo-2.3-Fast Video Generation](https://docs.evolink.ai/en/api-manual/video-series/hailuo/hailuo-2-3-fast-video-generate.md): - Hailuo 2.3 Fast (MiniMax-Hailuo-2.3-Fast) supports I2V (Image-to-Video) mode only - Fastest generation speed, ultimate physics effects - Supports 15 camera motion commands like `[Pan left]`, `[Push in]`, `[Static shot]` - Async processing, use returned task ID to [query status](/en/api-manual/task… - [Hailuo-2.3 Video Generation](https://docs.evolink.ai/en/api-manual/video-series/hailuo/hailuo-2-3-video-generate.md): - Hailuo 2.3 (MiniMax-Hailuo-2.3) supports T2V (Text-to-Video) and I2V (Image-to-Video) modes - Auto mode detection: 0 images=T2V, 1 image=I2V - SOTA instruction following, high-quality output - Supports 15 camera motion commands like `[Pan left]`, `[Push in]`, `[Static shot]` - Async processing, us… - [HappyHorse 1.0 Image-to-Video](https://docs.evolink.ai/en/api-manual/video-series/happyhorse1.0/happyhorse-1.0-image-to-video.md): - HappyHorse 1.0 image-to-video, generates video from a single first-frame image - `prompt` is optional (when omitted, the first frame drives free interpretation) - Output aspect ratio is auto-determined by the first frame; **does not support** `aspect_ratio` - Asynchronous processing mode, use the… - [HappyHorse 1.0 Reference-to-Video](https://docs.evolink.ai/en/api-manual/video-series/happyhorse1.0/happyhorse-1.0-reference-to-video.md): - HappyHorse 1.0 reference-to-video, supports 1–9 reference images plus a text prompt - **Character reference convention**: in `prompt`, use `character1`, 
`character2`, `character3` ... keywords to reference the images in `image_urls` array in order - Asynchronous processing mode, use the returned t… - [HappyHorse 1.0 Text-to-Video](https://docs.evolink.ai/en/api-manual/video-series/happyhorse1.0/happyhorse-1.0-text-to-video.md): - HappyHorse 1.0 text-to-video, generates video from a text prompt only - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [HappyHorse 1.0 Video-Edit](https://docs.evolink.ai/en/api-manual/video-series/happyhorse1.0/happyhorse-1.0-video-edit.md): - HappyHorse 1.0 video edit, takes 1 source video and text instructions to perform style transfer, local replacement, etc. - Optionally accepts 0–5 reference images for style / subject guidance - **Does not support** `duration`: output video duration = `min(input video duration, 15)`, determined by… - [Kling Custom Element](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-custom-element.md): - Kling Custom Element (kling-custom-element) creates reusable subject elements (characters/objects) from reference images or videos - After successful creation, the returned `element_id` can be referenced in Kling O3 series and Kling V3 Image-to-Video via the `element_list` parameter, enabling cons… - [Kling-O1 Image to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o1-image-to-video.md): - Kling-O1 Image to Video (kling-o1-image-to-video) model supports image-to-video generation - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Kling-O1 Video Edit](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o1-video-edit.md): - Kling-O1 Video Edit (kling-o1-video-edit) model supports video editing - Asynchronous processing mode, 
use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Kling-O1 Video Edit (Fast)](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o1-video-edit-fast.md): - Kling-O1 Video Edit Fast (kling-o1-video-edit-fast) model supports fast video editing - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Kling-O3 Image to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o3-image-to-video.md): - Kling-O3 Image to Video (kling-o3-image-to-video) generates videos based on input images, powered by the Kling AI kling-v3-omni model - Supports first frame, last frame, reference images, element control, multi-shot, and sound effects - Asynchronous processing mode, use the returned task ID to [qu… - [Kling-O3 Reference to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o3-reference-to-video.md): - Kling-O3 Reference to Video (kling-o3-reference-to-video) generates new videos based on the style and motion characteristics of a reference video, powered by the Kling AI kling-v3-omni model - The reference video serves as a feature reference (not direct editing), and can be combined with text, re… - [Kling-O3 Text to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o3-text-to-video.md): - Kling-O3 Text to Video (kling-o3-text-to-video) pure text-driven video generation, based on Kling AI kling-v3-omni model - Supports single-shot and multi-shot modes, can generate videos with sound effects - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/tas… - [Kling-O3 Video Edit](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-o3-video-edit.md): - Kling-O3 Video Edit (kling-o3-video-edit) edits and modifies the 
original video, based on the Kling AI kling-v3-omni model - Output video duration and aspect ratio remain consistent with the input video - Supports editing with text instructions, reference images, and elements - Sound generation is… - [Kling-V3 Image to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-v3-image-to-video.md): - Kling-V3 Image to Video (kling-v3-image-to-video) model supports image-to-video generation - Supports first frame, last frame, element control, multi-shot, and sound effects - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) -… - [Kling-V3 Motion Control](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-v3-motion-control.md): - Kling-V3 Motion Control (kling-v3-motion-control) model supports generating motion-driven videos using **reference image + reference video** - The system extracts motion trajectories from the reference video and applies them to the character/object in the reference image, generating a new video wi… - [Kling-V3 Text to Video](https://docs.evolink.ai/en/api-manual/video-series/kling/kling-v3-text-to-video.md): - Kling-V3 Text to Video (kling-v3-text-to-video) model supports text-to-video generation - Supports single-shot and multi-shot modes, capable of generating videos with sound effects - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-de… - [OmniHuman-1.5 Digital Human Video Generation](https://docs.evolink.ai/en/api-manual/video-series/omnihuman/omnihuman-1.5-video-generate.md): - OmniHuman-1.5 (omnihuman-1.5) model generates digital human videos driven by audio - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Seedance-1.0-Pro-Fast Video 
Generation](https://docs.evolink.ai/en/api-manual/video-series/seedance1.0/seedance-1.0-pro-fast-video-generate.md): - Seedance 1.0 Pro Fast (doubao-seedance-1.0-pro-fast) model supports multiple generation modes including text-to-video and image-to-video - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hour… - [Seedance-1.5-Pro Video Generation](https://docs.evolink.ai/en/api-manual/video-series/seedance1.5/seedance-1.5-pro-video-generate.md): - Seedance 1.5 Pro (seedance-1.5-pro) model supports multiple generation modes including text-to-video, image-to-video, and first-last-frame - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 ho… - [Seedance 2.0 Fast Image-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-fast-image-to-video.md): - Input 1 image for first-frame video generation, input 2 images for first-last-frame video generation, the model determines automatically - **Now supports AIGC realistic human materials** - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-t… - [Seedance 2.0 Fast Reference-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-fast-reference-to-video.md): - Input reference images (0–9) + reference videos (0–3) + reference audio (0–3) + text prompt to generate video - Supports various creative scenarios including new generation, video editing, and video extension - **Now supports AIGC realistic human materials** - Asynchronous processing mode, use the… - [Seedance 2.0 Fast Text-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-fast-text-to-video.md): - Generate videos from text prompts, supports web search for enhanced timeliness - **Now supports AIGC realistic 
human materials** - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours… - [Seedance 2.0 Image-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-image-to-video.md): - Input 1 image for first-frame video generation, input 2 images for first-last-frame video generation, the model determines automatically - **Now supports AIGC realistic human materials** - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-t… - [Seedance 2.0 Complete Parameter Guide](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-overview.md): Unified API for all Seedance 2.0 models, select a specific model via the `model` parameter - [Seedance 2.0 Reference-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-reference-to-video.md): - Input reference images (0–9) + reference videos (0–3) + reference audio (0–3) + text prompt to generate video - Supports various creative scenarios including new generation, video editing, and video extension - **Now supports AIGC realistic human materials** - Asynchronous processing mode, use the… - [Seedance 2.0 Text-to-Video](https://docs.evolink.ai/en/api-manual/video-series/seedance2.0/seedance-2.0-text-to-video.md): - Generate videos from text prompts, supports web search for enhanced timeliness - **Now supports AIGC realistic human materials** - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours… - [Sora-2 Video Generation Beta-Max](https://docs.evolink.ai/en/api-manual/video-series/sora2/sora-2-beta-max-video-generate.md): - Sora 2 (sora-2-beta-max) model supports text-to-video, image-to-video and other modes - Asynchronous processing mode, use the returned task ID to 
[query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Sora-2 Video Generation](https://docs.evolink.ai/en/api-manual/video-series/sora2/sora-2-preview-video-generate.md): - Sora 2 Preview (sora-2-preview) supports text-to-video, image-to-video and more - Async processing, use returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save promptly - [Sora Character Generation](https://docs.evolink.ai/en/api-manual/video-series/sora2/sora-character-generate.md): - Sora Character (sora-character) model generates character profiles from videos - Async processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated profile picture links are valid for 24 hours, please save them promptly - [Sora-2 Video Generation Beta](https://docs.evolink.ai/en/api-manual/video-series/sora2/sora2-video-generate.md): - Sora 2 (sora-2-beta) model supports text-to-video, image-to-video and other modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Sora-2-Pro Video Generation](https://docs.evolink.ai/en/api-manual/video-series/sora2pro/sora-2-pro-preview-video-generate.md): - Sora 2 Pro (sora-2-pro-preview) model supports text-to-video, image-to-video and other modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Sora-2-Pro Video Generation Beta](https://docs.evolink.ai/en/api-manual/video-series/sora2pro/sora2pro-video-generate.md): - Sora 2 Pro (sora-2-pro) model supports text-to-video, image-to-video and other modes - Asynchronous processing mode, use the returned task ID to 
[query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Topaz Video Upscale](https://docs.evolink.ai/en/api-manual/video-series/topaz/topaz-video-upscale.md): - Topaz Video Upscale (topaz-video-upscale) model supports AI-powered video super-resolution upscaling - Supports 1x (enhancement only), 2x, and 4x upscale factors - Billing is based on input video duration (per second) and upscale factor - Asynchronous processing mode, use the returned task ID to [… - [Veo3.1-Fast Video Generation](https://docs.evolink.ai/en/api-manual/video-series/veo3.1/veo-3.1-fast-generate-preview-generate.md): - Veo 3.1 Fast Generate Preview supports text-to-video, first-frame image-to-video and more - Async processing, use returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save promptly - [Veo3.1-Pro Video Generation](https://docs.evolink.ai/en/api-manual/video-series/veo3.1/veo-3.1-generate-preview-generate.md): - Veo 3.1 Generate Preview supports text-to-video, first-frame image-to-video and more - Async processing, use returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours - [Veo3.1-Fast Video Generation Beta](https://docs.evolink.ai/en/api-manual/video-series/veo3.1/veo3.1-fast-video-generate.md): - Veo 3.1 Fast (veo-3.1-fast) model supports text-to-video, first-frame image-to-video and other modes - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Veo3.1-Pro Video Generation Beta](https://docs.evolink.ai/en/api-manual/video-series/veo3.1/veo3.1-pro-video-generate.md): - Veo 3.1 Pro (veo3.1-pro-beta) model supports text-to-video, first-and-last-frame image-to-video and other modes - Asynchronous processing 
mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promp… - [VideoRetalk Video Generate](https://docs.evolink.ai/en/api-manual/video-series/videoretalk/videoretalk-video-generate.md): - Audio-driven lip-sync video generation — replaces the lip movements of the person in the video with ones matching the target audio - Asynchronous processing mode; use the returned task ID to [query the result](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24… - [Wan2.5 Image to Video](https://docs.evolink.ai/en/api-manual/video-series/wan2.5/wan2.5-image-to-video.md): - WAN2.5 (wan2.5-image-to-video) model supports first-frame image-to-video mode - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Wan2.5 Text to Video](https://docs.evolink.ai/en/api-manual/video-series/wan2.5/wan2.5-text-to-video.md): - WAN2.5 (wan2.5-text-to-video) model supports text-to-video mode - Asynchronous processing mode, use the returned task ID to [query](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Wan2.6 Image to Video](https://docs.evolink.ai/en/api-manual/video-series/wan2.6/wan2.6-image-to-video.md): - WAN2.6 (wan2.6-image-to-video) model supports first-frame image-to-video generation - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Wan2.6 Image-to-Video Flash](https://docs.evolink.ai/en/api-manual/video-series/wan2.6/wan2.6-image-to-video-flash.md): - WAN2.6 Flash (wan2.6-image-to-video-flash) model supports first-frame image-to-video generation with faster speed - Asynchronous processing 
mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save t… - [Wan2.6 Reference Video](https://docs.evolink.ai/en/api-manual/video-series/wan2.6/wan2.6-reference-video.md): - WAN2.6 (wan2.6-reference-video) model supports reference video-to-video generation - Upload reference videos, the model will extract character appearance and voice to generate new videos - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-t… - [Wan2.6 Reference Video Flash](https://docs.evolink.ai/en/api-manual/video-series/wan2.6/wan2.6-reference-video-flash.md): - WAN2.6 Flash (wan2.6-reference-video-flash) model supports reference video generation with faster speed - Upload reference videos, the model will reference character appearance and voice from the videos to generate new videos - Asynchronous processing mode, use the returned task ID to [query statu… - [Wan2.6 Text to Video](https://docs.evolink.ai/en/api-manual/video-series/wan2.6/wan2.6-text-to-video.md): - WAN2.6 (wan2.6-text-to-video) model supports text-to-video generation - Asynchronous processing mode, use the returned task ID to [query status](/en/api-manual/task-management/get-task-detail) - Generated video links are valid for 24 hours, please save them promptly - [Claude Code CLI](https://docs.evolink.ai/en/integration-guide/claude-code-cli.md): Connect Claude Code CLI to EvoLink.AI - [CodeBuddy / WorkBuddy](https://docs.evolink.ai/en/integration-guide/codebuddy-workbuddy.md): Connect CodeBuddy and WorkBuddy to EvoLink.AI - [Codex CLI](https://docs.evolink.ai/en/integration-guide/codex-cli.md): Connect Codex CLI to EvoLink.AI - [Gemini CLI](https://docs.evolink.ai/en/integration-guide/gemini-cli.md): Connect Gemini CLI to EvoLink.AI - [OpenClaw Manual Installation - Smart Model Routing](https://docs.evolink.ai/en/integration-guide/openclaw.md): Manually install and 
configure OpenClaw Gateway with Smart Model Routing support - [OpenClaw Auto Install](https://docs.evolink.ai/en/integration-guide/openclaw-auto.md): Install and manage OpenClaw instances with OpenClaw Manager - [OpenClaw + Feishu](https://docs.evolink.ai/en/integration-guide/openclaw-feishu.md): Connect OpenClaw to EvoLink.AI via Feishu (Lark) - [OpenClaw + Telegram](https://docs.evolink.ai/en/integration-guide/openclaw-telegram.md): Connect OpenClaw to EvoLink.AI - [OpenCode](https://docs.evolink.ai/en/integration-guide/opencode.md): Connect OpenCode to EvoLink.AI - [EvoLink](https://docs.evolink.ai/en/introduction.md): Enterprise AI gateway platform with unified access to leading image, video, and language models - [Quick Integration](https://docs.evolink.ai/en/quickstart.md): Use EvoLink to call image, video, and text models quickly ## OpenAPI Specs - [kling-v3-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-v3-text-to-video.json) - [kling-v3-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-v3-image-to-video.json) - [kling-o3-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o3-text-to-video.json) - [kling-o3-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o3-image-to-video.json) - [evolink-moderation-1.0-api](https://docs.evolink.ai/ko/api-manual/language-series/evolink-moderation-1.0/evolink-moderation-1.0-api.json) - [happyhorse-1.0-video-edit](https://docs.evolink.ai/ko/api-manual/video-series/happyhorse1.0/happyhorse-1.0-video-edit.json) - [happyhorse-1.0-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/happyhorse1.0/happyhorse-1.0-text-to-video.json) - [happyhorse-1.0-reference-to-video](https://docs.evolink.ai/ko/api-manual/video-series/happyhorse1.0/happyhorse-1.0-reference-to-video.json) - 
[happyhorse-1.0-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/happyhorse1.0/happyhorse-1.0-image-to-video.json) - [kling-v3-motion-control](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-v3-motion-control.json) - [kling-o3-video-edit](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o3-video-edit.json) - [kling-o3-reference-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o3-reference-to-video.json) - [kling-custom-element](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-custom-element.json) - [openai-sdk-reference](https://docs.evolink.ai/ko/api-manual/language-series/gemini-3.1-pro/openai-sdk/openai-sdk-reference.json) - [native-api-reference](https://docs.evolink.ai/ko/api-manual/language-series/gemini-3.1-pro/native-api/native-api-reference.json) - [claude-messages-api](https://docs.evolink.ai/ko/api-manual/language-series/claude/claude-messages-api.json) - [gpt-5.5-reference](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.5/gpt-5.5-reference.json) - [gpt-5.5-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.5/gpt-5.5-quickstart.json) - [gpt-image-2-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-image-2/gpt-image-2-image-generation.json) - [minimax-m2.5-api](https://docs.evolink.ai/ko/api-manual/language-series/minimax-m2.5/minimax-m2.5-api.json) - [Kimi-K2-api](https://docs.evolink.ai/ko/api-manual/language-series/kimi-k2/Kimi-K2-api.json) - [gpt-5.4-reference](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.4/gpt-5.4-reference.json) - [gpt-5.4-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.4/gpt-5.4-quickstart.json) - [gpt-5.2-reference](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.2/gpt-5.2-reference.json) - [gpt-5.2-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5.2/gpt-5.2-quickstart.json) - 
[gpt-5-1-api](https://docs.evolink.ai/ko/api-manual/language-series/gpt-5-1/gpt-5-1-api.json) - [openai-sdk-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/gemini-3.1-pro/openai-sdk/openai-sdk-quickstart.json) - [native-api-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/gemini-3.1-pro/native-api/native-api-quickstart.json) - [evolink-auto-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/evolink-auto/evolink-auto-quickstart.json) - [doubao-seed-2.0-responses-reference](https://docs.evolink.ai/ko/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-responses-reference.json) - [doubao-seed-2.0-responses-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-responses-quickstart.json) - [doubao-seed-2.0-reference](https://docs.evolink.ai/ko/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-reference.json) - [doubao-seed-2.0-quickstart](https://docs.evolink.ai/ko/api-manual/language-series/doubao-seed-2.0/doubao-seed-2.0-quickstart.json) - [deepseek-reference](https://docs.evolink.ai/ko/api-manual/language-series/deepseek/deepseek-reference.json) - [deepseek-v4-messages](https://docs.evolink.ai/ko/api-manual/language-series/deepseek-v4/deepseek-v4-messages.json) - [deepseek-v4-chat](https://docs.evolink.ai/ko/api-manual/language-series/deepseek-v4/deepseek-v4-chat.json) - [gpt-image-2-beta-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-image-2/gpt-image-2-beta-image-generation.json) - [nanobanana-2-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/nanobanana/nanobanana-2-image-generate.json) - [evolink-auto-claude](https://docs.evolink.ai/ko/api-manual/language-series/evolink-auto/evolink-auto-claude.json) - [seedance-2.0-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-text-to-video.json) - 
[seedance-2.0-reference-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-reference-to-video.json) - [seedance-2.0-overview](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-overview.json) - [seedance-2.0-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-image-to-video.json) - [seedance-2.0-fast-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-fast-text-to-video.json) - [seedance-2.0-fast-reference-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-fast-reference-to-video.json) - [seedance-2.0-fast-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/seedance2.0/seedance-2.0-fast-image-to-video.json) - [mj-v7-remove-bg](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-remove-bg.json) - [mj-v7-outpaint](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-outpaint.json) - [mj-v7-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-image-generate.json) - [mj-v7-variation](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-variation.json) - [mj-v7-upscale](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-upscale.json) - [mj-v7-upload-paint](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-upload-paint.json) - [mj-v7-retexture](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-retexture.json) - [mj-v7-remix](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-remix.json) - [mj-v7-pan](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-pan.json) - [mj-v7-inpaint](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-inpaint.json) - [mj-v7-enhance](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-enhance.json) - 
- [mj-v7-edit](https://docs.evolink.ai/ko/api-manual/image-series/midjourney/mj-v7-edit.json)
- [topaz-video-upscale](https://docs.evolink.ai/ko/api-manual/video-series/topaz/topaz-video-upscale.json)
- [veo-3.1-generate-preview-generate](https://docs.evolink.ai/ko/api-manual/video-series/veo3.1/veo-3.1-generate-preview-generate.json)
- [veo-3.1-fast-generate-preview-generate](https://docs.evolink.ai/ko/api-manual/video-series/veo3.1/veo-3.1-fast-generate-preview-generate.json)
- [gpt-image-1.5-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-image-1.5/gpt-image-1.5-image-generation.json)
- [nanobanana-2-beta-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/nanobanana/nanobanana-2-beta-image-generate.json)
- [videoretalk-video-generate](https://docs.evolink.ai/fr/api-manual/video-series/videoretalk/videoretalk-video-generate.json)
- [qwen3-tts-vd](https://docs.evolink.ai/ko/api-manual/audio-series/qwen-tts/qwen3-tts-vd.json)
- [qwen-voice-design](https://docs.evolink.ai/ko/api-manual/audio-series/qwen-tts/qwen-voice-design.json)
- [nanobanana-pro-beta-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/nanobanana/nanobanana-pro-beta-image-generate.json)
- [sora2-video-generate](https://docs.evolink.ai/暂时存档不用/ko/sora2beta/sora2-video-generate.json)
- [veo3.1-pro-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/veo3.1/veo3.1-pro-video-generate.json)
- [veo3.1-fast-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/veo3.1/veo3.1-fast-video-generate.json)
- [sora-2-pro-preview-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/sora2pro/sora-2-pro-preview-video-generate.json)
- [grok-imagine-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/grok/grok-imagine-text-to-video.json)
- [grok-imagine-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/grok/grok-imagine-image-to-video.json)
- [nanobanana-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/nanobanana/nanobanana-image-generate.json)
- [gpt-image-1.5-lite-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-image-1.5/gpt-image-1.5-lite-image-generation.json)
- [gpt-4o-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-4o/gpt-4o-image-generation.json)
- [suno-persona-creation](https://docs.evolink.ai/ko/api-manual/audio-series/suno/suno-persona-creation.json)
- [suno-music-generation](https://docs.evolink.ai/ko/api-manual/audio-series/suno/suno-music-generation.json)
- [seedance-1.5-pro-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/seedance1.5/seedance-1.5-pro-video-generate.json)
- [get-task-detail](https://docs.evolink.ai/ko/api-manual/task-management/get-task-detail.json)
- [evolink-auto-gemini](https://docs.evolink.ai/ko/api-manual/language-series/evolink-auto/evolink-auto-gemini.json)
- [veo3.1-fast-extend-video-extend](https://docs.evolink.ai/暂时存档不用/en/api-manual/video-series/veo3.1/veo3.1-fast-extend-video-extend.json)
- [wan2.6-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/wan2.6/wan2.6-text-to-video.json)
- [wan2.6-reference-video](https://docs.evolink.ai/ko/api-manual/video-series/wan2.6/wan2.6-reference-video.json)
- [wan2.6-reference-video-flash](https://docs.evolink.ai/ko/api-manual/video-series/wan2.6/wan2.6-reference-video-flash.json)
- [wan2.6-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/wan2.6/wan2.6-image-to-video.json)
- [wan2.6-image-to-video-flash](https://docs.evolink.ai/ko/api-manual/video-series/wan2.6/wan2.6-image-to-video-flash.json)
- [wan2.5-text-to-video](https://docs.evolink.ai/ko/api-manual/video-series/wan2.5/wan2.5-text-to-video.json)
- [wan2.5-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/wan2.5/wan2.5-image-to-video.json)
- [sora2pro-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/sora2pro/sora2pro-video-generate.json)
- [sora-character-generate](https://docs.evolink.ai/ko/api-manual/video-series/sora2/sora-character-generate.json)
- [sora-2-preview-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/sora2/sora-2-preview-video-generate.json)
- [sora-2-beta-max-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/sora2/sora-2-beta-max-video-generate.json)
- [seedance-1.0-pro-fast-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/seedance1.0/seedance-1.0-pro-fast-video-generate.json)
- [omnihuman-1.5-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/omnihuman/omnihuman-1.5-video-generate.json)
- [kling-o1-video-edit](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o1-video-edit.json)
- [kling-o1-video-edit-fast](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o1-video-edit-fast.json)
- [kling-o1-image-to-video](https://docs.evolink.ai/ko/api-manual/video-series/kling/kling-o1-image-to-video.json)
- [hailuo-2-3-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/hailuo/hailuo-2-3-video-generate.json)
- [hailuo-2-3-fast-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/hailuo/hailuo-2-3-fast-video-generate.json)
- [hailuo-02-video-generate](https://docs.evolink.ai/ko/api-manual/video-series/hailuo/hailuo-02-video-generate.json)
- [z-image-turbo-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/z-image-turbo/z-image-turbo-image-generate.json)
- [wan2.5-text-to-image](https://docs.evolink.ai/ko/api-manual/image-series/wan2.5/wan2.5-text-to-image.json)
- [wan2.5-image-to-image](https://docs.evolink.ai/ko/api-manual/image-series/wan2.5/wan2.5-image-to-image.json)
- [seedream-5.0-lite-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/seedream/seedream-5.0-lite-image-generate.json)
- [seedream-4.5-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/seedream/seedream-4.5-image-generate.json)
- [seedream-4.0-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/seedream/seedream-4.0-image-generate.json)
- [qwen-image-edit](https://docs.evolink.ai/ko/api-manual/image-series/qwen/qwen-image-edit.json)
- [qwen-image-edit-plus](https://docs.evolink.ai/ko/api-manual/image-series/qwen/qwen-image-edit-plus.json)
- [nanobanana-pro-image-generate](https://docs.evolink.ai/ko/api-manual/image-series/nanobanana/nanobanana-pro-image-generate.json)
- [gpt-image-1-image-generation](https://docs.evolink.ai/ko/api-manual/image-series/gpt-image-1/gpt-image-1-image-generation.json)
- [upload-url](https://docs.evolink.ai/ko/api-manual/file-series/upload-url.json)
- [upload-stream](https://docs.evolink.ai/ko/api-manual/file-series/upload-stream.json)
- [upload-base64](https://docs.evolink.ai/ko/api-manual/file-series/upload-base64.json)
- [get-credits](https://docs.evolink.ai/ko/api-manual/account-management/get-credits.json)
- [sora2remix-video-generate](https://docs.evolink.ai/暂时存档不用/en/sora2remix/sora2remix-video-generate.json)
- [sora2proremix-video-generate](https://docs.evolink.ai/暂时存档不用/en/sora2proremix/sora2proremix-video-generate.json)