Documentation Index
Fetch the complete documentation index at: https://docs.evolink.ai/llms.txt
Use this file to discover all available pages before exploring further.
Quick Integration
This guide helps you complete your first EvoLink request in a few minutes. Multimodal workloads use asynchronous tasks; text models use a synchronous Messages API for chat and coding tools.
Image Generation
Create an image generation task with GPT Image 2 and query the result through the task API.
Video Generation
Create text-to-video, image-to-video, and reference-to-video tasks with Seedance 2.0.
Text Generation
Use the Claude Messages API to receive synchronous text responses.
Prerequisites
Create an API Key
Open the EvoLink dashboard, create an API Key, and store it securely.
Choose a Base URL
Use https://api.evolink.ai for image, video, audio, and other multimodal tasks. Use https://direct.evolink.ai for text models.
Request Flow
Multimodal tasks use the same flow:
1. Submit Task
Call an image, video, or audio endpoint and receive a task ID in the response id field.
2. Query Status
Use GET /v1/tasks/{task_id} to check whether the task is pending, processing, completed, or failed.
3. Get Results
When the task completes, read the generated file URL from the results field.
Generated image and video URLs usually expire. In production, download and store completed results in your own storage as soon as possible.
Image Generation
Create an image generation task with GPT Image 2. When the task completes, the generated image URLs appear in the results array.
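A minimal sketch of task creation using only the Python standard library. The endpoint path /v1/images/generations and the model identifier "gpt-image-2" are assumptions, not confirmed by this page; check the Image API reference for the exact values and parameters.

```python
import json
import urllib.request

BASE_URL = "https://api.evolink.ai"

def build_image_payload(prompt: str) -> dict:
    # "gpt-image-2" is an assumed model identifier; see the Image API
    # reference for the exact value and the full parameter list.
    return {"model": "gpt-image-2", "prompt": prompt}

def create_image_task(prompt: str, api_key: str) -> str:
    # The endpoint path below is an assumption; only the query endpoint
    # GET /v1/tasks/{task_id} is documented on this page.
    req = urllib.request.Request(
        f"{BASE_URL}/v1/images/generations",
        data=json.dumps(build_image_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]  # the task ID from the response id field
```

Poll GET /v1/tasks/{task_id} with the returned ID until the task completes, then read the file URLs from the results field.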
Video Generation
Create a text-to-video task with Seedance 2.0.
Text Generation
Claude text models should use https://direct.evolink.ai with the /v1/messages endpoint:
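A sketch of a synchronous Messages call, assuming Bearer authentication and an illustrative model name — EvoLink may instead expect an x-api-key header as Anthropic's own API does, and you should pick a Claude model your account has access to:

```python
import json
import urllib.request

TEXT_BASE_URL = "https://direct.evolink.ai"

def build_messages_payload(prompt: str) -> dict:
    # The model name is illustrative; max_tokens is required by the
    # Messages API.
    return {
        "model": "claude-sonnet-4-5",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_message(prompt: str, api_key: str) -> str:
    # Bearer auth is an assumption; check your dashboard docs for the
    # expected header.
    req = urllib.request.Request(
        f"{TEXT_BASE_URL}/v1/messages",
        data=json.dumps(build_messages_payload(prompt)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Messages responses return content as a list of blocks; take the
    # first text block.
    return body["content"][0]["text"]
```

Unlike the multimodal endpoints, this call returns the generated text directly — there is no task ID to poll.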
Python Example
This example submits an image task, polls status, and reads the final result:
Rate Limits
EvoLink request rate limits are configured per model. RPM, concurrency, and task-submission limits vary by model; the actual limits depend on model type, upstream service capacity, account tier, and real-time availability. Lightweight text models usually support higher request rates, while image and video generation models may have lower limits because their tasks run longer and consume more upstream resources.
For asynchronous generation models, a successful API response only means the task has been accepted or created; it does not mean the task has completed. For high-concurrency workloads, implement a server-side queue and retrieve final results through the task query API or callbacks.
If you repeatedly receive HTTP 429 errors, or your workload requires higher RPM or concurrency limits, contact [email protected]. We will evaluate the request based on your use case and upstream capacity.
Production Recommendations
Key Management
Store API Keys in environment variables or a secrets manager, and use separate keys for different environments.
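For example, a loader that fails fast when the key is missing — the variable name EVOLINK_API_KEY is a naming choice for this sketch, not an EvoLink convention:

```python
import os

def load_api_key(env=os.environ) -> str:
    # Refuse to start without a key rather than sending unauthenticated
    # requests; use a separate variable (and key) per environment.
    key = env.get("EVOLINK_API_KEY")
    if not key:
        raise RuntimeError("EVOLINK_API_KEY is not set")
    return key
```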
Task Polling
Set polling intervals based on task type. Image tasks can be polled more frequently; video tasks should usually be polled less often.
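One way to encode this, with illustrative base intervals (starting points to tune, not documented EvoLink limits) and exponential backoff so long-running tasks are not polled aggressively:

```python
def poll_interval(task_type: str, attempt: int, cap: float = 60.0) -> float:
    # Images tend to finish in seconds, videos in minutes; these base
    # intervals are assumptions, not EvoLink guidance.
    base = {"image": 2.0, "video": 10.0}.get(task_type, 5.0)
    # Double the wait on each attempt, up to the cap.
    return min(base * (2 ** attempt), cap)
```

With these defaults, a video task is first checked after 10 s, then 20 s, then 40 s, and every 60 s thereafter.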
Error Handling
Handle failed task states and HTTP errors, including rate limits, insufficient balance, and parameter errors.
Result Storage
Generated result URLs expire. In production, download and store completed files in your own storage system.
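A sketch of downloading a completed result to local disk before its URL expires; in production you would more likely stream it into object storage. The filename helper is a convenience for this sketch, not part of the API:

```python
import urllib.request
from pathlib import Path

def filename_from_url(url: str) -> str:
    # Derive a local name from the URL path, ignoring query-string
    # parameters such as expiring signatures.
    name = url.split("?")[0].rstrip("/").split("/")[-1]
    return name or "result.bin"

def download_result(url: str, dest_dir: str = "results") -> Path:
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    path = dest / filename_from_url(url)
    # Fetch immediately: signed result URLs usually expire.
    with urllib.request.urlopen(url) as resp, open(path, "wb") as f:
        f.write(resp.read())
    return path
```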
Next Steps
Image API
View GPT Image 2 parameters, examples, and response structure.
Video API
View Seedance 2.0 text-to-video, image-to-video, and reference-to-video capabilities.
Task Management
View task status queries, result fields, and error structure.