Documentation Index
Fetch the complete documentation index at: https://docs.evolink.ai/llms.txt
Use this file to discover all available pages before exploring further.
Two endpoints are available. https://direct.evolink.ai has better support for text models and long-lived connections. https://api.evolink.ai is the primary endpoint for multimodal services and serves as a fallback address for text models.

All APIs require Bearer Token authentication.
Get API Key:
Visit the API Key Management page to get your API Key
Add to request header:
Authorization: Bearer YOUR_API_KEY

model
Chat model name. Allowed value: gemini-2.5-flash-lite. Example: "gemini-2.5-flash-lite"

messages
List of chat messages; supports multi-turn dialogue and multimodal input. Minimum length: 1.

stream
Whether to return the response in streaming mode. true: streaming, content is received in real-time chunks. false: the complete response is returned at once. Default: false.
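A minimal request sketch using only what this page documents (the Bearer header format, the model name, and the messages/stream parameters). The /v1/chat/completions path and the exact JSON layout are assumptions based on the OpenAI-compatible convention this API follows:

```python
import json

API_KEY = "YOUR_API_KEY"  # obtained from the API Key Management page

# All APIs require Bearer Token authentication
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# Minimal non-streaming request body
payload = {
    "model": "gemini-2.5-flash-lite",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
    "stream": False,  # default; set True to receive real-time chunks
}

body = json.dumps(payload)
```

This body would then be POSTed to the chosen endpoint (direct.evolink.ai for text models, api.evolink.ai otherwise) with any HTTP client.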
max_completion_tokens
Maximum number of completion tokens for the generated response, corresponding to Gemini's maxOutputTokens. Range: 1 <= x <= 65536. Default: 2000.

max_tokens
Maximum number of tokens for the generated response, compatible with the legacy OpenAI parameter. Range: 1 <= x <= 65536. Default: 2000.
temperature
Sampling temperature; controls output randomness. Range: 0 <= x <= 2. Default: 0.7.

top_p
Nucleus sampling parameter. Recommendation: do not adjust temperature and top_p simultaneously. Range: 0 <= x <= 1. Default: 0.9.
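The documented ranges, and the recommendation to adjust only one of temperature and top_p, can be enforced client-side before a request is sent. A sketch (the helper name is hypothetical):

```python
def apply_sampling(payload: dict, temperature=None, top_p=None) -> dict:
    """Attach sampling parameters to a request payload, enforcing the
    documented ranges (temperature in [0, 2], top_p in [0, 1]).

    Per the recommendation above, only one of the two may be set.
    """
    if temperature is not None and top_p is not None:
        raise ValueError("do not adjust temperature and top_p simultaneously")
    if temperature is not None:
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be in [0, 2]")
        payload["temperature"] = temperature
    if top_p is not None:
        if not 0 <= top_p <= 1:
            raise ValueError("top_p must be in [0, 1]")
        payload["top_p"] = top_p
    return payload
```

Omitting both fields leaves the server-side defaults (0.7 and 0.9) in effect.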
frequency_penalty
Frequency penalty coefficient, corresponding to Gemini's frequencyPenalty. Range: -2 <= x <= 2. Default: 0.

presence_penalty
Presence penalty coefficient, corresponding to Gemini's presencePenalty. Range: -2 <= x <= 2. Default: 0.

stop
Stop sequences. Accepts a string or an array of strings, corresponding to Gemini's stopSequences.

n
Number of generated candidates. Range: x >= 1. Default: 1.
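Because stop accepts either a single string or an array of strings, a client can normalize the value to one shape before building the request. A sketch (the helper name is hypothetical):

```python
def normalize_stop(stop):
    """Return stop sequences as a list of strings.

    Accepts the two documented shapes: a single string, or a list of
    strings (mapped server-side to Gemini's stopSequences).
    """
    if stop is None:
        return []
    if isinstance(stop, str):
        return [stop]
    if isinstance(stop, list) and all(isinstance(s, str) for s in stop):
        return list(stop)
    raise TypeError("stop must be a string or a list of strings")
```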
reasoning_effort
Limits reasoning effort. Gemini 2.5 Flash and Flash Lite support none to disable thinking; low/medium/high map to different reasoning budgets. Allowed values: none, low, medium, high. Default: "medium".

seed
Random seed used to make output as reproducible as possible, corresponding to Gemini's seed. Example: 12345.

logprobs
Whether to return token logprob information, corresponding to Gemini's responseLogprobs. Example: true.

top_logprobs
Number of top logprob values returned for each token, corresponding to Gemini's logprobs. Range: 0 <= x <= 20. Default: 5.
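A request combining these reproducibility and inspection fields might look like this (a sketch; the payload shape follows the OpenAI-compatible parameters described above, and the message content is illustrative):

```python
payload = {
    "model": "gemini-2.5-flash-lite",
    "messages": [{"role": "user", "content": "Explain quicksort briefly."}],
    "reasoning_effort": "none",  # disable thinking on Flash / Flash Lite
    "seed": 12345,               # best-effort reproducibility (Gemini seed)
    "logprobs": True,            # maps to Gemini responseLogprobs
    "top_logprobs": 5,           # per-token top candidates, 0 <= x <= 20
}
```

Note that seed makes output reproducible only "as possible"; identical seeds do not guarantee byte-identical completions.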
response_format
Response format settings, supporting JSON mode and JSON Schema, corresponding to Gemini's responseMimeType, responseSchema, and responseJsonSchema.

stream_options
Streaming response options. Can be set when stream is true.

tools
List of tool definitions for Function Calling.

tool_choice
Controls tool-calling behavior. Allowed values: none, auto, required.

Gemini extension parameters.
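A Function Calling request sketch. The get_weather tool is a made-up example, and the tool-definition shape follows the OpenAI-compatible convention that the tools/tool_choice parameters above mirror:

```python
# Hypothetical tool definition for illustration only
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "gemini-2.5-flash-lite",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # one of: none, auto, required
}
```

With tool_choice set to "auto" the model decides whether to call a tool; "required" forces a tool call and "none" forbids one.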
Chat completion generated successfully.

id
Unique identifier for the chat completion. Example: "chatcmpl-20251010015944503180122WJNB8Eid"

model
Model name actually used. Example: "gemini-2.5-flash-lite"

object
Response type. Value: chat.completion. Example: "chat.completion"

created
Creation timestamp (Unix seconds). Example: 1760032810

choices
List of chat completion choices.

usage
Token usage statistics.
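Given the response fields above, a non-streaming reply can be unpacked like this. The top-level keys and example values come from this page; the sub-fields inside choices and usage follow the OpenAI-compatible shape and are assumptions:

```python
import json

# Sample response body; top-level values mirror the examples on this page
raw = """
{
  "id": "chatcmpl-20251010015944503180122WJNB8Eid",
  "model": "gemini-2.5-flash-lite",
  "object": "chat.completion",
  "created": 1760032810,
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello!"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 1, "completion_tokens": 2, "total_tokens": 3}
}
"""

resp = json.loads(raw)
text = resp["choices"][0]["message"]["content"]  # assistant reply text
used = resp["usage"]["total_tokens"]             # token accounting
```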