POST /v1/chat/completions
curl --request POST \
  --url https://api.evolink.ai/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-5.2",
  "messages": [
    {
      "role": "user",
      "content": "Please introduce yourself"
    }
  ]
}
'
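For reference, the same request can be sketched in Python using only the standard library. The endpoint URL and headers are taken from the curl example above; the function names and the key placeholder are our own.

```python
import json
import urllib.request

API_URL = "https://api.evolink.ai/v1/chat/completions"

def build_payload(prompt: str, model: str = "gpt-5.2") -> dict:
    """Assemble the JSON body for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def chat(prompt: str, api_key: str) -> dict:
    """POST the payload with Bearer auth and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))
```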
{
  "id": "chatcmpl-20251010015944503180122WJNB8Eid",
  "model": "gpt-5.2",
  "object": "chat.completion",
  "created": 1760032810,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm GPT-5.2, with enhanced reasoning and understanding capabilities. I excel at handling complex problems, multi-step reasoning, and code generation.\n\nKey features include:\n- Stronger logical reasoning\n- Better context understanding\n- More accurate code generation"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 1891,
    "total_tokens": 1904,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "text_tokens": 13,
      "audio_tokens": 0,
      "image_tokens": 0
    },
    "completion_tokens_details": {
      "text_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 1480
    },
    "input_tokens": 0,
    "output_tokens": 0,
    "input_tokens_details": null
  }
}
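A small helper, as a sketch, for pulling the assistant text and finish reason out of the response shape shown above; the function name is our own.

```python
def extract_reply(completion: dict) -> tuple:
    """Return (assistant text, finish_reason) from a non-streaming completion."""
    choice = completion["choices"][0]
    return choice["message"]["content"], choice["finish_reason"]
```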

Authorizations

Authorization
string
header
required

All endpoints require Bearer Token authentication.

Get an API Key:

Visit the API Key Management page to get your API Key.

Add it to the request headers:

Authorization: Bearer YOUR_API_KEY

Body

application/json
model
enum<string>
default:gpt-5.2
required

Model name for chat completion

Available options:
gpt-5.2
Example:

"gpt-5.2"

messages
object[]
required

List of messages for the conversation; supports multi-turn dialogue and multimodal input (text, images)

Minimum array length: 1
stream
boolean

Whether to stream the response

  • true: Stream response, returns content chunk by chunk in real-time
  • false: Wait for complete response and return all at once
Example:

false
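When stream is true, OpenAI-compatible APIs typically deliver the response as server-sent events: `data: {...}` chunks terminated by `data: [DONE]`. The docs above do not show the exact chunk shape, so the parser below is a sketch under that assumption.

```python
import json

def parse_sse_line(line: str):
    """Extract the content delta from one SSE line, or None.

    Assumes OpenAI-style streaming chunks ("data: {...}" lines,
    ending with "data: [DONE]"); this format is an assumption,
    not taken from the docs above.
    """
    if not line.startswith("data: "):
        return None  # comments, keep-alives, blank lines
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        return None  # end-of-stream sentinel
    chunk = json.loads(payload)
    return chunk["choices"][0].get("delta", {}).get("content")
```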

temperature
number

Sampling temperature; controls the randomness of the output

Notes:

  • Lower values (e.g., 0.2): More deterministic and focused output
  • Higher values (e.g., 1.5): More random and creative output
Required range: 0 <= x <= 2
Example:

0.7

top_p
number

Nucleus sampling parameter

Notes:

  • Restricts sampling to the smallest set of tokens whose cumulative probability reaches top_p
  • For example, 0.9 means sampling only from the tokens that make up the top 90% of probability mass
  • Default: 1.0 (considers all tokens)

Recommendation: Do not adjust both temperature and top_p simultaneously

Required range: 0 <= x <= 1
Example:

0.9

top_k
integer

Top-K sampling parameter

Notes:

  • For example, 10 means only the 10 most probable tokens are considered at each sampling step
  • Smaller values make output more focused
  • Default: unlimited
Required range: x >= 1
Example:

40
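The three sampling parameters above can be validated client-side before sending a request. The helper below is a sketch that enforces the documented ranges and, following the recommendation above, refuses to set temperature and top_p together (the API itself may accept both); the function name is our own.

```python
def sampling_params(temperature=None, top_p=None, top_k=None) -> dict:
    """Build the optional sampling fields, enforcing the documented ranges."""
    if temperature is not None and top_p is not None:
        raise ValueError("adjust either temperature or top_p, not both")
    params = {}
    if temperature is not None:
        if not 0 <= temperature <= 2:
            raise ValueError("temperature must be in [0, 2]")
        params["temperature"] = temperature
    if top_p is not None:
        if not 0 <= top_p <= 1:
            raise ValueError("top_p must be in [0, 1]")
        params["top_p"] = top_p
    if top_k is not None:
        if top_k < 1:
            raise ValueError("top_k must be >= 1")
        params["top_k"] = top_k
    return params
```

These fields merge into the request body alongside model and messages, e.g. `payload.update(sampling_params(temperature=0.7))`.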

Response

Chat completion successful

id
string

Unique identifier for the chat completion

Example:

"chatcmpl-20251010015944503180122WJNB8Eid"

model
string

The model used for completion

Example:

"gpt-5.2"

object
enum<string>

Response type

Available options:
chat.completion
Example:

"chat.completion"

created
integer

Unix timestamp when the completion was created

Example:

1760032810

choices
object[]

List of completion choices

usage
object

Token usage statistics
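The counters satisfy a simple invariant: prompt_tokens + completion_tokens == total_tokens (13 + 1891 = 1904 in the example response above). In that example, reasoning_tokens (1480) appear to be counted inside completion_tokens rather than on top of them. A minimal consistency check:

```python
def check_usage(usage: dict) -> bool:
    """Verify prompt_tokens + completion_tokens == total_tokens."""
    return (
        usage["prompt_tokens"] + usage["completion_tokens"]
        == usage["total_tokens"]
    )
```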