POST /v1/chat/completions
curl --request POST \
  --url https://api.evolink.ai/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-5.1",
  "messages": [
    {
      "role": "user",
      "content": "Please introduce yourself"
    }
  ],
  "temperature": 1
}
'
{
  "id": "chatcmpl-abc123",
  "model": "gpt-5.1",
  "object": "chat.completion",
  "created": 1698999496,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hi there! How can I help you?",
        "reasoning_content": "Let me think about this step by step..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 292,
    "total_tokens": 300
  }
}

Authorizations

Authorization
string
header
required

All endpoints require Bearer token authentication.

Get an API key:

Visit the API Key Management page to obtain your API key.

Add it to the request headers:

Authorization: Bearer YOUR_API_KEY
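The header above can be assembled in a few lines of Python. This is a minimal sketch, assuming the standard library only; `YOUR_API_KEY` is a placeholder, and the commented-out call assumes the third-party `requests` library.

```python
import json

# Placeholder -- substitute the real key from the API Key Management page.
API_KEY = "YOUR_API_KEY"

# Every request to the API carries these two headers.
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}

# A minimal request body, mirroring the curl example above.
payload = {
    "model": "gpt-5.1",
    "messages": [{"role": "user", "content": "Please introduce yourself"}],
}

# With the `requests` library installed, the call would look like:
#   import requests
#   r = requests.post("https://api.evolink.ai/v1/chat/completions",
#                     headers=headers, data=json.dumps(payload))
print(headers["Authorization"])  # -> Bearer YOUR_API_KEY
```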

Body

application/json
model
enum<string>
required

Model name for chat completion

  • gpt-5.1: Base model for general tasks
  • gpt-5.1-chat: Optimized for conversational tasks
  • gpt-5.1-thinking: Features reasoning capabilities with thinking process output (returns reasoning_content)
Available options:
gpt-5.1,
gpt-5.1-chat,
gpt-5.1-thinking
Example:

"gpt-5.1"

messages
object[]
required

List of messages for the conversation, supports multi-turn dialogue and multimodal input

Minimum array length: 1
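Multi-turn dialogue is expressed by sending the whole conversation history, oldest message first. A sketch of such an array (the `system` role follows the common chat-message convention and is an assumption, as this page does not enumerate the allowed roles):

```python
# A multi-turn conversation: prior turns are replayed in order,
# and the final user message is the turn to be answered.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is nucleus sampling?"},
    {"role": "assistant", "content": "Sampling from the top-p probability mass."},
    {"role": "user", "content": "And what does temperature do?"},
]

assert len(messages) >= 1  # minimum array length: 1
```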
stream
boolean
default:false

Whether to stream the response

  • true: Stream response, returns content chunk by chunk in real-time
  • false: Wait for complete response and return all at once
Example:

false
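When `stream` is `true`, content arrives chunk by chunk rather than as one JSON object. The sketch below parses a hypothetical streamed body; the exact chunk shape (`choices[0].delta.content`, a `[DONE]` sentinel, `data:`-prefixed lines) follows the common OpenAI-style convention and is an assumption, not something this page specifies.

```python
import json

# Hypothetical sample of a streamed response body.
raw_stream = """\
data: {"choices": [{"delta": {"content": "Hi"}}]}
data: {"choices": [{"delta": {"content": " there!"}}]}
data: [DONE]
"""

parts = []
for line in raw_stream.splitlines():
    if not line.startswith("data: "):
        continue                       # skip blank/keep-alive lines
    data = line[len("data: "):]
    if data == "[DONE]":               # end-of-stream sentinel
        break
    chunk = json.loads(data)
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        parts.append(delta["content"]) # accumulate partial content

text = "".join(parts)
print(text)  # -> Hi there!
```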

max_tokens
integer

Maximum number of tokens to generate in the response

Required range: x >= 1
Example:

2000

temperature
number
default:1

Sampling temperature, controls randomness of output

  • Lower values (e.g., 0.2): More deterministic and focused output
  • Higher values (e.g., 1.5): More random and creative output
Required range: 0 <= x <= 2
Example:

1

top_p
number
default:1

Nucleus sampling parameter

  • Restricts sampling to the smallest set of tokens whose cumulative probability reaches top_p
  • For example, 0.9 means only tokens within the top 90% of cumulative probability are considered
Required range: 0 <= x <= 1
Example:

0.9

frequency_penalty
number
default:0

Frequency penalty, number between -2.0 and 2.0

  • Positive values penalize new tokens based on their existing frequency in the text so far, reducing verbatim repetition
Required range: -2 <= x <= 2
Example:

0

presence_penalty
number
default:0

Presence penalty, number between -2.0 and 2.0

  • Positive values penalize tokens that have already appeared in the text so far, encouraging the model to introduce new topics
Required range: -2 <= x <= 2
Example:

0

stop

Stop sequences; generation halts as soon as any of these sequences is produced

tools
object[]

List of tools for Function Calling
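A sketch of a `tools` array for Function Calling. The `{"type": "function", "function": {...}}` shape with a JSON Schema `parameters` object follows the widely used convention and is an assumption here, since this page does not spell out the tool schema; `get_weather` is a hypothetical function name.

```python
# One tool: a hypothetical weather lookup the model may choose to call.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {  # JSON Schema describing the arguments
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                },
                "required": ["city"],
            },
        },
    }
]

# Attached to a request body alongside model and messages.
payload = {
    "model": "gpt-5.1",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
}
```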

Response

Chat completion successful

id
string

Unique identifier for the chat completion

Example:

"chatcmpl-abc123"

model
string

The model used for completion

Example:

"gpt-5.1"

object
enum<string>

Response type

Available options:
chat.completion
Example:

"chat.completion"

created
integer

Unix timestamp when the completion was created

Example:

1698999496

choices
object[]

List of completion choices

usage
object

Token usage statistics
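The usage object's accounting is straightforward: `total_tokens` is the sum of prompt and completion tokens, as in the sample response above.

```python
# Values taken from the sample response at the top of this page.
usage = {"prompt_tokens": 8, "completion_tokens": 292, "total_tokens": 300}

# total_tokens = prompt_tokens + completion_tokens
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```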