POST /v1/chat/completions
curl --request POST \
  --url https://api.evolink.ai/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "deepseek-chat",
  "messages": [
    {
      "role": "user",
      "content": "Tell me about yourself"
    }
  ]
}
'
{
  "id": "930c60df-bf64-41c9-a88e-3ec75f81e00e",
  "model": "deepseek-chat",
  "object": "chat.completion",
  "created": 1770617860,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm DeepSeek, a powerful AI assistant. I excel at general conversation, code generation, mathematical reasoning and many other tasks.",
        "reasoning_content": "Let me analyze this problem...",
        "tool_calls": [
          {
            "id": "<string>",
            "type": "function",
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            }
          }
        ]
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 16,
    "completion_tokens": 10,
    "total_tokens": 26,
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 16
  },
  "system_fingerprint": "fp_eaab8d114b_prod0820_fp8_kvcache"
}

Authorizations

Authorization
string
header
required

All APIs require Bearer Token authentication

Get API Key:

Visit the API Key Management Page to get your API Key

Add to request header:

Authorization: Bearer YOUR_API_KEY
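The curl example above can be expressed in Python. A minimal sketch using only the standard library; the endpoint URL is taken from the example request, and `chat_completion` is an illustrative helper, not part of any SDK:

```python
import json
import urllib.request

API_URL = "https://api.evolink.ai/v1/chat/completions"

def auth_headers(api_key: str) -> dict:
    """Build the required headers: Bearer token auth plus JSON content type."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

def chat_completion(api_key: str, payload: dict) -> dict:
    """Send a non-streaming chat completion request (performs a network call)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers=auth_headers(api_key),
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```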

Body

application/json
model
enum<string>
default:deepseek-chat
required

Chat model name

  • deepseek-chat: General conversation model
  • deepseek-reasoner: Deep reasoning model, excels at math, coding and complex logical reasoning

Note: deepseek-reasoner does not support the temperature, top_p, tools, tool_choice, or response_format parameters. Requests that include them will be rejected upstream

Available options:
deepseek-chat,
deepseek-reasoner
Example:

"deepseek-chat"

messages
(System Message · object | User Message · object | Assistant Message · object | Tool Message · object)[]
required

List of conversation messages; supports multi-turn conversation

Each role has a different field structure; select the corresponding role to view its schema

Minimum array length: 1
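A typical multi-turn `messages` array mixes roles; a sketch (the conversation content is illustrative):

```python
# A multi-turn conversation: system prompt, then alternating user/assistant turns.
messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "What is nucleus sampling?"},
    {"role": "assistant", "content": "Sampling from the smallest token set whose cumulative probability reaches top_p."},
    {"role": "user", "content": "And what does temperature do?"},
]

payload = {"model": "deepseek-chat", "messages": messages}
```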
thinking
object

Thinking mode control (Beta)

Details:

  • Controls the deep thinking feature of deepseek-reasoner model
  • When enabled, the model will perform deep reasoning before responding
frequency_penalty
number
default:0

Frequency penalty parameter to reduce repetitive content

Details:

  • Positive values penalize tokens based on their frequency in the generated text
  • Higher values make the model less likely to repeat existing content
  • Default: 0 (no penalty)
Required range: -2 <= x <= 2
Example:

0

max_tokens
integer

Maximum number of tokens to generate

Details:

  • The model will stop generating when this limit is reached
  • If not set, the model decides the generation length
Required range: x >= 1
Example:

4096

presence_penalty
number
default:0

Presence penalty parameter to encourage new topics

Details:

  • Positive values penalize tokens based on whether they have appeared in the text
  • Higher values encourage discussing new topics
  • Default: 0 (no penalty)
Required range: -2 <= x <= 2
Example:

0

response_format
object

Specify response format

Details:

  • Set to {"type": "json_object"} to enable JSON mode
  • In JSON mode, the model will output valid JSON content
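Enabling JSON mode and consuming the result might look like the sketch below; the response content shown is illustrative, but since JSON mode guarantees valid JSON, `json.loads` on the message content is safe:

```python
import json

payload = {
    "model": "deepseek-chat",
    "messages": [
        # In JSON mode it is good practice for the prompt itself to ask for JSON.
        {"role": "user", "content": "Return a JSON object with keys 'city' and 'country' for Paris."}
    ],
    "response_format": {"type": "json_object"},
}

# Illustrative content a JSON-mode response might carry:
content = '{"city": "Paris", "country": "France"}'
data = json.loads(content)  # JSON mode guarantees this parses
```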
stop

Stop sequences. The model stops generating when it encounters any of these strings

Details:

  • Can be a single string or an array of strings
  • Maximum 16 stop sequences
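Since `stop` accepts either a single string or an array capped at 16 entries, a client might normalize it before sending; a sketch (`normalize_stop` is a hypothetical helper):

```python
def normalize_stop(stop):
    """Accept a single string or a list of strings; enforce the 16-sequence maximum."""
    if isinstance(stop, str):
        stop = [stop]
    if len(stop) > 16:
        raise ValueError("at most 16 stop sequences are allowed")
    return stop
```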
stream
boolean
default:false

Whether to stream the response

  • true: Stream via SSE (Server-Sent Events), returning content in real-time chunks
  • false: Wait for the complete response before returning
Example:

false

stream_options
object

Streaming response options

Only effective when stream=true
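With stream=true, content arrives as SSE lines of the form `data: {json chunk}`, terminated by `data: [DONE]`. A parsing sketch, assuming the OpenAI-compatible chunk shape where incremental text sits in `choices[0].delta.content` (the raw lines below are illustrative):

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line into a chunk dict; return None for [DONE] or non-data lines."""
    if not line.startswith("data: "):
        return None
    body = line[len("data: "):].strip()
    if body == "[DONE]":
        return None
    return json.loads(body)

# Illustrative chunks in the OpenAI-compatible streaming shape:
raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(
    chunk["choices"][0]["delta"].get("content", "")
    for line in raw
    if (chunk := parse_sse_line(line)) is not None
)
```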

temperature
number
default:1

Sampling temperature, controls output randomness

Details:

  • Lower values (e.g. 0.2): More deterministic, focused output
  • Higher values (e.g. 1.5): More random, creative output
  • Default: 1
Required range: 0 <= x <= 2
Example:

1

top_p
number
default:1

Nucleus sampling parameter

Details:

  • Restricts sampling to the smallest set of tokens whose cumulative probability reaches the threshold
  • For example, 0.9 means sampling only from the tokens that make up the top 90% of probability mass
  • Default: 1.0 (consider all tokens)

Tip: Avoid adjusting both temperature and top_p simultaneously

Required range: 0 <= x <= 1
Example:

1

tools
object[]

Tool definition list for Function Calling

Details:

  • Maximum 128 tool definitions
  • Each tool requires a name, description and parameter schema
Maximum array length: 128
tool_choice

Controls tool calling behavior

Options:

  • none: Do not call any tools
  • auto: Model decides whether to call tools
  • required: Force the model to call one or more tools

Default: none when no tools provided, auto when tools are provided

Available options:
none,
auto,
required
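A tool definition in the Function Calling format, together with the documented tool_choice default (none without tools, auto with tools), can be sketched as follows; the `get_weather` tool and the `default_tool_choice` helper are hypothetical:

```python
# One tool definition with the required name, description, and parameter schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool name
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

def default_tool_choice(tools):
    """Documented default: 'none' when no tools are provided, 'auto' otherwise."""
    return "auto" if tools else "none"
```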
logprobs
boolean
default:false

Whether to return token log probabilities

Details:

  • When set to true, the response will include log probability information for each token
top_logprobs
integer

Return log probabilities of the top N most likely tokens

Details:

  • Requires logprobs to be set to true
  • Range: [0, 20]
Required range: 0 <= x <= 20
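The two constraints on this pair of parameters (top_logprobs requires logprobs=true, and must lie in [0, 20]) can be checked client-side; a sketch with a hypothetical validation helper:

```python
def validate_logprob_params(logprobs: bool, top_logprobs=None):
    """top_logprobs requires logprobs=true and must lie in [0, 20]."""
    if top_logprobs is not None:
        if not logprobs:
            raise ValueError("top_logprobs requires logprobs=true")
        if not 0 <= top_logprobs <= 20:
            raise ValueError("top_logprobs must be in [0, 20]")
```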

Response

Chat completion generated successfully

id
string

Unique identifier for the chat completion

Example:

"930c60df-bf64-41c9-a88e-3ec75f81e00e"

model
string

Actual model name used

Example:

"deepseek-chat"

object
enum<string>

Response type

Available options:
chat.completion
Example:

"chat.completion"

created
integer

Creation timestamp (Unix epoch seconds)

Example:

1770617860

choices
object[]

List of chat completion choices

usage
object

Token usage statistics
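The usage fields satisfy two identities: cache hits and misses partition the prompt tokens, and total tokens equals prompt plus completion. Both hold for the sample response above:

```python
# Usage values from the sample response above.
usage = {
    "prompt_tokens": 16,
    "completion_tokens": 10,
    "total_tokens": 26,
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 16,
}

# Cache hits and misses partition the prompt tokens...
assert usage["prompt_cache_hit_tokens"] + usage["prompt_cache_miss_tokens"] == usage["prompt_tokens"]
# ...and total tokens is prompt plus completion.
assert usage["prompt_tokens"] + usage["completion_tokens"] == usage["total_tokens"]
```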

system_fingerprint
string

System fingerprint identifier

Example:

"fp_eaab8d114b_prod0820_fp8_kvcache"