POST /v1/messages
Create a Message

Example request:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "Hello, world"
    }
  ]
}
'

Example response (200):

{
  "model": "claude-haiku-4-5-20251001",
  "id": "msg_bdrk_017XLrAa77zWvfBGQ6ESvrxz",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "# Hey there! 👋\n\nHow's it going? What can I help you with today?"
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 8,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "output_tokens": 24
  }
}

Authorizations

Authorization
string
header
required

All APIs require Bearer Token authentication.

Get an API key:

Visit the API Key Management page to get your API key.

Add it to the request header:

Authorization: Bearer YOUR_API_KEY

Body

application/json
model
enum<string>
required

The model that will complete your prompt.

Available options:
claude-haiku-4-5-20251001,
claude-sonnet-4-5-20250929,
claude-opus-4-1-20250805,
claude-sonnet-4-20250514,
claude-opus-4-5-20251101
Example:

"claude-sonnet-4-5-20250929"

messages
InputMessage · object[]
required

Input messages.

Our models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation. Consecutive user or assistant turns in your request will be combined into a single turn.

Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages.
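
For illustration, a request sketch with a multi-turn conversation, where the earlier assistant turn is supplied by the caller (all of the conversation text below is invented for this example):

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello there."},
    {"role": "assistant", "content": "Hi, how can I help you today?"},
    {"role": "user", "content": "Can you explain what an LLM is in plain English?"}
  ]
}
'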

max_tokens
integer
required

The maximum number of tokens to generate before stopping.

Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

Required range: x >= 1
Example:

1024

container

Container parameters with skills to be loaded.

context_management
ContextManagementConfig · object

Context management configuration.

mcp_servers
RequestMCPServerURLDefinition · object[]

MCP servers to be utilized in this request.

Maximum array length: 20
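
A hedged sketch of attaching an MCP server, assuming this field follows the upstream Anthropic MCP connector shape (the url, name, and authorization_token values are placeholders, and the upstream API may additionally gate this feature behind a beta header):

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "mcp_servers": [
    {
      "type": "url",
      "url": "https://example-server.modelcontextprotocol.io/sse",
      "name": "example-mcp",
      "authorization_token": "YOUR_MCP_TOKEN"
    }
  ],
  "messages": [
    {"role": "user", "content": "What tools do you have available?"}
  ]
}
'
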
metadata
Metadata · object

An object describing metadata about the request.

service_tier
enum<string>

Determines whether to use priority capacity (if available) or standard capacity for this request.

Available options:
auto,
standard_only
stop_sequences
string[]

Custom text sequences that will cause the model to stop generating.
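
A sketch of using stop_sequences; the "###" sequence below is an arbitrary illustration:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "stop_sequences": ["###"],
  "messages": [
    {"role": "user", "content": "List three colors, then write ###"}
  ]
}
'

If generation halts on the sequence, the response's stop_reason should be "stop_sequence" and its stop_sequence field should contain the matched text.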

stream
boolean

Whether to incrementally stream the response using server-sent events.
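
A streaming request sketch; curl's --no-buffer flag keeps the events flowing as they arrive:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --no-buffer \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "stream": true,
  "messages": [
    {"role": "user", "content": "Write a haiku about the sea."}
  ]
}
'

With stream set to true, the body arrives as a sequence of server-sent events (in the upstream Messages API these include message_start, content_block_delta, and message_stop) rather than a single JSON object.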

system

System prompt.

Example:

"Today's date is 2024-06-01."

temperature
number

Amount of randomness injected into the response.

Defaults to 1.0. Ranges from 0.0 to 1.0.

Required range: 0 <= x <= 1
Example:

1

thinking
Enabled · object

Configuration for enabling Claude's extended thinking.
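
A sketch of enabling extended thinking, assuming the upstream shape in which the object carries a type of "enabled" and a budget_tokens count:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 16000,
  "thinking": {
    "type": "enabled",
    "budget_tokens": 8000
  },
  "messages": [
    {"role": "user", "content": "Prove that the square root of 2 is irrational."}
  ]
}
'

In the upstream API, max_tokens must exceed budget_tokens, and the response content then begins with one or more thinking blocks ahead of the final text block.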

tool_choice
Auto · object

How the model should use the provided tools.

tools
(Custom tool · object | Bash tool (2024-10-22) · object | Bash tool (2025-01-24) · object | Code execution tool (2025-05-22) · object | CodeExecutionTool_20250825 · object | Computer use tool (2024-01-22) · object | MemoryTool_20250818 · object | Computer use tool (2025-01-24) · object | Text editor tool (2024-10-22) · object | Text editor tool (2025-01-24) · object | Text editor tool (2025-04-29) · object | TextEditor_20250728 · object | Web search tool (2025-03-05) · object | WebFetchTool_20250910 · object)[]

Definitions of tools that the model may use.
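
A sketch of defining a custom tool with an auto tool_choice; the get_weather tool and its schema are invented for this example:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "tool_choice": {"type": "auto"},
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather for a given city.",
      "input_schema": {
        "type": "object",
        "properties": {
          "city": {"type": "string", "description": "City name, e.g. Paris"}
        },
        "required": ["city"]
      }
    }
  ],
  "messages": [
    {"role": "user", "content": "What is the weather in Paris right now?"}
  ]
}
'

If the model decides to call the tool, the response should contain a tool_use content block and a stop_reason of "tool_use"; the caller then runs the tool and sends the result back as a tool_result block in a following user turn.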

top_k
integer

Only sample from the top K options for each subsequent token.

Required range: x >= 0
Example:

5

top_p
number

Use nucleus sampling.

Required range: 0 <= x <= 1
Example:

0.7
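
A sketch of tightening the sampling controls for more deterministic output; the values below are arbitrary, and the upstream guidance is generally to adjust either temperature or top_p rather than both:

curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "temperature": 0.2,
  "top_k": 5,
  "messages": [
    {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
  ]
}
'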

Response

Message object

id
string
required

Unique object identifier.

The format and length of IDs may change over time.

Example:

"msg_013Zva2CMHLNnXjNJJKqJ2EF"

type
string
required

Object type.

For Messages, this is always "message".

Allowed value: "message"
role
string
required

Conversational role of the generated message.

This will always be "assistant".

Allowed value: "assistant"
content
(Text · object | Thinking · object | Redacted thinking · object | Tool use · object | Server tool use · object | Web search tool result · object | ResponseWebFetchToolResultBlock · object | Code execution tool result · object | ResponseBashCodeExecutionToolResultBlock · object | ResponseTextEditorCodeExecutionToolResultBlock · object | MCP tool use · object | MCP tool result · object | Container upload · object)[]
required

Content generated by the model.

This is an array of content blocks, each of which has a type that determines its shape.

model
enum<string>
required

The model that handled the request.

Available options:
claude-haiku-4-5-20251001,
claude-sonnet-4-5-20250929,
claude-opus-4-1-20250805,
claude-sonnet-4-20250514
Example:

"claude-sonnet-4-5-20250929"

stop_reason
enum<string> | null
required

The reason that we stopped.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use,
pause_turn,
refusal,
model_context_window_exceeded
stop_sequence
string | null
required

Which custom stop sequence was generated, if any.

usage
Usage · object
required

Billing and rate-limit usage.
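
A sketch of pulling the usage block out of the response for cost tracking, assuming jq is available locally; the field names follow the example response at the top of this page:

curl --silent --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    {"role": "user", "content": "Hello, world"}
  ]
}
' | jq '{input: .usage.input_tokens, output: .usage.output_tokens, cached: .usage.cache_read_input_tokens}'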

context_management
ResponseContextManagement · object

Context management response.

container
Container · object

Information about the container used in this request.