POST /v1/messages
Create a Message
curl --request POST \
  --url https://api.evolink.ai/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "claude-sonnet-4-5-20250929",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "Hello, world"
    }
  ]
}'
{
  "model": "claude-sonnet-4-5-20250929",
  "id": "msg_bdrk_017XLrAa77zWvfBGQ6ESvrxz",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "# Hey there! 👋\n\nHow's it going? What can I help you with today?"
    }
  ],
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {
    "input_tokens": 8,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0,
    "output_tokens": 24
  }
}
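The curl example above can be mirrored in Python with only the standard library. The endpoint URL, headers, and body fields are taken from this page; replace YOUR_API_KEY with a real key before sending.

```python
import json
import urllib.request


def build_request(api_key: str, model: str, max_tokens: int, messages: list) -> urllib.request.Request:
    """Assemble a POST request for /v1/messages."""
    body = json.dumps({
        "model": model,
        "max_tokens": max_tokens,
        "messages": messages,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.evolink.ai/v1/messages",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_request(
    "YOUR_API_KEY",
    "claude-sonnet-4-5-20250929",
    1024,
    [{"role": "user", "content": "Hello, world"}],
)
# To actually send the request (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     message = json.load(resp)
```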

Authorizations

Authorization
string
header
required

All APIs require Bearer Token authentication

Get an API key:

Visit the API Key Management page to obtain your API key.

Add it to the request header:

Authorization: Bearer YOUR_API_KEY

Body

application/json
model
enum<string>
required

The model that will complete your prompt.

Available options:
claude-haiku-4-5-20251001,
claude-sonnet-4-5-20250929,
claude-opus-4-1-20250805,
claude-sonnet-4-20250514
Examples:

"claude-sonnet-4-5-20250929"

messages
InputMessage · object[]
required

Input messages.

Our models are trained to operate on alternating user and assistant conversational turns. When creating a new Message, you specify the prior conversational turns with the messages parameter, and the model then generates the next Message in the conversation. Consecutive user or assistant turns in your request will be combined into a single turn.

Each input message must be an object with a role and content. You can specify a single user-role message, or you can include multiple user and assistant messages.
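The turn-combining rule described above can be illustrated locally: consecutive messages with the same role collapse into one turn. The server does this automatically; this helper is only a sketch of the behavior for simple string content.

```python
def merge_turns(messages):
    """Combine consecutive same-role messages into a single turn."""
    merged = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Same role as the previous turn: join the text content.
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged


history = [
    {"role": "user", "content": "Hello"},
    {"role": "user", "content": "Are you there?"},   # consecutive user turn
    {"role": "assistant", "content": "Yes, I'm here."},
]
combined = merge_turns(history)
# combined now has two turns: one user, one assistant
```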

max_tokens
integer
required

The maximum number of tokens to generate before stopping.

Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

Required range: x >= 1
Examples:

1024

container

Container identifier to reuse across requests, or container parameters specifying which skills to load.

context_management
object | null

Context management configuration.

mcp_servers
RequestMCPServerURLDefinition · object[]

MCP servers to be used in this request.

Maximum length: 20
metadata
object

An object describing metadata about the request.

service_tier
enum<string>

Determines whether to use priority capacity (if available) or standard capacity for this request.

Available options:
auto,
standard_only
stop_sequences
string[]

Custom text sequences that will cause the model to stop generating.

stream
boolean

Whether to incrementally stream the response using server-sent events.
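With stream set to true, the response arrives as server-sent events rather than a single JSON body. The minimal parser below splits a raw SSE stream into (event, data) pairs; the sample event names (content_block_delta, message_stop) are an assumption about the stream shape, following the Anthropic Messages streaming format.

```python
import json


def parse_sse(raw: str):
    """Split a raw SSE stream into (event_name, parsed_data) pairs."""
    events = []
    event_name = None
    for line in raw.splitlines():
        if line.startswith("event:"):
            event_name = line[len("event:"):].strip()
        elif line.startswith("data:"):
            events.append((event_name, json.loads(line[len("data:"):].strip())))
    return events


sample = (
    "event: content_block_delta\n"
    'data: {"type": "content_block_delta", "delta": {"type": "text_delta", "text": "Hi"}}\n'
    "\n"
    "event: message_stop\n"
    'data: {"type": "message_stop"}\n'
)
events = parse_sse(sample)
```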

system

System prompt.

Examples:

"Today's date is 2024-06-01."

temperature
number

Amount of randomness injected into the response.

Defaults to 1.0. Ranges from 0.0 to 1.0.

Required range: 0 <= x <= 1
Examples:

1

thinking
object

Configuration for enabling Claude's extended thinking.

  • Enabled
  • Disabled
tool_choice
object

How the model should use the provided tools. Defaults to auto, in which the model decides for itself whether to use tools.

  • Auto
  • Any
  • Tool
  • None
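The four tool_choice variants listed above, written out as request values. These shapes follow the Anthropic Messages API convention (an object with a "type" field); treat them as an assumption if this gateway diverges. The "get_weather" name is hypothetical.

```python
tool_choice_auto = {"type": "auto"}                         # model decides (default)
tool_choice_any = {"type": "any"}                           # model must use some tool
tool_choice_tool = {"type": "tool", "name": "get_weather"}  # model must use this tool
tool_choice_none = {"type": "none"}                         # no tool use

# As it would appear in a request body:
payload_fragment = {"tool_choice": tool_choice_tool}
```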
tools
Tools · array

Definitions of tools that the model may use.

  • Custom tool
  • Bash tool (2024-10-22)
  • Bash tool (2025-01-24)
  • Code execution tool (2025-05-22)
  • CodeExecutionTool_20250825
  • Computer use tool (2024-10-22)
  • MemoryTool_20250818
  • Computer use tool (2025-01-24)
  • Text editor tool (2024-10-22)
  • Text editor tool (2025-01-24)
  • Text editor tool (2025-04-29)
  • TextEditor_20250728
  • Web search tool (2025-03-05)
  • WebFetchTool_20250910
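A custom tool definition, as it would appear in the tools array. The name and description below are made up for illustration; input_schema is standard JSON Schema, per the Anthropic custom-tool format.

```python
get_weather_tool = {
    "name": "get_weather",  # hypothetical tool name
    "description": "Get the current weather for a given city.",
    "input_schema": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}

# As it would appear in a request body:
tools = [get_weather_tool]
```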
top_k
integer

Only sample from the top K options for each subsequent token.

Required range: x >= 0
Examples:

5

top_p
number

Use nucleus sampling.

Required range: 0 <= x <= 1
Examples:

0.7

Response

Message object

id
string
required

Unique object identifier.

The format and length of IDs may change over time.

Examples:

"msg_013Zva2CMHLNnXjNJJKqJ2EF"

type
string
required

Object type.

For Messages, this is always "message".

Allowed value: "message"
role
string
required

Conversational role of the generated message.

This will always be "assistant".

Allowed value: "assistant"
content
Content · array
required

Content generated by the model.

This is an array of content blocks, each of which has a type that determines its shape.

  • Text
  • Thinking
  • Redacted thinking
  • Tool use
  • Server tool use
  • Web search tool result
  • ResponseWebFetchToolResultBlock
  • Code execution tool result
  • ResponseBashCodeExecutionToolResultBlock
  • ResponseTextEditorCodeExecutionToolResultBlock
  • MCP tool use
  • MCP tool result
  • Container upload
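Since the content array mixes block types, a client usually wants to pull out just the text. This helper collects text blocks from a Message-shaped dict; the sample response follows the example at the top of this page, with a hypothetical tool_use block mixed in.

```python
def extract_text(message: dict) -> str:
    """Concatenate the text of all text-type content blocks."""
    return "".join(
        block["text"]
        for block in message.get("content", [])
        if block.get("type") == "text"
    )


response = {
    "type": "message",
    "role": "assistant",
    "content": [
        {"type": "text", "text": "Hello"},
        {"type": "tool_use", "id": "toolu_01", "name": "get_weather", "input": {"city": "Paris"}},
        {"type": "text", "text": " world"},
    ],
}
text = extract_text(response)  # "Hello world"
```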
model
enum<string>
required

The model that handled the request.

Available options:
claude-haiku-4-5-20251001,
claude-sonnet-4-5-20250929,
claude-opus-4-1-20250805,
claude-sonnet-4-20250514
Examples:

"claude-sonnet-4-5-20250929"

stop_reason
enum<string> | null
required

The reason the model stopped generating.

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use,
pause_turn,
refusal,
model_context_window_exceeded
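Each stop_reason listed above calls for different client handling. A rough dispatch follows; the mapping is a suggestion, not behavior mandated by the API.

```python
def next_action(stop_reason):
    """Map a stop_reason to a suggested client-side next step."""
    if stop_reason in ("end_turn", "stop_sequence"):
        return "done"       # model finished naturally
    if stop_reason in ("max_tokens", "model_context_window_exceeded"):
        return "truncated"  # raise max_tokens or trim the conversation
    if stop_reason == "tool_use":
        return "run_tools"  # execute the requested tools, send tool_result back
    if stop_reason == "pause_turn":
        return "resume"     # re-send the conversation so the model can continue
    if stop_reason == "refusal":
        return "refused"
    return "unknown"
```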
stop_sequence
string | null
required

Which custom stop sequence was generated, if any.

usage
object
required

Billing and rate-limit usage.

context_management
object | null

Context management response.

container
object | null

Information about the container used in this request (for the code execution tool).