## All APIs require Bearer Token authentication
Get an API Key:

1. Visit the API Key Management Page to get your API Key.
2. Add it to the request headers:

   Authorization: Bearer YOUR_API_KEY
## Request parameters

**Chat model name**
- `doubao-seed-2.0-pro`: Flagship, strongest overall capability, ideal for complex reasoning and high-quality generation
- `doubao-seed-2.0-lite`: Lightweight, faster, cost-effective
- `doubao-seed-2.0-mini`: Ultra-fast, quickest response, suitable for simple tasks
- `doubao-seed-2.0-code`: Code-specialized, optimized for code generation and understanding

Allowed values: `doubao-seed-2.0-pro`, `doubao-seed-2.0-lite`, `doubao-seed-2.0-mini`, `doubao-seed-2.0-code`. Example: `"doubao-seed-2.0-pro"`
**List of conversation messages**
Supports multi-turn conversation and multimodal input (text, image, video).
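A minimal request sketch in Python using the authentication header and the two fields above. The endpoint URL is a placeholder, and the JSON field names (`model`, `messages`) and the response shape follow the common OpenAI-compatible convention implied by the descriptions here rather than being confirmed by this page.

```python
import os
import requests

# Placeholder endpoint and key: substitute the provider's real chat-completions URL
# and the API Key obtained from the API Key Management Page.
API_URL = "https://example.com/v1/chat/completions"
API_KEY = os.environ.get("API_KEY", "YOUR_API_KEY")

headers = {
    "Authorization": f"Bearer {API_KEY}",   # Bearer Token authentication
    "Content-Type": "application/json",
}

payload = {
    # Field names follow the common OpenAI-compatible convention (assumed).
    "model": "doubao-seed-2.0-pro",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the benefits of streaming APIs."},
    ],
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
resp.raise_for_status()
# Assumed OpenAI-compatible response shape.
print(resp.json()["choices"][0]["message"]["content"])
```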
**Control whether the model enables deep thinking mode**
Note: support for this field and its default value may vary across models.
**Whether to stream the response content**
- `false`: The model generates all content before returning the result at once
- `true`: Model-generated content is returned incrementally via the SSE protocol, ending with a `data: [DONE]` message. When `stream` is `true`, you can set the `stream_options` field to get token usage statistics.

Default: `false`
**Options for streaming responses**
Can be set when `stream` is `true`.
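A streaming sketch, reusing `API_URL` and `headers` from the example above. The SSE handling (`data:` prefix, `[DONE]` terminator) follows the behavior described for `stream: true`; the `include_usage` shape of `stream_options` is an assumption, since this page only says the field can be set.

```python
import json
import requests

payload = {
    "model": "doubao-seed-2.0-pro",
    "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
    "stream": True,
    # Assumed stream_options shape for requesting token usage statistics.
    "stream_options": {"include_usage": True},
}

with requests.post(API_URL, headers=headers, json=payload, stream=True, timeout=300) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue                      # skip blank keep-alive SSE lines
        data = line[len("data: "):]
        if data == "[DONE]":              # end-of-stream marker described above
            break
        chunk = json.loads(data)
        choices = chunk.get("choices") or []
        delta = choices[0].get("delta", {}) if choices else {}
        print(delta.get("content", ""), end="", flush=True)
```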
**Maximum length of the model response (in tokens)**
Default: `4096`
**Maximum output length of the model, including both the response and chain-of-thought content (in tokens)**
Range: 0 <= x <= 65536. Default: `16384`
**Sampling temperature, controls output randomness**
Range: 0 <= x <= 2. Default: `0.7`
**Nucleus sampling probability threshold**
Range: 0 <= x <= 1. Default: `0.9`
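A sketch combining the length caps and sampling controls above. The field names `max_tokens`, `max_completion_tokens`, `temperature`, and `top_p` are assumptions based on the common OpenAI-style convention; only the ranges and default values come from this page.

```python
payload = {
    "model": "doubao-seed-2.0-lite",
    "messages": [{"role": "user", "content": "Explain nucleus sampling in two sentences."}],
    # Assumed field names for the two length caps described above:
    "max_tokens": 1024,              # cap on the response length (page lists 4096)
    "max_completion_tokens": 8192,   # cap on response plus chain-of-thought (page lists 16384, max 65536)
    "temperature": 0.7,              # 0 to 2, default 0.7
    "top_p": 0.9,                    # 0 to 1, default 0.9
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
print(resp.json()["choices"][0]["message"]["content"])
```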
**Stop strings**
The model stops generating when it encounters any string specified in the `stop` field. The stop string itself is not included in the output. Up to 4 strings are supported.
Note: deep thinking models do not support this field.
Example: `["hello", "weather"]`

**Limits the amount of thinking effort**
Reducing thinking depth can improve speed and consume fewer tokens.
- `minimal`: Disable thinking, answer directly
- `low`: Lightweight thinking, prioritizes quick responses
- `medium`: Balanced mode, balances speed and depth
- `high`: Deep analysis, handles complex problems

Allowed values: `minimal`, `low`, `medium`, `high`. Default: `"medium"`
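Two payload fragments for the fields above. The name `reasoning_effort` for the thinking-effort control is an assumption; `stop` is the field name given in the description, and it is shown in a separate request because deep thinking models do not support it.

```python
# Quick, shallow-thinking request (assumed field name "reasoning_effort"):
payload_fast = {
    "model": "doubao-seed-2.0-pro",
    "messages": [{"role": "user", "content": "List three capital cities."}],
    "reasoning_effort": "minimal",   # minimal | low | medium | high (default "medium")
}

# Stop sequences (not supported by deep thinking models, per the note above):
payload_with_stop = {
    "model": "doubao-seed-2.0-mini",
    "messages": [{"role": "user", "content": "Say hello, then describe the weather."}],
    "stop": ["weather"],             # the stop string itself is not included in the output
}
```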
**Specify the model response format**
Supports three formats: `text` (default), `json_object`, `json_schema`.
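A sketch of requesting JSON output. The `{"type": "json_object"}` shape is an assumption drawn from the common convention; this page only lists the three format names.

```python
payload = {
    "model": "doubao-seed-2.0-pro",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object containing 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
    # Assumed shape for the response-format selector (text | json_object | json_schema):
    "response_format": {"type": "json_object"},
}
```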
**Frequency penalty coefficient**
Range: -2 <= x <= 2. Default: `0`

**Presence penalty coefficient**
Range: -2 <= x <= 2. Default: `0`
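A payload fragment using the penalty coefficients. The field names `frequency_penalty` and `presence_penalty` are assumptions; the ranges and the default of 0 come from this page.

```python
payload = {
    "model": "doubao-seed-2.0-lite",
    "messages": [{"role": "user", "content": "Brainstorm ten blog post titles about tea."}],
    "frequency_penalty": 0.5,   # -2 to 2; positive values discourage verbatim repetition
    "presence_penalty": 0.3,    # -2 to 2; positive values encourage new topics
}
```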
**Whether to return log probabilities of output tokens**
- `false`: Do not return log probability information
- `true`: Return log probabilities for each output token in the message content

Note: deep thinking models do not support this field.
**Specify the number of most likely tokens to return at each output token position, each with an associated log probability**
Range: 0 <= x <= 20
Note: deep thinking models do not support this field.

**Adjust the probability of specified tokens appearing in the model output**
Note: deep thinking models do not support this field.
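A fragment exercising the log-probability and token-bias fields on a non-deep-thinking request. The names `logprobs`, `top_logprobs`, and `logit_bias`, and the token-ID-to-bias mapping, are assumptions following the common convention; the token ID shown is purely hypothetical.

```python
payload = {
    "model": "doubao-seed-2.0-mini",
    "messages": [{"role": "user", "content": "Answer yes or no: is water wet?"}],
    "logprobs": True,              # return per-token log probabilities
    "top_logprobs": 5,             # 0 to 20 alternatives per output position
    # Assumed logit_bias convention: token ID (hypothetical) mapped to a bias value.
    "logit_bias": {"12345": -100},
}
```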
**List of tools to be called**
The model response may contain tool call requests.
**Whether the model response is allowed to contain multiple tool calls for this request**
- `true`: Allow returning multiple tool calls
- `false`: At most one tool call is returned

**Whether the model response should contain tool calls for this request**
String mode:
- `none`: Model response does not contain tool calls
- `required`: Model response must contain tool calls
- `auto`: Model decides whether to include tool calls (default when tools are provided)

Object mode: specify the scope of tools to be called.
Allowed string values: `none`, `auto`, `required`
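A tool-calling sketch. The `tools` array shape, the `get_weather` function, and the exact field names `tool_choice` and `parallel_tool_calls` are assumptions following the common OpenAI-style convention; the string-mode values and the single-versus-multiple tool call behavior come from the descriptions above.

```python
payload = {
    "model": "doubao-seed-2.0-pro",
    "messages": [{"role": "user", "content": "What's the weather in Beijing tomorrow?"}],
    # Assumed OpenAI-style tool definition; get_weather is a hypothetical function.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Look up the weather forecast for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",          # none | auto | required (string mode)
    "parallel_tool_calls": False,   # at most one tool call in the response
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=60)
message = resp.json()["choices"][0]["message"]
for call in message.get("tool_calls", []):
    print(call["function"]["name"], call["function"]["arguments"])
```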
## Chat completion successful
**Unique identifier for this request**
Example: `"0217714854126607f5a9cf8ed5b018c76e4ad3dc2810db57ffb50"`

**Actual model name and version used for this request**
Example: `"doubao-seed-2-0-pro-260215"`

**Response type, always `chat.completion`**
Example: `"chat.completion"`

**Service tier for this request**
- `default`: Default service tier
- `scale`: Used reserved capacity quota

Allowed values: `default`, `scale`. Example: `"default"`

**Unix timestamp (in seconds) of when this request was created**
Example: `1771485416`
**Model output content for this request**

**Token usage for this request**
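A sketch of reading the response fields listed above from a completed (non-streaming) request. The top-level names match the descriptions on this page; the inner shape of the output content and the token usage block is assumed to follow the common `choices[0].message.content` / `usage` convention.

```python
data = resp.json()

print(data["id"])                 # unique identifier for the request
print(data["model"])              # actual model name/version used, e.g. "doubao-seed-2-0-pro-260215"
print(data["object"])             # always "chat.completion"
print(data["created"])            # Unix timestamp in seconds
print(data.get("service_tier"))   # "default" or "scale"

# Assumed OpenAI-compatible shapes for the output and usage blocks:
print(data["choices"][0]["message"]["content"])   # model output content
usage = data.get("usage", {})
print(usage.get("prompt_tokens"), usage.get("completion_tokens"), usage.get("total_tokens"))
```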