POST /grok/chat/completions

Grok Chat Completions
curl --request POST \
  --url https://api.acedata.cloud/grok/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "grok-3",
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "n": 1,
  "stream": false,
  "max_tokens": 123,
  "temperature": 1
}
'
{
  "id": "<string>",
  "model": "<string>",
  "object": "chat.completion",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>"
      },
      "finish_reason": "stop"
    }
  ],
  "created": 123,
  "system_fingerprint": "<string>"
}

Authorization

Authorization
string
header
Required

Request Headers

accept
enum<string>

Specifies the format of the response from the server.

Available options:
application/json

Request Body

application/json
model
enum<string>
Required

ID of the model to use.

Available options:
grok-4,
grok-4-1-fast,
grok-4-1-fast-non-reasoning,
grok-3,
grok-3-mini,
grok-2-vision
Example:

"grok-3"

messages
object[]
Required

A list of messages comprising the conversation so far.

Minimum array length: 1
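Since `messages` carries the whole conversation so far, multi-turn chat works by resending prior turns along with the new one. A minimal sketch (the conversation content here is invented for illustration):

```python
# Hypothetical multi-turn payload: earlier turns are replayed in order,
# with the newest user message last. The API requires at least one entry.
messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "And roughly how large is it?"},
]

assert len(messages) >= 1  # documented minimum array length
```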
n
number | null
Default: 1

How many chat completion choices to generate for each input message.

Required range: 1 <= x <= 128
Example:

1

stream
boolean | null
Default: false

If set, partial message deltas will be sent, like in ChatGPT.
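"Like in ChatGPT" suggests the OpenAI-style server-sent-events convention: `data:` lines carrying JSON chunks with `choices[0].delta.content` fragments, terminated by `data: [DONE]`. That chunk shape is an assumption to verify against real streamed responses; a sketch of assembling text from such a stream:

```python
import json

def collect_stream_text(sse_lines):
    """Assemble assistant text from OpenAI-style SSE chunks.

    Assumes each event is a 'data: {...}' line whose JSON carries a
    'choices[0].delta.content' fragment, ending with 'data: [DONE]'.
    This format is an assumption, not confirmed by this reference.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)

# Simulated stream, for illustration only.
fake_stream = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(collect_stream_text(fake_stream))  # → Hello
```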

max_tokens
number | null

The maximum number of tokens that can be generated in the chat completion.

temperature
number | null
Default: 1

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Required range: 0 <= x <= 2
Example:

1
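The body parameters above can be assembled programmatically. A minimal sketch that builds the JSON body and enforces the documented constraints before sending (the helper name and defaults are illustrative, not part of the API):

```python
import json

def build_completion_request(model, messages, n=1, stream=False,
                             max_tokens=None, temperature=1.0):
    """Build a JSON body for POST /grok/chat/completions.

    Validation mirrors the documented constraints:
    1 <= n <= 128, 0 <= temperature <= 2, and at least one message.
    """
    if not 1 <= n <= 128:
        raise ValueError("n must be between 1 and 128")
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    if len(messages) < 1:
        raise ValueError("messages requires at least one entry")
    body = {
        "model": model,
        "messages": messages,
        "n": n,
        "stream": stream,
        "temperature": temperature,
    }
    if max_tokens is not None:  # optional; omit to let the server decide
        body["max_tokens"] = max_tokens
    return json.dumps(body)

body = build_completion_request(
    "grok-3",
    [{"role": "user", "content": "Hello"}],
    max_tokens=256,
)
print(body)
```

The resulting string is what the curl example above passes via `--data`, alongside the `Authorization: Bearer <token>` and `Content-Type: application/json` headers.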

Response

OK, the request was successful.

Represents a chat completion response returned by the model, based on the provided input.

id
string
Required

A unique identifier for the chat completion.

model
string
Required

The model used for the chat completion.

object
enum<string>
Required

The object type, which is always chat.completion.

Available options:
chat.completion
choices
object[]
Required

A list of chat completion choices. Can be more than one if n is greater than 1.

created
number
Required

The Unix timestamp (in seconds) of when the chat completion was created.

system_fingerprint
string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.
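Pulling the assistant's reply out of a response with the shape documented above can be sketched as follows (the sample values are invented; with n greater than 1 there may be several choices):

```python
import json

# A sample response shaped like the documented fields; values are
# illustrative, not real API output.
raw = json.dumps({
    "id": "cmpl-123",
    "model": "grok-3",
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hi there!"},
            "finish_reason": "stop",
        }
    ],
    "created": 1700000000,
    "system_fingerprint": "fp_abc",
})

resp = json.loads(raw)
assert resp["object"] == "chat.completion"  # always this value
# Take the first choice; iterate over resp["choices"] when n > 1.
answer = resp["choices"][0]["message"]["content"]
print(answer)  # → Hi there!
```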