Kimi Chat Completions

POST /kimi/chat/completions
curl --request POST \
  --url https://api.acedata.cloud/kimi/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "kimi-k2.5",
  "messages": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "n": 1,
  "stream": false,
  "max_tokens": 123,
  "temperature": 1,
  "response_format": {
    "type": "json_object"
  }
}
'
{
  "id": "<string>",
  "model": "<string>",
  "object": "chat.completion",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>"
      },
      "finish_reason": "stop"
    }
  ],
  "created": 123,
  "system_fingerprint": "<string>"
}
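The curl call above can be reproduced in Python with only the standard library. Note that `build_payload` and `chat_completion` are illustrative helper names, not part of any SDK; this is a sketch assuming the endpoint accepts exactly the request body shown.

```python
import json
import urllib.request

API_URL = "https://api.acedata.cloud/kimi/chat/completions"


def build_payload(messages, model="kimi-k2.5", n=1, stream=False,
                  max_tokens=None, temperature=1):
    """Assemble the request body described on this page; optional
    fields left as None are omitted entirely."""
    payload = {
        "model": model,
        "messages": messages,
        "n": n,
        "stream": stream,
        "temperature": temperature,
    }
    if max_tokens is not None:
        payload["max_tokens"] = max_tokens
    return payload


def chat_completion(token, messages, **kwargs):
    """POST the payload with a Bearer token and return the parsed
    chat.completion response object."""
    body = json.dumps(build_payload(messages, **kwargs)).encode()
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Calling `chat_completion("<token>", [{"role": "user", "content": "Hello"}])` mirrors the curl example one-to-one.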

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, as in the request example above.

Headers

accept
enum<string>

Specifies the format of the response from the server.

Available options:
application/json

Body

application/json
model
enum<string>
required

ID of the model to use.

Available options:
kimi-k2-thinking-turbo,
kimi-k2.5,
kimi-k2-thinking,
kimi-k2-instruct-0905,
kimi-k2-0905-preview,
kimi-k2-turbo-preview,
kimi-k2-0711-preview
Example:

"kimi-k2.5"

messages
object[]
required

A list of messages comprising the conversation so far.

Minimum array length: 1
n
number | null
default: 1

How many chat completion choices to generate for each input message.

Required range: 1 <= x <= 128
Example:

1

stream
boolean | null
default: false

If set, partial message deltas will be sent, like in ChatGPT.
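When stream is true, endpoints of this OpenAI-compatible family typically deliver the reply as server-sent events: each data: line carries a JSON delta chunk, and a final data: [DONE] sentinel ends the stream. That wire format is an assumption based on the endpoint's shape, not confirmed by this page; under it, a minimal accumulator looks like:

```python
import json


def accumulate_stream(lines):
    """Concatenate assistant text from 'data: {...}' event lines.

    Assumes OpenAI-style streaming deltas (choices[0].delta.content)
    and a 'data: [DONE]' terminator -- both assumptions, since this
    page does not document the chunk schema.
    """
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip keep-alives and blank separators
        chunk = line[len("data: "):]
        if chunk.strip() == "[DONE]":
            break
        delta = json.loads(chunk)["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

Feeding it the decoded lines of a streaming HTTP response yields the same string a non-streaming call would return in message.content.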

max_tokens
number | null

The maximum number of tokens that can be generated in the chat completion.

temperature
number | null
default: 1

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Required range: 0 <= x <= 2
Example:

1

response_format
object

An object specifying the format that the model must output. If not specified, the model will output text.
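With response_format set to {"type": "json_object"}, the assistant message content should itself be a JSON document, which the caller still has to parse out of choices[0].message.content. A sketch of the round trip follows; the request body matches this page's schema, while the sample response content in the test is invented for illustration:

```python
import json

# Request fragment asking the model for JSON output, per the body
# schema above (the prompt text is just an example).
request_body = {
    "model": "kimi-k2.5",
    "messages": [{
        "role": "user",
        "content": 'List two primes as JSON like {"primes": [...]}',
    }],
    "response_format": {"type": "json_object"},
}


def parse_json_answer(response):
    """Decode the JSON document embedded in choices[0].message.content."""
    return json.loads(response["choices"][0]["message"]["content"])
```
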

Response

OK, the request was successful.

Represents a chat completion response returned by the model, based on the provided input.

id
string
required

A unique identifier for the chat completion.

model
string
required

The model used for the chat completion.

object
enum<string>
required

The object type, which is always chat.completion.

Available options:
chat.completion
choices
object[]
required

A list of chat completion choices. Can be more than one if n is greater than 1.
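With n greater than 1, the choices array carries one candidate per index, and pulling the texts out is a simple traversal of the response shape documented here (the sample response in the test is invented for illustration):

```python
def choice_texts(response):
    """Return the assistant content of every choice, ordered by index."""
    ordered = sorted(response["choices"], key=lambda c: c["index"])
    return [c["message"]["content"] for c in ordered]
```

Sorting by the index field rather than trusting array order is defensive; servers normally return choices already ordered.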

created
number
required

The Unix timestamp (in seconds) of when the chat completion was created.

system_fingerprint
string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.