POST /openai/responses

Openai V1 Responses
curl --request POST \
  --url https://api.acedata.cloud/openai/responses \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-4o-mini",
  "input": [
    {
      "role": "user",
      "content": "<string>"
    }
  ],
  "n": 1,
  "background": false,
  "stream": false,
  "tools": "<array>",
  "max_tokens": 123,
  "temperature": 1,
  "response_format": {
    "type": "json_object"
  }
}
'
{
  "id": "<string>",
  "model": "<string>",
  "object": "chat.completion",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>"
      },
      "finish_reason": "stop"
    }
  ],
  "created": 123,
  "system_fingerprint": "<string>"
}
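The curl call above can be reproduced in Python. This is a minimal sketch using only the standard library; the ACEDATA_TOKEN environment variable name is illustrative, not part of the API:

```python
import json
import os
import urllib.request

API_URL = "https://api.acedata.cloud/openai/responses"

def build_payload(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble the request body shown in the curl example."""
    return {
        "model": model,
        "input": [{"role": "user", "content": prompt}],
        "n": 1,
        "stream": False,
        "temperature": 1,
    }

def ask(prompt: str) -> str:
    """POST the payload and return the first choice's text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['ACEDATA_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    # The reply text sits at choices[0].message.content in the response schema.
    return data["choices"][0]["message"]["content"]
```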

Authorizations

Authorization
string
header
required

Headers

accept
enum<string>

Specifies the format of the response from the server.

Available options:
application/json

Body

application/json
model
enum<string>
required

ID of the model to use.

Available options:
gpt-5.4,
gpt-5.4-pro,
gpt-5.1,
gpt-5.1-all,
gpt-5,
gpt-5-mini,
gpt-5-nano,
gpt-4,
gpt-4-all,
gpt-4-turbo,
gpt-4-turbo-preview,
gpt-4-vision-preview,
gpt-4.1,
gpt-4.1-2025-04-14,
gpt-4.1-mini,
gpt-4.1-mini-2025-04-14,
gpt-4.1-nano,
gpt-4.1-nano-2025-04-14,
gpt-4.5-preview,
gpt-4.5-preview-2025-02-27,
gpt-4o,
gpt-4o-2024-05-13,
gpt-4o-2024-08-06,
gpt-4o-2024-11-20,
gpt-4o-all,
gpt-4o-image,
gpt-4o-mini,
gpt-4o-mini-2024-07-18,
gpt-4o-mini-search-preview,
gpt-4o-mini-search-preview-2025-03-11,
gpt-4o-search-preview,
gpt-4o-search-preview-2025-03-11,
gpt-35-turbo-16k,
o1,
o1-2024-12-17,
o1-all,
o1-mini,
o1-mini-2024-09-12,
o1-mini-all,
o1-preview,
o1-preview-2024-09-12,
o1-preview-all,
o1-pro,
o1-pro-2025-03-19,
o1-pro-all,
o3,
o3-2025-04-16,
o3-all,
o3-mini,
o3-mini-2025-01-31,
o3-mini-2025-01-31-high,
o3-mini-2025-01-31-low,
o3-mini-2025-01-31-medium,
o3-mini-all,
o3-mini-high,
o3-mini-high-all,
o3-mini-low,
o3-mini-medium,
o3-pro,
o3-pro-2025-06-10,
o4-mini,
o4-mini-2025-04-16,
o4-mini-all,
o4-mini-high-all
Example:

"gpt-4o-mini"

input
object[]
required

A list of messages comprising the conversation so far.

Minimum array length: 1
n
number | null
default: 1

How many chat completion choices to generate for each input message.

Required range: 1 <= x <= 128
Example:

1

background
boolean | null
default: false

Whether to run the model response in the background.

stream
boolean | null
default: false

If set, partial message deltas will be sent, like in ChatGPT.
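When stream is true, partial deltas presumably arrive as server-sent events, i.e. lines of the form `data: {json}` terminated by `data: [DONE]`. The chunk shape below is an assumption modeled on the common OpenAI streaming format, not confirmed by this page:

```python
import json

def extract_delta(line: str) -> str:
    """Pull the text delta out of one SSE line, if any.

    Assumes chunks look like {"choices": [{"delta": {"content": "..."}}]};
    returns "" for comments, keep-alives, and the [DONE] sentinel.
    """
    if not line.startswith("data: "):
        return ""
    payload = line[len("data: "):].strip()
    if payload == "[DONE]":
        return ""
    chunk = json.loads(payload)
    return chunk.get("choices", [{}])[0].get("delta", {}).get("content") or ""
```

A consumer would call `extract_delta` on each line of the response body and concatenate the non-empty results.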

tools
array

An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter.

Minimum array length: 1
max_tokens
number | null

The maximum number of tokens that can be generated in the chat completion.

temperature
number | null
default: 1

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Required range: 0 <= x <= 2
Example:

1

response_format
object

An object specifying the format that the model must output. If not specified, the model will output text.
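A request body that forces a JSON object reply could be built like this (a sketch; only the response_format shape shown in the request example above is assumed):

```python
def json_mode_payload(prompt: str) -> dict:
    """Request body asking the model to reply with a JSON object.

    Note: JSON mode typically also requires the prompt itself to
    mention JSON, or the model may not comply.
    """
    return {
        "model": "gpt-4o-mini",
        "input": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }
```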

Response

OK, the request was successful.

Represents a chat completion response returned by the model, based on the provided input.

id
string
required

A unique identifier for the chat completion.

model
string
required

The model used for the chat completion.

object
enum<string>
required

The object type, which is always chat.completion.

Available options:
chat.completion
choices
object[]
required

A list of chat completion choices. Can be more than one if n is greater than 1.
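Because choices is a list, reading the reply means indexing into it. A small helper, assuming the schema shown in the response example above:

```python
def choice_texts(response: dict) -> list[str]:
    """Return the assistant text of every choice, in index order."""
    return [c["message"]["content"] for c in response["choices"]]
```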

created
number
required

The Unix timestamp (in seconds) of when the chat completion was created.

system_fingerprint
string

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.