Integrating some of the Q&A APIs on the market is not straightforward. OpenAI's Chat Completions API, for example, takes a messages field: to hold a continuous conversation you must pass the entire historical context on every request, and you also have to handle the case where the token limit is exceeded. The AI Q&A API provided by AceDataCloud is optimized for exactly this situation. While keeping the answer quality unchanged, it encapsulates the implementation of continuous dialogue, so you do not need to manage messages yourself or worry about exceeding token limits (both are handled automatically inside the API). It also provides functions for querying and modifying conversations, which greatly simplifies integration. This document describes how to integrate the AI Q&A API.

Application Process

To use the API, you first need to apply for the corresponding service on the AI Q&A API page. After entering the page, click the "Acquire" button, as shown in the image. If you are not logged in or registered, you will be redirected to the login page and invited to register and log in; after logging in or registering, you will be returned automatically to the current page. A free quota is granted on your first application, so you can try the API for free.

Basic Usage

First, let's look at the basic usage: input a question and receive an answer. You only need to pass a question field and specify the corresponding model. For example, to ask "What's your name?", fill in the corresponding content on the interface, as shown in the image. Here we can see the Request Headers we set, including:
  • accept: the format of the response result you want to receive, here filled in as application/json, which means JSON format.
  • authorization: the key to call the API, which can be directly selected after application.
Additionally, we set the Request Body, including:
  • model: the model to use, such as the mainstream GPT-3.5, GPT-4, etc.
  • question: the question to be asked, which can be any plain text.
After selection, we can see that the corresponding code is also generated on the right side, as shown in the image:

Click the “Try” button to test, as shown in the image above, and we get the following result:
{
  "answer": "I am an AI language model developed by OpenAI and I don't have a personal name. However, you can call me GPT or simply Chatbot. How can I assist you today?"
}
As we can see, the returned result contains an answer field, which holds the answer to the question. If you do not need multi-turn dialogue, this API alone makes integration very easy. If you want the corresponding integration code, you can copy the generated code directly; for example, the cURL code is as follows:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-3.5",
  "question": "What'\''s your name?"
}'
The Python integration code is as follows:
import requests

url = "https://api.acedata.cloud/aichat/conversations"

headers = {
    "accept": "application/json",
    "authorization": "Bearer {token}",
    "content-type": "application/json"
}

payload = {
    "model": "gpt-3.5",
    "question": "What's your name?"
}

response = requests.post(url, json=payload, headers=headers)
print(response.text)
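Since the body is plain JSON, pulling out the answer is a one-liner once the response is parsed. A minimal sketch (the extract_answer helper is our own name, not part of the API):

```python
import json

def extract_answer(body: str) -> str:
    """Parse the API's JSON response body and return the answer text."""
    data = json.loads(body)
    if "answer" not in data:
        # Surface unexpected payloads (e.g. error objects) instead of a KeyError
        raise ValueError(f"unexpected response: {data}")
    return data["answer"]

# With a response body like the sample above:
body = '{"answer": "I am an AI language model developed by OpenAI..."}'
print(extract_answer(body))
```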

Multi-Turn Dialogue

If you want to integrate multi-turn dialogue, pass an additional parameter stateful with its value set to true; every subsequent request must carry this parameter as well. Once stateful is passed, the API additionally returns an id field representing the current conversation. By passing this id on later requests, multi-turn dialogue comes for free. Let's demonstrate the specific operation: in the first request, set stateful to true and pass the model and question parameters as usual, as shown in the image. The corresponding code is as follows:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-3.5",
  "question": "What'\''s your name?",
  "stateful": true
}'
You can get the following response:
{
  "answer": "I am an AI language model created by OpenAI and I don't have a personal name. You can simply call me OpenAI or ChatGPT. How can I assist you today?",
  "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"
}
In the second request, pass the id field returned by the first request as a parameter, keep the stateful parameter set to true, and ask "What did I ask you just now?", as shown in the image. The corresponding code is as follows:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-3.5",
  "stateful": true,
  "id": "7cdb293b-2267-4979-a1ec-48d9ad149916",
  "question": "What did I ask you just now?"
}'
The result is as follows:
{
  "answer": "You asked me what my name is. As an AI language model, I do not possess a personal identity, so I don't have a specific name. However, you can refer to me as OpenAI or ChatGPT, the names used for this AI model. Is there anything else I can help you with?",
  "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"
}
As we can see, it answers the question based on the conversation context.
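Putting both requests together, a small wrapper can hide the id bookkeeping entirely. This is a hedged sketch, not an official client; the Conversation class and its method names are our own:

```python
import requests

API_URL = "https://api.acedata.cloud/aichat/conversations"

class Conversation:
    """Stateful helper: the first ask() creates a conversation and stores
    the returned id; later calls send that id so the API keeps the context."""

    def __init__(self, token, model="gpt-3.5"):
        self.headers = {
            "accept": "application/json",
            "authorization": f"Bearer {token}",
            "content-type": "application/json",
        }
        self.model = model
        self.id = None

    def build_payload(self, question):
        payload = {"model": self.model, "question": question, "stateful": True}
        if self.id is not None:
            payload["id"] = self.id  # reuse the conversation on follow-ups
        return payload

    def ask(self, question):
        data = requests.post(API_URL, json=self.build_payload(question),
                             headers=self.headers).json()
        self.id = data.get("id", self.id)
        return data["answer"]

# Usage (requires a valid token):
# chat = Conversation("your-token")
# chat.ask("What's your name?")
# chat.ask("What did I ask you just now?")
```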

Streaming Response

This interface also supports streaming responses, which is very useful for web integration: the page can display the answer word by word. To receive a streaming response, change the accept parameter in the request header to application/x-ndjson, as shown in the image. The calling code needs corresponding changes as well: once accept is set to application/x-ndjson, the API returns the JSON data line by line, so the client must read the results line by line. Python sample calling code:
import requests

url = "https://api.acedata.cloud/aichat/conversations"

headers = {
    "accept": "application/x-ndjson",
    "authorization": "Bearer {token}",
    "content-type": "application/json"
}

payload = {
    "model": "gpt-3.5",
    "stateful": True,
    "id": "7cdb293b-2267-4979-a1ec-48d9ad149916",
    "question": "Hello"
}

response = requests.post(url, json=payload, headers=headers, stream=True)
for line in response.iter_lines():
    print(line.decode())
The output is as follows:
{"answer": "Hello", "delta_answer": "Hello", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello!", "delta_answer": "!", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How", "delta_answer": " How", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can", "delta_answer": " can", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can I", "delta_answer": " I", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can I assist", "delta_answer": " assist", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can I assist you", "delta_answer": " you", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can I assist you today", "delta_answer": " today", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
{"answer": "Hello! How can I assist you today?", "delta_answer": "?", "id": "7cdb293b-2267-4979-a1ec-48d9ad149916"}
As can be seen, answer in each line is the latest full answer so far, while delta_answer is the newly added fragment; you can use whichever suits your system. JavaScript is also supported; for example, the streaming call code for Node.js is as follows:
const axios = require("axios");

const url = "https://api.acedata.cloud/aichat/conversations";
const headers = {
  "Content-Type": "application/json",
  Accept: "application/x-ndjson",
  Authorization: "Bearer {token}",
};
const body = {
  question: "Hello",
  model: "gpt-3.5",
  stateful: true,
};

axios
  .post(url, body, { headers: headers, responseType: "stream" })
  .then((response) => {
    console.log(response.status);
    response.data.on("data", (chunk) => {
      console.log(chunk.toString());
    });
  })
  .catch((error) => {
    console.error(error);
  });
Java sample code:
String url = "https://api.acedata.cloud/aichat/conversations";
OkHttpClient client = new OkHttpClient();
MediaType mediaType = MediaType.parse("application/json");
RequestBody body = RequestBody.create(mediaType, "{\"question\": \"Hello\", \"stateful\": true, \"model\": \"gpt-3.5\"}");
Request request = new Request.Builder()
        .url(url)
        .post(body)
        .addHeader("Content-Type", "application/json")
        .addHeader("Accept", "application/x-ndjson")
        .addHeader("Authorization", "Bearer {token}")
        .build();

client.newCall(request).enqueue(new Callback() {
    @Override
    public void onFailure(Call call, IOException e) {
        e.printStackTrace();
    }

    @Override
    public void onResponse(Call call, Response response) throws IOException {
        if (!response.isSuccessful()) throw new IOException("Unexpected code " + response);
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(response.body().byteStream(), "UTF-8"))) {
            String responseLine;
            while ((responseLine = br.readLine()) != null) {
                System.out.println(responseLine);
            }
        }
    }
});
The code for other languages can be written along the same lines; the principle is identical.
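Whatever language you stream in, each line is an independent JSON object, so the word-by-word display reduces to decoding lines and appending delta_answer. A minimal sketch in Python (the stream_deltas helper is illustrative, not part of the API):

```python
import json

def stream_deltas(lines):
    """Yield the incremental delta_answer piece from each NDJSON line,
    skipping any blank lines."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        yield json.loads(line)["delta_answer"]

# With the first three lines of the sample output above:
sample = [
    '{"answer": "Hello", "delta_answer": "Hello", "id": "7cdb..."}',
    '{"answer": "Hello!", "delta_answer": "!", "id": "7cdb..."}',
    '{"answer": "Hello! How", "delta_answer": " How", "id": "7cdb..."}',
]
print("".join(stream_deltas(sample)))  # -> Hello! How
```

Concatenating the deltas always reproduces the latest answer field, so a UI can append each fragment as it arrives.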

Model Preset

We know that OpenAI's APIs have the concept of a system prompt (system_prompt), which sets a preset for the whole model, such as what its name is. This AI Q&A API exposes the same capability through a parameter called preset, which lets us add a preset to the model. Let's try it with an example: here we additionally add the preset field with the content You are a professional artist, as shown in the figure. The corresponding code is as follows:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-3.5",
  "stateful": true,
  "question": "What can you help me with?",
  "preset": "You are a professional artist"
}'
The running result is as follows:
{
    "answer": "As a professional artist, I can offer a range of services and assistance depending on your specific needs. Here are a few ways I can help you:\n\n1. Custom Artwork: If you have a specific vision or idea, I can create custom artwork for you. This can include paintings, drawings, digital art, or any other medium you prefer.\n\n2. Commissioned Pieces: If you have a specific subject or concept in mind, I can create commissioned art pieces tailored to your preferences. This could be for personal enjoyment or as a unique gift for someone special.\n\n3. Art Consultation: If you need guidance on art selection, interior design, or how to showcase and display art in your space, I can provide professional advice to help enhance your aesthetic sense and create a cohesive look."
}
As we can see, we told GPT that it is a professional artist, and when we asked what it could do for us, it answered in that role.

Image Recognition

This AI also supports adding attachments for image recognition by passing image links through the references parameter. For example, suppose we have an image of apples, as shown in the figure, with the link https://cdn.acedata.cloud/ht05g0.png; we can pass it directly as the references parameter. Note that the chosen model must support visual recognition; currently the supported model is gpt-4-vision. The input is therefore as follows, and the corresponding code is:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-4-vision",
  "question": "How many apples in the picture?",
  "references": ["https://cdn.acedata.cloud/ht05g0.png"]
}'
The running result is as follows:
{
  "answer": "There are 5 apples in the picture."
}
As we can see, we successfully obtained the corresponding answer result for the image.
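In Python, the only change from the basic call is the references list and a vision-capable model. A hedged sketch (the function names are ours, not the API's):

```python
import requests

API_URL = "https://api.acedata.cloud/aichat/conversations"

def build_vision_payload(question, image_urls, model="gpt-4-vision"):
    """Build the request body for image recognition: references carries the
    image links, and the model must support visual recognition."""
    return {"model": model, "question": question, "references": list(image_urls)}

def ask_about_images(question, image_urls, token):
    headers = {
        "accept": "application/json",
        "authorization": f"Bearer {token}",
        "content-type": "application/json",
    }
    resp = requests.post(API_URL, json=build_vision_payload(question, image_urls),
                         headers=headers)
    return resp.json()["answer"]

# Usage (requires a valid token):
# ask_about_images("How many apples in the picture?",
#                  ["https://cdn.acedata.cloud/ht05g0.png"], "your-token")
```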

Online Q&A

This API also supports online (browsing) models; both GPT-3.5 and GPT-4 are supported. Behind the API there is an automatic process of searching the internet and summarizing the results. We can choose the gpt-3.5-browsing model to try it, as shown in the figure. The code is as follows:
curl -X POST 'https://api.acedata.cloud/aichat/conversations' \
-H 'accept: application/json' \
-H 'authorization: Bearer {token}' \
-H 'content-type: application/json' \
-d '{
  "model": "gpt-3.5-browsing",
  "question": "今天天气如何?"
}'
The result is as follows:
{
  "answer": "今天纽约的天气如下:\n- 当前温度:16°C (60°F)\n- 最高:16°C (60°F)\n- 最低:10°C (50°F)\n- 湿度:47%\n- 紫外线指数:6 of 11\n- 日出:5:42 am\n- 日落:8:02 pm\n\n天气阴云密布,今晚有偶尔降雨的可能,降雨概率为50%。\n更多详情,请访问 [天气频道](https://weather.com/weather/tenday/l/96f2f84af9a5f5d452eb0574d4e4d8a840c71b05e22264ebdc0056433a642c84)。\n\n还有其他想知道的吗?"
}
As we can see, it automatically searched the Weather Channel website, extracted the information there, and then returned a real-time result.
If you need higher answer quality, switch the model to gpt-4-browsing for better results.