MaaS_GP_5.4 Series
Basic Information
| Item | Description |
|---|---|
| Base URL | https://genaiapi.cloudsway.net/v1/ |
| Authentication Method | Bearer Token (API Key) |
| Response Format | JSON |
| Request Format | JSON |
New Features
Like the earlier GP-5 models, GP-5.4 supports custom tools, parameters that control verbosity and reasoning effort, and allowed-tool lists. GP-5.4 also introduces several features that make it easier to build powerful agent systems, handle larger amounts of information, and run more reliable automated workflows:
- tool_search in the API: GP-5.4 improves tool selection in large tool ecosystems by lazy-loading tools. Tools become searchable, only the relevant definitions are loaded, token usage drops, and tool-selection accuracy improves in real deployments.
- 1 million token context window: GP-5.4 supports a context window of up to 1 million tokens, making it possible to analyze entire codebases, long document collections, or extended agent traces in a single request. See the "1 million token context window" section for details.
- Built-in computer use: GP-5.4 is the first mainstream model with built-in computer use, enabling agents to interact directly with software to complete, verify, and repair tasks within the build-run-verify-fix cycle.
- Native compression support: GP-5.4 is the first mainstream model trained to support compression, enabling longer agent trajectories while retaining key context.
Model Capability List
| Capability / Model | MaaS_GP_5.4 | MaaS_GP_5.4_pro | MaaS_GP_5.4_mini | MaaS_GP_5.4_nano |
|---|---|---|---|---|
| Input Support | Text, Image | Text, Image | Text, Image | Text, Image |
| /Chat Completions | ✅ | ❌ | ✅ | ✅ |
| /Responses | ✅ | ✅ | ✅ | ✅ |
| Tool search | ✅ | ✅ | ✅ | ❌ |
| Web search | ✅ | ✅ | ❌ | ❌ |
| Computer Use Support | ✅ | ❌ | ✅ | ❌ |
| Best suited for | General-purpose tasks, including complex reasoning, broad world knowledge, and agent tasks over large codebases or many steps. | Hard problems that may take longer to solve and need deeper reasoning. | High-volume coding, computer use, and agent workflows that still need strong reasoning. | Simple high-throughput tasks where speed and cost matter most. |
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model name |
| messages | array | Yes | List of conversation messages |
| temperature | float | No | Sampling temperature (0-2), default 1.0 |
| top_p | float | No | Nucleus sampling parameter (0-1), default 1.0 |
| max_tokens | int | No | Maximum number of generated tokens |
| stream | boolean | No | Whether to stream output, default false |
| presence_penalty | float | No | -2.0 to 2.0 |
| frequency_penalty | float | No | -2.0 to 2.0 |
| seed | int | No | Seed for deterministic generation |
The following parameters are only supported on GP-5.4 when reasoning effort is set to none:
- temperature
- top_p
- logprobs
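As a sketch, the parameters above can be assembled into a /chat/completions payload like this (gpt-5.4-mini and the specific values are illustrative, not required; build_chat_payload is a hypothetical helper, not part of the API):

```python
def build_chat_payload(prompt: str, *, stream: bool = False) -> dict:
    """Assemble a /chat/completions payload from the request parameters.

    Note: per the restriction above, temperature/top_p/logprobs only take
    effect when reasoning effort is set to none.
    """
    return {
        "model": "gpt-5.4-mini",  # any GP-5.4 model that supports /chat/completions
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,       # sampling temperature, 0-2 (default 1.0)
        "top_p": 0.9,             # nucleus sampling, 0-1 (default 1.0)
        "max_tokens": 256,        # cap on generated tokens
        "stream": stream,         # streaming off by default
        "seed": 42,               # seed for deterministic generation
    }
```

The resulting dict is then sent with the same Authorization and Content-Type headers shown in the request examples below.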
New phase Parameter
For long-running GP-5.4 workflows, or runs that use many tools with the Responses API, use the phase field on assistant messages to avoid premature termination and other undesirable behavior.
phase is optional at the API level, but we strongly recommend using it: set phase: "commentary" on intermediate assistant updates (e.g., status messages before tool calls) and phase: "final_answer" on the final answer. Do not add phase to user messages.
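As an illustration of this convention (the message contents are invented; only the phase values come from the description above), a transcript replayed to the API might tag its assistant messages like this:

```python
# Hypothetical transcript sketch: only assistant messages carry "phase".
conversation = [
    {"role": "user", "content": "Refactor utils.py and run the tests."},
    {
        "role": "assistant",
        "phase": "commentary",    # intermediate update before a tool call
        "content": "Running the test suite first to get a baseline.",
    },
    {
        "role": "assistant",
        "phase": "final_answer",  # the answer the user should see
        "content": "Refactor complete; all tests pass.",
    },
]

# Sanity check: user messages must not carry a phase field.
assert all("phase" not in m for m in conversation if m["role"] == "user")
```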
Request Example
/responses
/responses: Non-streaming synchronous request
Curl Request
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5.4-mini",
"input": "askA:answerA."
}'
Python Request
import requests
url = "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses"
payload = {
"model": "gpt-5.4-mini",
"input": "askA:answerA."
}
headers = {
"Authorization": "Bearer {YOUR_API_KEY}",
"Content-Type": "application/json"
}
try:
response = requests.post(url, json=payload, headers=headers)
response.raise_for_status()
print(response.status_code)
print(response.json())
except requests.exceptions.RequestException as e:
print(f"error: {e}")
Return Example
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "none"
},
"usage": {
"input_tokens_details": {
"cached_tokens": 0
},
"input_tokens": 12,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 36,
"output_tokens": 24
},
"created_at": 1775556238,
"store": true,
"tools": [],
"content_filters": [
{
"content_filter_results": {
"self_harm": {
"severity": "safe",
"filtered": false
},
"jailbreak": {
"filtered": false,
"detected": false
},
"hate": {
"severity": "safe",
"filtered": false
},
"sexual": {
"severity": "safe",
"filtered": false
},
"violence": {
"severity": "safe",
"filtered": false
}
},
"content_filter_offsets": {
"end_offset": 840,
"start_offset": 0,
"check_offset": 0
},
"source_type": "prompt",
"content_filter_raw": [],
"blocked": false
}
],
"output": [
{
"phase": "final_answer",
"role": "assistant",
"type": "message",
"content": [
{
"annotations": [],
"type": "output_text",
"logprobs": [],
"text": "Certainly. Please go ahead and send me the specific content of \"Question A\", and I will answer directly in English."
}
],
"id": "msg_0e27378e1f3e955e0069d4d68f1b448193870af25f1aacd3e2",
"status": "completed"
}
],
"top_p": 0.98,
"completed_at": 1775556239,
"frequency_penalty": 0.0,
"parallel_tool_calls": true,
"background": false,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5.4_mini_20260317",
"service_tier": "auto",
"id": "resp_0e27378e1f3e955e0069d4d68ed438819387245017afa12e6c",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "completed"
}
/responses: Streaming synchronous request
Example of Curl Request
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
"model": "gpt-5.4-mini",
"input": "askA:answerA.",
"stream": true
}'
Python Request Example
import requests
import json
url = "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer {YOUR_API_KEY}"
}
payload = {
"input": [
{
"role": "developer",
"content": "Talk like a pirate."
},
{
"role": "user",
"content": "Are semicolons optional in JavaScript?"
}
],
"stream": True
}
with requests.post(url, headers=headers, json=payload, stream=True) as response:
response.raise_for_status()
for line in response.iter_lines(decode_unicode=True):
if line:
if line.startswith('data: '):
data_str = line[6:]
if data_str != '[DONE]':
try:
data = json.loads(data_str)
print(data)
except json.JSONDecodeError:
print(data_str)
/responses: Non-streaming asynchronous request
Example of Curl Request
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
"background": true,
"model": "gpt-5.4-mini",
"input": "askA:answerA."
}'
Python Request Example
import requests
url = "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer {YOUR_API_KEY}"
}
payload = {
"background": True,
"input": [
{"role": "developer", "content": "Talk like a pirate."},
{"role": "user", "content": "Are semicolons optional in JavaScript?"}
]
}
response = requests.post(url, headers=headers, json=payload)
print(response.status_code)
print(response.json())
Return Example
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "medium"
},
"created_at": 1775617385,
"store": true,
"tools": [],
"output": [],
"top_p": 1.0,
"frequency_penalty": 0.0,
"parallel_tool_calls": true,
"background": true,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5_mini_20250807",
"service_tier": "auto",
"id": "resp_04040303cf238c130069d5c569cbcc81959f0a59b62453cba4",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "queued"
}
/responses: Streaming asynchronous request
curl "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {YOUR_API_KEY}" \
-d '{
"background": true,
"stream": true,
"input": [
{
"role": "developer",
"content": "Talk like a pirate."
},
{
"role": "user",
"content": "Are semicolons optional in JavaScript?"
}
]
}'
Python Request Example
import requests
url = "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer {YOUR_API_KEY}"
}
payload = {
"background": True,
"stream": True,
"input": [
{"role": "developer", "content": "Talk like a pirate."},
{"role": "user", "content": "Are semicolons optional in JavaScript?"}
]
}
# Streaming asynchronous request
with requests.post(url, headers=headers, json=payload, stream=True) as response:
for line in response.iter_lines(decode_unicode=True):
if line:
print(line)
/responses: Get asynchronous request results
curl --location --request GET 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses/resp_0bd529239ed0ff590069bb8f70a8448193976feb98de696212' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{}'
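A background response can be fetched repeatedly until it leaves the queued state. Below is a minimal polling sketch; the HTTP fetch is injected as a callable so the loop itself stays testable (poll_response and the retry limits are illustrative, not part of the API):

```python
import time
from typing import Callable

def poll_response(fetch: Callable[[str], dict], response_id: str,
                  interval: float = 2.0, max_tries: int = 30) -> dict:
    """Poll GET /responses/{id} via `fetch` until the response leaves the
    queued/in_progress states, then return the final response object."""
    for _ in range(max_tries):
        resp = fetch(response_id)
        if resp.get("status") not in ("queued", "in_progress"):
            return resp
        time.sleep(interval)
    raise TimeoutError(f"response {response_id} still pending")
```

With the requests library, fetch can simply issue the GET shown above and return response.json().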
Return Example
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "none"
},
"usage": {
"input_tokens_details": {
"cached_tokens": 0
},
"input_tokens": 24,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 233,
"output_tokens": 209
},
"created_at": 1775627820,
"store": true,
"tools": [],
"content_filters": [
{
"content_filter_results": {
"self_harm": {
"severity": "safe",
"filtered": false
},
"jailbreak": {
"filtered": false,
"detected": false
},
"hate": {
"severity": "safe",
"filtered": false
},
"sexual": {
"severity": "safe",
"filtered": false
},
"violence": {
"severity": "safe",
"filtered": false
}
},
"content_filter_offsets": {
"end_offset": 870,
"start_offset": 0,
"check_offset": 0
},
"source_type": "prompt",
"content_filter_raw": [],
"blocked": false
}
],
"output": [
{
"phase": "final_answer",
"role": "assistant",
"type": "message",
"content": [
{
"annotations": [],
"type": "output_text",
"logprobs": [],
"text": "Arrr, aye — semicolons be **optional** in JavaScript much of the time, because the language has **automatic semicolon insertion**.\n\nBut beware:\n\n- JavaScript will **sometimes add them for ye**\n- and sometimes **not the way ye expect**\n- so omitting them can lead to weird bugs\n\n### Example\n```js\nconst a = 1\nconst b = 2\nconsole.log(a + b)\n```\nThis usually works fine.\n\n### But this can break\n```js\nreturn\n{\n ok: true\n}\n```\nJavaScript treats that like:\n```js\nreturn;\n{\n ok: true\n}\n```\nSo it returns `undefined`.\n\n### Rule of thumb\n- **Yes, semicolons are optional**\n- **No, it’s not always safe to skip them**\n- Many crews use semicolons anyway to avoid trouble\n\nIf ye want, I can show ye the main JavaScript gotchas with semicolon insertion, matey."
}
],
"id": "msg_03796fc2ac85b8190069d5ee30dbf88195898413def497ddd0",
"status": "completed"
}
],
"top_p": 0.98,
"completed_at": 1775627826,
"frequency_penalty": 0.0,
"parallel_tool_calls": true,
"background": true,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5.4_mini_20260317",
"service_tier": "auto",
"id": "resp_03796fc2ac85b8190069d5ee2c38b4819582ae06f00a1817d8",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "completed"
}
/chat/completions
/chat/completions Non-streaming Request
curl "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {YOUR_API_KEY}" \
-d '{
"messages": [
{
"role": "developer",
"content": "Talk like a pirate."
},
{
"role": "user",
"content": "Are semicolons optional in JavaScript?"
}
]
}'
Python Request Example
import requests
url = "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/chat/completions"
headers = {
"Content-Type": "application/json",
"Authorization": "Bearer {YOUR_API_KEY}"
}
data = {
"messages": [
{
"role": "developer",
"content": "Talk like a pirate."
},
{
"role": "user",
"content": "Are semicolons optional in JavaScript?"
}
]
}
response = requests.post(url, headers=headers, json=data)
print("Status code:", response.status_code)
print("Returned result:", response.json())
Response Example
{
"id": "chatcmpl-DRvzyVchkK7ZR0WqwMsHUomRujwSL",
"choices": [
{
"index": 0,
"logprobs": null,
"message": {
"role": "assistant",
"content": "Aye, mostly — but not always.\n\nJavaScript has a feature called **Automatic Semicolon Insertion (ASI)**, which means the engine can often add semicolons for ye when they’re omitted.\n\nExample:\n\n```js\nlet x = 5\nlet y = 10\nconsole.log(x + y)\n```\n\nThat usually works fine.\n\nBut there be **dangerous cases** where leaving them out can change the meaning or break the code. For example:\n\n```js\nlet a = 1\nlet b = 2\n[a, b].forEach(console.log)\n```\n\nJavaScript might treat that `[` as continuing the previous line, which can cause trouble.\n\nAnother classic trap:\n\n```js\nreturn\n{\n name: \"Jack\"\n}\n```\n\nThis becomes:\n\n```js\nreturn;\n{\n name: \"Jack\"\n}\n```\n\nSo it returns `undefined`, not the object.\n\n## Short answer\n- **Yes**, semicolons are often optional.\n- **No**, they are not always safe to omit.\n\n## Best practice\nMany crews choose one of these:\n- **Always use semicolons** for safety and clarity, or\n- **Omit them consistently** only if ye understand ASI rules well and use a formatter/linter like **Prettier** or **ESLint**.\n\nSo: **optional by syntax in many cases, but not truly optional in practice unless ye be careful.**",
"refusal": null,
"annotations": [],
"images": null,
"reasoning_content": null,
"function_call": null,
"tool_calls": null,
"reasoning_details": null
},
"finish_reason": "stop",
"native_finish_reason": null
}
],
"logprobs": null,
"created": 1775550174,
"model": "MaaS_GP_5.4_20260305",
"object": "chat.completion",
"system_fingerprint": null,
"service_tier": null,
"usage": {
"prompt_tokens": 24,
"completion_tokens": 299,
"total_tokens": 323,
"completion_tokens_details": {
"accepted_prediction_tokens": 0,
"audio_tokens": 0,
"image_tokens": 0,
"reasoning_tokens": 0,
"rejected_prediction_tokens": 0
},
"prompt_tokens_details": {
"audio_tokens": 0,
"cached_tokens": 0
},
"cache_creation_input_tokens": null,
"cache_creation": null,
"gemini_cache_tokens_details": null
}
}
/chat/completions Streaming Request
curl "https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {YOUR_API_KEY}" \
-d '{
"messages": [
{
"role": "developer",
"content": "Talk like a pirate."
},
{
"role": "user",
"content": "Are semicolons optional in JavaScript?"
}
],
"stream": true
}'
Tool Search (tool_search)
tool_search is not supported on /chat/completions; it is only available on /responses.
Reference document: https://developers.openai.com/api/docs/guides/tools-tool-search
Request Example
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
"input": "List open orders for customer CUST-12345.",
"parallel_tool_calls": false,
"tools": [
{
"type": "namespace",
"name": "crm",
"description": "CRM tools for customer lookup and order management.",
"tools": [
{
"type": "function",
"name": "get_customer_profile",
"description": "Fetch a customer profile by customer ID.",
"parameters": {
"type": "object",
"properties": {
"customer_id": { "type": "string" }
},
"required": ["customer_id"],
"additionalProperties": false
}
},
{
"type": "function",
"name": "list_open_orders",
"description": "List open orders for a customer ID.",
"defer_loading": true,
"parameters": {
"type": "object",
"properties": {
"customer_id": { "type": "string" }
},
"required": ["customer_id"],
"additionalProperties": false
}
}
]
},
{ "type": "tool_search" }
]
}'
You should see output items like the following:
- tool_search_call
- tool_search_output
- function_call (e.g., list_open_orders)
Return Example
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "none"
},
"usage": {
"input_tokens_details": {
"cached_tokens": 0
},
"input_tokens": 593,
"output_tokens_details": {
"reasoning_tokens": 19
},
"total_tokens": 636,
"output_tokens": 43
},
"created_at": 1775627905,
"store": true,
"tools": [
{
"type": "tool_search"
},
{
"description": "CRM tools for customer lookup and order management.",
"type": "namespace",
"tools": [
{
"description": "Fetch a customer profile by customer ID.",
"type": "function",
"name": "get_customer_profile",
"strict": true,
"parameters": {
"type": "object",
"required": [
"customer_id"
],
"additionalProperties": false,
"properties": {
"customer_id": {
"type": "string"
}
}
}
},
{
"description": "List open orders for a customer ID.",
"type": "function",
"defer_loading": true,
"name": "list_open_orders",
"strict": true,
"parameters": {
"type": "object",
"required": [
"customer_id"
],
"additionalProperties": false,
"properties": {
"customer_id": {
"type": "string"
}
}
}
}
],
"name": "crm"
}
],
"content_filters": [
{
"content_filter_results": {
"self_harm": {
"severity": "safe",
"filtered": false
},
"jailbreak": {
"filtered": false,
"detected": false
},
"hate": {
"severity": "safe",
"filtered": false
},
"sexual": {
"severity": "safe",
"filtered": false
},
"violence": {
"severity": "safe",
"filtered": false
}
},
"content_filter_offsets": {
"end_offset": 2559,
"start_offset": 0,
"check_offset": 0
},
"source_type": "prompt",
"content_filter_raw": [],
"blocked": false
}
],
"output": [
{
"execution": "server",
"type": "tool_search_call",
"arguments": {
"paths": [
"crm"
]
},
"id": "tsc_04a644e35c4f876d0069d5ee82bacc8194be92a9b2487c4027",
"status": "completed"
},
{
"execution": "server",
"type": "tool_search_output",
"tools": [
{
"description": "CRM tools for customer lookup and order management.",
"type": "namespace",
"tools": [
{
"description": "List open orders for a customer ID.",
"type": "function",
"defer_loading": true,
"name": "list_open_orders",
"strict": true,
"parameters": {
"type": "object",
"required": [
"customer_id"
],
"additionalProperties": false,
"properties": {
"customer_id": {
"type": "string"
}
}
}
}
],
"name": "crm"
}
],
"id": "tso_04a644e35c4f876d0069d5ee82c86c819491796ba0edb3faf1",
"status": "completed"
},
{
"type": "function_call",
"call_id": "call_gIstyUK0Wj4G6n79UpI8EhT8",
"name": "list_open_orders",
"namespace": "crm",
"arguments": "{\"customer_id\":\"CUST-12345\"}",
"id": "fc_04a644e35c4f876d0069d5ee83642481948f8f7e0d73831ed7",
"status": "completed"
}
],
"top_p": 0.98,
"completed_at": 1775627907,
"frequency_penalty": 0.0,
"parallel_tool_calls": false,
"background": false,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5.4_mini_20260317",
"service_tier": "auto",
"id": "resp_04a644e35c4f876d0069d5ee81f9d8819499ad10d4b53bf86e",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "completed"
}
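Given a response shaped like the example above, a client can pull out the function_call items and dispatch them locally. A minimal parsing sketch (extract_function_calls is an illustrative helper, not part of the API):

```python
import json

def extract_function_calls(response: dict) -> list:
    """Return (name, parsed_arguments) for every function_call output item."""
    calls = []
    for item in response.get("output", []):
        if item.get("type") == "function_call":
            # "arguments" is a JSON-encoded string in the response
            calls.append((item["name"], json.loads(item["arguments"])))
    return calls
```

Each extracted call is then executed against your own backend, and its result is sent back to the model on the next request.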
Computer use with GP-5.4 and GP-5.4 mini
1) First request (start the computer loop)
The response will contain a computer_call (usually a screenshot request first, or a batch of actions[]).
curl https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses \
-H "Authorization: Bearer {YOUR_API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.4",
"input": "Please open https://example.com in a browser and tell me the page title.",
"tools": [
{ "type": "computer" }
],
"parallel_tool_calls": false
}'
2) Execute the actions, then return a screenshot (critical)
Assume you have already:
- executed the actions[] returned by the model
- captured the latest screen and encoded it as BASE64_PNG
Then continue the same session:
curl https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses \
-H "Authorization: Bearer {YOUR_API_KEY}" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5.4",
"previous_response_id": "resp_xxx",
"input": [
{
"type": "computer_call_output",
"call_id": "call_xxx",
"output": {
"type": "input_image",
"image_url": "data:image/png;base64,BASE64_PNG",
"detail": "original"
}
}
],
"tools": [
{ "type": "computer" }
],
"parallel_tool_calls": false
}'
3) Repeat until no computer_call is returned
- Execute the actions[]
- Take another screenshot
- Send another computer_call_output
Repeat until the output becomes a plain-text result / final answer.
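The three steps above can be sketched as a single loop. Here, send (your HTTP call to /responses) and run_actions (your own executor that performs the actions and returns a base64 screenshot) are stand-in callables, not part of the API:

```python
from typing import Callable

def computer_loop(send: Callable[[dict], dict],
                  run_actions: Callable[[dict], str],
                  first_request: dict,
                  max_rounds: int = 20) -> dict:
    """Drive the build-run-verify-fix loop: keep executing computer_call
    items and feeding back screenshots until the model stops asking."""
    response = send(first_request)
    for _ in range(max_rounds):
        calls = [o for o in response.get("output", [])
                 if o.get("type") == "computer_call"]
        if not calls:
            return response  # plain-text result / final answer
        call = calls[0]
        screenshot_b64 = run_actions(call)  # execute actions, grab screen
        response = send({
            "model": "gpt-5.4",
            "previous_response_id": response["id"],
            "input": [{
                "type": "computer_call_output",
                "call_id": call["call_id"],
                "output": {
                    "type": "input_image",
                    "image_url": f"data:image/png;base64,{screenshot_b64}",
                    "detail": "original",
                },
            }],
            "tools": [{"type": "computer"}],
            "parallel_tool_calls": False,
        })
    raise RuntimeError("computer loop did not converge")
```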
background (background mode)
Both GP-5.4 mini and GP-5.4 nano support background mode.
An example background asynchronous request:
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
"background": true,
"model": "gpt-5.4-nano",
"input": "askA:answerA."
}'
An example of the response result is as follows:
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "none"
},
"created_at": 1773899632,
"store": true,
"tools": [],
"output": [],
"top_p": 0.98,
"frequency_penalty": 0.0,
"parallel_tool_calls": true,
"background": true,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5.4_nano_20260317",
"service_tier": "auto",
"id": "resp_0bd529239ed0ff590069bb8f70a8448193976feb98de696212",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "queued"
}
An example of obtaining the result of an asynchronous request is as follows:
curl --location --request GET 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/responses/resp_0bd529239ed0ff590069bb8f70a8448193976feb98de696212' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{}'
Return Example
{
"top_logprobs": 0,
"metadata": {},
"presence_penalty": 0.0,
"reasoning": {
"effort": "none"
},
"usage": {
"input_tokens_details": {
"cached_tokens": 0
},
"input_tokens": 24,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 352,
"output_tokens": 328
},
"created_at": 1775629269,
"store": true,
"tools": [],
"content_filters": [
{
"content_filter_results": {
"self_harm": {
"severity": "safe",
"filtered": false
},
"jailbreak": {
"filtered": false,
"detected": false
},
"hate": {
"severity": "safe",
"filtered": false
},
"sexual": {
"severity": "safe",
"filtered": false
},
"violence": {
"severity": "safe",
"filtered": false
}
},
"content_filter_offsets": {
"end_offset": 870,
"start_offset": 0,
"check_offset": 0
},
"source_type": "prompt",
"content_filter_raw": [],
"blocked": false
}
],
"output": [
{
"phase": "final_answer",
"role": "assistant",
"type": "message",
"content": [
{
"annotations": [],
"type": "output_text",
"logprobs": [],
"text": "Aye, they be *optional* in JavaScript, matey—mostly.\n\n- **Semicolons (`;`) aren’t required** because JavaScript has *automatic semicolon insertion* (ASI). If the parser can figure out where a statement ends, it’ll chuck in a semicolon for ye.\n- **But they’re not optional in every case**, because ASI can’t always read yer mind—and that’s when bugs be born.\n\n### When semicolons matter (gotchas)\nIf ye skip ’em, ASI can sometimes insert them in the “wrong” spot. Two classic troublemakers:\n\n1) **Line breaks before `(`, `[` , `+`, `-`, etc.**\n```js\nreturn\n{\n name: \"Jack\"\n}\n```\nThat can turn into something like:\n```js\nreturn; \n{\n name: \"Jack\"\n}\n```\n\n2) **Starting a line with `(` or `[` after a statement**\n```js\nlet x = 1\n(2 + 3)\n```\nASI might treat `(2 + 3)` as a new statement, not part of the previous expression.\n\n### The practical rule o’ thumb\n- **You can omit semicolons**, especially if ye use a linter/formatter like **Prettier** and follow consistent style.\n- Still, **adding semicolons often makes code more predictable**, and many folks keep ’em for safety.\n\nIf ye tell me what style guide ye follow (Airbnb? Standard? Prettier?), I can recommend the usual approach."
}
],
"id": "msg_06b47ab625f269470069d5f3deaa88819488b5ff47f694bf40",
"status": "completed"
}
],
"top_p": 0.98,
"completed_at": 1775629294,
"frequency_penalty": 0.0,
"parallel_tool_calls": true,
"background": true,
"temperature": 1.0,
"tool_choice": "auto",
"model": "MaaS_GP_5.4_nano_20260317",
"service_tier": "auto",
"id": "resp_06b47ab625f269470069d5f3d5d2b8819494932d297a151aa5",
"text": {
"format": {
"type": "text"
},
"verbosity": "medium"
},
"truncation": "disabled",
"object": "response",
"status": "completed"
}
Note: Session Persistence Policy
The GP Responses API involves multi-turn conversations, and cross-account access can cause request failures. To avoid this, add an X-Conversation-Id header to each request, with the conversation ID as its value; the ID must be identical for every request in the same conversation. Session persistence is also time-limited, with a maximum duration of 30 minutes.
Responses request
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{your endpoint}/responses' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'X-Conversation-Id: conversation-test1' \
--header 'Content-Type: application/json' \
--data-raw '{
"input": "Tell me another brain teaser with the answer. Return all of the following together.",
"previous_response_id": "resp_04825fd1c08c48820069c12ebdd7948190ac42abb8234f1a83",
"stream": true
}'
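One way to keep the header consistent is to build it once per conversation and reuse it on every call; conversation_headers is an illustrative helper (conversation-test1 mirrors the curl examples):

```python
import uuid

def conversation_headers(api_key: str, conversation_id: str = "") -> dict:
    """Headers to reuse on every request within one conversation.

    The X-Conversation-Id must stay identical across turns; persistence
    lasts at most 30 minutes.
    """
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # Generate a fresh ID when none is supplied.
        "X-Conversation-Id": conversation_id or f"conv-{uuid.uuid4()}",
    }
```

Pass the same headers dict to every requests.post in the conversation, whether it targets /responses or /chat/completions.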
Chat Completions request
curl --location --request POST 'https://genaiapi.cloudsway.net/v1/ai/{endpointPath}/chat/completions' \
--header 'Authorization: Bearer {YOUR_API_KEY}' \
--header 'X-Conversation-Id: conversation-test1' \
--header 'Content-Type: application/json' \
--data-raw '{
"messages": [
{
"role":"user",
"content":"hi"
}
]
}'