POST /chats-streaming

Create or continue a chat with a streaming response.
curl --request POST \
  --url https://{subdomain}.withrealm.com/api/external/alpha/chats-streaming \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "content": "<string>",
  "assistant_id": "<string>",
  "agent_id": "<string>",
  "chat_id": "123",
  "research_mode": false,
  "citation_style": "remove",
  "output_format": "markdown",
  "prompt_variables": {},
  "answer_only": false
}
'
{
  "type": "generation_tokens",
  "text": "<string>",
  "id": "<string>",
  "full": true
}
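The example above shows a single streamed event. The exact wire framing is not specified on this page; assuming each event arrives as one JSON object per line (with tolerance for SSE-style `data:` prefixes), a minimal Python consumer might look like this. The helper names are illustrative, not part of the API:

```python
import json

def parse_stream_lines(lines):
    """Parse streamed events, assuming one JSON object per non-empty line.

    The actual framing (e.g. SSE "data:" prefixes) may differ; this
    tolerates both bare JSON lines and "data: {...}" lines.
    """
    events = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith("data:"):
            line = line[len("data:"):].strip()
        events.append(json.loads(line))
    return events

def collect_answer(events):
    """Concatenate the text of generation_tokens events into the answer."""
    return "".join(
        e["text"] for e in events if e.get("type") == "generation_tokens"
    )

# Sample events shaped like the response schema on this page.
sample = [
    '{"type": "generation_tokens", "text": "Hello, ", "id": "123", "full": false}',
    '{"type": "generation_tokens", "text": "world.", "id": "123", "full": true}',
]
events = parse_stream_lines(sample)
print(collect_answer(events))  # Hello, world.
```

Check the `full` field on each event to know when the chat response is complete.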


Approval-required events

If the agent reaches an ask-first action while streaming, the stream emits a tool_approval_request event and then ends normally. Resume the paused chat with POST /chats/{chat_id}/approvals.
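A client can detect this case after the stream ends and construct the resume URL. In this sketch the `tool` field on the approval event and the `acme` subdomain are assumptions; only the event type and the approvals path come from the docs:

```python
def resume_url_if_paused(events, base_url, chat_id):
    """Return the approvals URL if the stream ended on an approval request.

    Per the docs, a paused chat is resumed with
    POST /chats/{chat_id}/approvals; returns None if no approval is pending.
    """
    for event in events:
        if event.get("type") == "tool_approval_request":
            return f"{base_url}/chats/{chat_id}/approvals"
    return None

events = [
    {"type": "generation_tokens", "text": "Preparing to send email..."},
    {"type": "tool_approval_request", "tool": "send_email"},  # payload fields assumed
]
base = "https://acme.withrealm.com/api/external/alpha"
resume = resume_url_if_paused(events, base, "123")
print(resume)  # https://acme.withrealm.com/api/external/alpha/chats/123/approvals
```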

Rate Limits

600 requests per minute.
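To stay under the 600 requests/minute cap, a client can track its own request timestamps. One possible sketch is a sliding-window limiter (the class and its interface are illustrative, not part of the API):

```python
import time
from collections import deque

class RateLimiter:
    """Client-side sliding-window limiter for the 600 requests/minute cap."""

    def __init__(self, max_requests=600, window_seconds=60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.sent = deque()  # timestamps of requests inside the window

    def acquire(self, now=None):
        """Sleep until a request slot is free, then record the request."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()
        if len(self.sent) >= self.max_requests:
            wait = self.window - (now - self.sent[0])
            time.sleep(wait)
            now += wait
            self.sent.popleft()
        self.sent.append(now)

limiter = RateLimiter()
# Call limiter.acquire() before each POST to /chats-streaming.
```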

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body

application/json
content
string
required

The content of the user message.

assistant_id
string | null

Deprecated. Use agent_id instead.

agent_id
string | null

The ID of the agent to use for the chat. You can find the ID by clicking on the agent in the Agents page.

chat_id
string

The ID of the chat to continue. If not provided, a new chat will be created.

Example:

"123"

research_mode
boolean
default:false

Enable research mode for this chat. Only works if the agent has research mode enabled ('optional' or 'always_on').

citation_style
enum<string>
default:remove

The style of citations to use. 'remove' will not show any citations, 'link' will format them as URLs.

Available options:
link,
remove

output_format
enum<string>
default:markdown

The format of the content returned. 'markdown' includes formatting, 'text' returns plain text.

Available options:
markdown,
text

prompt_variables
object

Prompt variables as input to the agent.

answer_only
boolean
default:false

When enabled for non-streaming responses, returns only the final answer text, excluding intermediate reasoning, thinking steps, and agent narration. Has no effect on streaming responses.
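The body fields above can be assembled into a request payload with a small helper. This is a sketch, not part of any SDK; the defaults mirror the documented ones, and optional fields are omitted rather than sent as null:

```python
import json

def build_chat_request(content, agent_id=None, chat_id=None,
                       research_mode=False, citation_style="remove",
                       output_format="markdown", prompt_variables=None):
    """Build the JSON body for POST /chats-streaming.

    Only `content` is required; omitting chat_id creates a new chat.
    """
    body = {
        "content": content,
        "research_mode": research_mode,
        "citation_style": citation_style,
        "output_format": output_format,
    }
    if agent_id is not None:
        body["agent_id"] = agent_id
    if chat_id is not None:
        body["chat_id"] = chat_id
    if prompt_variables:
        body["prompt_variables"] = prompt_variables
    return json.dumps(body)

payload = build_chat_request("Summarize last week's tickets", agent_id="agt_1")
```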

Response

Success

type
enum<string>
required

Partial chat

Available options:
generation_tokens

text
string
required

The content of the partial chat.

id
string

The ID of the chat.

full
boolean

Whether the chat is complete.