dxflow automatically optimizes data delivery using streaming or buffered responses depending on your needs, ensuring fast response times and efficient resource usage.
The dxflow engine implements a dual-mode HTTP response system that supports both streaming (chunked) and non-streaming (buffered) responses, optimizing data delivery based on content type and client requirements.
The engine determines the response mode through three mechanisms, checked in priority order:
| Priority | Method | Example | Purpose |
|---|---|---|---|
| 1st | Query parameter | `?stream=true` | Explicit client control |
| 2nd | Accept header | `Accept: application/stream+json` | Content negotiation |
| 3rd | Default | No indicator | Non-stream (buffered) |
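For example, either of the following requests asks for a streamed response; the file-listing endpoint is borrowed from the examples further down this page:

```bash
# Explicit client control via the query parameter
curl "http://localhost/api/object/fs/?stream=true"

# Content negotiation via the Accept header
curl -H "Accept: application/stream+json" "http://localhost/api/object/fs/"
```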
dxflow chooses between two response modes:

- **Stream Mode** (progressive data transfer): chunks are written to the client as soon as they are produced.
- **Non-Stream Mode** (complete response buffering): the full result is assembled in memory before it is sent.
All responses use a unified JSON array format with typed chunks:
| Chunk Type | Payload | Purpose | Position |
|---|---|---|---|
| status | HTTP code & message | Response status | First |
| total | Result count | Total items | Optional second |
| entity | Data items | Actual content | After metadata |
Streaming mode activates automatically for operations that produce large or continuously updated output, such as large directory listings, file uploads, live workflow logs, real-time status updates, connectivity tests, and bridge connection listings.
Streaming can also be requested explicitly by adding the stream parameter to any API request:
```bash
# Enable streaming for file listings
curl "http://localhost/api/object/fs/?stream=true"

# Enable streaming for workflow logs
curl "http://localhost/api/workflow/logs?stream=true"
```
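Setting the parameter to false forces a buffered response instead, which is useful for comparing output or as a fallback when a client cannot handle chunked transfer (see the troubleshooting notes at the end of this page):

```bash
# Force a buffered (non-streaming) response
curl "http://localhost/api/object/fs/?stream=false"
```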
The web console uses streaming automatically for the same kinds of operations. In general, the two modes compare as follows:
| Aspect | Streaming | Buffered |
|---|---|---|
| Response Start | Immediate | After completion |
| Memory Usage | Very Low | Full dataset size |
| Best For | Large data, Live updates | Small queries, Simple data |
The CLI picks the appropriate mode automatically:

```bash
# Large directory listings automatically use streaming
dxflow object ls /large-directory

# File uploads show real-time progress
dxflow object upload /path/to/large-file

# Live workflow logs automatically stream
dxflow workflow logs my-workflow --follow

# Workflow status updates in real-time
dxflow workflow status

# Network connectivity testing with streaming
dxflow ping

# Bridge connection listings
dxflow bridge list
```
As a rule of thumb, use streaming for large datasets and live updates, and use buffered responses for small queries and simple data.
Both streaming and buffered responses use the same structured JSON array format with three types of chunks:
- **Status chunk**: always first; carries the HTTP code and message.
- **Total chunk**: optional result count, sent before the data when present.
- **Entity chunks**: the actual data items, sent after the metadata.
```json
[
  {
    "kind": "status",
    "payload": {
      "code": 200,
      "message": "OK"
    }
  },
  {
    "kind": "total",
    "payload": 1500
  },
  {
    "kind": "entity",
    "payload": {
      "name": "file1.txt",
      "size": 1024,
      "type": "file"
    }
  },
  {
    "kind": "entity",
    "payload": {
      "name": "file2.txt",
      "size": 2048,
      "type": "file"
    }
  }
]
```
In streaming mode these chunks are delivered progressively as they are produced; in buffered mode the complete array is assembled on the server and sent in a single response.
The payload field contains different data depending on the chunk type:
| Chunk Kind | Payload Structure | Example |
|---|---|---|
| `status` | `{code: number, message: string}` | `{code: 200, message: "OK"}` |
| `total` | `number` | `1500` |
| `entity` | `object` | `{id: 1, name: "item"}` |
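As a sketch of how a client might separate the chunk kinds, the following pipeline runs jq (a third-party JSON tool, not part of dxflow) over a buffered response from the file-listing endpoint used earlier:

```bash
# Fetch a buffered response and split it by chunk kind with jq
RESPONSE=$(curl -s "http://localhost/api/object/fs/?stream=false")

# The total chunk payload is a bare number
echo "$RESPONSE" | jq '.[] | select(.kind == "total") | .payload'

# Entity chunk payloads are the actual data objects
echo "$RESPONSE" | jq -c '.[] | select(.kind == "entity") | .payload'
```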
Error responses maintain the same structure:
```json
[
  {
    "kind": "status",
    "payload": {
      "code": 404,
      "message": "File not found"
    }
  }
]
```
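Because the status chunk always comes first, a script can inspect it before processing the rest of the array. A minimal sketch, again assuming jq is available and reusing the file-listing endpoint:

```bash
# Check the leading status chunk before touching the data
RESPONSE=$(curl -s "http://localhost/api/object/fs/?stream=false")
CODE=$(echo "$RESPONSE" | jq '.[0].payload.code')

if [ "$CODE" -ne 200 ]; then
  echo "Request failed: $(echo "$RESPONSE" | jq -r '.[0].payload.message')" >&2
  exit 1
fi
```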
Troubleshooting:

- **Streaming not working as expected**: verify the client supports chunked responses (curl and modern browsers do), use curl to test streaming behavior directly, or fall back to `?stream=false`.
- **Response appears incomplete**: request the same endpoint with `?stream=false` to compare against the fully buffered output.
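To watch chunks arrive as they are produced, disable curl's own output buffering with -N (--no-buffer); comparing against buffered mode makes the difference in response start easy to see:

```bash
# Print each chunk as soon as it arrives
curl -N "http://localhost/api/object/fs/?stream=true"

# Buffered mode returns only after the full result is assembled
curl "http://localhost/api/object/fs/?stream=false"
```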