Streaming
dxflow automatically chooses between streaming and buffered responses based on the request, balancing fast response times with efficient resource usage.
Architecture Overview
The dxflow engine implements a sophisticated dual-mode HTTP response system supporting both streaming (chunked) and non-streaming (buffered) responses. This architecture optimizes data delivery based on content type and client requirements.
How It Works
Response Mode Detection
The engine determines response mode through multiple mechanisms:
| Priority | Method | Example | Purpose |
|---|---|---|---|
| 1st | Query Parameter | ?stream=true | Explicit client control |
| 2nd | Accept Header | Accept: application/stream | Content negotiation |
| 3rd | Default | No indicator | Non-stream (buffered) |
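The detection order above can be sketched as follows. The function name and dictionary-based inputs are illustrative, not dxflow's internal implementation; the point is the priority: an explicit query parameter always wins, the Accept header is consulted next, and buffered mode is the default.

```python
def wants_stream(query_params: dict, headers: dict) -> bool:
    # 1st priority: explicit ?stream=true / ?stream=false query parameter
    stream = query_params.get("stream")
    if stream is not None:
        return stream.lower() == "true"
    # 2nd priority: Accept header content negotiation
    if "application/stream" in headers.get("Accept", ""):
        return True
    # 3rd priority: default to buffered (non-stream) mode
    return False

print(wants_stream({"stream": "true"}, {}))                # → True  (explicit override)
print(wants_stream({}, {"Accept": "application/stream"}))  # → True  (negotiated)
print(wants_stream({}, {}))                                # → False (default buffered)
```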
Automatic Mode Selection
dxflow intelligently chooses between two response modes:
Stream Mode
Progressive Data Transfer
- Uses HTTP chunked transfer encoding
- Sends data immediately without buffering
- Memory efficient: O(1) memory per chunk
- Optimal for large datasets and real-time updates
- Perfect for logs, file listings, SSE events
Non-Stream Mode
Complete Response Buffering
- Assembles full response in memory
- Sends complete payload in single operation
- Provides Content-Length header
- Allows error recovery before transmission
- Better for small queries and transactional data
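The memory trade-off between the two modes can be illustrated with a generator versus a list. This is a conceptual sketch, not dxflow code: streaming keeps one item in flight at a time, while buffering holds the whole payload before anything is sent.

```python
def stream_items(items):
    for item in items:          # O(1) memory: one item in flight at a time
        yield item              # each item is flushed to the client immediately

def buffer_items(items):
    return list(items)          # O(n) memory: entire payload assembled first

data = (f"entry-{i}" for i in range(3))
print(next(stream_items(data)))   # → entry-0 (first item available immediately)
```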
Response Structure
All responses use a unified JSON array format with typed chunks:
| Chunk Type | Payload | Purpose | Position |
|---|---|---|---|
| status | HTTP code & message | Response status | First |
| total | Result count | Total items | Optional second |
| entity | Data items | Actual content | After metadata |
When Streaming Activates
Streaming mode activates automatically for:
- Large file directory listings
- Workflow execution logs
- Real-time status updates
Using Streaming
Enable Streaming Manually
Add the stream parameter to any API request:
# Enable streaming for file listings
curl "http://localhost/api/object/fs/?stream=true"
# Enable streaming for workflow logs
curl "http://localhost/api/workflow/logs?stream=true"
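From a script, the same parameter can be appended to any endpoint URL. The helper below is illustrative (not part of dxflow's client library); it only shows how `stream=true` or `stream=false` composes with existing query strings.

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

def with_stream(url: str, enabled: bool = True) -> str:
    """Append stream=true/false to an API URL, preserving existing parameters."""
    scheme, netloc, path, query, frag = urlsplit(url)
    extra = urlencode({"stream": "true" if enabled else "false"})
    query = f"{query}&{extra}" if query else extra
    return urlunsplit((scheme, netloc, path, query, frag))

print(with_stream("http://localhost/api/object/fs/"))
# → http://localhost/api/object/fs/?stream=true
```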
In the Web Interface
The web console automatically uses streaming for:
- Large directory browsing
- File search results
- Bulk file operations
- Archive contents
- Live log streaming
- Container status updates
- Resource usage metrics
- Execution progress
- Bridge connection lists
- Proxy status monitoring
- Shell session management
- System health checks
Performance Benefits
Streaming vs Buffered
| Aspect | Streaming | Buffered |
|---|---|---|
| Response Start | Immediate | After completion |
| Memory Usage | Very Low | Full dataset size |
| Best For | Large data, Live updates | Small queries, Simple data |
Common Use Cases
File Management
# Large directory listings automatically use streaming
dxflow object ls /large-directory
# File uploads show real-time progress
dxflow object upload /path/to/large-file
Workflow Monitoring
# Live workflow logs automatically stream
dxflow workflow logs my-workflow --follow
# Workflow status updates in real-time
dxflow workflow status
System Administration
# Network connectivity testing with streaming
dxflow ping
# Bridge connection listings
dxflow bridge list
Best Practices
When to Use Streaming
- File directory with >100 items
- Live log monitoring
- Long-running operations
- Real-time dashboards
- Large data exports
When to Use Buffered
- Quick status checks
- Small configuration queries
- Single file information
- Simple API calls
- Mobile applications
Response Format
Both streaming and buffered responses use the same structured JSON array format with three types of chunks:
Chunk Types
Status Chunk
Always First
- Contains HTTP status code and message
- Indicates success or error state
- Required in every response
Total Chunk
Optional Count
- Provides total number of items
- Helps with progress tracking
- Useful for pagination
Entity Chunk
Actual Data
- Contains the real response data
- Multiple entities for lists
- Each item wrapped individually
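The three chunk types combine naturally for progress tracking: once a total chunk has arrived, each entity chunk advances a percentage. The helper below is hypothetical client-side code, assuming chunks arrive as parsed dictionaries in document order.

```python
def track_progress(chunks):
    """Yield percent-complete after each entity chunk, once total is known."""
    total, seen = None, 0
    for chunk in chunks:
        if chunk["kind"] == "total":
            total = chunk["payload"]
        elif chunk["kind"] == "entity":
            seen += 1
            if total:
                yield round(100 * seen / total)

sample = [
    {"kind": "status", "payload": {"code": 200, "message": "OK"}},
    {"kind": "total", "payload": 4},
    {"kind": "entity", "payload": {"name": "a"}},
    {"kind": "entity", "payload": {"name": "b"}},
]
print(list(track_progress(sample)))   # → [25, 50]
```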
Example Response Structure
[
{
"kind": "status",
"payload": {
"code": 200,
"message": "OK"
}
},
{
"kind": "total",
"payload": 1500
},
{
"kind": "entity",
"payload": {
"name": "file1.txt",
"size": 1024,
"type": "file"
}
},
{
"kind": "entity",
"payload": {
"name": "file2.txt",
"size": 2048,
"type": "file"
}
}
]
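A buffered response like the one above can be split back into its typed parts with standard JSON parsing. This is a client-side sketch (the body is copied from the example response):

```python
import json

body = '''
[
  {"kind": "status", "payload": {"code": 200, "message": "OK"}},
  {"kind": "total",  "payload": 1500},
  {"kind": "entity", "payload": {"name": "file1.txt", "size": 1024, "type": "file"}},
  {"kind": "entity", "payload": {"name": "file2.txt", "size": 2048, "type": "file"}}
]
'''

chunks = json.loads(body)
status   = next(c["payload"] for c in chunks if c["kind"] == "status")
total    = next((c["payload"] for c in chunks if c["kind"] == "total"), None)  # optional
entities = [c["payload"] for c in chunks if c["kind"] == "entity"]

print(status["code"], total, len(entities))   # → 200 1500 2
```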
Response Flow
Progressive Delivery
- Status chunk sent immediately
- Total chunk sent if known
- Entity chunks sent as processed
- Each chunk flushed to client instantly
Complete Assembly
- All chunks collected in memory
- Complete response built
- Entire response sent at once
- Client receives full JSON array
Payload Contents
The payload field contains different data depending on the chunk type:
| Chunk Kind | Payload Structure | Example |
|---|---|---|
| status | {code: number, message: string} | {code: 200, message: "OK"} |
| total | number | 1500 |
| entity | object | {id: 1, name: "item"} |
Error Handling
Error responses maintain the same structure:
[
{
"kind": "status",
"payload": {
"code": 404,
"message": "File not found"
}
}
]
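Because errors reuse the same chunk structure, a client can check the status chunk before looking for entities. A minimal sketch, using the error body shown above:

```python
import json

body = '[{"kind": "status", "payload": {"code": 404, "message": "File not found"}}]'

status = next(c["payload"] for c in json.loads(body) if c["kind"] == "status")
if status["code"] >= 400:
    # No entity chunks will follow; surface the error to the caller
    print(f"request failed: {status['code']} {status['message']}")
```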
Troubleshooting
If Streaming Seems Slow
- Check network conditions - Streaming needs stable connections
- Verify client support - Some tools don't handle streaming well
- Try buffered mode - Add ?stream=false to compare
- Check server load - High activity can affect streaming performance
Common Issues
Response appears incomplete:
- Client may not support chunked encoding
- Try with curl or a modern browser
- Use ?stream=false as a fallback
Streaming not working as expected:
- Verify endpoint supports streaming
- Check client HTTP library compatibility
- Use curl to test streaming behavior directly