Score content in real time
API Integration
Evaluate any content against your preference profiles through a REST API designed for production workloads: sub-100ms p99 latency, batch endpoints, webhooks, and official SDKs for Python, TypeScript, Go, and Rust.
Key Capabilities
Sub-100ms Latency
Preference scoring runs on optimized inference infrastructure. Median response times under 20ms with p99 under 100ms, suitable for real-time content pipelines and user-facing applications.
Batch Processing
Submit up to 10,000 frames in a single batch request. Results stream back as they complete, with automatic retry and partial-failure handling built in.
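A client submitting more than the 10,000-frame limit needs to split its workload across multiple batch requests. Here is a minimal chunking sketch; the frame payload shape mirrors the usage example below, but the exact batching behavior on the server side is as documented above, not in this snippet.

```python
# Split a large frame list into batches that respect the documented
# 10,000-frame-per-request limit on the batch endpoint.
MAX_BATCH_SIZE = 10_000

def chunk_frames(frames, batch_size=MAX_BATCH_SIZE):
    """Yield successive batches of at most batch_size frames."""
    for start in range(0, len(frames), batch_size):
        yield frames[start:start + batch_size]

# Example: 25,000 frames split into batches of 10,000 / 10,000 / 5,000.
frames = [
    {"id": f"frame_{i:05d}", "url": f"https://cdn.example.com/img{i}.png"}
    for i in range(25_000)
]
batches = list(chunk_frames(frames))
```

Each batch can then be submitted independently; since results stream back as they complete, batches do not need to finish in order.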
Webhooks & Events
Subscribe to lifecycle events such as annotation completion, profile updates, and quality alerts. Webhooks deliver signed payloads with automatic retry and dead-letter queuing.
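Because webhook payloads are signed, receivers should recompute the signature over the raw request body before trusting an event. The sketch below assumes an HMAC-SHA256 hex signature delivered in a header; the actual header name and signing scheme are assumptions for illustration, so check the webhook reference for the real ones.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, raw_body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare it to the
    received signature using a constant-time comparison."""
    expected = hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Illustrative values only; in practice the secret comes from your
# dashboard and the body/signature come from the incoming request.
secret = b"whsec_example"
body = b'{"event": "annotation.completed", "profile_id": "prof_main"}'
sig = hmac.new(secret, body, hashlib.sha256).hexdigest()
```

Always verify against the raw bytes as received; re-serializing parsed JSON can change whitespace or key order and break the comparison.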
SDKs & Libraries
Official client libraries for Python, TypeScript, Go, and Rust. Each SDK provides typed models, automatic pagination, retry logic, and streaming support out of the box.
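The retry logic the SDKs provide follows the common exponential-backoff-with-jitter pattern. This is a generic sketch of that pattern, not the SDKs' actual implementation; the attempt count and delays are illustrative defaults.

```python
import random
import time

def with_retries(call, max_attempts=4, base_delay=0.25, sleep=time.sleep):
    """Invoke `call`, retrying transient failures with exponential
    backoff plus jitter. Raises the last error if all attempts fail."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise
            # Double the delay each attempt, with random jitter to
            # avoid synchronized retry storms across clients.
            sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# Simulated flaky call: fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient network error")
    return "ok"

result = with_retries(flaky, sleep=lambda _: None)
```

With an SDK this is handled for you; the sketch only shows what "retry logic out of the box" means in practice.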
Usage
curl -X POST https://api.commandagi.com/v1/score \
  -H "Authorization: Bearer $COMMANDAGI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "profile_id": "prof_main",
    "frames": [
      { "id": "frame_001", "url": "https://cdn.example.com/img1.png" },
      { "id": "frame_002", "url": "https://cdn.example.com/img2.png" }
    ],
    "dimensions": ["aesthetics", "usability"]
  }'

# Response
# {
#   "scores": [
#     { "frame_id": "frame_001", "aesthetics": 0.92, "usability": 0.78 },
#     { "frame_id": "frame_002", "aesthetics": 0.65, "usability": 0.91 }
#   ],
#   "latency_ms": 18
# }