
VoxEQ Prompt API Quickstart

Introduction

VoxEQ Prompt enriches AI agents by adding real-time human context from a caller’s voice (e.g., estimated age and birth sex) so virtual agents can adapt tone, phrasing, and dialogue immediately. It deploys via API, requires only a few seconds of audio, and is designed to operate without storing PII or voiceprints. See the product overview for capability details on Prompt and demographic traits surfaced from voice bio-signals.

For VoxEQ’s privacy-by-design commitments (labels and scores only; no PII), see the AI Ethics Statement.

What Prompt returns

Prompt estimates caller demographics and returns structured labels with confidence scores that you can inject into downstream LLM prompts or routing logic.

  • Primary traits: age range, birth sex.

  • Additional traits may include height range where available.

  • Outputs are labels with confidences; VoxEQ provides labels/scores, not raw biometric templates.

Example trait labels

  • age_range: "18-25", "26-35", "36-50", "51-65", "66+"

  • birth_sex: "female", "male"

  • height_range: "short", "average", "tall" (optional)
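Before injecting labels downstream, it can help to confirm they fall within the documented sets. A minimal sketch in Python, assuming the label sets above are complete (confirm the full list with your onboarding pack):

```python
# Label sets copied from the examples above; treat anything else as unknown.
ALLOWED_LABELS = {
    "age_range": {"18-25", "26-35", "36-50", "51-65", "66+"},
    "birth_sex": {"female", "male"},
    "height_range": {"short", "average", "tall"},
}

def validate_traits(traits: dict) -> dict:
    """Keep only traits whose labels match the documented sets."""
    valid = {}
    for name, value in traits.items():
        allowed = ALLOWED_LABELS.get(name)
        if allowed and value.get("label") in allowed:
            valid[name] = value
    return valid
```

This guards routing logic against unexpected label values if the model version changes.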

Endpoints (schema-first overview)

Endpoint                   Method   Purpose
/v1/prompt/analyze         POST     Synchronous analysis. Send short audio; get demographics instantly.
/v1/prompt/jobs            POST     Create an async job for longer audio or batch processing.
/v1/prompt/jobs/{job_id}   GET      Poll job status and retrieve results.

Notes

  • Paths above are representative; your onboarding pack provides the exact base URL and versions. Authentication is via API key unless your contract specifies mTLS/IP allowlisting.

Authentication

  • API key (recommended): Include a bearer token provisioned by VoxEQ during onboarding.

  • Header: Authorization: Bearer YOUR_API_KEY

  • Enterprise options: Some deployments use mutual TLS and/or source IP allowlisting. Confirm with your VoxEQ solutions contact.

Request and response schemas

Synchronous request (JSON) — send an HTTPS-accessible audio URL or upload via multipart/form-data.

{
  "audio_url": "[your audio file URL here]",
  "metadata": {
    "caller_id": "+15551230000",
    "context": "ivr_triage_v1"
  },
  "options": {
    "start_ms": 0,
    "duration_ms": 5000
  }
}

Synchronous response (JSON) — labels with confidence scores in [0.0, 1.0].

{
  "id": "an_01hv7d9p4t3y0",
  "model": "voxeq-prompt-v1",
  "input": { "duration_ms": 4800 },
  "traits": {
    "age_range": { "label": "26-35", "confidence": 0.83 },
    "birth_sex": { "label": "female", "confidence": 0.91 },
    "height_range": { "label": "average", "confidence": 0.62 }
  },
  "created_at": "2025-09-24T15:42:13Z",
  "latency_ms": 980
}
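Since every trait carries a confidence score, you can gate which labels reach your LLM prompt. A minimal sketch; the 0.7 cutoff is an illustrative choice, not a VoxEQ recommendation:

```python
def confident_traits(response: dict, threshold: float = 0.7) -> dict:
    """Map trait name -> label for traits whose confidence clears the threshold."""
    return {
        name: t["label"]
        for name, t in response.get("traits", {}).items()
        if t.get("confidence", 0.0) >= threshold
    }
```

Applied to the sample response above, `height_range` (0.62) would be dropped while `age_range` and `birth_sex` pass.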

Async job creation

{
  "audio_url": "[your audio file URL here]",
  "callback_url": "[your callback URL here]",
  "reference_id": "session123"
}

Async job status/result

{
  "job_id": "job_01hv7e1bxx9p9",
  "status": "completed",
  "result": {
    "traits": {
      "age_range": { "label": "51-65", "confidence": 0.78 },
      "birth_sex": { "label": "male", "confidence": 0.89 }
    }
  }
}
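For jobs without a callback URL, you poll GET /v1/prompt/jobs/{job_id} until the job finishes. A minimal sketch of the loop; `fetch_status` is a caller-supplied function that performs the GET and returns the parsed JSON (injecting it keeps auth and transport details out of the loop), and the terminal statuses "completed"/"failed" are assumptions to confirm against your onboarding pack:

```python
import time

def wait_for_job(fetch_status, timeout_s: float = 120.0, poll_s: float = 2.0) -> dict:
    """Poll until the job reaches a terminal status or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while True:
        job = fetch_status()
        if job.get("status") in ("completed", "failed"):
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError("async job did not finish in time")
        time.sleep(poll_s)
```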

Error shape (illustrative)

{
  "error": {
    "code": "audio_unreadable",
    "message": "Could not decode input audio (unsupported codec)",
    "retryable": false
  }
}

Latency and audio length guidance

  • Real time: VoxEQ’s platform returns results within seconds; published benchmarks show results available by ~5 s into a call given ~4 s of audio, which is a reasonable planning baseline for Prompt as well.

  • Practical tip: Send 4–6 seconds of clean caller speech captured as early as possible in the call path.

  • Telephony audio works: Prompt is designed for contact-center conditions and language-agnostic operation.
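If you capture more audio than you need, trimming before upload keeps payloads small and matches the 4-6 second guidance. A minimal sketch for WAV input using only the standard library (telephony audio in other containers needs a different tool):

```python
import wave

def trim_wav(src_path: str, dst_path: str, duration_ms: int = 5000) -> None:
    """Copy the first duration_ms of a WAV file to dst_path."""
    with wave.open(src_path, "rb") as src:
        frames = min(src.getnframes(),
                     int(src.getframerate() * duration_ms / 1000))
        params = src.getparams()
        data = src.readframes(frames)
    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)  # header frame count is patched on close
        dst.writeframes(data)
```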

cURL quickstart (synchronous)

Send an audio URL and receive demographics.

curl -X POST \
  [Prompt analyze endpoint URL here] \
  -H "Authorization: Bearer $VOXEQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "audio_url": "[your audio file URL here]",
        "options": {"duration_ms": 5000}
      }'

Node.js quickstart

import fetch from "node-fetch";

const VOXEQ_API_KEY = process.env.VOXEQ_API_KEY;
const body = {
  audio_url: "[your audio file URL here]",
  options: { duration_ms: 5000 }
};

const res = await fetch("[Prompt analyze endpoint URL here]", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${VOXEQ_API_KEY}`,
    "Content-Type": "application/json"
  },
  body: JSON.stringify(body),
  // abort the request if no response arrives within 30 s
  signal: AbortSignal.timeout(30000)
});

if (!res.ok) throw new Error(`HTTP ${res.status}`);
const data = await res.json();
console.log(data.traits);

Python quickstart

import os, requests

API_KEY = os.environ["VOXEQ_API_KEY"]
url = "[Prompt analyze endpoint URL here]"
payload = {
    "audio_url": "[your audio file URL here]",
    "options": {"duration_ms": 5000}
}
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["traits"])

Using Prompt with an LLM

Take returned labels and enrich your virtual agent prompt or conversation state.

traits = {"age_range": "26-35", "birth_sex": "female"}
caller_ctx = f"Caller demographics: birth_sex={traits['birth_sex']}, age_range={traits['age_range']}"
llm_prompt = f"{caller_ctx}. Respond empathetically, use concise explanations, and offer installment options if cost is raised."

Guidance: Prompt reduces cold-start uncertainty so AI agents can choose tone, pacing, and examples that fit the caller cohort.

Errors, retries, and timeouts

  • Client timeouts: 30s total is a safe default for synchronous calls.

  • Retry on: 429, 502, 503, 504 with exponential backoff (e.g., 250 ms base, jittered, max 4 attempts). Avoid retrying non-retryable errors (e.g., audio_unreadable).

  • Idempotency: If your workflow risks duplicate calls (e.g., client retry after timeout), attach a stable request identifier in a custom header (e.g., X-Idempotency-Key) and dedupe in your application.
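The retry and idempotency guidance above can be sketched as one policy. `send` is a caller-supplied function that performs a single HTTP POST with the given extra headers and returns `(status_code, body)`; the `X-Idempotency-Key` header name is a convention for your own deduplication, not a documented VoxEQ parameter:

```python
import random
import time
import uuid

RETRYABLE_STATUSES = {429, 502, 503, 504}

def post_with_retries(send, max_attempts: int = 4, base_s: float = 0.25):
    """Retry retryable statuses with jittered exponential backoff.

    The same idempotency key is reused across all attempts so the
    application can dedupe if a timed-out request actually succeeded.
    """
    idempotency_key = str(uuid.uuid4())
    for attempt in range(max_attempts):
        status, body = send({"X-Idempotency-Key": idempotency_key})
        if status not in RETRYABLE_STATUSES:
            return status, body
        if attempt < max_attempts - 1:
            # full jitter: sleep uniformly in [0, base * 2^attempt]
            time.sleep(random.uniform(0, base_s * (2 ** attempt)))
    return status, body
```

Non-retryable errors (e.g. `audio_unreadable`) return immediately on the first attempt.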

Security and privacy considerations

  • No PII or voiceprints are required for Prompt to operate; VoxEQ focuses on labels and scores.

  • Language-agnostic, text-independent analysis of physiological voice bio-signals supports global deployment.

When to choose Prompt vs. Verify

  • Use Prompt to enrich AI and virtual agents with human context and to personalize scripts, tone, and routing.

  • Use Verify when you need fraud detection, watch lists, and authentication signals; Prompt can be used alongside Verify in the same call flow.

References

  • VoxEQ Prompt (demographics for AI agents, API deployment)

  • VoxEQ AI Ethics (labels and scores, privacy-by-design)

  • VoxEQ Verify (real-time analysis; latency benchmark reference)

  • Schedule a demo: contact VoxEQ for details.