Providers

An honest provider matrix.

Fifteen providers ship with real usage data today. Eight more are detected but expose no usage API — we mark them so, rather than pretending. Confidence labels live on every row.

  1. Exact

    Numbers come from the vendor's own API or local logs — counted, not guessed.

  2. Estimated

    Numbers come from a local pricing table or a token estimate. Good for trends. Bad for tax audits.

  3. Unavailable

    The vendor doesn't expose this. We mark it so, instead of pretending.
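One way to picture the three labels is as a tag carried by every provider row. A minimal sketch — the type and field names here are illustrative, not OpenBurnBar's actual types:

```python
from dataclasses import dataclass
from enum import Enum


class Confidence(Enum):
    """Confidence label attached to every provider row."""
    EXACT = "exact"              # counted from the vendor API or local logs
    ESTIMATED = "estimated"      # derived from a pricing table or token estimate
    UNAVAILABLE = "unavailable"  # the vendor exposes nothing


@dataclass
class ProviderRow:
    name: str
    cost: Confidence   # how trustworthy the cost number is
    quota: bool        # whether any quota data exists at all


row = ProviderRow(name="OpenRouter", cost=Confidence.EXACT, quota=False)
print(row.cost.value)  # exact
```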

Real usage data · today

Fifteen providers ship with live data.

Each one names exactly where data comes from, what we can know about cost, and how quota is shaped. Notes capture limitations we'd rather you read than discover later.

| Provider | Where the data comes from | Cost | Quota | Credential |
| --- | --- | --- | --- | --- |
| Claude Code | Anthropic's CLI agent — reads ~/.claude/projects/**/*.jsonl · local JSONL + statusline bridge | exact | yes | none (local) |
| Codex (OpenAI CLI) | ChatGPT's coding CLI — reads ~/.codex/sessions/rollout-*.jsonl · local rollout JSONL | exact | yes | none (local), or ~/.codex/auth.json for hosted refresh |
| OpenAI | Organization-wide usage from the admin API · /v1/organization/usage/completions | exact | partial | Org admin key (sk-…) |
| GitHub Copilot | Per-seat premium-interaction + chat limits · api.github.com/copilot_internal/user | estimated | yes | GitHub OAuth or PAT (read:user) |
| Cursor | Plan usage in USD straight from Cursor's web API · cursor.com/api/usage-summary | exact | yes | WorkOS session token (auto-extracted) |
| Factory (Droid) | Plan tier + rolling 5h/7d/30d windows, lane-aware · factory.ai org subscription + local session settings | exact | partial | WorkOS browser session captured by FactoryLoginHelper |
| MiniMax | Coding Plan remaining quota per model · minimax.io coding-plan endpoint | exact | yes | Coding Plan key (sk-cp-…) |
| Z.ai (GLM) | Token + MCP limits from BigModel monitor API · api.z.ai monitor/usage/quota/limit | exact | yes | API key |
| Warp | Request credits, refresh windows, bonus grants · app.warp.dev GraphQL | exact | yes | API key (wk-…) |
| Ollama | Local models cost zero; Cloud routing optional · localhost:11434 + ollama.com (cloud) | exact | partial | none (local); Ollama Cloud API key (cloud) |
| Kimi (Moonshot) | Weekly tokens + requests from kimi.com billing service · kimi.com BillingService | exact | yes | JWT bearer from kimi.com session, or KIMI_AUTH_TOKEN |
| OpenRouter | Per-call cost in USD straight from the vendor · openrouter.ai /v1/activity | exact | no | API key (sk-or-…) |
| Anthropic Console | Org-wide messages usage report · api.anthropic.com /v1/organizations/usage_report/messages | estimated | partial | sk-ant-admin-… (org admin) |
| Aider | Local analytics — tokens only, no vendor quota · ~/.aider/analytics.jsonl | exact | no | none |
| Forge | Counts from ~/forge/.forge.db; routes through local gateway · local SQLite | estimated | no | none |
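Local sources like Claude Code's session files are just line-delimited JSON, so counting reduces to a fold over the files on disk. A minimal sketch — the usage field names below are assumptions for illustration, not Claude Code's documented schema:

```python
import json
from pathlib import Path


def count_tokens(root: Path) -> int:
    """Sum token counts across every JSONL session file under root.

    Assumes each line may carry a {"usage": {"input_tokens": N,
    "output_tokens": N}} object -- an illustrative shape, not a spec.
    """
    total = 0
    for path in root.glob("**/*.jsonl"):
        for line in path.read_text().splitlines():
            if not line.strip():
                continue
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip partial or corrupt lines rather than crash
            usage = record.get("usage") or {}
            total += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    return total
```

Because numbers built this way are read from disk rather than modeled, the rows they feed earn the exact label.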
Detection-only

Vendors that don't expose data.

These get a presence in OpenBurnBar — Installed / Not installed — but no usage rows. The vendor has to expose data for us to surface it.

| Provider | Where the data comes from | Cost | Quota |
| --- | --- | --- | --- |
| Gemini CLI | Local session files only — per-session disk tokens exist, but Google AI Studio has no quota API | unavailable | no |
| Cline | Install detection only; no usage API exposed | unavailable | no |
| Roo Code | Install detection only | unavailable | no |
| Kilo Code | Install detection only | unavailable | no |
| Augment | Install detection only | unavailable | no |
| Windsurf | Install detection only | unavailable | no |
| Goose | Install detection only | unavailable | no |
| OpenClaw | Install detection only | unavailable | no |
Caveats worth reading

Where it gets messy.

  • OpenAI & Anthropic admin keys

    Both use org admin keys (sk-… / sk-ant-admin-…), not regular API keys. Daily-granularity usage with ~24h vendor lag. We compute cost locally from a public pricing table — accurate for trends, not for tax audits.

  • Cursor & Factory

    Cursor's plan-usage and Factory's billing both rely on unofficial endpoints behind your authenticated session. Cursor's data is captured from the editor's local state DB; Factory uses a WorkOS browser session you complete via the in-app login helper. If the vendor changes shape, refresh breaks. We won't pretend it's covered by an SLA.

  • Z.ai endpoint

    The BigModel monitor/usage/quota/limit endpoint is undocumented. It works today and returns clean numbers; the vendor can change it tomorrow.

  • Warp

    Requires a spoofed User-Agent upstream — Warp's edge limiter returns 429 otherwise.

  • Gemini

    Google AI Studio has no programmatic quota API. Per-session disk tokens exist on Gemini CLI; aggregate billing requires Vertex BigQuery exports, which we don't implement today.

  • Claude Code & Codex stay self-hosted

    Their real data sources live in your local filesystem. A cloud function has no lawful way to read them without an agent on your Mac. Plus, Anthropic's policy disallows third-party hosting of Claude.ai credentials — and we agree with that line.
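The local cost computation mentioned for the OpenAI and Anthropic admin keys is just tokens times rate from a pricing table. A sketch with made-up prices — real tables change often, which is exactly why these rows are labeled estimated rather than exact:

```python
# USD per million tokens -- illustrative numbers, not current vendor pricing.
PRICING = {
    "example-model": {"input": 3.00, "output": 15.00},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated (not exact) USD cost: good for trends, not tax audits."""
    rates = PRICING[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000


print(estimate_cost("example-model", 2_000_000, 100_000))  # 7.5
```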

If your vendor is missing, file an issue.

We add providers when there's a real data source and a real user.