VibeIQ
Private beta · v0.1

Measure the AI in your workflow.

VibeIQ is a desktop control plane for Claude Code, Cursor, Codex and the rest of your LLM stack. See the time, money, and risk your AI tools actually move — and tune the ones that do not.

Also on Windows · Linux · Local-first · Your prompts never leave your machine

Cache hit-rate
73% ↑ 12 pts
vibeiq · acme
Saved / week
2.1h
Guarded
48
$ / dev / day
$0.18
Recent activity · live
  • guarded · Blocked secret in auth.ts · 2m
  • cache · Re-warmed opus/review · 6m
  • route · Sent 14 PRs to opus-4.7 · 11m
  • skill · Installed migrate-prisma · 22m
0 secrets shipped
past 30 days

Works with everything you already use

Claude Code
Cursor
Codex
Gemini CLI
Aider
Copilot
Ollama
LM Studio

By the numbers

What teams in the beta actually moved.

Aggregate medians from 214 developers across 12 teams running VibeIQ for at least 30 days. Numbers refresh quarterly; this snapshot is from April 2026.

2.0×
Faster code review
Avg PR review: 18m → 9m
−57%
Autocomplete cost
Per dev / day spend
+35 pts
Agent task completion
Without manual rescue
0
Secrets shipped
Past 30 days, all teams

Where the time goes back

Four workflows, four numbers.

VibeIQ does not replace your editor. It tunes the four loops your team already runs every hour — and reports what it changed.

Code review

2.0× faster

Without VibeIQ: 18 min
With Opus routing + skills: 9 min

Pull-request reviews route to your highest-context model with a curated review skill. Reviewers stop re-prompting and stop re-reading.

Median PR-to-first-review time across 9.4k merged PRs.
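A minimal sketch of what task-based routing like this could look like. The model IDs mirror the `vibeiq.config.ts` shown further down this page; the `pickModel` helper and its routing table are hypothetical, not VibeIQ's actual API:

```typescript
// Hypothetical task-to-model routing table, mirroring the models
// block of the vibeiq.config.ts example on this page.
type Task = "review" | "autocomplete" | "agents";

const models: Record<Task, string> = {
  review:       "anthropic/opus-4.7",  // highest-context model for PR review
  autocomplete: "anthropic/haiku-4.5", // fast, cheap inline completions
  agents:       "local/qwen3-coder",   // long-running agent tasks stay local
};

// Resolve the model for a task, falling back to a default.
function pickModel(task: string, fallback = "anthropic/haiku-4.5"): string {
  return (models as Record<string, string>)[task] ?? fallback;
}

pickModel("review");       // "anthropic/opus-4.7"
pickModel("unknown-task"); // falls back to "anthropic/haiku-4.5"
```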

Autocomplete

−57% cost

Default routing: 0.42 $/dev/day
VibeIQ cache + Haiku: 0.18 $/dev/day

We route inline completions to a fast small model and cache the high-frequency prefixes per repo. The savings show up in the bill, not the latency.

Cost per developer per day, across the 12 beta teams.
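One way a per-repo prefix cache like this might look. The `PrefixCache` class and its FIFO eviction are illustrative only, not VibeIQ's actual implementation:

```typescript
// Illustrative per-repo completion cache keyed on prompt prefixes.
// High-frequency prefixes (imports, common signatures) hit the cache
// instead of the model, which is where the cost saving comes from.
class PrefixCache {
  private store = new Map<string, string>();

  constructor(private maxEntries = 10_000) {}

  get(prefix: string): string | undefined {
    return this.store.get(prefix);
  }

  set(prefix: string, completion: string): void {
    // Simple FIFO eviction once the cache is full; Map preserves
    // insertion order, so the first key is the oldest entry.
    if (this.store.size >= this.maxEntries) {
      const oldest = this.store.keys().next().value;
      if (oldest !== undefined) this.store.delete(oldest);
    }
    this.store.set(prefix, completion);
  }
}

const cache = new PrefixCache(2);
cache.set("import React", ' from "react";');
cache.get("import React"); // cache hit: no model call, no spend
```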

Agent tasks

+35 pts

Stock agents: 0% completion
VibeIQ skills + hooks: 0% completion

Standard hooks (pre-prompt context, post-edit format, error-recovery loop) lift your agent's first-shot success without you babysitting.

Tasks completed without a human re-prompt or rollback.
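The three standard hooks named above can be modeled as a simple pipeline that rewrites task state step by step. The `Hook` and `TaskState` shapes below are a sketch, not VibeIQ's real API:

```typescript
// Sketch of an agent hook pipeline: each hook transforms the task
// state before it moves on. Hook names mirror the ones listed above;
// the types themselves are hypothetical.
interface TaskState {
  prompt: string;
  edits: string[];
  failed: boolean;
  retries: number;
}

type Hook = (state: TaskState) => TaskState;

const prePromptContext: Hook = (s) => ({
  ...s,
  prompt: `[repo context]\n${s.prompt}`, // prepend repo context
});

const postEditFormat: Hook = (s) => ({
  ...s,
  edits: s.edits.map((e) => e.trim()), // normalize the agent's edits
});

const errorRecovery: Hook = (s) =>
  s.failed && s.retries < 2
    ? { ...s, failed: false, retries: s.retries + 1 } // retry up to twice
    : s;

const runHooks = (state: TaskState, hooks: Hook[]): TaskState =>
  hooks.reduce((s, hook) => hook(s), state);
```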

Commit guard

0 secrets

Pre-VibeIQ: 0.0 /team/week
With diff-layer guard: 0 /team/week

Diffs from any agent — Claude, Cursor, Codex, Copilot — pass through one allowlist before they reach origin. Secrets, license drift, and unsafe paths are blocked at the editor.

Secrets shipped per team per week, before vs. after rollout.
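A toy version of a diff-layer secret guard: scan the added lines of a diff for secret-shaped strings before it is allowed to reach origin. The patterns and the `guardDiff` function are illustrative, not VibeIQ's actual ruleset:

```typescript
// Toy diff-layer guard. Patterns are illustrative examples of
// secret-shaped strings, not an exhaustive or production ruleset.
const SECRET_PATTERNS = [
  /AKIA[0-9A-Z]{16}/,                                  // AWS access key id
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,                // PEM private key
  /(api[_-]?key|secret)\s*[:=]\s*["'][^"']{16,}["']/i, // inline credentials
];

function guardDiff(diff: string): { allowed: boolean; hits: string[] } {
  const hits = diff
    .split("\n")
    .filter((line) => line.startsWith("+")) // only lines being added
    .filter((line) => SECRET_PATTERNS.some((p) => p.test(line)));
  return { allowed: hits.length === 0, hits };
}

guardDiff("+const key = 'AKIAABCDEFGHIJKLMNOP';").allowed; // false: blocked
guardDiff("+const answer = 42;").allowed;                  // true: clean diff
```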

Configuration as code

One file. Every editor. Every machine.

Define your team's routing, hooks, and allowlists once. VibeIQ compiles it down to whatever each editor expects — MCP servers, .cursorrules, agent skills, you name it. The numbers you saw above are the side effect of this config landing on every developer's machine.

vibeiq.config.ts
import { defineWorkspace } from "vibeiq";

export default defineWorkspace({
  name: "acme",
  models: {
    review:       "anthropic/opus-4.7",
    autocomplete: "anthropic/haiku-4.5",
    agents:       "local/qwen3-coder",
  },
  hooks: {
    "pre-commit": ["block-secrets", "lint-diff"],
    "post-edit":  ["format"],
  },
  guard: {
    blockSecrets: true,
    requireSignoff: [".env", "infra/**"],
  },
});
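For a sense of what "compiles it down" could mean, here is a hypothetical sketch that turns the workspace config into one editor-specific artifact (a `.cursorrules` string). The output format here is invented for illustration; each editor defines its own:

```typescript
// Hypothetical "compile down" step: emit one editor-specific artifact
// from the shared workspace config. The output format is illustrative.
interface WorkspaceConfig {
  name: string;
  models: Record<string, string>;
  hooks: Record<string, string[]>;
}

function toCursorRules(cfg: WorkspaceConfig): string {
  return [
    `# Generated by VibeIQ for workspace "${cfg.name}" - do not edit by hand`,
    ...Object.entries(cfg.models).map(([task, model]) => `model.${task}: ${model}`),
    ...Object.entries(cfg.hooks).map(([event, hs]) => `hook.${event}: ${hs.join(", ")}`),
  ].join("\n");
}

const rules = toCursorRules({
  name: "acme",
  models: { review: "anthropic/opus-4.7" },
  hooks: { "pre-commit": ["block-secrets", "lint-diff"] },
});
// rules contains "model.review: anthropic/opus-4.7"
```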

Field data

30 days of VibeIQ, in aggregate.

Anonymised, opt-in. Your workspace data never leaves your machine.

12
Teams in beta
214
Developers
9.4k
PRs reviewed
0.0M
Cache hits / week

FAQ

Common questions.

Don't see what you're looking for? hi@vibeiq.dev

How are these numbers measured?

Each metric is a 30-day median across the 12 beta teams. We instrument timing, cost, and guard events locally on each machine and aggregate them via opt-in workspace sync. No prompt or code content is collected.
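For concreteness, the aggregation described here (a median over per-team values) is just the standard median, with even-length samples averaging the two middle values:

```typescript
// Median over per-team values, as used for the aggregate stats above.
function median(values: number[]): number {
  if (values.length === 0) throw new Error("median of empty sample");
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 1
    ? sorted[mid]                          // odd: middle value
    : (sorted[mid - 1] + sorted[mid]) / 2; // even: average of middle two
}

median([9, 18, 12]); // 12
```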

Does VibeIQ replace Cursor / Claude Code / Codex?

No. VibeIQ runs alongside them. It manages the operational layer — skills, hooks, cache, routing — so each editor stays focused on what it is good at.

Where does my code live?

On your machine. VibeIQ never reads your repos and never sends prompts to our servers. The web app only stores workspace settings (members, billing, allowlists).

Can I bring my own API keys?

Yes — keys live in your OS keychain, not our database. The workspace allowlist controls which providers your team can talk to.

What about local models?

First-class. Ollama and LM Studio are wired in by default. Route any task to a local model via your config.

Private beta

See your own numbers in 7 days.

Sign in with Google, Microsoft, or GitHub to claim a workspace. We'll mail you a build for your platform; the dashboard starts measuring on day one.

We never push code. We never read your repos. We never train on you.