Analyze context efficiency. Set guardrails. Let agents self-optimize.
$47/day
avg. savings
62%
context reduced
2 lines
to integrate
$ npm install @neurameter/core @neurameter/openai

2 lines of code to start tracking costs across all your agents.
82% was old conversation history
Agents re-read everything they already knew. Paying to forget nothing.
No guardrails to stop it
Nothing warned you. Nothing stopped it. The bill just kept growing.
No way to fix it automatically
Existing tools show what happened. None fix it.
import OpenAI from 'openai';
import { NeuraMeter } from '@neurameter/core';
import { withMeter } from '@neurameter/openai';
const meter = new NeuraMeter({ apiKey: 'nm_xxx', projectId: 'proj_xxx' });
const openai = withMeter(new OpenAI(), meter);
// That's it. Costs are now tracked per agent, per task, per customer.
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Hello' }],
}, {
  agentName: 'SupportAgent',
  taskName: 'classify-ticket',
  customerId: 'cust_123',
});

Dashboard Trace View
[OrchestratorAgent] ─── $0.082 ─── 3,200ms
├── [ClassifierAgent] ─── $0.003 ─── gpt-4o-mini (450 tokens)
├── [ResearchAgent] ─── $0.031 ─── claude-sonnet (2,100 tokens)
└── [DraftAgent] ─── $0.048 ─── gpt-4o (3,200 tokens)
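The orchestrator's total in a trace like this is just its children rolled up. A minimal sketch of that roll-up; the `Span` shape here is illustrative, not NeuraMeter's actual trace schema:

```typescript
// Illustrative span shape; the real trace schema may differ.
interface Span {
  agent: string;
  costUsd: number; // cost of this span's own LLM calls
  children?: Span[];
}

// Total cost of a span across its whole subtree.
function totalCost(span: Span): number {
  const childTotal = (span.children ?? []).reduce(
    (sum, child) => sum + totalCost(child),
    0,
  );
  return span.costUsd + childTotal;
}

const trace: Span = {
  agent: 'OrchestratorAgent',
  costUsd: 0,
  children: [
    { agent: 'ClassifierAgent', costUsd: 0.003 },
    { agent: 'ResearchAgent', costUsd: 0.031 },
    { agent: 'DraftAgent', costUsd: 0.048 },
  ],
};

console.log(totalCost(trace).toFixed(3)); // "0.082"
```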
From passive monitoring to active optimization — choose your level of control.
See exactly what's filling your context window — system prompts, conversation history, tool results.
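Once you have a token count per category, the breakdown is a simple share calculation. A sketch with made-up counts (the category names mirror the ones above; fetching real counts from the dashboard is not shown):

```typescript
// Hypothetical token counts per context-window category.
const contextTokens: Record<string, number> = {
  systemPrompt: 1200,
  conversationHistory: 24600,
  toolResults: 4200,
};

const totalTokens = Object.values(contextTokens).reduce((a, b) => a + b, 0);

// Percentage of the context window each category occupies.
const shares = Object.fromEntries(
  Object.entries(contextTokens).map(([category, tokens]) => [
    category,
    Math.round((tokens / totalTokens) * 100),
  ]),
);

console.log(shares); // { systemPrompt: 4, conversationHistory: 82, toolResults: 14 }
```

With these numbers, conversation history is 82% of the window, matching the failure mode described earlier.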
Notify, block, or auto-optimize. You choose how aggressive to be.
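One way to picture the three modes is a per-agent budget check that escalates from a warning to a hard block. This is a local sketch of the behavior, not NeuraMeter's actual guardrail API; the threshold names are made up:

```typescript
type GuardrailAction = 'allow' | 'notify' | 'block';

// Hypothetical guardrail config: warn at one spend level, block at a higher one.
interface Guardrail {
  notifyAtUsd: number;
  blockAtUsd: number;
}

// Decide what to do before an agent's next LLM call.
function check(spendUsd: number, rail: Guardrail): GuardrailAction {
  if (spendUsd >= rail.blockAtUsd) return 'block';
  if (spendUsd >= rail.notifyAtUsd) return 'notify';
  return 'allow';
}

const rail: Guardrail = { notifyAtUsd: 5, blockAtUsd: 20 };
console.log(check(1.4, rail));  // "allow"
console.log(check(7.2, rail));  // "notify"
console.log(check(23.0, rail)); // "block"
```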
MCP server lets agents check their own cost and compress context autonomously.
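The self-optimize loop amounts to: check cost or context size, and if over budget, compress. A naive sketch of the compression half, keeping the system prompt plus the most recent turns; real optimization would summarize old history rather than drop it, and the actual MCP tools are not shown:

```typescript
interface Message {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Naive compression: keep the system prompt plus the last `keep` turns.
function compress(history: Message[], keep: number): Message[] {
  const system = history.filter((m) => m.role === 'system');
  const rest = history.filter((m) => m.role !== 'system');
  return [...system, ...rest.slice(-keep)];
}

const history: Message[] = [
  { role: 'system', content: 'You are a support agent.' },
  { role: 'user', content: 'Order status?' },
  { role: 'assistant', content: 'Shipped yesterday.' },
  { role: 'user', content: 'Can I change the address?' },
];

console.log(compress(history, 2).length); // 3: system prompt + last 2 turns
```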
Track costs per agent, per task, per customer. Know exactly who spent what.
Visualize cost flow through multi-agent hierarchies with full span traces.
Works with OpenAI, Anthropic, LangChain, CrewAI, Vercel AI SDK.
Free tier includes 10K tracked calls/month. No credit card required.