# SDK Reference

## Anthropic Provider

Drop-in replacement for `anthropic.Anthropic`. Change one import; everything else is identical.
### Usage

```python
# Before
from anthropic import Anthropic

# After
from kostrack import Anthropic

client = Anthropic(
    tags={
        "project": "openmanagr",
        "feature": "invoice-extraction",
        "environment": "production",
    }
)

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "..."}],
)
```
### Constructor parameters

| Parameter | Type | Description |
|---|---|---|
| `tags` | `dict` | Attribution tags applied to every call from this client. |
| `pricing_model` | `str` | `"per_token"` (default) or `"batch"`; affects cost calculation. |
| `**anthropic_kwargs` | any | All other kwargs are passed directly to `anthropic.Anthropic()`: `api_key`, `base_url`, etc. |
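The kwarg passthrough described above can be sketched as a simple split between kostrack-specific keys and everything else. The names `KOSTRACK_KEYS` and `split_client_kwargs` below are hypothetical, used only for illustration; they are not part of kostrack's API:

```python
# Illustrative sketch: separate wrapper-specific constructor kwargs from
# those forwarded verbatim to anthropic.Anthropic(). Names are hypothetical.
KOSTRACK_KEYS = {"tags", "pricing_model"}

def split_client_kwargs(kwargs: dict) -> tuple[dict, dict]:
    """Split kwargs into (kostrack-specific, forwarded-to-anthropic)."""
    ours = {k: v for k, v in kwargs.items() if k in KOSTRACK_KEYS}
    theirs = {k: v for k, v in kwargs.items() if k not in KOSTRACK_KEYS}
    return ours, theirs

ours, theirs = split_client_kwargs({
    "tags": {"project": "openmanagr"},
    "pricing_model": "batch",
    "api_key": "sk-...",          # forwarded to anthropic.Anthropic()
    "base_url": "https://example.invalid",  # forwarded as well
})
# ours keeps tags/pricing_model; theirs keeps api_key/base_url
```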
### Per-call tag override

Override tags for a specific call without changing the client:

```python
client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=512,
    messages=[...],
    kostrack_tags={"feature": "override-for-this-call"},
)
```
### Supported models and pricing
| Model | Input / 1M tokens | Output / 1M tokens | Cache read / 1M tokens |
|---|---|---|---|
| claude-sonnet-4-6 | $3.00 | $15.00 | $0.30 |
| claude-opus-4-6 | $15.00 | $75.00 | $1.50 |
| claude-haiku-4-5-20251001 | $0.80 | $4.00 | $0.08 |
Batch API pricing is 50% of standard rates on all models. Cache write tokens are priced at 125% of the input rate.
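The rates above imply a straightforward per-call cost formula: sum each token class times its per-million rate, price cache writes at 125% of the input rate, then halve everything for batch calls. A sketch of that arithmetic (the function and table names here are illustrative, not kostrack internals):

```python
# Per-million-token USD rates from the table above: (input, output, cache_read)
RATES = {
    "claude-sonnet-4-6": (3.00, 15.00, 0.30),
    "claude-opus-4-6": (15.00, 75.00, 1.50),
    "claude-haiku-4-5-20251001": (0.80, 4.00, 0.08),
}

def call_cost(model, input_tokens, output_tokens,
              cache_read=0, cache_write=0, batch=False):
    """USD cost for one call. Cache writes bill at 125% of the input
    rate; batch calls bill at 50% of all standard rates."""
    in_rate, out_rate, read_rate = RATES[model]
    usd = (input_tokens * in_rate
           + output_tokens * out_rate
           + cache_read * read_rate
           + cache_write * in_rate * 1.25) / 1_000_000
    return usd * 0.5 if batch else usd

# 1M input + 1M output on Sonnet: $3 + $15 = $18.00; $9.00 under batch pricing
```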
### Token breakdown

Anthropic-specific token counts are stored in the `token_breakdown` JSONB column and in flat columns:

| Field | Description |
|---|---|
| `input_tokens` | Standard input tokens |
| `output_tokens` | Standard output tokens |
| `cached_tokens` | Cache read hits (flat column) |
| `token_breakdown.cache_write` | Tokens written to the prompt cache |
| `token_breakdown.thinking` | Extended thinking tokens (estimated) |
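As a sketch of how these stored fields might be derived from a Messages API `usage` payload: the input field names below (`cache_read_input_tokens`, `cache_creation_input_tokens`) follow Anthropic's usage object, while the helper name `usage_to_row` is hypothetical. Since the table marks `thinking` as estimated rather than API-reported, it is left as a placeholder here:

```python
def usage_to_row(usage: dict) -> dict:
    """Map a Messages API usage payload onto the stored columns.
    Hypothetical helper; assumes Anthropic's standard usage field names."""
    return {
        "input_tokens": usage.get("input_tokens", 0),
        "output_tokens": usage.get("output_tokens", 0),
        "cached_tokens": usage.get("cache_read_input_tokens", 0),
        "token_breakdown": {
            "cache_write": usage.get("cache_creation_input_tokens", 0),
            "thinking": 0,  # estimated elsewhere; placeholder in this sketch
        },
    }

row = usage_to_row({
    "input_tokens": 1200,
    "output_tokens": 340,
    "cache_read_input_tokens": 800,
    "cache_creation_input_tokens": 400,
})
# cached_tokens lands in a flat column; cache_write nests under token_breakdown
```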