LLM Prompt Cost Estimator
Paste your prompt to estimate token count and costs across 14+ AI models. Compare prices instantly and find the most cost-effective option.
Cost Comparison Across Models
Sorted by monthly cost (cheapest first)
| Model | Provider | Per Request | Monthly |
|---|---|---|---|
Gemini 2.0 Flash | Google | $0.000200 | $0.60
GPT-4o Mini | OpenAI | $0.000300 | $0.90 |
DeepSeek V3 | DeepSeek | $0.000550 | $1.65 |
Claude 3.5 Haiku | Anthropic | $0.002000 | $6.00 |
o1-mini | OpenAI | $0.002200 | $6.60 |
o3-mini | OpenAI | $0.002200 | $6.60 |
Gemini 1.5 Pro | Google | $0.002500 | $7.50
Mistral Large | Mistral | $0.003000 | $9.00 |
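The monthly column appears to follow directly from the per-request column at a fixed request volume. A minimal sketch, assuming 3,000 requests per month (an assumed volume, chosen because it reproduces the table's figures):

```python
# Reconstruct the table's monthly cost from its per-request cost.
# REQUESTS_PER_MONTH is an assumption that matches the table above.
REQUESTS_PER_MONTH = 3_000

per_request = {
    "Gemini 2.0 Flash": 0.000200,
    "GPT-4o Mini": 0.000300,
    "DeepSeek V3": 0.000550,
    "Claude 3.5 Haiku": 0.002000,
}

for model, cost in per_request.items():
    monthly = cost * REQUESTS_PER_MONTH
    print(f"{model}: ${monthly:.2f}/month")
```

For example, Gemini 2.0 Flash at $0.000200 per request works out to 3,000 × $0.000200 = $0.60 per month, matching the first row.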
How Token Counting Works
What is a token?
A token is roughly 4 characters of English text, or about 0.75 words. LLM APIs charge based on the number of input and output tokens processed.
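The 4-characters-per-token rule of thumb is easy to apply directly. A minimal sketch of that heuristic (a rough estimate only, not a real tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token heuristic."""
    return max(1, round(len(text) / 4))

# A 40-character prompt estimates to about 10 tokens.
print(estimate_tokens("Summarize this article in three bullets."))
```

For precise counts you would use the provider's own tokenizer, since real tokenization splits on subword units, not fixed character windows.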
Why do input and output have different prices?
Output tokens require more computation (generation) than input tokens (just reading). Output typically costs 2-5x more than input across most providers.
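Because input and output are priced separately, a request's cost is the sum of two products. A minimal sketch with made-up illustrative rates (not any provider's real prices), where output is priced at 4x input:

```python
# Hypothetical rates for illustration only -- not real provider pricing.
INPUT_PRICE_PER_M = 0.15    # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 0.60   # $ per 1M output tokens (4x input here)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: each side billed at its own per-token rate."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# A 1,000-token prompt with a 500-token reply:
cost = request_cost(1_000, 500)
```

Even though the reply is half the length of the prompt here, it contributes twice as much to the bill, which is why trimming verbose outputs often saves more than trimming prompts.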
How accurate is this estimate?
This tool uses a 4-character approximation. Actual tokenization varies by model, and counts may differ by 10-20%. For precise counts, use the provider's tokenizer.
How can I reduce my LLM costs?
Use shorter prompts, choose cheaper models for simple tasks, implement caching, and use batch APIs when possible. Our ROI calculator can help estimate savings.