
LLM Prompt Cost Estimator

Paste your prompt to estimate token count and costs across 14+ AI models. Compare prices instantly and find the most cost-effective option.


Cost Comparison Across Models

Sorted by monthly cost (cheapest first)

| Model | Provider | Per Request | Monthly |
| --- | --- | --- | --- |
| Gemini 2.0 Flash (cheapest) | Google | $0.000200 | $0.60 |
| GPT-4o Mini | OpenAI | $0.000300 | $0.90 |
| DeepSeek V3 | DeepSeek | $0.000550 | $1.65 |
| Claude 3.5 Haiku | Anthropic | $0.002000 | $6.00 |
| o1-mini | OpenAI | $0.002200 | $6.60 |
| o3-mini | OpenAI | $0.002200 | $6.60 |
| Gemini 1.5 Pro | Google | $0.002500 | $7.50 |
| Mistral Large | Mistral | $0.003000 | $9.00 |

Token Summary

Input Tokens: 0
Output Tokens: 500
Total Tokens: 500

Cost with GPT-4o

Per Request: $0.005000
Daily (100 req): $0.50
Monthly: $15.00
Yearly: $180.00

Potential Savings

Switch to Gemini 2.0 Flash to save:

$14.40/mo
96% reduction
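The savings figure above is just per-request cost scaled to monthly volume. A minimal sketch of that arithmetic, using the per-request prices from the table and the tool's assumption of 100 requests/day over 30 days:

```python
def monthly_cost(per_request: float, requests_per_day: int = 100, days: int = 30) -> float:
    """Scale a per-request cost to a monthly total."""
    return per_request * requests_per_day * days

gpt4o = monthly_cost(0.005)    # GPT-4o: $15.00/mo
flash = monthly_cost(0.0002)   # Gemini 2.0 Flash: $0.60/mo
savings = gpt4o - flash
print(f"${savings:.2f}/mo, {savings / gpt4o:.0%} reduction")  # $14.40/mo, 96% reduction
```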

Track Real Costs

Burnwise shows your actual LLM spending in real-time with automatic cost tracking.

Start Free Trial

How Token Counting Works

What is a token?

A token is roughly 4 characters of English text, or about 0.75 words. LLM APIs charge based on the number of input and output tokens processed.
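The 4-characters-per-token rule of thumb is what this estimator uses. A minimal sketch (the example prompt is illustrative, not from the tool):

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters of English text per token."""
    return max(1, round(len(text) / 4))

prompt = "Summarize the quarterly report in three bullet points."
print(estimate_tokens(prompt))  # 54 characters → ~14 tokens
```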

Why do input and output have different prices?

Generating each output token requires a full forward pass through the model, while input tokens can be processed together in a single pass. As a result, output typically costs 2-5x more than input across most providers.
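Because of that gap, a request's cost depends on the input/output split, not just the total. A sketch with illustrative per-million-token rates (the $0.15 input / $0.60 output figures here are a hypothetical 4x gap, not any specific provider's price list):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_per_million: float, out_per_million: float) -> float:
    """Dollar cost of one request given separate input/output rates per 1M tokens."""
    return (input_tokens / 1e6) * in_per_million + (output_tokens / 1e6) * out_per_million

# 1,000 input tokens + 500 output tokens at $0.15/M in, $0.60/M out:
cost = request_cost(1000, 500, 0.15, 0.60)
print(f"${cost:.6f}")  # $0.000450 — the output half contributes 2/3 of the cost
```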

How accurate is this estimate?

This tool uses a 4-characters-per-token approximation. Actual tokenization varies by model, and the estimate can be off by 10-20%. For precise counts, use the provider's own tokenizer (for example, OpenAI's tiktoken library).

How can I reduce my LLM costs?

Use shorter prompts, choose cheaper models for simple tasks, implement caching, and use batch APIs when possible. Our ROI calculator can help estimate savings.