Token Counter + LLM Cost Calculator

Count tokens for GPT-4o, Claude, Gemini, and other LLMs. Estimate API costs per request with real-time pricing.

100% client-side — your data never leaves your browser



LLM Token Counter and Cost Calculator

Count tokens and estimate API costs for GPT-4o, Claude, Gemini, and other large language models. Tokenization runs locally in your browser using tiktoken — your text stays private.

How to Use

  1. Select a model from the dropdown (grouped by provider)
  2. Paste or type text into the input area
  3. View token count and cost estimates in the stats panel

Understanding Tokens

Language models do not process text character by character. Instead, they use a tokenizer to split text into tokens — chunks of varying length that the model treats as individual units. Common English words are often a single token, while uncommon words or non-English text may be split into multiple tokens.

Tokenizers for these models use Byte Pair Encoding (BPE), which builds its vocabulary by repeatedly merging the most frequent adjacent byte pairs in its training data. Frequently occurring words and subwords therefore earn their own tokens, while rare text must be represented by more, smaller tokens.
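The merge loop at the heart of BPE can be sketched in a few lines. The merge table below is invented for illustration; real tokenizers learn tens of thousands of ranked merges from training data:

```python
def bpe_tokenize(word, merges):
    """Split a word into tokens using a ranked merge table.

    Starts from individual characters and repeatedly merges the
    adjacent pair with the best (lowest) rank until none apply.
    """
    tokens = list(word)
    while True:
        best = None
        for i in range(len(tokens) - 1):
            rank = merges.get((tokens[i], tokens[i + 1]))
            if rank is not None and (best is None or rank < best[0]):
                best = (rank, i)
        if best is None:
            return tokens
        _, i = best
        tokens = tokens[:i] + [tokens[i] + tokens[i + 1]] + tokens[i + 2:]

# Hypothetical merge table: lower rank = learned earlier = more frequent.
merges = {("t", "h"): 0, ("th", "e"): 1, ("i", "n"): 2, ("in", "g"): 3}

print(bpe_tokenize("the", merges))    # ['the'] — common word, one token
print(bpe_tokenize("thing", merges))  # ['th', 'ing'] — split into subwords
```

This is why a common English word often costs one token while rarer strings cost several: the common word's full spelling made it into the merge table, the rare one's did not.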

Pricing Notes

Prices shown are standard API rates as of early 2026. Actual costs may vary based on batch pricing, prompt caching discounts, or enterprise agreements. Most providers offer significant discounts for cached prompts (up to 90% off input costs) and batch API usage (50% off).

Input and output tokens are priced differently because generating output requires more computation than processing input. Output tokens are typically 2-5x more expensive than input tokens.
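Putting the notes above together, a per-request estimate follows directly from the token counts and per-million-token rates. The rates below are placeholders, not any provider's actual pricing; the discount factors mirror the typical caching (90% off cached input) and batch (50% off) figures mentioned above:

```python
# Placeholder rates in USD per 1M tokens (not real provider pricing).
INPUT_RATE = 2.50
OUTPUT_RATE = 10.00  # output is typically 2-5x the input rate

def estimate_cost(input_tokens, output_tokens,
                  cached_fraction=0.0, batch=False):
    """Estimate request cost in USD, optionally applying
    prompt-caching and batch-API discounts."""
    cached = input_tokens * cached_fraction
    uncached = input_tokens - cached
    cost = (uncached * INPUT_RATE
            + cached * INPUT_RATE * 0.10      # 90% off cached input
            + output_tokens * OUTPUT_RATE) / 1_000_000
    if batch:
        cost *= 0.5                           # 50% off via batch API
    return cost

# 1,200 input + 300 output tokens at standard rates:
print(estimate_cost(1_200, 300))  # 0.006
```

Note that at these rates the 300 output tokens cost as much as the 1,200 input tokens, which is why trimming verbose model output often saves more than trimming the prompt.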