Blog
Updates on our LLM compression models and technology.
Case Study
One of the biggest token consumers globally improved quality by removing context bloat
Processing 193B tokens per month, Pax Historia ran a 268K-vote model arena with bear-1.1 compression. Compressed models scored higher, and A/B tests showed a +5% lift in purchase amount.
February 2026
Introducing bear-1.1: Improved LLM Compression
bear-1.1 is the latest compression model from The Token Company: an improved version of bear-1 with better accuracy preservation and faster compression speeds.
February 2026
bear-1: First LLM Input Compression Model
bear-1 compresses LLM input tokens by 66% without sacrificing accuracy. Learn how semantic compression reduces AI costs by 3x.
November 2025