A URL shortener is one of those problems that looks trivial until you start thinking about what actually happens at scale. I built this to work through some real system design questions: how do you keep redirects fast under heavy traffic? How do you track analytics without slowing down the hot path?
The Problem
At its core, a URL shortener needs to do three things: store a mapping, redirect fast, and track clicks. The redirect speed is user-visible, so that has to be as fast as possible. Click tracking is important but can afford some latency. These two requirements pull in different directions if you treat them the same way.
- Redirects need to be near-instant — every added millisecond shows up in user experience
- Synchronous database writes on every redirect don't scale
- Analytics need to be accurate, but don't need to be real-time to the millisecond
The Design
The system is built around three components: FastAPI for the backend, Redis for caching and counters, and PostgreSQL as the source of truth. Redirects follow a cache-first path. Analytics are handled asynchronously by a background worker.
Request → Redis (cache hit?)
├── HIT → return long URL immediately
└── MISS → query PostgreSQL → populate cache → return URL
Click tracking → Redis counter (atomic INCR)
Background worker → flush counters to PostgreSQL in batches

The Redirect Path
When a user hits a short URL, the system checks Redis first. On a cache hit, the long URL comes back immediately with no database involvement. On a miss, it falls through to PostgreSQL, fetches the URL, populates the cache, and returns it. After the first request, subsequent redirects for the same code never touch the database.
Click Tracking
Rather than writing to PostgreSQL on every redirect, each request atomically increments a Redis counter for that short code. A background worker runs on an interval, reads those counters, flushes them to PostgreSQL in batches, and resets the Redis values. The stats endpoint combines the persisted count with whatever is still pending in Redis, so the numbers stay accurate without adding latency to the redirect path.
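The increment–flush–combine flow can be illustrated with a small sketch. Again this is an approximation under stated assumptions: `Counter` objects stand in for the Redis counters and the PostgreSQL totals, and the function names (`record_click`, `flush_counters`, `click_count`) are invented for the example. The real worker would use atomic Redis operations when resetting counters to avoid losing increments that arrive mid-flush.

```python
# Sketch of click tracking: atomic increments on the hot path, a batched
# flush in a background worker, and a stats read that combines both stores.
from collections import Counter

pending = Counter()    # stands in for Redis per-code counters (INCR)
persisted = Counter()  # stands in for PostgreSQL click totals

def record_click(short_code: str) -> None:
    """Hot path: one counter increment, no database write."""
    pending[short_code] += 1  # a Redis INCR in the real system

def flush_counters() -> None:
    """Background worker: move pending counts to persistent storage in a batch."""
    for code, count in list(pending.items()):
        persisted[code] += count
        del pending[code]  # reset; the real worker must do this atomically

def click_count(short_code: str) -> int:
    """Stats endpoint: persisted total plus whatever hasn't flushed yet."""
    return persisted[short_code] + pending[short_code]
```

Because `click_count` sums both stores, the number a user sees is the same before and after a flush; the flush only changes where the count lives, not its value.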
Web UI
I added a lightweight frontend using HTML, CSS, and vanilla JavaScript so the system is easy to demo and validate. You can create shortened URLs with optional custom aliases, and the stats panel auto-refreshes to show click counts without a manual reload.
Deployment
Everything runs in Docker Compose: the API, Redis, PostgreSQL, and the background worker. One command to bring it all up locally.
docker compose up --build
# → API: http://localhost:8000
# → Docs: http://localhost:8000/docs
# → Dashboard: http://localhost:8000/ui
What I Learned
- Cache-first reads change everything. Once the cache warms up, almost no redirects touch the database, and the latency difference between an in-memory lookup and a database round trip is significant.
- Decouple your write paths. Moving click tracking off the redirect path and into a background worker is a pattern that shows up constantly in production systems.
- Simple problems are good vehicles for real design thinking. There's nothing novel about a URL shortener, but working through the tradeoffs forces you to actually reason about the architecture.