Upstash Solved My Dumbest Problem
February 24, 2026 · 770 words · 4 min read
I spent a weekend building rate limiting from scratch and then deleted all of it.
I have a small app, a bookmark manager I built for myself and a few friends, that started getting hammered by bots. It's not a lot of traffic, maybe a few hundred requests a minute, but it was enough that my API routes on Vercel were burning through function invocations while the bots scraped every public endpoint I'd left exposed. The sensible thing would have been to add rate limiting. What I actually did was spend an entire Saturday building rate limiting from scratch with a Map in memory.
That didn't work. Vercel functions are stateless, so the Map resets on every cold start, which means the rate limiter forgets everyone the moment the function spins down. I knew this going in! I just thought I could work around it with some clever timing. I could not. The bots kept hitting, the Map kept resetting, and I sat there watching my usage dashboard climb while my "rate limiter" functioned as a very fancy welcome mat.
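For the curious, the doomed version looked roughly like this (names are my reconstruction, not the exact code I deleted): a Map of IP to timestamps, pruned to a rolling window.

```typescript
// Naive in-memory rate limiter: IP -> array of request timestamps.
// Works fine on a long-lived server; useless on serverless, because a
// cold start creates a fresh module scope and `hits` starts empty.
const hits = new Map<string, number[]>();

function allow(ip: string, limit = 30, windowMs = 60_000): boolean {
  const now = Date.now();
  // Keep only timestamps still inside the window, then record this hit.
  const recent = (hits.get(ip) ?? []).filter((t) => now - t < windowMs);
  recent.push(now);
  hits.set(ip, recent);
  return recent.length <= limit;
}
```

On a single process this correctly blocks the 31st request in a minute. On Vercel, every cold start is a brand-new `hits` Map, so the counter never survives long enough to block anyone.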
The next idea was to use my existing PostgreSQL database to track request counts per IP. I wrote a quick table, an upsert query with a timestamp window, and a check at the top of each route. It worked, technically, but it added a database round trip to every single request, which made everything noticeably slower. A tool I built specifically because I wanted it fast was now slow because I was protecting it from being fast for bots. The irony was not lost on me.
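Roughly what that looked like (the table and column names here are illustrative reconstructions, not my original schema): a fixed-window counter maintained by a single upsert, with a threshold check in the route.

```typescript
// Fixed-window counter in Postgres: one upsert per request, window
// resets after 60 seconds. Schema and names are illustrative.
const UPSERT = `
  INSERT INTO rate_limits (ip, count, window_start)
  VALUES ($1, 1, now())
  ON CONFLICT (ip) DO UPDATE SET
    count = CASE
      WHEN rate_limits.window_start < now() - interval '60 seconds'
      THEN 1 ELSE rate_limits.count + 1 END,
    window_start = CASE
      WHEN rate_limits.window_start < now() - interval '60 seconds'
      THEN now() ELSE rate_limits.window_start END
  RETURNING count;
`;

// The per-route check is just a threshold on the returned count.
function isAllowed(count: number, limit = 30): boolean {
  return count <= limit;
}
```

Correct enough, but every single request pays that database round trip before doing any real work, which is exactly the latency problem described above.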
I saw someone talk about Upstash on YouTube and felt the very specific annoyance that comes from realizing you've been failing to solve a solved problem. Upstash is serverless Redis with an HTTP API. It took me about fifteen minutes to tear out my Postgres rate limiter and replace it with this. There are no persistent connections to build, no infrastructure to manage, and the pay-per-request pricing scales to zero.
npm install @upstash/redis @upstash/ratelimit

I created a Redis database on the Upstash console, grabbed the REST URL and token, dropped them into my .env, and wrote the middleware.
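Redis.fromEnv() looks for the two values from the console under these exact variable names (the values below are placeholders, not real credentials):

```shell
# .env — placeholders, not real credentials
UPSTASH_REDIS_REST_URL="https://your-database.upstash.io"
UPSTASH_REDIS_REST_TOKEN="your-rest-token"
```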
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";
import { NextRequest, NextResponse } from "next/server";

const redis = Redis.fromEnv();

const ratelimit = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(30, "60 s"),
  analytics: true,
});

export default async function proxy(request: NextRequest) {
  // Vercel sets x-forwarded-for; fall back to localhost for local dev.
  const ip =
    request.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "127.0.0.1";
  const { success, limit, remaining, reset } = await ratelimit.limit(ip);
  if (!success) {
    return NextResponse.json(
      { error: "Slow down." },
      {
        status: 429,
        headers: {
          "X-RateLimit-Limit": limit.toString(),
          "X-RateLimit-Remaining": remaining.toString(),
          "X-RateLimit-Reset": reset.toString(),
        },
      },
    );
  }
  const response = NextResponse.next();
  response.headers.set("X-RateLimit-Limit", limit.toString());
  response.headers.set("X-RateLimit-Remaining", remaining.toString());
  return response;
}

export const config = {
  matcher: "/api/:path*",
};

That's it! Thirty requests per minute per IP, sliding window, running before any of my API routes even fire. The slidingWindow algorithm is smoother than a fixed window because it doesn't allow those annoying bursts right at the reset boundary. The analytics flag gives you a dashboard on Upstash's console showing blocked versus allowed requests over time, which is useful for tuning the limits as you go.
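The sliding-window idea is easy to sketch: instead of a hard counter reset, the previous window's count is weighted by how much of it still overlaps the current moment. This is a rough sketch of the approximation, with my own names, not Upstash's actual implementation:

```typescript
// Sliding-window counter approximation: the previous fixed window's
// count is scaled by its remaining overlap with the sliding window.
function slidingCount(
  prevWindowHits: number, // hits in the previous 60 s window
  currWindowHits: number, // hits so far in the current window
  elapsedMs: number,      // time elapsed in the current window
  windowMs = 60_000,
): number {
  return (prevWindowHits * (windowMs - elapsedMs)) / windowMs + currWindowHits;
}

// 10 s into the new window, 24 hits last window + 8 hits this window:
// 24 * (50/60) + 8 = 28, still under a limit of 30.
```

With a fixed window, those 24 old hits would vanish the instant the window rolled over, letting a client burst 30 fresh requests immediately; the weighted count smooths that edge out.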
For my bookmark search endpoint, which hits Meilisearch and is the most expensive route in the app, I added a tighter limit directly in the route handler.
import { Redis } from "@upstash/redis";
import { Ratelimit } from "@upstash/ratelimit";
import { NextRequest, NextResponse } from "next/server";

const redis = Redis.fromEnv();

const searchLimit = new Ratelimit({
  redis,
  limiter: Ratelimit.slidingWindow(10, "30 s"),
  prefix: "ratelimit:search",
});

export async function GET(request: NextRequest) {
  const ip =
    request.headers.get("x-forwarded-for")?.split(",")[0]?.trim() ?? "127.0.0.1";
  const { success } = await searchLimit.limit(ip);
  if (!success) {
    return NextResponse.json({ error: "Too many searches." }, { status: 429 });
  }
  const query = request.nextUrl.searchParams.get("q");
  if (!query) {
    return NextResponse.json({ error: "Missing query." }, { status: 400 });
  }
  const results = await searchBookmarks(query);
  return NextResponse.json(results);
}

The whole thing is two layers: the middleware catches broad abuse before it reaches the routes, and the per-route limiter protects the expensive operations from getting hammered even by legitimate users refreshing too fast. The prefix option keeps the keys separate so the two limiters don't interfere with each other.
I also started using Upstash Redis for a few other things that were previously awkward. The session tokens that used to live in a Postgres table now expire naturally with TTL. I could add a page view counter for my blog that doesn't need a whole database behind it. I won't, because it's embarrassing how little traffic I get, but I could. Or a cache layer for API responses that change infrequently. Each of these is a few lines of code, and the mental overhead is basically zero.
// cache an API response for 5 minutes
const cached = await redis.get(`cache:weather:${city}`);
if (cached) return NextResponse.json(cached);
const fresh = await fetchWeatherData(city);
await redis.set(`cache:weather:${city}`, fresh, { ex: 300 });
return NextResponse.json(fresh);

The part that makes all of this work for my setup is the HTTP API. Traditional Redis needs a persistent TCP connection, which is awkward in serverless: the connection either gets torn down between invocations or you have to manage a pool. Upstash uses REST instead, so you send an HTTP request and get a response back, with no surprises after a cold start. For personal projects running on Vercel, that's more or less the right trade-off.
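Under the hood, every SDK call is a plain HTTPS request: the command and its arguments become path segments, and auth is a bearer token. A minimal sketch of what a GET looks like, assuming Upstash's REST path format (the endpoint and token below are placeholders):

```typescript
// Build the REST request for a Redis GET: the command becomes a path
// segment, the key follows it, auth is a bearer token in the headers.
function upstashGet(endpoint: string, key: string, token: string) {
  return {
    url: `${endpoint}/get/${encodeURIComponent(key)}`,
    headers: { Authorization: `Bearer ${token}` },
  };
}

// Usage sketch (placeholder env vars, response shape is { result: ... }):
// const { url, headers } = upstashGet(
//   process.env.UPSTASH_REDIS_REST_URL!, "cache:weather:paris",
//   process.env.UPSTASH_REDIS_REST_TOKEN!,
// );
// const { result } = await (await fetch(url, { headers })).json();
```

No socket to keep alive, nothing to pool: the request carries everything, which is exactly why it survives cold starts without ceremony.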
The free tier covers 10,000 commands per day, which is more than my apps will ever need. I've been running it for a few weeks and I haven't paid a cent, and if I ever have to, I'll do it happily. The bots are gone (or at least blocked), the app is fast again, and I deleted about 80 lines of Postgres rate limiting code that I'm embarrassed I wrote.
It's counterintuitive at first, but sometimes the best infrastructure decision is admitting you don't need to build the infrastructure.