I run several SaaS apps on a single big OVH server. It handles 6 million non-cached requests per day. The backend stack is pretty basic: Django/Python, MySQL, Redis (pub/sub) for websockets. But the secret sauce is OpenResty. I use Lua scripts to do more sophisticated page caching (because the builtin nginx caching is so primitive), DDoS protection, handling websockets, offloading long-running requests, and routing between my unix socket upstreams. It's a poor man's Cloudflare in 1500 lines of Lua.
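
For the curious, the routing bit is tiny. A toy sketch, not my real config (upstream names and socket paths are made up):

    # nginx.conf sketch -- pick a unix-socket upstream per host
    upstream app_a { server unix:/run/app_a.sock; }
    upstream app_b { server unix:/run/app_b.sock; }

    server {
        location / {
            set $upstream "app_a";
            rewrite_by_lua_block {
                -- the real logic also consults shared dicts for
                -- DDoS counters, cache state, etc.
                if ngx.var.host == "b.example.com" then
                    ngx.var.upstream = "app_b"
                end
            }
            proxy_pass http://$upstream;
        }
    }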

The apps were made long before Docker was a thing, so they just run as regular ol' processes, locked down as much as possible with systemd magic. I originally used uwsgi as my wsgi server, but it turns out gunicorn is vastly more efficient so I use it exclusively now.
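
By "systemd magic" I mean the standard hardening directives. A stripped-down sketch (unit name, user, and paths are placeholders, not my real unit):

    # app.service -- illustrative only
    [Service]
    User=app
    ExecStart=/srv/app/venv/bin/gunicorn --workers 4 \
        --bind unix:/run/app.sock myproject.wsgi
    NoNewPrivileges=true
    PrivateTmp=true
    ProtectSystem=strict
    ProtectHome=true
    ReadWritePaths=/srv/app/media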

I run a warm standby server at Hetzner so I can route traffic there in a pinch. I have a second warm standby running at my house because I'm truly paranoid about automated account bans (despite the very innocuous nature of my business). Backups are at rsync.net.

My single point of failure is DNS. I had a good relationship with DNSMadeEasy so I was not too worried about automated bans. But they were just bought by DigiCert, so that's a problem now.

Payments are handled with Stripe and PayPal. I added PayPal (despite my hatred of the company) just because I'm scared Stripe will ban me without warning, for no reason, and won't communicate with me.

For user uploads, I have an aiohttp Python server that streams files to Wasabi and Backblaze, and caches them in nginx at the same time. So my cloud bandwidth bill is usually 0.
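
The read path can be plain nginx read-through caching in front of the Python origin (a simpler variant than populating the cache during the upload itself; paths, zone sizes, and names below are assumed):

    # repeat downloads come off local disk, not Wasabi/Backblaze
    proxy_cache_path /var/cache/nginx/uploads levels=1:2
                     keys_zone=uploads:50m max_size=200g inactive=30d;

    upstream uploads_origin { server unix:/run/uploads.sock; }  # the aiohttp server

    location /uploads/ {
        proxy_cache uploads;
        proxy_cache_valid 200 30d;
        proxy_pass http://uploads_origin;
    }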

The websocket layer is kind of wonky. Originally, I used the Python websockets asyncio library to do everything. It worked for a while, and then I had to make it multi-process to spread the load. But it was just eating resources like crazy. I decided to use OpenResty's websocket stuff to handle the connections, but I didn't want to write all the complex application logic in Lua. So I used Redis pub/sub to pass messages back and forth from OpenResty-land to a pool of (sync) Python processes. It worked much better. That said, I'm a novice with asyncio, so I could very easily be to blame for the original performance problems.
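
The OpenResty half of that bridge is small. A one-directional sketch (Redis -> browser) using lua-resty-websocket and lua-resty-redis, with the channel naming invented:

    -- content_by_lua*: relay Redis pub/sub messages to the websocket client
    local server = require "resty.websocket.server"
    local redis  = require "resty.redis"

    local ws  = assert(server:new{ timeout = 30000 })
    local red = redis:new()
    red:set_timeout(5000)
    assert(red:connect("127.0.0.1", 6379))
    assert(red:subscribe("user:" .. ngx.var.arg_uid))  -- one channel per user (assumed)

    while true do
        local msg, err = red:read_reply()  -- yields this request's coroutine only
        if not msg then
            if err ~= "timeout" then break end
        elseif msg[1] == "message" then
            ws:send_text(msg[3])  -- payload PUBLISHed by the sync Python pool
        end
    end
    ws:send_close()

The reverse direction works the same way: frames read with ws:recv_frame() get PUBLISHed to a channel the Python workers subscribe to.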

And sorry, I won't tell you the name of my apps (I don't need any more competitors!)



> I use Lua scripts to do more sophisticated page caching (because the builtin nginx caching is so primitive)

Curious to know more on this - what could you do better using Lua logic?


When a non cached request comes in to the Python layer, I set a response header with that object's modified date. Lua intercepts that response header, and stores the modification date in a shared dictionary under that object's cache_key.

When the next HTTP request comes in to view that object, I lookup the object's date in the shared dict. If the modified date is > now(), I set the bypass flag to 1, so nginx updates its cache.
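
In OpenResty terms the whole dance is a shared dict plus proxy_cache_bypass. A sketch (the header and dict names are my inventions, and I read the comparison as "newer than when the entry was cached"):

    lua_shared_dict page_meta 10m;

    location / {
        set $bypass 0;
        rewrite_by_lua_block {
            local meta = ngx.shared.page_meta
            local key = ngx.var.uri  -- the real cache_key is presumably richer
            local mtime = meta:get(key)
            local cached_at = meta:get(key .. ":cached_at")
            if mtime and cached_at and mtime > cached_at then
                ngx.var.bypass = 1  -- object changed since we cached it: refetch
            end
        }
        proxy_cache pages;
        proxy_cache_bypass $bypass;
        proxy_pass http://app;

        header_filter_by_lua_block {
            -- only record on responses that actually came from Python
            if ngx.var.upstream_cache_status == "HIT" then return end
            local mtime = tonumber(ngx.header["X-Object-Modified"])
            if mtime then
                local meta = ngx.shared.page_meta
                meta:set(ngx.var.uri, mtime)
                meta:set(ngx.var.uri .. ":cached_at", ngx.now())
            end
        }
    }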


Correct me if I'm wrong, but isn't X-Accel-Expires basically the same?

From docs:

Parameters of caching can also be set directly in the response header. This has higher priority than setting of caching time using the directive.

    The “X-Accel-Expires” header field sets caching time of a response in seconds. The zero value disables caching for a response. If the value starts with the @ prefix, it sets an absolute time in seconds since Epoch, up to which the response may be cached.
    If the header does not include the “X-Accel-Expires” field, parameters of caching may be set in the header fields “Expires” or “Cache-Control”.
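
So for example, a backend response can carry either form:

    X-Accel-Expires: 300            (cache this response for five minutes)
    X-Accel-Expires: @1893456000    (cache until that absolute Unix time)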


When a resource is cached, how do you know ahead of time how long to cache it for? If you set X-Accel-Expires to 5 minutes, but the resource is edited 3 minutes later, how do you evict the item from the nginx cache?

You can figure out where the item is in the nginx cache directory and delete it. But that is complicated by the fact that your app and nginx run as different users. Or you can send a specially crafted HTTP request to nginx assuming you have some kind of backdoor proxy_cache_bypass setup. But that's ugly too. You either have a race condition, or you have to hang your app's response until the invalidation request completes.
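
For reference, the on-disk location is deterministic: nginx names the file after the MD5 of the cache key. A sketch of the arithmetic (cache dir and key format assumed; the same few lines work from Python with hashlib):

    -- the key must match proxy_cache_key exactly
    -- (default: $scheme$proxy_host$request_uri, concatenated with no separators)
    local key  = "httpexample.com/page/42"
    local hash = ngx.md5(key)
    -- levels=1:2 => <last char of md5>/<next two chars>/<full md5>
    local path = ("/var/cache/nginx/%s/%s/%s"):format(
        hash:sub(-1), hash:sub(-3, -2), hash)
    os.remove(path)  -- this is where the different-users problem bites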

If there's another way to evict an item from the cache, I'm all ears.


Based on your words, you don't invalidate the cache anyway:

> When the next HTTP request comes in to view that object, I lookup the object's date in the shared dict. If the modified date is > now(), I set the bypass flag to 1, so nginx updates its cache.

I see it as exactly the stock behavior, and I'm asking what difference the Lua logic brings here.


Isn’t a poor man’s Cloudflare just Cloudflare? The DDoS protection and caching are free.


OpenResty has a clever answer to just about every problem.



