We tried removing "async" -- thinking it would force sequential processing -- but it unexpectedly seemed to cause parallel processing of requests, which caused CUDA memory errors.
Before removing "async", this is the weird behavior we observed:
* Hacker blasts 50-100 requests.
* Our ML model processes each request in normal time and sequentially.
* But instead of returning each response as soon as it's ready, the server holds onto all of them, sending them only when the last request finishes (or a batch of requests finishes).
* Normally, request 1 should return in N seconds, request 2 in 2N seconds, and so on; instead, all requests returned after roughly 50N seconds (assuming a batch size of 50).
1. Any suggestions on this?
2. Mind clarifying how sync vs async endpoints work? The FastAPI docs are unclear.
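For context, here is a minimal sketch of the two handler shapes in question (this isn't our actual code; `run_model` is just a stand-in for the real inference call):

```python
import time

from fastapi import FastAPI

app = FastAPI()

def run_model(payload: dict) -> dict:
    # Stand-in for the real GPU-bound inference, which takes ~N seconds.
    time.sleep(5)
    return {"result": "ok"}

@app.post("/predict-async")
async def predict_async(payload: dict):
    # Blocking work inside an `async def` handler runs on the event loop
    # itself, so nothing else (including pending response writes) can make
    # progress until it returns.
    return run_model(payload)

@app.post("/predict-sync")
def predict_sync(payload: dict):
    # A plain `def` handler is run in FastAPI's thread pool, so several
    # requests can hit the GPU at once -- which is where the CUDA
    # out-of-memory errors can come from.
    return run_model(payload)
```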
Any chance the entire thing can be offloaded to a task queue (Celery/etc)? This would decouple the HTTP request processing from the actual ML task.
The memory errors you're seeing suggest you may not actually be able to run multiple instances of the model at once, and even if you could, it might not give you any more throughput than processing requests sequentially.
Ultimately it seems your current design can't gracefully handle a burst of concurrent requests, legitimate or malicious - that's a problem I recommend you address regardless of whether you manage to ban the malicious users.
@headlessvictim2 search for "Asynchronous Request-Reply pattern" if you want more information about this kind of architecture. It takes the bottleneck out of the API server, and you can easily scale out the workers behind the task queue.
You would still have the same underlying bottleneck, but the API request would return straight away with some sort of correlation ID. The workers that handle the GPU-bound tasks would then pull jobs when they are ready. If you get a lot of jobs, all that happens is the queue fills up and clients wait longer, hitting the status endpoint a few more times.
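A rough sketch of what that could look like with FastAPI plus Celery (the broker/backend URLs, task body, and endpoint names here are placeholders, not a drop-in implementation):

```python
from celery import Celery
from celery.result import AsyncResult
from fastapi import FastAPI

# Assumed Redis broker/backend; swap for whatever you actually run.
celery_app = Celery(
    "ml_tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@celery_app.task
def run_inference(payload: dict) -> dict:
    # The GPU-bound work lives in the worker process, not the API server.
    return {"result": "..."}

app = FastAPI()

@app.post("/jobs", status_code=202)
def submit_job(payload: dict):
    # Enqueue and return immediately with a correlation ID.
    task = run_inference.delay(payload)
    return {"job_id": task.id}

@app.get("/jobs/{job_id}")
def job_status(job_id: str):
    # Clients poll this endpoint until the job is done.
    result = AsyncResult(job_id, app=celery_app)
    if result.ready():
        return {"status": result.state, "result": result.result}
    return {"status": result.state}
```

Running the Celery worker with a concurrency of 1 would also keep only one job on the GPU at a time, which should help with the memory errors.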
Python async is co-operative multi-tasking (as opposed to pre-emptive).
There is an event loop that goes through all the tasks and runs them.
The issue is that the event loop can only move on to the next task when the current one reaches an await. So if you run a lot of code (say an ML model) between awaits, no other task can advance in the meantime.
This is why it is co-operative: it is up to each task to release the event loop, by hitting an await, so that other tasks can get work done.
This is fine when you are using async libraries that hit awaits frequently, at IO-bound points like database or HTTP calls.
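A tiny self-contained demo of that point (nothing FastAPI-specific, just plain asyncio): while the blocking call runs, the other task cannot advance:

```python
import asyncio
import time

async def heartbeat():
    # Prints once a second -- as long as the event loop is free to run it.
    for _ in range(5):
        print("heartbeat", time.strftime("%X"))
        await asyncio.sleep(1)  # yields control back to the event loop

async def fake_request():
    # Blocking call with no await: the event loop is stuck here, so the
    # heartbeats stall for the full three seconds.
    time.sleep(3)
    print("request done", time.strftime("%X"))

async def main():
    await asyncio.gather(heartbeat(), fake_request())

asyncio.run(main())
```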
FastAPI will run route handlers that are not defined as async functions on a thread pool, but it is still Python, so the GIL and all that still apply.
You should do as the sibling comment says: decouple your HTTP layer from your ML work and feed the ML with something like Celery. That way your server is always there to respond (even if just with a 429), hit a cache, or whatever else.
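For instance (a sketch, assuming a Celery setup with the default Redis broker; the limit and queue name are illustrative), the API process can shed load with a 429 based on queue depth:

```python
import redis
from fastapi import FastAPI, HTTPException

MAX_QUEUED_JOBS = 100  # illustrative limit
r = redis.Redis()      # assumed to be the same Redis that Celery uses as broker
app = FastAPI()

def queue_depth() -> int:
    # With the Redis broker, Celery keeps pending tasks in a list named
    # after the queue ("celery" by default), so its length approximates
    # the current backlog.
    return r.llen("celery")

@app.post("/jobs", status_code=202)
def submit_job(payload: dict):
    if queue_depth() >= MAX_QUEUED_JOBS:
        raise HTTPException(status_code=429, detail="Too many queued jobs")
    # ...enqueue the job as in the earlier sketch and return its ID...
    return {"job_id": "..."}
```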
The freemium service provides access to machine learning models on GPU instances, served with FastAPI.
Each request invokes a compute-intensive ML model, but perhaps there is something wrong with the FastAPI configuration as well?