Let me preface my comment by saying that this is very interesting work, and it's appreciated. Nevertheless, my worry with the advice given here is that it provides comfort to those unwilling to evaluate other options: platforms or frameworks that may deliver the necessary performance without as many rounds of optimization. Avoiding premature optimization and avoiding the selection of a well-fitting platform and architecture are not the same thing, though they are often confused for one another.
If you are familiar with Django, and better yet familiar with Django REST, and do not have the time to invest in considering other options (learning about, experimenting with, and taking the necessary time to properly digest), this article gives useful data and demonstrates how to reach performance levels that may be acceptable for your use-case, with some concessions (notably, caching).
However, if you are willing to evaluate other options, there are many platforms that achieve the synthetic target established by the "third bar" (25ms of server-side processing) while handling client-side concurrency, and without resorting to caching or other potentially complicating or limiting concessions.
If you are a Django developer, by all means leverage advice such as this. If you are not, consider looking to alternatives that meet your performance target out of the box. Perhaps you will find something new that you'll eventually be glad you looked at.
> If you are a Django developer, by all means leverage advice such as this. If you are not, consider looking to alternatives that meet your performance target out of the box.
Before you do any of this, consider what your performance needs actually are. I think we as developers get caught up in the performance/benchmarking game a bit too much at times. If you like Django+DRF, chances are it will be "fast enough" for your project. DRF's source is extremely easy to read through and work with, the documentation is great, and the maintainability/usability/dev-speed wins are huge perks.
Unless you have very specific, strict requirements to deliver things at the tiniest fraction of a second possible, choose what you think you can be comfortable with and worry much less about performance (within reason). The article hints at this, showing you that you're going to spend a lot more time waiting on IO than in framework land.
Since his charts show that the bulk of the processing time is due to the database connection and transfer, what other framework/language is going to eliminate that? I am sure many alternatives have faster routing, serialization, and response generation, but that doesn't seem to matter once a database is involved.
> Since his charts show that the bulk of the processing time is due to the database connection and transfer, what other framework/language is going to eliminate that? I am sure many alternatives have faster routing, serialization, and response generation, but that doesn't seem to matter once a database is involved.
Oh, it really does. Not all database drivers, nor the rest of the pipeline from the DB to your data object, are the same.
Particularly interesting is how PHP, despite ranking quite low in raw speed, climbs much higher when multiple DB queries are involved (due, I'd assume, to its heavily optimized MySQL drivers, third generation IIRC).
That will include Postgres' work, the Python Postgres driver's work, and the ORM's work. Elsewhere, we've already seen that Postgres and MySQL can provide a response to such a trivial query over Ethernet in less than 5ms, under heavy concurrent load, on commodity hardware. But his measurement is 90ms.
For the sake of argument, let's say that hardware variations mean his local Postgres instance (that is, an instance without Ethernet latency to contend with) requires 20ms to fulfill the query. Maybe his laptop is from 2006. That still leaves 70ms unaccounted for.
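A rough way to check that kind of accounting yourself is to time the driver-level query separately from the Python-side object construction that an ORM layer adds on top. A minimal sketch, using the stdlib sqlite3 module as a stand-in for the Postgres driver so it runs anywhere (the absolute numbers won't match Postgres over Ethernet; the decomposition technique is what transfers):

```python
import sqlite3
import time

# Stand-in database: sqlite3 is used here only so the sketch is
# self-contained. With psycopg2 the connect() line would change,
# but the timing decomposition is the same.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1000)])

# 1) Raw driver time: execute the query and fetch the rows, nothing else.
t0 = time.perf_counter()
rows = conn.execute("SELECT id, name FROM items").fetchall()
driver_ms = (time.perf_counter() - t0) * 1000

# 2) Python-side "ORM-ish" work: per-row object construction, the kind
# of layer an ORM adds between the driver and your data objects.
t0 = time.perf_counter()
objects = [{"id": r[0], "name": r[1]} for r in rows]
object_ms = (time.perf_counter() - t0) * 1000

print(f"driver: {driver_ms:.2f}ms, object construction: {object_ms:.2f}ms")
```

If the driver number is small and the total request time is large, the gap lives in the layers above the driver, which is exactly the 70ms question being raised here.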
For the most part it works really well. For a while, development was somewhat stagnant while the developer was looking for people to take it over, but recently (2-3 months ago) a team formed around the project and is actively maintaining it and pushing new features again.
Some good advice there, but I believe that plugging in a middleware like https://errormator.com/ would give even better insight into what is happening in your applications.
Your carousel needs to stop after a user manually clicks an arrow. I tried clicking back a slide to read it, and it just kept right on spinning. I can't even read about your features.