
I used to like Kong a lot, but I noticed that it started underperforming when they released version 1. I blogged about that at http://glennengstrand.info/software/performance/springboot/d...



Do you mean underperforming as in proxying performance? We do know there are some code paths that need to be optimized. For 1.2 we will be working on those issues, and for many of them we already have solutions (in many cases more than one approach). A warmed-up Kong usually runs with sub-millisecond latency; the plugins can add more on top of that, but sure, there are rough edges, especially at p99.

We just need to pick the best ideas that have already been proposed, either by us or by our community. For some issues we have public pull requests in place for discussion, and some of them we have developed in close collaboration with the community or with customers. We definitely want to be lean and fast, especially now that Kong is used as a sidecar proxy in service meshes.


The load test would call Kong, which would proxy each request to the service being tested. Kong was configured with the http-log plugin, which sent performance data to another service that collected that data and then updated Elasticsearch in bulk.
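For anyone unfamiliar with that kind of setup, here is a minimal sketch (Python against Kong's Admin API) of how a service, a route, and the http-log plugin could be wired together. The service name, addresses, and collector URL below are assumptions for illustration, not the actual configuration used in the test.

    # Sketch only: load test -> Kong proxy -> service under test, with the
    # http-log plugin shipping per-request data to a collector that
    # bulk-indexes into Elasticsearch. Names/URLs are assumed, not the
    # author's real values.
    import requests

    KONG_ADMIN = "http://localhost:8001"          # Kong Admin API (assumed address)
    UPSTREAM = "http://feed:8080"                 # service under test (assumed)
    COLLECTOR = "http://perf-collector:8888/log"  # http-log sink feeding Elasticsearch (assumed)

    # 1. Register the upstream service being load tested.
    svc = requests.post(f"{KONG_ADMIN}/services",
                        data={"name": "feed", "url": UPSTREAM})
    svc.raise_for_status()

    # 2. Add a route so the load test can reach it through the Kong proxy.
    route = requests.post(f"{KONG_ADMIN}/services/feed/routes",
                          data={"paths[]": "/feed"})
    route.raise_for_status()

    # 3. Enable http-log on the service so Kong posts request/latency data
    #    to the collector endpoint.
    plugin = requests.post(
        f"{KONG_ADMIN}/services/feed/plugins",
        data={"name": "http-log", "config.http_endpoint": COLLECTOR},
    )
    plugin.raise_for_status()
    print(plugin.json())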

Last August, I tested the Dropwizard service with this setup and recorded about 20,000 RPM on GKE. This year, the same setup sent only about 4,000 RPM to the Dropwizard service, and the http-log plugin sent only about 200 RPM to the collector service.


Uh, that is a deep drop. Just thinking, could it be related to this: https://github.com/Kong/kong/commit/de4a002565a1723bff9014cb...

Definitely something for us to investigate! Thank you for the feedback (I'll add this information to our backlog).


After a large major release there can be problems in specific use cases, so I would try again with a 1.0.x or 1.1.x release. Kong prides itself on being very fast, so I would like to get in touch with you to dig into any problem you have experienced. You can find me on Twitter at @subnetmarco.


It is easy to repro this issue, as I have open-sourced the Kubernetes manifests. There are instructions at https://github.com/gengstrand/clojure-news-feed/blob/master/... on how to provision the cluster on either Amazon's or Google's cloud and spin up the services and the load test application. Be sure to create the kong service (currently commented out) instead of the proxy service (which replaced it), and use the feed3-deployment.yaml manifest instead of feed-deployment.yaml.



