I work at a small/medium business in Montreal, Canada. I've been here for almost 3 years and AFAIK nobody here takes stimulants.
Of course I might be wrong, but I know a lot of these people well enough to call them my friends, and I have not seen anyone taking stimulants nor overheard anyone talking about their consumption.
Disregarding the article completely, I'll share my opinion on Kong because we use it quite a bit at my workplace. We use an old version (0.9.x as of right now). The things I share below concern the plugin system and might no longer be true in newer versions, but I can't tell.
IMO, the idea of moving things like authentication, rate limiting, etc. into a proxy is a wonderful one (https://2tjosk2rxzc21medji3nfn1g-wpengine.netdna-ssl.com/wp-...). So in theory, the approach Kong takes is wonderful, but I don't think the implementation is.
Kong is a layer over nginx and uses Lua as a scripting language to add all sorts of stuff. You quickly reach the limit of the plugins' capabilities, so you think, well, I'll write my own plugins. To me that was a very unpleasant experience. I found the thing hard to test and hard to maintain. Maybe my criticism is more about Lua than Kong, but since Kong relies heavily on Lua, there is not much I can do about that.
There is also magic happening. You declare a schema.lua file to configure the store that holds your data. Then you automatically get a DAO interface with a bunch of methods on it to work with the store. It's not obvious what methods are available or what arguments should be passed to them.
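For context, a custom-entity declaration in that era looked roughly like this (the entity name, table name, and fields below are illustrative, made up for this example, not taken from a real plugin):

```lua
-- daos.lua (sketch of the 0.9.x-era declarative pattern; everything
-- named "myplugin_tokens" here is hypothetical)
local SCHEMA = {
  primary_key = { "id" },
  table = "myplugin_tokens",  -- backing table for the entity
  fields = {
    id = { type = "id", dao_insert_value = true },
    created_at = { type = "timestamp", dao_insert_value = true },
    consumer_id = { type = "id", required = true, foreign = "consumers:id" },
    token = { type = "string", required = true, unique = true },
  },
}

return { myplugin_tokens = SCHEMA }
```

From that declaration Kong generates a DAO, so elsewhere you end up writing calls like `dao.myplugin_tokens:insert({ ... })` or `:find_all({ ... })`; the "magic" complaint is that the set of generated methods and the arguments they expect aren't spelled out anywhere next to the schema.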
Anyway, this is my take after spending quite a few hours working on homemade plugins in Lua for Kong.
In the end, I'm glad Kong is open source and it's a great piece of software. It helped us reduce our applications' complexity, but make sure you don't shift too much logic into it, because the plugin system can be hard to work with.
Marco here, Kong's CTO. We hear you, we are aware that plugin development is not as easy as we wish, and we are planning to roll out a few improvements next year to make it mainstream, including:
- A built-in Lua plugin API that abstracts away the underlying complexity of NGINX and OpenResty. Among other things the plugin API will make it easy to interact with the lifecycle of request/response objects.
- An official testing framework for plugins (unit and integration tests).
- A new DAO to get rid of some magic and make the overall system more robust and extensible (with an in-memory DAO implementation for example).
- Support for remote gRPC plugins, to allow plugin development in any language with gRPC support.
- And finally supporting plugin installation/versioning within Kong itself using the HTTP Admin API to install/uninstall plugins cluster-wide on any Kong node (as opposed to installing plugins on the file system).
NGINX and Lua (on top of LuaJIT, http://luajit.org/) were chosen for their reliability and performance. The next step for Kong is to focus on growing the overall Kong ecosystem, so simpler plugin development is a very high-priority item on our roadmap.
Not OP, but we are using Tyk (tyk.io) at my workplace. It's definitely more user-friendly than Kong, but we've had other issues with it. Having not used Kong extensively, but knowing what it is, I'll go out on a limb and say Kong is likely more performant.
Actually it should be the opposite. Definitely do your own benchmark, but take a look at the latest numbers from Tyk and compare it with the latest numbers from Kong to get a general idea.
This analysis was done by BBVA comparing Tyk and Kong performance https://www.bbva.com/en/api-gateways-kong-vs-tyk/ - disclosure: I work for Kong, but Kong (the company) had no involvement with the benchmarking in that article.
Those BBVA benchmarks look very, very wrong. BBVA achieved 3822 requests per second with a 16GB server running Tyk.
I used a 16GB DigitalOcean box, and a separate box to run the benchmarks from.
Tyk CE with a keyless API.
  docker run --rm williamyeh/wrk -t4 -c100 -d1m --latency --timeout 2s http://10.131.45.89:8080/tykbench/get

  Running 1m test @ http://10.131.45.89:8080/tykbench/get
    4 threads and 100 connections
    Thread Stats   Avg      Stdev     Max   +/- Stdev
      Latency     5.79ms    6.97ms 206.76ms   87.28%
      Req/Sec     6.35k     0.93k   10.89k    74.33%
    Latency Distribution
       50%    3.19ms
       75%    6.28ms
       90%   15.93ms
       99%   31.14ms
    1516573 requests in 1.00m, 185.13MB read
  Requests/sec:  25244.99
  Transfer/sec:      3.08MB
As per the results above, I got 25,244 requests per second, and 90% of requests were served in under 16ms.
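A quick sanity check on wrk's summary arithmetic, using the numbers straight from the output above:

```python
# Cross-check the wrk summary: total requests divided by the reported
# requests-per-second figure should give back the elapsed run time (~1 minute).
total_requests = 1_516_573   # "1516573 requests in 1.00m"
reported_rps = 25_244.99     # "Requests/sec: 25244.99"

implied_duration = total_requests / reported_rps
print(f"{implied_duration:.2f} s")  # ~60.07 s, i.e. the run lasted just over a minute
```

So the 25k figure is internally consistent with the request count; it's not a typo in the paste.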
Another thing: you can scale Tyk CE as much as you want at no cost. I currently run 3 gateways in prod and it doesn't cost a penny. The BBVA article states that you need to pay Tyk to scale, which is simply not true:
>However, if we want to deploy a significantly larger number of microservices, or simply ensure high-tolerance to failure, we need to scale up the number of gateways. In the case of Kong, all it takes is adding a new node and connecting it to the database. With Tyk we can do the same, although it will require us to pay for the service. This could be a determining factor when choosing what API Gateway to use.
I went through the process of picking an API gateway about 3 months ago. Tyk is not without its warts, but it has far more out-of-the-box features baked in (including distributed rate limiting) than Kong. And if custom plugins are required, I can choose between Lua, Python, JS, or anything that supports gRPC. With Kong you're stuck with Lua... no thank you.
I guess my point here is that if Tyk isn't performant enough for you, just add another gateway. And if I need custom functionality, I don't need to learn Lua.
Hey Bob - you'll be pleased to hear Tyk took that feedback on board. There is now very comprehensive documentation along with user guides. And true to the open source approach, anyone can contribute to them; the community has been great in working to improve them.
As you know, there are many plugins to do authentication in Kong. We started with jwt, then a coworker decided that we needed a more flexible approach so he basically forked the jwt plugin to add stuff for our needs. It quickly became confusing and hard to maintain.
When we tried to introduce a new feature (tokens similar to what Github offers with personal token, that is a revocable token with a given set of permissions), we had a rough time.
In the end, the decision to fork the plugin was maybe not the right one, and the decision to bring the token feature into that plugin was maybe not the right one either. But still, working in the plugin code was unpleasant in my opinion.
By the way, keep up the good work, it's a solid piece of software!
From the folks at Ziverge, who've worked on ZIO in Scala.
I believe they use a similar approach. It's discussed in this podcast: https://podcasters.spotify.com/pod/show/happypathprogramming...