Kong is like an enterprise version of OpenResty, i.e. nginx turbo-charged with LuaJIT.
A lot of really cool and performant things can be built on top of it.
At a previous place we wanted to implement an auth & rate limiting ingress to a consul based service discovery.
We looked at kong but in the end rolled our own using openresty.
No one on the team really liked the “centralised” approach used by Kong.
It's probably preferred in an older-school enterprise setting though (awesome project nonetheless!)
We've been using openresty in production at vwo.com at our edge nodes for quite a few years. It's really fantastic, allowing you to write custom logic and embed it within nginx itself, making things super fast.
The only real disadvantage is the lack of a good library ecosystem within Lua and OpenResty. Over the years, as we moved more logic to the edge because $reasons, we ended up rolling our own libs quite a few times. The documentation could be much better as well.
That, and the occasional limitations of Lua as a language.
Since Kong 1.0 it can run in either centralized or decentralized mode, as a sidecar for service mesh. And since v1.1 the database can even be optional. We are iterating very fast with feedback from the community and our customers.
I once had to implement a giant ip address blacklist for an api in the fastest and cheapest way possible. Openresty and redis saved me that day. That has been my favorite project that I've ever worked on. Two amazing pieces of software. Kong is great, too. We were just unable to use it because its blacklist feature wasn't easy to implement.
This was about 3 years ago. We needed to get millions of IP addresses into the blacklist, with daily updates. We tried using Cassandra for the backend. Not sure if it's easier now, but yeah, that would have been more of a pain to accomplish.
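For anyone curious what that kind of setup looks like in practice, here is a minimal sketch (not the poster's actual code) of an OpenResty access phase checking client IPs against a Redis set via lua-resty-redis, with a shared-dict cache in front so most requests never touch Redis. The key name `ip_blacklist`, addresses, and upstream name are all assumptions:

```nginx
# Cache recent verdicts in nginx shared memory.
lua_shared_dict blacklist_cache 50m;

server {
    listen 8080;

    location / {
        access_by_lua_block {
            local cache = ngx.shared.blacklist_cache
            local ip    = ngx.var.remote_addr

            -- Fast path: cached verdict ("1" = banned, "0" = allowed).
            local hit = cache:get(ip)
            if hit == "1" then return ngx.exit(ngx.HTTP_FORBIDDEN) end
            if hit == "0" then return end

            -- Slow path: ask Redis whether the IP is in the blacklist set.
            local redis = require "resty.redis"
            local red = redis:new()
            red:set_timeout(100)  -- ms; fail open if Redis is slow/down
            local ok = red:connect("127.0.0.1", 6379)
            if not ok then return end

            local banned = red:sismember("ip_blacklist", ip)
            red:set_keepalive(10000, 100)  -- return connection to the pool

            cache:set(ip, banned == 1 and "1" or "0", 60)  -- cache for 60s
            if banned == 1 then return ngx.exit(ngx.HTTP_FORBIDDEN) end
        }
        proxy_pass http://api_backend;
    }
}
```

Daily updates then reduce to repopulating the Redis set out of band; the 60-second cache TTL bounds how stale a verdict can be.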
This was also my immediate thought. Naming is hard. Globally unique names that actually convey something about what you do, are recognizable, and sound pleasant to speakers of most languages seems even harder.
Yes! I've been looking everywhere for a complex bundle of enterprise vaporware for my company to get permanently locked into.
I love the crispy crust of enterprise complexity wrapped around a delicious warm gooey open source core.
Most of the Kong adoption is free and comes from the open-source version, which you can download for free [1] and perhaps even contribute to [2]. We support an extensibility framework [3] that the community has been using to extend Kong (more than 500 plugins on GitHub), including about 30 plugins that are shipped for free in the open-source distribution [4].
Kong is modular by design so that we could walk away from bundled complexity, and that's really why we created the concept of Plugins. Plugins can be installed, distributed, and used independently and they can be removed anytime (not just disabled, but entirely removed from the actual runtime).
I frankly have no clue about enterprise IT but the funding and their growth sound impressive. Would someone ELI5 to me what problem Kong solves (bonus points for business model because it sounds like Kong is Open Source)?
The simplest way to think of Kong is as a piece of software that controls traffic going in and out of an API. At its simplest form, it helps make sure that traffic gets to the right API, is secure, etc.
Where Kong really differentiates itself is its ability to support decentralized software architecture patterns like microservices, service mesh, etc as well as traditional monoliths, regardless of the underlying platform or hardware. Microservices make deploying software a lot faster, and we're the connective tissue that lets those microservices work together smoothly and with older legacy systems.
Kong is indeed open source. We also have an enterprise version that adds a lot of features that make managing Kong in an enterprise much easier.
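For a concrete feel of the open-source version, here is a hypothetical minimal declarative (DB-less) config of the kind Kong supports since 1.1 — the service and route names are made up:

```yaml
# kong.yml - declarative configuration, loaded when Kong runs without a database
_format_version: "1.1"

services:
- name: orders-api                      # made-up service name
  url: http://orders.internal:8080      # made-up upstream address
  routes:
  - name: orders-route
    paths:
    - /orders                           # requests to /orders proxy to the service
```

The same entities can instead be created at runtime through the Admin API when Kong runs with a database.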
I think you are raising a valid point. What is considered a basic feature is a constantly moving target. The proxy-caching in Kong Enterprise is implemented as a plugin, and it is not that complex. All our new plugins are developed outside the Kong open source repository. Some of them are public and some are private. Some of them we include in our default packaging. Ultimately this is a product decision. I think we have some features in open source that could have been enterprise only, and vice versa. I work mostly on developing the Kong core (open source), but at the same time, I think that having a healthy business will help the open source too. Who knows, maybe we will end up moving the proxy-caching plugin you wish for from enterprise to open source.
Thanks much! Never too late to learn something new...
So from what I know about cloud technologies (not much), your description makes it sound like Kong is a service mesh. But then you also say you support service meshes as an architecture, so I'm not sure I got that right. Would you mind clarifying?
Had a brief look at it when considering an API gateway frontend for several microservices; as such it solves things like rate limiting, centralised authentication, etc. Based on tried and tested tech like nginx, so it has maturity as a benefit. If I remember correctly, the pro version offers some conveniences, secures the control plane better than the O/S version, and I imagine is supported commercially for those who need it.
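As a sketch of what "rate limiting, centralised authentication" looks like in Kong terms, here is a hypothetical declarative-config fragment enabling the bundled key-auth and rate-limiting plugins on an assumed service named `orders-api`:

```yaml
# Fragment of a Kong declarative config; the service name is an assumption.
plugins:
- name: key-auth            # centralised authentication: clients must send an API key
  service: orders-api
- name: rate-limiting
  service: orders-api
  config:
    minute: 60              # at most 60 requests per minute
    policy: local           # counters kept per node; cluster/redis policies also exist
```

Both plugins ship in the open-source distribution; the same configuration can be applied per-route or globally instead of per-service.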
Used Kong but don't know too much about the internals other than I've heard it's a wrapper on Nginx intended to be used as a proxy. Looking at some caching features I was thinking about implementing, they seem to have built a typical upsell tier around “plugins” or “features” not core to the product to charge for.
Calling Kong _just a wrapper_ is dismissive of the features and value it brings beyond just a web server. Kong is an ecosystem in its own right, with a plugin development kit and an open source community supporting the creation of an intelligent proxy.
Kong is indeed a lot more than just a load balancer or proxy. We provide an ecosystem of functionality on top of a high-performance proxy core to let you better manage how your APIs, services, and applications interact with each other and with the world.
You could use us as an alternative to Istio, or you could integrate us with Istio. We provide additional control capabilities on top of what Istio does, and also support both decentralized and centralized deployments. Basically, you could use Istio for a mesh (or use Kong for the same mesh) and then use Kong to connect all your legacy apps and services into the same control platform. That way, you get global visibility and control at multiple layers - so you can get as granular as you want or as macro as you want.
Hopefully this isn't made into a problem that doesn't really exist à la Elastic, Redis, etc. To me this is a real open-source success story -- OpenResty / the Lua bits of nginx that came out of Taobao really upped the level of what you could do with nginx. Kong is a huge, nontrivial addition that builds on top of that work, the result of carving out a product focus area and really iterating on what businesses / enterprises need. This is really the best of open source, IMO.
Do you mean underperforming as in proxying performance? We do know there are some code paths that need to be optimized. For 1.2 we will be working on those issues, and for many of them we already have solutions (in many cases more than one approach). A warmed-up Kong usually runs with sub-millisecond latency; the plugins can usually add more to it, but sure, there are rough edges, especially in p99. We'll just need to pick the best ideas that have already been proposed, either by us or our community. For some issues we have public pull requests in place for discussion, and some of them we have developed in close collaboration with the community or customers. We sure want to be lean and fast, especially now that Kong is used as a sidecar proxy in service mesh.
The load test would call Kong, which would proxy each request to the service being tested. Kong would be configured with the http-log plugin, which would send performance data to another service that collected that data and then updated Elasticsearch in bulk.
Last August, I tested the Dropwizard service with this setup which recorded about 20,000 RPM on GKE. This year, the same setup sent about 4,000 RPM to the Dropwizard service and the http-log plugin sent only about 200 RPM to the other service.
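For reference, wiring up that kind of setup looks roughly like the following declarative fragment for the http-log plugin — the collector endpoint here is a made-up placeholder, not the poster's actual service:

```yaml
# Fragment of a Kong declarative config enabling http-log globally.
plugins:
- name: http-log
  config:
    http_endpoint: http://collector.internal:8000/logs   # hypothetical collector URL
    method: POST              # request/response metadata is POSTed as JSON
    timeout: 1000             # ms before giving up on the collector
```

Because the plugin fires on every proxied request, the collector's throughput can become its own bottleneck in a load test, which is consistent with the numbers described above.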
After a big major release there can be problems in specific use cases; I would try again with a 1.0.x or 1.1.x release. Kong prides itself on being very fast, so I would like to get in touch with you to dig into any problem you have experienced. You can find me on Twitter at @subnetmarco
It is easy to repro this issue as I have open sourced the kubernetes manifests. There are instructions on https://github.com/gengstrand/clojure-news-feed/blob/master/... for how to provision the cluster on either Amazon or Google's cloud and spin up the services and the load test application. Be sure to create the kong service (commented out) instead of the proxy service (which replaced it) and use the feed3-deployment.yaml instead of the feed-deployment.yaml manifest.
> Using AI and Machine Learning to Automate the Full API and Service Development Lifecycle, Kong Is Reinventing the Software Industry and Accelerating Global Innovation Cycles
I Literally Have No Idea What This Is, Even After Reading This Word Salad Of A Press Release. But hey, they got $43 million...
Neither Kong nor its infrastructure (i.e. nginx) has any future in the long term against Envoy. Time is simply against them, even if it is going to be a very slow downfall. I guess that's the real reason nginx sold itself right at the peak of the economic cycle and before Envoy gained sufficient popularity.
Based only on technical merits I agree; Envoy and other software in this API space should probably eat its lunch. At most enterprises, though, nobody will have heard of Envoy, and I don't know of anyone selling directly to enterprise in this space. However, investing in enterprise infrastructure software is hardly about tech at some point. The investors must really like Kong's enterprise sales team.
At Kong we actually like Envoy a lot, because we can deliver our superset of features and ecosystem on top of any underlying proxy (NGINX, Envoy or anything that can process a stream of bytes).
That's what I thought you would do anyway once nginx loses its place in favor of Envoy, not only because of its obvious technical superiority but also its licensing situation. Congratulations on making money off other people's work.
We have been contributing back to the community with open-source projects and improvements to the other open-source projects we utilize, and using respectfully “other people’s work” in agreement to their licenses. Kong itself is open-source, and we have a long history of contributing back.
The comparison doesn’t fully translate because Envoy is a proxy, and Kong is a control platform that comes built on top of a proxy. We actually love envoy here, and think it’s a great product. The value that Kong brings is more up the information and abstraction stack versus just the core proxy we’re built on, which is rapidly becoming commoditized.
Kong would be more comparable to Istio, yes, and we're big fans of it! We provide additional control capabilities on top of what Istio provides and can integrate with Istio for folks using it. Basically, if you're using Istio for a mesh, and you want to have that mesh also work with all your other non-mesh services, we help make that happen.
You can also use Kong as an ingress controller for K8s or inject it as a sidecar proxy in K8s, which would give you very similar capabilities to using Envoy + Istio.
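A minimal sketch of that ingress-controller usage, assuming the Kong Ingress Controller is installed in the cluster; the resource and service names are made up, and `extensions/v1beta1` was the Ingress API version current at the time:

```yaml
# Routes /orders traffic through Kong to a backing K8s Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: orders-ingress
  annotations:
    kubernetes.io/ingress.class: kong   # hand this Ingress to the Kong controller
spec:
  rules:
  - http:
      paths:
      - path: /orders
        backend:
          serviceName: orders-service   # hypothetical K8s Service
          servicePort: 80
```

Kong plugins can then be attached to the Ingress via additional annotations or custom resources, which is where the Istio-like traffic-control capabilities come from.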
Do founders really use these deceptive metrics in order to raise money from investors? Most of these downloads (Docker, git or even npm) are simply automated a zillion times by a much smaller number of projects.
https://openresty.org/en/