I hate sites like this. I'm probably stupid, but I have no idea what precisely it is after reading that page. I just know the marketing team wants me to believe it's going to be my saviour.
Right. And what is the relationship to nginx? What is the license? Maybe it's hidden somewhere, but it's not obvious when quickly trying to get a first impression on a phone screen.
While it is a fluffy marketing sentence, it seems clear to me. It's an application server.
Then the next section explains that you can run different kinds of applications, and even different versions of the base software, like different versions of PHP or Python, all in the same server.
This looks pretty cool, and makes me sad that Mongrel2 never became popular. In short: Mongrel2 solves the same problem, but does it by letting your application handle requests and websocket connections over ZeroMQ instead of e.g. FastCGI.
I guess it lost momentum when ZeroMQ did. Anyone know why? Sounds like a dream solution in the current microservice hype.
Yeah, Mongrel2 looked like a good idea… but it turns out it's kinda pointless. Why talk to your app via HTTP-reencoded-as-ZMQ when you can just talk straight-up HTTP? Pretty much all languages have very fast and concurrent HTTP servers these days.
Regarding websockets (admittedly off-topic re Mongrel though), I recently found out about Pushpin[0], which seems to be an elegant way to translate WS into HTTP, should it be of interest to someone. Basically a proxy-server that takes care of accepting either websockets or HTTP on the front, and talking only HTTP on the other side.
It's multi-process. The core logic is a Qt application, but it delegates the external protocol I/O to separate processes. Mongrel2 handles inbound and Zurl handles outbound (Zurl is a project of ours that is basically the inverse of Mongrel2).
Parts of mongrel2 were sadly a solution in search of a problem. Mostly the "let's redo FastCGI via ZMQ". Was still immensely fun working on and with it.
Oh the "service bus" <strike>xml</strike> json is coming back... its called lambda architecture.
And again, nothing new, except someone else takes care of some server software for you with the promise of reduced price and maintenance; the reality eventually becomes tight proprietary coupling and price gouging.
Amazon's own Lambda is that, yes. But the Lambda architecture it inspired is the opposite: a de-facto standard (based on the way Amazon's works, but probably eventually an open standard) for servers any org can use to stand up their own public or private FaaS cloud, which developers can deploy Lambda functions onto rather than having to build an entire container/VM just to slot it into OpenStack.
I doubt it will ever be a standard. Amazon loves vendor lock-in. Plus, most of the cloud services love to do their own thing for each service type. The main exception seems to be Kubernetes. Google has it in GCE, and Amazon has said they are working on their own Kubernetes service. If that happens, I bet Azure will follow, if they aren't already working on it.
What I (and others) are imagining is not so much a standard between cloud vendors as a standard "FaaS server function API" (sort of like how the web has a standard DOM API) supported by several FOSS FaaS server implementations (sort of like how the web has several FOSS Javascript engines.)
Given such a standard API and compatible servers, you'd then deploy a FaaS server cluster to your public/private cloud of choice, the same way you deploy e.g. a Kubernetes cluster, or a Riak cluster.
There would likely be small public clouds attempting to be "FaaS native" by exposing only such servers in a multitenant configuration (as small public clouds like Hyper are currently doing with CaaS.) Their implementations wouldn't always be exactly compatible, and might have some lock-in.
However, once FaaS "caught on" with the enterprise, a FaaS server would likely make its way into the OpenStack architecture.
At that point, you'd see medium-sized public cloud providers like OVH and DigitalOcean set up their own multitenant FaaS clusters as well, probably with custom code, but built to be compatible with the OpenStack FaaS tooling, to allow enterprises the freedom to move FaaS functions freely between public and private clouds.
And, eventually, the other major cloud providers would feel the need to support the API.
---
This path has already been followed: it's what happened to Amazon S3—first cloned (but not compatibly) in FOSS by tools like Riak CS; then standardized by OpenStack Swift; then cloned compatibly in FOSS by tools like Minio; then picked up by medium-scale clouds like Rackspace; and then, eventually, picked up by Azure and GCP as secondary APIs to address their equivalent offerings (that originally had quite different APIs.)
You can definitely do microservices that way, but in reality they tend to be more granular, both functionality-wise and density-wise.
With old skool SOA you'd typically have a monolith app with a bunch of endpoints. With microservices, especially in a containerized environment they tend to be more lightweight.
Microservices is just SOA rebranded for the cool kids. The fact that modern orchestration and tooling makes it easier to have more granular services changes the equation for how you factor the services, to be sure, but it's an evolution not a revolution.
Unfortunately, the founder of ZeroMQ, Pieter Hintjens, passed away (due to cancer) [1]. He was a regular on HN [2].
ZeroMQ still works great and the open source community is still maintaining it on GitHub [3]. I just think people are also looking at other technologies. A lot of interest popped up in things like Apache Kafka and Samza. I still think ZeroMQ holds a unique place due to its lightweight and simple nature.
I have been curious how the community would hold up after Pieter's death. This project is a unique case because of how much work went into building community and welcoming contributions. That said, the world is a different place than in zeromq's heyday. Other commenters refer to Martin leaving the project, C++ regret, and a poor fit with node.js. Maybe in the face of all those changes zeromq's mature community is primarily why it lives as a project.
I recently switched from zeromq to straight libuv sockets with jsonl (\n-separated json) payloads. Because I'm working inside a Node process, combining zmq's threading model with Node's threading model was a pain. Now, there's a single IO thread which is the same as the Javascript engine thread, and I can use uv_work to run CPU-intensive tasks on multiple cores.
Do you not allow \n inside your JSON, or encode your JSON as base64? If you do neither, you might have problems disambiguating frame ends from line breaks inside frames.
A common way of framing is to prepend each frame with its encoded length. That's easier, faster, and less error-prone than searching for ASCII delimiters.
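A minimal sketch of the idea in Go (the technique is language-agnostic; the 1 MiB cap is just an arbitrary sanity limit, not anything from this thread):

    package framing

    import (
        "encoding/binary"
        "fmt"
        "io"
    )

    // WriteFrame prepends a 4-byte big-endian length, so the reader never
    // has to scan the payload for a delimiter.
    func WriteFrame(w io.Writer, payload []byte) error {
        var hdr [4]byte
        binary.BigEndian.PutUint32(hdr[:], uint32(len(payload)))
        if _, err := w.Write(hdr[:]); err != nil {
            return err
        }
        _, err := w.Write(payload)
        return err
    }

    // ReadFrame reads one length-prefixed frame, rejecting absurd sizes so
    // a corrupt or hostile peer can't make us allocate gigabytes.
    func ReadFrame(r io.Reader) ([]byte, error) {
        var hdr [4]byte
        if _, err := io.ReadFull(r, hdr[:]); err != nil {
            return nil, err
        }
        n := binary.BigEndian.Uint32(hdr[:])
        if n > 1<<20 {
            return nil, fmt.Errorf("frame too large: %d bytes", n)
        }
        buf := make([]byte, n)
        _, err := io.ReadFull(r, buf)
        return buf, err
    }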
I'm generating the JSON, either with custom C++ marshaling routines or with JSON.stringify which doesn't include newlines unless you give it extra arguments. I believe that any valid JSON can be converted to a single line by changing any '\n' bytes to ' '. Literal '\n' bytes are not allowed inside strings, and outside strings any whitespace is equivalent.
Looks like there is a path hardcoded in the build files causing problems. After some reflection on the msvc/README, renaming the project directory to libzmq (was: zeromq-4.2.2 from the release or libzmq-master from github zip download), and launching cmd.exe using the Developer Command Prompt for VS2015 link, libzmq/builds/msvc/build/build.bat successfully builds all configurations.
When was the last time you heard something about zlib? At a certain point - libraries are basically done. They are widely distributed, everyone knows what they are, there is no reason to talk about them but they are still maintained and heavily used.
Libraries can be done, but that has got nothing to do with momentum. Momentum depends on mindshare, on the willingness of people to use and to keep using it. Most programmers don't choose technology based purely on merits, they choose it based on "I heard X talk about Y and s/he said good things, so I guess I'll use it". We programmers aren't as rational as we think.
Like it or not, popularity and momentum are important merits of a technology. They lead to all sorts of benefits, like healthy maintenance and further development, better documentation, and support when you run into trouble. It is rational to consider these things when choosing technology.
I'm using it for a project now. It's a bit weird, but it does work. Cool thing: you can slot a file descriptor into the zmq-provided poll... the point is that you can poll on both zmq sockets and regular sockets in the one loop.
Confusing description. After seeing the Github README (https://github.com/nginx/unit#integration-with-nginx), it looks to be Nginx's alternative to low-level, language-specific, app servers, e.g. PHP-FPM or Rack, with the benefit that a single Unit process can support multiple languages via its dynamic module architecture, similar to Nginx web server's dynamic modules.
It's still intended to run behind Nginx web server (or some other web server), much like you'd run something like PHP-FPM behind a web server.
It's a polyglot app server with microservice orchestration. It's definitely needed.
Some things to look for: registration/discovery of services, intra-cluster load balancing (where it started, no doubt), and identity propagation & authn/z.
The biggest issues to my mind, though, are distributed transactions and logging/debugging/development. Those are my biggest stumbling blocks with this sort of thing... stepping through code across microservices is such a PITA.
Because you can work with individual microservices across clusters without a ton of overhead (or use it as a monolithic app server), aiding deployment, rollback, debugging, and development.
How exactly does having an app server reduce overhead, compared to running each service directly without app server?
And how does having an app server compare to putting each microservice in its own Docker container and orchestrating them in Kubernetes, which is what more and more companies seem to be doing?
Having to deal with e.g. php-fpm, fcgi, tomcat, and unicorn separately in the same stack is a nightmare. Even if they run in separate locations/clusters/nodes/machines, it's still several different configuration and deployment paradigms you have to deal with.
And some people simply don't like containers, or aren't tooled for them.
You would be able to merge your services under a single server and have them talk to each other internally sans latency overhead. It also allows you to easily scale up and down and segment things on demand.
At a glance, I think this is an alternative to docker/kubernetes. The general idea seems to be to cut the middleman/topman out and let the bottom man (app server) be the "unit" of configuration. Like a sort of integrated docker/<YourLang>-runtime.
> It is not recommended to expose unsecure Unit API
Why do people always use "not recommended" when they actually mean "do not ever do this or you'll end up the laughing stock of the tech press"?
Exposing this otherwise awesome API to the public will amount to a free RCE for everybody. So do not ever expose this to the public, not even behind some authentication.
It's very cool that by design it's only listening on a domain socket. Don't add a proxy in front of this.
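For what it's worth, the control API is still plain HTTP over that socket, so it's easy to script against; a hedged sketch in Go (the socket path and the /config endpoint are assumptions based on the docs, check your install):

    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
    )

    func main() {
        // Route every request over the Unix domain socket instead of TCP.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return (&net.Dialer{}).DialContext(ctx, "unix", "/run/control.unit.sock")
                },
            },
        }
        // The host part of the URL is ignored once we dial the socket directly.
        resp, err := client.Get("http://unit/config")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }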
Technically, NOT RECOMMENDED is the same as SHOULD NOT in RFC2119 - i.e. "the full implications should be understood and the case carefully weighed before implementing any behavior described with this label". Not that this document uses those definitions, but.
Thanks for linking that. Typically, if you know what you are doing, a setup of this nature would be segmented off from the rest of the internal network.
I did compliance work for a lot of start-ups and never came across a company that understood this concept. The majority thinks that their wireless router is already doing this via the Guest account.
I am biased, but call me underwhelmed. It seems that with every "new" feature, nginx is copying Apache httpd, even now claiming to be the "swiss army knife" of web servers. Embedded languages. Dynamic modules. Support for uWSGI. Graceful restarts. Thread pools... and yet people eat it up. Just goes to show what having corporate-backed marketing and PR can do.
When I started with apache, I thought it was great, but after moving to nginx, the speed and simplicity made me never look back. While these new features to nginx aren't new to the world, they are a nice welcome addition to a system that IMO is far superior to apache.
I never found Nginx especially simple to set up; the config files were always messy. Caddy seems to have knocked this out of the park for me, especially considering automated HTTPS and redirection.
I use Caddy on all my small projects right now. I haven't used it long enough to have enough faith in it for production-sized systems yet, but hopefully I will get there, because it is much easier to set up. Still, nginx is a breeze compared to apache IMO.
I share your love for Caddy, but having worked with all three, I do agree that nginx is easier than Apache. The config file isn't perfect, but I wouldn't call it messy, and I prefer its syntax to httpd's. But to each their own.
Simpler than Apache doesn't mean simple. As someone who sets up HTTP servers rarely, I had trouble when I tried out nginx.
But I suspect that for people who do it more seriously, nginx's config hits the sweet spot. To me the language seems sophisticated, well documented, and fairly well behaved if you pay attention to the rules.
That can make it too hard for someone casually trying to quick-start an experimental project. But it's exactly what you want if you are maintaining a long-lived setup that is likely to grow and become complicated over time.
Because people have a hard time figuring out what it is. Could you explain what it is? What benefits does it have to make it worth exploring? To me it looks like a rather invasive but flexible and dynamically configurable inetd. But it forces you to use its own libraries to receive http requests.
It's a lot like OpenResty (https://openresty.org/en/), which is Nginx with a Lua interpreter embedded and bridged to its request-response cycle (the OpenResty page explains the point of that pretty well); but instead of Lua, Unit has a bunch of other language runtimes embedded.
I haven't found any embedded interpreters or runtimes here. Quite the opposite: I see they ship libraries for other languages that a user has to use in order to receive HTTP requests.
Why remove Lua, though? I'm a heavy Lua user, which is why I use the openresty bundle of nginx. There's no reason for me to try this out. This is unfortunate!
I agree. Apache has great module support. I think their worst sin was that their Debian package defaulted to a small number of workers and a forking MPM, leading people to believe Apache was slow.
Their event/threaded MPM is basically nginx.
And now nginx is starting to gain the features of Apache.
Could anyone explain to me why I would want to use this? What exactly is the use case and benefits of it when I am for example running a go web application?
NGINX allows you to proxy back-end applications, giving you the ability to load balance, handle upstream failures with custom maintenance pages, employ server blocks (virtual hosts), and much more. However, you always need to do the leg work to get your specific application language up and running. This new Unit system makes that job easier, as you would no longer need to employ separate middleware, like PHP-FPM for PHP applications, or use a separate init system like systemd to run Go or Node applications. Now NGINX would assume those responsibilities and provide you with a consistent interface.
Here you can see the configuration of workers and user/group permissions for a Go application:
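(Reconstructed from the beta announcement; the app name, user/group, and paths are placeholders.)

    {
        "listeners": {
            "*:8300": {
                "application": "go-app"
            }
        },
        "applications": {
            "go-app": {
                "type": "go",
                "workers": 2,
                "user": "www-go",
                "group": "www-go",
                "executable": "/www/go-app/bin/app"
            }
        }
    }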
That's how my reading of it goes. You provide an "endpoint" for the library to call, configure the Unit framework, and their Manager connects the nginx frontend to that Unit framework.
No real idea if it does so using fcgi or some other socket-based proxying, or if the unit is spun up as a separate process and handed the raw socket and some shared memory after the headers are parsed (closer to how mod_php works).
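For Go at least, the announcement supports the "endpoint for the library to call" reading: you keep a normal net/http handler and swap the listen call for Unit's. A hedged sketch (the nginx/unit import path and unit.ListenAndServe are from the beta docs; the rest is a plain stand-in):

    package main

    import (
        "fmt"
        "net/http"

        "nginx/unit" // Unit's Go package, per the beta docs
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "Hello from a Unit-managed Go app")
        })
        // Drop-in replacement for http.ListenAndServe; when run under Unit
        // it talks to the router over Unit's IPC instead of opening a port.
        if err := unit.ListenAndServe(":8080", nil); err != nil {
            fmt.Println(err)
        }
    }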
Yes, you can generally think of it as a replacement for mod_php as Unit would parse requests from NGINX, pass them along to the PHP parser, then return the responses back to NGINX. That's the same job mod_php does for Apache and what PHP-FPM (essentially) does for servers like NGINX.
Upon first reading, I thought that Unit needed to be behind NGINX to function. When actually it listens for requests as an entirely separate server; it only provides an API for configuration purposes.
However, if you want to use the other features of NGINX, like serving static files, you will need to put it in front of Unit.
It's worth noting that it's rarely necessary or desirable to put an app server like nginx in front of Go HTTP server applications. The Go standard library http and TLS stack are production quality and rock solid. Putting something in front is mostly cargo culting from people more used to the worlds of PHP/Python/Ruby/etc.
Pushing back on this a bit... for example, securely exposing a JSON endpoint to the public internet requires extra machinery that applications like nginx bring for free. If you simply set the router to your handler, then you accept arbitrarily large request sizes, wide open for DoS attacks. You have to either manually add limits or pull in some library. nginx caps these by default. Want throttling or load balancing? Again, things that haproxy and nginx do well, but which require more cruft in your application.
I would argue that all is part of security-aware software engineering. If you aren't thinking of these things you have no business writing publicly-exposed HTTP applications.
Secure software isn't useful? Insecure software isn't eventually value-destroying?
Really, what this sub-thread is arguing is that security Isn't My Job(TM) as an application developer. I disagree. Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Not accepting unbound input and sane rate-limiting are kind of basic stuff, no? I'm not saying every app developer needs to be a Defcon wizard, just that they should have some fundamental awareness of secure coding standards for web apps if that's what they're building.
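In Go those basics really are only a few lines; a minimal sketch (the specific limits are arbitrary examples, not recommendations):

    package main

    import (
        "io"
        "net/http"
        "time"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
        // Cap the body at 1 MiB instead of buffering arbitrary input.
        r.Body = http.MaxBytesReader(w, r.Body, 1<<20)
        if _, err := io.ReadAll(r.Body); err != nil {
            http.Error(w, "request too large", http.StatusRequestEntityTooLarge)
            return
        }
        w.Write([]byte("ok"))
    }

    func main() {
        srv := &http.Server{
            Addr:              ":8080",
            Handler:           http.HandlerFunc(handler),
            ReadHeaderTimeout: 5 * time.Second,  // blunt slowloris-style header dribbling
            ReadTimeout:       30 * time.Second, // bound slow request bodies
            WriteTimeout:      30 * time.Second, // bound slow readers
        }
        srv.ListenAndServe()
    }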
> Insecure software isn't eventually value-destroying?
Nowhere in this sub-thread is anyone suggesting otherwise.
> Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
Nobody said this. But while we're on the topic the more likely false security blanket comes from telling app devs "just use 'net/http' and 'crypto/tls' and everything will be fine without a reverse proxy."
In any case the straw men you've raised are distracting and not driving the conversation forward.
> > Furthermore telling app devs not to worry about it because nginx takes care of everything is a false security blanket that will bite you eventually.
> Nobody said this.
That seems dishonest to say... From the grandparent:
> Or... you spend your time building something useful, leveraging skills you do have, and let nginx leverage its own strengths.
Really sounds like at least one person in this thread is advocating for app devs not to worry about things that nginx takes care of.
Agree that making straw men doesn't help. There's advice on either side regarding which one to use and realistically both are equally 'false security blankets'. The correct answer is to educate yourself on the benefit and drawbacks of each and make a conscious decision about where to implement your security.
What if I have an application that needs to be deployed internally and externally in separate instances? Identical application, but different security contexts. Using Nginx to handle these concerns is easy.
It's a common myth that internal networks are a more secure environment. You are better off implementing the philosophy behind something like Google's BeyondCorp¹ effort.
I find it useful for filtering and caching. Things like redirecting traffic to /.well-known/acme-challenge/ to your certificate management host, providing an endpoint for machine status or filtering requests to dot-files. Or telling Nginx to cache responses and allow it to serve stale content when the backend server returns 4xx/5xx status codes during deploys or high load. Handling things like client certificate authentication in Nginx instead of doing it in every backend application is another thing I've found useful.
It's useful to put Varnish in front of the app server for caching, and to serve static content from a separate process (and domain) running a light/tiny httpd server instead of Apache/Nginx.
I don't use Go, but D (dlang), vibe.d, varnish and lighttpd are working real well for my latest venture.
It could be that nginx is more efficient at static file serving, but that'd be down to being specifically designed and optimised for it rather than some "sync vs async" thing.
Minor quibble: in the context of serving static files (i.e. from disk), Go doesn't use async I/O; the file I/O blocks the thread until it's complete. But since Go's scheduler is M:N, this doesn't lock up the whole program, so your point stands.
Err, no, this is a misconception. All IO in Go is async - there is no sync IO in Go (as sync IO would block an entire OS thread). There is an internal registry mapping blocked file descriptors to goroutines - when a kernel IO function returns EAGAIN, the goroutine throws the file descriptor + goroutine info onto the registry and yields back to the scheduler. The scheduler occasionally checks all descriptors on the registry to mark goroutines that were waiting on IO as being alive. The scheduler is, therefore, essentially a multithreaded variation on a standard "event loop" - the only difference is that "callbacks" (continuations of a goroutine) can be run on any of M threads rather than just one.
From a Go programmer's perspective, this looks like "blocking a thread", but because goroutines are relatively lightweight in comparison to actual threads, it behaves similarly resource-wise to callback-based async IO. (Although yes, nginx is likely optimised so that it throws out data earlier than Go can free stack space and so can save some memory. Exactly how much is up to benchmarking to find out.)
Basically, the only difference between Go and e.g. a libev-based application, as far as IO is concerned, is syntax - the event loop is still there, just hidden from the programmer's point of view.
Note that this doesn't mean you shouldn't put nginx in front of Go to serve static files - nginx is likely more optimised for the job than Go's file server, might handle client bugs a little better, is more easily configurable (e.g. you can enable a lightweight file cache in just a few settings), you don't have to mess around with capabilities to get your application listening on port 80 as a non-root user, and so on and so forth.
I'm referring specifically to disk IO, which on linux using standard read(2) and write(2) is (almost) always blocking. What you describe is true of socket fds and some other things, but on most systems a file read/write which goes to a real disk will never return EAGAIN.
This is why systems like aio[1] exist, though afaik most systems tend to solve this with a thread pool rather than aio, which can be very complicated to use properly.
Ah, absolutely, I forgot that the state of disk IO on Linux is terrible - although this still isn't quite the case, since there's a network socket involved in copying from disk to socket, so if the socket's buffer becomes full the scheduler will run.
It seems that nginx can use thread pools to offload disk IO, although it doesn't unless configured to - by default disk IO will block the worker process. And FreeBSD seems to have a slightly better AIO system it can use, too.
I love Warp for Haskell, but I would still be hesitant to expose it directly. It's simply not used as much as Nginx or Apache, so fewer people have spent time trying to break it.
Perhaps it's rarely necessary, but it is often desirable. For instance if you are serving any static content along with your application, nginx is quite handy and is probably better at compressing and caching.
Your choice is to force a timeout and kill streaming requests but defend against slow-client DoS, or to support streaming requests and suffer from a trivial slow-client DoS.
For this and other reasons, I still recommend fronting Go with something more capable on this front.
Same. I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).
A properly edited book would be awesome. I would pay for it of course.
> I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).
uWSGI definitely needs more concise tutorials on how to accomplish some tasks (e.g. creating Hello World with python and uWSGI, or how the uWSGI emperor works).
However I disagree with "lacking severely in the department of documentation"
Sure, it's not as easy as some other projects to dive into (e.g. Django) but IMHO the documentation is not lacking, it's just not forthcoming.
If you sit down and read through the uWSGI documentation, you'll discover a lot of very useful functionality and a reasonable description of how to utilise it.
What's lacking is the tl;dr way to bash something out quick and dirty.
I think their documentation is quite thorough. It's just, as the other commenter indicated, that an app that extensible doesn't have cookie-cutter, simplistic configs out of the box.
>Same. I really want to like (and use) uWSGI, for many reasons, but I find it's lacking severely in the department of documentation (searching "uwsgi" on Amazon gives zero hits!).
Agreed. After recently testing out Python for a web dev project I was really dismayed at the fragmentation and lack of usability in the landscape of application servers. Here's hoping this might lead to some standardization.
I initially thought it would allow dynamically managing the upstreams list (and other configuration) like hipache does [1], which would be awesome for dokku or other container management systems that rely on the system nginx. But after seeing the languages mentioned, I'm confused.
Is it supposed to replace language-specific servers, like unicorn and puma for Rails (but then I'm confused about what such support would mean for Go, since the server is directly embedded in the program)? Does it embed interpreters for interpreted languages, like mod_* did for Apache?
I don't like it at all :( I usually put plain nginx in front of my app, to handle static files and simple load-balancing, but this seems to be oriented towards handling issues best handled elsewhere.
I'm having a hard time seeing what niche this fills. It seems to be both a process manager and TCP proxy. What am I missing here? What makes this better than, for example, using docker-compose?
I think a "how it works" or "design doc" would be really helpful.
That said, the source files do make for pleasant reading. The nginx team has always set a strong example for what good C programming looks like.
EDIT: Their blog post [0] makes this more clear... nginx unit is one of four parts in their new "nginx application platform" [1]
Seems to be a standardized replacement for language-specific app servers like FPM for PHP. I guess that makes it a little easier to deploy stuff, although recently, with Docker containers, that hasn't been such a big deal anymore. You can just take an off-the-shelf FPM container and deploy that.
Seems like a simple C app would take far fewer resources than a Docker container and have a lot lower latency, though. How much computing power would you need for each, given the same number of users?
Interesting. I like the restartless configs idea. This is becoming more common these days with short lived microservices. This week I just switched my load balancer setup from HAProxy to Traefik - very nice API based setup. https://traefik.io/
Downvoters: I'm deadly serious. I've seen plenty of deployment systems that were unbearably slow, where the slowness gave a human more time to spot a bad deploy and cancel it, and whose owners were afraid to replace them with something faster because it would lack this safety net.
That they deployed a broken config file and forcefully stop-started nginx instead of reloading it (bypassing nginx's built-in protection: on reload it will test a config and refuse to load it if it's broken; on restart it's stuck with whatever busted config you give it).
So it looks like they basically rewrote uwsgi and slapped a REST API on top of it...
(as a big fan of uwsgi, that seems like a reasonable thing to do...)
I can't speak for the other languages (PHP, Go, Python), but I have some reservations about it helping Java (as well as Erlang and other (J)VM languages), as FastCGI-like approaches have been attempted for Java in the past without much success, with the exception of Resin.
I guess it would be interesting if they did a native Servlet 3.0+ implementation like Resin, but I doubt that is what will happen. Regardless, Netty, Undertow, and even Jetty have caught up speed-wise to Resin (well, at least according to TechEmpower).
I have a small Flask application which is basically a REST GET/POST API server. I'm struggling to make deployment easy. With PHP, I just push to the application server and rsync that folder into /var/www/html for Apache httpd, but what would I do for Flask on Python 3?
Use a webserver that proxies requests to a wsgi server. We tend to put Caddy in front of Gunicorn which works really well. Also, look into running Gunicorn under supervisord.
As with most things, there is more than one way to do it. Push to the application server and hook it to your flask application using [uWSGI](http://flask.pocoo.org/docs/0.12/deploying/uwsgi/), for example.
Personally, I have an AWS instance running a Node.JS server on (blocked) port 8000, a Django uWSGI app on 8001, and a static resume site, all being reverse-proxy served by nginx. So I don't really see the advantages of Nginx Unit yet.
your docker workflow in the future looks like this:
1. test the application on your laptop inside a docker container
2. push container to docker hub
3. "docker update" your stack
This is where their REST config API to force reloads would come in.
BTW, it is a good idea to always do API versioning on production runs. That eliminates the possibility that different API versions (files stuck in the cache, or simply people who kept a browser open for a long time) hit the same endpoint.
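A trivial sketch of what that looks like in Go (routes and payloads are made up):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        // Old clients (cached files, long-lived browser tabs) stay on /v1.
        mux.HandleFunc("/v1/items", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, `{"items": []}`)
        })
        // New deploys target /v2 without breaking anyone still on /v1.
        mux.HandleFunc("/v2/items", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, `{"items": [], "cursor": null}`)
        })
        http.ListenAndServe(":8080", mux)
    }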
You can run gunicorn (that loads your flask app) as a service using systemd on e.g. port 9000 and then have nginx (also run as a systemd service) proxy port 80 traffic to that port and handle static files etc.
I think that's more intended as a demo server, to get you started quickly while you're developing.
You'll probably want to switch to uwsgi or gunicorn before you actually deploy anything.
I haven't actually used Bottle, but with Flask the development web server seems to fall over if a client cancels one of its HTTP requests, for example. It's really just a simple, light thing for mucking around with.
For Go, does anyone have opinions on how this is advantageous compared to using the built-in HTTP server (net/http) fronted by a regular nginx proxy_pass?
It's open source at the moment, at least, and I think it's reasonable to expect that the parts that are open source today will remain so in the future. Certainly they could have a commercial version with extra features, like they do with Nginx, but as long as they have a useful open source version of Nginx Unit available, I will be happy to use it.
Any use for this at small scale (of 1 instance)? If you'd need to run nginx in front of it anyway, does it provide any benefit in cases where you'd normally use php-fpm and some proxy_pass?
For almost every new product you see, the answer is: none.
It's not about making something impossible possible. It's about improving possible things in some dimension - like speed, safety, flexibility, or, in this case, standardization and integration with tools already in use.
Honest question, not being snarky: when did the existence of other products handling the same use-cases ever stop people from creating another?
For one, it's not just "handling a use-case" it's also _how_ you handle it. And within what ecosystem you handle it. And what kind of support etc you offer with it. Etc...
AFAIK nginx unit would still require an nginx in front, so it's in a different weight category than OpenResty.
It looks more like a replacement for the good old NGINX+Apache setup, where there would be mod_php, mod_cgi, mod_perl, and .htaccess on the backend to serve the app.
This type of question is an indication that NGINX Inc.'s salesmen failed horribly to convey what the product actually is in layman engineering terms. Too much buzzword compliance.
I recently tried to deploy a python flask application, and it was quite a mess. It relied on some services I had never heard of, and the documentation was a mess (not the documentation of Flask but of how to deploy it properly).
If Nginx Unit could host Flask applications, it would be great news.