Hacker News | kodfodrasz's comments

Most of the research coming out of SV tech companies nowadays is yet another chat platform, and reimplementing broken C code in broken JavaScript code.

Is it making the world a better place? I don't think so. Is it making it a worse place? Only for other developers :)


> Most of the research coming out of SV tech companies nowadays is yet another chat platform, and reimplementing broken C code in broken JavaScript code.

I don't think so; there is a lot of stuff happening in the place called SV. Maybe you are referring to the cool startup bunch?


Indeed I do, but many other unnecessary innovations came out of SV as well, along with some interesting research, which ends up being used to solve problems that don't exist.


What's wrong with unnecessary innovations? There were lots of inventions/discoveries that didn't solve any existing problem during their time of discovery.


Their marketing is annoying me.


Actually the gear shifting would also be done by logic, so nothing is gained by that.


Not quite. In order to lower the torque on a stepper motor you need to have the logic controlling the motor control the torque for every single step the motor takes and that can switch from low to high between two steps. You can ward against this either with continuously checking logic or the torque limiters Animats mentioned: https://www.mayr.com/en/products/torque-limiters

If the inception drive is set to low torque a second motor needs to actively set it to a higher torque, which you can guard against by cutting power to the second one while in low torque mode, and having mechanical interlocks that cut power entirely if the shape of the drive changes to a high torque configuration while it's supposed to be low torque. Neither of these require continuous logic.


How can you not have continuous checking logic in the driver logic of a gearless drive?

Let me help: you can, and I have taken part in developing one.


Link to patent?


Lol?

Electric servos for steering wheels use safety-critical code, with lots of safety checks, and use direct drive. No need for a silly patent for common sense outside the United States.


What's the difference between continuous checking logic and safety-critical code with lots of safety checks?

I'm not a mech engineer, I do code. I'm asking to learn, and if you can prove me wrong by teaching me something, yay.


Those safety checks need to be run continuously, actually, or the software will not get the certifications necessary to be released on the roads. There are passive means: practices and coding guidelines, static checks; but also active measures: defensive coding, redundancy, continuously active safety-check logic.

So my original point was: it doesn't matter if you use direct drive or a transmission, as both will be controlled by software, and ultimately the safety of that software will determine whether the system is safe overall. The same design principles and safeguards will need to be implemented in both cases to provide the needed integrity.
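As a toy illustration of what "continuously active safety-check logic" means in this context (all limits and tolerances here are made-up values; real automotive code is developed under standards like ISO 26262 and looks nothing like Python):

```python
# Toy sketch of a per-cycle torque-safety check with made-up limits.
# A real ECU would run this redundantly on every control step and cut
# power to the actuator on any failure.
from dataclasses import dataclass

@dataclass
class MotorState:
    commanded_torque_nm: float   # what the controller asked for
    measured_torque_nm: float    # what the sensor reports
    low_torque_mode: bool        # e.g. "safe" mode for a steering assist

LOW_TORQUE_LIMIT_NM = 5.0        # assumed ceiling while in low-torque mode
TOLERANCE_NM = 0.5               # assumed sensor/actuator tolerance

def safety_check(state: MotorState) -> bool:
    """Return True if the state is safe for this control cycle."""
    # In low-torque mode the actuator must never exceed the mode's limit.
    if state.low_torque_mode and \
            state.measured_torque_nm > LOW_TORQUE_LIMIT_NM + TOLERANCE_NM:
        return False
    # Command and measurement disagreeing suggests a fault somewhere.
    if abs(state.commanded_torque_nm - state.measured_torque_nm) > 4 * TOLERANCE_NM:
        return False
    return True

# The control loop would evaluate this on every single step:
assert safety_check(MotorState(3.0, 3.1, low_torque_mode=True))
assert not safety_check(MotorState(3.0, 9.0, low_torque_mode=True))
```

The point being: whether the torque path is a direct drive or a shape-shifting transmission, some logic like this has to run every cycle, so neither design removes the software-safety burden.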

But I doubt there are any patents on this. I guess it would be illogical to require vendors to use someone's patents. But there are lots of safety regulations on the topic.


Fair enough, and it makes sense. Thanks for explaining. :)


Also, even if it was xenophobic, so what? Doesn't a businessman have the right to choose who to do business with?

I believe free trade is a good thing, but it seems like many Americans mistook free trade for gunboat diplomacy (see the opening of Japan). In my understanding free trade is free as in free will: both parties voluntarily take part in the exchange of goods and services.

Also one can be patriotic in a way that he/she (no xe!) wants to preserve the cultural heritage, which is not only buildings and artifacts in museums, but also customs. Customs can be preserved by people sustaining them by continuously acting according to them. Local customs can be ruined pretty quickly via a huge influx of tourists.

Example: In my younger days I could see elderly people sitting in the small parks around playgrounds, and children playing, in some parts of the city. Now in those parts of the city what I see is that the playgrounds have been closed. Public places have been closed; no elderly talking or playing chess on a summer afternoon. Instead there are "party tourists" littering, being loud and drunk, and sometimes acting atrociously as early as 2pm on weekdays. A local custom (the socialization of the locals) has been wiped out by tourism. Slowly the locals are fleeing the "party district".

I can understand why one wants to avoid such a situation (even though Michelin tourists are not this troublesome, they start a trend, which can even lead here). If this is xenophobic, then so be it. I understand why people want to be xenophobic then.


All: if you want to understand how not to comment on Hacker News, take a look at this flamewar and never, ever behave like this on HN. It's emblematic of all that this site is not for. Note also the bannage occurring below.

https://news.ycombinator.com/newsguidelines.html


> Doesn't a businessman have the right to choose who to do business with?

Depending on the country no. In France at least, that's not how it works.


> So what?

Lots of people don't like xenophobia.

> have the right to choose who to do business with?

Ah here we go. Like the hoteliers who don't allow gay people to stay ?


The only other option is to government-mandate that, right? Not saying there aren't other unfortunate consequences of allowing a business owner to choose who they serve, but I'd say the larger problem is just that people can be... well... humans aren't great. However, putting the power in the hands of the law probably doesn't fix that.

Ultimately, why would you want to support someone who doesn't want you to be there?

Regardless, you've given me a new thing to think about. It's a complex issue for sure.


It surely helped to change how black people are treated in America; no government mandate would mean "no blacks allowed" signs still hanging somewhere to this day...


Treating a part of your own society like shit is a bit different from society as a whole deciding they want fewer outsiders.


In the era of modern travel outsiders are a part of your society.


No, they are not.


Yes, they are.


Yet we don't allow random travelers to vote in local elections.

Modern travel is big specifically because the world is a diverse place. If all societies tried to include anyone from anywhere, they'd become similar and traveling wouldn't be as great.


By that logic you'd have to argue that children, inmates, the mentally ill, and those who for whatever reason didn't register to vote in states where it's required are not part of society.

Including outsiders in a society doesn't make societies less diverse. If anything it makes them more diverse, because otherwise-similar societies might include different outsiders and differ more because of that.


Not every argument is supposed to be double-sided.

Mix-mashing all societies just gets them to the lowest common denominator. Similar societies usually receive similar outsiders, sometimes even outsiders from the same place, which just makes them even more similar. For example, northern Germany and Bavaria are quite different places and have different cultures, yet the same set of immigrants comes to both. The local food is different, but the kebabs are the same.

If each society can choose who they want to accept and who they don't... then we can agree that each society can judge for itself.


> Lots of people don't like xenophobia.

And lots of people don't like foreigners.

> Like the hoteliers who don't allow gay people to stay ?

Even that is OK in my opinion. The market will sort it out. There are clubs where gay people can have a good time. If gay accommodation becomes a niche market, there will be people seizing that opportunity. The law shall not discriminate against people, but telling people what to think and forcing them to do stuff they don't agree with is dictatorship.


If you leave it to the market you'll get segregation, whites-only stores and buses, and "no Jews and dogs allowed" signs.

Allowing these kinds of things in public space wouldn't advance humanity.


A private business is not a public place.


I've never understood that argument; if you invite unknown customers you are inviting the public. I mean, if you really want to keep it private you would have to have a guest list, imho. Which is fine, but you can't have a guest list that says "not that kind of people". Private is for things like the Sento Imperial Palace, where you guide people through your property; but when you let people in freely, it is my belief that at some point that will have to be interpreted as public space (like trademarks).

If you have a book that gives more nuance to this "a private business is not a public space" idea, I would be glad to read it.


Actually you can have a place like that. There are places you cannot enter unless dressed a certain way, but there is no specific guest list.


How so? Just because it has the word "private" in its customary English name? If you translate it to other languages the word "private" often disappears, and where it remains it serves to separate state-owned businesses from privately owned businesses. Both participate in the public market, have random customers from the public, and are regulated so that they need to display prices and not refuse service to people based on the owner's stupidity.


But it nevertheless intrudes on the public sphere; to simply deny its effect on the culture surrounding it and the society which uses it is naive. The idea that businesses exist apart from everyone and everything is a pernicious one.


> And lots of people don't like foreigners.

And that's morally wrong.


It's not. Hurting them is wrong.

I also don't like junkies. Is that also wrong? Why do you tell me what I should like, and based on what ethics do you tell me what is morally right?

What if I tell you that liking foreigners is morally wrong where I live? We don't hurt them, but we don't like them, because they keep telling us what we should do, and have kept trying to conquer us for millennia. Still, we have lived in peace for a long time; we treat our guests well, but we don't categorically like them. Each and every guest can become an individually liked person, and even unliked ones are treated fairly (given a fair trial before execution ;) Hint for autists: it was a joke)

Jokes aside: Is this immoral? Why?


It's morally wrong to hate something for an attribute they have no control over.

Is it morally wrong to hate someone who chooses to drink? Not really.

Is it morally wrong to hate someone born addicted to opioids, or born with fetal alcohol syndrome? Of course.

Being born "foreign" is no different, especially if you, as this chef does, live in a country where "nationality" is coterminous with "ethnicity" to the point multi-generational immigrant groups (ethnically Korean Japanese, for example) are considered "Not Japanese" by many Japanese people.


Nice strawman you have there!

But actually "not like" and "hate" are the same only in your dictionary.


> But actually "not like" and "hate" are the same only in your dictionary.

You're just being deliberately obtuse, now.


Personal attacks will get you banned here regardless of how bad someone else's comments are.

So will flamewars, which you've unfortunately made this site worse by participating in.

Please read the rules (https://news.ycombinator.com/newsguidelines.html) and don't do these things again.



Is singling out one person to attack always how things work?

Maybe anti-racism is simply unpopular around here.


> And lots of people don't like foreigners.

I thought we were talking about this comment. You created a strawman, and now use derogatory comments toward me. This makes me end this discussion here.


wow, I forget that people like you exist.


We've banned this account for repeatedly violating the site guidelines and ignoring our request to stop.

If you don't want to be banned on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.


wow, just wow!


We've banned this account for taking this thread into a wretched flamewar, as well as for violating the site guidelines repeatedly and ignoring our requests to stop.

If you don't want to be banned on HN, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future.


Do I understand correctly that a wishful-thinking value of 75% photovoltaic conversion efficiency is plotted in the diagram?


It means the conversion factor from installed PV in kWp to yearly yield in kWh: "Yearly sum of solar electricity generated by a 1 kWp system". It's not about the physical PV efficiency.


The 75% performance ratio represents system losses, not module efficiency. So a system rated at 1 kW(peak), which may be 5-6 square meters in size, and which is exposed to annual sunlight of 1200 kWh/m², might only produce 900 kWh of usable energy.

So efficiency has to do with the rated power per surface area, and the performance ratio reflects all the electrical and environmental system factors that reduce the amount of usable energy below what would be expected based upon module ratings and annual sunlight exposure.
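The arithmetic behind the numbers above (using exactly the illustrative figures from this comment, not data from the article):

```python
# Performance ratio (PR) worked example: PR captures system losses
# (inverter, wiring, heat, soiling), not module efficiency.
rated_power_kwp = 1.0        # 1 kW(peak) system, roughly 5-6 m^2 of modules
annual_insolation = 1200.0   # kWh generated per kWp before losses, per year
performance_ratio = 0.75     # the 75% figure plotted in the map

annual_yield_kwh = rated_power_kwp * annual_insolation * performance_ratio
print(annual_yield_kwh)  # 900.0 kWh of usable energy per year
```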


So nowadays app means a website.


It's a web app if it works on the WWW and requires JS to work. Otherwise it's a website. That is how I classify them anyway.


So an Electron-based JS client for a cloud service accessed over the WWW is a web app (not a simple app, although I'd prefer if those were also not referred to as apps)?

But a JS-heavy website is an app?


Simply install "a USB firewall" into the periscope controller (not the Xbox, but the USB host).

But my thought exactly: put a prepared controller in the "game room" (surely there is such a thing on a nuclear sub), sabotage the controllers, wait until the one in the "game room" gets plugged into the periscope. Profit.


It is not about the $30k, I guess (around 10 subs, so circa $300k in savings is not a big thing for the military), but the spirit is the key, as the mindset will spread and can potentially lead to huge savings.


Is anybody still using Apache?

I'm really interested in the whys (in addition to the obvious legacy reasons).


It's still a really nice web server. Easy to configure, incredibly reliable, twenty years of security fixes under its belt, endless docs/info available for it online. I've been meaning to learn nginx for a while, but with Apache working just fine, it doesn't feel very urgent.

Besides Apache and nginx, there aren't really any servers I'd trust for production use. Conceptually, I really like what Caddy is doing, but it's simply too young, not to mention the concerns that came up when Let's Encrypt had an outage back in May: https://github.com/mholt/caddy/issues/1680


The concerns about Caddy handling an ACME server outage are not really well founded these days, because:

1) Caddy has the most robust OCSP implementation of any web server. It caches staples locally and refreshes them halfway through the validity period so it can endure days-long OCSP responder outages.

2) Even if a certificate needs to be renewed while the ACME server is down, Caddy can endure 2-to-3-week-long outages of the ACME server because it renews 30 days out from expiration and tries twice per day until it succeeds, logging its actions along the way.
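The scheduling policy described in point 2 can be sketched like this (a simplified illustration; the real logic lives in Caddy's Go source, and only the 30-day window and twice-daily retry cadence come from the comment above):

```python
# Sketch of renew-early scheduling: start trying 30 days before expiry,
# retry twice a day, so a 2-3 week ACME outage still leaves time to renew.
from datetime import datetime, timedelta

RENEW_WINDOW = timedelta(days=30)

def should_attempt_renewal(expiry: datetime, now: datetime) -> bool:
    """True once we are within the renewal window before expiry."""
    return expiry - now <= RENEW_WINDOW

now = datetime(2017, 9, 1)
assert not should_attempt_renewal(datetime(2017, 11, 1), now)  # 61 days out
assert should_attempt_renewal(datetime(2017, 9, 20), now)      # 19 days out
```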

And because Caddy is written in Go, memory leaks like what Apache suffers are much less likely, if not impossible.

(Source: I implemented it.)


> 1)...

Two calls to OpenSSL and a few lines of shell script will fetch staples from LE whenever I want (i.e. more often than your defaults, which have caused problems before) for HAProxy to serve. How's that for robust?

>2)...

Great, so you're as reliable as... any other LE client that tries to renew before expiry, but without the ability to choose when to renew certs?


1) Caddy has never had OCSP outage problems. When LE's OCSP responders went down for a day a few months ago, Caddy was the ONLY server that kept sites online (unless nginx was explicitly configured for it; almost all Apache sites with it went down, including gnu.org).

2) Ours is the only LE client built directly into the web server, so "reliability" isn't really a comparable factor. By doing renewals automatically, Caddy's HTTPS implementation is more reliable and robust than doing it manually.


> 1)

Ok fine, you want to pretend HAProxy isn't a thing that exists, we can do that.

> 2)

That isn't necessarily a good thing. Have you ever heard of separation of concerns?

You specifically mentioned how Caddy can manage 2-3 week outages, as if it's something unique to your software.

Every LE client that can be run from cron/a systemd timer renews certificates automagically.

So where's the manual part that you keep claiming is not "robust"?


As someone who occasionally helps people with their web server/TLS/Let's Encrypt configuration, my experience is that there is a world of difference between a web server merely giving you the tools you'd need for a (relatively) sane OCSP and auto-renewal configuration, and one that takes care of everything. You should not need the amount of specialized knowledge you currently need to correctly configure OCSP on HAProxy or nginx, for example.

Failure to renew a certificate (for any number of reasons) and missing intermediate certificates are the most common issues, and both of these are essentially solved with a web server that supports ACME natively. The same applies to OCSP stapling - we're not going to achieve ubiquitous stapling support on the server side if every server admin needs to deploy a bash script calling to openssl because the native OCSP implementation in their web server is broken.


There is literally zero configuration required for HAProxy. You just store .ocsp files alongside the .pem's

Yes you need to fetch them regularly. I use the aforementioned shell script to do this.

Why do you people keep up this weird narrative that only caddy renews certificates automatically?

The native ocsp handling in HAProxy isn't broken: it just expects to have TLS material provided to it.

If someone setting up HAProxy on a server can't install a shell script calling two OpenSSL commands and setup cron to call it regularly, I don't think your big problem is "HAProxy doesn't handle ocsp". It's PEBKAC.
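For reference, the "two OpenSSL commands" approach described in this thread looks roughly like this (paths and the cron schedule are hypothetical; HAProxy serves a staple it finds in a .ocsp file next to the certificate):

```shell
#!/bin/sh
# Hypothetical paths; schedule from cron, e.g. @daily.
CERT=/etc/haproxy/certs/example.com.pem
CHAIN=/etc/haproxy/certs/example.com.chain.pem

# 1) Extract the OCSP responder URL from the certificate.
URL=$(openssl x509 -noout -ocsp_uri -in "$CERT")

# 2) Fetch a fresh OCSP response and store it where HAProxy looks for it.
openssl ocsp -issuer "$CHAIN" -cert "$CERT" -url "$URL" \
    -respout "$CERT.ocsp" -no_nonce

# Then reload HAProxy (or push the staple via the stats socket).
```

Whether this counts as "zero configuration" or as exactly the kind of specialized knowledge the parent is complaining about is, of course, the whole argument here.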


> There is literally zero configuration required for HAProxy. You just store .ocsp files alongside the .pem's

> Yes you need to fetch them regularly. I use the aforementioned shell script to do this.

It does sound like there's some configuration involved after all, namely the configuration of a cronjob, and perhaps adjustments to a bash script. This is something that the majority of server operators will not bother with, though these numbers might look slightly better for HAProxy since that's generally something you see used by more experienced sysadmins. Even with nginx, we're talking about one line of configuration for the actual stapling file - that's not the blocker, it should just work, by default. Fun fact: IIS does this.

> Why do you people keep up this weird narrative that only caddy renews certificates automatically?

I'm not sure who "you people" are, and I never made any such claim. What I'm saying is that web servers that support ACME natively leave a lot less room for the types of misconfigurations that I commonly see. I'm aware of an ACME module for apache and lua scripts for nginx which provide similar features (regarding renewal and certificates, not for the OCSP aspect), but they're not part of what you run by default when you use those web servers.

> The native ocsp handling in HAProxy isn't broken: it just expects to have TLS material provided to it.

I have yet to see you explain why this should be necessary. The web server has all the information it needs to do this. Other web servers can do this.

> If someone setting up HAProxy on a server can't install a shell script calling two OpenSSL commands and setup cron to call it regularly, I don't think your big problem is "HAProxy doesn't handle ocsp". It's PEBKAC.

This is like arguing MongoDB should continue to bind to all interfaces by default because everyone should have firewalls in place. Yes, they should, but it's never going to happen, so why can't we just have a safe default?


> It does sound like there's some configuration involved after all

How does "schedule a script to run @daily", or whatever schedule you want, at all match up to what you suggested is required:

>> You should not need the amount of specialized knowledge you currently need to correctly configure OCSP on HAProxy or nginx, for example.

If you're configuring a public facing server, a cronjob should not be considered "specialised knowledge".

> it should just work, by default

HAProxy and nginx do a fucking shit ton more than Caddy does. HAProxy doesn't really have a "default" you have to tell it what you want to do, specifically. This is a good thing.

> I'm not sure who "you people" are, and I never made any such claim.

>> By doing renewals automatically, Caddy's HTTPS implementation is more reliable and robust than doing it manually.

>> Failure to renew a certificate (for any number of reasons) and missing intermediate certificates are the most common issues,

You're both giving the impression that Caddy is able to automatically renew certificates, while no other software does this.

> I have yet to see you explain why this should be necessary.

Off the top of my head:

- Separation of concerns. I don't ask my mechanic to build me a car, or to drive me to the shops in it.

- Not every organisation uses LetsEncrypt certificates. Some will even use self-signed certificates.

- TCP or DNS load-balanced environments will want the exact same certificate & key on multiple servers.

- Caddy has already shown that trusting some "automagic" solution can bite you in the ass. Remember that time Caddy wouldn't even start, even though it had perfectly valid certificate material in its cache, because LE was down?

> The web server has all the information it needs to do this.

HAProxy doesn't know the email address I'd want LetsEncrypt to use for notifications. It also shouldn't be agreeing to LE's terms on my behalf.

> This is like arguing MongoDB should continue to bind to all interfaces by default because everyone should have firewalls in place.

No, it isn't.

I'm sorry but you simply don't get the choice of arguing anything about how creating a fucking cronjob is "too hard" when Caddy doesn't even ship with a configuration to start as a service.

"I know I just had to google the shit out of how to create a systemd unit, but fuck it, it's better than writing a damned cron job for oscp staples!"


> If you're configuring a public facing server, a cronjob should not be considered "specialised knowledge".

Knowing how HAProxy's OCSP implementation works (i.e., by looking for a $cert.ocsp file) is the specialized knowledge. Odds are, you don't know about it and just won't use OCSP stapling. This is especially true since every web server does this differently.

> HAProxy and nginx do a fucking shit ton more than Caddy does. HAProxy doesn't really have a "default" you have to tell it what you want to do, specifically. This is a good thing.

I'm not arguing that it doesn't do a lot of things Caddy doesn't. I'm arguing for sane defaults. If I tell HAProxy to enable TLS and provide a certificate that contains an OCSP AIA extension, a sane default would be to use that information to manage OCSP stapling.

> You're both giving the impression that Caddy is able to automatically renew certificates, while no other software does this.

You're leaving out the part where I say these issues "[...] are essentially solved with a web server that supports ACME natively". I'm not referring to Caddy specifically. The web server doing this natively is far less error-prone. I have also mentioned alternatives.

> - Separation of concerns. I don't ask my mechanic to build me a car, or to drive me to the shops in it.

Where do you draw the line? Most deployments let their web servers handle TLS. OCSP stapling should be part of a modern TLS deployment. You could similarly argue that using HAProxy for TLS is wrong (and that you should rather put pound or stunnel in front of it).

> - Not every organisation uses LetsEncrypt certificates. Some will even use self-signed certificates.

That's fine; you can continue to manually configure certificates, even with Caddy, and I'm definitely not arguing that this option should be removed. ACME is also an open standard, so switching to other CAs can be as simple as configuring a different ACME server url.

> - TCP or DNS load-balanced environments will want the exact same certificate & key on multiple servers.

That's why you can replace the certificate and key storage in more complex environments[1].

> - Caddy has already shown that trusting some "automagic" solution can bite you in the ass. Remember that time Caddy wouldn't even start, even though it had perfectly valid certificate material in it's cache, because LE was down?

Software can break, what a surprise. You could similarly have a broken OCSP updater script that puts empty or expired OCSP responses in your stapling file because the check is missing or incomplete and break the site for your visitors. Or your ACME client could fail on renewal because of a backend change on Let's Encrypt's end that exposes a bug in it[2].

> HAProxy doesn't know the email address I'd want LetsEncrypt to use for notifications. It also shouldn't be agreeing to LE's terms on my behalf.

You can ... provide your email address? You keep arguing that this stuff is easy, surely adding a single line with your email address won't break the camel's back. Accepting the TOS can be solved as part of the installation or configuration process, much like ACME clients do it.

Anyway, you seem to be having some kind of problem with Caddy. I'm not interested in that discussion (which is to say I'm not interested in discussing this further, I think I've made my point) - my goal is to improve the user experience of server admins when they enable HTTPS, and I have no doubt that native ACME support and automated OCSP management, both with sane defaults, would both help HTTPS adoption and improve the quality of the TLS configuration of your average web server.

[1]: https://godoc.org/github.com/mholt/caddy/caddytls#Storage

[2]: https://community.letsencrypt.org/t/acme-sh-standalone-fails...


My issue is with caddy.

FYI, the issue a few months back wasn't "software breaks".

It worked exactly as they designed and implemented it. The original issue was closed, before (as seems to be a pattern lately) they changed direction when they received so much negative feedback about their highly opinionated choices for the project.

To this day the same logic is still in caddy, but with a longer grace period. Try starting caddy offline with a cert that expires in less than 7 days.

I don't trust someone to develop a server if they think failing to start at all, is better than starting with close to expired or even expired certificates.


It's called the Ostrich Solution. You stick your head in the sand to ignore alternatives to buttress the argument that your solution is the sole solution to a problem. You then hope that no one does even the most rudimentary investigation, which is, unfortunately, getting more and more prevalent nowadays.


re: "only LE client".

https://github.com/icing/mod_md

this is being folded into the official Apache httpd repo.


According to Netcraft, 21% of the sites on the public Internet run Apache:

https://news.netcraft.com/archives/2017/07/20/july-2017-web-...

There are lots of legit reasons to run Apache:

- it works like a champ

- it's widely known

- it's got a big ecosystem (plugins, etc.)

- it has a fairly complete WebDAV implementation

- it's widely supported

TL;DR "legacy" sometimes means "good" and not just "old."


Yeah, right. It's also still prone to Slowloris, and you can take down a shared hosting server with a laptop from a coffee shop.

In this case legacy is either "I don't know anything else", "well it worked so far" or "meh".


Apache has mod_reqtimeout, which mitigates the Slowloris attack. It's been a while, but in my tests it worked pretty well.


This is your second comment that mentions slowloris. You know slowloris isn't specific to Apache, right?


I know.


Lots of reasons. First of all, it is fast and reliable. The choice of MPMs allows sysadmins to choose exactly how they want httpd to scale. Unparalleled RFC compliance. Load balancing with dynamic, runtime configuration. Failover. .htaccess files. Full support for FPM/FastCGI. Fully open source and community driven (not open core). mod_rewrite capability. Hundreds of modules for almost every situation. Most people simply take the "lazy" route and don't even investigate httpd, instead relying on the FUD about much older versions (you still get, for example, the screed that "Apache forks off a new process for each request")... Apache httpd 2.4 is just as fast and reliable as any other web server out there, nginx included. It's just not seen as the "cool and hip" web server. I guess some people are more concerned about marketing.


One driver is popular software that depends on the .htaccess construct.

WordPress, for example. It can run on other servers, but their own docs call out the issue..."Since Nginx does not have .htaccess-type capability and WordPress cannot automatically modify the server configuration for you, it cannot generate the rewrite rules for you"


... and this is why WP has its own rewrite engine, which works perfectly with nginx.


The quoted text is from their docs, and does not read that way. Some more:

"With Nginx there is no directory-level configuration file like Apache's .htaccess or IIS's web.config files. All configuration has to be done at the server level by an administrator, and WordPress cannot modify the configuration, like it can with Apache or IIS"


https://codex.wordpress.org/Rewrite_API

What you are saying is, on its own, true. It's also a solved problem for WP.


does it work with all plugins too?


It runs before the plugin hook, so yes, as far as I was able to test it a while ago, it does.


Apache is the standard solution in web hosting.

Its "killer feature" are htaccess files. It provides a possibility to give users a limited way to configure their virtual hosts, without giving them too much power. (Unfortunately there are some unfixed systemic security issues with that, namely symlink attacks...)


I'd imagine one could crush the server with pathological rewrite regexps too.


You can, and it's very unfortunate that the apache devs don't consider these things vulnerabilities. (I reported a DoS via password hashes a while ago.)


Are you kidding? Apache is by far the most used web server for dynamic content-driven sites. Its "killer" feature is its robustness, in particular for (shared or otherwise) CGI hosting (and PHP hosting, of course). Outside the web-app bubble, CGI (i.e. per-request process spawning) isn't a performance problem at all when you're using caching, which Apache httpd conveniently provides with mod_cache; mod_cache, unlike e.g. Varnish and other dedicated caching solutions, has full RFC 7234 caching with re-validation, not just "Expires:".

Apache httpd is in many ways a much more complete web server solution compared to, say, nginx (which I like as well). I prefer Apache httpd for the depth of available documentation alone, because everything is developed openly, the C source is accessible and not hard to develop with, and because of the breadth of available options (certbot/letsencrypt/ACME reference implementation on Apache, WebDAV, clustering, etc.).


Almost every DO, Vultr, Linode, etc. guide I've seen for LAMP server config uses Apache. I also suspect Apache, being the old guard, is the de facto standard in various appliances, routers, firewalls, IoT, etc. devices. It's our generation's BIND. It's probably never going away, even if better competitors come and go.

As far as stats go, according to Netcraft MS's IIS is leading, with Apache second and nginx a distant third. Of course, this data isn't terribly reliable, as Netcraft can only see front ends, which may hide what's behind the load balancer, but IIS/Apache are still kings.


So it's surprising what the A in LAMP stands for?!


There are plenty of docs on linode for nginx configurations.


Plenty =/= best


It's pretty robust these days and is battle-hardened by the many attacks on it. Plus it has support for just about everything under the sun - that is, if someone has written something that extends or relies upon a web server, there will almost certainly be apache support.

(And I'm writing this as a dev who worked on a competing web server which IM-biased-O was vastly better :)


I use Apache specifically on shared web servers in order to run each customer's web site under a separate uid/gid (for security purposes). I use it on other $work web servers just for consistency/ease of management (performance isn't an issue).
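A sketch of one common way to do that, using the third-party mpm-itk module, which switches the worker process to a per-vhost uid/gid after accepting the request (Apache's bundled alternative for CGI is suexec). Names and paths are assumptions:

```apache
<VirtualHost *:80>
    ServerName customer1.example.com
    DocumentRoot /home/customer1/www
    # mpm-itk directive: run all request handling for this vhost
    # as this user/group
    AssignUserID customer1 customer1
</VirtualHost>
```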

Mostly I use it because, after probably 20 years or so, I'm quite comfortable with and accustomed to it.

That said, I have started putting a few of my own web sites on nginx just to try it out a bit and get some familiarity with it.


Speed is one of the reasons to stay on Apache. Letting nginx serve static files and using mod_php rather than php-fpm will be faster, as long as you disable directory-level .htaccess lookups.


If you're using php-fpm over unix socket, I doubt this is true, please show some stats.
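For context, the unix-socket setup in question: a php-fpm pool listening on a local socket instead of TCP, which removes the network round-trip that makes the mod_php comparison doubtful. A minimal pool sketch (paths, user names, and sizing are assumptions):

```ini
; /etc/php/fpm/pool.d/www.conf (assumed location)
[www]
listen = /run/php/php-fpm.sock
listen.owner = www-data
listen.group = www-data
; static pool keeps the example minimal; tune for real workloads
pm = static
pm.max_children = 10
```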


Does this apply to anything other than PHP? Not saying it's not significant, just curious. Does mod_wsgi outperform uwsgi with uwsgi_pass for example?


Nginx is faster than Apache, you're talking about PHP which is very different. php-fpm is not part of Nginx.


Nginx should be faster in theory, but it's hard to show in practice, as the actual HTTP parsing is so much faster than everything else in the request path. You have to really tune them for benchmark scenarios to see any kind of difference.

What your vendor supports and what you are most comfortable with configuring should decide which one you run.

PHP, as you say, is something else entirely. It's so much slower than your HTTP server that the latter doesn't matter at all.


Not all the time... Before, when nginx was event-driven and Apache was prefork or worker, yeah. But with the Event MPM, performance is pretty much neck-and-neck.


Running apache in front of tomcat is still a very common pattern. Mostly due to connection handling and configuration around it.
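A minimal sketch of that pattern with mod_proxy_ajp, forwarding to Tomcat's AJP connector (hostname is an assumption; 8009 is Tomcat's default AJP port):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

<VirtualHost *:80>
    ServerName app.example.com
    # Forward everything to the local Tomcat over AJP
    ProxyPass        / ajp://localhost:8009/
    ProxyPassReverse / ajp://localhost:8009/
</VirtualHost>
```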


I use Apache all the time. It's great software.

It's kept up to date and it works for everything I want it to work with. This idea that because it's old and not hip, its use is therefore questionable is kinda dumb. I was disappointed when OBSD wrote their own httpd that was initially filled with stupid bugs.

If it ain't broke don't fix it.


No other server implements complete support for server side includes.
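For reference, a minimal Apache configuration enabling mod_include, the SSI implementation in question (the directory path is an assumption):

```apache
<Directory "/var/www/html">
    # Parse .shtml files for SSI directives
    Options +Includes
    AddType text/html .shtml
    AddOutputFilter INCLUDES .shtml
</Directory>
```

With this in place, a .shtml page can use directives like `<!--#include virtual="/footer.html" -->` or `<!--#echo var="DATE_LOCAL" -->`, as well as the `#if`/`#elif`/`#else` flow control that partial implementations usually lack.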


nginx with lua or perl module?


Please explain exactly how that provides complete support for the server side includes standard.


A popular CDN vendor noted that SSL with Apache was faster than with nginx at very large scale (although it was later replaced with something else).


Thanks to Allah the almighty Jihad is still an allowed surname in the EU.

I guess it is a sign of the health of the economy of the EU.


They are deep in denial here about cultures whose main job was to guarantee continuous existence, not technological advances. Nice try.


I'm not sure how you reached this conclusion from this news, given some insider info from other comments.

Maybe you have a relevant story to tell about DDG?

