Hacker News | TimJYoung's comments

That is a fantastic read, thank you. It's amazing how many structures and institutions have evolved from that simple concept.


No, I think what people are implying is that commercial licensing of software solves the trust/responsibility aspect of software. With proprietary software, there are legal remedies to malfeasance.


In which case that is still both obviously nonsense and irrelevant on two counts.

It's irrelevant because this was about FOSS vs. closed source, not about commercial licensing vs. noncommercial licensing. Even if commercial licensing were the solution, that says nothing about whether the commercial licence should be FOSS or closed source.

And it is also irrelevant because there are broadly the same legal remedies to malfeasance in all cases. If you are breaking the law, you are still breaking the law if you are publishing your source code, and you are still breaking the law if you are doing it non-commercially.

And in so far as you mean liability for defects rather than malfeasance, it is obviously nonsense that there are any generally applicable effective legal remedies against terrible proprietary code if you look at the real-world quality of products in the market. You might be able to put together a contract that helps with that, but (a) that is far from the norm and (b) is obviously still irrelevant to whether the code should be open or closed.


Two things:

1) The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.

Personally, I prefer the commercial license/source-included model, with the emphasis on the author/company getting paid to ensure that situations like the one described here are avoided. You can then have additional educational licenses for ensuring access to developer tools for educational purposes, but that's up to the author/company.

2) If you directly pay someone to write software, I would expect any such arrangement to include the source code as part of the work product, regardless of the ultimate visibility of the source code to outside parties.


The history of cryptocurrency in particular and business law in general shows that adding money to the system doesn't automatically result in trustworthiness. Even the giant corporations providing cloud computing do decide to abandon products and discontinue services, or dramatically raise prices.

As someone else suggested, maybe the way to go is to rely on foundations. Maybe individuals shouldn't be taking on the burden of maintaining software alone? Maybe JavaScript needs a more slow-moving organization like Debian to handle package integration, with all the bureaucracy that entails?


Absolutely - adding money doesn't automatically result in trustworthiness. What it does do, however, is make the transaction fall under legal commerce, which gives the purchaser/user rights and remedies that they do not have with free (as in cost) software.

With foundations or any other form of over-arching bureaucracy, you risk stultifying software developers and harming innovation. It's really, really hard to beat the self-organizing aspects of free markets combined with commercial legal frameworks.


Well okay, but these aren't opposites. Large businesses cooperate internally using bureaucracy. They cooperate externally using (and funding) foundations and other open source work.

There is market demand for stability and it can be a competitive advantage over innovative but unstable alternatives. (Consider why Go and Docker are so popular.)

And why do companies start and fund foundations? Because their customers have doubts. It's better for stability than a market that's not based on standards.


Do you have any rights with paid devices? A Kindle comes with 8 hours, 59 minutes of disclaimers:

https://www.youtube.com/watch?v=sxygkyskucA


>The PC software world did run for quite a few years on the model of predominantly commercial/proprietary software, most of it being closed-source, so it's not like it is some far-fetched idea that doesn't work in economic terms.

And now Microsoft uses Linux on the majority of their own cloud offerings. Open source beats proprietary software on economic terms a lot of the time. It doesn't matter that both can work in economic terms, it matters which one is better in economic terms.


I don't think this book has been completely written yet. I think we're just now starting to see some of the major issues with FOSS, so don't throw up that "Mission Accomplished" banner yet.

FOSS is very much like the internet, in general: it was great when it was a small group of technical, like-minded, dedicated individuals working towards common goals. It starts falling apart, however, once you introduce the rest of the world into the system because the world primarily works on the basis of ruthless self-interest.


FOSS was going nowhere until the rest of the world got introduced to it. It was of more or less purely academic interest for a very long time.


Of course it was constructed. Aliens didn't just drop it from the heavens - the US specifically released the internet to the public and commercial interests, whereas previously it was limited to academic interests. Once commerce was up and running on the internet, it was simply a matter of companies getting big quickly and inserting themselves as middlemen between customers and producers/distributors. The fact that most of these companies eschewed profits for many, many years in favor of growth makes it self-evident that they understood that capturing the majority of the network (winner-take-all) was the primary goal.


Not just house - Wax Trax! Records was very much responsible for exposing quite a few people to industrial music in the 80s, also:

https://en.wikipedia.org/wiki/Wax_Trax!_Records

Chicago often doesn't get the cultural recognition that it deserves. My wife and I love it there, and visit as often as possible.


That title is pretty misleading. The breakage being reported is in the default program file associations (Default Programs under Control Panel/Settings):

https://support.microsoft.com/en-us/help/4028161/windows-10-...


The file association bug is actually present in 1803 as well, not just 1809. A cumulative update some months ago introduced the bug and subsequent updates never fixed it, and somehow it got integrated into the ISO for 1809 so you'll be fucked on a fresh install. A fresh install of 1803 will be fine... so long as you assign all extensions before updating to the latest build.

It's a complete clusterfuck and makes me wonder why I'm so lazy and incapable of moving to Linux.


Yea the title is total clickbait.

On the other hand, this is a comical fail on Microsoft's part:

> some Win32 programs can't be set as the default program for a given file type. So if you want certain files to always open in Notepad, for example, you're currently out of luck.


It's a good thing that my windows 10 N hasn't had a successful update since 1803.


"Your whole reply is a straw man. The author is claiming that nothing explodes b/c the error is correctly handled, not b/c it was silently swallowed."

Yes, but I think that this is exactly what pilif is objecting to: the fact that there is no way for the callee to force the caller to deal with the error condition (I'm not sure that this is entirely correct, given that Go has the panic/recover built-ins that are discussed later in the article, but then you might as well use exceptions).

IMO, this is very similar to what happens with use-after-free and other types of memory reference errors: they are extremely hard to debug because the symptoms/results of the error can be totally disconnected from the original source of the error, in some cases by minutes or longer. You simply don't want to have a situation where improper state is allowed to linger and further pollute/corrupt any subsequent computations.
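
To make that concrete, here is a quick hypothetical sketch (the function and values are made up, not from the article): the callee hands back an error value, but nothing stops the caller from throwing it away and carrying on with the zero value as if it were valid state.

    package main

    import (
        "fmt"
        "strconv"
    )

    // parsePort returns an error, but nothing forces the caller to check it.
    func parsePort(s string) (int, error) {
        n, err := strconv.Atoi(s)
        if err != nil {
            return 0, fmt.Errorf("bad port %q: %w", s, err)
        }
        return n, nil
    }

    func main() {
        // Careless caller: the error is discarded and the zero value (0)
        // quietly flows into the rest of the program as "valid" state.
        port, _ := parsePort("not-a-number")
        fmt.Println("connecting on port", port) // prints: connecting on port 0

        // Careful caller: the error is dealt with at the point of failure.
        if p, err := parsePort("8080"); err != nil {
            panic(err) // or return the error up the stack
        } else {
            fmt.Println("connecting on port", p)
        }
    }

Nothing complains about the careless version at compile time, which is exactly the "lingering improper state" problem described above.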


And yet forcing the caller to deal with errors still doesn't prevent improper state in any way.


It does if they don't handle the exception, which is the situation being described.


Errors cause side effects, which is how you get to improper state. How does forcing the caller to handle errors help with removing side effects?


Because the caller either deals with the error (and fixes the state) or they don't, but there's never a case where the callee was unable to force the caller to fix the state. If the caller then goes on to just handle/suppress the exception and continue with the invalid state, then at least they're making an explicit decision that is on them.


That's the point: forcing callers to handle errors doesn't force them to do it properly.


It's still strictly better than just letting people not bother to handle them, which is never proper.


Is that true? What if you have an error that happens once every 10^9 requests and the service failure isn't critical (no one dies)? Isn't it better to just not bother with the error, let the service keep running and not worry about it?


That depends on whether you are going to be the person who has to find out why corrupted data has been written, when it was corrupted, what the original value was, or, even worse, whether a given record is corrupted at all, because the corrupted records look exactly the same as some perfectly valid ones.

As someone who has been in that boat, I can tell you that request termination due to an uncaught exception is infinity times better than the horror of debugging data corruption, even (or especially) when it only happens every 10^9 requests.


If it's 10^9 requests, then obviously they are low-cost processes in the first place, so you're better off just re-doing the job that failed.


That's what I mean. Yes.

But when the caller misses checking some error code, nothing will notice that it has failed, and now you have corrupt data to deal with. That's why I think blowing up at the moment something goes wrong is so valuable.
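
As a toy sketch of what I mean (hypothetical names, nothing from the thread): a must-style helper turns an ignorable error return into an immediate panic, so the failure surfaces with a stack trace at the point of the problem instead of a bogus value quietly getting saved.

    package main

    import (
        "fmt"
        "strconv"
    )

    // must converts an ignorable error return into an immediate, loud failure.
    func must[T any](v T, err error) T {
        if err != nil {
            panic(err) // blow up here, at the point of the problem
        }
        return v
    }

    func main() {
        // Silent path: the error is dropped, balance becomes 0, and the
        // bogus value gets "saved" as if it were real data.
        balance, _ := strconv.Atoi("12x34")
        fmt.Println("saving balance:", balance)

        // Loud path: the same bad input terminates right here, before any
        // corrupt state can be written.
        balance = must(strconv.Atoi("12x34"))
        fmt.Println("saving balance:", balance) // never reached
    }

Whether panicking is the right policy is a separate argument, but at least the failure is visible at the moment it happens rather than minutes later in the data.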


"perfect is the enemy of good"


Linux "won" in the sense that every single one of those devices is special-purpose, and not for general-purpose computing. Linux is helping those that want to lock down computing so that it is consumption-only, which is completely antithetical to the goals of FOSS. Creation happens on the desktop, and right now the desktop is primarily Windows and Mac.


You are very correct. I've often wondered why some of the people who are ambivalent about having Linux on the desktop don't see the huge benefits to their cause.

Example: I have an older laptop with a 5400RPM hdd that I've since bought a SSD to image and put a new OS on. Currently it runs Windows 7. Hate Windows 10 after being a lifelong Microsoftie (DreamSpark worked as it turns out), been learning Linux at work, so let's try it at home. I work on computers for a living, I can handle the learning curve, right? I will be reinstalling Windows 7 so I can AskWoody Group B it until it is out of support, then move it to Windows 7 POS Ready to get security updates until the last possible second. From there? I guess a mac because I refuse to use Windows 10 on personal devices aside from maybe a LTSB version. Different rant though, back to Linux on desktop:

I _want_ to run NeonOS or Debian KDE on that laptop, but cannot as the tools I use are not available on those platforms. The "fundamentalists" as portrayed in this article would say that I shouldn't have chosen to use the tools I chose. That doesn't help my problem though, and won't do anything to get the 'unwashed masses' using the better software en masse. Here better software = FOSS, since we are being a hypothetical fundamentalist a la Stallman.

If these Stallmen could "hold their nose" long enough to build Linux such that it could run most rando Windows programs without huge fuss and individual WINE config tweaks, the long term benefit would be that they would be positioning themselves to overtake Windows in usage on the desktop as well as the server side.

Knock-on effects are strong:

1. More users, more developers to create software

2. More developers, more improvements made to system codebases, package maintenance, dev tooling

3. More Linux, more Linux compatibility out of the box

4. More Linux on desktops, more people who've never been tainted by MS DreamSpark (like me) and due to baby duck syndrome prop up MS Windows, Office, VS and the Microsoft way

5. MS might even open source Windows itself (a core product) if it is no longer profitable, making the job of dealing with Windows compatibility A) easier and B) nearly redundant

So the FOSS people win the war, at least on the desktop Linux vs Windows front.


That's not quite true. You can easily decompile any binary and glean all sorts of valuable information from it. On Windows you can find out which API calls it makes and can even use detours/trampolines to redirect such calls:

https://github.com/Microsoft/Detours/wiki/Using-Detours

At one point I wrote a little piece of software as a proof of concept that used detours to redirect any file I/O in IE that was deemed "unsafe" to a special in-memory file system.

The bottom line is that you don't need the source code in order to tell if an application is "phoning home" or if it makes suspicious API calls. In fact, it is often easier to simply monitor how the software interacts with the host system to determine if it is performing (possibly) malicious actions. IOW, if you're concerned about a certain piece of software, then auditing the source code isn't going to be as good a solution as just sandboxing the application so that it's impossible for the application to do something bad.


Nobody is going to use software for anything critical that doesn't have someone "waiting around" to fix any bugs that appear, even if they are several years down the line from the original release (and yes, this happens a lot). So, what you call "rent" is, in actuality, the cost of keeping the lights on and the original developers around so that such long-term support can be provided. And yes, these same developers can and do provide new features (which can also introduce new bugs, thus restarting the clock) during that same time.

This isn't to say that software companies don't exist that just milk the hell out of their customer base. But, that's a problem with any industry, not just software, and is an economic phenomenon, not something intrinsically bad about proprietary software. Typically, it's a problem with a lack of competition and the ability of incumbents to erect artificial barriers to such competition.


> Nobody is going to use software for anything critical that doesn't have someone "waiting around" to fix any bugs that appear

This is exactly why you're often able to make money and sustain yourself with free software.


Most companies do not want to deal with the hassle of finding someone to perform support/bug fix work on their software. They want to be able to call up a vendor and get an answer today about a bug fix or support. Most open source projects do not have this level of support.


...and those that do are often making money out of it.


Not really, because there are a thousand others that can out-offer you.


That's also a selling point, though, especially if you're a small company. Knowing that they weren't locked in to us, and therefore helpless if we closed or pivoted, helped sell our services many times.

