Hurd's current status should be considered in the context of how the GNU project is run. In the early days GNU had some paid developers, but now its projects are run purely on a volunteer basis, or, in the case of projects like GCC, by developers who are paid by companies to work on them.
Hurd is late because there isn't any need for it anymore: we now have plenty of free kernels. And GNU isn't sponsoring it for that reason, just like it isn't sponsoring a GNU MTA.
But work continues because a few people are still interested. There are a lot of neat ideas in Hurd. You get a lot of nice things once you move most of the kernel to user space.
The best example of this is custom user-mounted filesystems (translators). On Hurd you can, for example, mount a remote FTP site at ~/ftp without being the superuser; GNOME and KDE each implement this on their own precisely because users can't do it on other systems without special permissions.
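To make that concrete, here is roughly what it looks like with the stock Hurd translators (a sketch; the exact translator names and flags may differ between Hurd versions):

    $ settrans -ac ~/ftp /hurd/hostmux /hurd/ftpfs /   # attach the FTP translator as an ordinary user
    $ ls ~/ftp/ftp.gnu.org/gnu/                        # browse a remote site like a local directory

No root, no /etc/fstab entry, and the translator runs only with the invoking user's privileges.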
Linux has been accumulating a lot of microkernel-like features over the years, e.g. dynamic loading of device drivers, FUSE, etc. In time it'll probably absorb every Hurd feature worth having.
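FUSE already gives Linux users a rough equivalent of the FTP example above; assuming sshfs is installed and the user is permitted to use FUSE, something like this works without root:

    $ sshfs alice@example.com:/srv/files ~/remote   # user-space mount of a remote tree (names are illustrative)
    $ fusermount -u ~/remote                        # unmount it, again as a plain user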
Richard Gabriel was right. Unix is good enough, and few people care that HURD has some nicer features when those features can be approximated in the wrong layers through piles of clever hacks.
Indeed. Worse is better, and I use Linux and not Hurd because it does what I need even if it doesn't look shiny on paper.
I'm not convinced that Hurd's translator idea is a hack, or that it's doing it in the wrong layer.
That every program that presents FTP on my system (GNOME, KDE, lftp, Emacs/TRAMP) has to implement its own VFS is a hack; so is having to use iptables or another forwarder if I want to run a non-root daemon on a port below 1024, and so is having to recompile and reboot if I want to enable some minor feature in the kernel.
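For reference, the port workaround I mean usually looks something like this on Linux (one of several approaches; the daemon path is just an example):

    # forward privileged port 80 to an unprivileged daemon on 8080
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
    # or grant the binary only the capability it needs (example path), instead of full root
    setcap 'cap_net_bind_service=+ep' /usr/local/bin/mydaemon

It works, but it's configuration living outside the program rather than a permission an ordinary user can simply exercise.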
EDIT: The last two paragraphs are redundant because I misread rbanffy's reply, and I oddly can't reply to his new one so I'm putting this here.
I am quite involved with virtualization. The interesting thing is that some people are now running their software on the virtualized bare metal, i.e. using the hypervisor but without an operating system.
This is essentially treating the hypervisor as an exokernel operating system. And it seems to be becoming more common.
Perhaps we are really coming to exokernels in a very roundabout way.
Yep, this is really cool. I don't remember what it was called, but didn't Sun do something like this, where the JVM basically interfaced directly with Xen?
I see exokernels as the future, but I'm slightly biased, as I'm involved with one...
But not at all as a general computing environment. It makes sense, once your application is finished, to move it to a kernel that has just what you need. However, it makes no sense when you start development to try to guess what calls you might end up using.
You can use it as a general computing environment.
> However, it makes no sense when you start development to try to guess what calls you might end up using.
I agree. That's why you will be using libraries to duplicate what your OS did for you. The Exokernel guys call this a library OS. Libraries are already quite good at providing abstractions.
The difference from a conventional architecture is that the kernel only provides secure multiplexing between applications on the same machine. The abstractions over the hardware are provided by libraries instead. It's easier to experiment with libraries than with operating systems.
The MIT exokernel operating system page (http://pdos.csail.mit.edu/exo.html) explains this in more detail. If you look at the first picture and replace `Exokernel' with `hypervisor', you get modern virtualisation.
I'm semi-intrigued, but I'd like to see a sign of life beyond a page from '98. Not trying to be cruel, just saying that I'd like to see what's going on now.
But you can go the Smalltalk-style route. You write your app and then use a tool to pare down the environment (kernel included) to just what you use and nothing else.
The HotSpot-style C1X optimizing compiler for Maxine is under heavy and constant development right now; I wouldn't be too surprised to see it ready for that sort of thing fairly soon, as these things go.
I don't think the article mentioned it, but there was a point, well after the Linux kernel reached prominence, at which it was understood that the HURD was just a back-burner project. GNU has still had a huge impact on everything from IT data centers to Apple, and arguably you wouldn't even have initiatives like Wikipedia without it. It's like the building block that allowed everyone else to put the cherry on top.
Certainly they had a huge impact, but if GNU had not been around, I wonder whether something else would have filled the same niches and subsequently had a similar impact.
For instance, the Berkeley people were also working on free stuff around the same time (indeed, the article points out that BSD was a contender for the GNU kernel).
I'm wondering if GNU is to free software what, say, Alexander Graham Bell was to the telephone.
There may have been someone else to do all the work that RMS and GNU did, but the reality is that while everyone else was too busy or too fatalistic about the situation in the late 1980s, only RMS et al stood up and took on the responsibility for everyone else.
Also, note that Alexander Graham Bell invented the first practical telephone. RMS is an advocate. Much of his software work is specifically not original: he was trying to emulate UNIX. He and GNU are more like founding fathers of the US than inventors of a device. Yes, eventually someone else might have declared independence in the western colonies and gone to war with Great Britain, but we credit GW and company.
I'm fairly sure the parent's reference to Bell is with respect to details that have recently come to light that Bell may have "stolen" all or most of the technology behind the first phone. That is, Bell's importance has less to do with his ability to create the technology and more to do with packaging and popularizing it.
I was not aware of that. Nonetheless, it seemed to me that the parent post implied that if RMS and GNU didn't come to be, then someone else would have come up and done the same thing, and thus what they did is somewhat less of an achievement. My post was mostly in disagreement with that idea.
> Berkeley people were also working on free stuff around the same time
The problem with BSD is that it creates (or at least doesn't remove) an incentive to take whatever you can and run with it, an incentive that has proven irresistible to companies. Every proprietary Unix has appropriated large portions of BSD and, with few notable exceptions, none gave improvements back or freely added original work to the common code pool, because their competitors could take whatever you gave them and compete against you with it.
GPL-like "viral" licenses negate the threat by ensuring any code you contribute cannot be used as a competitive advantage against you.
If it weren't for RMS and the invention of GPL-like licenses, I seriously doubt we would have a healthy open-source ecosystem.
>If it weren't for RMS and the invention of GPL-like licenses, I seriously doubt we would have a healthy open-source ecosystem.
No one can tell what would have happened if that were the case. And expressing your personal opinion doesn't change that.
>The problem with BSD is that it creates (or at least doesn't remove) an incentive to take whatever you can and run with it that has proven irresistible for companies.
And what is wrong with that? Developers know what they're getting into when they license their software under the BSD/MIT licenses. It's better that companies take high-quality BSD/MIT-licensed code instead of reinventing the wheel by creating their own crappy implementation. I don't even recall any successful high-profile proprietary fork of popular BSD/MIT-licensed software.
I agree with you on everything else, but there are tons of spectacularly successful forks of BSD-licensed software -- NetApp, Cisco, Juniper, and most of their competitors have proprietary operating systems that are forked from FreeBSD.
Thanks for the info. The only successful software that I could think of was the XNU kernel (Mac OS X) but it is not a fork even though it borrows heavily from FreeBSD for POSIX compatibility.
> No one can tell what would have happened if that were the case
You can look at the market and see if you could form a Red Hat around BSD. Call me back when you get funded.
> Developers know what they're getting into
And that's precisely why stuff like BtrFS is not BSD-licensed: because the following week, Microsoft would launch their new and improved next-generation NTFS. There may be no successful BSD-branded ("ClosedBSD"? "ArrestedBSD"?) fork (BTW, is JUNOS open? I couldn't download the source), but certainly many pieces of BSD software end up inside proprietary software, and nobody knows exactly how those wheels were modified.
>You can look at the market and see if you could form a Red Hat around BSD.
I don't really see how monetizing BSD/MIT source code would be different from monetizing GPL code.
>Call me back when you get funded.
I am sure Apple, Microsoft and Adobe make more money than Red Hat. So again, what is your point? Not everything in this life is about money.
>And that's precisely why stuff like BtrFS is not BSD-licensed.
And that's precisely why FreeBSD folk have ZFS and Linux folk don't.
>Because the following week, Microsoft would launch their new and improved next-generation NTFS.
Porting ZFS from OpenSolaris to FreeBSD hasn't been easy. What makes you think that porting another modern and complex file system from Linux to Windows would be easy for Microsoft? And anyway, it would be awesome if we could get native read/write on Windows partitions from Linux.
>many pieces of BSD software end up inside proprietary software, and nobody knows exactly how those wheels were modified.
Good; how much has Apple contributed back to FreeBSD? Apple also forked the LGPL-licensed KHTML to form WebKit.
Does Apple have a real model that benefits open source/free software, compared to the much smaller Red Hat? The only companies I know of that contributed as much to open source/free software as Red Hat were Sun and Google.
But to be fair, Apple has released some open source code (GCD comes to mind) and, as far as I'm informed, has funded some open source software (LLVM, for example).
It is unlikely that the Berkeley people would've pushed for a completely free (software) kernel without the Stallman influence:
"I think it's highly unlikely that we ever would have gone as strongly as we did without the GNU influence," says Bostic, looking back. "It was clearly something where they were pushing hard and we liked the idea."
>It is unlikely that the Berkeley people would've pushed for a completely free (software) kernel without the Stallman influence:
<sarcasm>And no one else would have ever gotten around to creating a BSD/MIT-licensed kernel, since everyone would have thought "if the Berkeley people didn't do it, why should we?"</sarcasm>
"like giving the Han Solo award to the Rebel Fleet" -- he was a terrific speaker a decade ago, even if he did get off the rails quickly raving about GNU-slash-Linux.
I was one of the hacker employees of the FSF back in the early days of the HURD project. The pay was modest by industry standards but fair (every employee, hacker or not, got the same pay -- later the formation of a union was encouraged).
Some very good and lasting work got done back then, in spite of our rather unconventional work habits. I'm thinking especially of all the work done laying down the foundation of GNU libc. Roland McGrath got a lot of code rolling and, perhaps more importantly, established a pretty good standard of coding conventions and quality expectations.
I did not myself work directly on the HURD but in our small office I did have chats with McGrath and Bushnell about it. The sentiment around the design was, I think it fair to say, somewhat giddy. The free software movement was (and is) all about freeing users from subjugation to those who provide software. The HURD's microkernel architecture and the structure of the daemons would securely free users from subjugation to system administrators - each user could securely invoke a set of daemons to create the operating environment he or she wished, no special permissions required.
It was well understood back then, and even a point of discussion in academia, that a microkernel architecture posed some difficult problems for performance (related mostly to a greater number of context switches as messages pass between daemons rather than syscalls being handled by a monolithic kernel). Rashid's work had suggested that this problem was not so terribly significant after all. And so, at least to me, it felt like the GNU project was not only doing this shoestring-budget freedom-fighting hacking, but also leading near the bleeding edge of CS research made practical. Well, that was the theory, anyway, and we were mighty proud of ourselves and generally excited to be there.
Not much, but some, of the hacking by the core staff took place "from home". You must remember that this was before any kind of data-over-voice or particularly high-bandwidth connection was commonplace, so that hacking was over a modem connected to a text terminal. Mostly we hacked in a shared office which, if you saw it, you'd think "Wow, that's a slightly large closet." We were, at that time, guests of MIT.
With all due respect for RMS, and I don't think he'd especially disagree with this (though I could be wrong): he was an absolutely terrible project leader for the hacking part. As history has shown, his popularity among some notwithstanding, he's extraordinarily good at the political part of his work. Leading the technical project? Not so much.
It wasn't so much that he dictated bad technical choices. Even the choice to use Mach might have worked out. On the contrary, he was relatively "hands off" in most technical matters, only micromanaging if you really dragged his attention to some detail. It was more that he lacked any coherent overall strategy for completing GNU, and his broad directives underestimated the amount of work required and were sometimes scattered, even bordering on inconsistent. It just wasn't his strength.
The original vision for the GNU system, at least as I understood it, was to - sure - grow a unix clone, but then to build a user space that much more closely resembled that of lisp machines. Emacs (with its lisp extensibility) was taken to be a paradigm for how interactive programs might work. Originally, it was even envisioned that the window system would be lisp based.
One early change to the original GNU vision occurred when it became clear that X11 worked pretty well and was here to stay and would be free software. As a practical matter: just use that.
Later, as mentioned in other comments here, the EGCS fork of GCC caused issues - ultimately leading to the displacement of an FSF-appointed project leader. There is some back story to that. The company Cygnus (later acquired by Red Hat, founded by M. Tiemann et al.) had been advertising to customers that not only could they develop customized extensions to GCC, but that they could shepherd those extensions into the "official releases". There was frustration at Cygnus and some other firms that the FSF branch was not merging these changes quickly enough or was arguably being too prickly about the nature of the changes. As nearly as I can tell those sentiments led to the EGCS fork, and RMS was ultimately put in the position of having to choose between "blessing" that fork or simply losing any claim at all to the future of GCC.
Around this time, I am told but can not myself verify, RMS was also under pressure from some key FSF advisors or supporters to exit the software development business and focus on the politics. Whatever the motivation, the FSF shed most of its in-house development efforts.
The pattern of losing the original GNU vision continued in the controversy over Gnome vs. KDE. Originally, KDE had licensing issues and did not pass muster with the FSF as being free software. Those problems have since been fixed but at the time it led to RMS' proclamation that Gnome would be the desktop for GNU -- a radical departure from what was originally conceived. Later, as you may have read, RMS came to describe Miguel as a traitor to the free software movement.
Somewhere in there - I'd have to look things up to get the timelines exactly right - Debian took off, in part to try to fill a void in the FSF's leadership at assembling a complete GNU system. Bruce Perens penned the now-famous "Debian Free Software Guidelines".
A small group of relatively wealthy influencers, including Tiemann, met with Eric Raymond and conjured up the allegedly business-friendly "open source" notion. The main differentiation they sought from the FSF is that they would not condemn proprietary software or describe themselves as a freedom movement - they sought to emphasize the economic advantages of having volunteers do work for no pay. In my view, their main purpose upon founding was to attempt to politically marginalize RMS (a project in which they've had some success).
Bushnell moved on to a different stage of his life and, I guess it's fair to say, a higher calling. McGrath moved on to what I gather is a sweet job for Red Hat. The GNU project was gutted. Its institutional memory and such momentum as it may have had was gone. This was in part because RMS was not so great as a project leader but also, in large part, because the project was under significant attack.
In my humble opinion, there would be plenty good to come of a resurrection of the GNU project. I don't necessarily mean a resurrection of the HURD although I suspect we can do better than the Linux kernel. I do mean a return to a concentrated effort to build the kind of user space originally envisioned. While such a project could have enormous social benefit, I don't see any way to institute it and find support enough to carry it out.
Programmable applications are indeed the way things should go. And the programmability has to be immediate. Compare the gentle slope of customizing Emacs to the burden of writing an Eclipse plugin. Most of the things that go by the name "plugin architecture" have a large barrier to entry because of the complex, heavyweight relationship of a plugin to its host environment.
> Programmable applications are indeed the way things should go. And the programmability has to be immediate.
Then you should look at Smalltalk. Just deploy with the compiler and dev tools in the image. You can quickly visually inspect every object in the image and write a script against it, and run it instantly.
Can you describe more about what you see as this originally-envisioned user space?
Briefly, yes.
Two aspects were of particular interest to me, although if you ask the HURD developers from back then they could likely point out others:
1. Interactive programs should be uniformly customizable, extensible, and self-documenting - roughly in the manner of Emacs. That sounds like a trite thing. After all, the programs we wound up with have all three traits. For example, Gimp can be customized, new commands can be written in extension languages, and it has a great deal of on-line documentation. I mean something more specific but hard to convey concisely. Most of Gimp is not written as extension packages, which betrays an architectural weakness in contrast to emacs. Gimp's interaction model and customization model is awkward and ad hoc, compared to emacs. The on-line documentation of Gimp is not designed in a way such that its enhancement is a natural part of writing extensions. I don't mean that Emacs is perfect in these regards - it certainly isn't. I do mean that the architectural approach it takes is vastly more sane than what the interactive programs we wound up with use. (Don't mean to pick on Gimp. The observations apply as well to the Open Office suite, to Gnome, to Firefox, and more. We have a big, barely maintainable heap of discordant and vaguely conceived functionality.)
2. On a more mundane level, even the command line got horked. You know how (nearly) all GNU command line programs have --version and --help options and so forth? At least when I was working at FSF the notion was that that coding standard was a stepping stone to a shell much more like the much-loved shell on the Tops-20 operating system. The standard was supposed to be gradually refined so that those standard "--help" messages could be automagically used by the shell to intelligently prompt for arguments to a program (a rough sketch of the idea follows).
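You can get a crude taste of what was intended with nothing more than today's bash; the sketch below (hypothetical function name, and it only scrapes long options) derives completions for a command from its own --help output:

    # sketch: complete --options for a command by scraping its --help text
    _opts_from_help() {
      local cmd=${COMP_WORDS[0]} cur=${COMP_WORDS[COMP_CWORD]}
      COMPREPLY=( $(compgen -W "$($cmd --help 2>/dev/null |
                      grep -oE -- '--[A-Za-z][A-Za-z-]*' | sort -u)" -- "$cur") )
    }
    complete -F _opts_from_help ls tar   # try: tar --<TAB>

The Tops-20 idea went much further, prompting for and describing each argument in place, but a consistent --help format is what was supposed to make that kind of thing possible.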
I guess I should add that a lisp - most of us that I knew assumed Scheme - would figure prominently as the main extension language (rather than Emacs lisp). Why a lisp? Well, it had a proven track record and that was our aesthetic -- I'll leave it at that.
Could you be a little more specific about this? E.g. perhaps who was involved, or at least how it happened and whom it affected, and how.
And, yep, a good user space of the sort you've described would be great (and is in fact something I'm trying to get myself jump started working on, in a FoNC style: http://vpri.org/html/work/ifnct.htm)
> Could you be a little more specific about this? E.g. perhaps who was involved, or at least how it happened and whom it affected, and how.
Briefly:
The EGCS schism was, in my view, purely and simply a successful effort to wrest control over GCC away from the FSF. The FSF sought to develop GCC <i>for the aims of the GNU system</i> but a few firms, and their employees, sought to develop GCC for the aims of those proprietary software firms or the aims of proprietary software firms which were their customers.
The invention of "open source" was, in my view, purely an attempt to disassociate the resource of GNU source code and free software practices from the FSF's freedom mission. To put it crudely, and you can see this even in Raymond's original essays - they sought to recast the movement to give users software freedom into a movement to give business free labor.
After the Linux kernel started to take off there was, additionally, the non-trivial task of assembling complete and supported distributions. Although there was a community process underway that showed some promise (Debian), the firms that took the lead and grew quite large turned their back on that process, kept their system integration work internal and proprietary, and meanwhile solicited volunteers to work on other matters. That is to say that while the community might have been far further along by now than it is at distributing a decent GNU system, those firms fought (and won) to prevent that from happening.
Those are some examples. There are more but I did say I'd be brief.
I think it's quite disingenuous to describe the development model of Cygnus and RedHat as being more proprietary than how the FSF was doing things at the time. ESR is an epic douchebag, but he nailed y'all in his description of the FSF's development model in The Cathedral and the Bazaar.
At least Cygnus took input from contributors and even actually bothered to work on the project themselves. The audacity!
It's not disingenuous. When I said "proprietary" I wasn't referring to Cygnus but rather to Red Hat and other complete-system distribution firms. Internally, they maintain a great deal of infrastructure to smooth the assembly of complete distributions. They maintain that proprietary infrastructure to have a competitive advantage. They decline to assist public projects with that assembly of a complete system.
That is their right, under law and free software licensing terms. Nevertheless, it is a wrong in the sense that they are refusing to help the community that has so benefited them, and actively attempting to keep the larger community from self-organizing to eliminate the need for that closely held infrastructure. These firms sing a song about the benefits of community cooperation but they do not practice what they preach. They take free labor from others. They give back labor in areas that are strategic to them. But they withhold labor that would actually advance software freedom in substantial ways.
Canonical did the same thing with Launchpad for five years before finally releasing the whole thing, including the build system (which was expected to be kept proprietary).
Heh. Yes, in the very early days of Canonical I chatted with Mark about the possibility of working there and that intention to closely hold some of the software I'd be working on was one of the red flags for me. In my view, it was partly because Canonical went down that path that GNU Arch got forked and horked (mainly by Canonical employees).
The 4.4BSD-Lite mentioned in the article as a possible base for GNU is an outcome of this lawsuit: BSD without a few legally unclear parts, i.e. "not a complete kernel, in contrast to Mach".
This is a story with a theme, a theme of stubborn refusal to admit failure in the face of the huge successes of others. Linux showed that open source can develop a vibrant operating system project. The BSDs have shown that a tightly controlled engineering effort can be achieved within an open source context. And Apple has shown that you can even make an operating system off of the Mach lineage! When faced with repeated failure, one has to look at the common denominators. It's GNU.
As noted by tl, EMACS suffered a nasty fork, and it bears pointing out that RMS started with a fully functional version of EMACS written by none other than James Gosling of Java fame. RMS replaced Gosling's MockLisp with a real Lisp (both bytecoded), but the guts remained largely the same.
Is not the big difference dynamic vs. static scoping?
That was a matter of much debate, which started to be settled when RMS did GNU Emacs in 1984, the same year Common Lisp was also officially released for the first time; the latter's biggest breaking change was perhaps static scoping.
I thought only of the dynamic scoping. I just found out that Emacs Lisp doesn't have tail recursion optimization, thus dooming you to use side-effects (because recursion can't be used for loops).
But I also understand that Lisp isn't a family of functional languages. Functional is just a style that's possible and encouraged in Lisp, not required.
Well ... Scheme is the only big Lisp that guarantees TCO, while Clojure provides a recur special form plus a syntax for mutual recursion, since the JVM as of yet does not directly support TCO.
Obviously some (many?) implementations of Common Lisp provide it, but I'm not sure it's something you can particularly depend upon (I got out of the Common Lisp community in ... 1984 and haven't seriously used it since then).
Then again, I myself wouldn't necessarily describe Common Lisp as a modern Lisp anymore; it's essentially been frozen in amber since it was standardized and it's filled with legacy cruft, perhaps most especially its being a Lisp-2. Scheme's standardization process has become glacial, but there is one (well, two, RnRS and the SRFIs). Clojure is much like Lisp Machine Lisp in the early days, although it should be past the worst of breaking changes by now.
I use Emacs to do all my editing. But I think it's a failure compared to what it could have been.
There's nothing inherent about Emacs that ties it to being an editor; it could have been rewritten a long time ago in Scheme or acquired an FFI. Then things like GNOME or TextMate might have been rewritten as part of the Emacs ecosystem, while looking exactly like they do today.
Instead Emacs is a very specific thing to a small crowd, and adding things like an FFI to it has been vetoed by Stallman in the past. That's one of the reasons I think that we still don't have Lisp as a first-class systems programming language on GNU, as the initial announcement promised: http://www.gnu.org/gnu/initial-announcement.html
GCC is much the same. There was a proposal many years ago to make a libgcc. Stallman vetoed that on the basis that things like C++ support might be written without contributing the changes back to the FSF.
That's a fair point, but LLVM+Clang are quickly replacing GCC in some areas due to GCC's monolithic architecture. You can't (easily) write tools that use GCC as a backend for working with C source code, but you can with the LLVM tools.
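A small example of the reuse meant here (file names are illustrative): Clang can stop at LLVM IR, which any other LLVM-based tool can then pick up.

    $ clang -S -emit-llvm hello.c -o hello.ll    # C source to readable LLVM IR
    $ opt -O2 -S hello.ll -o hello.opt.ll        # run optimization passes as a standalone step
    $ llc hello.opt.ll -o hello.s                # lower the IR to native assembly

GCC never exposed a comparably clean seam for outside tools, which is exactly the sort of thing the parent says Stallman vetoed.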
> Instead Emacs is a very specific thing to a small crowd, and adding things like an FFI to it has been vetoed by Stallman in the past. That's one of the reasons I think that we still don't have Lisp as a first-class systems programming language on GNU, as the initial announcement promised: http://www.gnu.org/gnu/initial-announcement.html
So RMS failed in execution here, on what could have been an extremely good toolkit, one that would have made the GNU project even better known for its technical quality.
If anything, the GNU project is a tremendous success, even if it is sometimes dogged by its own short-term political interests. If he hadn't listened to his political interests and had instead focused on the technical quality of his work, he would have achieved something much greater for the GNU project and his movement.
It certainly has a lot of inertia. But most of its big projects started in the old days are huge monolithic codebases that are starting to lose favor for various reasons: Emacs because not everyone likes the UI (and it's impossible to replace, unlike other parts), GCC because Clang+LLVM are more modular and reusable, automake because it's a huge, hard-to-learn mess while newer systems aren't, etc.
It's also losing some users, like Apple and the BSDs, due to the GPLv3 switchover.
I think that in the long term the GNU project will become increasingly irrelevant due to most of their crown jewels being hard to maintain, and there being political opposition to changing that.
Those are examples of projects succeeding in spite of GNU. We had the Emacs/XEmacs schism, where rms fought against lucid over features, most of which are in emacs now. How many people actually like gcc? Why is Apple pushing to replace it (see LLVM / clang) when they can play nicely with cups or khtml?
I was under the impression that they wanted to use llvm in their proprietary IDE (amongst other proprietary apps) which would rule out GCC for purely licensing reasons.
Xcode is already (and has always been) a wrapper around gcc/gdb. Clang provides a cleaner interface for integrating with an IDE (among a lot of other things).