
What makes Fuchsia different from so many other attempts at writing a new OS? They aren't writing a new OS, at least, not in the complete sense.

They are using the IPC system developed in and extracted from Chrome. They are drawing everything in userspace with a fast graphics renderer, with the logic for all system components written in Dart via the Flutter project. They use musl for the libc. They are using Little Kernel (LK) as the core kernel.

As a long-time Linux desktop user myself, I'm really excited about this project. A secure desktop without tons of system calls? Userspace graphics? Not HTML/JS based? But could still be used for development? Yes, please!

It's really easy to compile and get it running. Try it out!



I wonder if they intend to keep Flutter as the primary UI toolkit, or if that was just used out of convenience. If this really ends up being an Android & Chrome OS replacement, and a large portion of the code written for it could also run natively on iOS, that'd be fantastic. Maybe even cool enough to get me to use Dart ;)


Is there a reason not to use Flutter? From what I've seen of Flutter, it seems like a very competent cross-platform UI toolkit. In fact, I wouldn't be surprised if they announced at I/O 2017 that you can start using Dart/Flutter to write cross-platform Android/iOS apps.



(disclaimer: I work on the Flutter team.)

You can use Flutter today to write an app that runs on iOS and Android. :)
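
If you haven't seen it yet, a minimal Flutter app is just a few lines of Dart. The sketch below targets the current technology-preview API, so details may shift:

  // A minimal Flutter app: one Dart codebase, running on both iOS and Android.
  import 'package:flutter/material.dart';

  void main() {
    runApp(new MaterialApp(
      title: 'Hello Flutter',
      home: new Scaffold(
        appBar: new AppBar(title: new Text('Hello Flutter')),
        body: new Center(child: new Text('Same code on iOS and Android')),
      ),
    ));
  }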


How is it for writing cross-platform desktop apps? There seems to be a disappointing trend of UI toolkits only targeting iOS and Android, leaving out the still very essential need for cross-platform desktop toolkits in languages other than C++.


Are there any Google apps made using Flutter at the moment?


How is it in terms of speed?

On one hand, the website says that it's compiled to native code (so it can be the same speed as, or faster than, Java), but on the other hand, it's based on a softly typed language, which makes optimization difficult (even with V8, JS is still slower than native code).

Side question:

I understand that Dart was softly typed because it was supposed to replace/compile to JS, but what advantage do soft types have over "implied types" (like auto or Go's :=) combined with operator overloading (to allow string+int, for example)?
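
To make the comparison concrete, here's a rough sketch in Dart itself (assuming strong mode, where var is inferred rather than dynamic; note Dart does allow operator overloading, but deliberately not string+int):

  // Hypothetical example: inferred locals plus a user-defined operator.
  class Money {
    final int cents;
    const Money(this.cents);
    Money operator +(Money other) => new Money(cents + other.cents);
  }

  void main() {
    var total = new Money(150) + new Money(50); // inferred as Money under strong mode
    print('total: ${total.cents} cents');       // total: 200 cents
    var n = 1;
    // print('x' + n);         // error: no implicit string+int in Dart
    print('x' + n.toString()); // explicit conversion: x1
  }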


Even the original, untyped Dart was designed to be easier to optimize than JavaScript (e.g. it's class-based, not prototype-based, so the runtime doesn't have to work as hard to optimize object access).

Recently Dart has added a strong mode (https://github.com/dart-archive/dev_compiler/blob/master/STR..., http://news.dartlang.org/2017/01/sound-dart-and-strong-mode....).

One of the reasons for strong mode is to enable better optimizations (https://www.dartlang.org/guides/language/sound-faq#why-stron...).
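
To make that concrete, here's a small example of the kind of implicit downcast that classic Dart allowed but strong mode rejects (a sketch; see the strong mode docs above for the precise rules):

  void main() {
    List<String> strings = <String>['a', 'b'];
    List<Object> objects = strings; // upcast: fine everywhere

    // List<String> again = objects;  // classic Dart: implicit downcast, allowed;
    //                                // strong mode: static error.
    var again = objects as List<String>; // strong mode wants the cast made explicit
    print(again.first.toUpperCase());    // A
  }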


I'm going to save these links for the next time I get downvoted for saying that the JS object system and dynamic typing suck for performance :).

When JS performance threads come up I always mention that JS will never be as fast as Go/Java. Some people have a tantrum when I explain these issues. The sources I usually cite are dense and people just continue to downvote without reading them.


You might want to watch this talk on AOT compiling Dart, I'll link right to the perf benchmarks: https://youtu.be/lqE4u8s8Iik?t=9m28s

I think it's a work in progress, but the main benefit, at least at the time, was faster startup.


AOT compiling was initially all about iOS (where JIT isn't allowed), but you can now do that everywhere with "application snapshots", which were recently added in Dart 1.21.

Unlike script snapshots, application snapshots also include CPU-specific native code in addition to the serialized token stream of the source code.

So, you don't just skip the parsing stage, you also skip the warmup phase. Your application starts instantly and it runs at full speed from the get-go.

https://github.com/dart-lang/sdk/wiki/Snapshots
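
If you want to try it, something like this works. The script-snapshot flags are long-standing; the application-snapshot invocation in the comments is from my memory of the 1.21-era tooling, so verify it against the wiki page above:

  // hello.dart: a trivial program to snapshot.
  //
  // Script snapshot (skips parsing on later runs):
  //   dart --snapshot=hello.snapshot hello.dart
  //   dart hello.snapshot
  //
  // Application snapshot (also skips warmup; flag spelling from memory,
  // check the wiki above for your SDK version):
  //   dart --snapshot=hello.snapshot --snapshot-kind=app-jit hello.dart
  //   dart hello.snapshot
  void main() {
    print('Hello from a snapshot!');
  }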


I tried out their demo apps on Android and they were pretty fast, but there were still some hiccups. It's definitely not as fast as good Java apps, but it is pretty close. Also, the animations are on the level of iOS apps - far superior to Java Android apps.


Dart has strong mode (it's enabled by default for Angular2 projects). Check this talk on sound Dart for more details: https://www.youtube.com/watch?v=DKG5CMyol9U


Dart supports strong typing.


I read on the Flutter site that text input still wasn't implemented, so I'll probably wait some more.

I didn't find a doc on how low-level this framework goes. Or to put it differently, where does it plug into the iOS stack? OpenGL? CALayer? UIView? WebView? etc.


I was just looking into this two days ago...

It's fairly low-level. It uses Skia, a 2D rendering engine Google bought 12 years ago that powers Google Chrome / Chrome OS, Android, Firefox, and Firefox OS. Skia is built on top of OpenGL, but they've been working on a Vulkan backend that is close to feature parity.

This seems to be how they get the high-performance animations. It also explains how they are able to stay compatible across devices (you really only need C++ and OpenGL).

Here's more info from their FAQ: https://flutter.io/faq/#what-technology-is-flutter-built-wit...


From what I watched of the Dart conference in Munich last year, Flutter was still very much a WIP; not sure how much it has progressed since then.


Not that I'm aware of, although the site still says it's a technical preview.


Dart has really picked up a lot of traction in the past few years. If you do end up using Dart, I've got the perfect server-side framework for you to try out... ;)

https://github.com/angel-dart/angel/wiki


I think this definitely qualifies as a new OS. It's true that not everything is completely reimagined compared to what has come before. So what? Great artists steal.


>They are drawing everything in userspace with a fast graphics renderer...

Dumb question, does this mean that it's limited to software rendering only? You need to go through the kernel to talk to the GPU, right?


What happens these days is that GPU buffers are mapped into user space, where they are filled with textures and drawing commands. Then, they are submitted to the kernel GPU scheduler for execution. See e.g. https://lwn.net/Articles/283798.


These days you don't even submit them via a kernel call; you've got your own GPU MMU context, so you can kind of just go to town within your own process.

That's sort of the whole point behind Mantle/Vulkan/Metal/DX12.


In addition to the other comments, I would be excited about a graphics driver not being able to take out my system.


Does this actually happen with enough regularity to care?

In the cases where it does, I imagine it's a situation where you're actually doing 3D-accelerated renders of the user interface. In which case, when the graphics subsystem craps out, hasn't your running system been rendered effectively unusable anyway?

I get that this is a nice idea in theory, but does it actually improve anything in practice?


Yes? I haven't played games in a while, but when I did, crashes were frequent enough to be annoying, and ~100% of the time it was video drivers.


The Linux i915 driver manages to hard freeze my laptop every few days without any 3D games or even a compositing window manager ... It would be lovely to take it out of the kernel and isolate it with IOMMU.


Right, but that's my point. You're playing a game and the display crashes — I guess it's nice in theory that the rest of your system stayed up, but you're still more than likely just going to reboot, no?

If not, how exactly do you plan to restart the graphics process?


Windows automatically restarts the GPU driver after a crash; the screen goes black for a few seconds and then all the windows come back up. It can also upgrade the GPU driver without a reboot. It's not used often but it's pretty handy.


I think that was the moment I realised I liked Windows 7 more than XP. Installed new graphics drivers, expected "you must reboot your system", instead the screen went black (I had a moment of 'oh crap') and then came back up with 'installation complete.' Very impressive if you're used to XP.


This impressed me a lot a couple of months ago, when my GPU driver started crashing about every 5 seconds while playing a game, and Windows kept restarting it, without even killing the game. The performance was abysmal of course, but still pretty neat.


Lots of options here:

ssh from my phone and restart the driver.

Go to a fallback ui, and restart the driver.

Have a system util notice the driver is misbehaving, and restart it.


If the OS itself doesn't automatically recover the graphics process, you could always fire up your OS's built-in screen reader and use it like a blind person until you can get the graphics back up and running.


"The graphics drivers for Fuchsia also exist as user-space services. They are logically split into a system driver and an application driver. The layer of software that facilitates communication between the two is called Magma, which is a framework that provides compositing and buffer sharing. Also part of the graphics stack is Escher, a physically-based renderer that relies on Vulkan to provide the rendering API"


Depends on the OS design. It's really not a good design, but you could run everything in ring0 (x86), supervisor mode (ARM), etc. However, now any fault (user or OS) could halt your system and you have no memory protection or process separation.


You also have Singularity-style software-isolated processes, where the OS statically verifies memory safety so that it can safely run all of the code in ring 0.


You only have to run the privileged process that talks to the GPU in ring0, right?


No. GPUs typically work over the PCIe bus, and one can talk to PCIe from user space as well. In legacy systems like Linux, the mapping of virtual to physical addresses and the generation of scatter-gather lists (SGLs) resided in the kernel. If one moves the same functionality to user space without a loss in performance (which is what Magenta seems to do), there's no benefit to kernel GPU drivers.

Then there's the whole "GPL mafia" in the Linux world who'd like to force vendors to open up their drivers by moving as many of the critical pieces into the kernel as possible. In theory, you cannot write a kernel driver without violating the GPL. Fuchsia will have no such impositions. If someone wants to open their driver up, they can. If they believe their offering is superior and a secret sauce needs protecting, they can keep it closed.


Having control over the virtual-to-physical address mapping and scatter-gather lists for GPUs is effectively equivalent to kernel-level access anyway, though, because it lets you carry out DMA to and from arbitrary physical memory addresses. Some proprietary drivers for mobile GPUs have even given this level of access to untrusted user processes in the past, leading to privilege escalation to root.


Not with IOMMU (Intel VT-d and similar).


And GPUs these days have their own MMUs too.


Unfortunately, the GPL mafia is right, and hardware vendors will continue to abuse their users with proprietary blobs unless they are forced into change.


How long has the GPL Mafia been saying this?

Do you see many vendors being forced into change?


AMD is finally changing. Unfortunately, the amdgpu-pro limbo right now is not ideal... but I am very excited for the future.

AMDGPU is already very usable. I remember ~5-10 years of waiting to use my Radeon 9600 with a recent kernel until the free Radeon driver was finally up to snuff. Catalyst and fglrx were awful.

Proprietary drivers just don't make sense. They are practically unmaintainable.


For GPUs, AMD is changing and even nVidia is using Nouveau for some chipsets.

For non-GPUs, only some WiFi drivers are out-of-tree (Realtek and Broadcom).


Typically only big vendors can afford not to change.


You make open source drivers sound like a bad thing.


No, but pushing that agenda via architectural level changes is not exactly a way to get a good architecture.


An architecture is a structural design made to achieve certain goals. They just happen to have another goal to achieve.


It's even bad by that standard. When you compromise the function of the system to achieve that kind of goal, you end up losing out on both.


Offering no alternative but "open source" on Linux is certainly not the most business-friendly way to go about it.


It certainly is. "Business" just needs to internalize the fact that baking a bigger cake gets you more cake than fighting like a starved peasant over crumbs.


Why would I or anyone care if it's business-friendly? The people who just want to sell hardware should have no qualms about open source.


Being business-friendly is not a goal, and shouldn't be a goal.


> Being business-friendly is not a goal

Sure, it is, for lots of people. Even, apparently, the FSF, hence the reason non-consumer products are not subject to anti-tivoization rules in GPLv3.


I wonder why the BSDs haven't caught on for embedded appliances and phones while Linux did, with a more restrictive and less business-friendly license. Sure, some companies like iXsystems and Juniper adopted BSDs, but the vast majority used Linux.


The manpower behind Linux is superior. Not sure if it's the "vast majority", but for many business cases following the GPL(v2) rules simply is not a problem.

Both the PS4 and the Nintendo Switch run on FreeBSD, by the way.


>Both the PS4 and the Nintendo Switch run on FreeBSD, by the way.

Not exactly, from what I've read: the PS4 runs 'Orbis OS', which is based on FreeBSD 9, while the Nintendo Switch uses the network stack from FreeBSD, but also stuff from Android and the custom OS they wrote for the 3DS.


Linux was available under a free license earlier (and, for a while, alone), and as such developed a huge lead in community that no other free OS has caught up with; that has quality, choice, and skill-availability impacts that generally dwarf licensing issues for businesses.


Being business-friendly is not and should not be anyone's goal in licensing software. Being business-friendly is shit.


In my opinion, any particular license can only be friendly or unfriendly towards particular business models, not towards business in general.

Some businesses feel threatened by some open source licenses and other businesses are using the same open source licenses to do the threatening.

The funding for many important open source projects comes from global corporations that use it as part of their strategy, sometimes dominating entire industries based on open source.

A large number of small consulting businesses are based entirely on open source software as well.

Open source is not meant to be anti-commercial, nor would that make any sense, because "commercial" is what most of us do to make a living. That is not what takes away anyone's freedoms.

What does take away many freedoms is the fact that none of the widely used open source licenses fulfill their original purpose.

Linking rights to distribution once meant that users of the software got those rights. Now, in the age of the data center, end users get no rights at all and software can be modified freely by those who run the data centers without granting anyone access to those modifications.

Software has become more opaque than it has ever been since the dawn of the PC age. Even without access to source code, we had more control over Microsoft Excel than we now have over anything that runs in a data center, regardless of whether it is nominally "open source" or not.


As to your last point, this is what the AGPL is intended to solve.


I know, but almost no one uses it.


"Being business-friendly is shit". --> Who pays your bills mate? Fairies or a business who employs you, pays you a salary and you know, the inherent Darwinist biological urge to move up the food (value) chain to survive built into all living beings?


The "GPL mafia", how dare they demand their software's license be respected!?


  > They aren't writing a new OS,
  > at least, not in the complete sense.
Neither was Linux. It was based on Unix.

Neither was Unix. "The success of UNIX lies not so much in new inventions but rather in the full exploitation of a carefully selected set of fertile ideas..." --- Dennis M. Ritchie and Ken Thompson, "The UNIX Time-Sharing System"


Maybe Google got tired of systemd trying to take too much control... LOL!

Or it could be a UEFI experiment...

"This project contains some experiments in software that runs on UEFI firmware for the purpose of exploring UEFI development and bootloader development." -https://github.com/fuchsia-mirror/magenta/blob/master/bootlo...


Oh they're using Dart. That's really cool. Dart is surprisingly pleasant to work with.


Yes I'm quite excited as well.

Building userland OSes is a fascinating (if not new) idea that I really love to tinker with myself. It really would be nice if they can pull it off.

I read through what I could find of the IPC code a while back. Currently it seems workable, but a bit "baroque"; then again, if, as you say, it is what's working inside Chrome, I guess it is battle-tested.

Messaging has always been, and still is, the future for very loosely coupled systems, and driving it all the way down would be really great.

Been following this for a while and can't wait to give it a whirl. Plus everyone likes to root for shiny new OS design projects.


Curious about the advantages besides being secure. I normally have garbage computers, so I like to run lean on stuff; not to the point of using Arch, but Ubuntu with i3. Also, will "regular" programs still run? As a developer I need VS Code, FileZilla, a file manager, Kate, and a LAMP stack installed.

Are there concerns about non-dedicated graphics cards, or about running on, say, an ARM-based processor?

Also, I'm not sure what you mean by "...easy to compile..." How do you compile this? I'll have to read up on it. Get that checksum, bruh.

I remember trying to use Slackware and for me that was a lot of work.

Edit: Oh, I see that at this time it is limited to three physical machines. Interesting about the Pi 3 part. I wonder if I could try it in VirtualBox. But why try it in the first place?


Not often someone says lean and VS Code in the same statement.


Egacs. Eight Gigabytes And Constantly Swapping.

Moore's Law. Give it another decade and it'll be Etacs.


It’s not very fast, but it’s surprisingly easy on the resources, especially memory.

Compare using VS Code w/ the TypeScript language service to any type of dev stack that includes the word "Scala."


> It’s not very fast, but it’s surprisingly easy on the resources, especially memory.

Open VS Code and then Sublime and be amazed.

Atom and VS Code can never be 'easy on the memory' because they have to load an entire Electron instance just to idle.


> Open VS Code and then Sublime and be amazed.

At what? Using 5% more total resources? Great tradeoff!


It costs MONEY!!! Noooo.... haha


Scala with Emacs + Ensime


Do you have any experience using it with scala.js?


It's no joe, but it sure as hell beats NetBeans.


If you tell me your beef with Netbeans I might be able to help.

I have significant experience with all the big Java IDEs, plus 3 months full-time with Visual Studio (in addition to testing it on and off for years). I use Sublime regularly. I can use vim productively for server config. I have used emacs enough to get a taste of it.

But lately I find myself getting back to NetBeans whenever possible, not because of price but because of features, sensible defaults and stability.


It runs like a slug and sits atop a mountain of RAM that it hoards like a jealous dragon.

OTOH, my hobby programming is done on an HP Mini 101.


Ok I guess even I would possibly go with something else if I was seriously resource constrained.


Yeah, I used to use Atom. I know Vim is what I should use for "lean", but it's not easy to use. I did see some pretty themes, 'space wrap' or something like that.


> I know Vim is what I should use for "lean" but it's not easy to use.

It's easy to use, but it's not easy to learn.


The ol' learning curve


Vim is well known for its bloated, messy codebase. I'd hardly call it "lean".


It's a messy C codebase, which is orders of magnitude leaner than a clean electron codebase.


I wasn't aware; I just assumed that since it's command-line, with no GUI, it must be "lean". I don't know.


There are plenty of "mid-range" editors and IDEs: Mousepad, gedit, Geany, Kate...


I recently tried Geany, as the Chromebook I bought is ARM-based, so VS Code wouldn't work on it. It's alright.


I always recommend Kate to people who want a programmer's editor but don't want to deal with the esotericness of Vim.

(I personally alternate between both Kate and Vim, depending on whether I'm in more of a GUI or a CLI mood)


s/Vim/EMACS/ and you're correct. Unless you want to learn Vi, Vim is not what you want; rather EMACS is what you want, and I will agree wholeheartedly that it is "not easy to use."


"The little kernel" appears to be a popcorn brand FWIW.


The modular approach seems interesting. Call by meaning?



