Hacker News | mtzet's comments

A normal desktop with non-soldered components is more repairable, cheaper and can also run on stock Linux?

The only selling point is the form factor and a massive amount of GPU memory, but a dGPU offers more raw compute.


AMD APUs can run stock Linux.

All those SteamOS handhelds are on AMD.


> with non-soldered components is more repairable

This is literally a limitation of the platform. Why even bring that up? Framework took a part made by AMD and put it in their devices.


I don’t get this take. Is it so hard to understand that a computer operates on a giant array of bytes?

I think the hard thing to understand is that C's pointer syntax is backwards ("declaration follows use" is weird).

I also think understanding how arrays silently decay to pointers and how pointer arithmetic works in C is hard: ptr+1 is not address+1, but address+sizeof(*ptr)!

Pointers are not hard. C is just confusing, but happens to be the lingua franca for “high level” assembly.


> Is it so hard to understand that a computer operates on a giant array of bytes?

Beginner programming languages have universally (since BASIC and Pascal) been designed to hide this fact. There's nothing in a beginning Python course that explains the true nature of computers. You learn about syntax, semantics, namespaces, data structures and libraries. But there's nothing that says, "a computer is endlessly incrementing a counter and executing what it finds where the counter points". And this is probably partly because of "go-to considered harmful", which posited that unrestricted control flow (a fundamental fact of how computers actually work) is harmful to reasoning about programs.

It's probably objectively true. But a lack of go-to also restricts people from seeing the fundamental truth of the indistinguishable nature of data and instructions in the Von Neumann architecture. Which may also make it difficult to explain GPU computing to students (because it must be understood by contrasting it with Von Neumann architecture).


So let's focus on the case where I'm setting up a bunch of bare-metal hosts as servers. What's the value proposition of using FreeBSD over Debian/Ubuntu if we're not counting familiarity?

Either way the experience will be CLI-first, so this is a tie.

ZFS integration is one point. If that's important to you, then you'd want to pick a distro like Ubuntu with first-class support. As far as I understand, all major development happens on the ZFS-on-Linux branch, so this should be okay.

As the original post points out, FreeBSD used to have unique features as selling points: zfs, dtrace, the network stack (before SMP became ubiquitous?), kqueue, jails. I'm sure there are others. But these days it seems Linux has caught up with developments like ebpf, cgroups, namespaces and io_uring.

I'm sure the fragmented nature of Linux means that some of these low-level techs are easier to use on FreeBSD. The counterpoint is that the higher-level stack is more well-supported on Linux. You may not have to care too much about the details of namespaces and cgroups if high-level docker/kubernetes/... tooling works for you.

What am I missing?


That's a great summary that details what I was suggesting.


> special case where all memory can be easily handled in arenas

That seems to be an unfair bar to set. If _most_ objects are easily handled by an arena, then that still removes most of the need for GC.

I like Jai's thesis that there are four types of memory allocations, from most common to least common:

1. Extremely short lived. Can be allocated on the function stack.

2. Short lived + well-defined lifetime (per frame/request). Can be allocated in a memory arena.

3. Long lived + well-defined owner. Can be managed by a subsystem-specific pool.

4. Long lived + unclear owner. Needs a dynamic memory management approach.

If you want to make the claim that tracing GCs surpass manual memory management in general, you should compare against a system written with this in mind, not one that calls malloc/free all over the place. I guess it might be fairer to compare tracing GC with modern C++/Rust practices.

I agree that for most systems, it's probably much more _practical_ to rely on tracing GC, but that's a very different statement.


I agree that "all" might be too high a bar, but "most" is too low, because even if most of your objects fall into categories 1-3, sufficiently many objects in category 4 would still make your life miserable. Furthermore, it's not like arenas take care of everything: they still require some careful thinking. E.g. Rust's lifetimes don't automatically ensure a correct use of arenas in all cases. A language like Zig is very friendly to arenas, but you still have to be careful about use-after-free.

Now, I know very little about Jai, but bear in mind that its author doesn't have much experience at all with servers or with concurrent software in general, which is 1. where objects with uncertain lifetimes are common enough to be a serious problem, and 2. a huge portion of software being written these days. Games are a domain where it's unusually common for nearly all objects to have very clear, "statically analyzable" lifetimes.


> 2. Short lived + well-defined lifetime (per frame/request). Can be allocated in a memory arena.

You now have an "arena management" problem. Figuring out the correct arena to allocate from can be a problem. (You want to put all objects that become unused at the same time into the same arena, but that's not the same as all temporary objects allocated between time t and t+delta.)


Most software in the industry is slow because it's doing a lot of stuff that it shouldn't. Oftentimes additional "optimization" layers add caching, but make getting to the root of the issue harder. The biggest win is primarily getting rid of things you don't need, and secondarily operating on things in batch.

My playbook for optimizing in the real world is something like this:

1. Understand what you're actually trying to compute end-to-end. The bigger the chunk you're trying to optimize, the greater the potential for performance.

2. Sketch out what an optimal process would look like. What data do you need to fetch, what computation do you need to do on this, how often does this need to happen. Don't try to be clever and micro-optimize or cache computations. Just focus on only doing the things you need to do in a simple way. Use arrays a lot.

3. Understand what the current code is actually doing. How close to the sketch above are you? Are you doing a lot of I/O in the middle of the computation? Do you keep coming back to the same data?

If you want to understand the limits of how fast computers are, and what optimal performance looks like I'd recommend two talks that come with a very different perspective from what you usually hear:

1. Mike Acton's talk at cppcon 2014 https://www.youtube.com/watch?v=rX0ItVEVjHc

2. Casey Muratori's talk about optimizing a grass planting algorithm https://www.youtube.com/watch?v=Ge3aKEmZcqY


Strongly agree. That's perhaps less true for the software I work on these days (lapack), but I've seen that so many times over my career. I'm also a big fan of "Efficiency with Algorithms, Performance with Data Structures" by Chandler Carruth at CppCon 2014. https://youtu.be/fHNmRkzxHWs


Processors doing out-of-order execution doesn't change the semantics of the code. That's very different from the example where gcc just throws away the assignment.

The idea that he just needs to accommodate the compiler people is silly. Compilers exist to serve programmers, not the other way around. It's entirely reasonable to disagree with the compiler developers and use a flag to disable behaviour you don't want.


It does when you have a weak memory model and multiple threads involved.


I agree that go and rust have different areas, but that was less clear when they were getting started. Back then go was trying to figure out what it meant by 'systems programming language' and rust had a similar threading model.

Another point is that they do share similarities, which we might now just describe as being 'modern': they're generally procedural -- you organize your code into modules (not classes) with structs and functions -- they prefer static linking, they use type inference for greater ergonomics, the compiler includes the build system and a package manager, and there's a good formatter.

The above are points for both rust and go compared to C/C++, Python, Java, etc.

So why do I like go? I think mostly it's that it makes some strong engineering trade-offs, trying to get 80% for 20% of the price. That manifests itself in a number of ways.

It's not the fastest language, but neither is it slow.

I really dislike exceptions because there's no documentation for how a function can fail. For this reason I prefer go style errors, which are an improvement on the C error story. Yes it has warts, but it's 80% good enough.

It's a simple language with batteries included. You can generally follow the direction set and be happy. It lends itself to simple, getting-things-done kind of code, rather than being over-abstracted. Being simple also makes for great compile times.


> I agree that go and rust have different areas, but that was less clear when they were getting started

That I agree with.

But Go is anything but modern on the language front. It shares almost nothing with Rust, which actually has a modern type system (from ML/Haskell).

Even if we disagree about exceptions (I do like them, as they do the correct thing most of the time, don't mask errors, and include a proper stack trace), Go's error handling is just catastrophic; being an improvement over C, which is even worse, is not a positive.


I'm not arguing that go has modern tech, but rather that it has modern sensibilities. This means not trying to force 90s-style OOP, preferring static linking for easier deployment, including a build system and package manager with the compiler, and preferring static types with type inference to dynamic types.

This differentiates go, rust, zig, odin etc., from languages like C++, Java, C#, Python etc. I think it makes sense to describe that difference as one of modern sensibilities.


go's error handling is not catastrophic, it is very good


> I really dislike exceptions because there's no documentation for how a function can fail. For this reason I prefer go style errors, which are an improvement on the C error story. Yes it has warts, but it's 80% good enough.

I’m not a go developer. How does go document how a function can fail?

A Java developer can use checked exceptions so that some information is in the signature. For unchecked exceptions the documentation must explain.

I guess in Go the type of the error return value provides some information but the rest needs to be filled in by the documentation, just like the Java checked exceptions case.


> I’m not a go developer. How does go document how a function can fail?

There's no magic to it. Errors are values, so it's part of the function signature that there's an error to check. In C++ any function can throw an exception and there's no way of knowing that it won't.

It's true that go doesn't document what _kinds_ of errors it can return, but at least I know there's something to check.


But that doesn’t document _how_ a function can fail. Just that it _can_ fail.


> as best as I understand it, because of the content of JeanHeyd's blog post on reflection in Rust.

I'm having trouble finding it. Can anyone link this post?



Thanks!


posted a day or so ago and was flagged, I think it was this

https://news.ycombinator.com/item?id=36091242


Off the charts? Compared to completely at-will employment, maybe. You generally have to pay 3 months' severance + 1 month per 3 years of employment, for a maximum of 6 months' severance after 9 years of employment. That's it.


True, but what I noticed is that in many of these countries you can't just start layoffs whenever.

Frequently you need to hold consultations with unions or staff representatives, and present an actual business case for the layoffs and why people can't be retrained or redeployed instead.

That hits the brakes on any impulsive layoffs, and it adds another buffer of several months.


As someone who knows nothing at all about the success of PayPal, can you elaborate on why?

I'm also curious about his role in the success of Tesla and SpaceX. I personally find those to be two of the most interesting startups in a long while, and have been inclined to think that Musk being involved in both to be unlikely to be a coincidence.


At least in the book PayPal Wars, there is a notion that Musk effectively bought his way into PayPal. He was running a competing payment service named X.com. PayPal had better mindshare and tight eBay integration. Competition was intense.

So the two companies merged, and focused on just one brand: PayPal's. Reading between the lines, it wasn't an easy marriage.


The book Founders at Work opens with a chapter about Max Levchin at PayPal. One paragraph:

>We had this merger with a company called X.com. It was a bit of a tough merger because the companies were really competitive — we were two large competitors in the same market. For a while, Peter took some time off. The guy who ran X.com [Musk] became the CEO, and I remained the CTO. He was really into Windows, and I was really into Unix. So there was this bad blood for a while between the engineering teams. He was convinced that Windows was where it's at and that we have to switch to Windows, but the platform that we used was, I thought, built really well and I wanted to keep it. I wanted to stay on Unix.

And eventually, to stop Musk breaking the thing by rewriting it for Windows at a time when they were very busy fighting fraud on the network, Peter Thiel kicked him out as CEO.

>...I was like, "You gotta go, man." My whole argument to him was, "We can't switch to Windows now. This fraud thing is most important to the company. You can't allow any additional changes. It's one of these things where you want to change one big thing at a time, and the fraud is a pretty big thing. So introducing a new platform or doing anything major—you just don't want to do it right now." That was sort of the trigger for a fairly substantial conflict that resulted in him leaving and Peter coming back and me taking over fraud.

