marhee's comments | Hacker News

> Could anyone summarize why a Windows/macOS desktop now needs so much more RAM than in the past

Just a single retina screen buffer, assuming something like 2500 by 2500 pixels at 4 bytes per pixel, is already 25 MB for a single buffer. Then you want double buffering, but also a per-window buffer, since you don't want to force redraws 60x per second and we want to drag windows around while showing their contents, not a wireframe. As you can see, just that adds up quickly. And that's only the draw buffers, not to mention all the fonts in simultaneous use, the images being shown, etc.

(Of course, screen buffers are typically stored in VRAM once drawn. But you need to draw first, which happens at least in part on the CPU.)
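The arithmetic above can be sketched in a few lines. The window count and per-window resolution below are made-up illustrative numbers, not measurements of any real compositor:

```python
# Rough sketch of how uncompressed draw buffers add up.
BYTES_PER_PIXEL = 4  # RGBA, 8 bits per channel

def buffer_bytes(width, height):
    """Memory for one uncompressed draw buffer."""
    return width * height * BYTES_PER_PIXEL

# A single ~2500x2500 retina buffer:
screen = buffer_bytes(2500, 2500)
print(screen / 1e6, "MB")  # 25.0 MB

# Double-buffered screen plus, say, ten 1280x800 per-window buffers:
windows = 10 * buffer_bytes(1280, 800)
total = 2 * screen + windows
print(total / 1e6, "MB")  # ~91 MB, before fonts, images, etc.
```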


Per-window double buffering is actively harmful: it means you're effectively triple buffering, since a frame goes window buffer -> composite buffer -> screen, and that's with perfect timing. Even this kind of latency is actively unpleasant when typing or moving the mouse.

If you get the timing right, there should be no need for double-buffering individual windows.


You don't need to do all of this, though. You could just do arbitrary rendering using GPU compute, and only store a highly-compressed representation on the CPU.

Yes, but then the GPU needs that amount of RAM, so it's fairer to look at the sum of RAM + VRAM requirements. With compressed representations you trade CPU cycles for RAM. To save laptop battery, it's better to just use copious amounts of RAM (since it's cheap).

I will definitely read these books when they come out.

For a historic overview of mathematics with (accessible) formulas I highly recommend “Journey through genius: The great theorems of mathematics”.


Concurrent programming is hard and has many pitfalls; people are warned about this from the very, very start. If you then go about it without studying proper usage and common pitfalls, and do not use (very) defensive coding practices (violated by all the examples), then the main issue is just naivety. No programming language can really defend against that.


You are completely dismissing language design.

Also, these are minimal reproducers, the exact same mistakes can trivially happen in larger codebases across multiple files, where you wouldn't notice them immediately.


The whole point of not using C is that such pitfalls shouldn't compile in other languages.


Maybe the real reason is more related to Price's law or the Pareto principle, loosely meaning that 90% of the work is done by 10% of the people. In other words, in large companies most people do not contribute much, at least not at the same time.


Maybe, yeah.

And it's also quite possible that my view (which was across a slice of new-technology stuff hosted by the "innovation" arm) was skewed, and things aren't the same elsewhere in the company.

I just remember being shocked by the negativity.


Does anyone know what "native" means here, precisely? The Steam Deck uses the x86-64 instruction set AFAIK, so is it just the same as the Windows version? Or does it have to do with the GPU / OS? Or does it just mean "properly configured"?


It means compiled for Linux/SteamOS instead of being compiled for Windows and using a compatibility layer to play.


Native as in it's a Linux binary, no wine/proton involved


If this thinnest iPhone Air has 27 hours of video playback, why does the regular iPhone 17, which looks twice as thick, only have 30 hours? At this point, I just want long battery life. An "all-week" battery would be a nice start.


They are most likely using a higher-density battery in the Air; at least that's what the rumors suggest.


If you enjoy this art-style, definitely check out the game Return to the Obra Dinn.


There’s a ditherpunk artist in Moscow named Uno Morales that I’m quite fond of: https://unomoralez.com/


I was just about to post the same link! Found his site today by pure happenstance.

Don't know this guy's technique, but the idea that people were drawing such elaborate pictures on tiny screens - with mice! not even tablets - boggles my mind. Every pixel a deliberate act.


Well, I use it before Google, since it generally summarizes webpages and removes the ads. Quite handy. It's also very useful for checking whether you understand something correctly. And for programming specifically, I've found it really useful for naming things (which tends to be hard, not least because it's subjective).


It’s a clever trick. But can it render textured text? Transparent text, gradient fills? Maybe it can, I don’t know. But why not just triangulate the glyph outlines and represent each glyph as a set of triangles? This triangulation can be done offline, making rendering very lightweight.


The linked post was about Evan's side project, but within Figma, all of that is indeed possible. The glyphs are transformed into vector networks[0], which has a fill pipeline that supports transparency, gradients, images, blending, masking, etc.

[0]: https://www.figma.com/blog/introducing-vector-networks/


I wonder: in practice, if a Lua program uses large (consecutive) arrays, won’t its values likely all have the same type? At the very least it is a common use case: large arrays of only strings, only numbers, etc. Wouldn’t it make sense to (also) optimize just for this case with a flag and a single type tag? Simple, and it optimizes memory use for 98% of use cases.


The main catch is that if the optimization guesses wrong and a value of a different type is inserted into the table afterwards, it would incur an O(n) operation to transfer all the data to a deoptimized table.

Another caveat is that Lua can have more than one internal representation for the same type, and those have different type tag variants. For instance: strings can be represented internally as either short or long strings; functions can be Lua closures, C closures, or perhaps even an object with a __call metamethod; objects can be either tables or userdata.
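The deoptimization cost described above can be sketched as a toy model in Python. This is not how Lua works internally; the class and its fields are made up purely to illustrate the O(n) fallback copy:

```python
# Toy model of a monomorphic "typed" array that falls back to a generic
# (tag-per-slot) representation when a value of another type arrives.

class TypedArray:
    def __init__(self):
        self.tag = None      # single shared type tag for all elements
        self.values = []     # raw values, all of type self.tag
        self.generic = None  # set after deoptimization

    def append(self, value):
        if self.generic is not None:
            self.generic.append(value)
            return
        if self.tag is None:
            self.tag = type(value)
        if type(value) is self.tag:
            self.values.append(value)
        else:
            # Guess was wrong: O(n) copy into a generic representation.
            self.generic = list(self.values) + [value]
            self.values = []

    def __len__(self):
        return len(self.generic) if self.generic is not None else len(self.values)

arr = TypedArray()
for i in range(1000):
    arr.append(i)       # stays specialized: all ints, one shared tag
arr.append("oops")      # triggers the O(n) deoptimization copy
```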


This seems likely to create some inexplicable performance elbows where you have 1000 strings, but there's one code path that replaces one with a number, and now the whole array needs to be copied. Tracking that down won't be fun.


It makes a lot of sense, but then you have two code paths for tables.

The Lua folks want a simple codebase, so they (knowingly) leave a lot of performance on the table in favor of simplicity.


For what it's worth, there are already two code paths for tables. The array part is stored separately from the hash table part.
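That split can be mimicked in a few lines. This is a simplification (real Lua's heuristics for sizing the array part are more involved), with hypothetical names:

```python
# Simplified sketch of Lua's hybrid table: dense 1-based integer keys
# live in an array part, everything else in a hash part.

class LuaTable:
    def __init__(self):
        self.array = []  # values for keys 1..len(array)
        self.hash = {}   # all other keys

    def set(self, key, value):
        if isinstance(key, int) and 1 <= key <= len(self.array) + 1:
            if key == len(self.array) + 1:
                self.array.append(value)  # grow the dense array part
            else:
                self.array[key - 1] = value
        else:
            self.hash[key] = value

    def get(self, key):
        if isinstance(key, int) and 1 <= key <= len(self.array):
            return self.array[key - 1]
        return self.hash.get(key)

t = LuaTable()
t.set(1, "a"); t.set(2, "b")  # lands in the array part
t.set("name", "demo")         # lands in the hash part
```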

