
I (mostly) returned to C a little while ago, and for smaller things I sometimes create the entire application state as a single big struct made of many smaller nested structs, maybe with one or very few "layers" of dynamically allocated data dangling off the static "root structure" (but only when really needed).

A very simple example is an all-in-one application data structure like this:

https://github.com/floooh/v6502r/blob/1d2b79ac11d7828b2722b5...
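
To sketch the general shape (the struct members here are made up for illustration, they're not from the linked file):

    #include <stdbool.h>

    // all application state lives in one statically allocated root struct,
    // with per-subsystem state embedded as plain nested structs
    typedef struct {
        struct {
            int width, height;
            bool vsync;
        } display;
        struct {
            float volume;
            int sample_rate;
        } audio;
        struct {
            char text[64 * 1024];   // fixed-capacity buffer, no heap needed
            int length;
        } editor;
    } app_state_t;

    static app_state_t app;    // the single "root structure"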

This very simple approach has some downsides of course, mostly because C doesn't help much with some problems that a more specialized language could solve (but on the other hand, it also doesn't get in the way much):

- Every part of the program sees and is allowed to change everything, so it would be nice to have a simple syntax for fine-grained visibility and access rules (but not at all like C++ public/private, more like compile-time read-only and read-write access tokens).

- Not much compile-time or runtime protection against code scribbling over neighboring nested structs.

- Not much flexibility for dynamic arrays. It would be good to have 3 flavors: (1) a compile-time max capacity array which can be completely embedded, (2) a runtime max capacity array which is allocated once but can never grow, and (3) a fully dynamic array which can grow (but maybe never shrink?). Such dynamic arrays should never change their base address, so that memory locations remain stable (see the sketch after this list).

- It's not well suited for bigger programs built from many modules. It should be possible to have highly modular program code, but still end up with a single monolithic "root data layout".
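
To illustrate the array flavors from the third point, here's a rough sketch (all names hypothetical). Flavor (3) keeps element addresses stable by growing in fixed-size chunks instead of reallocating:

    #include <stdlib.h>

    enum { CHUNK_SIZE = 1024, MAX_CHUNKS = 64 };

    // (1) compile-time max capacity, can be embedded directly in the root struct
    typedef struct {
        int items[256];
        int count;
    } static_array_t;

    // (2) runtime max capacity: allocated once at startup, never grows
    typedef struct {
        int* items;
        int capacity;
        int count;
    } fixed_array_t;

    // (3) growable with stable addresses: grows by adding chunks, never by
    // relocating existing elements (no realloc, so pointers stay valid)
    typedef struct {
        int* chunks[MAX_CHUNKS];
        int count;
    } chunked_array_t;

    static int* chunked_push(chunked_array_t* a, int value) {
        int chunk = a->count / CHUNK_SIZE;
        int slot = a->count % CHUNK_SIZE;
        if (chunk >= MAX_CHUNKS) return NULL;      // capacity exhausted
        if (!a->chunks[chunk]) {
            a->chunks[chunk] = malloc(CHUNK_SIZE * sizeof(int));
            if (!a->chunks[chunk]) return NULL;
        }
        a->chunks[chunk][slot] = value;
        a->count++;
        return &a->chunks[chunk][slot];            // never invalidated by growth
    }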

One great side effect of this approach is that it feels completely natural not to do dynamic memory allocation all over the place (and one of the good features of C is that memory allocation is always very obvious, and thus easy to avoid).



Isn’t that just a variant of organizing your globals well, so that using lots of globals stays manageable?

That’s what was historically done in languages such as FORTRAN and COBOL (both of which managed all memory at compile time).

And yes, that meant dropping all dynamic memory allocation, too. If you wanted to run your FORTRAN program on a larger data set, you replaced the cards defining the dimensions of your arrays and recompiled.


Most likely! I never wrote FORTRAN or COBOL, but instead was heavily influenced by the OOP hype of the 90s, and I think that hype still lingers on even in the current "post-OOP" world. It feels like memory management is still stuck in the "every small thing in a program needs its own lifetime" mindset.

It was when I remembered how I did "memory management" in my early 8-bit assembly programs (i.e. not at all: just figure out what's needed upfront and assign every single data item a fixed address) that I realized dynamic allocation isn't nearly as important as I had always assumed.

But I'd like a "modern approach" and modern tooling for that very old idea :)


I reached a similar conclusion. Dynamic memory is seductive because it promises indefinite scaling, but a practical system always hits bottlenecks along axes other than memory management, and in the meantime your code is much harder to verify.

NB: Historically, game engines have tended towards object pooling at runtime without any dynamic allocations. In that case there is a defined limit to what a scene will accommodate, and the object counts often simply reflect the other performance bottlenecks involved.
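
For illustration, a minimal sketch of such a pool (the particle type and the hard limit are made up):

    #include <stdbool.h>

    #define MAX_PARTICLES 4096    // hard scene limit, chosen upfront

    typedef struct {
        float x, y, vx, vy;
        bool alive;
    } particle_t;

    typedef struct {
        particle_t items[MAX_PARTICLES];    // storage embedded, no heap
        int free_list[MAX_PARTICLES];       // indices of unused slots
        int free_count;
    } particle_pool_t;

    static void pool_init(particle_pool_t* p) {
        for (int i = 0; i < MAX_PARTICLES; i++) {
            p->free_list[i] = i;
        }
        p->free_count = MAX_PARTICLES;
    }

    // returns NULL at the scene limit instead of allocating more
    static particle_t* pool_alloc(particle_pool_t* p) {
        if (p->free_count == 0) return NULL;
        particle_t* item = &p->items[p->free_list[--p->free_count]];
        item->alive = true;
        return item;
    }

    static void pool_free(particle_pool_t* p, particle_t* item) {
        item->alive = false;
        p->free_list[p->free_count++] = (int)(item - p->items);
    }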

GC is still nice, but mostly for stitching together the most dynamic elements of the system. You don't want to have to trace a ton of stuff, and that also pushes in the direction of flattening the data and making it as manual and static as the type system allows.


> It's not well suited for bigger programs built from many modules. It should be possible to have highly modular program code, but still end up with a single monolithic "root data layout".

You can still have one module per file, where the globals can be thought of as members of a virtual root, and each global can be either shared or private to its module.
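
For instance, something along these lines (the names are hypothetical): private state is declared static at file scope, shared state is exported through the header:

    /* audio.h -- the module's public interface */
    typedef struct { float volume; } audio_shared_t;
    extern audio_shared_t audio;    /* this module's slice of the "virtual root" */
    void audio_set_volume(float v);
    void audio_mix(float* out, int n);

    /* audio.c -- the module's implementation */
    #include "audio.h"

    audio_shared_t audio;    /* shared: other modules see it via the header */

    static struct {          /* private: invisible outside this file */
        int voice_count;
        float mix_buffer[4096];
    } priv;

    void audio_set_volume(float v) {
        audio.volume = v;    /* other modules read audio.volume */
    }

    void audio_mix(float* out, int n) {
        for (int i = 0; i < n && i < 4096; i++) {
            out[i] = priv.mix_buffer[i] * audio.volume;
        }
    }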

I have seen very complex programs done like that. There are some advantages, like being able to prove the program never runs out of memory, but the problem is that you end up with code that is harder to reuse, harder to test, etc. as soon as you start sharing state.

But yeah, it is sometimes done like that.



