Hacker News | kfuse's comments

Raylib is a very good option for 2D games. For me it was the easiest way to translate my toy Doom renderer from JavaScript (using the HTML canvas) to C#.


Updated a pet project of mine and got a minor break:

  var pixels = new uint[renderers.width * renderers.height];
  var pixels2 = MemoryMarshal.Cast<uint, ulong>(pixels);
  pixels2[idx] = ...
In .NET 9.0, pixels2 was a Span<ulong>, but in .NET 10.0 a different MemoryMarshal.Cast overload is resolved and it is a ReadOnlySpan<ulong> now, so the assignment fails.

Spans are such a fundamental tool for low-level programming. It's really unfortunate they were added relatively late to the language. Every new version now includes a slew of span-related improvements, but they will never be as good as if they had been there from the start, or at least as early as generics were.
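For illustration only (this is Python's stdlib analogue, not the .NET API): the same kind of reinterpreting view that MemoryMarshal.Cast provides can be sketched with memoryview.cast, which likewise hands back a wider typed view over the same underlying buffer:

```python
import array

# 320x200 worth of 32-bit pixels; reinterpret pairs of them as 64-bit
pixels = array.array('I', [0] * (320 * 200))

# memoryview.cast requires going through a byte view ('B') first
pixels2 = memoryview(pixels).cast('B').cast('Q')

pixels2[0] = 0x11223344_55667788   # one store writes two 32-bit pixels
assert len(pixels2) == len(pixels) // 2
```

Unlike the .NET 10 situation above, the cast view here stays writable as long as the source buffer is writable.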


The NHibernate project stumbled upon a much bigger break: https://github.com/nhibernate/nhibernate-core/issues/3651#is...

We are forced to use this workaround for now.


That workaround seems wild as hell.


That's not just Java, and there is nothing really cursed about it: throwing in a finally block is the most common example. Jump statements are no different; you can't just ignore them when they override the return or throw statements.


It is just Java as far as I can tell. Other languages with a finally don't allow explicitly exiting the finally block.


And JavaScript. And Python (though, as sibling posts have mentioned, it looks like they're intending to make a breaking change to remove it).

EDIT: actually, the PEP points out that they intend for it to only be a warning in CPython, to avoid the breaking change
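A minimal Python sketch of the behavior in question: a return inside finally wins over both the try block's return and a propagating exception, which is exactly why the PEP wants to warn about it:

```python
def swallowed():
    try:
        raise ValueError("lost forever")
    finally:
        return "finally wins"   # silently discards the in-flight exception

def overridden():
    try:
        return "from try"
    finally:
        return "from finally"   # overrides the earlier return

assert swallowed() == "finally wins"   # no ValueError ever escapes
assert overridden() == "from finally"
```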


Modula-3 has it despite not having a continue statement:

https://www.cs.purdue.edu/homes/hosking/m3/reference/exit.ht...

But there it's more comprehensible, because in order to define the interaction of return (and exit) with exceptions, they define return and exit to be exceptions. So it's more obvious why return can be "caught".


Notably, C++ and similar languages don't support lexical `finally` at all, instead relying on destructors, which are functions and obviously cannot affect the control flow of their caller ...

except by throwing exceptions, which is a different problem that there's no "good" solution to (during unwinding, that is).


I thought destructors were all noexcept now... or at the very least if you didn't noexcept, and then threw something, it just killed the process.

Although, strictly speaking, they could have each exception also hold a reference to the prior exception that caused the excepting object to be destroyed. This forms an intrusive linked list of exceptions. Problem is, in C++ you can throw any value, so there isn't exactly any standard way for you to get the precursor exception, or any standard way for the language to tell the exception what its precursor was. In Python they could just add a field to the BaseException class that all throwables have to inherit from.
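For what it's worth, Python's BaseException already carries such a field: when an exception is raised while another is being handled, the interpreter records the precursor on __context__ (and an explicit `raise ... from ...` sets __cause__). A quick sketch:

```python
def demo():
    try:
        raise ValueError("original failure")
    except ValueError:
        # raised while handling ValueError: the interpreter links the two
        raise RuntimeError("secondary failure")

try:
    demo()
except RuntimeError as exc:
    # implicit chaining: the precursor exception is still reachable
    assert type(exc.__context__) is ValueError
```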


> I thought destructors were all noexcept now...

Destructors are noexcept by default, but that can be overridden with noexcept(false).

> or at the very least if you didn't noexcept, and then threw something, it just killed the process.

IIRC throwing out of a destructor that's marked noexcept(false) terminates the process only if you're already unwinding from something else. Otherwise the exception should be thrown "normally".


> override the return

How is this not cursed


It is Java, as C# disallows this.


Node now has limited support for TypeScript and has SQLite built in, so it has become really good for small personal web-oriented projects.


Why is everyone so fixated on interstellar travel? What if the galaxy is chock full of rogue planets? There may be hundreds if not thousands of worlds between us and the surrounding stars. Hopefully the Roman Space Telescope, set to launch in 2027, is going to find lots of them.


Because we’re a species of explorers. At least some of us are.


Those rogue worlds are frozen, though; the only heat source they have is radioactive decay in their cores. And they're eternally dark, since they have no sun to light them.

It would make more sense to use fusion reactors to power colonies in the outer reaches of the solar system than to go to a rogue planet.


Jupiter is 5 times farther from the Sun than we are; there is basically no light or heat from the Sun there.

I think for a civilization capable of sending a reasonable number of people at reasonable speeds to colonize such planets, creating a suitable atmosphere, lighting, and heating is very much possible. Gravity is the only hard requirement.


"Rendering engine is also completely orthogonal to polygon-based 3D accelerators"

The software rendering engine, yes (and even then you can parallelize it). But there is really no reason why Doom maps can't be broken down into polygons. Proper sprite rendering is a problem, though.


Sure, that has been done since the late-'90s release of the source code, both by converting visible objects to triangles to be drawn by the accelerator (glDoom, DoomGL) and by transplanting game data and mechanics code into an existing 3D engine (Vavoom used the recently open-sourced Quake).

However, a proper recreation of the original graphics would require shaders and the much more extensive, programmable pipelines of modern hardware, while the relaxed artistic attitude (or just contemporary technical limitations) unfortunately resulted in a trashy Y2K amateur 3D-shooter look. Leaving certain parts to software meant that the CPU had to do most of the same things once again. Also, 3D engines were seen as a base for exciting new features (arbitrary 3D models, complex lighting, free camera, post-processing effects, etc.), so the focus shifted in that direction.

In general, CPU performance growth meant that most PCs could run most Doom levels without any help from the video card. (Obviously, map makers rarely wanted to work on something that was too heavy for their own systems, so complexity was also limited for practical reasons.) 3D rendering performance (in non-GZDoom ports) was boosted occasionally to enable complex geometry or mapping tricks in popular releases, but there was little real pressure to use acceleration. On the other hand, the linear growth of single-core performance stopped long ago, while the urges of map makers haven't, so there might be some need for "real" complete GPU-based rendering.


As I said, the traditional Doom BSP-walking software renderer is quite parallelizable. You can split the screen vertically into several subscreens and render them separately (this does wonders for epic maps). The game logic, or at least most of it, can probably be run in parallel with the rendering.
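A toy sketch of that vertical split (the render_slice body is a made-up placeholder shader; the point is just that disjoint column ranges partition cleanly across workers with no shared writes):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, WORKERS = 320, 200, 4

def render_slice(framebuffer, x0, x1):
    # each worker owns columns [x0, x1); no two slices touch the same pixel
    for x in range(x0, x1):
        for y in range(HEIGHT):
            framebuffer[y * WIDTH + x] = (x ^ y) & 0xFF  # placeholder shading

fb = bytearray(WIDTH * HEIGHT)
bounds = [i * WIDTH // WORKERS for i in range(WORKERS + 1)]
with ThreadPoolExecutor(WORKERS) as pool:          # waits for all slices on exit
    for i in range(WORKERS):
        pool.submit(render_slice, fb, bounds[i], bounds[i + 1])
```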

And I don't think any of the above is necessary. Even according to their graphs popular doom ports can render huge maps at sufficiently high fps on reasonably modern hardware. The goal of this project, as stated in the doomworld thread, is to be able to run epic maps on a potato.


Frankly, that doesn't explain much, because that sounds like how modern computing works: every program has its own contiguous 32/64-bit address space. With 24 bits you can address 16MB, which seems enough to be useful if you throw away reflection and such.


16MB is larger than every single SNES game ever released.

Modern programs have dynamic memory allocation. You can't just start writing to any address you want. You have to request memory from the operating system with malloc() and then free() it when you're done. Memory-managed programming languages handle this for you but it's still there under the covers.

On the SNES, you simply have all memory available from the beginning. No malloc/free, just start reading and writing.


Malloc and free aren’t handled by the operating system, they’re handled in user space.

Underneath malloc is mmap(2) (or, in older Unices, brk/sbrk), which actually requests the memory. And with delayed/lazy allocation in the OS, you can just mmap a huge region up front, and it won't actually do anything until you read or write the individual pages.

Point is, you only need one up front call to mmap to write to any page you want.
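Python's stdlib mmap module can demonstrate this directly: one anonymous mapping up front, then individual pages are faulted in lazily as you touch them (a sketch; the sizes are arbitrary):

```python
import mmap

# one up-front request for 16 MiB of anonymous memory
region = mmap.mmap(-1, 16 * 1024 * 1024)

# no further allocation calls needed to use any page in the region:
region[0] = 0x42                 # the first page faults in here
region[8 * 1024 * 1024] = 0x17   # a page 8 MiB away, likewise on first touch
```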


The SNES doesn’t have any concept of user space. Your program has full control of the hardware. You can do whatever you want. There is no operating system at all.


I was responding to your second paragraph, where you talk about modern programs having to request memory from the OS with malloc and free. This isn’t true, malloc and free are not operating system concepts, they are ways for your program to divide up memory address space that is already mapped to you.

To bring this back to the SNES, you could totally use malloc and free on the SNES, but it would be just vending pointers to the address space you can already use. But my point is that this is no different from a modern OS, because malloc and free are just managing the address space you already got from the OS using mmap. And plenty of malloc implementations avoid repeated calls to mmap by mapping a large amount of space up front.

My point is, “having full access to the hardware” is completely orthogonal to whether malloc and free are a good idea. You can use malloc/free on a flat address space, just like you can use them on a big fat mmap() region. Instead, the reason you’d generally avoid malloc/free on SNES is that the amount of physical memory is so tiny that it’s generally a bad idea to do any dynamic memory management. Instead you want fixed regions representing in-game entities and logic, and the memory addresses you use should be managed manually in fixed size buffers.

(If you’re still not convinced, consider that malloc and free work just fine in DOS, where there’s also no virtual memory and you have total access to the physical memory space in your program. DOS doesn’t have mmap, and malloc implementations on DOS just stick to managing the flat, physical address space. No MMU or virtual memory needed.)


  > If you’re still not convinced, consider that malloc and free work just fine in DOS, where there’s also no virtual memory and you have total access to the physical memory space in your program.
Also: any modern microcontroller.


The point is that "the system has only one program running at all times" is not an explanation for why there's no dynamic memory allocation, because modern operating systems use virtual memory to give the illusion of a flat address space that the program is in full control over. You can use the .data/.bss sections of an executable exactly as you would use memory in a SNES game.

And in fact, on many game consoles newer than the SNES (such as the PS1, N64, GC/Wii, DS/GBA, etc.) there's no operating system and the game is in full control of the hardware and games frequently and extensively use dynamic memory allocation. Whether you manage memory statically or dynamically & whether you have an operating system or not below your program are almost completely orthogonal.

Rather, the reason why SNES games don't use dynamic memory management is because it's impossible to do efficiently on the SNES's processor. Dynamic memory management requires working with pointers, and the 65816 is really bad at handling pointers for several reasons:

- Registers are 16 bits while (far) addresses are 24 bits, so pointers to anything besides "data in a specific ROM bank" are awkward and slow.

- There are only three general-purpose registers, so register pressure is extreme. You can store pointers in the direct page to alleviate this, but addressing modes relative to direct-page pointers are slow and extremely limited.

- There is no adder integrated into the address-generation unit. Instructions that access memory at an offset from a pointer have to spend an extra clock cycle or two going through the ALU.

- Stack operations are slow and limited, so parameter passing is a pain and local variables are non-existent.

All of these factors mean that idiomatic and efficient 65xx code uses static, global variables at fixed addresses for everything. When you need dynamism, you make an array and index into it instead of making general-purpose memory allocations.
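The "make an array and index into it" pattern translates even to a high-level sketch (a hypothetical enemy pool; Python here stands in for what would be static 65816 tables at fixed addresses):

```python
MAX_ENEMIES = 8  # fixed at build time, like a static table in RAM

# parallel arrays at fixed "addresses"; a small slot index replaces a pointer
alive = [False] * MAX_ENEMIES
pos_x = [0] * MAX_ENEMIES
pos_y = [0] * MAX_ENEMIES

def spawn(x, y):
    for i in range(MAX_ENEMIES):
        if not alive[i]:
            alive[i], pos_x[i], pos_y[i] = True, x, y
            return i          # "allocation" is just claiming a free slot
    return -1                 # pool exhausted: no malloc, no fragmentation

def despawn(i):
    alive[i] = False          # "free" is a single store
```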

But as you get into more modern 32- or 64-bit processors, this changes. You have more registers and better addressing modes, so the slowness and awkwardness of working with pointers is gone; and addresses are so long that instructions operating on static memory addresses are actually slower due to the increased code size. So, idiomatic code for modern processors is pointer-heavy and can benefit from dynamic memory allocation.


All modern popular languages are fast, except the most popular one.


JavaScript is hella fast for a dynamically typed language, but that's because we've put insane amounts of effort into making fast JITing VMs for it.


Sure, but "for a dynamically typed language" still means that it's slow amongst all languages.


And Python+Ruby


Maybe one reason is its verbosity for small everyday tasks, like config files, or when representing arrays. If XML allowed empty tags, there would probably be no need for JSON.


Empty tag as in <tag /> ?


That's a bit confusing to me, I don't understand. I think that type of tagging can be ambiguous. Just to suggest a better construct:

<tag type="tag" class="tag" purpose="tag" tag_subtype="empty" description="this is a emptytag, a subtype of tag" empty="true"></tag>

Now, that's not perfect, I would even describe it as minimalist, but I hope it sets you in the right direction!


Doom doesn't use raycasting, it uses binary space partitioning.

From the description it seems to be a DOOM clone. Physics feels a bit off, but all the little details may trick you into believing that it's the real thing.


Doom uses both: binary space partitioning to find the visible surfaces then casting rays to draw the textures.


There is no need to cast rays when you have divided the space into convex polygons. If you know all the vertices of a convex polygon and two of them form a visible wall, you can determine the texture column and height of the screen space wall column by means of interpolation. This is what the Doom engine does.

The PC version of Wolfenstein 3D used a more straightforward raycasting approach, but I think the SNES port was id's first use of BSP.
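A sketch of that interpolation (made-up screen size and focal length; the key property is that the column half-height h is proportional to 1/z, so interpolating it linearly across screen columns is perspective-correct):

```python
def project(x, z, half_w=160, focal=160):
    # camera at the origin looking down +z: screen x and column half-height
    return half_w + focal * x / z, focal / z

def wall_columns(p0, p1):
    # project only the wall's two endpoints, then interpolate per column
    sx0, h0 = project(*p0)
    sx1, h1 = project(*p1)
    cols = []
    for sx in range(int(sx0), int(sx1)):
        t = (sx - sx0) / (sx1 - sx0)           # position along the wall on screen
        cols.append((sx, h0 + t * (h1 - h0)))  # h ~ 1/z, so lerp is exact
    return cols
```

No per-column ray/wall intersection tests are needed; the BSP traversal has already decided which wall each column belongs to.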


It doesn't. Doom uses BSP to precisely determine which columns of textures it needs to render.

It's faster than raycasting, so this was even used in Wolf3D port to GBA.


Yeah, it's fast and cheap because the BSP tree is created after the level is mapped out. It's also why "Will it run DOOM?!" is a thing.

For further reading: https://en.wikipedia.org/wiki/Doom_engine#Binary_space_parti... https://en.wikipedia.org/wiki/Binary_space_partitioning#Appl...

