
That sounds useful. Do you have the Dockerfile for it pushed anywhere?



I think they mean that the image's numbering starts at 0 by default, and they want to be able to change the starting number so the top left could start at something else and increment from there. Adding on to that, it would also be interesting to be able to change what the number increments by.
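Something like this is all I mean (a tiny sketch; start and step are hypothetical option names, not from the tool):

    #include <cstdio>

    // Cell i (counting from the top left) gets start + i * step
    // instead of always 0, 1, 2, ...
    void printLabels(int cellCount, int start, int step) {
        for (int i = 0; i < cellCount; ++i)
            std::printf("cell %d -> label %d\n", i, start + i * step);
    }

    int main() {
        printLabels(6, 10, 5);  // labels 10, 15, 20, 25, 30, 35
    }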


Just to add some context: Adam works at AMD as a dev tech, so he is constantly working with game studio developers directly. While I can't say I've heard the same things, since I'm somewhere else, I have seen some of the assumptions made in shader code, and they do line up with the kind of things he's saying.


While I won't endorse what the GP said, I wouldn't say it's only gotten worse. I work for a modern GPU company (you can probably figure out which one from my comment history) on one of the modern APIs, and they much more closely represent what the GPU does. It's not like OpenGL used to be, as the GPUs hold much less state for you than they used to. However, with the new features being added now, it is starting to drift apart again and once again become more complex.


That's interesting to know! I keep meaning to try digging into the AMD stuff (mainly as it seems like the more open-source one), but I need to find the time for a deep dive!


Yeah, we also have a gaming Discord and a developer Discord where I hang around, so feel free to join and ask questions there.


When the modding community is so heavily plagued with mods that are malicious in some way, I don't think having a modding API that can be safe by convention yet recompiled to be fast would be a bad idea. So while I'm not sure it was the smartest choice, it's not as inanely stupid as you're making it out to be.


> When the modding community is so heavily plagued with mods that are malicious in some way

Is it? Granted, I'm a heavy mod user myself for various games, but I've only created mods for Cities Skylines and Cities Skylines 2, so I guess I know that ecosystem the best. And yes, there have been a few cases of malicious mods, but "heavily plagued"? What ecosystem are you talking about?


Most recently in my memory it's been Minecraft. There was a wave of mods that were stealing things like Discord access tokens. I don't have a clear memory of all of the cases I've been through, just that I always try to verify every mod I can now. I think I remember one for Lethal Company, and looking online I'm seeing some referenced for Dota 2, Sims 4, and Slay the Spire.

I just learned Nexus Mods is also pretty good about handling anything that's virus-like; most of my modding experience has been external to that, though.


Yep, this really bums me out. I wanted to try plenty of Minecraft mods but never wanted to go to the trouble of setting up a secure environment, so I never did either.

As an only-tangentially-related aside, the difficulty of making mods for Windows games work on Proton also bums me out.

I wish some kind of cross-platform, sandboxed modding ecosystem existed that solved enough problems that most modders would prefer to use it. I'm not sure that's even possible, though.


If you’re saying that there’s no other way to create a mod API with clear security boundaries, I disagree.

The mod API should not have to do this anyway. The OS should do this. It is beyond belief that most operating systems just allow programs to do anything they want simply because they’re being executed.

And if the OS can't do this, you run in a VM, which Nvidia doesn't allow you to do in a performant way.


While I agree that it's not the best situation (and I'm wholly against Nvidia in this case), this isn't about the mod-game API boundary; it's more about the mod-OS boundary, since that is harder to control. WASM, from my research so far, doesn't allow any of that by default; everything has to be passed through by the runtime, and in this case it would be passed through to the retargeting compiler. This can give additional benefits, like allowing mods on consoles in a more secure way and allowing the game to target future CPU architectures without requiring all mods to recompile their code (not that I think the latter is a reason Microsoft cares about). But the idea of recompiling code when launching a game is already kind of standard on the GPU side of things.
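To make the pass-through point concrete, here's a rough sketch of the shape I have in mind; HostApi and every name in it are purely hypothetical, not any real mod API or WASM runtime call:

    #include <cstdint>

    // Hypothetical host-function table: a sandboxed (e.g. WASM-compiled) mod can
    // only call what the game explicitly puts in here. There is no file, socket,
    // or OS access unless the game chooses to expose a function for it.
    struct HostApi {
        void     (*logMessage)(const char* text);
        uint32_t (*spawnEntity)(uint32_t entityType, float x, float y, float z);
    };

    // What a retargeting compiler/runtime would call after translating the mod's
    // portable bytecode into native code for whatever CPU the game is running on.
    using ModEntryPoint = void (*)(const HostApi* api);

    void runMod(ModEntryPoint entry, const HostApi* api) {
        entry(api);  // the mod only ever sees 'api', never the OS directly
    }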


I really like the idea of this. I wonder if I can convince my work to use it for our hardware. Are things like SIMD, SIMT, and other weird formats easy to represent in this kind of language? Or should I just assume that anything describable in Verilog/an HDL can be described in this language?

This also brings up another question, if anyone knows: is there a term for hardware description languages similar to "Turing complete" for programming languages, or is there a different set of common terms?


Yeah, you can describe basically any ISA, including SIMD. The RISC-V model doesn't support packed SIMD (the P extension), but it does support Vector.


This looks and feels like "Verilog Lite."


This is pretty cool, and it's surprising how well it works on mobile. I think the shuffle puzzle game has a bug where it can generate unsolvable puzzles; I ran into a parity issue. I solved it with the blank in the upper left but got no response from the game, so I don't believe that was the intended solution.

Also checked with an online solver and it verified that there was no solution.


I haven't gone as far as verifying that puzzles are solvable - right now I only verify that the state of the puzzle is valid. Maybe in the future. For now, I guess it will be like solitaire, where if you can't solve it you'll have to reshuffle it.

Incidentally, I'm planning on adding solitaire as the next game!


Oh nice. I don't actually know how to play solitaire, but I know Microsoft used the method of randomly generating deals, solving them, and then saving the solvable seeds. (I believe they had two that weren't solvable somehow, though, so maybe it was a human solving them and it was a typo or mistake.)

Also, I was just checking around to see if there are any good methods for telling whether a puzzle is solvable without solving it. It seems GeeksforGeeks has some code for it.

https://www.geeksforgeeks.org/dsa/check-instance-15-puzzle-s... The only other solution I can think of is detecting both configurations (blank in bottom right or top left) and displaying something when either is reached.
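For reference, the check from that link boils down to roughly this for the 4x4 case (a sketch, not GeeksforGeeks' code verbatim, assuming the board is stored row-major with 0 as the blank):

    #include <vector>

    // Solvability test for the 4x4 sliding puzzle: count inversions among the
    // tiles and note which row (from the bottom) holds the blank. For an even
    // board width, the puzzle is solvable iff inversions + blank-row-from-bottom
    // is odd.
    bool isSolvable4x4(const std::vector<int>& tiles) {
        int inversions = 0;
        int blankRowFromBottom = 0;
        for (int i = 0; i < 16; ++i) {
            if (tiles[i] == 0) {
                blankRowFromBottom = 4 - i / 4;  // 1 = bottom row, 4 = top row
                continue;
            }
            for (int j = i + 1; j < 16; ++j)
                if (tiles[j] != 0 && tiles[j] < tiles[i])
                    ++inversions;
        }
        return (inversions + blankRowFromBottom) % 2 == 1;
    }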


I remember making that puzzle in C++.

Half of the random states created are solvable, and the other half are unsolvable.

My solution was not to check whether the puzzle is solvable (the mathematics of this seemed complicated), but to start with a solved one and then do a fixed number of random moves.
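Roughly like this (a fresh sketch rather than the original code, assuming a 4x4 board with 0 for the blank):

    #include <random>
    #include <utility>

    // Shuffle by playing backwards: start from the solved board and apply a
    // fixed number of random legal moves, so the result is solvable by construction.
    void shuffleSolvable(int board[4][4], int moves, std::mt19937& rng) {
        int br = 3, bc = 3;                        // blank starts bottom-right
        const int dr[4] = {-1, 1, 0, 0};
        const int dc[4] = { 0, 0, -1, 1};
        std::uniform_int_distribution<int> pick(0, 3);
        for (int m = 0; m < moves; ++m) {
            int d = pick(rng);
            int nr = br + dr[d], nc = bc + dc[d];
            if (nr < 0 || nr > 3 || nc < 0 || nc > 3) continue;  // move is off the board
            std::swap(board[br][bc], board[nr][nc]);
            br = nr; bc = nc;
        }
    }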


That's a good idea.


Great work, great memories on the Lisa.

But sheesh! The first time I play the numbers puzzle on a computer in my life, and the first time in 30 years that I play it at all, and I find out some joker has snapped two of the pieces out and reversed them, making it unsolvable?! Diabolical!


I spent so long fiddling with the puzzle game, trying to get the final two tiles swapped, just to find out it was unsolvable. Sigh.

Cool stuff though!


Same here, I spent forever on it :(


But GLM does have support for SIMD [1], or do you mean that it doesn't support specific instructions under SIMD?

[1] https://glm.g-truc.net/0.9.1/api/a00285.html


Totally missed that, thanks! I don't know the library very well, but as far as I can tell they don't support .xyxy-style swizzling with the SIMD Vec4 type.

They're using the same "proxy object" method I was using for their swizzling, which I'm pretty sure won't work with SIMD types, but I would love to be proven wrong!

I haven't deep dived into the library as I'm no longer doing this kind of code.
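For reference, this is roughly what an .xyxy swizzle looks like when done directly on the SSE register, which is what any proxy-object scheme would ultimately have to produce (a sketch; swizzle_xyxy is just an illustrative name, not GLM API):

    #include <xmmintrin.h>

    // .xyxy swizzle on a raw SSE vector. _MM_SHUFFLE takes lane indices from
    // high to low, so (1, 0, 1, 0) picks elements y, x for the upper two lanes
    // and y, x for the lower two, producing (x, y, x, y) in x/y/z/w order.
    inline __m128 swizzle_xyxy(__m128 v) {
        return _mm_shuffle_ps(v, v, _MM_SHUFFLE(1, 0, 1, 0));
    }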


This looks interesting, and I'm going to take a look later. Just a minor nitpick up front, though: I think the performance graph should be a bar graph instead of a line graph, mainly since the in-between points don't have much meaning - you can't be halfway between two different GPUs.


Those discussions are a bit misleading. Original Doom updates its state only 35 times a second, and ports that need to remain compatible must follow that (though interpolation and prediction tricks are possible for visual smoothing of the movement). The rendering engine is also completely orthogonal to polygon-based 3D accelerators, so all their power goes unused (apart from, perhaps, image buffers in fast memory and hardware compositing operations). Performance on giant maps therefore depends on CPU speed. The point of this project is making the accelerator do its job with a new rendering process.
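For anyone unfamiliar, the fixed 35 Hz tick with render-side interpolation looks roughly like this (a sketch with illustrative names, not code from any actual port):

    #include <chrono>

    constexpr double TICRATE = 35.0;            // Doom's fixed simulation rate
    constexpr double TIC_SECONDS = 1.0 / TICRATE;

    void runGameTic() { /* advance monsters, physics, etc. by one tic */ }
    void render(double alpha) { (void)alpha; /* draw, lerping positions by alpha */ }

    void gameLoop() {
        using clock = std::chrono::steady_clock;
        auto previous = clock::now();
        double accumulator = 0.0;
        while (true) {
            auto now = clock::now();
            accumulator += std::chrono::duration<double>(now - previous).count();
            previous = now;
            while (accumulator >= TIC_SECONDS) {
                runGameTic();                   // state still only changes 35 times/s
                accumulator -= TIC_SECONDS;
            }
            render(accumulator / TIC_SECONDS);  // smooth the visuals between tics
        }
    }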

Though I wonder how sprites, which are a different problem orthogonal to polygonal rendering, are handled. So, cough cough, Doxylamine Moon benchmarks?


"Rendering engine is also completely orthogonal to polygon-based 3D accelerators"

The software rendering engine, yes (and even then you can parallelize it). But there is really no reason why Doom maps can't be broken down into polygons. Proper sprite rendering is a problem, though.


Sure, that has been done since the late-'90s release of the source code, both by converting visible objects to triangles to be drawn by the accelerator (glDoom, DoomGL) and by transplanting the game data and mechanics code into an existing 3D engine (Vavoom used the then-recently open-sourced Quake).

However, proper recreation of the original graphics would require shaders and much more modern, extensive, and programmable pipelines, while the relaxed artistic attitude (or just contemporary technical limitations) unfortunately resulted in a trashy Y2K amateur 3D shooter look. Leaving certain parts to software meant that the CPU had to do most of the same things once again. Also, 3D engines were seen as a base for exciting new features (arbitrary 3D models, complex lighting, free camera, post-processing effects, etc.), so the focus shifted in that direction.

In general, CPU performance growth meant that most PCs could run most Doom levels without any help from the video card. (Obviously, map makers rarely wanted to work on something that was too heavy for their own systems, so complexity was also limited for practical reasons.) 3D rendering performance (in non-GZDoom ports) was boosted occasionally to enable some complex geometry or mapping tricks in popular releases, but there was little real pressure to use acceleration. On the other hand, the linear growth of single-core performance stopped long ago, while the urges of map makers haven't, so there might be some need for “real,” complete GPU-based rendering.


As I said, the traditional Doom BSP-walker software renderer is quite parallelizable. You can split the screen vertically into several subscreens and render them separately (this does wonders for epic maps). The game logic, or at least most of it, can probably be run in parallel with the rendering.
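A sketch of that split (illustrative names only; renderColumns stands in for a port's real column renderer):

    #include <thread>
    #include <vector>

    // Stand-in for the real per-column renderer (BSP walk, wall/floor drawing...).
    void renderColumns(int firstColumn, int lastColumn) {
        for (int c = firstColumn; c < lastColumn; ++c) {
            // draw screen column c
        }
    }

    // Split the screen into vertical strips and let each worker thread draw one.
    void renderFrameParallel(int screenWidth, int workerCount) {
        std::vector<std::thread> workers;
        const int columnsPerWorker = screenWidth / workerCount;
        for (int w = 0; w < workerCount; ++w) {
            int first = w * columnsPerWorker;
            int last  = (w == workerCount - 1) ? screenWidth : first + columnsPerWorker;
            workers.emplace_back(renderColumns, first, last);
        }
        for (auto& t : workers) t.join();  // the frame is complete once every strip is drawn
    }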

And I don't think any of the above is necessary. Even according to their graphs, popular Doom ports can render huge maps at sufficiently high fps on reasonably modern hardware. The goal of this project, as stated in the Doomworld thread, is to be able to run epic maps on a potato.


Even just updating the graphs would be helpful. There appear to have been several releases since 0.9.2.0, including a bump from .NET 7 to .NET 8 (and a bump to .NET 9 in dev).

The more recent .NET versions by themselves are likely to have some impact on the performance, let alone any changes in Helion code between versions.


It might make sense to use a logarithmic scale for the graphs too; it's hard to tell what speed the other ones are running at since they're compressed so far down.


I don't have enough time to read the paper in full right now, but I'm curious whether they could use this to find the solution to the three-sided coin problem. I haven't heard anything about it since I watched the Matt Parker video about it.

https://youtu.be/-qqPKKOU-yY

Or I guess if anyone else knows the answer, that would also satisfy my curiosity.


They should be able to simulate it! Here's another answer: https://news.ycombinator.com/item?id=33776796


Looks like that post author forgot to loop back to the original question once they found a model that fit their own simulations.

Just visually going off the chart, the answer is that a "coin" has a 1/3 chance of landing on its edge when its height is 1.7x its radius, or 0.85x its diameter (the blog author used half-height while the paper he found uses full height).

