Hacker News | superrad's comments

That's not a bug, just an inherent problem with using very thin colliders in a discrete collision detection system. It's not a Unity problem, and Unity lets you configure continuous collision detection to prevent tunneling: https://docs.unity3d.com/6000.3/Documentation/Manual/Continu...

Game development is full of domain knowledge like this, because games need to make compromises to keep simulation updates running at no lower than 30 Hz.


That's fair. I just remember being very frustrated that what felt like a basic feature I was implementing per the tutorial broke immediately, in such a fundamental way, in such a simple situation, and that I couldn't find any sort of explanation or solution at the time. Possibly it was fully my fault and the info was readily available!


I think it's really just game development being full of tribal knowledge.

The tutorial probably should have instructed you to create box colliders for walls (giving the physics engine a larger volume in which to catch collisions) rather than a simple plane/quad, which leads to exactly the issues you had, or at least explained why a box works better than a plane.

I guess you have to balance necessary information against overload in a tutorial, or at least include an aside or further reading that helps build up this internalized understanding.


For a while now Unity has had an incremental garbage collector, where you pay a small amount of time per frame instead of suffering large pauses every time the GC kicks in.

Even without the incremental GC it's manageable, and it's just part of optimising the game. It depends on the game, but you can often get down to 0 allocations per frame by making use of pooling and the engine's no-alloc APIs.

You also have tools to pause the GC, so if you're down to a low allocation rate you can disable the GC during latency-sensitive gameplay and re-enable it (and collect) on loading, pause, or other blocking screens.

Obviously it's more work than not having to deal with these issues at all, but for game developers it's probably a more familiar topic than working with the borrow checker, and critically it allows for quicker iteration and prototyping.

Finding the fun and time to market are top priorities in game development.


At this point I really wonder why anyone would use Rust for anything other than low-level system tools/libraries or kernel development ...

Anything with a graphical shell is probably better written in a GC'd language, but I'd love to hear some counter-arguments.


It depends on the kind of game you’re making.

If it’s a really logic-intensive game like Factorio (C++), or RollerCoaster Tycoon (Assembly), then I don’t think you can get away with something like Unity.

For simpler things that have a lot of content, I don’t think you can get away with Rust, until its ecosystem grows to match the usual game engines of today.


I mean, you could write your logic engine in Rust (as a library), and do all the rest in a more ergonomic language, with a GC.


Yeah, I think that would be ideal if possible, although often that requires most of the state to be in Rust, and exposing proper bindings to every bit of that state is quite an undertaking.


I think they mean the original Amiga files, not the NFT image, which obviously you can right-click-save just the same as buying it.


yes thanks, that's what I meant


Do you not plan to go grocery shopping, or do you always do it on the spur of the moment? I'm not sure what the issue is. If you need to pick up something small from the store on the way somewhere, you can definitely get a small always-carry bag that will fit in a pocket. When you're going to actually go grocery shopping, just bring the bigger reusable bags. If the purpose of the journey is shopping, it's not inconvenient to carry those bags, and you'll have to carry the groceries back anyhow.

I get that it's less convenient to have to remember a bag, but it's not some insurmountable task, and it does seem to reduce the number of plastic bags that get caught by the wind and blow around as trash.


Spur of the moment -- my schedule is always changing. I know I need to go sometime during the week but it's totally going to depend on when I happen to have free time on the way home, and I generally won't know that until I'm heading home. It might be Tuesday, or it might not be till Friday.

Always having a bunch of bags on me just isn't a thing, not when you walk and take the subway everywhere and don't want to be lugging around a backpack when you go out for drinks and have nowhere to put it when you're standing around a bar.

I'll take the big bag when it's on the weekend and I'm making a special trip to the supermarket, but there isn't always an opportunity for that either.


I don't know if you're joking, but Ultima Online was obviously released 20 years before GitHub Actions existed.


Good point. Still, this could have been caught by fuzzing, which is a term coined in 1988.


There's more tangible value in buying property in virtual game worlds, in the sense that people get to use property or items that they normally would not be able to in the game.

For things like ship preorders in Star Citizen there's much more risk that the value will never be delivered, but it still offers more than just a note of ownership of an abstract token in a blockchain sequence.

If someone decides to honor these tokens as proof of ownership of property/items in the real world or a virtual world, then they may offer some value, but there's a very real risk that will never happen.


IMO, someone’s tangible value of putting an Ape as their avatar pic on social media might be greater than that of someone being able to use a ship purchased in Star Citizen.

It’s a little crazy to me how Twitter profiles with Apes are viewed as “influencers” and “pioneers” and get treated as such. Not my cup of tea per se, but there is tangible value for some people there.


It's not great to have to double your memory usage while you reallocate your array. On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

There's also the cost of copying objects, especially if you don't know whether the objects you're copying from the original array to the resized array have an overloaded copy constructor. Why copy these objects and incur that cost if you can choose a data structure that meets their requirements without this behaviour?

If you're holding pointers to these elements elsewhere, re-allocating invalidates those. Yes, you probably shouldn't do that, but games are generally trying to get the most performance from fixed hardware, so shortcuts like this will be taken; it's at least something to talk about in the interview.

I can see why they were confused by your answer, as it's really not suited to the constraints of games and the systems they run on.


> It's not great to have to double your memory usage while you reallocate your array.

You don't have to use a growth factor of 2. Any constant multiple of the current size will give you amortized constant complexity.

> On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

If fragmentation is a concern, you can pre-allocate a fixed capacity. Or you can use a small-block allocator that avoids arbitrary fragmentation at some relatively minor cost in wasted space.

> Why copy these objects and incur that cost if you can choose a datastructure that meets their requirements without this behaviour.

We have an intuition that moving stuff around in memory is slow, but copying a big contiguous block of memory is quite fast on most CPUs. The cost of doing that is likely to be lower than the cost of the cache misses incurred by using a non-contiguous collection like a linked list.

> as it's really not suited to the constraints of games and the systems they run on.

For what it's worth, I was a senior software engineer at EA and shipped games on the DS, NGC, PS2, Xbox, X360, and PC.


When you reallocate your array, you will have both your old array and your new, larger array in memory while you move your data over. At the very least you're using 2x, plus the extra memory for the expansion.

As for your other points: if you'd mentioned them in the interview you'd probably have been better received. Copying is really only that fast for POD objects (your objects' copy constructors may need to do reallocation themselves, or worse), so if you're suggesting a general solution you should be aware of that (or at least mention move constructors, if they were available at the time).

I would be surprised if any of the games you worked on actually shipped with an amortised resize of dynamic arrays (at least not for anything that mattered in the first place), so I don't know why you'd suggest it as a general solution in a game-dev context.


A linked list needs a pointer for every piece of data. If the data is small, this alone can double the memory footprint, on top of the impact on cache coherency.


A typical C implementation would be using realloc():

The realloc() function tries to change the size of the allocation pointed to by ptr to size, and returns ptr. If there is not enough room to enlarge the memory allocation pointed to by ptr, realloc() creates a new allocation, copies as much of the old data pointed to by ptr as will fit to the new allocation, frees the old allocation, and returns a pointer to the allocated memory.

Worst case definitely 2x memory. But not necessarily always.


> It's not great to have to double your memory usage while you reallocate your array. On more limited devices (see games consoles or mobile devices) you'll end up fragmenting your memory pretty quickly if you do that too often and the next time you try to increase your array you may not have a contiguous enough block to allocate the larger array.

That doesn't smell right to me, assuming you're talking about userspace applications on newer hardware. aarch64 supports at least 39-bit virtual addresses [1] and x86-64 supports at least 48-bit virtual addresses [2]. Have you actually had allocations fail on these systems due to virtual address space fragmentation?

Certainly this is something to consider when dealing with low-RAM devices with no MMU or on 32-bit, but the former hasn't applied to the device categories you mentioned in probably 20 years, and in 2021 the latter is at least the exception rather than the rule.

[1] https://www.kernel.org/doc/html/v5.8/arm64/memory.html

[2] https://en.wikipedia.org/wiki/X86-64#Virtual_address_space_d...


Should we be training on TIS-100 [0] to be ready for when these chips become a necessity?

[0] https://en.wikipedia.org/wiki/TIS-100


Exactly: the smaller your chip is, the more dies you can fit on a silicon wafer.

If your chip is too large it can even become practically impossible to manufacture at scale, because the chance of a defect landing on any given die grows with its area.


Yeah, I thought it was just a scare tactic, something the licence 'enforcers' could say (i.e. 'our detector van says you have a TV') to try and catch people out.

They surely can't get much of a return paying people to check up on potential licence 'evaders', so investing in a fleet of actual vans with drivers and operators would reduce any return even further.

