
Who wants to live in a world where art is dead and AI killed it? AI in games can either flood the world with forgettable, soulless content, or it can help artists and designers make the impossible playable.

We wrote a deep dive on how our AI x Games company, Studio Atelico, is approaching AI in games with respect for the artists and creators who make games special. The only approach that makes sense is that artists must share in the rewards of the work they produce.

AI sparks a lot of concern (rightly so) about exploitation, devaluing art, and flooding the market with low-effort content. We are gamers and game developers ourselves and feel that too. That’s why we’re building systems where:

- Artists explicitly approve how their work is used for training AI models

- Revenue is shared from games where AI draws on their style

- Labor and creativity are respected throughout

- AI is used to expand what games can do, not cut corners and cheapen the craft

We believe that, used right, AI can expand what creators can do, making the impossible playable (dynamic worlds, reactive characters, evolving art) and extending artists' reach rather than undermining their creativity.

We wanted to get your thoughts and feedback on our approach, positive or critical.


Looking for feedback on an AI engine that runs sophisticated game AI locally instead of in the cloud, eliminating per-request inference costs. Our tech demo (GARP) runs 20+ autonomous NPCs with memory, planning, and real-time interaction on a single RTX 3090, something that previously cost $500/day with cloud APIs.

The engine is composable, modular, and integrates with major game engines. We're enabling developers to create deep, responsive game worlds without the burden of cloud computing costs or API rate limits.
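
To make "memory, planning, and real-time interaction" a bit more concrete, here's a rough sketch of the kind of loop each NPC runs. This is illustrative only, not our engine's actual API; generate() stands in for whatever local model runtime sits underneath (llama.cpp or similar), so there's no cloud round-trip.

    # Hypothetical NPC agent loop (illustrative, not the real engine API).
    from dataclasses import dataclass, field

    def generate(prompt: str) -> str:
        """Placeholder for a call into a locally hosted model."""
        return "wave at the player and mention the storm"  # canned output for the sketch

    @dataclass
    class NPC:
        name: str
        memory: list = field(default_factory=list)

        def observe(self, event: str) -> None:
            self.memory.append(event)
            self.memory = self.memory[-20:]  # bounded rolling memory

        def plan_and_act(self, goal: str) -> str:
            prompt = (
                f"You are {self.name}. Goal: {goal}\n"
                "Recent events:\n" + "\n".join(self.memory) + "\nNext action:"
            )
            action = generate(prompt)
            self.memory.append(f"I decided to: {action}")
            return action

    npc = NPC("Garp the blacksmith")
    npc.observe("The player entered the forge during a thunderstorm.")
    print(npc.plan_and_act("greet visitors and keep the forge running"))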

Would love to hear the community's thoughts on local vs. cloud AI for gaming applications.


How much GPU memory are you using? Demanding games already use most if not all available VRAM just for rendering so there isn't a great deal of room left for big AI models. Even if you target games with simple graphics, the size of the AI model would still dictate the min-spec for it to be playable.


Under the hood, we're supporting multiple models that can be selected, but haven't optimized all the quantizations possible (the space is moving fast).

The range is 1GB-24GB, depending on model selection, but it would be amazing to push lower than that. 24GB is the high end, since only the NVIDIA xx90-class cards can handle it.


1-2GB might be workable if the model still performs adequately at that level, but anything more than that sounds very hard to justify for as long as the median Steam user and console baseline (Xbox Series S) only have 8GB of VRAM to go around.


Depends on the fidelity of the graphics, but I agree with you that the smaller the VRAM usage, the broader the base we can support on e.g. Steam. 1GB-2GB would be the sweet spot for all game types, which quantized 1B-parameter models can hit.
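
For a rough sense of why a ~1B-parameter model quantized to 4 bits lands in that 1GB-2GB budget (the KV-cache and overhead numbers below are illustrative assumptions, not measurements):

    # Back-of-envelope VRAM estimate for a quantized on-device model.
    def model_vram_gb(params_billion, bits_per_weight=4,
                      kv_cache_gb=0.3, overhead_gb=0.2):
        weights_gb = params_billion * 1e9 * bits_per_weight / 8 / 1024**3
        return weights_gb + kv_cache_gb + overhead_gb

    print(f"1B @ 4-bit ~ {model_vram_gb(1):.1f} GB")  # ~1.0 GB
    print(f"3B @ 4-bit ~ {model_vram_gb(3):.1f} GB")  # ~1.9 GB
    print(f"7B @ 4-bit ~ {model_vram_gb(7):.1f} GB")  # ~3.8 GB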

There is some evidence that next-gen consoles will feature AMD NPUs, and I suspect there will be more available RAM. There are definitely tailwinds that will change the hardware landscape over time.


Very interesting. How do we get our hands on it to try it out?


We are in closed alpha, but eager to talk with folks about what they're working on. Easiest is to reach out to: hello@atelico.studio


Most of the responses to your comment are a bit off here.

The challenge of autonomous trucks isn't that they're "more dangerous"; it's a matter of physics and current LIDAR technology. The weight of the truck means there is a minimum safe stopping distance at a given speed, and frankly, the range and quality of current LIDAR tech fall short of the distance required for safe stopping at average highway speeds for a truck.

Put another way, the autonomous driving stack has difficulty seeing far enough ahead of a truck to stop in time to avoid an accident in highway environments. You'll need better fusion across the perception stack (LIDAR + imaging + neural nets) or longer LIDAR ranges to deploy autonomous trucking sooner.
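
Rough numbers to illustrate the point (the decelerations and reaction time below are assumptions for the sketch, not measured specs): at highway speed a loaded truck's stopping distance is on the order of the useful range of today's long-range sensors, which leaves little margin.

    # Rough stopping-distance arithmetic: d = v * t_react + v^2 / (2 * a).
    def stopping_distance_m(speed_kmh, decel_ms2, react_s=1.5):
        v = speed_kmh / 3.6  # m/s
        return v * react_s + v ** 2 / (2 * decel_ms2)

    # Assumed sustained decelerations: ~3 m/s^2 loaded truck, ~7 m/s^2 passenger car.
    print(f"Truck @ 100 km/h: {stopping_distance_m(100, 3.0):.0f} m")  # ~170 m
    print(f"Car   @ 100 km/h: {stopping_distance_m(100, 7.0):.0f} m")  # ~97 m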


That doesn't sound plausible. Braking distance goes up with the square of speed, while it only increases linearly with weight. As an example, a truck going 55 can stop as fast as a car going 65. Ultimately it just means your self-driving truck has to drive a bit slower.
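
To put the square-law part in numbers (constant deceleration assumed):

    # At a fixed deceleration, braking distance scales as v^2: d(55)/d(65) = (55/65)^2.
    ratio = (55 / 65) ** 2
    print(f"55 mph needs {ratio:.2f}x the braking distance of 65 mph (~{1 - ratio:.0%} less)")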


Agree with the lidar range issue, but in a perfect storm... there was also major hype to drive money into robotaxis, e.g. Uber flashing an order for $10 billion of Mercedes S-Class ( https://www.roadandtrack.com/car-culture/news/a28508/uber-or... ).

This year we've seen Luminar start to go public via SPAC, and their long-range Lidar has been mounted on most (all?) non-Waymo self-driving truck prototypes. There's even a public dataset with Luminar data now: https://github.com/TRI-ML/DDAD


Radars already go 100m+. To avoid a front collision, pretty much all these systems use radar, not lidar. Next-generation radars are already doing 300m+.


But angular resolution is still too low to disambiguate targets in the far field, and neither the frequencies nor the maturity of imaging radar are there yet. Range alone does not solve radar's main problem.
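
To put numbers on the angular-resolution problem (1 degree of azimuth resolution is just an illustrative figure, not any particular sensor's spec): cross-range resolution grows linearly with range, and a highway lane is only about 3.7 m wide.

    import math

    # Cross-range resolution ~ range * angular_resolution (in radians).
    def cross_range_m(range_m, angular_res_deg):
        return range_m * math.radians(angular_res_deg)

    for r in (100, 200, 300):
        print(f"At {r} m, 1 deg of resolution spans ~{cross_range_m(r, 1.0):.1f} m")
    # 100 m -> ~1.7 m, 200 m -> ~3.5 m, 300 m -> ~5.2 m: adjacent-lane targets start to merge.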


Trucks already have slower speed limits. A truck going 50 can stop as fast as a car doing 75. Run them all night. No rest breaks.


Overall, I've seen similar movement away from Tensorflow in my social circle of research scientists/engineers.

One area I'd push back on is that "this is not the fault of Tensorflow." An area of weakness for Tensorflow is that it solves a number of DL problems with a specialized API call. That's not an asset, that's a liability.

LSTMs were always a pain point. So much so that for Tensorflow projects, I gave up and insisted on traditional feedforward approaches like CNNs + MLPs or ResNets even when LSTMs would have been viable. Mostly identical performance, decent speed boosts from avoiding recurrence, and the simpler code was easier for non-ML engineers to maintain.
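
For reference, the kind of swap I mean, sketched in tf.keras (shapes and layer sizes are just illustrative, not production code): a small causal Conv1D stack standing in for an LSTM on (timesteps, features) input.

    import tensorflow as tf
    from tensorflow.keras import layers

    seq_len, n_features, n_classes = 128, 16, 4

    # Recurrent baseline.
    recurrent = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len, n_features)),
        layers.LSTM(64),
        layers.Dense(n_classes, activation="softmax"),
    ])

    # Feedforward stand-in: causal temporal convolutions, no recurrence.
    feedforward = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len, n_features)),
        layers.Conv1D(64, kernel_size=5, padding="causal", activation="relu"),
        layers.Conv1D(64, kernel_size=5, padding="causal", activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(n_classes, activation="softmax"),
    ])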

As soon as you branch out of standard bread-and-butter DL models, you spend frustratingly long stretches tracking down obscure solutions in a corner of the API space with its own hard-to-follow logic.

Every time I'd point out, in forums or here on HN, that something was hard to do, I'd get a response that it's easy with some [insert-random-api] function call.

In the end, it's my opinion that Tensorflow will lose out to JAX and Pytorch, through no fault but its own complicated construction.


I agree with this, although I think it was a conscious and deliberate choice with TF 2.0. We have given up on TF for all future work, which is sad since I really appreciate a number of the pieces that surround the core. I think the choice they made to emphasize support for already-developed models and make the experience great for novices will be one they come to regret. We found so many issues when we tried to port some of our existing models to TF 2.0. The sad part was that there were GitHub issues for all of them.

Personally, I think Tensorflow has already lost and we just need to let it play out over the next few years. One interesting wrinkle is that since Trax, Jax, and Flax utilize pieces of Tensorflow, the TF team can probably claim good internal adoption numbers, depending on how they count.


Every obscure solution in Tensorflow also has a chance of breaking on an upgrade. I'm glad I moved to Pytorch.


As far as PPLs on Python go, I have found that Pyro's documentation and tutorials are quite stellar. Built on a Pytorch backend (now NumPyro has support for JAX), it's by far my favorite PPL to use.

For your bread and butter PPL starter, I'd go for Variational Autoencoders (VAE). Fun to visualize and tweak: http://pyro.ai/examples/vae.html

If that's too much going on, try this basic tutorial on inference: http://pyro.ai/examples/intro_part_ii.html

The full set of examples is here: http://pyro.ai/examples/
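
If you want a taste before the tutorials, here's a toy version of the idiom they build up to (a model with a latent variable, a matching guide, and SVI). Purely illustrative; the priors and learning rate are arbitrary.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.optim import Adam
    from torch.distributions import constraints

    data = torch.tensor([1., 1., 1., 0., 1., 1., 0., 1., 1., 1.])  # coin flips

    def model(data):
        fairness = pyro.sample("fairness", dist.Beta(10., 10.))  # prior near 0.5
        with pyro.plate("flips", len(data)):
            pyro.sample("obs", dist.Bernoulli(fairness), obs=data)

    def guide(data):
        alpha = pyro.param("alpha_q", torch.tensor(15.), constraint=constraints.positive)
        beta = pyro.param("beta_q", torch.tensor(15.), constraint=constraints.positive)
        pyro.sample("fairness", dist.Beta(alpha, beta))

    svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
    for _ in range(2000):
        svi.step(data)

    a, b = pyro.param("alpha_q").item(), pyro.param("beta_q").item()
    print(f"posterior mean fairness ~ {a / (a + b):.2f}")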


I'd check out Pyro, a probabilistic programming language built on Pytorch.

You can find Bayesian Neural Network examples starting here: http://pyro.ai/examples/bayesian_regression.html

I think the documentation and tutorials are thorough and well laid out to ease you into Bayesian NNs and, more generally, handling uncertainty with neural networks + distributions. There are some Pyro-specific constructs in there, but it's the easiest way to get into BNNs without lots of prior knowledge.
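
For a quick flavor of what that tutorial builds toward, here's a stripped-down Bayesian linear regression in Pyro (not the tutorial's exact code; the priors and iteration count are just illustrative):

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO
    from pyro.infer.autoguide import AutoDiagonalNormal
    from pyro.optim import Adam

    x = torch.linspace(0, 1, 50)
    y = 2.0 * x + 0.5 + 0.1 * torch.randn(50)  # synthetic data

    def model(x, y=None):
        w = pyro.sample("w", dist.Normal(0., 1.))          # prior on slope
        b = pyro.sample("b", dist.Normal(0., 1.))          # prior on intercept
        sigma = pyro.sample("sigma", dist.HalfNormal(1.))  # prior on noise scale
        with pyro.plate("data", len(x)):
            pyro.sample("obs", dist.Normal(w * x + b, sigma), obs=y)

    guide = AutoDiagonalNormal(model)  # automatic mean-field variational guide
    svi = SVI(model, guide, Adam({"lr": 0.03}), loss=Trace_ELBO())
    for _ in range(1500):
        svi.step(x, y)

    print(guide.median())  # posterior medians for w, b, sigma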


This comment is ironic given your previous praise for Linus on acknowledging his rude behavior towards other developers.

The language you're using is pretty abrasive and it can come off as quite hostile even if you don't intend it. You also get defensive when someone interacts with your easy-to-misinterpret comments and you gaslight them by saying they shouldn't be quick to "take sides" about your ripe-for-polarization statement.

Maybe you could take a queue from Linus. As you said in your own comment in reference to Linus admitting he had an attitude: "Good for him. These are hard things to admit, and he's setting a great example."

If I'm so lucky, I look forward to a quippy response about how that situation is totally different.


I'm not sure what's abrasive about their comment.

If anything, it's a great example of direct-without-abusive, something I wish folk like Linus would adopt.


> You also get defensive ... you gaslight them ...

Very general, very untrue, and very off-topic. Please re-read the comment guidelines.

> Maybe you could take a queue from Linus.

Cue.

