Justifications aside, it was depressing seeing how quickly the AI gamedev movement ground to a halt. I’ve been waiting patiently for 4 years now. And now that the moment is here, it’s impossible for any serious gamedev to pursue it without committing financial suicide.
Bah. At least such things tend to work themselves out over decades. The next few generations will look back on us and wonder why we dragged our feet for so long.
I find it excellent that game devs have firmly rejected stolen content. At the end of the day it’s their livelihoods. OpenAI and those who promote the idea of reselling work without permission should be out of business. But their bet is that if done at scale they’ll get away with it. At least one industry rejected their philosophy. Other software and content developers should follow suit.
If you only care when something affects you personally it's hard to take your criticism seriously. It's much easier when you have no stake in the game or even benefit from it.
Likewise, if you can say "this sucks for me personally but is a net positive for humanity", then I have a much easier time considering your endorsement.
As a programmer, I'm glad if AI tools can learn from my code and improve copilot etc. for everyone. I find it exasperating how much other creators clutch their pearls over the same thing.
Think of it like those in favour of communism. They find it easy to demand that other people share the output of their work for free so they can benefit from it. Similarly, programmers who produce insignificant code are happy to use the valued work of others for free. For instance, corporations have no issue using the enormously valuable work of open source developers, and neither do some of their mediocre workers. Essentially this is what we are witnessing: demands from workers that other folks' work be made available to them so they too can be productive.
On the contrary, only someone who can't produce more value than an AI should be afraid of being replaced by it. Our value is in abstract thought and using tools as force multipliers, not writing boilerplate by hand.
I fail to see how that is relevant to your original comment. You seem to suggest that if you're fine with AI being trained on your code it means that your code is insignificant, which of course makes no sense.
So no code with permissive licenses has any financial value? I release all my code under MIT which means people are free to do whatever they want with it, including training their AI on it.
What are you trying to argue about using open source? A valuable developer uses an open source project when appropriate instead of wasting resources succumbing to NIH.
A valuable developer uses an open source project according to its licensing terms. An unscrupulous corporation steals said code and resells it via an AI.
This argument does not work. You are making an assumption that humans learn and create the same way that AI models do. It's also deflecting from what most AI art opponents are actually upset about--the potential loss of income from AI models.
Under certain circumstances, it is theft. Those circumstances are similar to ai software ingesting content. However the two are not comparable. Software has no rights.
I'm not saying this specifically is Evil. But neither was Google's "first" instance of whatever it did to start its downward descent. At the time it was probably something silly, a thing we thought we could brush off due to the goodwill they had with us.
Nothing downward about Google making sure advertisers can advertise to an audience likely to buy the product instead of truly random ad distribution either.
Generation is a means to an end. It's up to the user to decide what that end is: it can be absolute naff, or it could be a solo auteur's masterpiece. The problem isn't AI, it's Valve's complete abdication of quality control.
Valve has adopted the stance that any game with even a hint — literally just a whiff — of AI generated content will be instantly rejected from steam with no explanation beyond "it appears your game contains AI generated content." They say nothing beyond that.
If that seems unbelievable, I agree. I didn’t believe it till I drilled down into the details of multiple instances of this happening. It pops up on Reddit with alarming frequency.
Meanwhile Tim Sweeney looks like a visionary, since he says "bring your games over to the Epic Store, we’ll host them." And while I applaud this and cheer him on, as a former gamedev I understand the reality of the situation: neither indie studios nor established studios can take the financial risk of prototyping a new development model spearheaded by AI, only for the resulting game to be thrown out of the primary revenue stream (Steam).
There’s a ray of hope, which is that AI can revolutionize the mobile gaming space. But I’m pessimistic because mobile gaming sucks. It’s not an industry I’d choose to be in, because the financial pressures practically force you to develop something addictive at best and downright predatory at worst.
It’s hard to imagine Portal being developed for iOS, yet it was easy to imagine a lone dev making Portal for desktop PCs using AI. But now no one will try. Hopefully some teenager with nothing to lose will pull it off.
I don’t think Gabe made this decision. It’s too hamfisted. It feels like someone at Valve got a bug up their bonnet about AI and implemented the worst reactionary policy imaginable. Gabe would understand the implications and nuances, and certainly wouldn’t tolerate a system where Valve is abusing their developer trust by rejecting them with no explanation and no recourse. But Sir Newell is old, and it was a matter of time till empire decay set in, so it’s not surprising it ended up this way. Still sad though.
> In particular, [Game Name Here] contains art assets generated by artificial intelligence that appears to be relying on copyrighted material owned by third parties. As the legal ownership of such AI-generated art is unclear, we cannot ship your game while it contains these AI-generated assets
Additionally, Valve has put out a number of clarifying responses to journalists.
I think we might be talking past each other, or we’ll have to agree to disagree. I stand by my claim that that message has no explanation, no details, and no specific steps for developers to reform their game. They even threaten the developer by saying they have exactly one chance to remove all of the unnamed supposedly-infringing content, or else they’ll never be able to publish their game.
I should sign off now because I was about to write “in what universe does this seem reasonable?” and then I remembered it’s not worth getting dragged down by this nonsense. There are lots of intellectually satisfying problems to work on. Valve may be relevant now, but a decade tends to favor upstarts. The way to deal with this is to make Valve irrelevant, which is a matter of patience and time.
One case I recall was a developer who used AI text-to-speech for voice lines and was rejected for "using AI". If this report (and my recollection) is accurate, then Valve do seem to be taking it to a ridiculous extreme.
> It feels like someone at Valve got a bug up their bonnet about AI and implemented the worst reactionary policy imaginable
The response by digital artists on Twitter and Reddit against anything AI generated ever even making an appearance in their communities is one of the most intense reactions I’ve seen in response to a new tech. Maybe there’s game devs who have similar extreme feelings rooted in concerns about replacing their own jobs, loss of quality without human involvement at every stage, creativity, etc.
It’s not even just "don’t post it", it’s "you shouldn’t be engaging with AI tools at all, because it accelerates our industry’s decline."
I of course think that’s silly (at a minimum, the idea that you can stop progress), but this sort of pearl clutching will likely be a common phenomenon re: AI in other industries as it happens.
so what you're saying is we should form a software engineers' guild that prohibits members from using AI, and then use our collective power to force companies not to use it?
Valve is doing an excellent job at keeping AI spam out of Steam and should be commended for it. There should be zero tolerance for allowing stolen IP to be resold the way OpenAI and many in the AI underworld dream of reselling it.
I can see your viewpoint, but at the same time, if there's nothing of the original training material in the output, what exactly is being stolen and resold?
I guess I'm saying that I can see how you got to that position, but I'm not sure your conclusion makes sense. The production of the model involves widespread use of IP, much of it questionably obtained, but you cannot get back to the original IP from the output.
Why are they so anti AI? It seems inevitable. Fear of IP battles like those w/ authors suing OpenAI? I guess I can understand wanting the dust to settle on some of those issues.
But I think the genie is out of the bottle (and I’m of the opinion that GPT content is transformative enough to warrant fair-use exemptions) so it would seem wiser to find a careful way of embracing the new tech.
that's not true. they only ban ai if you don't clearly have usage rights for the data that the ai was trained on. obviously, this includes basically all of the big and popular available AI APIs at the moment.
that sucks for using AI right now, but it's only a question of time until you get huge models trained on CC0 data imo
This is functionally identical to “This is true, because Valve bans all existing means of generating AI content for games."
As for your dream of CC0 models, it’s a dream because you’d have to be asleep to believe it. I don’t mean to phrase it so harshly, but there are so many reasons that can’t work. The main one is that there isn’t enough non-copyrighted data to train any competitive model. The competitive models have only recently gotten good enough to just barely be usable, and far more than 90% of their training data was sourced from people they certainly didn’t get usage rights from.
I deleted a paragraph ranting about usage rights. Suffice to say, Stallman’s “Right to Read” becomes more prescient with each passing day.
I'd like to point out that it has been shown that text models can be trained on purely synthetic data and perform at or above the level of models trained on human-derived data. This works because you can use an LLM to judge the quality of a particular generated sample, which lets you automate the process of picking high-quality generations.

It won't be long before this is done with generative art as well: a multi-modal model could be used to curate the output of some CC0-derived model and build up a much larger training set for a new model. You could also procedurally create training data by generating images from 3D scenes with various shaders applied to give them the look of different art styles, or use neural style transfer instead of (or in addition to) a shader to add more styles. You could use the multi-modal model to judge these images as well, selecting only the best.

With that, you essentially have a fully automated pipeline for producing any size training set you want, 100% synthetic except for the base 3D assets, shaders, and example style images, which you could source CC0 or buy licenses to.
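The generate-then-judge loop described above can be sketched in a few lines. This is a toy illustration, not a real pipeline: `generate_sample` and `judge_quality` are hypothetical stand-ins for calls to a generator model and a judge model, stubbed out here so the filtering logic itself can run.

```python
import random

def generate_sample(seed: int) -> str:
    # Stand-in for a generative model producing one synthetic sample.
    rng = random.Random(seed)
    return f"sample-{seed}-{rng.randint(0, 999)}"

def judge_quality(sample: str) -> float:
    # Stand-in for an LLM or multi-modal judge scoring a sample in [0, 1].
    return (hash(sample) % 1000) / 999.0

def build_synthetic_set(n_candidates: int, threshold: float) -> list:
    # Generate many candidates, keep only those the judge rates highly.
    kept = []
    for seed in range(n_candidates):
        s = generate_sample(seed)
        if judge_quality(s) >= threshold:
            kept.append(s)
    return kept

dataset = build_synthetic_set(100, 0.8)
print(len(dataset), "samples kept out of 100")
```

The interesting property is that quality control is fully automated: the judge replaces the human curator, so the only manual inputs are the base assets and the threshold.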
I more or less agree with you (I'm not convinced that training models on the imagery of the internet isn't fair use), but I wouldn't rule out a CC0 model just yet.
There's Mitsua Diffusion One [0], which doesn't produce incredible results, but it's a start and they're planning on adding more data, including opt-in work from artists.
PIXART-alpha [1] was trained on only 25 million images, and has excellent and competitive results. This could pair well with Fondant AI's 25 million Creative Commons-only dataset [2] (not all CC0, but a sizeable amount).
I don't think it's as far away as you think it is!
> Valve has adopted the stance that any game with even a hint — literally just a whiff — of AI generated content will be instantly rejected from steam with no explanation beyond "it appears your game contains AI generated content." They say nothing beyond that.
Steam is not the world. You can publish on Xbox, PlayStation, Google Play, the App Store, the web, anywhere. Steam is not the be-all and end-all.
No, but they are more or less a monopoly over PC gaming. Yes, there are alternatives, but they are a tiny percentage of the market, and your ability to sell to the PC market will be severely limited. People don't use the other storefronts unless they really have to, and your indie game using AI-generated assets is unlikely to get people to move over.
I'm having a hard time imagining what role AI (in around its current state) would actually play in creating Portal.
The only professional art assets I'd imagine you could substitute with AI work might be textures, which already tend to be found in libraries rather than handmade. Actual spritework is still quite far off and 3D Models even more so. Not to mention an AI definitely can't do much for art direction or getting your assets to actually look good put together.
Music and Writing seem too far off right now to even consider, let alone design work. So what essential element is Generative AI bringing at the moment? Skipping out on Voice Actors? (I don't believe it's on par with professional work on that front either but it's closest and people aren't as discerning.)
The most I can see besides that is it could speed some parts of art concepting and save you a buck on texture licensing.
Revolutionizing the programming part with copilot or such seems more believable, although since Steam isn't barring use of AI code (How would they even know?) I don't think that's what you mean.
"but does it only make these because there already exist hundreds of examples and tutorials? fear not, there's an entire library of other projects it made all of which also have hundreds of examples and tutorials"
don't get me wrong, it's interesting, but the response here didn't answer the question
GPT doesn't create novel things unless you give it a reason to. It doesn't "want" to be novel, due to its lack of wanting. But if you poke at it you can easily get it to create novelty.
For instance I asked GPT to make some game concepts for a more narrative game with an emphasis on locked areas you'd have to get through, and here's a few of its ideas:
1. Masquerade Ball of Eternity: Set in a never-ending palace ball where attendees wear masks that indicate their access levels. To gain entrance to different rooms, players must persuade, charm, or deceive other guests to swap masks or uncover clues about hidden access points.
2. Fantasy Festival: A week-long festival in a magical kingdom where different events, shows, and parties require special passes. To get these passes, players must navigate the gossip mill, undertake tasks for performers, or sneakily forge invitations.
3. Library of Forbidden Knowledge: Every floor of this immense library has knowledge more restricted and coveted than the last. To move up, players must win debates, discover hidden lore, or befriend ancient librarian spirits.
4. Zoo of Mythical Beasts: To access each enclosure, players must understand and empathize with the creatures, learning their stories, likes, and dislikes to gain passage without arousing suspicion.
(Now I can hear you saying: but aha! Every one of those ideas is made up of letters and words found in the training set!)
I can pretty trivially generate something using a LLM that I doubt was in the training set or has ever been written down by humans. Do you have reason to believe that's not the case?
You can literally make up a language yourself and get ChatGPT to talk in it. Sure, I can't prove that I didn't just make up a language that someone in the training set also made up exactly the same way. But that just seems incredibly unlikely.
The point is about pessimistic, offhand dismissals that never get past the surface level just because current-day examples exist, not about someone posting a bad comment because they merely threw a couple of things together.
I mean, it's just a clickbait video. The research paper [0] is cool and all, I just wish we didn't have these hyperbolic clickbait titles like "OpenAI’s ChatGPT Makes A Game For $1!" - we really are in a bubble.
The annoying part of so many of these examples is that they make GPT seem so shallow by treating stuff like a race. You can get something simple in 7 minutes, but you can also get something much more interesting in 8 hours working with GPT.
I imagine the near future will be launching a swarm of agents to accomplish a complex task and come back later when done. Right now, it is just doing the basics.
Game dev for online game here and genuinely curious.
I have to work with online servers, regular testing, analyzing bugs in both client and server (python), troubleshooting, adding new features to client (written in C# for iOS and Java for Android).
In practice, what is the best way I could use ChatGPT in game development? How can it realistically test the game and troubleshoot issues, how can it implement new features?
These things require making changes to code, compiling, running the client and testing how it plays and looks. Everything has to be extremely precise.
Right now it won't go as far as you're asking, but the best workflow IMO is using GitHub Copilot in your IDE, plus a ChatGPT Plus account on the side for the more complicated discussions that GPT-4 handles better.
I am not in game dev but that is my workflow and it has definitely improved my productivity and is a pleasure to use. I can get Copilot to help write tests for my functions, I use it regularly to see if there is a more optimized way to write the code I am looking for. I switched from JetBrains to VS Code so I can get a better integrated experience too. The Copilot autocomplete is usually pretty spot on with what I want to write.
I'd like to add to what the other commenter said. I use the same stack, but I mostly limit Copilot to autocomplete; the built-in chat is not really as good as just copying and pasting into GPT-4 for anything even remotely complex. There is also Cursor https://www.cursor.so/ which integrates GPT-4 into the IDE, but you do have to switch to using their custom version of VSCode.

ChatGPT Plus is also getting multi-modal support so you can send it images. This has been useful for my webdev work, and I could see it being useful in gamedev as well. If you haven't used ChatGPT much before, make sure to fill in your Custom Instructions with details about yourself and your preferred communication and programming styles. It really helps focus the model on your use case.
Any time you can drive things with data there's a good opportunity.
For instance, if you have a functional testing suite that can be run with, say, a series of commands in JSON... you can tell GPT what it can do and ask it to write out scripts in that format.
In general if you think you want a DSL you might be able to get away with JSON and GPT instead, using GPT prompts as your DSL and some simpler imperative system run off GPT-generated JSON. (GPT can translate the JSON back to English too, so it's not just one way)
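To make the idea concrete, here's a toy interpreter for that JSON-command approach: GPT emits a list of commands in an agreed format, and a small imperative runner executes them against a test harness. The command names ("move", "press", "expect_scene") and the `FakeGame` harness are invented for illustration, not from any real framework.

```python
import json

def run_script(script_json: str, game) -> None:
    # Execute a JSON list of commands against a game test harness.
    for cmd in json.loads(script_json):
        op = cmd["op"]
        if op == "move":
            game.move(cmd["x"], cmd["y"])
        elif op == "press":
            game.press(cmd["button"])
        elif op == "expect_scene":
            # Assertions let GPT-written scripts double as functional tests.
            assert game.scene == cmd["name"], f"expected scene {cmd['name']}"
        else:
            raise ValueError(f"unknown op: {op}")

class FakeGame:
    # Minimal stand-in harness so the runner can be exercised.
    def __init__(self):
        self.pos, self.scene = (0, 0), "menu"
    def move(self, x, y):
        self.pos = (x, y)
    def press(self, button):
        if button == "start":
            self.scene = "level1"

# A script of the kind GPT could be prompted to produce.
script = json.dumps([
    {"op": "press", "button": "start"},
    {"op": "move", "x": 3, "y": 4},
    {"op": "expect_scene", "name": "level1"},
])
g = FakeGame()
run_script(script, g)
print(g.pos)  # → (3, 4)
```

Because the runner is dumb and imperative, all the "intelligence" lives in the prompt that produces the JSON, which is the point: the JSON schema is the DSL, and GPT is its compiler in both directions.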
If there's any story elements in your game you can generate them with GPT, again probably resulting in JSON. You shouldn't defer the entire plot to GPT, but it can fill out lots of details and make the environment richer.
If you have autonomous entities in your game (as simple as any kind of enemy) you could both brainstorm behaviors and personalities and then turn that into code, making the enemies more eclectic.
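A minimal sketch of that enemy idea: personalities brainstormed with GPT arrive as plain data, and a small factory turns them into in-game behavior. The field names ("temperament", "aggro_range") and the two temperaments are made up for this example.

```python
import json

# Map each brainstormed temperament to a decision rule.
BEHAVIORS = {
    "skittish": lambda dist, aggro: "flee" if dist < aggro else "wander",
    "aggressive": lambda dist, aggro: "charge" if dist < aggro else "patrol",
}

def make_enemy(spec_json: str):
    # Build a behavior function from a GPT-generated JSON spec.
    spec = json.loads(spec_json)
    brain = BEHAVIORS[spec["temperament"]]
    aggro = spec["aggro_range"]
    return lambda player_dist: brain(player_dist, aggro)

goblin = make_enemy('{"temperament": "aggressive", "aggro_range": 5}')
print(goblin(3))   # → charge
print(goblin(10))  # → patrol
```

Since each enemy is just a JSON blob, GPT can generate dozens of eclectic variants without anyone touching the behavior code.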
I quite like Hungarian accents, I'm surprised you find this one so grating.
I don't think anyone's going to make you a step by step guide, but if you want to jump in and play around yourself, the author did link the repo in the description.