I have an academic and professional background in traditional AI, and I've programmed an AI for a 3D video game I made in college. My traditional AI background is mostly in applying a wide variety of mainstream machine learning algorithms (neural networks, SVMs, regression) to real-world problems like fraud detection and image recognition. I was somewhat frustrated when building the game AI because the best solution came down to building a large state machine with hard-coded logic. The AI was dead simple, but it worked well and wasn't far from the state of the art in video game AI at the time, and I suspect today.
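For anyone who hasn't built one, the whole thing boils down to hand-written transition logic, roughly like this (a generic sketch, not my actual game code; the states, thresholds, and entity methods like `can_see` or `follow_waypoints` are invented for illustration):

```python
from enum import Enum, auto
import math

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()
    FLEE = auto()

class EnemyAI:
    """Hard-coded finite state machine for a single enemy (hypothetical entity API)."""
    def __init__(self):
        self.state = State.PATROL

    def update(self, enemy, player):
        dist = math.dist(enemy.pos, player.pos)

        # Transitions are just hand-tuned if/else logic.
        if enemy.health < 20:
            self.state = State.FLEE
        elif dist < 2:
            self.state = State.ATTACK
        elif dist < 15 and enemy.can_see(player):
            self.state = State.CHASE
        elif self.state is not State.PATROL and dist > 25:
            self.state = State.PATROL

        # Each state maps to one simple behavior.
        if self.state is State.PATROL:
            enemy.follow_waypoints()
        elif self.state is State.CHASE:
            enemy.move_toward(player.pos)
        elif self.state is State.ATTACK:
            enemy.attack(player)
        elif self.state is State.FLEE:
            enemy.move_away_from(player.pos)
```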
I agree with the general point of this article; in fact, I think it understates the case. When people want AI in video games they want the AI to be truly smart. The fact is, we haven't cracked "intelligence" yet. Most 'AI' research has veered into machine learning, which is basically applied statistics. While these algorithms can solve many constrained problems quite well (this field powers much of Google search), it's tough to frame a complex problem like a 3D FPS AI in a simple statistical framework. Even biologically inspired AIs like neural networks are designed to solve highly constrained problems. In short, I don't think it's possible to even design a non-brute-force AI, regardless of computational power, with what we currently know.
One approach that has been more successful and easier to integrate than machine learning is Monte Carlo-based systems. That's what the latest Go AIs use, and they are making big strides.
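The core idea is simple: estimate how good a move is by running many random games from the resulting position and averaging the outcomes. A minimal pure-Monte-Carlo sketch (real Go programs layer MCTS/UCT on top of this; the `GameState` interface here, with `legal_moves`, `apply`, `is_terminal`, `winner`, and `to_move`, is an assumption):

```python
import random

def random_playout(state, player):
    """Play random legal moves until the game ends; 1 if `player` won, else 0."""
    while not state.is_terminal():
        state = state.apply(random.choice(state.legal_moves()))
    return 1 if state.winner() == player else 0

def best_move(state, playouts_per_move=200):
    """Pure Monte Carlo evaluation: pick the move with the best average playout result."""
    me = state.to_move                      # the player choosing a move at the root
    scores = {}
    for move in state.legal_moves():
        child = state.apply(move)           # assumed to return a new state
        scores[move] = sum(random_playout(child, me) for _ in range(playouts_per_move))
    return max(scores, key=scores.get)
```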
[quote]Some of the AI neural nets for the Brink of War are only partially-trained; the AI performance in big games is way too slow and takes weeks to train. I'll need to fix that both so that players won't have to wait and to improve training speed.[/quote]
For "true" AI in games, check out Steve Grand's work:
It's surprisingly brutal, and even against good players it wins a good proportion of games. It does, though, show the downside of the machine learning / neural net approach (it uses a trained neural net): it can take a very long time to train the network, and when it exhibits unexpected behavior it's almost impossible to tell why.
For those interested, all the source for the AI is on Keldon's website (linked above) - the only thing missing is the art for the cards at the publisher's request.
I hear you. I have a similar background to yours, and I was hugely disappointed when I broke into the game industry two years ago. But I don't agree with you that a statistical approach is impossible. There are plenty of opportunities; the reality is that game designers don't want it to happen.
The main problem I have found in this regard is that designers love to have control over the behaviour of the game, and I believe they don't even want to understand how a machine learning AI could work. I even discussed my master's thesis (an SVM-based controller for enemies) with a veteran RTS game designer, and his response was heartbreaking: he just told me that game AI was good enough and that "people" wouldn't care about a better AI (after that he just turned around and walked away).
I am still optimistic that it will eventually happen. But I don't have the patience to keep fighting for it any more. So hopefully a young hacker reading this will change that :)
a veteran RTS game designer, and his response was heartbreaking: he just told me that game AI was good enough and that "people" wouldn't care about a better AI
This makes me sad. Most RTS games have such terrible AI that it often pains me to play them (and I love RTS games). Yes, everything is moving towards online multiplayer, but I can't be the only person who would pay extra for a good single-player experience. I often dislike the multiplayer experience, and even though I'm playing a lot of online games at the moment (Path of Exile, World of Tanks, League of Legends...), I usually prefer single-player games, and for that good AI is very important to me.
I love playing against humans, but like you, sometimes I want to take things at my own pace, and that's a very different game from the one I want to play against an AI. And some of my friends hate playing against humans: the games are too fast-paced, and they hate the shit-talk you sometimes get.
I remember reading about THQ lamenting the decline of RTSes, as I watched RTSes from the two remaining greats move from base building to quick mini-troop production games. The trouble that people like THQ and Blizzard don't seem to realise is that although this game's sales do great and people play lots of online matches, it's the next one that will suffer the decline in sales, as casual players abandon the increasingly online-optimized experience.
I want hordes that aren't totally brain-dead, so that each game is different. They need to start making horde modes that are actually varied and interesting, unlike the predictable drivel that DoW2 and SC2 were. The AI in those games was stupid, predictable and broken. It started out fun as you raced to build your base in time, but after a few games it wasn't fun at all. You could set a stopwatch by when the computer would try to rush you.
I didn't work on DoW2 but I worked on CoH. The core AI engine was similar. CoH added features to improve group pathfinding and pathfinding for when terrain changed, etc. I don't remember all of the details since this was 5 or 6 years ago.
But the fundamental problem with AI is not building the AI. The problem is tuning the AI. You end up with a terribly tuned game if you give designers too many knobs or if your knobs look like a helicopter cockpit.
AI War is an excellent game and the AI does some surprisingly smart things sometimes (not to mention that you can have tens of thousands of units in a game).
But AI War is also a very different gameplay experience from most other RTS games, so while it's a good game in its own right and does have good AI, it may leave you disappointed if you're looking for a more traditional experience.
Not to mention that it's an insanely hard game, but I guess that's the price you pay for "good" AI!
>I even discussed my master's thesis (an SVM-based controller for enemies) with a veteran RTS game designer, and his response was heartbreaking: he just told me that game AI was good enough and that "people" wouldn't care about a better AI (after that he just turned around and walked away)
The game industry is broader than ever. Supreme Commander uses neural nets for its RTS AI:
The last time I played Supreme Commander 2 (about a year ago), it had the most brain-dead AI I've seen in an RTS in a while. The single-player skirmish (and I normally prefer skirmishes to campaigns) was therefore unsatisfying and frustrating to play due to the shockingly bad AI.
>The last time I played Supreme Commander 2 (about a year ago), it had the most brain-dead AI I've seen in an RTS in a while. The single-player skirmish (and I normally prefer skirmishes to campaigns) was therefore unsatisfying and frustrating to play due to the shockingly bad AI.
>Maybe they've updated and improved it?
Not an expert, but I believe it started off as a mod, and eventually the modder worked with the devs to integrate it into later patches in the game.
> What benefit would your machine learning AI bring to a RTS game?
Currently if I understand correctly, the traditional approach is to define a bunch of actions and figure out a realistic way to choose what action to perform, tuning a state machine carefully to make sure the character is realistic as well as beatable. I can imagine this takes a lot of work.
I think a huge benefit of machine learning could be to change the nature of the interface that the character designer has access to: instead of designing the character's internal states, probably relying on collaboration with a programmer, a designer could be tuning reward functions, which more naturally express the character's relationship with its environment.
In other words, the designer can now treat the character states as a black-box model, and instead worry directly about its input-output behavioural relationship, something I think an artist might have better intuition for compared to dealing with explicit state machines.
Being able to do design by means of tuning rewards and actions could help decouple design from programming, which I think would be hugely beneficial for industry.
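To make the contrast concrete, here is roughly the kind of tuning surface I have in mind (a hypothetical sketch; the event names and weights are invented, and some learning algorithm is assumed to turn the rewards into behaviour):

```python
# Hypothetical designer-facing tuning surface: no internal states, just rewards.
# A learner (e.g. reinforcement learning) turns these numbers into behaviour.
REWARDS = {
    "damage_dealt_to_player": +1.0,   # per point of damage
    "damage_taken":           -0.5,
    "time_spent_in_cover":    +0.1,   # per second; encourages cautious play
    "friendly_fire":          -5.0,
    "death":                  -10.0,
}

def reward(events):
    """Score one tick of gameplay from a dict of observed event counts."""
    return sum(REWARDS.get(name, 0.0) * amount for name, amount in events.items())

# A designer making the enemy more aggressive just edits numbers, no programmer needed:
# REWARDS["damage_dealt_to_player"] = +2.0
# REWARDS["time_spent_in_cover"] = 0.0
```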
> Currently if I understand correctly, the traditional approach is to define a bunch of actions and figure out a realistic way to choose what action to perform, tuning a state machine carefully to make sure the character is realistic as well as beatable. I can imagine this takes a lot of work.
The aim is not to produce a realistic response, but a fun one. It's much, much easier to produce a response that isn't fun than one that is, and this is the problem. Often a complex, systemic AI needs much more tuning to be fun than a simpler, more authored AI. Fun is an emotional response, so it's hard to define systemically.
> In other words, the designer can now treat the character states as a black-box model, and instead worry directly about its input-output behavioural relationship, something I think an artist might have better intuition for compared to dealing with explicit state machines.
Game designers are not the same as graphic designers or artists; they work exclusively in logic. Flow charts, decision trees and giant spreadsheets are standard tools of the trade.
However, there's certainly scope for taking the game designer's logic to a higher, more abstract level. Being able to remove some of the micro-management would be useful. I'm not sure that machine learning helps there, though; it seems more of a planning problem.
As a big fan of the Age of Empires series I have to disagree: after a while the game (even on the hardest mode) becomes predictable, and after that the solo mode is pretty much done. That doesn't happen at a LAN party, because humans evolve their strategies as their opponents get better, and that extends the life cycle of the game by orders of magnitude. That would be a very important economic incentive for an online game.
The thing is, some research has to be done for this, and the game industry is not healthy enough to support research.
> When people want AI in video games they want the AI to be truly smart.
Yes! This statement made me think immediately of Team Fortress 2 and how impressive talented spy-players are. They're so impressive that I can be in awe of a good player whether he's my ally or my opponent. Of course, the latter also comes with some frustration.
For those unfamiliar, the spy in TF2 has the ability to cloak (turn invisible) for a few seconds as well as indefinitely disguise himself as a member of the enemy team. Subtle cues give away a disguised spy--he can't run 'through' team members of the same color like true allies can; he un-disguises if he attacks. His deadliest attack is the backstab--knife someone in the back to instantly kill.
What makes good spy players so impressive is that they've anticipated how other people behave and have thought of effective countermeasures.
A trollish, albeit clever, example involves a spy spraying a provocative out-of-game image on a wall near a corner. He cloaks when an enemy approaches and waits by the corner. Most new players will pause when running past the image to have a closer look. The spy then uncloaks and backstabs the player.
Pre-programming this sort of behavior makes it quickly predictable, unless the AI has enough variety in its 'bag of tricks' that it would take a long time to learn it all...just like playing against a skilled opponent for a long time. I'd love to see the day when game AI could discover such behaviors on its own, searching for new ones when old techniques become too common.
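You can fake part of that today just by tracking how often each trick has been used and down-weighting stale ones, so the AI drifts toward whatever the player hasn't seen lately (a toy sketch; the trick names and decay rule are made up):

```python
import random

class TrickBag:
    """Pick tricks with probability that decays the more the player has seen them."""
    def __init__(self, tricks):
        self.seen = {t: 0 for t in tricks}

    def pick(self):
        # Weight each trick by 1 / (1 + times the player has encountered it).
        weights = [1.0 / (1 + n) for n in self.seen.values()]
        trick = random.choices(list(self.seen), weights=weights, k=1)[0]
        self.seen[trick] += 1
        return trick

spy = TrickBag(["corner_spray_bait", "disguise_as_medic", "cloak_behind_cart", "stair_drop_stab"])
print(spy.pick())   # fresh tricks come up more often than overused ones
```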
But there are middle grounds between machine learning and simple state-machine approaches. Probabilistic state machines and online learning (especially reinforcement learning) sound to me like they would be ideal in a game environment.
You want the character to act realistically according to his position in the world relative to the player? Define his observations, define his "reward", define a set of possible actions and state transitions, and let him go! I'm curious whether this approach is starting to be used more in game AI; it seems like a natural fit to me. Maybe some pre-training is needed so that it doesn't act randomly at first.
However, the idea of handicapping the AI by actually reducing his observations or his set of possible actions seems like a much more natural approach than more explicit handicapping methods.
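Roughly, that framing looks like this (a minimal tabular Q-learning sketch; the `env` interface and the handicap knob are assumptions, not anything from a shipping engine):

```python
import random
from collections import defaultdict

def train(env, actions, episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1,
          allowed_actions=None):
    """Tabular Q-learning over designer-defined observations, actions and rewards.
    Handicap the agent by shrinking `allowed_actions`, or by making env.observe()
    return a coarser observation. `env` is assumed to expose reset() -> obs and
    step(action) -> (next_obs, reward, done)."""
    if allowed_actions is not None:
        actions = [a for a in actions if a in allowed_actions]
    Q = defaultdict(float)                          # (observation, action) -> value

    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:           # explore
                a = random.choice(actions)
            else:                                   # exploit current estimates
                a = max(actions, key=lambda x: Q[(obs, x)])
            next_obs, reward, done = env.step(a)    # designer-defined reward
            best_next = 0.0 if done else max(Q[(next_obs, x)] for x in actions)
            Q[(obs, a)] += alpha * (reward + gamma * best_next - Q[(obs, a)])
            obs = next_obs
    return Q
```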
Well, one could also say that human intelligence is applied statistics, subject to biases. When you look at how we make decisions, you'll see that we refer either to past data on the subject or to data on a related subject. Our past data can contain incorrect information and we'll still use it, since we don't know it's incorrect. Right up until someone comes up to us and says, "It's common sense, silly: you don't adjust the side mirrors to view the back of your car, but to give a continuous image of your back and sides along with your rear-view mirror. It's common sense."
Humans are a very good example of machine learning and "AI". What we learn in our school syllabus would probably have been university level a few hundred years ago. That is to say, if you consider the human as a single state machine, it (we?) is/are getting more intelligent with the passage of time.
Pardon my cynicism about humanity, but I generally think that our intelligence is merely logical inference over past empirical data, mixed with a healthy dose of confirmation bias. You could also add a bit of True Randomness to account for irrational behaviour, along with a bunch of other biases.
There's very little logic evidenced in human intelligence. What's there is a tangled web of heuristics that gets us to a good-enough approximation of a reasonable action to keep us from killing ourselves (usually), while bringing calculation time way down compared to what it would take to explicitly determine the rational action.
Irrational behavior is the norm. It's explicitly rational behavior that is artificial, and we choose when to mentally crunch the numbers of rationality based on our good-enough old heuristics.
I know the video is confusing, since it was intended as supporting material rather than something self-explanatory. But I still think it shows potential for how we could develop self-adapting game characters using machine learning techniques.
It seems to me that 3D game AI is like low-latency trading: it's not about sophistication in approach, so much as fast execution and knowing how the system (e.g. the exchange or the game world) and tools work on a microscopic level.
I think one of the issues in game AI is that the target for commercial work is "good enough", not platonic solutions to the underlying problem. I'm working on AI for Ambition (using vanilla neural nets and backprop, to start). I'm not actually creating an "intelligence". I'm creating something good enough to pose a challenge (e.g. not make obvious stupid plays that ruin the feel). Eventually, I'd like to have agents that give me more understanding of the underlying structure of the game, but for now, something shippable is the target. With games we don't want a real human intelligence (humans get bored and leave, a few cheat) but rather a reliable player at a customizable skill level.
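To give a flavour of the "customizable skill level" part (this is not Ambition's actual code, and the evaluator is a stand-in for whatever net or heuristic scores a move): you can put a temperature on the move scores so a low-skill agent plays plausibly but sloppily.

```python
import math
import random

def choose_move(moves, evaluate, skill=1.0):
    """Sample a move from a softmax over evaluator scores.
    High `skill` -> near-greedy play; low `skill` -> sloppier, more human-feeling play."""
    scores = [evaluate(m) for m in moves]
    temperature = 1.0 / max(skill, 1e-6)
    top = max(scores)                                        # subtract max for numerical stability
    exps = [math.exp((s - top) / temperature) for s in scores]
    return random.choices(moves, weights=exps, k=1)[0]
```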
The single biggest (and maybe the only) difference between Game AI and Traditional AI is the optimisation function. TAI optimises for correctness, followed by efficiency, and GAI optimises for efficiency, followed by fun. The inapplicability of TAI solutions to GAI problems is because we (the games industry) don't have any kind of theoretical framework for even measuring fun, let alone optimising for it.
The state of the art is essentially a handful of ad-hoc models of how people experience games, how they improve and how they have fun (and why they stop having fun). Coupled with that, you have an enormous range of player skill - the lead designer of World of Warcraft estimated that the capability of player groups varied over several orders of magnitude (and he's a former academic professor, so probably speaking literally).
Someone in another reply said that GAI is roughly a restricted Turing test. That seems like a reasonable suggestion, but if it's true, then applying TAI frameworks to solve GAI issues is roughly as hard as trying to create a non-Turing-test-passing AI that can write a Turing-test-passing AI. It seems intractably difficult with the tools that we have available to us right now.
"It seems intractably difficult with the tools that we have available to us right now."
I think that's actually the problem, rather than the optimization function. There is a vicious circle where lack of tools implies less interest, which implies less research, which implies lack of tools.
I am very reluctant to think that TAI's optimization function would make a difference in terms of the usability of its techniques for GAI problems. I agree that the fun factor is not quantifiable with current knowledge, but that doesn't mean there is no common ground between them. For example, it is not the first time pathfinding has been approached using ML:
It's a lot more important for game character AI to look real than to be real. It's easy to forget that the characters are there to support the fun of the game, not to be fun to write or to support the ego of the author. Incidentally, I thought the article was worth it for the mention of Infinite Mario and Galactic Arms Race alone - I had not heard of those projects before.
There was even an Infinite Mario AI contest in which you had to write a player which gets through as many levels of increasing difficulty as possible. The winning solution? A* of course: http://www.youtube.com/watch?v=DlkMs4ZHHr8
(I had an also-ran entry in this contest that took 3rd place, but my solution wasn't significantly different from Robin's -- his was able to anticipate bullets firing but mine wasn't) Writeup: http://julian.togelius.com/mariocompetition2009/GIC2009Compe...
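For anyone curious what that looks like in the abstract, it's just A* over simulated future game states, with the heuristic rewarding progress toward the level end (a generic sketch, not Robin's entry; `successors` and `heuristic` are placeholder callbacks, and states must be hashable):

```python
import heapq
import itertools

def a_star(start, is_goal, successors, heuristic):
    """Generic A*: `successors(state)` yields (next_state, step_cost) pairs,
    `heuristic(state)` estimates remaining cost (e.g. frames left to reach the level end)."""
    counter = itertools.count()              # tie-breaker so heapq never compares states
    frontier = [(heuristic(start), next(counter), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if is_goal(state):
            return path                      # sequence of states from start to goal
        for nxt, step in successors(state):
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(nxt),
                                          next(counter), new_cost, nxt, path + [nxt]))
    return None                              # no path found
```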
It depends on your definition of fun. It can be hilarious!
I would guess that all of these bugs were known by the developer at the time of ship and not seen to be important enough to 'fix'. Bug #4 may even have been an authored response. As a player, that's exactly what I want to see happen when I throw in a flaming barrel.
If you're interested in the intersection of game AI and 'real' AI, you might also want to check out NERO and all the work associated with it that has been done over the years: http://nn.cs.utexas.edu/?nero . One of the original principals on NERO, Ken Stanley, went on to a professorship at the University of Central Florida, where his team produced Galactic Arms Race. They have a social game in the works called Petalz: http://petalzgame.com .
It's like the Turing Test. You only have to be convinced that a human is on the other end, the AI on the other end doesn't actually have to think like a human does. It's a black box. How it does its thing is not important to the test.
A nice article, but I believe it should be mentioned that the main difference between game AIs and traditional ones is that the game mechanics are known. This means there is a lot of low-hanging fruit for machine learning algorithms, like perfect shooting (or, more generally, exploiting the fact that an AI can usually control the game better than a human, since it doesn't need to use an input device). Because of this, it is actually necessary to tweak the AI so that the player doesn't feel cheated.
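Aiming is the classic case: the AI knows the target's exact position, so you deliberately add error and reaction delay or every shot feels like an aimbot (a hypothetical sketch; the numbers are made up):

```python
import random

def humanized_aim(true_pos, distance, skill=0.5):
    """Perturb a perfect aim point so the shot feels fair.
    `skill` in [0, 1]: 1.0 is near-perfect, 0.0 is wildly inaccurate."""
    spread = (1.0 - skill) * 0.05 * distance        # error grows with range
    x, y, z = true_pos
    return (x + random.gauss(0, spread),
            y + random.gauss(0, spread),
            z + random.gauss(0, spread))

def reaction_delay_ms(skill=0.5):
    """Human-like reaction time instead of instant target acquisition."""
    return max(0.0, random.gauss(350 - 200 * skill, 40))
```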
> Because of this, it is actually necessary to tweak the AI so that the player doesn't feel cheated.
Unless you don't, in which case you end up with a Perfect Play AI, as happened in Mortal Kombat:
> Mortal Kombat is famous for this; this is why the trope used to be called MK Walker: an AI opponent on this mode would simply block or dodge any attack thrown by the player with inhuman frame precision and beat the player mercilessly, giving the impression that the AI was walking over the player.
> "Cheat wherever you can. A.I.s are handicapped. They need to cheat from time to time if they're going to close the gap... Never get caught cheating. Nothing ruins the illusion of a good A.I. like seeing how they're cheating."
In single player games you have NPCs (non-player characters) and in multiplayer games you have bots (again, computer controlled characters). For instance, Team Fortress 2 has bots and Half-Life 2 has NPCs.
The difference is, bots behave like and mimic actual human beings. They are bound by the same rules, and have the same controls and range of motion as human players. It's hard to tell the difference at times, though granted they're far from perfect.
NPCs, on the other hand, seem to be the "dumb" AI the article is referencing. They are intentionally made this way to serve a simple, singular purpose: they are either scripted for dialogue or just peppered throughout the map as meat puppets.
Why is it that we aren't seeing the bot approach to AI implemented in single-player games? Why aren't we facing opponents who have their own missions/agendas in single-player games, who are bound by the same controls/restrictions/exploits as yourself?
The current AI approach is to place enemies in hiding positions who want nothing more than to fight you to the death, without any purpose or reason.
I want AI that wants to do something, like another player would, that isn't concerned with me in particular, that has its own goals/rewards instead of just fighting me or spouting dialogue at me.
> Why is it that we aren't seeing the bot approach to AI implemented in single-player games? Why aren't we facing opponents who have their own missions/agendas in single-player games, who are bound by the same controls/restrictions/exploits as yourself?
It is hard to put difficulty curves on that kind of AI, and it is often implemented with full-scale path searching to find objectives, so in games with larger maps it would either need to be blind to things in the distance or consume excessive processing power.
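One common compromise is exactly that: cull what the agent can perceive and put a per-frame budget on expensive path queries (a toy sketch; the radius, budget, and entity interface are invented):

```python
import math

def visible_entities(agent, entities, view_radius=60.0):
    """Cheap perception cull: the agent is simply blind beyond `view_radius`."""
    return [e for e in entities if math.dist(agent.pos, e.pos) <= view_radius]

class PathBudget:
    """Cap expensive path searches per frame; agents reuse stale paths until refreshed."""
    def __init__(self, max_queries_per_frame=4):
        self.max_queries = max_queries_per_frame
        self.used = 0

    def new_frame(self):
        self.used = 0

    def request(self, find_path, start, goal, cached_path):
        if self.used >= self.max_queries:
            return cached_path          # keep following the old path this frame
        self.used += 1
        return find_path(start, goal)
```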
I remember the Skaarj in the original Unreal would hunt the Nali until you came around, and then they would attack you. Their behaviors were amazing at the time (and honestly, they're still better than 99% of game AI): they would dodge, feign death, hide in cover, and act as packs when in groups to fight you.
They were also distinctly harder at the time than almost any other AI opponent. So much so that I remember a tremendous amount of complaining that their AI was too good.
> I want AI that wants to do something, like another player would, that isn't concerned with me in particular, that has its own goals/rewards instead of just fighting me or spouting dialogue at me.
The problem with this is that not enough people would pay for the complex AI. Games are about profit, and sadly, making sophisticated artificial worlds of intelligent bots doesn't bring home the bacon. It is absolutely possible, especially in an MMO setting, since you can just throw more CPU cores at the complex logic for whatever fancy AI your NPCs are using, but nobody seems interested.
The last in-office job I had (12+ years ago in San Diego, before my wife and I moved to the mountains in Central Arizona) was doing game AI for Nintendo and Disney.
I loved it! The only reason I left was that we bought a house and had been planning to move.
I only work remotely now, but if anyone ever wants a technical partner for a small indie game, contact me. I have no skills as an artist; I write code.
I've just had my first course in AI, and being interested in games, I was curious to see how the reasonably specific, theoretical techniques we were taught could fit into game design. This was a reasonably interesting example, but I think I'd be more interested in some specific applications, although I appreciate these might be jealously guarded secrets...
You might want to check out the "AI Game Programming Wisdom" series. I think at this point, there are four of them. "Game Programming Gems" is another series worth checking out (I think there are 8 at this point); each book normally has a section on game AI.
The books are a series of articles written by industry practitioners solving a problem they have run across. Normally the articles contain inline code, and the books come with a CD with executable examples and code to accompany the articles.