Gaming Literacy: A brief overview of game artificial intelligence

Video games have always struggled with the problem of artificial intelligence (AI). Every enemy in every game needs some kind of code that tells it what to do. Back in the days of the Atari 2600, enemies and NPCs were given short, preprogrammed lists of commands. Today, however, we have the ability to integrate actual learning AI into our games. Yet despite that advanced technology, game designers still struggle to create game AI that simulates player actions. Will we ever fully emulate human players? Should we?

Video game AI isn’t really AI in the traditional sense. When computer scientists talk about AI, they mean some sort of construct that can emulate the human brain’s ability to learn and solve problems. Video game AI isn’t looking to emulate the human mind, just human behavior. Since there are only so many actions you can take in a video game at any point in time, this is easier than it sounds.

Enemy AI

Not every piece of game AI needs to be complex. Simple game enemies, for example, usually only have to emulate one sort of human behavior: movement. If an enemy deals damage by coming in contact with the player, then its only goal is to make that contact.

Simple enemies, like Goombas in Super Mario Bros., have very rudimentary AI. In this case, their brain can do only one thing: move in a single direction. This makes them more like moving stage hazards than truly thinking opponents. More complex enemies can change direction when they reach the edge of a platform or walk a preprogrammed path. However, none of these enemies truly adapt to the player’s behavior.
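That entire “brain” fits in a handful of lines. Here’s a minimal Python sketch; the class, the per-frame update() call, and the hit_wall flag are all hypothetical stand-ins for whatever a real engine would provide:

```python
# A minimal sketch of a Goomba-style walker, assuming a hypothetical game
# loop that calls update() once per frame; all names here are illustrative.
class WalkerEnemy:
    def __init__(self, x, speed=1):
        self.x = x
        self.speed = speed
        self.direction = -1  # start walking left, as Goombas do

    def update(self, hit_wall):
        # The entire brain: walk in one direction, reverse on collision.
        if hit_wall:
            self.direction *= -1
        self.x += self.speed * self.direction
```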

Super Mario Bros. 3 introduced some new enemies that did adapt. These enemies were controlled by simple if-then trees. The Thwomp, for example, slammed to the ground when Mario got close enough to it on the horizontal axis. The Boo was even more complex: it moved only when Mario was facing away from it, and it didn’t follow a preprogrammed path. Instead, it always moved directly toward Mario’s position.
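Both behaviors boil down to a conditional or two. Something like the following captures the idea; the 48-pixel trigger distance and the facing_away_from() helper are illustrative assumptions, not values or names from the actual game:

```python
# Illustrative if-then triggers in the style of Super Mario Bros. 3 enemies.
def update_thwomp(thwomp, mario):
    # Slam when Mario gets close enough on the horizontal axis.
    if not thwomp.falling and abs(thwomp.x - mario.x) < 48:
        thwomp.falling = True

def update_boo(boo, mario, speed=1.0):
    # Boos advance only while Mario faces away, homing straight at him.
    if mario.facing_away_from(boo):
        dx, dy = mario.x - boo.x, mario.y - boo.y
        dist = max((dx * dx + dy * dy) ** 0.5, 1e-6)
        boo.x += speed * dx / dist
        boo.y += speed * dy / dist
```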

Most game AI doesn’t get any more complex than this. Look at the Bokoblins from The Legend of Zelda: Breath of the Wild. When idle, each follows a basic set of programming. Some are programmed to stay in place, some follow a preset path (like the aforementioned Mario enemies), and some will randomly choose a direction to walk, walk for a bit, stop, and then choose another direction. All of these patterns of behavior can easily be represented with a few lines of code, as the sketch after the next paragraph shows.

If the Bokoblins notice Link, however, their AI changes. Now their first goal is to find a weapon. If they have no weapon equipped, they will run to the nearest one and pick it up. Once armed, they will move into range and attack every few seconds. This holds true for both melee weapons and bows.
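Condensed into one decision function, the whole idle-plus-alert routine might look something like this. Every name, distance, and timing here is a guess at the structure, not Nintendo’s actual code:

```python
import random

# An illustrative sketch of the Bokoblin routine described above.
def bokoblin_action(sees_link, has_weapon, dist_to_link, cooldown, patrol_path):
    if not sees_link:
        # Idle behavior: follow a preset path, or wander/stand at random.
        if patrol_path:
            return "follow_path"
        return random.choice(["walk_north", "walk_south", "walk_east",
                              "walk_west", "stand_still"])
    if not has_weapon:
        return "run_to_nearest_weapon"            # first goal: arm yourself
    if dist_to_link > 2.0:
        return "move_into_range"                  # works for melee and bows
    return "attack" if cooldown == 0 else "wait"  # attack every few seconds
```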

This is a very simple set of instructions for an enemy in a very recent game, and when you start to look closely, you’ll see most enemies follow similar instructions. Enemies in shooters will usually seek out a weapon first, then seek out cover, and then attack. Enemies in action games will run through patterns of attacking and blocking. Even classic Mega Man bosses had basic patterns of movement and attack that could be slightly influenced by where the player was standing.

These sorts of enemy AI never evolved past simple instructions because they never needed to. Enemies in single-player games are meant to be challenges for the player, and the player gains enjoyment from overcoming those challenges. Thus, players need to be able to figure out an enemy’s behavior and exploit it. This is why even the most challenging bosses are still predictable. If these enemies had a form of human intelligence, they would be far less predictable, and thus less fun to play against. When creating this sort of AI, developers aren’t necessarily thinking about how to emulate human behavior. They are thinking about what pattern of behavior would be most fun to predict and overcome.

Competitive AI

Simple behavior patterns like the ones above might be fine for enemies and NPCs in action games, shooters, and platformers, but what about competitive games like fighting games, MOBAs, RTS games, and even board games like chess? Players enjoy these games when they can outthink the complex strategies of a skilled opponent. In this case, AI has to simulate that sort of opponent.

The simplest form of competitive AI is reactive AI. This sort of AI follows a very simple pattern of behavior when idling, but a far more complex decision tree when the player performs an action. For example, a fighting game AI can choose to block when the player attacks and to attack when the player uses an unsafe move.
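In code, a reactive AI is little more than a lookup from the player’s current action to a counter. A toy Python version, with hypothetical state flags, might look like this:

```python
# A toy version of a reactive fighting-game AI. The player-state strings
# are hypothetical flags a real engine would expose.
def reactive_ai(player_state):
    if player_state == "attacking":
        return "block"            # react to pressure by defending
    if player_state == "recovering_from_unsafe_move":
        return "punish"           # react to an opening by attacking
    return "idle_pattern"         # nothing to react to: fall back on the
                                  # (exploitable) idle routine
```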

The issue with reactive AI is that it depends on the player… well… doing something. Without player input, the AI is stuck in its idling pattern of behavior. In most circumstances, that pattern is designed to lure the player into taking some form of action so that the reactive AI can do its job. It’s essentially incapable of understanding when the player is doing nothing, and similarly incapable of handling anything its decision tree couldn’t predict.

This flaw in reactive AI is so glaring that it became a meme. “Luigi wins by doing absolutely nothing” showed how the most difficult AI in Smash Bros. couldn’t handle a player that just stood still. The AI didn’t know how to approach Luigi, constantly jumping back and forth and dodging, expecting an attack it could react to. It also couldn’t really understand the shape of the stage it was on, so it simply followed its idle pattern until it unceremoniously died, whereas any human player would have easily adapted and demolished the idle opponent.

This meme also showed off another type of competitive AI: probability-based AI. This is a type of AI that has a certain degree of randomness programmed into it. For example, if an opponent in a fighting game can choose from several different useful moves, it will pick one of them at random. This prevents the AI from becoming predictable.

Even simple enemy AI sometimes uses random probability to determine its actions. Usually you see this in bosses that have a chance to choose from different attack patterns each time you fight them.

Competitive AI can use randomness in much more complex ways. For example, it can give different actions or strategies different weights. You can make an AI more aggressive by giving it an 80 percent chance to approach its opponent and attack and a 20 percent chance to retreat.
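In Python, that aggressive 80/20 split is nearly a one-liner with random.choices(), which picks from a list according to the weights you hand it:

```python
import random

# A sketch of weighted, probability-based action selection using the
# 80/20 "aggressive" split from the paragraph above.
def choose_action():
    return random.choices(
        ["approach_and_attack", "retreat"],
        weights=[0.80, 0.20],  # tune these to make the AI bolder or meeker
    )[0]
```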

You can also simulate different skill levels with random probability. For example, a fighting game AI could be coded to block 100 percent of all sloppy moves, but that would make it nearly unbeatable. “Hard” AIs could block 90 percent of these moves or more, forcing players to think before pressing buttons, while “Easy” AIs could block 10 percent or fewer, allowing players to mash buttons and get a win.
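Difficulty settings can then just swap out the probability. A brief sketch, using the figures above:

```python
import random

# Difficulty scaling through block probability, per the paragraph above.
BLOCK_CHANCE = {"easy": 0.10, "hard": 0.90, "unbeatable": 1.00}

def respond_to_sloppy_move(difficulty):
    if random.random() < BLOCK_CHANCE[difficulty]:
        return "block"
    return "get_hit"
```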

Probability-based AI has also been used to simulate “learning” in some games. Smash Bros. 4 and Soul Calibur 2’s arcade release both have modes where you can “train” an AI. In reality, these AIs weren’t actually learning. They were just using simple probability-based decision trees that used player input to set the probabilities. Using a certain move enough times would increase the probability of the AI using that move. Similarly, blocking enough times would increase the probability that the AI would block. However, fully trained AIs were mostly indistinguishable from one another, showing that there wasn’t any real learning going on.
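Under the hood, that kind of “training” can be as simple as counting. Here’s an illustrative sketch, not either game’s actual system, where the AI’s move weights are just tallies of what it has watched the player do:

```python
import random
from collections import Counter

# Pseudo-learning: count the player's moves, replay the tallies as weights.
class TrainableFighter:
    def __init__(self):
        self.observed = Counter()

    def watch(self, player_move):
        self.observed[player_move] += 1   # the whole "learning" step

    def choose_move(self):
        if not self.observed:
            return "idle"
        moves, counts = zip(*self.observed.items())
        # Moves the player used more often become proportionally more likely.
        return random.choices(moves, weights=counts)[0]
```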

If you are interested in coding your own competitive AI, you can do so in M.U.G.E.N, a create-your-own fighting game engine. Full tutorials on its AI coding can be found online.

Actual Learning AI

That’s not to say that we haven’t developed game AI that can actually learn. In fact, video games have been used in several computer science experiments. They are fantastic training tools for AI since they provide a virtual environment with simple, predefined goals.

We don’t stick this AI inside games; we make this AI play our games. That might seem like a simple task. After all, we just talked about how developers routinely code behavior patterns into bots and enemies.

However, these actual learning AIs aren’t coded to do… anything, really; anything other than learn, that is. They are given a goal and raw video game data, and they have to determine for themselves how best to meet that goal. It’s approximately the same thing a player does, minus any menu or tutorial reading.

YouTuber and speedrunner SethBling did a few AI experiments with Super Mario World and Super Mario Kart. His first experiment, MarI/O, tasked an AI with getting to the end of a Super Mario World stage. The AI had a simple goal: get farther to the right.

At the start of its training, the AI is basically random. It will do nothing or press a button that doesn’t cause Mario to move to the right. After a certain number of frames without rightward movement, the simulation declares the run a failure and resets.

Eventually, the AI randomly presses right on the control pad, which makes Mario move to the right. As a result, the AI’s fitness level increases. Fitness is the measure the AI examines when determining whether it has performed well or not.

Seeing that pressing right earns it a reward, the next generation of AI will start the level by pressing right as well. It will mimic the previous generation’s behavior until it reaches the point where it died, then start randomly iterating until it makes more progress. If it stays stuck, it iterates more and more, changing up its inputs until it eventually advances. Doing this enough will eventually create an AI that can play through a whole level given no instructions other than “try to move right.”
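Here’s a drastically simplified sketch of that generational loop. Real MarI/O evolves neural networks using a technique called NEAT; this toy version just mutates a raw input sequence against a pretend one-dimensional level, but the fitness-driven iteration is the same idea:

```python
import random

ACTIONS = ["right", "jump_right", "nothing"]
PITS = {5, 11}   # toy obstacles: the run dies here unless the AI jumps
LENGTH = 20

def fitness(inputs):
    # Fitness = farthest x reached, mirroring MarI/O's "go right" goal.
    x = 0
    for action in inputs:
        if x in PITS and action != "jump_right":
            break                                  # "died": run is over
        if action in ("right", "jump_right"):
            x += 1
    return x

def evolve(generations=2000):
    best = [random.choice(ACTIONS) for _ in range(LENGTH)]
    for _ in range(generations):
        child = best[:]                            # mimic the last survivor...
        child[random.randrange(LENGTH)] = random.choice(ACTIONS)  # ...mutate
        if fitness(child) >= fitness(best):        # keep whatever got farther
            best = child
    return best, fitness(best)

print(evolve())  # after enough generations, fitness typically reaches 20
```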

This is an example of a neural network that develops through evolution. However, you can also train neural networks on data recorded from humans. That is what SethBling’s Mario Kart experiment, MariFlow, did. It trained on 15 hours of his own recorded gameplay footage and attempted to learn how to simulate his driving style. In this case, MariFlow wasn’t concerned with how well it was driving, just with whether or not it was driving like SethBling.
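MariFlow actually used a recurrent neural network over live game data, but the core idea, recording (state, input) pairs from a human and reproducing them, can be illustrated with something as crude as a lookup table. Everything below is a hypothetical toy, not SethBling’s implementation:

```python
from collections import Counter, defaultdict

# Toy imitation "learner": memorize which input the human used most often
# in each coarse game state, then replay it.
class ImitationDriver:
    def __init__(self):
        self.seen = defaultdict(Counter)

    def train(self, recordings):
        # recordings: (state, human_input) pairs pulled from gameplay footage
        for state, action in recordings:
            self.seen[state][action] += 1

    def act(self, state):
        if state in self.seen:
            return self.seen[state].most_common(1)[0][0]
        return "accelerate"   # arbitrary fallback for unseen states

# Hypothetical usage: states might be (track_segment, position_on_track).
driver = ImitationDriver()
driver.train([(("hairpin", "outside"), "steer_left"),
              (("straight", "center"), "accelerate")])
print(driver.act(("hairpin", "outside")))   # -> steer_left
```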

This is the same style of AI that we have seen attempt to write stories and music. It’s the same style of AI that wrote Harry Potter and the Portrait of What Looked Like a Large Pile of Ash after being trained on the Harry Potter series. It’s also the same type of AI that generated custom Hearthstone cards a few years ago.

When we task AI with playing video games, it sometimes figures out strategies that humans never thought of. In SethBling’s own MarI/O project, the AI figured out how to make a one-frame jump to stomp on a Koopa when it shouldn’t have had room.

However, one of the most recent examples of an AI exceeding human expectations came when researchers had an AI play Q*bert. The AI was told to maximize its score, on the assumption that doing so would eventually lead it to clear each stage. Instead, the AI found a glitch that continuously looped the end of the first stage, allowing it to rack up an effectively infinite score.

How Valuable is Realistic AI?

You may notice that these actual learning AIs don’t really simulate human play, and frankly, they never will. While neural networks can loosely imitate human learning, machine learning is still another beast entirely. We can create game constructs that learn, but they will always learn in ways that we can’t necessarily predict.

And this puts it in conflict with the purpose of game AI. Game AI is designed to be something we can predict. That very predictability is what makes us feel like we are solving a puzzle and triumphing over our enemies. Even the most sophisticated competitive AI is designed so that the player learns something from it. Our greatest challenge isn’t developing realistic game AI. Our greatest challenge is creating game AI that we want to go head to head against.