What is Tuebor?

Well, Tuebor is a computer game made by Strength in Numb3rs Studios about a dystopian future, so far so good. But this game doesn't fall into any of your typical game categories, because it takes aspects of many games you might have played and enjoyed and combines them into a very new experience. In brief, it is a collaborative multiplayer action game in which you control characters fighting an opposing force using Guns, Psi, Blasters, Lasers, Fire, High-Energy Anythings, Nanites, and everything else a great Sci-Fi game deserves. But what does that have to do with me and evolving AI?

I just got a grant from SiN studios (yeah!) to integrate evolving brain technology into this game, and I couldn't be more excited. Enemy characters will not only have Markov Brains, which by itself already makes a difference, they will also use Darwinian evolution to adapt their behavior to their opponents. To fully appreciate this, we first have to take a look at conventional game AI, which falls into three classes: the algorithm, the dance, and the decision tree.

  • You have all seen "the algorithm" in action when Deep Blue beat Kasparov at chess. It is a piece of software that crunches the numbers and makes the perfect move. Here the challenge is to think further and faster than the AI. This approach is typically deployed in strategy games, and I know of no good example where it would make sense in an action game like Tuebor.
  • The "dance" describes a choreography of actions, deployed by the AI, that follows a strict order of events. World of Warcraft (WoW) does this a lot, and I just saw my kids play Skylanders, which does exactly the same. The big advantage is that the player experience becomes consistent and you can very easily balance the game. It also means that once you have figured out what the AI does, it is only a matter of doing the right thing at the right time. One effect is that you could watch a video of a boss battle, and once you have seen what the enemy will do, you can simply repeat the successful player behavior you watched to overcome that enemy. I find this boring, and it is one of the reasons why I don't dance the WoW tango…
  • The decision tree is pretty much the non plus ultra for many action games. Depending on the current situation, a decision tree decides whether to perform an action or to consider something else. This typically results in rather machine-like behavior, where a guard on watch might walk back and forth and, once it spots the enemy, engage (see the sketch after this list). Obviously, you can make this more complicated, and good game AI designers will come up with sophisticated trees, but what remains is the "state-to-state" nature, which results in rather predictable behavior. In most cases you want to present the player with a situation that can be overcome once the behavior of the opponent becomes predictable. These behavior trees are much more flexible than the "dance", but are robot-like enough that many players still prefer human opponents.
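To make that "state-to-state" nature concrete, here is a minimal sketch of a decision-tree-driven guard. This is purely my own illustration, not code from Tuebor or any particular game: the guard patrols back and forth, switches to engaging as soon as an enemy is visible, and the same input always produces the same action.

```cpp
#include <iostream>

// Minimal guard AI driven by a hard-coded decision tree.
// Purely illustrative -- not code from Tuebor or any shipping game.
enum class GuardState { Patrol, Engage };

struct Guard {
    GuardState state = GuardState::Patrol;
    int position = 0;
    int direction = 1;  // patrol back and forth between 0 and 5

    void update(bool enemyVisible, int enemyPosition) {
        // The "tree": one branch per observable condition,
        // always producing the same action for the same input.
        if (enemyVisible) {
            state = GuardState::Engage;
            position += (enemyPosition > position) ? 1 : -1;  // close in on the enemy
        } else {
            state = GuardState::Patrol;
            position += direction;
            if (position == 0 || position == 5) direction = -direction;  // turn around
        }
    }
};

int main() {
    Guard guard;
    for (int tick = 0; tick < 10; ++tick) {
        bool enemySpotted = (tick >= 6);  // the enemy shows up at tick 6
        guard.update(enemySpotted, /*enemyPosition=*/8);
        std::cout << "tick " << tick << ": pos " << guard.position
                  << (guard.state == GuardState::Engage ? " (engaging)\n" : " (patrolling)\n");
    }
}
```

Once a player has seen this guard a few times, its behavior holds no surprises, which is exactly the limitation described above.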

Markov Brains are different in two key aspects. They are stochastic, which makes them as unpredictable as a good human opponent. And they can form memory and representations, and use them to make better-informed decisions. They don't execute a stereotyped list of actions, but can recognize patterns and act accordingly. In their simplest form they can resemble a decision tree, with the one big exception that they can "jump" freely from one action to another, which makes them much more flexible and, hopefully, much more dynamic opponents. That by itself should give you an experience you have never had with any other game.
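To give a rough idea of how this differs under the hood, here is a deliberately simplified sketch of a single probabilistic Markov Brain gate. The real implementation wires many such gates between sensor, hidden, and motor states; the numbers and structure below are my own toy assumptions, but they show the two ingredients that matter: the decision is sampled from probabilities rather than fixed, and an internal state carried between updates acts as memory.

```cpp
#include <array>
#include <iostream>
#include <random>

// Simplified sketch of one probabilistic logic gate from a Markov Brain.
// Real Markov Brains wire many such gates between sensor, hidden (memory),
// and motor states; this is an illustration, not the actual implementation.
int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> coin(0.0, 1.0);

    // 2 input bits -> 4 possible input patterns; each entry gives P(output = 1).
    // Unlike a decision-tree branch, the same input can lead to different actions.
    std::array<double, 4> pFire = {0.1, 0.7, 0.4, 0.9};

    bool hidden = false;  // internal state carried across updates = memory
    for (int step = 0; step < 8; ++step) {
        bool sensor = (step % 3 == 0);                // toy sensor reading
        int inputPattern = (sensor << 1) | hidden;    // combine sensor + memory
        bool fire = coin(rng) < pFire[inputPattern];  // stochastic decision
        hidden = fire;                                // write the result back into memory
        std::cout << "step " << step << ": sensor=" << sensor
                  << " fire=" << fire << "\n";
    }
}
```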

Lastly, we incorporate evolution for two key reasons. First, we simply can't design these complex behaviors by hand, and therefore we let evolution sort out what is good and what is bad. Secondly, unlike a pure machine learning approach that optimizes for a single goal, we hope to allow for arms races with the players. We know that players communicate a lot and tell each other what worked and what didn't, and newfound exploits against AI opponents spread quickly thanks to social networks. This creates a very interesting situation for our evolving AI: something that was good now becomes bad, and evolution has to find a new solution. This in turn fixes the exploit by itself, and it also creates never-ending new challenges for the players.
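To give a flavor of what "letting evolution sort it out" means, here is a bare-bones evolutionary loop, again just my own toy example rather than the actual Tuebor pipeline. Genomes are plain numbers, fitness is closeness to a target that stands in for whatever currently beats the players, and halfway through the run the target changes, forcing the population to re-adapt, which is the arms-race idea in miniature.

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Toy evolutionary loop: genomes are just numbers, fitness is closeness
// to a "target" that stands in for whatever currently beats the players.
// Halfway through, the target changes (the players found an exploit), and
// the population has to re-adapt.
int main() {
    std::mt19937 rng(7);
    std::normal_distribution<double> mutation(0.0, 0.1);

    std::vector<double> population(50, 0.0);
    double target = 1.0;

    for (int generation = 0; generation < 200; ++generation) {
        if (generation == 100) target = -1.0;  // the players changed tactics

        // Fitness: negative distance to the target (higher is better).
        auto fitness = [&](double genome) { return -std::fabs(genome - target); };

        // Crude truncation selection: keep the best individual and refill
        // the population with mutated copies of it.
        double best = *std::max_element(population.begin(), population.end(),
            [&](double a, double b) { return fitness(a) < fitness(b); });
        for (double& genome : population) genome = best + mutation(rng);

        if (generation % 50 == 0 || generation == 199)
            std::cout << "generation " << generation << ": best genome " << best << "\n";
    }
}
```

In the game, of course, the genome encodes a Markov Brain and the fitness comes from how well that brain performs against actual player behavior, but the loop of evaluate, select, mutate, repeat is the same.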

Honestly, I can't wait to see this, and I am thrilled that we are working on it. One of the things that I, as a game developer, seek to evoke in a player is a rewarding new experience, and this game and its AI seem to satisfy that perfectly.

Cheers Arend

Arend Hintze