Why an AI for Outwitters is hard to do
04-05-2012, 07:44 AM
(This post was last modified: 05-05-2012 04:42 AM by Kamikaze28.)
Artificial Intelligence is not easy. You might think it is, because you are used to seeing it in so many places nowadays: your iPhone (Siri), endless computer and console games, and even TV (if you watched IBM's Watson take on leading Jeopardy! champions). You may have seen science fiction movies or read novels in which machines surpassed humans in their ability to think and act.
We are far away from the science fiction idea of AI, but board games, and games in general, have always been a very popular discipline in the field of AI research. Mostly because they are fun and interesting, but also because they offer a wide array of problems to tackle and solve. I would like to look at Outwitters from the vantage point of an AI creator to give you a little glimpse behind the scenes of AI. This will inevitably lead into some background on Artificial Intelligence, but fear not, brave readers: I will try my best to keep it simple and entertaining. An Artificial Intelligence is usually called an 'agent'. Agents basically do three things to achieve their goal:

1. they observe their environment,
2. they "think" about what to do, and
3. they act on the environment.
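The observe/"think"/act cycle above can be sketched in a few lines of code. This is a minimal illustration, not anything One Man Left has published: all class and method names (ToyEnv, visible_state, ReflexAgent, and so on) are hypothetical stand-ins, and the environment is deliberately trivial.

```python
# A minimal sketch of the observe/"think"/act agent cycle.
# All names here are hypothetical -- Outwitters exposes no such API.

class ToyEnv:
    """A trivially simple environment: a single counter."""
    def __init__(self):
        self.state = 0
    def visible_state(self):
        return self.state          # in this toy world, everything is observable
    def apply(self, action):
        self.state += action

class ReflexAgent:
    """'If you see this, then do that' -- skill level: lobotomized cockroach."""
    def observe(self, env):
        return env.visible_state()
    def decide(self, percept):
        return 1                   # always the same action, trivially beaten
    def act(self, env, action):
        env.apply(action)

def run(agent, env, turns):
    # The agent cycle: observe, "think" (decide), act.
    for _ in range(turns):
        percept = agent.observe(env)
        action = agent.decide(percept)
        agent.act(env, action)
    return env.state
```

Running `run(ReflexAgent(), ToyEnv(), 3)` returns 3: the agent takes the same action every turn, which is exactly why such reflex agents are trivial to defeat.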
Environments come in many shapes and flavors, but in the field of AI there is a small list of features that characterizes each environment exactly. For Outwitters, this list of features would look like this:

- partially observable: the fog of war hides most of the opponent's units,
- multi-agent and competitive: the players work against each other,
- deterministic: there are no dice and no randomness; every action has a fixed outcome,
- sequential: decisions made now affect all later turns,
- static: the board does not change while the agent is deliberating,
- discrete: there is a finite number of hexes, units and actions.
The Adorable runner moved there to destroy my Scallywag runner; even though I cannot see it after this turn, I know for certain that it must still be there (unless there is a Mobi involved). The Adorable heavy, on the other hand, could have moved after my runner was destroyed, but based on this observation I can estimate where it is likely going to be next turn. The agent has to integrate all of this information to be competitive, because any skilled player would do the same thing.

This already touches on one of the major problems for my AI: dealing with uncertainty. It would be easy to create a simple reflex-based agent ("if you see this, then do that", skill level: lobotomized cockroach), but these kinds of agents are trivial to defeat. What we need is an agent that can deal with the fog of war. To do this, the agent would have to understand the basic rules of the game as well as all the special units' abilities. There are mathematical models to handle uncertainty, but believe me when I tell you that they are a nightmare of their own to wrap your head around.

Similar to the situation above, the agent would discover new information during its own turn. By spawning or moving a unit, the fog of war changes and new enemy units may become visible. Uncertainty then becomes certainty, and the whole situation changes.

Next, we come to the "thinking" part. The reason I keep using quotation marks is that AIs do not think; they follow a predefined procedure to arrive at a specific action based on their observation and their own state, if they have one. Whether or not this could be called "thinking" is a different question entirely. How does an AI do this in the case of a game? Quite straightforwardly: it simulates all possible moves that it could make, all possible moves its opponent could make in response, and so forth, and then picks the one turn (set of moves) that promises the highest chance of winning. Now we are presented with two problems:

1. the sheer number of possible states makes simulating every game to its very end infeasible, and
2. once the simulation has to be cut off early, the agent needs a way to judge how good an unfinished position is.
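The "it must still be there" reasoning above is what AI people call belief-state tracking: instead of one known enemy position, the agent maintains the set of hexes the hidden unit could possibly occupy, grows that set by the unit's movement range each turn, and shrinks it with every observation. The sketch below assumes a generic hex grid in axial coordinates and a made-up movement range; none of it is real Outwitters code.

```python
# Hypothetical belief-state tracking for ONE hidden enemy unit on a
# hex grid. Hexes are axial coordinates (q, r); the movement range
# and grid model are assumptions, not the actual Outwitters rules.

HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def neighbors(hex_):
    q, r = hex_
    return [(q + dq, r + dr) for dq, dr in HEX_DIRS]

def expand_belief(belief, move_range):
    """Every hex the unit could reach from any hex it might occupy."""
    frontier = set(belief)
    for _ in range(move_range):
        frontier |= {n for h in frontier for n in neighbors(h)}
    return frontier

def update_belief(belief, visible_hexes, seen_at=None):
    """Collapse the belief with this turn's observation."""
    if seen_at is not None:
        return {seen_at}           # uncertainty becomes certainty
    return belief - visible_hexes  # it is in none of the hexes we can see
```

Starting from a single known hex, one expansion with range 1 yields that hex plus its six neighbors (7 candidates); spotting the unit collapses the set back to one hex, and clearing the fog over a hex without seeing it rules that hex out. A real agent would track one such set per hidden unit, and probabilistic versions (weighting hexes by likelihood) are exactly the "nightmare" mathematical models mentioned above.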
To give you an example: Deep Blue, the famous chess computer which defeated the reigning chess world champion Garry Kasparov in 1997, checked 200 million positions per second and simulated 12 moves ahead (in special cases even 40 moves ahead). Now, that is chess; Outwitters is a whole different story.

When thinking about game simulation, you can imagine a tree with the current state as the root/trunk and all possible states you can reach from the given state as branches, and so on. An important feature of this tree is the branching factor, that is to say, how many states can come out of a given state. For chess, this branching factor is around 35. This is due to the rules of chess: during your turn, you move exactly one piece. That is all you do in chess. In Outwitters, you not only move units, but also use their abilities or build new units. And worst of all (from the AI's point of view), you can make more than one move in a turn, or none at all. Similarly, the opponent can make very many moves in a turn or very few. This puts Outwitters' branching factor somewhere in the hundreds.

What is so bad about this is that the broader the tree, the fewer moves ahead you can simulate in the same amount of time. The fewer moves the agent can look ahead, the less sophisticated its long-term strategies will be, and therefore the easier it is to beat. If we taught Deep Blue to play Outwitters, thinking 12 moves in advance would be like 2 turns (assuming Deep Blue could cope with the higher branching factor). That would result in an AI of novice difficulty.

Last but not least, we have to talk about tuning the difficulty of an AI. If I did manage to solve all of the above problems, I would have a nearly unbeatable AI. How do you diminish its skill level? Throw in random bad moves? That would produce some quite hilarious results.
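The simulation scheme described above is, at its core, depth-limited minimax: expand the game tree, and when the depth budget runs out, fall back to an evaluation function that scores the unfinished position. Here is a sketch under that assumption; `Node` is a toy stand-in for a real game state, not actual Outwitters code, and the `evaluate` parameter is exactly the knob the next paragraph talks about.

```python
# Depth-limited minimax over an abstract game tree -- a sketch of the
# "simulate all moves and counter-moves" idea, not real Outwitters code.

class Node:
    """Toy game state: a value for leaves, child states otherwise."""
    def __init__(self, value=None, children=()):
        self.value = value
        self._children = list(children)
    def is_terminal(self):
        return not self._children
    def children(self):
        return self._children

def minimax(state, depth, evaluate, maximizing):
    # Cut off when out of depth budget or at a finished game,
    # and let the evaluation function judge the position.
    if depth == 0 or state.is_terminal():
        return evaluate(state)
    values = [minimax(c, depth - 1, evaluate, not maximizing)
              for c in state.children()]
    return max(values) if maximizing else min(values)

# Why the branching factor b hurts: the tree has roughly b**d leaves at
# depth d. At b = 35 (chess), 35**4 is about 1.5 million positions;
# at b = 200 (an Outwitters-like estimate), 200**4 is about 1.6 billion.
```

On a tiny two-level tree with leaf values (3, 5) under one branch and (2, 9) under the other, the maximizing root scores 3: the opponent minimizes within each branch (3 and 2), and the agent picks the better branch. Tuning difficulty then means swapping `evaluate` for a cruder scoring function, as the post goes on to suggest, rather than injecting random bad moves.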
The answer would be to use different evaluation functions in the simulations, and given how much work that would have been the first time around ... yeah, go figure.

To sum up: I am not saying it is impossible to create an AI for Outwitters, but I am very convinced that it would be a very difficult task to accomplish. It would also take a very long time and a ton of work. I hope you enjoyed this read, and if you have any questions, feel free to post them and I will try my best to answer them.

I am in no way affiliated with or authorized by One Man Left Studios, LLC. Any information on Outwitters I present is founded on personal experience, public knowledge or the Outwitters Beta Test.