Talking 'Bout Game AIs

Steven sent over an interview Feedmag has with the lead AI programmer for Black & White. He talks about some of the creature/villager routines in the game, which is interesting in its own right, but also interesting in terms of how much the world of AI for games has changed in the last few years.
  • by Anonymous Coward
    pshaw. Perceptrons & decision trees are not revolutionary. When they're dropped into a simulated microworld they can & do adapt impressively. This is hardly cutting edge AI, though. Not to detract from B&W - it's hands down the best AI in a consumer game.
    Academia needs to make it more widely known to the software industry that stuff like this has been available. There's no way you could drop the code into a robot & have it do anything at all - the hard stuff is the vision, adaptation - stuff that is hardwired into the game. (in B&W the AI doesn't have to see/recognize a person, the internal game state hands it coordinates & tells it that it's item #5109 or whatever, that it's been x hours since it has eaten. the AI walks the decision tree & pulls the appropriate perceptron for the situation).

    It works well here, but be careful claiming this is anything bigger than excellent game AI using well-known techniques.
  • First, some use the term perceptron not just for Rosenblatt's original classifier, but for any of a class of multilayer feedforward neural networks (which use the perceptron as their basis).

    Second, decision trees are another form of classifier used in machine learning. As others have noted, Quinlan's ID3 and C4.5 algorithms are the most popular.

    --
    Marc A. Lepage (aka SEGV)
  • AAAI [aaai.org] has (or points to) lots of good introductory material on decision trees [aaai.org], and machine learning [aaai.org] in general.
  • Cutting edge graphics and killer AI always show up in the gaming industry before anywhere else.

    Hogwash. You just don't see it because you're not in the right place. I don't know about graphics, but good and practical AI techniques have been flourishing in logistics and data analysis, in manufacturing and distribution, and dozens of military applications, plus fraud detection, consumer incentive modeling, financial forecasting, and dozens of other areas that consumers don't directly care about.

  • There's no link in the Slashdot announcement nor in the "Feed" article.

    BTW, this tends to generalize... no links to the outside in articles, no easy way to find information sources other than the current site...

    dead-end servers.

  • What would a 100% increase on 1.0 be?
    What would a 50% increase on 1.0 be?
    What would a 50% decrease on 1.0 be?

    Now, what would a 250% increase on 1.0 be?
  • Perhaps this long juicy piece on Gamespot [gamespot.com] will satisfy your curiosity.

    mahlen

    Velilind's Laws of Experimentation:
    1. "If reproducibility may be a problem, conduct the test only once."
    2. "If a straight line fit is required, obtain only two data points."

  • I of the future... I come for you... Sarah Connor...
  • Anyone have any insights into this "decision tree learning" that Evans mentions? It seems to me to be one of those fuzzy terms that could refer to any of a dozen things.

    yeah, i know what you mean. in my area, decision trees usually refer to simple trees of hierarchical dependencies - in order to satisfy the goal represented by the parent node, you have to satisfy the goals of all child nodes, in order. it's a very nice way to represent scripted behaviors (such as how to go about finding food, engaging in combat, etc.).

    i don't know what he means by learning there, but i'd bet i'm not too far off in suspecting the 'learning' part is just tweaking the weights on subgoals, so that the approach you used the most before would be the first one to try next time...

    i personally found that whatever learning technique got used in that game, it has significant problems with reward assignment. for example, i see my creature standing around and growling, and its hunger is about to peak. i feed it something and reward it. the game responds - "from now on, your creature will eat more when it's tired." but no, that's not what i intended! i fed it because it was hungry!

    proper reward assignment is a huge problem in all learning algorithms - which is why it's especially surprising that they would prevent the player from knowing what exactly he's rewarding the creature for...

    Like "perceptrons" -- unless he's actually referring to the algorithm described in Marvin Minsky's book of the same name

    i think he was just referring to neural nets without hidden layers - those were the ones described in the book, and the name stuck, at least in colloquial usage.

    but i don't have any inside knowledge (such a bad pun! :) about the workings of b&w. i could be wrong. :)
  • Having loved the original Populous games and enjoyed games such as SimCity, Red Alert, Championship Manager etc., it is easy to see that the visual effects of a game attract the buyers and reviewers, but it is the cleverness of the game that will actually make you play it for longer than the first month or two.

    Having AI take a much more active role in games recently can only be a great thing. After a while, predictability in games makes them easier and less enjoyable to play. I will certainly be buying Black & White when I next visit the game store.
  • yeah... I know... I was just making fun of the article, not the lack of good AI in games. There are some really awesome game AIs out there right now.
  • If you want to play a game that doesn't cheat and at the same time will almost certainly beat you, try chess programs. They can't cheat because all the pieces and moves are visible, and the best of them has beaten Kasparov ;-).
  • Decision trees in the context of AI are inductive learning algorithms. They perform supervised classification.

    You provide them with a set of training examples where the class of each example is known, along with "features" that describe the examples. The algorithm then tries to determine how to distinguish the different types of examples, or classes. This is called supervised learning.

    There are numerous types of decision trees. The best-known ones find the best way to split the examples into separate groups using an entropy-based metric (such as information gain), usually using only one feature at a time. This splitting continues until all the data have been split into pure groups or it is not possible to differentiate between members of different classes. (This is a gross oversimplification that ignores problems of overfitting the training data, etc.)

    Once trained, the trees can be used to classify unlabeled data.

    They have some advantages over neural networks in that the initial structure and the number of training iterations do not need to be specified. However, they have a harder time representing class boundaries that are linear combinations of features (e.g. class 1 if x > y, class 2 otherwise).

    C4.5 and CART are the best-known decision tree algorithms. While there is a commercial version of C4.5 called C5.0 (www.rulequest.com), the C4.5 source code is available from http://www.cse.unsw.edu.au/~quinlan/.

    Tom Mitchell's book "Machine Learning" provides an excellent introduction to decision trees as well as other machine learning algorithms such as error backpropagation neural networks.

    I can't seem to find what looks like a good tutorial online quickly although I am sure that there must be one.
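    In lieu of a tutorial, here is a minimal sketch of the idea in Python (purely illustrative; the attributes, values and data below are invented, not anything from B&W). A trained tree is just a nest of questions, and classification is a walk from the root to a leaf; a learner like ID3/C4.5 is what builds this structure from labelled examples in the first place.

        # Illustrative only: a tiny decision tree represented as nested dicts.
        # A real learner (ID3/C4.5) would *build* this structure from labelled
        # training examples by repeatedly choosing the most informative split.

        tree = {
            "attribute": "hungry",            # the question asked at this node
            "branches": {
                "yes": {"attribute": "food_nearby",
                        "branches": {"yes": "eat", "no": "forage"}},
                "no": "wander",
            },
        }

        def classify(node, example):
            """Walk the tree until a leaf (a plain string) is reached."""
            while isinstance(node, dict):
                node = node["branches"][example[node["attribute"]]]
            return node

        print(classify(tree, {"hungry": "yes", "food_nearby": "no"}))  # -> forage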
  • Multi-layer perceptrons are not limited to linearly separable problems.
  • Not in some programming sweatshop at EA.

    I think a certain Peter Molyneux would be very offended to read that comment about his company.

    You know, a company where people come into work at 1pm, work until 5pm, go home for food, pub for fun, then back into work at midnight to work another few hours at the best time for coding!

  • By design time I meant the amount of time people spend working on the AI, as opposed to the CPU resources it eats up while the game is running. This includes testing/tweaking.

  • Of course they cheat. The only modern strat game I'm aware of that doesn't cheat is Europa Universalis, and the AI is disappointingly easy to beat.
  • The outputs for a perceptron must also be known. The difference between a perceptron and a backprop net, AFAIK, is that there are multiple layers in a backprop net, and therefore you need a more complex training algorithm to propagate the error backwards.

    But they both rely on error correction to learn.
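    For concreteness, the classic single-layer error-correction rule looks roughly like this in Python (a toy sketch, not anything from B&W; the AND example is just the standard linearly separable demo):

        # Toy perceptron learning rule: nudge the weights so the unit's output
        # moves toward the desired target whenever it gets an example wrong.

        def train_perceptron(samples, epochs=20, lr=0.1):
            """samples: list of (inputs, target) pairs with targets of 0 or 1."""
            n = len(samples[0][0])
            w, b = [0.0] * n, 0.0
            for _ in range(epochs):
                for x, target in samples:
                    out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                    err = target - out                      # error-correction signal
                    w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                    b += lr * err
            return w, b

        # AND is linearly separable, so a single perceptron can learn it.
        w, b = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])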
  • That could be (re Hebbian), but my gut feeling is that they have figured out a tight and complete set of outputs and have figured out a way to use the error signal of god feedback to adjust the weights.

    But you're probably right that a back prop net would require a more uniform spread of the input space to accurately capture what it needs to.

    I guess what I'd most like to see is a way to use multi-level networks, backprop or no. The capabilities of a perceptron are fairly meager; it can't even do XOR, which one would think is a fairly basic tenet of a decent AI.
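    To make the XOR point concrete, here's a toy sketch (hand-picked weights, purely for illustration): no single threshold unit can separate XOR's classes with one line, but two layers of the very same units can.

        # XOR is not linearly separable, so a single-layer perceptron cannot
        # compute it. Two layers of the same threshold units can: the hidden
        # layer computes OR and NAND, and the output unit ANDs them together.

        def unit(weights, bias, inputs):
            return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

        def xor(x1, x2):
            h_or = unit((1, 1), -0.5, (x1, x2))        # fires if either input is 1
            h_nand = unit((-1, -1), 1.5, (x1, x2))     # fires unless both inputs are 1
            return unit((1, 1), -1.5, (h_or, h_nand))  # AND of the two hidden units

        print([xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))])  # [0, 1, 1, 0]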

    That article just struck me in the way it waved around these very basic AI techniques as if they were something to be proud of.

    But you can't argue with results. B&W is definitely interesting to a lot of people, so they got something right.

  • It was written in Microsoft Visual C++.
  • The text says:
    (it'll be available on other platforms shortly)

    Does anyone know what platforms he's talking about?
  • Actually one interesting thing about AI that keeps learning is that it can drive itself insane. At least that was the experience we had where I worked last. We had a neural network to control a highly non-linear system with a huge amount of noise. It would do well, but if you kept it in learn mode, it could develop some really serious quirks. It's possible it was just a bug in the NN code, but it was certainly an interesting "feature".
    It would also be interesting if you developed an AI through randomization and had it end up coming up with a completely non-intuitive solution that kicked. Although if that happened it's probably something that should be fixed with the way the game plays.
  • Different problem. DirectX/OpenGL provide well-known APIs with well-known oft-repeated tasks to implement.

    For AI...

    a) Human factors are currently important. Deciding how to structure a neural net, for instance, is a bit of a black art. Ditto for a GA/GP. A card probably isn't going to help this *very* critical stage.

    b) Another limiting factor is feedback -- for supervised learning. That is generally only available through actual games... preferably with people, since a strategy that does well against a badly-coded AI could get trounced by a person. Adding a card isn't going to significantly speed up the number of games a person is willing or able to play to provide more training data.

    c) Sheer problems of scale. There's a LOT of bits to choose. For instance, try computing the number of valid Bayes net structures for, say, even 100 variables. It's not tractable to apply, oh, a distributed.net-style approach.

    And there are probably more people who can write little bits of code than can intelligently plan complicated recurrent neural networks, say.
  • A neural net is just a function fitter. It won't help you decide what functions to fit or how to use them. That's a long way off from planning...

    And real-time games such as FPS games, in particular, might be REALLY nasty; what constitutes a training instance, and where does feedback come from? If one picks up armor and then gets immediately fragged, how does one make sure that "picking up armor" -- if that's allowed in the input space -- doesn't get associated with negative consequences? And so forth.
  • It cheats too, I believe -- if memory serves, its fleets suffer no attrition. Obviously, this gives it a MASSIVE advantage in, oh, exploring the New World -- a human-controlled Columbus might easily vaporize before his explorations up and down the North American coast reveal a single province, whereas an AI explorer can simply wander around until all the terra incognita coast is revealed.

    There is also a slight suspicion that it may cheat in diplomacy or money.
  • ISTR that some bloke actually managed to pull off a dissertation involving a fully autonomous, non-cheating Netrek team. Heh.
  • C4.5, if memory serves, includes a pretty good example of a decision tree algorithm. Briefly, it's a tree in which each node queries one discrete attribute, and based on the value deterministically either provides a discrete output (a classification), or selects one of the child nodes for another query and decision.

    A common criterion for deciding which attribute to use for any given node is information gain a la Shannon -- the most informative attribute being selected.
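    For reference, that criterion boils down to a few lines (illustrative Python, not tied to any particular game; examples here are (features_dict, label) pairs):

        from math import log

        def entropy(labels):
            """Shannon entropy of a list of class labels."""
            total = len(labels)
            counts = {}
            for label in labels:
                counts[label] = counts.get(label, 0) + 1
            return -sum((c / total) * log(c / total, 2) for c in counts.values())

        def information_gain(examples, attribute):
            """How much splitting on `attribute` reduces entropy over the labels."""
            labels = [label for _, label in examples]
            by_value = {}
            for feats, label in examples:
                by_value.setdefault(feats[attribute], []).append(label)
            split_entropy = sum(len(subset) / len(examples) * entropy(subset)
                                for subset in by_value.values())
            return entropy(labels) - split_entropy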

    There are other foo, such as how you decide when to _stop_ splitting (overfitting is a problem if you let outliers and noise produce lots of spurious leaves -- and with noise you may not get a 'perfect' fit, anyway), how to prune the tree and so forth. You could perhaps modify the tree based on feedback -- or even maintain multiple trees using different criteria and weight them accordingly. Heck, one could probably encode a decision tree for a genetic program and use the player's input as part of fitness. *shrug*

    I suspect that it's similar in spirit to regression trees, but I've not used the latter.

    As for perceptrons, I'd be surprised if he *really* means classic perceptrons -- since those are linear combinations with the linear separability requirement and all -- instead of, say, multilayer neural networks of some form or another.
  • The devil's in the details. A fairly big problem is situational awareness; with a random-map game like SMAC, the choice of starting strategy should be heavily influenced by your surroundings and your neighbors. Heck, strategies even need to work around the amount of fungus near you... The variability in SMAC is further exacerbated by, say, Unity pods -- I don't recall if the AI ever goes for them, but the result can make a very, very big difference (good or bad).

    Maybe if it had, say, a LISP or Perl interpreter so people could easily try out even different functions and algorithms, let alone tweak parameters.
  • ...and flexible game design itself. In particular, some games are highly configurable -- Space Empires IV [malfador.com], for instance, lets you redo the entire technology tree and a rather large number of other settings; even without that customizability, it would still be a highly complicated game. As a consequence, the number of variables that would be needed for, say, even non-completely-scripted ship design would be rather extreme.

    Perhaps 'completely reconfigurable' versus 'highly competitive AI' is a fundamental choice, and it's implausible to have both with current limitations?
  • Right. The pattern matching and unit coordination might be tricky. The AI might be able to judge that _overall_ its military is stronger, but that doesn't easily lead to figuring out the where, who, when and how.

    Taking the example of _Xconq_, for instance. In the standard game, one can use amphibious assaults; one could start with coastal bombardment of port cities via BBs; one could use bombers to parachute infantry into a nearby island to set up bases, and then send in air support to cover an eventual amphibious (or paradropped) invasion (a favorite of mine -- I've won games against the AIs, heh, without using ships at all...); one could use carrier-based air instead... and it all has to be coordinated well, because a transport or two of armor can be vaporized pretty easily. And amphibious assaults without air support are just asking for trouble...
  • Could be "case based reasoning". A company I used to work for a loooong time ago used this to debug the electronics of cars (which car? well...it's named after a sign of the zodiac). It was a lot of OPS5 statements that boiled down to:

    if the car won't start, then

    1. check the battery (80%)
    2. check the wiring (15%)
    3. check the solenoid (5%)

    Or something to that effect. If the problem turned out not to be the battery, the program backtracked to check the wiring or the solenoid.

    A lot of it could be more complicated than what I've shown. I wasn't an AI guy (aum mani padma), just a grunt.
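    The gist, in a hypothetical Python sketch (the rules and percentages are the made-up ones from above, not the real system's):

        # "Try the most likely cause first, then backtrack": candidate causes are
        # ordered by prior likelihood and tried until a test pins down the fault.

        RULES = {
            "car won't start": [
                ("check the battery", 0.80),
                ("check the wiring", 0.15),
                ("check the solenoid", 0.05),
            ],
        }

        def diagnose(symptom, test):
            """`test` is a callable returning True once the real fault is found."""
            for action, prior in sorted(RULES[symptom], key=lambda r: -r[1]):
                if test(action):
                    return action     # fault located, stop here
                # otherwise backtrack to the next most likely cause
            return None

        fault = diagnose("car won't start",
                         test=lambda action: action == "check the wiring")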

    I wonder how the compute power of a circa 1985 LISP machine compares to your basic 1GHz Pentium IV?

  • I know A LOT of people. I'd say you need to get out more, if you have nothing better to do than troll.
  • Haven't you seen Terminator?

    I have yet to meet anyone that HASN'T seen that movie...well T2 at least.

    Anyway, while I know that this post is supposed to be a joke, it is something that needs to be considered. Once something thinks for itself, there's no reason to believe that it won't come to the conclusion that it would be better off without us. It might simply see the 'tendency to destroy ourselves' in us, and decide to destroy us before we can take it down with us.

    Fortunately, that kind of AI is probably still far off in the future. But there are many issues that sci-fi stories seem to address, even though those issues might not present themselves for a few years.

    The AI-turned-on-humans theme is found not only in T2, but in 2001, Star Trek (Data's 'brother' Lore), the Matrix; the list goes on. If you can get past the bad acting, every week the Outer Limits seems to address some techno-nightmare issue.
  • Real AI can't solve any real problems. The theoretical stuff is all good, but it's not practical. You can't solve a planning problem in milliseconds while keeping the framerate up and having realistic-looking characters. When people laud (sp?) the AI in games, it's not so much that an advance in AI theory has occurred, but that someone was able to code AI in such a way that it's useful in a real-world application.
  • Don't pick the cow. The cow is totally freakin stupid. Not only do you have to spend all of your time slapping it, but it spends all of its time doing especially stupid stuff.

    Cow pulls up bush and throws it at villager [slap slap slap]
    Cow pulls up bush and throws it at food [slap slap slap]
    Cow pulls up bush and throws it at tree [slap slap slap]
    Cow pulls up bush and eats it [slap slap slap]

    Stop pulling up the freakin bushes! [slap slap slap]

    Jon Sullivan
  • have you played the game?
    when's the last time you taught your Tamagotchi to terrorize a village?
  • Here are a couple of thoughts:

    - Couldn't some company somewhere come up with an AI card? Think along the lines of the 3D accelerators 4 years ago. There could then be a standard set of APIs to interact with AI hardware, like DirectX or OpenGL.

    - What about an open-source AI project for use in games.

    Just my $.02

  • The article says to become the One True God you have to make your power felt. I was pretty excited when I read this, but I've just spent the last couple of hours searching and I can't find any felt anywhere. This is really frustrating. Can any B&W guru give me a hint?

  • We heard you the first time.

  • I don't think you're evil because your creature is throwing the peeps. That would make your creature evil, not you.

    it's very easy to slip to the dark side however. things like not keeping your villagers happy, feeding your creature meat (not sure), storms or really any destructive miracle will make you evil.

    -Jon

    yes, i know i can't spell.
  • Actually, I'd like to run it on one of my Netware 5.1 servers.
  • Don't put all your faith in neural networks. As the article pointed out, it used perceptrons (essentially a basic NN) for some things, competing fuzzy logic (group minds, I'm making an assumption there) for some, and decision trees.

    Neural networks do work well for some things, but there is ample documentation that in some complex situations, the NN may key and learn on parameters that are not key, but coincidental instead.

    tweak, tweak, tweak, rewrite, tweak, tweak, rewrite.... repeat as needed.

  • Check out the C4.5 and ID3 decision tree algorithms by Quinlan. I'm sure a search on google will result in many hits. These are some great academic examples of decision trees and I believe this type of algorithm is the "decision tree" that is being discussed.

    If that is too confusing then try to find an introduction to Machine Learning or Learning Theory.

    As for perceptrons, in terms of "learning theory", that usually means an algorithm that can divide a set of data into classes. If these data are decision-making attributes, the algorithm will divide them into 2 (or more?) parts, and then the specific action indicates which "type" of behavior this is... or something like that.
  • TDIDT (top-down induction of decision trees) is an old-fashioned type of machine learning. The intuition it is based on is this: each item of data (event) has a class and a set of attribute values. The class might be "good" and the attributes might be "killno", "kissno". The values for "killno" and "kissno" might be "5" and "500". The root of the decision tree is a description that maps to the whole data set. A question is constructed at the root from the data set by seeing what the most "informative" or "significant" split in the data set is. So, if you have a situation where all "good" things have kissno>killno, then the question at the root is "if kissno>killno branch 1, else branch 2". You can construct algorithms that use any criteria to do this, depending on the language that you choose, but the more complex the question, the higher the search complexity of finding a decision tree that maps to the data set. The idea is that you generate questions for every branch until you have a tree that has leaves with only one class - i.e. good or bad.

    DTs are OK, but they are exposed to structural risk, that is to say, they can become so complex that they overfit the problem... a very bushy tree probably represents more information than the domain theory it was induced from, and that information is probably just noise, so the tree will not generalise well to data from outside the original data set. Another problem is that DTs are bad at catching data at the margins of distributions, because they tend to use measures based on statistics or information theory.

    Modern machine learning uses algorithms like Support Vector Machines, because these have properties that limit the structural risk. Alternatively you can use fuzzy neural nets, and these deal with marginal cases much better.

    Of course I have a number of unpublished algorithms that do better than both of these techniques... but I can't tell you about them, because I would have to kill you ;-)

  • My GF thinks Counter-Strike is cool....

    To each their own :)

    Hehe

    Jeremy

  • Hehe yeah.. ok...

    Evidence: Article 1 [terminalvelocityclan.net]

    While this doesn't prove anything since it's all the internet, I can tell you with absolute certainty she is 100% woman :)

    Jeremy



  • I am [TV] Bah

    See here [terminalvelocityclan.net] for her actual opinion :)

    Ehh she is Pre-Med so the blood thing obviously doesn't bother her :P

    Jeremy

  • The strength of B&W is that it went for a completely noddy decision system

    Why not a big-ears decision system? Or a plod decision system? Or any of the other characters from Blyton's Toyland? And wouldn't Enid Blyton Ltd. be after their ass if they actually implemented a noddy [pbs.org] decision system?

  • The article's pretty interesting but doesn't really go into too much detail. Does anyone know what Black and White would have been written in and what tools were used?
  • What? Surely you are kidding? The AI in Black and White is interesting because it is far better than most computer game AI. But there's nothing revolutionary about it in the field of AI in general. They are using well-established algorithms that have been known about for years or decades.

    I'm not cutting down Black and White. I love that game, at least partly because of the AI. But let us not kid ourselves. Gaming is not in the least leading the way in the field of artificial intelligence.

    --

  • Computer AIs will never be able to compete with an experienced human opponent in today's strategy games, unless the AI is allowed to cheat. This is because current computer strategy games have a nearly infinite set of possible outcomes for a given state, because of terrain/map features and the absence of information about unexplored territory and what other players are doing. These games are not like chess, which has a fairly limited set of possible outcomes and no hidden information about the current game state.

    CPU processing power will not help this situation. Brute force approaches will add little improvement to the AI. The quality of the AI is derived from the quality of the algorithms used.

    Good idea on creating a plug-in AI for a game ... it's true that a game becomes stale as soon as the AI becomes stale.
  • Gaming has become so obsessed with AI and graphics and all the frills that I'm getting really depressed about it.

    I can't wait until games get so bloated with individual player skins that it takes 2 DVDs to install. (Just so they look just like they do in real life, with injuries and scars too!)

    Can anyone guess what football game I've been playing most lately? Tecmo Super Bowl... NES style.

    Because it's more fun to play than all these new ones.... Pretty sad.

    Never underestimate the stupidity of the individual, and never overestimate the intelligence of the masses.

  • ... is that they've never published (and probably never will publish) what the AI is supposed to do. Therefore, any errors in the AI code cannot be identified as such - everything is ultimately a 'behavior', whether intentionally programmed by Lionhead or not.

    They've achieved programmer nirvana, where they can at last exclaim "It's not a bug, it's a feature!" and leave it at that.

    Reading the messageboard discussions for B&W reminded me profoundly of something: discussions I've had in the past with colleagues trying to deduce the inner workings of some third party technology that shipped without source code. All you can do is send rays into the black box and see where they come out.

    Black & White is the first game to have turned the art of debugging into a commercially successful entertainment form, which is why I don't need to buy it - I get plenty of that from 9 to 5.

    -BbT

  • I want to take a moment to talk to all the kids out there. Sure, villager-throwing may seem like a "cool" thing to do for "kicks," but as Calle Ballz shows us, once you start, it's hard to stop. Or rather, to get your creature to stop. Don't end up like Calle Ballz. Don't throw villagers.


  • > So... A whopping 0.25% is now devoted to game AI? Step back.

    Now remember, that's the equivalent of 100+ Commodore 64's chugging away madly.

  • Haven't you seen Terminator? For those that haven't, it is about a killing machine from the future where humans have been nearly wiped out by intelligent machines.

    This is the sort of careless, thoughtless behaviour that caused it. They built an AI designed for war; these computer game programmers are designing AIs for synthetic battle. How will they know whether they are fighting in a game or reality? Have the Asimov rules of robotics been added as a safeguard?

    If just one of these escapes, it could replicate itself and spread across the world using the internet. Humanity would be no more.

    Why can't we learn from these people?

  • What would a 100% increase on 1.0 be? 2.0

    What would a 50% increase on 1.0 be? 1.5

    What would a 50% decrease on 1.0 be? 0.5

    Now, what would a 250% increase on 1.0 be? 3.5
  • ...a game called Galapagos? It was made by a company named Anark, which seems to have fallen off the face of the earth (anark.com is now a provider of web tools).

    Basically, you had a pet insect-sort-of-thing that you had to help escape from a 3D puzzle world. The trick was that you didn't control the creature, you only manipulated the environment and let the creature react to it. The creature was driven by Anark's AI technology with a buzzwordy name, but you could see it working. After it fell off a certain place a few times it would be reluctant to go back there, and would try to ignore your commands (you could poke it with the mouse) and get back to a safe place.

    Anyway, my point is that B&W might not have the most advanced AI in gaming history after all.

  • The problem with Quake bots as an AI challenge is that you can give them perfect aim, making the challenge into one of who has the best gun when, which could then break down into deathmatch spawn points.

    Also, it's impossible with Q3 to get clientside bots to connect to the server as the protocol is unknown. You'd have to edit the DLL source that Id released, and getting two bots in one source tree is more than a whim.

    I would like to see an environment where the hardships in overcoming it would all lie on the programmer instead of a user, like with Quake's aiming. The general idea would be to have every decision be one of the type of 'no best answer' as opposed to Quake's aiming 'there is a best answer', at least for every hitscan weapon.

  • almost all of these questions can be answered with a variable holding a certain numerical value.

    The problem (and what makes these games fun) is that the answer to the question depends on more than a simple number -- they all depend on the state of the other questions, and all sorts of other factors in the game. It's the emergent properties that are interesting -- and that are hard to quantify.
  • There is a problem with having the AI play against itself though. I can't remember the exact reason but it can only learn so much that way.

    The problem is that the AIs will become good at beating other AIs - not at beating humans. If the competing AIs are all stupid in one particular way, they won't clue into it by themselves. Also, if you have deterministic AIs, you might enter a closed loop (endlessly replaying the same set of games with the same set of learning variations).

    Careful design of the AI can minimize these effects (e.g. by forcing speculation on random strategies to discover new techniques by brute force), but it's not easy and not very efficient most of the time.

    Humans are very good at showing AIs where the holes in their techniques are, so mixed human/AI games will provide the best learning environment for them most of the time.
  • According to the article, game AI has traditionally been forced to use a meager .1% of the CPU in games, due to the huge resource requirements of making the pretty pictures. However, this has all changed with an earth-shattering 250% average improvement in the amount of CPU time allocated to AI-- leaving us with a remarkable .25% of the CPU dedicated to AI!! That's amazing!
  • One is that you need one that learns. Before you flame me about this, let's think about this for a second.

    I'm not going to flame you. You are essentially correct. The problem is the practical difficulty.

    There is a backgammon program that learned, from scratch, how to play backgammon. It is now a world-class player. So clearly, programs can learn how to play games.

    One little catch: The program played millions of games of backgammon with itself before it got that good.

    As you might imagine, Alpha Centauri is significantly more difficult than backgammon. Chess hasn't even been "learned" yet (all the best approaches I know have heavy dollops of brute-force searching). Plus, as the problem increases in difficulty, the time necessary grows. Ouch.

    It's a good idea, but we don't know how to do it practically yet. That's why B&W really is interesting to me; while the algorithms aren't necessarily ground-breaking, it is an interesting application of real-time AI in an environment where the AI really shines (as opposed to input difficulties).

  • 1. time scales. as one developer put it, "if i want to use a new AI technique in a game, i have about two weeks to research it, and a month to implement it. any more than that, and i won't be able to justify the time spent on it to my boss."

    this is pretty standard in the industry, btw. otoh, it would take a skilled ai programmer easily more than a month-and-a-half to implement and debug an inference engine in C++. and you can forget about something like writing a compiler for building behavior-based networks - that takes too much time.

    Seems to me that there would be a niche for a company to invest heavily in developing a flexible AI framework to be used in multiple games. Or does something like an inference engine require so much customization to a particular ruleset that this wouldn't be worthwhile?

    Otherwise, though it represents a big up-front cost, a company with a variety of titles, or a well-established series, should be able to spend a little extra time on AI and gain a competitive advantage.

    --

  • "As recently as 1999, most games devoted only .1% of the CPU's resources to running the AI."

    ...snip...

    "According to a recent article on the game development site Gamasutra, an average of 250% more of a computer's resources are now devoted to AI."

    So... A whopping 0.25% is now devoted to game AI? Step back.

    Seriously though, processing power is a really weak way of assessing the sophistication of an AI. It's really easy to max out a chip on a neural net that ends up going almost nowhere, and a well-programmed behavior engine could create an extremely realistic AI on a Palm. Like most anything, it's all about the coders. It's great to see that more attention is being given to cognitive realism (Sims, B&W, etc.) instead of/in addition to kinematic realism (Trespasser, <insert latest FPS here>, etc.).

    I can't wait until this kind of dedication to learned and adaptive behavior makes its way to War/Star/foo-craft... Or Microsoft Bob...

    Kevin Fox
    --
  • My GF thinks Counter-Strike is cool....

    Propose. Today.

  • Can we get some AI routines that will keep the villagers from walking right through a four-foot-high piece of poop?

    I just can't be impressed with AI until then.
  • "decisions trees have not been used in games before".

    I'm willing to bet that's a load of bull. Maybe no one called it a "decision tree", but I'm sure there have been AI structures that perform exactly the same function in some game, somewhere. There have been *a lot* of games, and they have explored *a lot* of options. Maybe they didn't have 6 million dollars of personal cash to make them famous like B&W, but they were still there.

    I was glad to see him acknowledging its limitations though.
    What he describes is nothing special to the AI world, very basic techniques. I'd like to see something more complex, perhaps a backprop neural net and adaptive planner (which would give them foresight) instead of a perceptron and a decision tree. For a single creature, the CPU hit would be trivial. The real limitation is the human time required to design it.
  • I'd rather it did cheat than provide such an easy win. The scenarios are incredibly complex and challenging at the outset. But over time, human victory is assured if you can hold out long enough.

    It would be a great multiplayer game, but its nature isn't very easy to multiplay with.
  • It's not that the developers are stupid, they know how to play their game and they can usually fill in decent values for all of those variables.

    The problem is that being good at a strategy game involves much more than the variables you list. Learning to plan out attacks and spot weaknesses are not things that the AI can easily do, nor pick up from observing a person playing.

    These are the Achilles' heel of AI in a strategy game, and there is no simple fix.
  • I'm aware that a perceptron is a neural net, but being limited to one layer really gimps its ability to generalize.

    But I think you sell neural nets short; while they are pattern recognizers, they are capable of finding extremely complex relationships between variables -- relationships that are hard to code as boolean values.
  • But I think in practice it would prove difficult. Mining data from a set with so much variance in player skill would prove very difficult. And again, you still have the problem that figuring out a huge array of optimal variables is not going to give you a good AI. It will be better, probably, but still woefully inferior to a human. It all comes down to foresight and insight. Until we can put that into an AI, forget it.

    Actually, one way to make AIs challenging is to play to the computer's strengths; real-time games with lots of things to do are great for computers because they can handle it all. A human needs the game to go at a certain pace or they lose it.

    So Europa Universalis, for example, is a game that is much more difficult to win at if you don't allow pausing.

  • Well, writing a strategy game AI isn't like writing a novel, but it's no small feat either. Take the time to consider everything you think of while playing a game. Yes, 95% of it is probably set ahead of time, but that last 5% is what really sets an AI opponent apart from a human one, and that's where the challenge in programming an AI is. An example: when I play games there are certain things I might consider a few turns in advance, while others I ignore. Having the computer decide what to consider in depth, though, is hard. Does it consider the long-range implications of launching an attack, or of expanding? It doesn't necessarily have CPU time to do both, just as a human only has so much real time to think about the situation. It's very easy as a human to quickly judge the situation and say "I'm pretty safe, I'll expand for now", but telling a computer how to do that is much harder.

    The idea of the computer learning after every game is nice though. There is a problem with having the AI play against itself though. I can't remember the exact reason but it can only learn so much that way. What might be interesting is an internet site that could collect the AI's from many users games and combine them, distributing new AIs. That way every person playing against the computer is helping to train it.

    And I definitely like the idea of an API to build your own AIs or expand the one currently there. It would be fun to load up Starcraft, go on BNet and start a game with someone, and see if your computer program can beat them. Or even just playing against your own AI to train it.
  • In 3.3 milliseconds, an Athlon 550 can go ahead and read/write RAM a few hundred times, do a few hundred thousand instructions, etc. When you consider that it has 1 clock tick every 1/550,000,000th of a second, 0.0033 seconds (or about 1.8 million clock ticks) doesn't seem like that short a period of time at all.

    Think of popular and successful games: Half-Life had neat AI for the Marines. If it had just been the odd aliens and sci-fi plot, I probably wouldn't have played the game through twice in a row. That game performed well on a 300MHz machine!
    --
  • Just a guess, but if each node in a tree was a decision (do I eat this?, is this compatible with my alignment?) and the left and right had strengths (60% sure I'd say yes), then you could tweak that bias as you monitored feedback from the actual outcomes.

    It's a bit more crisp and reflective than a neural net, in that you know the specific purpose of each node. In a neural net, it's hard to reflect: "why exactly did you make that choice?"
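    That guess can be sketched directly (toy Python; the question, numbers and learning rate are invented): each node carries a named question and a bias that feedback nudges up or down, so you can always ask it why it leaned the way it did.

        # Toy sketch of a decision node whose yes/no bias is tweaked by feedback.
        # Nothing here is from B&W's actual code; it just mirrors the guess above.

        class DecisionNode:
            def __init__(self, question, bias=0.6):
                self.question = question   # e.g. "do I eat this?"
                self.bias = bias           # current confidence in answering "yes"

            def decide(self):
                return self.bias >= 0.5

            def feedback(self, reward, lr=0.1):
                """Positive reward strengthens the leaning, negative weakens it."""
                self.bias = min(1.0, max(0.0, self.bias + lr * reward))

        eat = DecisionNode("do I eat this?", bias=0.6)
        eat.decide()        # True -> the creature eats
        eat.feedback(-1.0)  # the player slaps it; the bias drops toward "no"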

  • I can make a few guesses. Perceptrons and other neural networks basically map a set of inputs to a set of outputs. An external error is generated each time the network runs, and the network is adjusted to try to reduce this error. In the case of a game, the error would be derived from the success of the unit. Each sample theoretically drives the network to minimise the error; however, it is a rather random process, especially with such a non-linear system.

    The real problem is to choose the right inputs and outputs for the network, and to get the thing to actually work. The inputs and outputs must be general enough that the network can analyse them, but specific enough to represent the environment and control the unit.
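    A hedged sketch of what that might look like for a game unit (every feature, action and number here is invented for illustration): encode the state into a fixed-length vector, score each candidate action with a small linear unit, and let the unit's success scale the weight update.

        # Invented example of input/output encoding plus a reward-scaled update.
        # None of this is any real game's representation.

        def encode_state(unit, world):
            return [
                unit["health"] / 100.0,
                unit["hours_since_eaten"] / 24.0,
                min(world["enemies_nearby"], 5) / 5.0,
                1.0 if world["food_visible"] else 0.0,
            ]

        def act(weights, features):
            """weights: dict mapping action name -> weight list; pick the top score."""
            scores = {a: sum(w * f for w, f in zip(ws, features))
                      for a, ws in weights.items()}
            return max(scores, key=scores.get)

        def update(weights, action, features, reward, lr=0.05):
            """Success (positive reward) strengthens the chosen action's weights."""
            weights[action] = [w + lr * reward * f
                               for w, f in zip(weights[action], features)]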

    As for decision trees, they would be useful for breaking down the situations a unit is faced with, and isolating them so that the decision making is effective. It also provides a look-ahead mechanism to try to predict what will happen. The decision tree would be the main memory of previous experiences. I think in the case of a game it would be very difficult to recognise similar circumstances to fit branches of the tree.

    AI is a very interesting area, and not really that difficult to get started in. I suggest looking up some of the terms on the net and seeing what turns up, although a good textbook will be much better.

  • My wife normally plays the Sims, building huge elaborate houses with intricate themes. That is, she doesn't like games with conflict. I'm trying to transition her to a more combative game, and I convinced her to play Black & White because a good portion of it is just playing around.

    She's cruising through the tutorial having a great time when she is supposed to bring a rock back to the village sculptor. She fails to zoom in far enough, and drops the rock right on top of him. "No! I didn't want to kill him. Oh, no." She quits the game, turns off the computer (we never turn the computers off), goes downstairs and crawls under the covers.

    Someday people will realize this is what it's like to "play like a girl" and write a game accordingly.

  • Another article [upside.com] discusses how an AI researcher is developing bots with cutting-edge AI.
  • The Creature uses a noddy [pbs.org] state machine

    So that's why my character acts so wooden.

  • Acck, I'm going to get flamed for this. But anyway.

    A 250% increase in 0.1% of CPU time means it goes to 0.35%. Learn some simple math, yeesh.

    --

  • Does anyone know what Black and White would have been written in and what tools were used?

    Some of it at least was written in C/C++. I know this because I found a bug where if you click on a workshop that is in your area of influence, but which you do not control, you get: "Percentage complete: %3.0f%%". Someone didn't do their sprintf correctly. :)

  • Anyone have any insights into this "decision tree learning" that Evans mentions? It seems to me to be one of those fuzzy terms that could refer to any of a dozen things.

    Like "perceptrons" -- unless he's actually referring to the algorithm described in Marvin Minsky's book of the same name, which would actually surprise me quite a bit, based on (what little) (and I mean little) I know about the book.

    Seems to me that a "decision tree" is a simple deterministic programming construct, so the real interesting part would be how you change it in response to stimuli. Anybody have any inside knowledge (grin) on what he's actually doing?

    --

  • You know, this game is starting to make me wonder if we're not just individual threads in some massive "Black and White II" game, a la Thirteenth Floor [imdb.com]. Forget Descartes: can you think a non-original thought is actually original if you're programmed to ignore the source?
  • Are AI routines still computed during the vertical refresh with today's multi-threaded OSes?

    You'll have to ask the task scheduler of the OS when it's executing the AI threads, now. ;-) Of course, "vertical refresh" is a CRT concept, so fortunately, developers can simply draw or blit into abstract frame buffers. OS drivers then get that bitmap representation onto a physical screen (LCD/CRT/VR goggles).

    Joking aside, I recall not too long ago having to count cycles to keep my AI code from "leaking" outside of the vertical refresh period on a Game Boy.
  • Real AI can't solve any real problems. The theoretical stuff is all good, but it's not practical. You can't solve a planning problem in milliseconds while keeping the framerate up and having realistic-looking characters. When people laud (sp?) the AI in games, it's not so much that an advance in AI theory has occurred, but that someone was able to code AI in such a way that it's useful in a real-world application.

    Duh.

    The point is that the theoretical stuff is the bleeding edge, not the gaming industry. The fact is that they don't pick up on stuff until it's well past the theoretical stage.

    Immersion is one thing, games do it rather well, but the study of AI isn't all about "wowing" the audience. It is a serious mathematical, philosophical, and electrical study of the limits of computer programs. The gaming industry doesn't have time for this stuff. They are busy making money, like they should be.

  • Let's face it. Cutting edge graphics and killer AI always show up in the gaming industry before anywhere else. They continue to impress us. Unfortunately, people think this is more important than gameplay, but I digress. Graphics were the fad of the past few years, but perhaps AI will be the new fad for the coming years...

    Not really... The AI in games is minimal at best when compared to the capabilities of AI in a theoretical sense. The problem is that AI is difficult to design and takes a lot of time, and developers are out to make money, so they invest in technologies that will immerse the player in the game to get them addicted to it.

    It's a new type of addiction for me, because I'm not playing to see how far I get, or to see how big my avatar will get; it's to see what he does next when he's off my leash. Was he watching when I was throwing the rocks, and did he start throwing villagers? Was he watching me pick up and move villagers to do the same?

    So, it may be a long time before some really sophisticated AI gets into games, if ever. Think about it, if a chess computer can beat the world champion, don't you think there are strategies in many of these games that would be similarly difficult to beat?

    If you want cutting-edge AI, don't look at games, look at OSCAR [arizona.edu] at the U of Arizona, or at the MIT Media Lab [mit.edu] , or at the stuff going on at CMU [cmu.edu] or RPI [rpi.edu]. That's where the real progress and research is being done. Not in some programming sweatshop at EA.

  • Wifey: Can I see your new game?
    Me: Sure. It takes a long time to load, but here...
    Wifey: Oh! She's so cute!
    Me: It's a he! And my creature is not cute, he's 'neutral'.
    Wifey: What's he doing?
    Me: No, don't poop on the villagers! Bad boy!
    Wifey: Oh my god! He's about to eat that little girl.
  • Try Black and White Center [bwcenter.com] for some more information about these ports:
  • I liked that article. It helped explain why my villages behave so differently in B&W. One village is always begging for food, and can't stop having children, while the other village is the exact opposite. It is so interesting to see the severe divergences in group and individual AI behavior patterns based on only .25% of the CPU's computation time!

    Since I've been spending all my time on the 3rd level of B&W lately, I haven't gotten the chance to do much creature training. But just watching how the villagers act on their own, and develop their own 'community', is quite interesting. What will be really interesting is to see all the new offshoots of this game, much like CounterStrike was to Half-Life. And as a previous poster mentioned, gaming always pushes the envelope of what a PC is capable of, so it should be a joy to watch PC AI evolve over the next couple of years.

  • "... that perform exactly the same function in some game, somewhere....I'd like to see...backprop neural net and adaptive planner"

    A perceptron _is_ a neural net, and if I understand Evans correctly, he's not saying that the use of decision trees per se is unique (it isn't), but that the game objects use perceptrons to weight their traversal of the decision tree -- which would be an adaptive planner. B&W is the first successful game that even comes close to demonstrating "real" AI techniques.

    You'd be amazed at the lack of AI sophistication that's shipped in games. As far as I know (and, just to establish some credentials, I was the founding editor of Game Developer Magazine, the editor of AI Expert magazine, and used to teach AI techniques at the Game Developer's Conference), no commercially successful game has _ever_ before shipped with an AI based on neural nets, genetic algorithms, true fuzzy logic, or even a "real" inference engine. There have been a few non-important games that have used non-adaptive neural nets and at least one almost-successful game that claimed to use GAs (Creatures? It was kind of like a Tamagotchi-- you raised these things in an environment and taught them how to catch food and so forth.)

    90% of game AI is based on finite state machines, decision trees, and scripting.

    In defense of game programmers, though, everyone thinks that it would be easy to "use a neural net" to control a game object. Generally not so. A neural net is a pattern-recognizer, not a symbol manipulator. Anything you can do with a neural net you can do with boolean operations, and a sequence of boolean gates is typically faster to program and execute. But what's clever about B&W, if I understand Evans correctly, is that NNs are used to weight the traversal of a pre-existing decision tree (i.e., the next time I "see" a fire burning, I am marginally more likely to cast a "Water Miracle"). That's a good design, since god games are enormously repetitive.
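    A rough sketch of that design as I read it (hypothetical Python; the branch names, features and numbers are mine, not Lionhead's): each branch of a node gets a small linear unit whose score biases how often it is taken, and stroke/smack feedback nudges the chosen branch's weights.

        import math, random

        class Branch:
            def __init__(self, action, n_inputs):
                self.action = action
                self.w = [0.0] * n_inputs      # per-branch perceptron weights

            def score(self, features):
                return sum(wi * fi for wi, fi in zip(self.w, features))

        def choose(branches, features):
            """Soft selection: higher-scoring branches are proportionally more likely."""
            weights = [math.exp(b.score(features)) for b in branches]
            pick = random.uniform(0, sum(weights))
            for b, w in zip(branches, weights):
                pick -= w
                if pick <= 0:
                    return b
            return branches[-1]

        def reinforce(branch, features, reward, lr=0.1):
            """Divine-hand feedback: stroke (+1) or smack (-1) the chosen branch."""
            branch.w = [wi + lr * reward * fi for wi, fi in zip(branch.w, features)]

        branches = [Branch("cast water miracle", 2), Branch("ignore the fire", 2)]
        features = [1.0, 0.2]                  # e.g. "fire visible", "villagers nearby"
        picked = choose(branches, features)
        reinforce(picked, features, reward=+1.0)   # the player strokes the creature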

    The other type of game object for which I've been baffled that no one has shipped a neural network is in a fighting / fast reaction game, learning the player's bias (does he always break left and then perform an immelman?, does she always use a particular fighting move?), but most introductory books on NNs don't discuss neural nets that can handle temporal data. So there are a million game programmers who know enough about NNs to "know" that they don't work.

    A lot of games have also claimed to ship with fuzzy logic, but in every case that I have spoken with the developers, it turned out to be a probabilistic overlay on the results of some boolean operations, not the higher-order symbolic manipulation that characterizes "real" fuzzy logic.

    The Creature behavior in B&W is brilliant. Like the classic AI program Eliza, it demonstrates how ready we are to project "intent" and "consciousness" onto computational structures that are, in reality, not very sophisticated at all.

  • by r ( 13067 ) on Friday April 20, 2001 @01:22PM (#277346)
    Seems to me that there would be a niche for a company to invest heavily in developing a flexible AI framework to be used in multiple games.

    several independent developers have tried that - and the game ai page [gameai.com] has links to pretty much all game ai sdks attempted thus far.

    the problem is that while high-level ai can be pretty general, the low-level ai (pathfinding, collision detection, world physics) is completely tied to the internal representations of the world inside the game engine. it's a similar problem that you have in physics sdks.

    also, given the game development characteristics (18-month dev cycles, ai being one of the last steps in development because it requires a working game engine), it's rare for studios to design a game in such a way that a general solution like an ai sdk could be just 'plugged in' that late in the development cycle. unless the workings of the sdk are well understood, it's easier to just build your own (especially if you're not doing anything complex).

    on the other hand, if a company with a hit game licenses their ai engine to others, that would be a big step in the right direction - the same way that id and epic licensed their graphics engines after the success of quake and unreal. and sure, many studios will write their own anyway, but those who don't want to rewrite a* for the nth time could instead concentrate on writing high-level behaviors. :)
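    for the record, the piece being rewritten for the nth time is small - a generic grid a* in python, independent of any engine's world representation (the grid, costs and heuristic here are the textbook defaults, not any particular engine's):

        # Generic A* over a 4-connected grid; `blocked` is a set of impassable cells.
        # Engine-specific world representations are exactly what makes reuse hard,
        # so this is only the textbook core.

        import heapq

        def astar(start, goal, blocked, width, height):
            def h(a):                                  # Manhattan-distance heuristic
                return abs(a[0] - goal[0]) + abs(a[1] - goal[1])

            open_heap = [(h(start), start)]
            came_from = {start: None}
            g_cost = {start: 0}
            while open_heap:
                _, node = heapq.heappop(open_heap)
                if node == goal:                       # rebuild path by walking parents
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                x, y = node
                for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in blocked:
                        ng = g_cost[node] + 1
                        if ng < g_cost.get(nxt, float("inf")):
                            g_cost[nxt] = ng
                            came_from[nxt] = node
                            heapq.heappush(open_heap, (ng + h(nxt), nxt))
            return None                                # no route exists

        path = astar((0, 0), (4, 4), blocked={(2, 1), (2, 2), (2, 3)}, width=5, height=5)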
  • by Black Parrot ( 19622 ) on Friday April 20, 2001 @06:33AM (#277347)
    > The amount of human time required to develop and debug a proper AI, one that makes a significant use of computational resources, is enormous.

    IMO, machine learning (ML) is the way to solve this.

    And that's what Evans is doing here:
    When your monster does something good (or at least something that you want it to keep doing), your Divine Hand literally strokes it; when it does something incorrectly, the same Hand of God smacks it. Eventually -- ideally, anyway -- it grows into an active extension of your will...
    That's aka Reinforcement Learning. For decision trees, the feedback is the "evidence" that the tree has to explain, so presumably his system saves some/all of the feedback and intermittently updates the decision tree. If you give consistent feedback, it should converge to a point where your monster can guess what the outcome of an action is, and thereby avoid the smacks. As a side effect, it looks like it "knows" what you want. Similarly for perceptrons / neural networks.

    The bit about Moore's Law is certainly apropos. I recently ran a genetic algorithm program that searches for good solutions for the travelling salesman problem, and on a late model x86 desktop system the program was evaluating 1000 candidate solutions a second for a 2000 city problem. Our resource-intensive GUI desktops obscure just how fast our desktop supercomputers really are.
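    For a sense of scale, the work per candidate is tiny - one tour-length evaluation. A stripped-down evolutionary search for the travelling salesman problem (mutation and survivor selection only, so a simplification of a full GA with crossover and a population) looks like this:

        import random

        def tour_length(tour, cities):
            """Total length of a closed tour over (x, y) city coordinates."""
            total = 0.0
            for i in range(len(tour)):
                (x1, y1), (x2, y2) = cities[tour[i]], cities[tour[(i + 1) % len(tour)]]
                total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            return total

        def evolve(cities, evaluations=10000):
            best = list(range(len(cities)))
            random.shuffle(best)
            best_len = tour_length(best, cities)
            for _ in range(evaluations):
                child = best[:]
                i, j = random.sample(range(len(child)), 2)   # mutate: swap two cities
                child[i], child[j] = child[j], child[i]
                child_len = tour_length(child, cities)
                if child_len < best_len:                     # greedy survivor selection
                    best, best_len = child, child_len
            return best, best_len

        cities = [(random.random(), random.random()) for _ in range(100)]
        tour, length = evolve(cities)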

    Also, contrary to what someone suggested in another thread, games are not the state of the art for AI. You can easily find tons of papers on this kind of stuff with your favorite search engine, and in some cases download the code for the program described in the paper.

    That's not to knock it; games will probably be AI's killer app.

    --
  • by Grendel Drago ( 41496 ) on Friday April 20, 2001 @06:24AM (#277348) Homepage
    Have you been smacking the shit out of your creature when he does that? My creature ate a villager once, but I beat him stupid (it's a guilty pleasure... I'm good, honest!) and he hasn't done it since.

    -grendel drago
  • by Illserve ( 56215 ) on Friday April 20, 2001 @06:11AM (#277349)
    CPUs these days are more capable of providing good AI than they are given credit for. In my opinion, it's not a CPU bottleneck that has kept AI at .1% of system resources; rather, it's design time. The amount of human time required to develop and debug a proper AI, one that makes significant use of computational resources, is enormous.

    Therefore, it's done half-assed. I don't blame the developers for this; they are operating in a market in which the average game loses money, so they are under a lot of pressure to cut corners. Truth be told, a crappy AI is probably not going to cripple sales of your game too much (unless that's the central theme, like B&W).

    We're going to need to see the computer game industry actually become profitable before we see more decent AI like B&W.

    Note that B&W was developed with personal cash from Peter, and therefore wasn't subject to the same tight budget/publishing requirements that most games are.

    It's a credit to Lionhead that they got the product out the door without a publisher breathing down their neck.
  • by FortKnox ( 169099 ) on Friday April 20, 2001 @05:30AM (#277350) Homepage Journal
    Let's face it. Cutting edge graphics and killer AI always show up in the gaming industry before anywhere else.
    They continue to impress us. Unfortunately, people think this is more important than gameplay, but I digress. Graphics were the fad of the past few years, but perhaps AI will be the new fad for the coming years...
    I have a small background in AI, and I must say, I have played Tribes 2 only once (I've had it since it was released), because I'm so extremely impressed with the AI for Black & White (if you haven't played it yet, go grab yourself a copy!!). It's a new type of addiction for me, because I'm not playing to see how far I get, or to see how big my avatar will get; it's to see what he does next when he's off my leash. Was he watching when I was throwing the rocks, and did he start throwing villagers? Was he watching me pick up and move villagers to do the same? It's one of the first games I enjoy playing without touching the keyboard... I just watch what he'll do next...
  • by isomeme ( 177414 ) <cdberry@gmail.com> on Friday April 20, 2001 @09:36AM (#277351) Journal
    The Brunching Shuttlecocks explain the lessons of Black and White [brunching.com].

    --

  • by Calle Ballz ( 238584 ) on Friday April 20, 2001 @05:41AM (#277352) Homepage
    I threw one villager into the ocean. ONE! My creature happened to be standing nearby. Now even though my power is strong because I'm being worshipped by just about everyone, it sucks because I'm becoming an evil god now because my damn creature keeps throwing people into the ocean!!!
  • by Telek ( 410366 ) on Friday April 20, 2001 @06:05AM (#277353) Homepage
    is that they become predictable. Once you learn the exploits and how they work, the game is no longer fun. Take Alpha Centauri [firaxis.com] or Master of Orion 2, easily 2 of the best, if not the best, strategy games around (IMHO of course). However, I can play both of them on impossible levels and win almost every time.

    And what really bugs me is that to make up for deficiencies in their AI, as the levels increase in difficulty, the computer just cheats more. I was appalled when I found out firsthand how badly the AIs cheated at the higher levels in the 2 aforementioned games.

    So my question is this: how can this be fixed?

    I have a few ideas. One is that you need one that learns. Before you flame me about this, let's think about this for a second. We're not talking about an AI here that can learn how to write a novel; we're talking about relatively straightforward strategies and mechanical play in these games. I know that 95% of my strategy for these games is down to an art; it's just an automated system until I get to the few points at which I need to make a new decision, or something new crops up. So if I can do this by a predefined strategy, then why can't the computer do that? Keep in mind too that the computer can simply try variations on its current strategies and see what happens. If I beat the computer 9 out of 10 times, and one time with some weird method the computer CLOBBERS me, then hey, maybe it should keep that method around. Also, the computer can play against itself, with many different strategies, seeing how each one works. Keep in mind here folks that the strategies I'm talking about have a few variables: how fast do I expand? at what point do I build an army? how big do I build my army? when do I stop expanding? when do I attack, and who? These are values that can be changed and experimented with, and hence the computer could learn.

    Secondly, one of the things I loved about Alpha Centauri is that just about all settings were configurable through text files. This was amazing. You could make things easier or harder, change global settings, pollution rates, everything. You could even make new factions and trade them with your friends. If somehow settings for the AI were configurable this way, then people could learn how to tweak the AI to make it a more formidable opponent, and then share this information with others.

    Combining those two ideas, throw it on the internet. If you have 5,000 people that are connected (not necessarily at the same time), you can try out hundreds of thousands of strategies for the AI to see what works well, and then upgrade the AI. Actually I think that is a necessity. The AI needs to be easily upgradable, otherwise it'll just get boring as you learn how it works and you can cream the game.

    I'd love to hear some (constructive only please) comments about this, as it's been something I've been thinking about for a while.

    Want to check out the new Master of Orion 3 [quicksilver.com]? Awesome stuff is happening there. -- Telek
  • by r ( 13067 ) on Friday April 20, 2001 @06:52AM (#277354)
    Academia needs to make it more widely known to the software industry that stuff like this has been available.

    academia has been trying. :)

    there are (at least) two big problems in migration of ideas from research into development.

    1. time scales. as one developer put it, "if i want to use a new AI technique in a game, i have about two weeks to research it, and a month to implement it. any more than that, and i won't be able to justify the time spent on it to my boss."

    this is pretty standard in the industry, btw. otoh, it would take a skilled ai programmer easily more than a month-and-a-half to implement and debug an inference engine in C++. and you can forget about something like writing a compiler for building behavior-based networks - that takes too much time.

    2. different priorities. academic AI traditionally focuses on different things than games. in academia, working systems matter, but they're vehicles for the theories and techniques, which are the real crux of the matter. the programs can be slow, and they can consume vast resources, as long as they provide a novel insight into how the human mind or human behavior works.

    games, otoh, run under tight performance constraints (ie. in a 30fps game, even with 10% of the cpu available to AI, you have 3.3 milliseconds per frame to do all of your AI, including collision detection and pathfinding!), and their goal is not scientific insight, but believability - the creatures can be dumb as buttons, and they can be directed by simple finite state machines, so long as they look like they're doing something cool.

    with such different goals, it's not clear what can be done to bring the two closer together. for now we can just hope that if more game developers had formal training in AI techniques (as opposed to learning AI by hacking FSMs or NNs or whatever the fad-of-the-day is), and more academics were aware of constraints of the gaming industry, it would foster a better cooperation and exchange of ideas...

    It works well here, but be careful claiming this is anything bigger than excellent game AI using well-known techniques.

    amen to that.
