So I should preface this with the fact that I'm far from an experienced programmer. My knowledge comes almost entirely from experimenting with coding pservers. I've learned quite a bit, but there are still quite a few topics outside my full grasp.
As for the issue I'm facing... I've decided to venture into some more advanced AI for intelligent bots in PvP. I've done a bit of research into various topics and know a decent amount of the theory behind many AI systems (deterministic vs. non-deterministic and all that good stuff), but when it comes right down to coding the actual AI, I'm running into some issues fully wrapping my head around everything I need to do.
What I've done so far is basically model my decision-making structure on a toned-down neural network (I avoided an actual neural network because the kind of decisions I need to make don't favor backpropagation the way existing examples do, since there's no 'right' answer to learn from your actions). Basically I've tried to account for various internal conditions, external conditions, and desires, each given a weight.
My issue is then turning this data into some form of usable calculation to determine actions.
I can quite easily keep track of all my conditions and such, but I'm not quite sure how to go about weighing them against each other. I could easily turn it into a giant system of if-then logic and throw in a few randomized actions to spice things up, but my hope here is to create something a tad more... dynamic.
If I change the conditions of play (put them in a different type of match, say CTF versus deathmatch), I don't want to have to re-code their entire logic... or at least not the basics of it.
TL;DR version: Basically I'm looking for options (preferably somewhat simple, at least mathematically speaking) to create a low/medium level AI that can account for at least slightly variable conditions.
Note: I'm using C# for all of this. I have some minor Python experience (I'm using IronPython for all my static NPC scripts), but I'm not sure how comfortable I'd be writing large amounts of gameplay logic in anything but C# at this point.
For those who feel like more useless reading:
I've already done a significant breakdown of how to handle the end logic (i.e., abstracted targeting information for attacks based on target movements and a number of variables); where I'm running into an issue is taking external stimuli, running them through internal conditions, and determining a resulting action to take.
Calculating decisions... logically.
Let's get two things out of the way:
(1) It doesn't matter what programming language you are using.
(2) Artificial neural networks are a distraction.
What you seem to be looking for is a utility-based decision system. Given a list of possible actions, you want to determine which one to pick. The basic structure of how to do that rationally is to have a function that expresses how happy you are with a given outcome (this is called a utility function), and you want to pick the action that maximizes the expected value of that function. In most situations you can't easily enumerate the possible outcomes with associated probabilities, so computing expected utility directly is not feasible.
So here is a practical approach that fits into the general paradigm: write a function that, given the game state and an action, returns a number indicating how much we like that action, then pick the action for which the function returned the highest value. This will result in a fairly reactive system (i.e., no planning will spontaneously arise), but it has many advantages: it's easy to understand, easy to tweak, and fast to execute.
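To make that concrete, here is a rough C# sketch of the idea. The GameState fields, the BotAction list, and the particular scoring terms are all made up for illustration; substitute whatever conditions and weights your bots actually track.

```csharp
// Minimal utility-based action selection sketch. Everything here
// (GameState, BotAction, the scoring terms) is a placeholder example.
using System;
using System.Collections.Generic;
using System.Linq;

public class GameState
{
    public float MyHealth;        // normalized 0..1
    public float DistanceToEnemy; // world units
    public bool EnemyVisible;
}

public enum BotAction { Attack, Retreat, FindHealth, Patrol }

public static class UtilityAI
{
    // Score one action for the current state. Higher = more desirable.
    public static float Score(GameState state, BotAction action)
    {
        switch (action)
        {
            case BotAction.Attack:
                // Like attacking when the enemy is visible, we're healthy, and close.
                return (state.EnemyVisible ? 1f : 0f) * state.MyHealth
                       * (1f / (1f + state.DistanceToEnemy));
            case BotAction.Retreat:
                // Like retreating when the enemy is visible and we're hurt.
                return (state.EnemyVisible ? 1f : 0f) * (1f - state.MyHealth);
            case BotAction.FindHealth:
                return 1f - state.MyHealth;
            default:
                // Patrol: small constant so the bot does something when nothing else scores.
                return 0.1f;
        }
    }

    // Pick the action whose score is highest.
    public static BotAction Choose(GameState state, IEnumerable<BotAction> actions)
    {
        return actions.OrderByDescending(a => Score(state, a)).First();
    }
}
```

The nice part is that switching from deathmatch to CTF mostly means adding or re-weighting scoring terms (e.g., a "carry the flag home" term), not rewriting the decision loop.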
If you are so inclined, you can actually take a database of games played before (between humans or AIs, doesn't matter much) and try to estimate parameters in your function so that it tries to predict the outcome of the match. But I would probably prefer to tweak the behavior by hand, so you can keep control.
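If you do go that route, one very simple version is to treat the scoring function as a weighted sum of features and fit the weights with plain gradient descent on a logistic model, using "did we eventually win this game" as the label. A hedged sketch, with all names hypothetical and the feature set left up to you:

```csharp
// Rough sketch: fit weights from logged games so the weighted sum
// predicts the eventual outcome. features[i] describes a recorded
// situation, won[i] is 1 if that game was won, 0 otherwise.
using System;

public static class WeightFitting
{
    public static double[] Fit(double[][] features, int[] won,
                               int epochs = 1000, double learningRate = 0.01)
    {
        int dim = features[0].Length;
        var w = new double[dim];

        for (int epoch = 0; epoch < epochs; epoch++)
        {
            for (int i = 0; i < features.Length; i++)
            {
                // Predicted win probability under the current weights.
                double z = 0;
                for (int j = 0; j < dim; j++) z += w[j] * features[i][j];
                double p = 1.0 / (1.0 + Math.Exp(-z));

                // Nudge each weight to reduce the prediction error.
                double error = won[i] - p;
                for (int j = 0; j < dim; j++)
                    w[j] += learningRate * error * features[i][j];
            }
        }
        return w;
    }
}
```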
There is another idea that I really like, although it is kind of experimental and I don't know of any video game that currently uses it: Come up with a mapping from game situation to action to be picked which can be evaluated really fast, even if it's not very good (purely random is OK for some games), and then use Monte Carlo Tree Search so your bot can decide what to do. This is the way the strongest computer Go programs work, and, unlike the algorithms used in computer chess, it is such a general setup that it can be applied to a large class of games.
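For reference, here is a compact UCT-style sketch of that idea. The IGame interface is hypothetical (your own game state would implement it), the purely random playout stands in for the "fast but not very good" policy, and the reward is treated as being from the searching bot's point of view throughout; a proper two-player version would flip the reward at alternating levels of the tree.

```csharp
// Compact Monte Carlo Tree Search (UCT) sketch over a hypothetical IGame interface.
using System;
using System.Collections.Generic;
using System.Linq;

public interface IGame<TAction>
{
    bool IsTerminal { get; }
    double Reward { get; }                // only meaningful when terminal; from the bot's view
    List<TAction> LegalActions();
    IGame<TAction> Apply(TAction action); // returns a new state, no mutation
}

public class Node<TAction>
{
    public IGame<TAction> State;
    public TAction Action;                // action that led here
    public Node<TAction> Parent;
    public List<Node<TAction>> Children = new List<Node<TAction>>();
    public List<TAction> Untried;
    public double TotalReward;
    public int Visits;

    public Node(IGame<TAction> state, TAction action, Node<TAction> parent)
    {
        State = state; Action = action; Parent = parent;
        Untried = state.LegalActions();
    }
}

public static class Mcts
{
    static readonly Random Rng = new Random();

    public static TAction Search<TAction>(IGame<TAction> root, int iterations)
    {
        var rootNode = new Node<TAction>(root, default(TAction), null);

        for (int i = 0; i < iterations; i++)
        {
            // 1. Selection: descend with UCB1 until we reach an expandable node.
            var node = rootNode;
            while (node.Untried.Count == 0 && node.Children.Count > 0)
                node = node.Children.OrderByDescending(c =>
                    c.TotalReward / c.Visits +
                    Math.Sqrt(2 * Math.Log(node.Visits) / c.Visits)).First();

            // 2. Expansion: add one untried child.
            if (node.Untried.Count > 0)
            {
                int idx = Rng.Next(node.Untried.Count);
                var action = node.Untried[idx];
                node.Untried.RemoveAt(idx);
                var child = new Node<TAction>(node.State.Apply(action), action, node);
                node.Children.Add(child);
                node = child;
            }

            // 3. Simulation: the cheap policy -- play random moves to the end.
            var state = node.State;
            while (!state.IsTerminal)
            {
                var moves = state.LegalActions();
                state = state.Apply(moves[Rng.Next(moves.Count)]);
            }

            // 4. Backpropagation: record the playout result up the tree.
            for (var n = node; n != null; n = n.Parent)
            {
                n.Visits++;
                n.TotalReward += state.Reward;
            }
        }

        // Return the most-visited move at the root.
        return rootNode.Children.OrderByDescending(c => c.Visits).First().Action;
    }
}
```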
I hope that gives you some ideas. If you need more help, give us some details about your game.
For further reading on logical decision-making, utility-based systems, realistically varied behavior, etc... see link below. That's kinda the entirety of my book...
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI