
Intelligent game-playing bot

Started by January 26, 2006 06:30 AM
6 comments, last by Timkin 18 years, 10 months ago
How feasible do you guys think it would be to create an AI bot that could not only learn from what people say but also relate what they are saying to events happening around them? I mean a game bot that can control itself at a high level (attack/defend/plant bomb). For example, let's think about Counter-Strike: Source:

*Player says: ok, I'm going to plant the bomb at A
*PLAYER PLANTS BOMB
*Player says: get out of there, it's gonna blow!
*BOMB BLOWS UP

I know there are some quite good NLP algorithms out there, but what I have never seen is text being related to objects or events (unless it's totally scripted). What do you think?
-www.freewebs.com/tm1rbrt -> check out my gameboy emulator ( worklog updated regularly )
Well neural networks are used for making tactical decisions based on a set of inputs. Is this what you mean?

As for Counter-Strike bots being able to interpret chat messages, that would need a whole load of computing power that only a small fraction of the machines running the game would have.

Dave
Those radio messages ("plant bomb" and "bomb's gonna blow") in Counter-Strike all have an ID number; each is an in-game sound, not a chat message typed by the user that needs interpreting. So what was said could, in a neural network, be a single input with an integer stating the message ID. Right?
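Just to make that concrete, here is a minimal sketch (in Python, with made-up message IDs, feature names and counts; the real game exposes these differently) of treating the radio message as a one-hot input next to other game-state features:

# Minimal sketch: encode a radio message ID as a one-hot vector so a
# tactical decision network can treat "what was said" as just another
# numeric input. All IDs and features here are invented for illustration.

NUM_RADIO_MESSAGES = 10  # hypothetical number of distinct radio messages

def encode_inputs(message_id, bomb_planted, enemies_visible):
    """Build a flat input vector for a tactical decision network."""
    # One-hot encode the message so the network doesn't infer a false
    # ordering between unrelated IDs (e.g. message 3 vs. message 7).
    message_one_hot = [0.0] * NUM_RADIO_MESSAGES
    if 0 <= message_id < NUM_RADIO_MESSAGES:
        message_one_hot[message_id] = 1.0

    # Append other (illustrative) game-state features.
    return message_one_hot + [
        1.0 if bomb_planted else 0.0,
        min(enemies_visible, 5) / 5.0,  # crude normalisation
    ]

# Example: message 2 ("get out of there, it's gonna blow"),
# bomb already planted, one enemy visible.
inputs = encode_inputs(message_id=2, bomb_planted=True, enemies_visible=1)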
Maybe, but how could you tie that in with the NLP stuff?
-www.freewebs.com/tm1rbrt -> check out my gameboy emulator ( worklog updated regularly )
Textual "understanding" is something that we have not the capability to do ... yet. No amount of code on the current systems will allow a computer or AI bot to "understand". You can force behavior but we do not currently have the hardware capable of making rationalized decisions. Your brain runs at a rate comparable to 168,0000 MHz and thus when we have hardware capable of that we will have AI capable of understanding at a human level.
---John Josef, Technical Director, Glass Hat Software Inc
I can think of one way of implementing something like this. We can simplify the problem a little bit by defining precisely what information we intend to extract from a text message. For instance, we can try to detect only a well-specified set of options:
M1) The player that spoke is about to plant the bomb.
M2) The bomb will blow up in less than 5 seconds.
...
M10) Not a relevant message.

Let's say we have 10 possible messages to understand in total. We can get some training data by checking whether a given message corresponds with the thing actually happening. So our training data could look something like:

"ok, im going to plant bomb at A" => M1
"get out of there. its gonna blow!" => M2
"I'm planting the bomb now" => M10 (Maybe he got killed before planting the bomb, so we won't learn from this instance that the bomb is going to be planted)
...

We can then try to classify a message using a Bayesian filter, similar to what is used in spam filters, to determine which category the message most likely belongs to.
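As a rough illustration (a naive Bayes sketch over word counts with add-one smoothing; the categories and training phrases are just the hypothetical examples above, and the tokenizer is deliberately crude):

import math
from collections import Counter, defaultdict

# Hypothetical labelled messages, as in the examples above; in practice
# these would be collected from real games by pairing chat with events.
training_data = [
    ("ok, im going to plant bomb at A", "M1"),
    ("get out of there. its gonna blow!", "M2"),
    ("I'm planting the bomb now", "M10"),
]

def tokenize(text):
    # Deliberately crude: lowercase and strip basic punctuation.
    return text.lower().replace(",", " ").replace(".", " ").replace("!", " ").split()

# Per-category word counts and category frequencies.
word_counts = defaultdict(Counter)
category_counts = Counter()
vocabulary = set()
for text, category in training_data:
    category_counts[category] += 1
    for word in tokenize(text):
        word_counts[category][word] += 1
        vocabulary.add(word)

def classify(text):
    """Return the most likely category (log-space, add-one smoothing)."""
    total = sum(category_counts.values())
    best_category, best_score = None, float("-inf")
    for category in category_counts:
        # log P(category) + sum of log P(word | category)
        score = math.log(category_counts[category] / total)
        denom = sum(word_counts[category].values()) + len(vocabulary)
        for word in tokenize(text):
            score += math.log((word_counts[category][word] + 1) / denom)
        if score > best_score:
            best_category, best_score = category, score
    return best_category

print(classify("im gonna plant the bomb at A"))  # prints M1 with this tiny data set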

Has anything of this sort been tried before?
Good point, alvaro. I don't know if it has been tried, but it works well in my mind at least. :P Maybe I'll try to do something about it once I get the back-propagation training part of my neural net class working.
Quote: Original post by wyled
Textual "understanding" is something that we have not the capability to do ... yet. No amount of code on the current systems will allow a computer or AI bot to "understand". You can force behavior but we do not currently have the hardware capable of making rationalized decisions.


These statements are simply not true. There are several examples of AI that exhibit understanding of a given domain. They can answer questions regarding the domain and explain facts and relationships, as well as make predictions about possible scenarios. By most accounts of what it means for a human to understand something, these examples of AI understand too. See, for example, NAG (Korb, Zuckermann, et al).

What these AI fail at though is the symbol grounding problem, although philosophers are still not in agreement as to whether this problem is real or imagined.

As for computers making rational decisions... that's fairly trivial to achieve (see, for example, the Principle of Maximum Expected Utility as a model of computational rationality). What is more difficult is replicating human reasoning, since humans are not, fundamentally, rational beings.
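For what it's worth, a toy illustration of picking the action with maximum expected utility (the actions, probabilities and utilities below are invented purely for the example):

# Toy Maximum Expected Utility: choose the action whose utility, averaged
# over outcome probabilities, is highest. All numbers are invented.
actions = {
    "defuse_bomb": [  # (probability of outcome, utility of outcome)
        (0.6, +100),  # successful defuse
        (0.4, -100),  # killed while defusing
    ],
    "retreat": [
        (0.9, +10),   # survive the round
        (0.1, -50),   # caught while retreating
    ],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best_action = max(actions, key=lambda a: expected_utility(actions[a]))
print(best_action)  # defuse_bomb: EU = 0.6*100 + 0.4*(-100) = 20 vs. retreat's 4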

As for making a statement about the 'equivalent CPU speed of the brain', that is rather nonsensical. The brain is a massively parallel architecture of coupled oscillators. Some of these oscillators operate on millisecond scales, some on the scales of seconds. Some of the computations involve the variation of phase coupling between these oscillators. It is nonsensical to attribute a processor speed to the brain, or even to postulate what processor speed would be needed in a serial computation scheme to provide equivalent computation, since there are many functions possible in a parallel architecture that are simply not reproducible in serial architectures.

Cheers,

Timkin

This topic is closed to new replies.
