if(input[a].out())node+=weight[a] ;
I am doing it exactly like this, but I think you've convinced me with your portability argument. It will be a lot easier to add N more layers when everything is handled in a standard manner.
thanks.
Genetic Algorithm
Everyone hates #1.That's why a lot of idiots complain about WoW, the current president, and why they all loved google so much when it was new.Forget the fact that WoW is the greatest game ever created, our president rocks and the brainless buffons of America care more about how articulate you are than your decision making skills, and that google supports adware, spyware, and communism.
In the neural net you should use floats (even if the input could be bit-based). If you used bits for the input layer, you'd have to build one algorithm (ok, not a very different one) for the input -> first layer and another for the rest. Using floats everywhere, you only need one algorithm. And perhaps some AI library could help if you don't want to do it yourself.
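To illustrate the point, here's a minimal sketch of a single float-based layer step (the function and parameter names are just for illustration): because every layer consumes and produces floats, the same function serves input->hidden and hidden->output alike.

```cpp
#include <vector>
#include <cstddef>

// One generic layer step, usable for any pair of adjacent layers.
// weights[j] holds the incoming weights of output neuron j (one per input).
std::vector<float> layerStep(const std::vector<float>& in,
                             const std::vector<std::vector<float>>& weights) {
    std::vector<float> out(weights.size(), 0.0f);
    for (std::size_t j = 0; j < weights.size(); ++j)
        for (std::size_t i = 0; i < in.size(); ++i)
            out[j] += in[i] * weights[j][i];
    return out;
}
```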
The real problem is the heavy mass of weights you're going to have if you add the last N states to your input layer, so I'd only add the last N actions the creature has done (it's not forward planning ;)). That's a simplification of the last state: in the 5x5 field you have now, most of the fields are the same as they were in the last state.
5x5x4x<count of hidden layer neurons> weights would be added for one past state; the last action alone would only add 4x<count of hidden layer neurons>. That's simplification :) Remember you have to save all the weight values for each creature (surely hundreds or thousands of them). That makes a really big mass of data, and every input neuron you can avoid is good for you. (Of course only the ones you can remove; the 5x5 fields of the current state, for example, should stay in :))
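As a quick sanity check on those counts, a pair of hypothetical helpers (the names are mine; H stands for the hidden-neuron count) shows the difference:

```cpp
// Weight cost of extending the input layer, per H hidden neurons:
// a full 5x5 past state with 4 values per field, vs. just the last action.
int weightsForFullState(int H)  { return 5 * 5 * 4 * H; }  // 100 inputs
int weightsForLastAction(int H) { return 4 * H; }          // 4 inputs
```

With, say, H = 10 hidden neurons, one extra past state costs 1000 weights while one past action costs only 40.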
And for backpropagation:
As said, you need input/output pairs as training samples, and it would be a very difficult task to get those values from your simulation.
Another thing about backpropagation: you need a great many training samples to get an acceptable result, and with a neural net as big as this, the amount of data needed would be far too large.
Not to mention that you'd need specific activation functions for the neurons that are expensive to calculate.
By using GAs to train your neural net, you avoid the task of gathering many training samples. And you can use a simple activation function like "output = if sum(input)>10 then 1 else 0", since you never need its derivative or anything like that.
Oh, and I just remembered *g*: the number of weights of one neuron is the number of inputs + 1! The extra one is the threshold value (10 in my example function), and these values should be in the genetic information too.
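A minimal sketch of such a neuron, assuming the step activation above and storing the threshold as the extra "+1" weight so the whole neuron (threshold included) can sit in the genome; the names are illustrative:

```cpp
#include <vector>
#include <cstddef>

// One neuron with the simple step activation from the post:
// output = 1 if sum(inputs * weights) > threshold, else 0.
// weights has inputs.size() + 1 entries; the last one is the threshold,
// so the entire neuron is just a flat slice of the genetic information.
float neuronOut(const std::vector<float>& inputs,
                const std::vector<float>& weights) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];
    float threshold = weights.back();   // the "+1" weight
    return sum > threshold ? 1.0f : 0.0f;
}
```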
I hope I didn't confuse you too much ^^
Quote: Original post by haemonculus
The real problem is the heavy mass of weights you're going to have if you add the last N states to your input layer, so I'd only add the last N actions the creature has done (it's not forward planning ;)).
I don't think any of this is necessary. This is a fairly simple simulation. A creature which finds itself situated between some food on one side and a couple of creatures on the other should end up making the same decision, no matter where it was going beforehand. You can evolve very intelligent, interesting, complex behaviour without creating a memory of previous states.
If you really think it's necessary, you should in any case only keep memory for the creature-detecting neurons. After all, the only purpose of such memory is to make predictions about future actions, and the only thing changing in the environment is the positions of the creatures.
Still, this is all an over-complication.
Quote: Original post by haemonculus
Oh, and I just remembered *g*: the number of weights of one neuron is the number of inputs + 1! The extra one is the threshold value (10 in my example function), and these values should be in the genetic information too.
I hope I didn't confuse you too much ^^
On the assumption that you're talking about bias neurons, again this should be simplified. All you need is a single always-on neuron to be one of the inputs, to ensure that there is always some activation in the network.
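A minimal sketch of that idea, with illustrative names: append a constant 1.0 input and let its ordinary, evolvable weight act as the bias, so no special threshold handling is needed anywhere else in the net.

```cpp
#include <vector>
#include <cstddef>

// Bias without special cases: the last weight pairs with an always-on
// input of 1.0, so the weighted sum already contains the bias term.
float weightedSumWithBias(std::vector<float> inputs,
                          const std::vector<float>& weights) {
    inputs.push_back(1.0f);                 // the single always-on neuron
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i)
        sum += inputs[i] * weights[i];      // weights.back() acts as the bias
    return sum;
}
```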
There is some beneficial behavior to be gained from knowing past food and wall states too. Knowing past wall states will let them avoid heading in a direction that is dense with walls, and knowing past food states will let them go back for food they haven't eaten yet that is now out of their vision space.
I personally don't think the last actions alone are enough. I think we need the whole past vision spaces to get real memory-induced behavior.
You could create memory outputs. Just make a set of x outputs that are fed back in each time the net is run. This way memory won't be directly tied to the last move, and your number of inputs will remain low. So if your AI doesn't want to go back into a space that has walls, it could learn to use one of its memory nodes to remember whether there are walls in a direction, and that memory node could then reduce the chance of going in that direction. The beauty of this method is that you don't have to explicitly define what the inputs represent (the AI can evolve them for different things), and just a few slots of memory can allow for very complex behavior (as opposed to hundreds of input nodes).
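A hedged sketch of how that feedback loop could look, assuming an illustrative `tick` function and an arbitrary `Net` callable (none of these names come from the thread): the net's output is split into action values and M memory values, and the memory values are appended to the next tick's inputs.

```cpp
#include <vector>
#include <functional>

// A "net" here is any function from inputs to outputs; in the real sim
// this would be the evolved neural network.
using Net = std::function<std::vector<float>(const std::vector<float>&)>;

// One tick: append the M memory slots to the sensed inputs, run the net,
// then split its output into nActions action values and M new memory values.
std::vector<float> tick(const Net& net,
                        const std::vector<float>& senses,
                        std::vector<float>& memory,
                        int nActions) {
    std::vector<float> in = senses;
    in.insert(in.end(), memory.begin(), memory.end());  // feed memory back in
    std::vector<float> out = net(in);
    memory.assign(out.begin() + nActions, out.end());   // keep the new memory
    out.resize(nActions);                               // return actions only
    return out;
}
```

Because the memory slots are ordinary inputs and outputs, evolution is free to repurpose them for whatever the creature finds useful.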
edit: I believe this was already mentioned, but I'm pretty sure memory is way overkill for this type of problem.
I would agree that memory units are likely overkill for this problem. If the environment you're working in is very sparse (lots of empty cells), memory might be more useful: if the viewable horizon of your creature (I think you mentioned 5x5) is very likely to be empty much of the time, memory could help the creature know where to look for food or where to avoid enemies. If, however, the environment is denser and the 5x5 horizon will regularly contain food/creatures, memory will not likely help the situation.
Quote: Original post by Alrecenk
You could create memory outputs. Just make a set of x outputs that are fed back in each time the net is run. This way memory won't be directly tied to the last move, and your number of inputs will remain low. So if your AI doesn't want to go back into a space that has walls, it could learn to use one of its memory nodes to remember whether there are walls in a direction, and that memory node could then reduce the chance of going in that direction. The beauty of this method is that you don't have to explicitly define what the inputs represent (the AI can evolve them for different things), and just a few slots of memory can allow for very complex behavior (as opposed to hundreds of input nodes).
edit: I believe this was already mentioned, but I'm pretty sure memory is way overkill for this type of problem.
Thank you for further broadening my understanding of the ways neural nets can be used. I will definitely at least try something like this.