What I think neural nets have to do with games now
What does this have to do with neural nets? And what evidence do you have that this is even possible, and not just some vague speculation?
Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]
God, it's simple, but it's a whole lot of thinking that's not commonplace among programmers.
All it's doing is recording bits of the screen into each cell, and the connections have a simple NN weight, which just multiplies a binary input into a normalized scalar "confidence". If you look down a little in the forum you'll see my initial work; I've got a post there. It's not a bogus demonstration, but I'm a bit of a dope when it comes to commentary; I don't sound intelligent.
So more proof of the validity is in the post a little further down, by rouncer, me.
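A minimal sketch of what I understand that description to mean (the function, names, and numbers here are mine, not from the actual demo): each cell records a patch of screen bits, and a single connection weight scales a binary match into a normalized "confidence".

```python
def cell_confidence(stored_bits, input_bits, weight):
    """Fraction of stored bits matching the input, scaled by a weight in [0, 1]."""
    matches = sum(1 for s, i in zip(stored_bits, input_bits) if s == i)
    return weight * matches / len(stored_bits)

# A cell that recorded the pattern 1,0,1,1 fires with full confidence
# (scaled by its weight) when shown the same pattern again, and with
# lower confidence for a partial match.
print(cell_confidence([1, 0, 1, 1], [1, 0, 1, 1], weight=0.9))  # 0.9
print(cell_confidence([1, 0, 1, 1], [0, 0, 1, 1], weight=0.9))  # 0.675
```

This is only one plausible reading of "a simple NN weight which multiplies a binary number to a confidence"; the real demo may differ.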
Trust me, the problem is (and it's self-defeating, the purpose of doing this):
Explicit whole frames/states only, as if it was just a screen recorder. The cheat mode is using combinations of the cell fires to represent whole states, as opposed to just sticking every whole state into an individual cell (which will give you very few states, unless there were a lot of cells, 4096x4096 = 16 million...). But I'm working on that right now, because it's much better, way more storage; I just don't think you can easily get whole states retrievable in full from a distributed representation like that, or actually capture them all at all... because magic isn't real.
But a neural network, as I see it, is primarily a compressor and a search engine.
It will be an awesome invention if I manage to get playback of 66 ninety-minute films out of a single plane (which may become a volumetric cube) of cells.
It's a new way to do video compression, but there are issues with it, like it would only play on similarity to the last frame, and you can imagine how kind of weird that would be. Heheh.
However, every so often he posts things like this:
God, it's simple, but it's a whole lot of thinking that's not commonplace among programmers.
So not only is he an idiot, he is also an asshole with a superiority complex. I am soooo done with this guy.
Whoa... I'm glad you guys dealt with this cat first.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Look, if you just look a few posts down you can see my tech demos.
OK, I just told you! The states don't come for free; in the naive implementation it's a whole frame per cell, so you don't get many frames.
But the theory goes that you can daisy-chain the distributed representation, and then you get some state loss over a representation without recording it, but you do gain a few states over storing a whole frame per cell.
See, the distributed representation is a lot of parts that make a whole picture together, and reusing them gives you an exponential number of states; all I'm doing is getting the most out of it possible. That's it.
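The "exponential number of states" claim can at least be illustrated with a quick count (the cell and activation numbers here are arbitrary, chosen for the example): storing one whole state per cell gives linear capacity, while treating each state as a combination of k active cells gives combinatorial capacity.

```python
import math

n_cells = 100

# One whole state per cell: capacity grows linearly with cell count.
whole_state_capacity = n_cells

# Distributed representation: a state is a combination of k active cells,
# so capacity grows combinatorially with cell count.
k = 5
distributed_capacity = math.comb(n_cells, k)

print(whole_state_capacity)  # 100
print(distributed_capacity)  # 75287520
```

Whether those 75 million combinations are all *usably* distinct (retrievable without interference) is exactly the open question being argued in this thread.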
And isn't that what video compression is, more frames for less RAM?
The easy way to think about it is that I'm not storing still things twice, but it even works for the animating bits. ;)
There's more, and I think I'm being dismissed without good reason.
I guess the nice ones will think, well, maybe he'll come out with the fantabulous video compression, but it's not really proven by this ignorant little post, and that's why I don't blame you guys.
I'm sorry, I'm just too dumb to know how lame I am... OK, I won't come back.
Like, what do I want, a pat on the bum?
How much actual information theory do you know? Compression is about eliminating redundancy, it isn't unbounded and it doesn't come for free. There is a hard limit on how much information you can encode into a single network without substantial loss of signal. What makes you think you can circumvent that?
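That redundancy point is easy to demonstrate with any off-the-shelf compressor (zlib here, purely as an illustration): highly redundant data shrinks dramatically, while random data does not compress at all, no matter what algorithm you use.

```python
import random
import zlib

random.seed(0)
redundant = b"ABAB" * 1000                                           # 4000 highly redundant bytes
incompressible = bytes(random.randrange(256) for _ in range(4000))   # 4000 random bytes

# Redundancy can be squeezed out; randomness cannot.
print(len(zlib.compress(redundant)))       # a few dozen bytes
print(len(zlib.compress(incompressible)))  # roughly 4000 bytes, no gain
```

The same bound applies to a neural network used as a store: it can exploit redundancy between frames, but it cannot hold more distinct information than its weights can encode.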
What? I can compress any file of size N bytes (N>=2) to N/2 bytes using a O(1) algorithm. It's a super secret patent pending algorithm though, so I can't tell you details. Also I'm still working on the decompression part...
Seriously though, I can see how a neural network could be used to compress video at much higher rates than lossless or conventional lossy video compression. However, if that worked, it would likely work in a similar way to how our brain compresses memories, so the movie wouldn't be a lot of fun to watch.
When you recall a movie that you've watched, or generally for every memory of yours, it is no problem whatsoever to completely fit it into your brain, in full detail. No matter how many films you watch in your life, and no matter how long you live. How does that work?
The brain only memorizes maybe 0.001% of the detail, and when you recall your memory at full detail, your brain fills in the missing information either with some plausible interpolation or with something completely unrelated that it has seen at some point, whichever fits best. You have no way of knowing the difference because it's your brain telling you, so it's a perfect "reconstruction". Even if you can't recall a particular scene right now, you very clearly remember the rest of the film, don't you?
Why do you think witnesses always tell such a lot of bullshit when they're questioned?
Now imagine that kind of thing for a movie that you are to watch on a TV screen...
There is a finite number of different screens you can store.
Re-examine the math of the problem you are proposing to solve:
A smallish game screen (these days) is 1024 x 768 = 786,432 pixels.
Assume for a simple example that it's just black-and-white 0/1 values per pixel (and assume for later that color values will be magnitudes more).
What is 2 to the power of 786,432? (Rough calc: about 3 decimal digits for every 10 bits.)
That is a number with about 235,929 digits... your different 'screen' states.
Finite, yes. Horribly huge, yes.
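As a sanity check on the figure above (plain Python, nothing from the thread): the 3-digits-per-10-bits shortcut gives 235,929 digits, and taking the base-10 logarithm directly gives the exact count.

```python
import math

pixels = 1024 * 768                             # 786432 pixels
rough = pixels // 10 * 3                        # ~3 decimal digits per 10 bits
exact = math.floor(pixels * math.log10(2)) + 1  # exact digit count of 2**pixels

print(pixels)  # 786432
print(rough)   # 235929
print(exact)   # 236740
```

Either way, the number of possible black-and-white screens dwarfs anything a network of cells could enumerate state-by-state.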
You might want to think more along the lines of preprocessing your 'screens' with some radical data reduction like edge detection, and then some kind of 'stroke' encoding scheme, to crush the state inputs to your NN down significantly (by many orders of magnitude).
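A toy version of that edge-detection preprocessing, under the same black/white-pixel assumption as above (the function name and the sample image are mine): mark only the pixels where the value changes from a neighbour, so the input collapses from every pixel to just the boundaries.

```python
def edges(img):
    """Mark pixels whose value differs from the left or upper neighbour."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if (x > 0 and img[y][x] != img[y][x - 1]) or \
               (y > 0 and img[y][x] != img[y - 1][x]):
                out[y][x] = 1
    return out

# A tiny 4x3 "screen": a white rectangle on the right half.
screen = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
for row in edges(screen):
    print(row)
```

Here 12 input pixels collapse to 3 edge pixels; on a real 1024x768 screen, with a stroke encoding on top, the reduction would be far larger.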
Be aware, though, that the bigger an NN is (of the type you seem to be referring to, even reduced in the way I've suggested), the less likely it is to train successfully. Training the weights by backpropagation also requires many cycles through the training data, which is not something you can really do in anything like real time.
You also WON'T be able to recreate the original raw input 'state' (the video screen), as all the (hopefully) extraneous detail would be crushed out of the internal net data; the best you could reconstruct (for the edge/stroke factoring example) would be crude vector lines.