
Tetris Attack AI pseudocode please

Started by Amnesty2 on April 01, 2007 04:40 AM
7 comments, last by space warp
If anyone here has played Tetris Attack (SNES), AKA Panel de Pon, Pokemon Puzzle League, or Puzzle League, can you help me flesh out the AI for the computer player? It's been my favorite game for a long time now and I always wanted to remake it because the AI wasn't that great. I started my remake and it's (almost) playable now in single player, but I really want a 2-player mode. A few years ago I made a console-based AI to solve single-player puzzles in the puzzle mode, but wow was it slow. I need a much better algorithm.

For inspiration, have a gander at the record videos from www.tetrisattack.net, under records -> tetris attack -> verified records. They are pretty amazing. I'd like an AI capable of doing that! Any advice on how to proceed?

[Edited by - Amnesty2 on April 1, 2007 1:07:02 PM]
Never played the game, but it mostly looks like a sorting problem. A suuuuuper ugly naive approach is something like:

for each tile
{
    get a list of all color matches
    for each match
    {
        do a breadth first search to find swaps that put it adjacent
        to the target tile (search breaks on first chain found)

        if a match was found
        {
            break;
        }
    }
}


I guarantee that can have its complexity significantly reduced. =)
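To make that concrete, here is a rough, runnable sketch of the same idea in Python. Everything in it is invented for illustration: it assumes the board is a list of rows of color ids with 0 for an empty cell, that the only legal move is a horizontal swap, and it ignores gravity and clearing entirely; it just breadth-first-searches swap sequences until some three-in-a-row appears.

from collections import deque

def has_match(board):
    """True if any row or column contains three equal, non-empty colors in a row."""
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols):
            color = board[r][c]
            if color == 0:
                continue
            if c + 2 < cols and board[r][c+1] == color and board[r][c+2] == color:
                return True
            if r + 2 < rows and board[r+1][c] == color and board[r+2][c] == color:
                return True
    return False

def horizontal_swaps(board):
    """Every legal cursor position: (r, c) means swap (r, c) with (r, c+1)."""
    return [(r, c) for r in range(len(board)) for c in range(len(board[0]) - 1)]

def apply_swap(state, move):
    r, c = move
    grid = [list(row) for row in state]
    grid[r][c], grid[r][c+1] = grid[r][c+1], grid[r][c]
    return tuple(map(tuple, grid))

def find_match_sequence(board, max_depth=3):
    """Breadth-first search over swap sequences; returns the shortest list of
    swaps that produces a three-in-a-row, or None if none exists within max_depth."""
    start = tuple(map(tuple, board))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if has_match(state):
            return path
        if len(path) >= max_depth:
            continue
        for move in horizontal_swaps(state):
            nxt = apply_swap(state, move)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [move]))
    return None

Called as find_match_sequence(board), it returns the swaps to perform in order (each (r, c) meaning swap the tile at row r, column c with its right-hand neighbour), or None if nothing matches within max_depth.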

-me
Here is my working model so far. Please add any advice; it will probably take me a while to implement, so I want to adjust it before I start, not after.
It's not really pseudocode, it's more like bad English, so sorry about that.

First find an easy chain at the bottom of the screen using the methods below,
but don't start it off; just save it for later.


Start by finding and clearing three blocks around the center

get the row above the clearing blocks
TTTTTT
MCCCMM
XXXXXX

Where T is the top blocks
C is blocks that are clearing
M is the non-clearing blocks on the same row as the C's
X is bottom blocks


Check to see if any of the T blocks can be arranged so they chain
with the M blocks, either by two Ts falling and linking with one M
or two Ms linking with one T.
(The AI should always prefer horizontal chains first, as it's easier to chain.)

so that
TTHHTT
MCCCHM
XXXXXX

T TT
MTHHHM
XXXXXX

The H's line up after the blocks fall

If this method is fast enough, you can also check several rows above T
for the same kind of chain, i.e.

XXXXXX
XXHHXX
XxXXHX
XxxxXX
XCCCXX
XXXXXX
XXXXXX
After the C's clear and all the blocks above fall, the H's will be lined up.
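Checking a setup like the ones above amounts to simulating the clear and the fall and then looking for the new match. A rough Python sketch, again with an invented representation (list of rows of color ids, 0 for empty, row 0 at the top):

def apply_gravity(board):
    """Let blocks fall straight down within each column; 0 is an empty cell."""
    rows, cols = len(board), len(board[0])
    settled = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        column = [board[r][c] for r in range(rows) if board[r][c] != 0]
        for i, color in enumerate(column):
            settled[rows - len(column) + i][c] = color   # pack blocks at the bottom
    return settled

def chain_after_clear(board, clearing_cells):
    """Remove the currently clearing cells, drop everything above them, and
    return the set of cells that then form a new three-in-a-row (the chain)."""
    rows, cols = len(board), len(board[0])
    working = [list(row) for row in board]
    for r, c in clearing_cells:
        working[r][c] = 0
    settled = apply_gravity(working)

    matched = set()
    for r in range(rows):
        for c in range(cols):
            color = settled[r][c]
            if color == 0:
                continue
            if c + 2 < cols and settled[r][c+1] == color and settled[r][c+2] == color:
                matched.update({(r, c), (r, c+1), (r, c+2)})
            if r + 2 < rows and settled[r+1][c] == color and settled[r+2][c] == color:
                matched.update({(r, c), (r+1, c), (r+2, c)})
    return matched

If chain_after_clear(board, clearing_cells) comes back non-empty, the H blocks really do line up and the chain continues.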

If that fails, check the three falling blocks and see if you can perform
a vertical chain, either by two falling into one, or one block falling into two.

repeat.

If the AI fails to see a chain, immediately kick off the chain you saved at the bottom of the screen, and hope it starts before the current chain ends so that the chain is not broken. Continue looking for more chains.

No chains left.
Level bricks off.
Raise Stack.
Goto start.
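Put together, the plan above is roughly this loop. This is only a hypothetical skeleton: AiState, RAISE_STACK and the three callbacks (find_easy_chain, find_extension, flatten_stack) are all placeholders for the routines described in this post, each returning a move or None.

from dataclasses import dataclass

RAISE_STACK = "raise_stack"    # sentinel command for the (hypothetical) game layer

@dataclass
class AiState:
    saved_chain: object = None   # a banked move we can fire in a pinch
    active_chain: bool = False   # is a chain currently resolving on screen?

def ai_update(board, state, find_easy_chain, find_extension, flatten_stack):
    """One decision tick for the plan above."""
    if state.saved_chain is None:
        state.saved_chain = find_easy_chain(board)        # bank a fallback chain

    if state.active_chain:
        ext = find_extension(board)                       # try to keep the chain alive
        if ext is not None:
            return ext
        if state.saved_chain is not None:                 # can't extend: fire the banked chain
            move, state.saved_chain = state.saved_chain, None
            return move

    start = find_easy_chain(board)                        # nothing running: start a new chain
    if start is not None:
        return start

    leveling = flatten_stack(board)                       # no chains left: level the bricks off
    if leveling is not None:
        return leveling
    return RAISE_STACK                                    # then raise the stack and repeat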

TODO: Add skill chain recognition.
TODO: Add time-lagged chains
TODO: Add multi-threaded chains (two+ separate chains going on at the same time and continuing both with said methods). Hopefully the regular AI is so fast that this can be done without spending too much time doing an AI update.

Perhaps a transposition table can be used for common chain themes?
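One way to read that (in the chess sense of "transposition table") is simply to cache search results keyed on the board position, so a position that has already been analysed is never analysed twice. A tiny sketch, with search_fn standing in for whatever search is used:

# Cache whatever the search found for a given board position.
transposition = {}

def board_key(board):
    return tuple(map(tuple, board))   # hashable snapshot; a Zobrist hash would be cheaper

def cached_search(board, depth, search_fn):
    key = (board_key(board), depth)
    if key not in transposition:
        transposition[key] = search_fn(board, depth)
    return transposition[key]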

Question: should the AI look ahead more, and not just one chain ahead?
The AI should consider as many factors of its gameplay as possible. So yes, the AI could greatly benefit from looking a bit ahead. What you're looking for is what's called a brute-force AI. That means it looks at all the available options, and x number of turns after that, and calculates the score for each sequence. It then chooses the option that leads to the best score.
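A minimal sketch of that kind of depth-limited brute force in Python, with the game-specific parts (move generator, move applier, scoring function) left as placeholder callbacks:

def best_move(board, legal_moves, apply_move, score_fn, depth):
    """Plain brute force: try every legal swap, recurse `depth` plies deep,
    and return the first move of the best-scoring sequence."""
    def search(state, d):
        if d == 0:
            return score_fn(state)
        best = score_fn(state)                      # doing nothing is also an option
        for move in legal_moves(state):
            best = max(best, search(apply_move(state, move), d - 1))
        return best

    return max(legal_moves(board),
               key=lambda m: search(apply_move(board, m), depth - 1),
               default=None)

With dozens of legal swaps per turn this blows up quickly, which is exactly the problem discussed below.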
Well, I haven't implemented it yet, but I don't think that would be fast enough. Blocks clear and fall very fast; it has to find a continuation virtually instantly or its chain will break. I did make a brute-force AI for solving single-player puzzle mode. I don't exactly recall, but the permutations were astronomical after only a few block switches. Of course, it wasn't optimized at all: one 4-move permutation would be switching the same two blocks 4 times, effectively doing nothing, or switching two same-color blocks, which is absolutely pointless, and then branching from there.

Not to mention my game board is bigger than the original: 24 rows by 6 columns instead of 12 rows by 6 columns.
You should be able to keep the permutations within an acceptable range by filtering (bad moves are ignored from the beginning) and maybe by only calculating two levels.
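For example, a filtered move generator might look like this sketch (same invented board representation as earlier: rows of color ids, 0 for empty); it skips swaps of two identical cells and swaps that merely undo the previous move:

def filtered_swaps(board, last_move=None):
    """Generate horizontal swaps (r, c) = swap (r, c) with (r, c+1), skipping
    the obviously pointless ones."""
    rows, cols = len(board), len(board[0])
    for r in range(rows):
        for c in range(cols - 1):
            if board[r][c] == board[r][c + 1]:   # no-op: same color or both empty
                continue
            if (r, c) == last_move:              # would just undo the last swap
                continue
            yield (r, c)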

But if you really don't want to use brute force, then I suggest a neural network; they are capable of a lot more than people tend to think. You have to train the network first, but after that they can work magic.

[edit]
Did some calculations (assuming the 24x6 board, which allows 24 x 5 = 120 horizontal swaps per move):
2 levels = 120^2 = 14,400 possibilities
3 levels = 120^3 = 1,728,000 possibilities

[Edited by - space warp on April 3, 2007 10:46:26 AM]
Quote: Original post by space warp
But if you really don't want to use brute force, then I suggest a neural network; they are capable of a lot more than people tend to think. You have to train the network first, but after that they can work magic.


Well... no, they're not magic. It's a common misunderstanding that neural networks can do anything and will just "learn the solution".

To train it you'd need to implement the brute-force solver anyway, because you need to let it know if the move it made was "good" or not. I don't think it'd outperform the brute-force solver anyway. It'd be fun to get it running as an academic exercise, but I'm not sure it's the fastest algorithm for the job.

I think what you're finding is what I'd expect: this isn't really an easily solved AI problem because the branching very quickly gets out of control, i.e. it's something that humans are great at, but computers aren't yet good at.

I'd go with brute force and try filtering as the previous poster suggests to limit your branching.

Something wacky that just occurred to me:
for each Tile
{
    for each OtherTile
    {
        if ( OtherTile is the same color as Tile )
        {
            apply a "force" on Tile in the direction of OtherTile
        }
    }
}
move the tile with the greatest force in that force's direction.


will it work? who knows!
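For the curious, here is one hedged Python interpretation of that force idea, restricted to horizontal swaps since that's all the game allows; the 1/distance weighting and the tie-breaking are arbitrary choices, not part of the original suggestion. The board is the same invented list-of-rows grid of color ids with 0 for empty.

def force_move(board):
    """For every tile, sum a pull toward every same-colored tile weighted by
    1/Manhattan-distance, then swap the tile feeling the strongest horizontal
    pull one cell in that direction. Purely greedy, no lookahead."""
    rows, cols = len(board), len(board[0])
    tiles = [(r, c, board[r][c])
             for r in range(rows) for c in range(cols) if board[r][c] != 0]

    best = None   # (strength, move) with move = swap (r, c) <-> (r, c+1)
    for r, c, color in tiles:
        pull = 0.0
        for r2, c2, color2 in tiles:
            if (r2, c2) == (r, c) or color2 != color:
                continue
            direction = 1 if c2 > c else -1 if c2 < c else 0
            pull += direction / (abs(r2 - r) + abs(c2 - c))
        if pull > 0 and c + 1 < cols:
            strength, move = pull, (r, c)          # pulled right: swap with right neighbor
        elif pull < 0 and c > 0:
            strength, move = -pull, (r, c - 1)     # pulled left: swap with left neighbor
        else:
            continue
        if best is None or strength > best[0]:
            best = (strength, move)
    return best[1] if best else None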

-me
Quote: Original post by Palidine
To train it you'd need to implement the brute-force solver anyway, because you need to let it know if the move it made was "good" or not. I don't think it'd outperform the brute-force solver anyway. It'd be fun to get it running as an academic exercise, but I'm not sure it's the fastest algorithm for the job.


Just saying... you can also teach the neural network by example. Play the game yourself and let the network figure out the moves. Still, I wouldn't use it anyway, since the inputs and the number of layers and nodes are very hard to get right for generally good results. Probably much harder than the brute-force method.
Quote: Original post by teebee
Quote: Original post by Palidine
To train it you'd need to implement the brute-force solver anyway, because you need to let it know if the move it made was "good" or not. I don't think it'd outperform the brute-force solver anyway. It'd be fun to get it running as an academic exercise, but I'm not sure it's the fastest algorithm for the job.


Just saying... you can also teach the neural network by example. Play the game yourself and let the network figure out the moves. Still, I wouldn't use it anyway, since the inputs and the number of layers and nodes are very hard to get right for generally good results. Probably much harder than the brute-force method.


One word: evolution. I've worked with neural networks for two years; evolving NNs work. All you need is a working game that takes instructions and returns scores; no need to bother with the exact values inside the NN.
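As a sketch of what "just needs scores" means in practice: a bare-bones evolutionary loop over a weight vector, where play_game is a placeholder for running a match with an AI driven by those weights and returning its score. Whether the weights feed a neural network or a simpler scoring function doesn't matter to the loop.

import random

def evolve(play_game, n_weights, generations=200, children_per_gen=20, sigma=0.1):
    """Keep a single parent weight vector and replace it with any mutated
    child that scores better; play_game(weights) returns the fitness."""
    parent = [random.gauss(0.0, 1.0) for _ in range(n_weights)]
    parent_fitness = play_game(parent)

    for _ in range(generations):
        for _ in range(children_per_gen):
            child = [w + random.gauss(0.0, sigma) for w in parent]
            fitness = play_game(child)
            if fitness > parent_fitness:          # keep any mutation that scores better
                parent, parent_fitness = child, fitness
    return parent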

The force approach is a nice idea; I've used it quite a bit in other places and I think it could work. But I don't think it will give really good instructions, because normal force calculation assumes that the state does not change between carrying out one instruction and calculating the next. Here it does: bricks fall, bricks disappear. It also has no ability to look ahead, and that might be a problem. But this really could be the best choice if you don't need optimal gameplay.

