I thought I had created a perfect rule-based AI system. To test that theory, I created a function that takes the place of a player and randomly places markers onto the grid. It does this for 1,000 games, which takes about 9 seconds on my 2.24GHz P4, but look at this:
Looking at the results at the bottom of the window, it appears that my rule-based AI isn't perfect at all. There is almost a 2% chance of a player winning by randomly clicking. I have no idea at the mo what sequence of play causes the AI to lose, and that's kind of exciting because I can't find it. So I am going to have to put some kind of logging function together to find out what is causing the AI to lose.
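For anyone curious what that kind of test harness plus logging might look like, here is a minimal sketch in Python. This is not the original code: `rule_based_move` is just a placeholder standing in for the real rule-based AI, and the function and variable names are my own. The idea is simply to play the AI against a random player and keep the move sequence of any game the random player wins, which is the information the logging function would need.

```python
import random

# Winning lines on a 3x3 board (indices 0-8).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rule_based_move(board):
    # Placeholder: take the first free square. Swap in the real rule-based AI here.
    return board.index(' ')

def random_move(board):
    # The "random player": pick any empty square at random.
    return random.choice([i for i, cell in enumerate(board) if cell == ' '])

def play_game():
    board = [' '] * 9
    moves = []                                   # log of (player, square) for post-mortem analysis
    players = [('X', rule_based_move), ('O', random_move)]
    for turn in range(9):
        mark, choose = players[turn % 2]
        square = choose(board)
        board[square] = mark
        moves.append((mark, square))
        if winner(board):
            return winner(board), moves
    return 'draw', moves

def simulate(games=1000):
    tally = {'X': 0, 'O': 0, 'draw': 0}
    losses = []                                  # move sequences where the random player beat the AI
    for _ in range(games):
        result, moves = play_game()
        tally[result] += 1
        if result == 'O':
            losses.append(moves)
    return tally, losses

if __name__ == '__main__':
    tally, losses = simulate(1000)
    print(tally)
    for moves in losses[:5]:                     # print a few losing sequences for debugging
        print(moves)
```

With the real AI plugged into `rule_based_move`, every game the random player wins gets its full move sequence recorded, so the losing lines of play can be replayed by hand instead of hunting for them.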
The top-left button on there runs 1,000 game simulations in nine seconds, but if I stop the code from updating the graphical display it can run 10,000 games in just five seconds. Just goes to show that automated testing is well worthwhile. Every application I have built of late has had automated testing, and it has always been worth the few minutes it has taken to add.