Hello.
I'm taking an AI course as part of my CS degree (well, it's not exactly CS; the literal translation would be "Computing Engineering". Whatever.)
The assigned textbook is an old book from the '90s written by the professors themselves, and it seems very old-fashioned. I wonder whether its contents are typical for an AI course.
It's quite possible that I've translated some technical terms badly, so take the names with a grain of salt.
After some historical context, it has a couple of chapters on search (breadth-first, depth-first, backtracking, and heuristics), then one on the use of logic for knowledge representation, then rule-based systems and reasoning, then associative networks, frames, and scripts (I'm really unsure about this translation; it's about Minsky's theories, if that helps), then expert systems, then learning, and the final chapter is about neural computing.
Is anything important missing? My university's professors have a reputation for writing lame books and re-editing them every year or two just for scholarship purposes, and I've noticed some subjects suddenly becoming easier, more interesting, and more useful after a switch to an internationally well-known standard book on the subject. For example, statistics changed from one of those lame books to "Probability and Statistics for Engineering and the Sciences" by Jay L. Devore, and I suddenly began understanding almost everything, and even having fun with it, despite a horribly bad translation. With the old book, in most problems I didn't even know what I was being asked.
So I'm planning to study AI with this old, boring book just enough to pass the exam, and then really learn the subject on my own with "Artificial Intelligence: A Modern Approach" (second edition, 2002) by Russell and Norvig, which seems better structured, a lot more complete (more content, covered in more depth), and, I'm not sure how much it matters, but above all, more up to date.
Does this make any sense?
[Edited by - ravengangrel on October 7, 2010 4:19:55 PM]
Learning AI - is this a good book?
Without seeing your book it's hard to know exactly what it covers, but my memory of the '90s involves reading a lot about ANNs, fuzzy logic, and GAs/EP. Based on my recollection, there has been a big shift in the 'AI' literature over the past decade.
A lot of advances in AI have not come under the obvious 'AI' heading. Some of the old tried-and-true AI problems have been solved using statistically sound methods and techniques.
There is a lot of searching going on, but most of it is exhaustive. It used to be that search strategies were necessary, but computers these days are cheap, so powerful, and have so much memory and disk space that you can get away with test-every-possibility-I-don't-care-if-it-takes-5-days-because-I've-got-a-laptop-too. :)
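As a toy illustration of that test-every-possibility approach (my own sketch, not from either book — the distance matrix is made up), here is an exhaustive search over every tour of a tiny travelling-salesman instance. With 4 free cities there are only 24 orderings, so brute force is instant; the point is that modern hardware pushes the feasible size of "just try them all" surprisingly far before a real strategy is needed.

```python
from itertools import permutations

# Toy symmetric distance matrix for 5 cities (made-up numbers).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]

def tour_length(tour):
    """Total length of a closed tour that starts and ends at city 0."""
    path = (0,) + tour + (0,)
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

# Exhaustive search: evaluate every ordering of the remaining cities.
best = min(permutations(range(1, 5)), key=tour_length)
print(best, tour_length(best))  # (1, 3, 2, 4) 26
```

Of course this blows up factorially, which is exactly why the older books spent so many chapters on pruning and heuristics.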
Most of the logic used in modern AI isn't 'human-derived' logic. The logic comes from doing a computational analysis of a problem. There is a lot of Bayes/probability used, and a lot of stats (PCA, SVMs, etc.). Even expert systems have been affected by this (ID3).
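To make the Bayes/probability point concrete, here is the kind of calculation that underlies a naive Bayes spam filter: inverting P(word | spam) into P(spam | word) with Bayes' rule. All the numbers are invented for illustration.

```python
# Bayes' rule with made-up numbers: infer P(spam | word) from
# P(word | spam), P(word | ham), and the prior P(spam).
p_spam = 0.2                 # prior: 20% of mail is spam
p_word_given_spam = 0.6      # the word appears in 60% of spam
p_word_given_ham = 0.05      # ...and in 5% of legitimate mail

# Total probability of seeing the word at all.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Posterior: how likely is the message spam, given the word appeared?
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```

A single moderately spam-flavoured word already pushes the posterior from 20% to 75%; a real filter multiplies such evidence across many words.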
Some of the classic problems of the 90s have been beaten. Checkers has been solved, and a desktop PC is able to play grandmaster level Chess. No ANNs or GAs used.
Computer vision has taken a huge leap forward in pretty much every direction. No ANNs or GAs. Using modern methods you can build a visual object detector for just about anything in less than a day of computing time, and have it work reliably.
I can't really speak for video game AI, but Google 'Infinite Mario' and you can see just how flexible A* can be when given ample cycles. Again, no ANNs or GA.
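For reference, A* itself fits in a screenful of code. Here is a minimal sketch (mine, not from the thread) on a 2-D grid with walls, using the standard Manhattan-distance heuristic; the maze is a made-up example.

```python
import heapq

def astar(grid, start, goal):
    """A* shortest path on a 2-D grid where '#' cells are walls.
    Returns the number of steps, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_heap = [(h(start), 0, start)]      # entries: (f, g, cell)
    best_g = {start: 0}                     # cheapest known cost per cell
    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            return g
        if g > best_g.get(cell, float("inf")):
            continue                        # stale heap entry, skip it
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#":
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

maze = ["....#",
        ".##.#",
        ".#..#",
        ".#.#.",
        "....."]
print(astar(maze, (0, 0), (4, 4)))  # 8
```

The Mario demos layer game-specific state (velocity, jump timing) onto the same skeleton, but the core loop — pop the lowest f, expand neighbours, push improvements — is unchanged.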
Data mining has taken off so epically that we didn't even see it fly!
I would guess that a book written 10-20 years ago is probably way out of date. :)
Regular AI, maybe.
Game AI, likely not.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"