AI Founder Blasts Modern Research
Minsky is just doing what Minsky has always done... stir the pot to see what floats to the surface... and to let everyone know that he's still alive.
Remember the adage... 'Old scientists never die, they just publish fewer papers'
... this should have an addendum... 'and spend more time talking to the media'.
Personally, I disagree with MM. Sure, the expert systems of the 70s and 80s have died their rightful death... and for very good reasons. Efforts in the past decade, though, have replaced these earlier systems (for example, Bayesian networks are generally accepted as being superior to Certainty Factors and Dempster-Shafer theory), and effort is now being spent on sifting through the massive amounts of data that an agent perceives every second in order to make sense of its world. Yes, even those working on accelerating database retrieval are helping the AI community to advance itself.
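For anyone wondering what that buys you in practice, here is a minimal sketch (in Python, with made-up probabilities and names, not taken from any real system) of the kind of belief update a two-node Bayesian network performs:

# Toy two-node Bayesian network: Disease -> Symptom.
# Priors and conditionals are invented numbers for illustration only.
p_disease = 0.01                      # P(Disease)
p_symptom_given_disease = 0.9         # P(Symptom | Disease)
p_symptom_given_healthy = 0.05        # P(Symptom | not Disease)

# P(Symptom) by total probability.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' rule: belief in Disease after observing the Symptom.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(f"P(Disease | Symptom) = {p_disease_given_symptom:.3f}")  # ~0.154

Unlike ad hoc certainty factors, the updated belief is guaranteed to be a coherent probability, which is much of why these networks displaced the older schemes.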
I think that the focus of AI has changed from the 'Grand Idea' that every dreamer has (of, in this case, an artificial agent with intelligence comparable to human intelligence) to working on the nitty-gritty problems that every scientist must overcome before they have a successful theory (and a working model).
Just because researchers have learned to stop saying 'We'll have an intelligent agent in 10 years' time' doesn't mean they've stopped working toward that goal.
Cheers,
Timkin
[edited by - Timkin on May 14, 2003 3:09:50 AM]
I'm looking forward to seeing that grand, nearly-fantasy-based idea of a fully human-like AI that learns, communicates and, most importantly, makes mistakes and has failings.
Expert systems are really cool; they're a very, very solid form of optimization. But I wonder when we'll actually start working on Artificial Stupidity, that is, a system that doesn't perform perfectly. Such as a program that chats with the user and chooses replies based on what it feels like and what sounds right to it, not what would fit the situation best, i.e., like we humans do.
Still, discovering that kind of AI, or rather AS, that works flawlessly (how contradictory) is like trying to win the lottery: a rather nice thought, but not something that's very likely to happen. Not impossible either, but heh, better not stake all your hopes on it.
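One very rough way to picture that 'what sounds right' idea is to sample replies in proportion to a score instead of always taking the top one. The replies, scores and function names below are entirely invented for illustration:

import random

# Hypothetical candidate replies with made-up 'how right it sounds' scores.
candidates = {
    "That's a great point.": 0.6,
    "Hmm, I'm not sure about that.": 0.3,
    "Tell me more?": 0.1,
}

def best_reply(scored):
    """The 'expert system' answer: always the highest-scoring reply."""
    return max(scored, key=scored.get)

def human_ish_reply(scored):
    """The 'artificial stupidity' answer: sample in proportion to score,
    so weaker replies still come out sometimes, like a person winging it."""
    replies = list(scored)
    weights = [scored[r] for r in replies]
    return random.choices(replies, weights=weights, k=1)[0]

print(best_reply(candidates))
print(human_ish_reply(candidates))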
quote: Original post by RuneLancer
I'm looking forward to seeing that grand, nearly-fantasy-based idea of a fully human-like AI that learns, communicates and, most importantly, makes mistakes and has failings.
Expert systems are really cool; they're a very, very solid form of optimization. But I wonder when we'll actually start working on Artificial Stupidity, that is, a system that doesn't perform perfectly. Such as a program that chats with the user and chooses replies based on what it feels like and what sounds right to it, not what would fit the situation best, i.e., like we humans do.
Still, discovering that kind of AI, or rather AS, that works flawlessly (how contradictory) is like trying to win the lottery: a rather nice thought, but not something that's very likely to happen. Not impossible either, but heh, better not stake all your hopes on it.
Let's wait and see what quantum computers can do about it.
I like the Walrus best.
May 14, 2003 08:25 AM
I somewhat agree with what he is saying... I come from his era, and if I knew then what I know now, I would never have gotten my PhD in AI.
AI has moved terribly slowly compared to our expectations in the 1970s... I think this is why he feels it has been brain-dead.
While advances have definitely been made, we are not much closer to a truly autonomous, thinking agent.
Advances in computer vision and sensors are exciting, though. I think mimicking the senses will play a key role in the creation of such an agent.
May 14, 2003 04:22 PM
quote: Cyc knows that trees are usually outdoors, that once people die they stop buying things, and that glasses of liquid should be carried right-side up,
They're doing it wrong. Rules in our head aren't like that; they're simulations of what we know works. Yes, I know we have to have lots of rules to input to get to the point of that understanding, but it's like they're making a 3D model out of boxes and other shapes when we all know it's easier to use one box and add vertices. Actually, that analogy is better when you think of creating a real-life sculpture. The point is that they're trying to force these things to connect when really they only have significance relative to themselves. No, seriously, try to find words that don't eventually come back to themselves. Our consciousness is a closed loop. Sure, we can add new things, but it's only another link, not another list.
Things happen not because we calculate them (though we do; I'm in metaphysics now, shut up); they happen because we're exerting our will on the universe. For that dream AI, there should be high- and low-level systems at work, with the high deciding what to do and the low figuring out how to do it (this is a good analogy for organizations of many types).
More importantly, they can't essentially hard-code everything they need (typing in rules). What should be done is to put in systems. Unless, of course, they can come up with a faster way to infer things, like 1000 times faster.
I think Marvin Minsky's problem is that everyone is making brains, not minds.
quote: Cyc can use its vast knowledge base to match natural language queries. A request for "pictures of strong, adventurous people" can connect with a relevant image such as a man climbing a cliff
That's pretty damn impressive, but it's simplistic if you think of it symbolically and realize how it got its answer.
For a computer to truly act human, I feel, they have to find some way by which the machine can actually understand some basic things in terms of which other, more complex things can be expressed. Because, like the above poster said, every word we know can be expressed in terms of other words we know, so our understanding is a kind of loop. For example, we all understand (hopefully) what 'good' or 'bad' means, but how do you explain it? One knows when one has a stomachache, a headache, or when one feels sleepy; how can we represent such feelings in a machine?
Also, if I remember correctly, in Searle's Chinese Room argument he places himself in a room surrounded by cards printed with Chinese symbols. Through a hole in the room he is given a story and some questions, all in Chinese. By matching the symbols following some complex mechanical set of instructions, he is able to answer back in Chinese. However, Searle admits he knows nothing of Chinese. This is how machines answer questions from stories and how chatbots work: mechanically. Searle's argument seriously undermines the validity of the Turing Test.
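To make the 'mechanical' point concrete, here is a toy Chinese-Room-style responder: pure pattern lookup, with no understanding anywhere in it. The questions, answers and rulebook are invented for the example.

# Toy 'Chinese Room': the program matches surface patterns and emits canned
# answers. It manipulates symbols it does not understand in any sense.
rulebook = {
    "where did the man go": "He went to the restaurant.",
    "what did he order": "He ordered a hamburger.",
}

def answer(question):
    key = question.lower().strip("?!. ")
    # Purely mechanical lookup; an unknown question just gets a stock reply.
    return rulebook.get(key, "I do not know.")

print(answer("Where did the man go?"))   # -> "He went to the restaurant."
print(answer("Did he enjoy the meal?"))  # -> "I do not know."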
If the machine can be made to understand some basic things and/or given feelings, human-like behaviour may be achieved. But if machines continue to work on the same principles they do now, then the idea of a machine behaving like a human seems vague to me...
Any comments?
[edited by - pacman2003 on May 15, 2003 5:26:53 AM]
Realistically, a "perfect" AI would have to be extremely, outrageously, unthinkably simple and evolved over a few years (maybe months... depends just how good PCs'll be then) by either dedicated scientists/programmers/social workers (social workers? Heck, this is getting kinda strange) OR, even better, another "perfect" AI.
First off, think about it: the brain doesn't pop out of nowhere. It grows and develops during the gestation period. With only 23 pairs of chromosomes and a sperm cell barely the size of a few neurons, the body's algorithm for developing the brain would have to be rather simple and quite possibly recursive or iterative (like, say, fractals, which are generated using really simple algorithms but which are pretty much infinitely complex. Just drawing a parallel here.) If anything, all the body would need is the basic system to be set up. Mind you, I'm no biologist or neurologist. Any incorrect assumption here is probable and unintentional.
Once the basic brain is set up and capable of learning, it would have to be trained by both people (such as parents) and the environment (for instance, a baby flailing its arms and realising that whacking the squeaky thing makes funny noises, or, through trial and error, learning to coordinate enough to walk). My theory, which may or may not be arbitrarily correct, is that the brain features a form of reinforcement learning and something more or less like our current neural network models (hey, there ARE neurons making a network in there after all, so that last part can't be completely wrong).
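A cartoon of that 'reinforcement learning plus a neural network' idea might look like the sketch below: a single artificial neuron whose weights get nudged by a reward signal from trial and error. The task, the numbers and the update rule are all illustrative assumptions, not neuroscience.

import random

# One artificial neuron with two inputs; weights start small and random.
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
learning_rate = 0.1

def act(inputs):
    # Fire (1) if the weighted sum of inputs crosses zero, else stay quiet (0).
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def learn(inputs, action, reward):
    # Reward-modulated update: nudge the weights that contributed to the
    # action toward (reward > 0) or away from (reward < 0) repeating it.
    for i, x in enumerate(inputs):
        weights[i] += learning_rate * reward * x * (1 if action else -1)

# Trial and error: the 'environment' rewards reacting whenever the squeaky
# thing is present (either input active) and punishes everything else.
for _ in range(500):
    inputs = [random.choice([0, 1]), random.choice([0, 1])]
    action = act(inputs)
    target = 1 if any(inputs) else 0
    reward = 1 if action == target else -1
    learn(inputs, action, reward)

print(weights)  # both weights end up positive: the neuron learned the habit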
So why would a computer go through this instead of having a pre-built brain fed 10,000 statements? Because for things like these, mimicking nature has often given very good results. Just look at GAs, NNs, etc. Not perfect, but they're rather robust algorithms nevertheless. Could one realistically cut corners? Probably; it'd be kinda dumb to expect AI to be impossible short of a few years of growing it and training it. But is the article going about it the right way? No.
That's like brute-forcing the solution to a really complex equation when you can just simplify its terms and solve it in about a minute or two. The issue here is finding out how to simplify the brain instead of force-feeding a rough and probably very inaccurate model some info...
(Eh, it's early and I haven't had coffee yet. I hope I didn't make a fool out of myself ;P)
There is another option: create a physical simulation detailed enough to encapsulate the human brain at a chemical/electrical level, scan in somebody's brain and its state, and make it go... I'm sure it wouldn't be that simple, but you get the gist.
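At the very smallest scale, 'simulating at the electrical level' could look something like the toy leaky integrate-and-fire neuron below; a real chemical/electrical brain simulation would be unimaginably more involved, and every constant here is an arbitrary illustration value.

# Toy leaky integrate-and-fire neuron: membrane voltage decays toward rest,
# input current pushes it up, and crossing a threshold produces a spike.
rest, threshold, reset = -70.0, -55.0, -75.0   # millivolts (illustrative)
leak, dt = 0.1, 1.0                             # leak rate, time step (ms)

v = rest
spikes = []
for t in range(100):
    current = 2.0 if 20 <= t < 80 else 0.0      # inject current mid-run
    v += dt * (-leak * (v - rest) + current)    # leak toward rest + input
    if v >= threshold:
        spikes.append(t)
        v = reset                               # fire and reset

print(f"spike times (ms): {spikes}")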
Something similar has already been done with a simple virus. Maybe in 50 years this will be possible with more intelligent lifeforms, like a goldfish.
Will
------------------http://www.nentari.com
Cyc "knows" nothing; it performs symbol manipulation and logical operations on relational data. It is not grounded within the environment, it is not embodied, and it does not receive input from the world or send output to the world. The only intelligence in the system is that projected onto it by observers of the system's behaviour.
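To make 'symbol manipulation on relational data' concrete, here is a toy version of the kind of lookup such a system performs; the facts and relation names are invented, and the point is precisely that nothing in it knows anything.

# Toy relational knowledge base in the spirit of a Cyc-style system.
# The program shuffles symbols; any 'knowing' is in the reader's head.
facts = {
    ("tree", "usually_located"): "outdoors",
    ("glass_of_liquid", "carry_orientation"): "right_side_up",
    ("dead_person", "can_buy_things"): "false",
}

def query(entity, relation):
    # Pure table lookup over uninterpreted symbols.
    return facts.get((entity, relation), "unknown")

print(query("tree", "usually_located"))   # -> "outdoors"
print(query("tree", "smells_like"))       # -> "unknown"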
Of course, certain modes of thinking carried out by people can be abstracted to these kinds of operations, but, to be clichéd, the map is _not_ the territory. Even if you created a machine that could answer every common-knowledge question you could imagine, that would not make it "know" or "understand"; the machine would merely be the mechanisation of a process defined by us as intelligent (the very Minsky-esque definition of artificial intelligence).
To take this argument to another level: even if you had a machine that was situated and embodied in the environment, the understanding of the environment the machine could be said to possess would be dependent on its sensors and the nature of the dynamical process that defined its (for want of a better word) brain. To understand the world like a human you actually need to _be_ a human, with human sensors, human methods of adaptation, human effectors and all the things that define how we structurally couple with the world.
Given this, any form of communication between a human and such a machine, over whatever interface, could only take place over the abstraction of experience that was similar enough between the equivalent sensors, entity-world interactions and internal adaptive processes. Outside of that shared ground between you and the machine, no communication could occur, because of the inherent differences in experience.
In essence, my argument is that the goal of human-computer communication (and the idea that this would make the computer intelligent) is both a fallacy and an impossibility. To communicate in any meaningful manner, the computer would have to be so much like us in form and function that most people would be unable to tell the difference between the person and the computer.
This is an example of the difficulty in creating true AI. We believe we can abstract our behaviour from the world and from ourselves and still be left with something meaningful that remains in context. Intelligence is meaningless without an environment, and the coupling between agent and environment defines that intelligence.
Rant over
Mike