
My thoughts on the subject

Started by December 10, 2005 03:30 PM
9 comments, last by Popolon 18 years, 11 months ago
I don't usually participate in these grand discussions of the future of AI, etc., but I have been thinking for some time (extra hard this weekend, since my research paper is due in a week). Here are some of the ideas I've come across and believe in. Feel free to comment and hack at them:

A) Something seems to be intelligent so long as we do not understand how it works.

B) Humans are the only example of actual intelligence (we aren't considering a dog's brain, etc.).

C) If we look at a human, we can formalize it as a black box, with inputs coming in and outputs going out (nerves bringing in the five senses, controlling muscles, etc.). Thus, our goal is to study this black box.

D) The human brain is too massively complex to try and wholly understand. It would be more practical to find general patterns, try to construct such a system, and compare the end results.

E) Our intelligent black box system cannot simply be constructed and then activated to become intelligent. It will be created by a set of rules and continue to grow by these rules, forming massive structures that cannot be reverse engineered, much like the human brain (D).

F) The different subsystems (vision, logic, memory) that we try to research cannot be researched independently of each other. If the system is to be successful, it must be constructed and tested together.

G) The system will be a massive network of artificial neurons (an ANN), and although we can give initial pattern values that will evolve these large networks, we can neither reverse engineer these networks nor extract the weights and attempt to understand how the intelligence in the system works.

Tell me what you think.
The more I study, the more I believe that the classical view of an intelligent black box is wrong. Cognitive scientists and computer scientists have for years tried to create an intelligent individual with minimal success. And I think that some researchers are correct in saying that intelligence does not come from just an individual. There is a reason why humans are social animals.

Just think: if interaction wasn't needed in the course of human evolution, if humans weren't social animals, would we have language? Would we need language? Would we need anything beyond the basic needs to survive?

There is no intelligence without interaction. We judge something's intelligence through interaction. We learn through interaction. There will always be more than one entity involved. Intelligence can come from interactions with other people or just simply the interaction between different ideas. So, maybe what we're looking for is not a black box, but rather a sand box. Some place where you throw in ideas and things and they can interact and play off each other.

Quote: Original post by WeirdoFu
The more I study, the more I believe that the classical view of an intelligent black box is wrong. Cognitive scientists and computer scientists have for years tried to create an intelligent individual with minimal success. And I think that some researchers are correct in saying that intelligence does not come from just an individual. There is a reason why humans are social animals.

Just think: if interaction wasn't needed in the course of human evolution, if humans weren't social animals, would we have language? Would we need language? Would we need anything beyond the basic needs to survive?

There is no intelligence without interaction. We judge something's intelligence through interaction. We learn through interaction. There will always be more than one entity involved. Intelligence can come from interactions with other people or just simply the interaction between different ideas. So, maybe what we're looking for is not a black box, but rather a sand box. Some place where you throw in ideas and things and they can interact and play off each other.


Exactly. This is why my project has hundreds of bots working in a small environment, forced to interact with one another. I try to determine whether I can see any intelligence in the way they interact with one another.

Still, if you are designing a species, you have to come up with the basic architecture of an individual that you can then mass-produce to interact with one another.
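For what it's worth, here's a minimal sketch of that in Python (the Agent class and its behaviour are invented purely for illustration): one individual "architecture" gets defined once, mass-produced, and the copies are forced to interact.

import random

class Agent:
    """One 'individual' -- the basic architecture that gets mass-produced."""
    def __init__(self):
        self.energy = 10
        self.history = []   # remembers outcomes of past encounters

    def interact(self, other):
        # Placeholder behaviour: share or take energy at random,
        # then remember what happened.
        if random.random() < 0.5:
            self.energy -= 1
            other.energy += 1
            self.history.append("gave")
        else:
            self.energy += 1
            other.energy -= 1
            self.history.append("took")

population = [Agent() for _ in range(100)]   # mass-produced individuals
for _ in range(1000):                        # forced interactions
    a, b = random.sample(population, 2)
    a.interact(b)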


"EUREKA!!! I have an idea!"

Isn't that what intelligence is all about?

I define intelligence not as the ability to solve a problem, but as the ability to discover a way to solve a problem.

Under this definition, intelligence isn't a boolean property. It is a measure.
Quote: Original post by Sagar_Indurkhya
[...]
G) The system will be a massive network of artificial neurons (an ANN), and although we can give initial pattern values that will evolve these large networks, we can neither reverse engineer these networks nor extract the weights and attempt to understand how the intelligence in the system works.


Not that I agree with the other parts of your post, but I very strongly disagree with this one. Before we had airplanes, the main example of flying was birds, and when we finally solved the problem of flying, our flying machines looked nothing like birds. Similarly, the main example of intelligence we have is the human brain, but I doubt the first intelligent machine (if we ever get to build something that can be uncontroversially called "intelligent") will look anything like a human brain.

The way ANNs work today, they are little more than parametric formulas to approximate arbitrary functions. We can estimate the parameters through training data, but this has more to do with logistic regression than it has to do with intelligence.
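To make that concrete, here's a tiny sketch (toy data, nothing from any real system) showing that a one-layer ANN with a sigmoid output, trained by gradient descent, is literally just fitting a logistic regression:

import numpy as np

# Toy data, entirely made up: 4 inputs, binary targets (the OR function).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 1.])

w = np.zeros(2)   # the whole "network" is just these parameters
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Training" is gradient descent on the cross-entropy loss, which is
# exactly maximum-likelihood fitting of a logistic regression model.
for _ in range(2000):
    p = sigmoid(X @ w + b)                 # forward pass
    w -= 0.5 * X.T @ (p - y) / len(y)      # gradient step on the weights
    b -= 0.5 * np.mean(p - y)              # gradient step on the bias

print(sigmoid(X @ w + b))   # close to [0, 1, 1, 1]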

The approach that I think is more promising is the one that defines intelligence as "skill in maximizing utility", which suggests a much more direct approach to a solution: "consider possible actions, estimate expected value of the utility in the corresponding scenarios, pick the maximum", coupled with some form of adjusting mechanism for our model of the world ("learning"). It sounds a lot easier than it is, but at least this plan has a well-defined main() that you can work from.
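A sketch of what that main() might look like, with every name and the toy environment invented for illustration:

import random

# Hypothetical world model: just a table of utility estimates per
# (state, action) pair, starting out ignorant.
utility = {}

def estimated_utility(state, action):
    return utility.get((state, action), 0.0)

def choose_action(state, actions):
    # Occasionally explore at random; otherwise consider possible actions,
    # estimate the expected utility of each, and pick the maximum.
    if random.random() < 0.1:
        return random.choice(actions)
    return max(actions, key=lambda a: estimated_utility(state, a))

def learn(state, action, observed_reward, rate=0.1):
    # The "adjusting mechanism": nudge the estimate toward what was observed.
    old = estimated_utility(state, action)
    utility[(state, action)] = old + rate * (observed_reward - old)

def fake_environment(state, action):
    # Stand-in world, purely for illustration.
    return random.gauss(1.0 if action == "explore" else 0.5, 0.1)

def main():
    state = "start"
    for _ in range(100):
        action = choose_action(state, ["explore", "wait"])
        reward = fake_environment(state, action)
        learn(state, action, reward)
    print(utility)

main()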

Or maybe the successful approach will be something completely different, but I don't have much hope in ANNs at this point.

Quote: A) Something seems to be intelligent so long as we do not understand how it works.


There are lots of things we don't understand, but that alone is not a sign of intelligence (e.g., the Universe itself).

Quote: B) Humans are the only example of actual intelligence (we aren't considering a dog's brain, etc.).


I don't agree with this; humans are not the only intelligent beings, just maybe the most intelligent. Intelligence is not an absolute but a matter of degree. Can you imagine what a twenty-pound-brained individual would think of us?

Quote: D) The human brain is too massively complex to try and wholly understand. It would be more practical to find general patterns, try to construct such a system, and compare the end results.


Of course, we cannot understand every bit of activity in the human brain, but I think we must treat it like the weather: we can understand the basic parts and interactions and predict some things, even if we don't know what each atom and particle is doing at every moment.

Quote: E) Our intelligent black box system cannot simply be constructed and then activated to become intelligent. It will be created by a set of rules and continue to grow by these rules, forming massive structures that cannot be reverse engineered, much like the human brain (D).


I don't agree with that either. We must find a basic law or mechanism that explains the thing, as Newton did with universal gravitation, and then look around to see if it matches reality. Of course, I have my own theory, which I try to explain below.

Quote: F) The different subsystems (vision, logic, memory) that we try to research cannot be researched independently of each other. If the system is to be successful, it must be constructed and tested together.


Maybe, but think of blind and deaf people; they are still intelligent.

Quote: G) The system will be a massive network of artificial neurons (an ANN), and although we can give initial pattern values that will evolve these large networks, we can neither reverse engineer these networks nor extract the weights and attempt to understand how the intelligence in the system works.


The system must be a very simple one, it must work, and we must be capable of recognizing it as intelligent before the making of the "SuperMegaArtificialBrain". I think we need the "Newton" or "Einstein" of the 21st century for this.

And now ... my theory:

Theory of the Short and Long Chains.

I'll try to explain it the simplest way I can.

The basic element is the "concept", understood as a simple piece of information: a perception, a sensation, something that becomes a chemical reaction in the brain, a color, a taste. Immediately after this "concept", another one is chained to it, maybe pain or joy; this is a "short chain".

But as time passes, that chain of perceptions can vary in order or quality; what was joy is now pain.

So, at this point we have two chains: red - joy and red - pain. Both are stored in our memory, and we cannot decide which to choose for our purposes; only when these chains grow to a certain length does the intelligence begin to work.

Then, as chains multiply and become longer, our brain runs a test to rearrange them and make a decision. I think the brain works in a chaotic manner, with millions of chains of different lengths being compared and rearranged continuously, until a chain becomes so long that it takes some time to "read" it even at the speed of light, and then ... TIME is born, and consequently History is born, and of course Science and Philosophy.

Of course, my theory is much more complex than this, but I propose that our brain does not function in any way differently from the brains of other animals; it is only capable of making chains long enough to perceive time.
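If I understand the theory, a toy version of the chain mechanism might look something like this sketch (all names and behaviour are invented, purely to illustrate chains growing, multiplying, and being rearranged):

import random

# A "chain" is just an ordered list of concepts, e.g. a perception
# followed by a sensation.
chains = [
    ["red", "joy"],    # one remembered short chain
    ["red", "pain"],   # a later, conflicting chain
]

def extend(chain, concept):
    # Chains grow by appending a newly perceived concept.
    return chain + [concept]

def rearranged(chain):
    # The hypothesised chaotic "test": reorder a chain and keep the result.
    c = chain[:]
    random.shuffle(c)
    return c

# Chains multiply, lengthen, and get rearranged over time.
for _ in range(20):
    chain = random.choice(chains)
    if random.random() < 0.5:
        chains.append(extend(chain, random.choice(["red", "joy", "pain"])))
    else:
        chains.append(rearranged(chain))

# A "decision" might mean choosing among conflicting chains with the same
# head; per the theory, only sufficiently long chains count.
candidates = [c for c in chains if c[0] == "red" and len(c) >= 3]
if candidates:
    print(max(candidates, key=len))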

Thank you for your time.
How would one split and rejoin chains?

How would one decide when chains end?

How would one find output from one of these chains?

From
Nice coder
I actually have a similar idea to that concept chain, but mine comes in the form of a conceptual network, which forms after data mining a set of perceptual models. These models define a set of events that can repeat and form patterns of data from data patterns (lol, data patterns are the detected input... this input forms data that, together with other perceptual data, forms patterns of data).

After a conceptual network is formed, the entity realizes that it has access to several functions that can define its behaviour inside a world it is getting to know. The entity can then learn to respond in apparently intelligent ways to data patterns. You could try to make a world where a small robot must learn not to stay in a certain wavelength of light because it has an inner circuit that drains its battery when that happens. It's still at a very early stage of development in my head, but after the AI course I'm taking at university starting in January, I plan on expanding this idea into some useful code and loading it into a small robot with cute lil wheels and sensors.
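A rough sketch of how that robot scenario might start out in Python (the light bands, drain values, and learning rule are all made up for illustration):

import random

# Hypothetical sensor reading: light bucketed into wavelength "bands".
BANDS = ["infrared", "red", "blue", "ultraviolet"]
HARMFUL = "ultraviolet"   # the inner circuit drains the battery here

# Learned penalty estimate per band, starting ignorant.
penalty = {band: 0.0 for band in BANDS}

def sense():
    return random.choice(BANDS)

def battery_drain(band):
    return 5.0 if band == HARMFUL else 0.1

# Trial and error: the robot wanders, feels the drain, updates estimates.
for _ in range(200):
    band = sense()
    drain = battery_drain(band)
    penalty[band] += 0.1 * (drain - penalty[band])

# Later, the robot can "learn not to be" in the harmful band:
def should_flee(band):
    return penalty[band] > 1.0

print({b: round(p, 2) for b, p in penalty.items()})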

Perhaps we could add an infrared port, let it learn that it can communicate through it, and, with at least two of these robots, see if they develop some sort of communication.

I know it's crazy :D
You don't need other humans to have something to interact with.
I agree with Alvaro that the first "intelligent" AI will probably not work like anything previously considered intelligent. I think any system that can be evaluated for accomplishing a given task is capable of becoming intelligent given enough time. Maybe with the advancement of grid computers the infinite monkey approach will become more practical.

I disagree with point A. Whether or not we can understand the inner workings seems irrelevant. Good AI requires a certain level of complexity, but that doesn't necessarily mean it will be impossible to understand. And from the other side of that statement: I don't understand how quantum mechanics works, but I've never thought that atoms were intelligent. If we understood how we worked, would that make us no longer intelligent?

I also disagree with F, but I'm having difficulty coming up with a great argument, so I'll try an example. If you could train an optic neural net to recognize certain shapes, why would you need to change it when connecting it to the whole? AI is just a process of converting between types of data. So, if shapes are more useful than pixels, you could set your main network to deal with shapes and use the optic network as-is. In fact, it would probably be much more efficient to train parts of a network individually.
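Here's a small numpy sketch of that argument (the data and both "modules" are entirely made up): train the optic module on pixels first, then freeze it and train the main module on its shape outputs.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(features, labels, steps=2000, lr=0.5):
    # Tiny one-layer "net": weights plus bias, trained by gradient descent.
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(features @ w + b)
        w -= lr * features.T @ (p - labels) / len(labels)
        b -= lr * np.mean(p - labels)
    return w, b

# Stage 1: train the "optic" module alone on fake 4-pixel images
# ("shape present" here just means bright enough overall).
pixels = rng.random((200, 4))
shape_labels = (pixels.sum(axis=1) > 2.0).astype(float)
w_optic, b_optic = train_logistic(pixels, shape_labels)

# Stage 2: freeze the optic module; the main module sees shapes, not
# pixels -- the "converting between types of data" the post describes.
shapes = sigmoid(pixels @ w_optic + b_optic).reshape(-1, 1)
w_main, b_main = train_logistic(shapes, shape_labels)  # toy downstream task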

