
Of brain and the usefulness of studying brain in designing/building AI

Started by July 07, 2020 02:13 PM
47 comments, last by Calin 4 years, 2 months ago

Calin said:
1+1 = 2 is just a small thinking process

This is correct. You can start from such small calculations and end up with advanced AI.

Boolean logic is at the core of our digital era. It is based on even smaller thinking than “1+1=2”. The most complex program made so far by humans can be broken down into simple additions.
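To illustrate (a toy sketch of my own, not from the thread): even addition itself can be built out of nothing but Boolean operations, which is roughly how hardware does it - a ripple-carry adder assembled from XOR/AND/OR gates:

```python
def full_adder(a, b, carry):
    # one bit of addition from pure Boolean logic:
    # sum bit via XOR, carry-out via AND/OR
    s = a ^ b ^ carry
    c = (a & b) | (carry & (a ^ b))
    return s, c

def add(x, y, bits=8):
    # ripple-carry adder: chain full adders bit by bit,
    # so "1+1=2" emerges from gates alone
    carry, result = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result
```

So `add(1, 1)` really does give `2` without ever using the `+` operator on the operands themselves - only bit shifts and Boolean gates.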

In my opinion, the programming languages we have nowadays are enough to create GAI. It might not replicate the brain's internal functionality, but from the outside, it will be able to do anything a brain does.

And if it behaves exactly like a brain (or any other naturally occurring living thing that produces intelligence), it is a brain.

And there is a whole new programming paradigm, a completely different way of thinking about programming, that is just coming now - quantum programming.

And speaking of quantum computers, it looks like ML is not doing so well. Quantum computing is a younger discipline than ML, but we already (arguably) have quantum computers, quantum programming languages, and secure quantum communication.

https://en.wikipedia.org/wiki/Timeline_of_machine_learning
https://en.wikipedia.org/wiki/Timeline_of_quantum_computing

When the perceptron was built and working, the very idea of quantum computers did not exist yet. And it seems that a stable quantum computer is going to happen sooner than GAI.

In my opinion, finding the shortest path requires intelligence.
Now it looks like a trick because humans created the labyrinth. But in real-life situations, the slime mold effectively finds the shortest path to a food reward. It can then transport nutrients with minimal effort. I call this problem solving using intelligence.
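For comparison, here is a minimal sketch (my own toy example, not from the article) of how a program finds the shortest path through a maze - breadth-first search, which, a bit like the slime mold, spreads out in every direction at once and keeps the shortest route to the food:

```python
from collections import deque

def shortest_path_length(grid, start, food):
    # grid: 0 = open cell, 1 = wall; start/food: (row, col) tuples.
    # BFS explores the maze in expanding "waves", so the first time
    # it touches the food, the distance found is the shortest one.
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), dist = queue.popleft()
        if (r, c) == food:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), dist + 1))
    return None  # food unreachable
```

On a 3x3 maze with a wall across the middle, the search is forced around the obstacle, just like the slime mold in the experiments.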

It is hard to see farther than our own brain. It takes a switch in our mentality to consider sources of intelligence other than the brain.

And it is not only pathfinding, it is the planning of cities -


https://en.wikipedia.org/wiki/Physarum_polycephalum#Situational_behavior

A PDF from MIT -

https://dspace.mit.edu/handle/1721.1/124999

At the very core of the super mega intelligent Brain, there are brainless cells. We cannot accept the brain while discarding the slime mold, because the Brain is like a slime mold. It is a network of intelligence-less cells that creates intelligence.

I know it is hard to understand, but it comes with practice. The conjunction of brainless units and the interactions between these brainless units can originate a brain.

Where does intelligence originate? Inside a single neuron, inside the group of neurons, or inside the connections? Or does it originate in the whole picture? Somehow, the conjunction originates intelligence.

It is hard to understand in the beginning, but it comes with practice.

Maybe it is hard to understand because 10 persons of IQ 60 cannot work together to invent a nuclear reactor. IQ does not sum across humans. How then does IQ arise from the conjunction of units, each one having zero IQ?
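One concrete mechanism for ability emerging from weak units is majority voting - the Condorcet jury theorem. It does not fully resolve the mystery (each unit must be at least slightly better than a coin flip, not literally zero-IQ), but it shows how a group can far exceed any single member. A small sketch of my own:

```python
from math import comb

def majority_accuracy(p, n):
    # Probability that a majority vote of n independent units is correct,
    # when each unit alone is correct with probability p (n should be odd
    # so there are no ties). Computed exactly from the binomial distribution.
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))
```

A single unit that is right 55% of the time stays at 0.55, but a committee of 101 such units, voting together, is right well over 80% of the time - the "IQ" of the group grows with the conjunction, not with any member.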

It is a mystery, but it is part of the brain as well as of the slime mold.

Maybe this can help people understand that “there is intelligence inside the brainless” -

https://www.newscientist.com/article/dn8718-robot-moved-by-a-slime-moulds-fears/?ignored=irrelevant

Maybe putting a human face to a slime mold can help even more -


Has anyone seen 100+ year old turtles?

My project's facebook page is “DreamLand Page”

NikiTo said:
The most complex program made so far by humans can be broken down to simple additions.

To get a human-like AI in an RTS, you need automated beta testing (the AI player may see only one screen and scrolls it to see what's on the map, etc.) alongside simulating the opponent: the AI player has a fake AI player running in his 'mind' that he permanently refers to. When the real opponent enters the AI player's LOS and the AI player sees it, the fake AI is synchronized; when the enemy is behind the fog of war, the AI has an updated 'opponent' version as a reference point running in his imagination.
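That idea can be sketched in a few lines (all names here are my own hypothetical ones, not an actual engine API): the AI keeps a believed copy of the opponent, snaps it to reality whenever the opponent is in sight, and keeps extrapolating it under the fog of war:

```python
class OpponentModel:
    """The fake opponent the AI player runs in his 'mind'."""

    def __init__(self):
        self.believed = {}  # unit id -> (x, y) last believed position

    def sync(self, visible):
        # Real opponent entered the AI's LOS: snap beliefs to what is seen.
        self.believed.update(visible)

    def tick(self):
        # Opponent behind the fog of war: keep the imagined version running.
        # (Crude placeholder assumption: every known unit keeps advancing
        # along the x axis; a real model would simulate the opponent's AI.)
        for uid, (x, y) in self.believed.items():
            self.believed[uid] = (x + 1, y)
```

Usage: call `sync()` each frame the enemy is visible, `tick()` each frame it is not - the AI always has *some* opponent to plan against, real or imagined.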

NikiTo said:

This is correct…

Your approval forces me to stick to my statements and work to 'make it happen'.

My project's facebook page is “DreamLand Page”

Last year, John Carmack gave up game development and VR to go work on general artificial intelligence. He hasn't said much about what he's doing yet, but I wouldn't be surprised if it involves a game or virtual world where the AI is a character. There's a school of thought in AI that intelligence needs a body and a world to interact with. People have tried that with robots and simulators. In a game world, the AI can have lots of people to interact with, so machine learning has enough data to crunch on.

I went down that road in the 1990s. I got diverted into physics engines, because no simulators existed that were any good, spent three years on that, developed the first ragdoll system that worked, sold the technology to a physics engine company, and went on to other things. This was all before machine learning worked; it was too early. Now, there's more that can be done.

Nagle said:
In a game world, the AI can have lots of people to interact with

The AI can interact with one person or with several. People take interaction with other people for granted, but even 1-to-1 interaction with (human-class) AI is quality interaction.
As for machine learning, there is a movie, Defcon, made in the '80s or earlier; it's a good explanation of computer learning in the context of games.

My project's facebook page is “DreamLand Page”


If anybody thinks he needs tons of data, then in my opinion he is on the wrong path.
As they explained in my last links, one of the reasons NNs are BAD is that they need tons of data to learn. This is BAD. A kid needs only one or two pictures of a crocodile to always and forever correctly recognize a crocodile.

Imagine a kid learning “hot burns” from a million trials and errors… it would not survive that “learning” experience. A kid needs to get burned ONLY once, and forever it will know that painful lesson.

“Oh, it burns and hurts. I am 8% sure now that too hot is bad for my skin. Let's try again. Oh, it burns, it hurts. I am 12% sure now that too hot is bad for skin. Let's try again to be sure. Oh, it hurts! I am only 15% sure now that too hot is bad for my skin. Let's try again…” - it is ridiculous!!!!

The same people who claim to be imitating the human brain completely fail to understand how a brain thinks.

A NN needs millions of kilometers of road data in order to learn (and still fails in some situations). Many humans buy their driving licenses and learn how to drive on the road. No need for millions of kilometers of data.

This is what developers should aim for - making it learn with the least possible data.
Trying to feed it tons of data is to stay stuck in the same position for another 60 years.

Not to mention that only big companies have access to tons of data. If you work on AI in a garage, you cannot access that much data. It took a lot of time for a lot of humans to label that data, so it is normal for it to cost too much for a garage-based developer.

Is this the “awesome” AI we humans achieved? It looks mostly like human labour to me -

(a link to the pricing of a human photo-labeling service went here, but I don't know if the admin would consider it spam, so I removed it. Not needed anyway. Yes - so many photos per so many labelers per so much time… Silly AI progress, right?)

“In the year 2252, Skynet finally found a reasonably priced labeling service and used millions of humans to label its world domination plan. In this way, Skynet was able to dominate the world.”

“Skynet helps education system” - a human labels it as NOT a world domination plan.
“Skynet launches nukes” - a human labels it as YES, a world domination plan.
“Skynet helps humans” - a human labels it as NOT a world domination plan.

Plot for the next Terminator movie - “We, the resistance, must find the person who labels the data for Skynet and neutralize him. This way Skynet will become brainless and the world will be saved.”

Getting back to the usefulness of studying the brain in designing/building AI: when you build a terminator, you should not try to imitate a human brain but rather aim to make the robot perform specific tasks. That's a job that even a Pentium 2 can do. All you need is a robot with independent/emergent behavior (i.e. it can operate without human input). A verse in the Bible states 'deep calls unto deep'; that was written with man and God in mind, but it can very well be applied as a description of the human-AI relation. You can build a logic net that is a mild response to everything that goes on inside a human mind.

[edit]

I think if you state you want to make a robot that 'behaves like a human', it's way too broad - you aren't saying basically anything. However, if you say 'I want to make a robot that does this or that task like a human', that's a whole different story.

My project's facebook page is “DreamLand Page”

This topic is closed to new replies.
