Artificial Intelligence
Garry Kasparov writes in his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins” (p. 75):
The basic suppositions behind Alan Turing’s dreams of artificial intelligence were that the human brain is itself a kind of computer and that the goal was to create a machine that successfully imitates human behaviour.
This concept has been dominant for generations of computer scientists. It’s a tempting analogy – neurons as switches, cortexes as memory banks, etc. But there is a shortage of biological evidence for this parallel beyond the metaphorical and it is a distraction from what makes human thinking different from machine thinking.
The terms I (Garry Kasparov) prefer to use to highlight these differences are “understanding” and “purpose”.
Andrew McAfee and Erik Brynjolfsson:
Computers and robots can, despite their intelligence, understand little of the human condition, of the unique human perception of the world.
My (Hans Damen) description of
- the Phenomenon Objective and
- the role the Phenomenon Objective plays in the functioning of the human (animal) brain
describes the essence of the mental part of the human condition.
An answer to the question “What is understanding?” might be derived from my answer to the question “What is language?”
History
In 1956, at the ‘Dartmouth Conference’, a group of prominent scientists began thinking about what they called “Artificial Intelligence” (A.I.).
Herbert Simon, one of the attendees of that conference, predicted in 1965:
“machines will be capable, within 20 years, of doing any work a man can do”.
Marvin Minsky, another attendee of that conference, agreed, writing in 1967:
“within a generation….the problem of creating Artificial Intelligence will substantially be solved”.
In 1973 it had become obvious that these scientists had grossly underestimated the difficulty of building a truly intelligent machine, and funding of undirected research in Artificial Intelligence (A.I.) was stopped in the USA and the UK.
Garry Kasparov (p.99):
A.I. would not see its spring until a movement arose that gave up on grandiose dreams of imitating human cognition.
The field was “machine learning”. The basic concept of “machine learning” is that you don’t give the machine a bunch of rules to follow, the way you might try to learn a second language by memorising grammar and conjugation rules.
Instead of telling it [the rules of] the process, you provide the machine with lots of examples of that process and let the machine figure out the rules, so to speak. Language translation is a good illustration. Google Translate is powered by machine learning, and it knows hardly anything about the rules of the dozens of languages it works with.
They feed the system examples of correct translations, millions and millions of examples, so the machine can figure out what’s likely to be right when it encounters something new. Looking back one could say that “machine learning” rescued A.I. from insignificance, because it worked and it was profitable.
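The example-driven idea Kasparov describes can be illustrated with a toy sketch (all names here are hypothetical, not a real library or Google Translate's method): instead of hand-coding a pluralisation rule, a tiny learner infers the most common suffix transformation from example pairs and applies it to new words.

```python
from collections import Counter

def learn_suffix_rule(examples):
    """Infer the most frequent (strip, add) suffix transformation
    from (input, output) example pairs -- no rules given up front."""
    patterns = Counter()
    for src, dst in examples:
        # find the longest shared prefix; the rest is the transformation
        i = 0
        while i < min(len(src), len(dst)) and src[i] == dst[i]:
            i += 1
        patterns[(src[i:], dst[i:])] += 1
    strip, add = patterns.most_common(1)[0][0]

    def apply(word):
        # apply the learned rule to a word the learner has never seen
        if word.endswith(strip):
            return word[: len(word) - len(strip)] + add
        return word + add
    return apply

# learn from examples, then generalise to an unseen word
pluralise = learn_suffix_rule([("cat", "cats"), ("dog", "dogs"), ("tree", "trees")])
```

The learner is never told “add an s”; it recovers that rule from the data, which is the essence of the approach Kasparov describes, only at a vastly smaller scale.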
Future
Garry Kasparov (p.247 and 248):
Intelligent machines have been making great advances thanks to “machine learning” and other techniques, but in many cases they are reaching the practical limits of data-based intelligence. Going from a few thousand examples to a few billion examples makes a big difference. Going from a few billion to a few trillion may not.
In response, in an ironic twist after decades of trying to replace human intelligence with algorithms, the goal of many companies and researchers now is to get the human mind back into the process of analysing and deciding in an ocean of data.
Humans do many things better than machines, from visual recognition to interpreting meaning, but how to get the humans and machines working together in a way that makes the most of the strength of each without slowing the computer to a crawl?
Thinking about the future of Artificial Intelligence is
- thinking about replacing the mental part of the work of a person
  - who is working as a nurse in a health-care situation, or
  - who is assisting an older person or a patient in a domestic situation, or
  - who is driving a car
- thinking about a human and a robot working together
Replacing a human by a robot
A person who
- is working as a nurse in a health-care situation, or
- is assisting an older person or a patient in a domestic situation, or
- is driving a car
usually has in her brain
- an objective with respect to the job she is doing, and
- a plan on how to achieve that objective.
Such a plan is a prediction, consisting of a sequence of “in between objectives” (“milestones”) along the road of that person to that objective.
After a person has finished a particular job, she can compare
- the sequence of “in between objectives” she intended to realise before she started working on that job and
- the sequence of “in between objectives” she did realise.
Such a comparison shows that these sequences are different.
These sequences differ, for example, because the person often got into an unpredicted situation in which
- she became aware that she would suffer a particular damage if she continued pursuing her current “in between objective”, and
- she had to decide between
  - continuing to pursue her current “in between objective”, because she judged that the disadvantage of suffering that damage would be smaller than the benefit of realising her current objective,
  on the one hand, and, on the other hand, the actions
  - stop pursuing her current “in between objective”, because she judged that the disadvantage of suffering that damage would be greater than the benefit of realising her current objective,
  - make a new plan for pursuing her current objective in which the suffering of that damage would be avoided, and
  - start pursuing the first “in between objective” in that new plan.
This means that a robot which should replace a person who
- is working as a nurse in a health-care situation, or
- is assisting an older person or a patient in a domestic situation, or
- is driving a car
should be capable of making new sequences of “in between objectives” (= “new plans”) for pursuing its current objective many times a day.
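The replanning behaviour described above can be sketched as a small loop. This is a toy illustration under stated assumptions; all function names (`pursue`, `make_plan`, `step`, `assess`) are hypothetical, not an established algorithm or API.

```python
def pursue(objective, make_plan, step, assess):
    """Pursue `objective` through successive plans of "in between objectives".

    make_plan(objective, avoid) -> list of milestones avoiding known hazards
    step(milestone)             -> (done, hazard): outcome of one attempt
    assess(hazard)              -> (damage, benefit) of continuing anyway
    """
    avoid = set()                   # hazards a new plan must route around
    plan = make_plan(objective, avoid)
    realised = []                   # the sequence of milestones actually realised
    while plan:
        done, hazard = step(plan[0])
        if hazard is not None:
            damage, benefit = assess(hazard)
            if damage > benefit:    # stop, make a new plan avoiding the damage,
                avoid.add(hazard)   # and start on the new plan's first milestone
                plan = make_plan(objective, avoid)
                continue
            # otherwise the damage is the smaller cost: keep pursuing
        if done:
            realised.append(plan[0])
            plan = plan[1:]
    return realised

# Toy scenario: the planned route A -> B -> C meets a hazard at B,
# so the agent replans to D -> C from its current position.
def make_plan(objective, avoid):
    return ["D", "C"] if "risky" in avoid else ["A", "B", "C"]

def step(milestone):
    return (False, "risky") if milestone == "B" else (True, None)

def assess(hazard):
    return (10, 1)   # damage outweighs the benefit of pressing on

realised = pursue("C", make_plan, step, assess)
```

Comparing `realised` with the originally planned sequence `["A", "B", "C"]` shows the two sequences differ, exactly the kind of after-the-fact comparison described above.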
Note:
Each of the systems “Deep Blue”, “AlphaGo”, “Watson”, and “Google Translate” was given
- an objective and
- a plan on how to achieve that objective
at the start, and the system concerned did not alter that plan while pursuing that objective.
Before deciding to spend billions of dollars on designing a computer
which is capable of making new sequences of “in between objectives” (=”new plans”) for pursuing its current objective many times a day
one obviously should at least know the answers to the questions:
- “What is an objective?”
- “What is the description of the Phenomenon Objective?”
- “What role does the Phenomenon Objective play in the functioning of the human (animal) brain?”