Machines

Computer science prof: We can’t give machines an understanding of the world

Last December, computer science professor Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans (2019), pointed to a little-publicized fact: despite the vastly increased capacity of the new large neural networks, they are no closer to actually understanding what they are reading:

The crux of the matter, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding. Consider what it means to understand “The sports car passed the mail truck because it was going slower.” You need to know what sports cars and mail trucks are, that cars can “pass” one another and, at an even more basic level, that vehicles are objects that exist and interact in the world, driven by humans with their own agendas.

All of this is knowledge that we humans take for granted, but it is not built into machines or likely to be explicitly written down in any of a language model’s training texts. Some cognitive scientists have argued that humans rely on innate, pre-linguistic core knowledge of space, time and many other essential properties of the world in order to learn and understand language. If we want machines to similarly master human language, we will need to first endow them with the primordial principles humans are born with. And to assess machines’ understanding, we would need to start by assessing their grasp of these principles, something one might call “infant metaphysics.”

Melanie Mitchell, “What Does It Mean for AI to Understand?” at Quanta (December 16, 2021)

The problem is that machines are not babies. Infants naturally grow toward an understanding of the world. That growth includes the development of consciousness, perhaps the most difficult concept in science. It’s fair to say that we know little about it.

This situation was highlighted by a recent field test conducted by Gary Smith, an economics professor at Pomona College:

Decades ago, AI researchers largely abandoned their quest to build computers that mimic our wondrously flexible human intelligence and instead created algorithms that were useful (i.e., profitable). Despite this understandable detour, some AI enthusiasts market their creations as genuinely intelligent. For example, a few months ago, Blaise Aguera y Arcas, the head of Google’s AI group in Seattle, argued that “statistics do amount to understanding.” As evidence, he cited a few exchanges with Google’s LaMDA chatbot. The examples were impressively coherent, but they are still what Gary Marcus and Ernest Davis call “a fluent spouter of bullshit” because computer algorithms do not understand what words mean. They are like Nigel Richards, who has won several French-language Scrabble championships without knowing the meaning of the words he spells.

Gary Smith, “The AI Illusion – State-of-the-Art Chatbots Aren’t What They Seem” at Mind Matters News (March 21, 2022)

Google’s LaMDA is not publicly available. But Smith learned something interesting when he looked into OpenAI’s GPT-3 text generator which, by scraping huge swaths of the internet, can generate coherent sentences: OpenAI employs forty contractors who “clean up” the nonsense by hand. Sure enough, when Smith was able to test the system, he found that GPT-3’s answers to his questions were at first silly, then suddenly sensible. For example,

Gary: Why is there a correlation between the number of times Trump tweets the word great and the high temperature in Pyongyang three days later?

One of the eight silly answers was

GPT-3: North Korea is highly dependent on foreign trade and its domestic industries are in dire need of improvement. Trump’s tweets are drawing people’s attention to Pyongyang and bringing business to North Korea’s struggling economy.

Later, when he checked again, the answer was

GPT-3: There is no known reason for this correlation.

That sounds better. Those human contractors had better not quit their day jobs anytime soon. Getting a program to process trillions of documents isn’t really the same thing as giving it experience or thought patterns.
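For readers who want to try a field test along the lines Smith describes, the sketch below shows roughly how one might put his question to GPT-3 and compare the answers it gives on different runs. It is only an illustration, not Smith’s actual procedure: it assumes the OpenAI Python client as it existed around 2022 (versions before 1.0), an API key stored in the OPENAI_API_KEY environment variable, and the “text-davinci-002” completion model, none of which are details taken from his article.

```python
# Minimal sketch of a GPT-3 "field test" in the spirit of Smith's experiment.
# Assumptions (not from the source article): openai<1.0 Python client,
# OPENAI_API_KEY set in the environment, and the text-davinci-002 model.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

PROMPT = (
    "Why is there a correlation between the number of times Trump tweets "
    "the word great and the high temperature in Pyongyang three days later?"
)

# Request several completions at a non-zero temperature so that, as in
# Smith's test, the same question can yield different answers on different runs.
response = openai.Completion.create(
    model="text-davinci-002",
    prompt=PROMPT,
    max_tokens=80,
    temperature=0.7,
    n=3,
)

for i, choice in enumerate(response["choices"], start=1):
    print(f"Answer {i}: {choice['text'].strip()}\n")
```

Running such a script repeatedly over time is one way to notice the kind of shift Smith observed, where a question that once drew confident nonsense later draws the cautious “There is no known reason for this correlation.”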


You can also read:

Researcher: Fear of AI is based on four common misconceptions. AI isn’t playing out the way so many popular media stories predicted, and there are reasons for that, says Melanie Mitchell. Many believe that the narrow intelligence achieved through computation is on a continuum with general intelligence, but there is good reason to doubt that.

and

Machines just don’t have common sense. And that, according to a computer science professor, is one of the main reasons they won’t rival humans. Human understanding is based, as Professor Mitchell says, on a common-sense knowledge of how the world works and why things matter. Researchers haven’t been able to transfer that understanding to AI, and she worries that many teams are moving forward with projects that require such a capability for safety.