The infinite monkey theorem states that a monkey hitting keys at random on a typewriter, given an infinite amount of time, will almost surely type out any given text, such as the complete works of Shakespeare.
In the theorem, "almost surely" is a mathematical term with a precise meaning, and the "monkey" is not a literal animal but a metaphor for an abstract device that produces an endless random sequence of letters.
Figure | Given enough time, a chimpanzee typing at random would almost surely type out every book in the French National Library. (Source: Wikipedia)
The theorem also shows why it is a mistake to treat a very large but finite number as if it were infinite: even if the observable universe were filled with monkeys typing for its entire lifetime, the probability that they would produce a single copy of Hamlet is still less than 1 in 10^183,800.
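A quick back-of-the-envelope calculation shows where a number of that magnitude comes from. This is a minimal sketch, assuming Hamlet runs to roughly 130,000 characters and that each keystroke is drawn uniformly from a 26-letter alphabet (figures commonly used in discussions of the theorem, not taken from this article):

```python
import math

# Assumed figures, for illustration only: Hamlet is roughly 130,000
# characters long, and each keystroke picks uniformly from 26 letters.
text_length = 130_000
alphabet_size = 26

# The chance of typing the whole text correctly on one attempt is
# (1/26)**130000, far too small for a float, so we work in log10.
log10_p = -text_length * math.log10(alphabet_size)
print(f"P(one perfect attempt) is about 10^{log10_p:.0f}")
# Prints roughly 10^-183947, consistent with "less than 1/10^183,800".
```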
Moreover, even if countless monkeys were given unlimited time, they would never learn to appreciate the Bard's poetry.
"Artificial Intelligence (AI) is also ," said Michael Wooldridge, professor of computer science at at Oxford University.
Figure | Michael Wooldridge
In Wooldridge's view, although AI models such as GPT-3 have shown surprising capabilities thanks to their tens or hundreds of billions of parameters, their problem lies not in the scale of their processing power but in their lack of experience of the real world.
For example, a language model may learn the fact that "rain is wet" very well: asked whether rain is wet or dry, it will most likely answer that rain is wet. But unlike a human, the model has never actually experienced the sensation of wetness. To it, "wet" is nothing more than a symbol that happens to appear frequently alongside words such as "rain".
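A toy co-occurrence count makes the point concrete. This is a minimal sketch with an invented four-sentence corpus (nothing here comes from Wooldridge's paper): all such statistics capture is which symbols appear together.

```python
from collections import Counter
from itertools import combinations

# Invented miniature corpus, purely for illustration.
corpus = [
    "the rain is wet",
    "wet rain fell all day",
    "the sun is dry and hot",
    "rain made the street wet",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        pair_counts[(a, b)] += 1

# "wet" co-occurs with "rain" three times, and "dry" never does; a purely
# statistical model would answer "rain is wet" on that basis alone.
print(pair_counts[("rain", "wet")])  # 3
print(pair_counts[("dry", "rain")])  # 0
```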
However, Wooldridge also stresses that this lack of knowledge about the real physical world does not make AI models useless, nor does it stop a given model from becoming an experienced expert in a particular domain. But on questions such as understanding, it is genuinely doubtful that AI models have capabilities comparable to those of humans.
The related paper, titled "What Is Missing from Contemporary AI? The World", has been published in the journal Intelligent Computing.
In the current wave of AI innovation, data and computing power have become the foundation of successful AI systems: a model's capabilities grow with its size, the computing resources used to train it, and the amount of training data.
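The empirical scaling-law literature makes this relationship quantitative. As an aside not drawn from Wooldridge's paper, the power-law form reported by Kaplan et al. (2020) for language models is:

```latex
% Test loss L falls as a power law in the parameter count N;
% N_c and \alpha_N are empirically fitted constants
% (Kaplan et al., 2020, report \alpha_N \approx 0.076).
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}
```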
Richard S. Sutton, a research scientist at DeepMind, has made the same observation: the "bitter lesson" of AI is that its progress has come mainly from ever-larger data sets and ever more computing power.
Figure | AI-generated works (Source: Wired)
Speaking of the AI industry's overall development, Wooldridge was complimentary: "Over the past 15 years, the pace of the AI industry, and of machine learning (ML) in particular, has repeatedly surprised me: we have had to keep adjusting our expectations of what is possible, and when."
Yet Wooldridge also pointed to the industry's problems: "Commendable as their achievements are, I think most current large ML models are limited by one key factor: AI models have never truly experienced the real world."
In Wooldridge's view, most ML models are built in virtual environments such as video games. They can be trained on massive data sets, but as soon as they are applied to the physical world, important information is lost; they are AI systems divorced from any physical embodiment.
Take the AI behind autonomous vehicles as an example. Letting self-driving cars learn by trial and error on real roads is impractical, so for this and other reasons researchers usually build their models in virtual worlds, along the lines of the simulation loop sketched below.
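As a rough illustration of what training in a virtual world looks like in practice, here is a minimal sketch using the open-source Gymnasium toolkit, with its CartPole control task standing in for a driving simulator (the choice of library and task is an assumption, not something from Wooldridge's paper):

```python
import gymnasium as gym

# A toy control task stands in for a full driving simulator.
env = gym.make("CartPole-v1")
observation, info = env.reset(seed=42)

for step in range(200):
    # A real agent would pick actions from a learned policy; random
    # actions are used here purely to show the simulation loop.
    action = env.action_space.sample()
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```

Everything the agent "experiences" comes out of env.step; nothing in the loop ever touches the physical world, which is exactly the gap Wooldridge points to.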
" But they simply don't have the ability to run in all the most important environments (i.e. our world)," Wooldridge said.
(Source: Wikimedia Commons)
Language AI models, for their part, are subject to the same limitation. They have arguably evolved from absurdly bad predictive text into systems such as Google's LaMDA, which made headlines earlier this year when a former Google engineer claimed the program was sentient.
"No matter how valid the engineer's conclusions are, it's clear that LAMDA's dialogue ability impressed him - for good reason," Wooldridge said, but he didn't think LAMDA was perceptive and AI wasn't approaching such a milestone.
"These basic models demonstrate unprecedented capabilities in natural language generation, can generate relatively natural text fragments, and seem to have gained some common sense reasoning capabilities. This is one of the major events in AI research in the past 60 years."
Such models ingest enormous amounts of data and digest it through training; GPT-3, for example, was trained on hundreds of billions of words of English text from the Internet. The combination of vast training data and massive computing power lets these models behave a little like a human brain: they can move beyond narrow tasks, begin to recognize patterns, and form connections that seem unrelated to the task at hand.
(Source: OpenAI)
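To make text generation with such a model concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with the small, freely downloadable GPT-2 standing in for GPT-3 (the library, model, and prompt are illustrative choices, not from the article):

```python
from transformers import pipeline

# GPT-2, a small ancestor of GPT-3, stands in for the giant models
# discussed in the article.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Rain is",              # invented prompt, purely for illustration
    max_new_tokens=20,      # length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```

The model simply continues the prompt with statistically plausible words; whether the continuation says "wet" depends entirely on co-occurrence patterns in its training text, echoing Wooldridge's point about symbols versus experience.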
However, Wooldridge calls foundation models a bet: "Training on massive data sets makes them usable across a whole range of domains, where they can then be specialized for particular applications."
" Symbolic AI is based on the assumption that ' intelligence is mainly a knowledge problem ', and the basic model is based on the assumption that ' intelligence is mainly a data problem '. If enough training data is entered into the big model, it is considered that there is hope of improving the model's capabilities."
Wooldridge believes that this "may well be right" approach of pursuing smarter AI by endlessly scaling up models ignores the knowledge of the real, physical world that is needed to truly advance AI.
"To be fair, there are some signs that this is changing," Wooldridge said. In May this year, DeepMind announced Gato, the basic model based on large language sets and robot data, which can run in a simple physical environment.
"It's great to see the basic model take the first step into the physical world, but only a small step: to make AI work in our world, the challenges that need to be overcome are at least as great as the challenges that AI faces working in a simulated environment."
At the end of the paper, Wooldridge writes: "We are not at the end of the road for AI, but we may have reached the end of the road's beginning."
What do you think? Feel free to leave a comment below.
References:
https://spj.sciencemag.org/journals/icomputing/2022/9847630/
https://www.eurekalert.org/news-releases/966063