In December last year, Ali Rahimi, an artificial intelligence researcher at Google, criticized his own research field in a speech at the NIPS conference. He said that machine learning algorithms, in which computers learn through trial and error, have become a form of "alchemy."



A Big Data Digest production

Compiled by: Feng Chen, Aileen

At the recently concluded ICLR conference, Google AI researcher Ali Rahimi criticized the entire machine learning field for its over-reliance on rules of thumb, trial and error, and superstition.


Researchers, he argued, don't actually know why some algorithms work while others fail, and they have no rigorous criteria for choosing one AI architecture over another. His talk drew 40 seconds of applause from the audience.

On April 30, Rahimi reiterated his point at the International Conference on Learning Representations (ICLR) in Vancouver, Canada. In a paper he and his colleagues published, "Winner's Curse? On Pace, Progress, and Empirical Rigor," they documented examples of "alchemy" in machine learning and proposed ways to strengthen the field's empirical rigor.


The paper was accepted to one of this year's ICLR workshops.


Rahimi said: "There is a kind of 'pain' in the field of artificial intelligence. Many of us feel like we are working with alien technology."

In modern science, alchemy is often used as a metaphor for research that lacks scientific rigor, has no clear theoretical basis, and cannot explain why its methods work.

Alchemy was a medieval philosophical and proto-chemical tradition, the forerunner of modern chemistry. Its goals were to transmute base metals into gold and to create elixirs of life. Science has since shown that these goals are unattainable. Carl Gustav Jung, the founder of modern analytical psychology, argued that alchemy was in fact a projection of practitioners' own spiritual development onto natural phenomena.

- Wikipedia

The "alchemy problem" is different from the "AI reproducibility problem": the reproducibility problem refers to the inability of researchers to replicate each other's results due to the discontinuity of experiments and the inconsistency of public practices in the research process. Research result.

The "alchemy problem" is also distinct from machine learning's "black box" and "interpretability" problems: those refer to the difficulty of explaining how a specific AI arrives at its conclusions.

The difference, as Rahimi put it, is between "a particular machine learning system being a black box" and "the entire field becoming a black box."

Without a deep understanding of the basic tools needed to build and train new algorithms, researchers creating AIs resort to hearsay, like medieval alchemists. François Chollet, a Google computer scientist in Mountain View, California, adds: "People rely on folklore and magic" (an allusion to "cargo cult science," a term popularized by Richard Feynman in "Surely You're Joking, Mr. Feynman!").

For example, researchers use small ad hoc procedures to tune their AI's "learning rate" — the step size by which the algorithm corrects itself after each error — without understanding why one setting works better than another. In other cases, AI researchers train their algorithms by stumbling in the dark.

They implement, for example, so-called "stochastic gradient descent" to optimize an algorithm's parameters and keep the failure rate as low as possible. Yet despite thousands of academic papers and countless applications of the method, the process still relies largely on trial and error.
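To make the idea concrete, here is a minimal sketch of stochastic gradient descent on a toy one-parameter problem. The data, learning rate, and epoch count are illustrative choices, not from the article; the point is that the "learning rate" the text mentions is a hand-picked constant whose best value is found by trial and error.

```python
import random

# Toy problem: learn the slope w of y = 3x from noise-free examples,
# using stochastic gradient descent on squared error.
data = [(x, 3.0 * x) for x in range(1, 11)]  # true slope is 3.0

w = 0.0               # parameter to learn
learning_rate = 0.01  # the hand-tuned "learning rate" the article mentions

random.seed(0)
for epoch in range(100):
    random.shuffle(data)              # "stochastic": visit examples in random order
    for x, y in data:
        error = w * x - y             # prediction error on one example
        gradient = 2 * error * x      # derivative of (w*x - y)^2 w.r.t. w
        w -= learning_rate * gradient # step against the gradient

print(round(w, 2))  # converges to the true slope, 3.0
```

Pick the learning rate too large and the updates oscillate or diverge; too small and convergence crawls — which is exactly the kind of tuning that, Rahimi argues, is done by folklore rather than theory.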


Gradient descent relies on trial and error to optimize an algorithm — here, searching for the minimum in a 3D landscape.

Rahimi's paper highlights the wasted effort and suboptimal performance that can result. For example, it notes that when other researchers stripped down a state-of-the-art language translation algorithm, the simplified version actually translated English into German or French better and more efficiently — suggesting that the original creators did not understand what those extra components were for.

But sometimes the bells and whistles in an algorithm are the only good part, says Ferenc Huszár, a machine learning researcher at Twitter in London. In some cases, the core of an algorithm is technically flawed, and its decent results are due entirely to the other tricks bolted on top.

Rahimi offers some advice for learning which algorithms work best, and when. For a start, he argues, researchers should conduct "research by elimination," as was done with the translation algorithm: removing one part of an algorithm at a time to see what each part contributes.
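A "research by elimination" study (commonly called an ablation study) can be sketched as a simple loop: re-evaluate the system with each component removed and record the score drop. The component names and the scoring function below are invented for illustration — in practice `evaluate` would train and test a real model.

```python
# Hypothetical ablation loop: remove one component at a time and
# measure how much the score drops relative to the full system.

def evaluate(components):
    # Stand-in for training + evaluating a model built from `components`;
    # each component contributes a fixed, made-up amount to the score.
    contributions = {"attention": 0.10, "dropout": 0.02, "extra_layer": 0.00}
    return 0.70 + sum(contributions[c] for c in components)

full = ["attention", "dropout", "extra_layer"]
baseline = evaluate(full)

for removed in full:
    ablated = [c for c in full if c != removed]
    drop = baseline - evaluate(ablated)
    print(f"without {removed}: score drop = {drop:.2f}")
```

A component whose removal barely changes the score (like `extra_layer` here) is exactly the kind of unexamined "bells and whistles" the translation-algorithm example exposed.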

He also calls for "slice analysis," in which an algorithm's performance is broken down in detail to show how improvements in some areas may come at a cost elsewhere.
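The idea behind slice analysis is to replace a single aggregate metric with per-slice metrics, so that a gain on one kind of input doesn't hide a regression on another. The slices and outcomes below are fabricated purely to illustrate the bookkeeping.

```python
from collections import defaultdict

# Hypothetical evaluation results: (data slice, was the prediction correct?)
results = [
    ("short_sentence", True), ("short_sentence", True), ("short_sentence", False),
    ("long_sentence", True), ("long_sentence", False), ("long_sentence", False),
]

totals = defaultdict(lambda: [0, 0])  # slice -> [num_correct, num_total]
for slice_name, correct in results:
    totals[slice_name][0] += int(correct)
    totals[slice_name][1] += 1

overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"overall accuracy: {overall:.2f}")
for slice_name, (c, t) in totals.items():
    print(f"{slice_name}: {c}/{t} = {c / t:.2f}")
```

Here the overall accuracy of 0.50 masks a large gap between slices (0.67 vs. 0.33) — the kind of hidden trade-off Rahimi wants reported.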

" researchers should test their algorithms with many different conditions and settings, and should report on how the algorithms perform in all cases."

Ben Recht, a computer scientist at the University of California, Berkeley, and co-author of Rahimi's alchemy keynote, believes AI should borrow from physics, where researchers often reduce a problem to a smaller "toy problem." "Physicists are good at using simple experimental designs to explain phenomena from their roots," he says.

Some AI researchers have already begun taking this approach: to better understand an algorithm's internal mechanisms, they first test image recognition algorithms on small black-and-white handwritten characters before moving on to large color photos.

Csaba Szepesvári, a computer scientist at DeepMind in London, believes the field of machine learning also needs to place less emphasis on competitive benchmark testing. Currently, a paper reporting that an algorithm beats some benchmark is more likely to be published than one that reveals more about the software's inner workings.

That is how the over-engineered translation algorithm got through peer review, he said. "The purpose of science is to produce knowledge. Scientists should produce something that others can pick up and build on."

Of course, not everyone agrees with this criticism.

Yann LeCun, chief artificial intelligence scientist at Facebook, worries that shifting too much energy from cutting-edge technology to core understanding may slow down innovation and hinder the practical application of artificial intelligence. He said, "This is not alchemy, but engineering, and engineering is inherently messy."

Yann LeCun responded that throughout the history of science and technology, engineering progress has almost always preceded theoretical understanding: the telescope preceded optics, the steam engine preceded thermodynamics, the airplane preceded aerodynamics, radio and data communications preceded information theory, and the computer preceded computer science.


Ali Rahimi also responded to Yann LeCun's criticism. If you are interested, you can follow the alchemy debate on Reddit (the link includes a video of Rahimi's NIPS 2017 talk):

https://www.reddit.com/r/MachineLearning/comments/7hys85/n_ali_rahimis_talk_at_nipsnips_2017_testoftime/

Recht, however, believes research needs a balance between the "methodical" and the "adventurous": "We need both. We need to understand where failures occur so that we can build reliable systems, and we must push the frontier so that we can build more powerful ones."


Related reports:

http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy
