Author: Cai Hengjin, Zhuoer Zhilian Research Institute; professor and doctoral supervisor, School of Computer Science, Wuhan University, Wuhan 430079
Reprinted in: Renmin University of China Reprint Series, "Philosophy of Science and Technology", 2020, Issue 10
Original journal: Journal of Shanghai Normal University (Philosophy and Social Sciences Edition), 2020, No. 4, pp. 87-96
Keywords: cognitive structure / symbolism / connectionism / transcendence
Abstract:
1. Cognitive attractors and the origin of consciousness
All products of human thinking, all fragments of consciousness, can be understood as cognitive attractors (kanxian): they are reflections of, and disturbances to, the real physical world, and at the same time embodiments of human free will. A cognitive attractor is a structure that is consistent across cognitive subjects and can be used for communication between them. Qualia such as "sour, sweet, bitter, spicy", or an internet coinage such as the "melon-eating crowd" (onlookers), are elementary attractors: once such an attractor is proposed, more and more people come to feel a sense of recognition in it. Structures such as self-awareness, religion, belief, and national consciousness can all be abstracted as attractors; wealth and the rules of games are likewise attractors of different kinds. An attractor gives people a sense of familiarity that cannot be dispelled. It is worth emphasizing that once such fragments of consciousness acquire the capacity to spread and to live on, they become indelible. ①
The biggest difference between humans and machines lies in the ability to command consciousness. ② Humans can freely direct their own consciousness; machines, although they acquire some fragments of human consciousness in the process of being built, cannot control or change them as needed. Self-awareness is not mysterious. It develops out of the most primitive attractor: the "self" and the "outside world" opposed to it. In other words, the most primitive cognitive attractor is the dichotomy between the self and the world, and all other attractors are established on top of it. The attractor "I" was very vague at the beginning. For a single cell, it may amount to no more than a distinction between inside and outside; for humans, the starting point of the "I" may be the physical boundary of the skin. In the crucial stage when the "self" grows rapidly, humans' sensitive skin exposes the "self" to more external stimulation, which promotes the division between "self" and "outside". This is the main content of the "tactile brain hypothesis". ③ Because of their strong self-awareness, humans explore the truth of the external world more deeply, and information about the external world in turn enriches each person's self-awareness. Fragments of consciousness, that is, cognitive attractors, constitute a world of attractors: the world of human consciousness. By dividing the whole world into the world of attractors and the material world, many other frameworks can be better understood. Although we cannot be separated from our physical carriers, we have the capacity to act independently, a vitality of our own, and self-affirmation needs. ④ The meaning of life is the expansion of the boundaries of the "self"; expansion along different dimensions corresponds to meaning and value in those dimensions.
From the perspective of reductionism, it is easy to conclude that neither machines nor humans possess divinity (or transcendence); yet humans do have transcendence. The cognitive attractor "infinity" is an example: "infinity" can be neither seen nor touched, yet almost everyone acknowledges its existence. Likewise, some people firmly believe in the existence of God while others cannot agree, and although the two sides hold opposite views they can coexist in real life and even cooperate in certain fields. This is because the attractor of belief has been articulated and refined by humans: it is no longer a concrete existence in the physical world but belongs to the spiritual world, the world of attractors. The world of attractors is as important as the physical world, sometimes more so. For a devout Christian, God may matter more than his own life; this belief shapes his behavior, including prayer, worship, and observance of the Bible, and these things are plainly more important to him than a glass of water that really exists in the physical world. Very often, things that humans themselves have constructed become more important than physical entities: they not only influence present human behavior but can change the direction of human development.
Human cognitive attractors are closely related to the formation and development of the "self". If human cognitive attractors and machine cognitive attractors could be connected, a qualitative breakthrough in AI would likely follow. One family of artificial-intelligence algorithms emphasizes the attention mechanism, but attention is a higher-level concept; beneath it is still the working of cognitive attractors, a more basic concept. Take color as an example. The simple physical explanation is that mixtures of wavelengths produce the rich colors "red, orange, yellow, green, cyan, blue, violet", but how did humans create words to represent these colors? Researchers who have studied color expressions across many languages found that in almost all languages the earliest color terms are black and white, followed by red. ⑤ Red things are in fact very common in the physical world, but in the early stages people had not yet formed the corresponding attractor; perhaps there were too many colors in view, so red was hard to single out. As languages grew more complex, blue, green, yellow and the rest developed in the same way. Similarly, the bodily structure of Asians and Europeans is essentially the same, yet Europeans and Americans do not distinguish the acupuncture sensations of suan (soreness) and zhang (distension), or the taste sensations of ma (numbing) and la (spicy), while our distinctions among red wines or chocolates are not as fine as theirs. These phenomena all show that cognitive attractors appear first, and attention follows. A machine's attention mechanism alone is therefore not enough to form continuous consciousness or cognitive attractors. Only by shaping artificial intelligence from a more fundamental level can machines achieve "understanding" rather than mere "storage".
2. The three major schools of AI complement each other
The Chinese word for "understanding" can be translated into English in two ways: "comprehend" and "understand". The prefix "com-" in "comprehend" carries a broad sense of taking in, of encompassing more content. The etymology of "understand" is more controversial: some hold that "under-" is really "inter-", while others explain it via "undertake". We suggest that the gist of "understanding" can be taken as "standing at a more fundamental level". In research, if one can start from a more fundamental perspective and integrate various theoretical viewpoints, that can be regarded as "understanding". Table 1 summarizes the characteristics of the three major AI schools (behaviorism, connectionism, and symbolism) and the philosophical traditions corresponding to them.
Table 1: The three major AI schools and their philosophical counterparts

| AI school | Characteristic activities | Related fields | Philosophical counterpart |
| --- | --- | --- | --- |
| Behaviorism | Reacting, practicing | Cybernetics, robotics | Embodied philosophy |
| Connectionism | Learning, thinking, extending, routinizing | Deep learning, reinforcement learning | Philosophy of mind, phenomenology |
| Symbolism | Deducing, inducing, comprehending, understanding | Logic, mathematics | Platonism |
The hallmark of comprehending is that it requires absorbing a large amount of rich content and then digesting it: for example, studying electromagnetism until it condenses into Maxwell's equations, or Einstein's pursuit of a grand unified theory.

The three major schools of AI are familiar, but the relationship among the three has not been sorted out very clearly. The first is behaviorism. Professor Rodney Brooks of MIT can be regarded as a behaviorist: his research career has been devoted almost entirely to behaviorist AI, and he built a machine that imitates a mantis. Put simply, the machine responds according to external stimuli. Most behaviorists hold that consciousness is not only a matter of the brain but of the entire body, and behind this lies the idea of embodied philosophy.
The second is symbolism. The worlds of mathematics and physics are full of logical symbols, and the Turing machine itself can be regarded as a symbolist undertaking. Herbert A. Simon, winner of both the Turing Award and the Nobel Memorial Prize in Economics, is a representative of symbolism; his "physical symbol system" hypothesis was proposed to study human thinking from the perspective of information processing. ⑥ But symbolism cannot fully succeed, because rules can never be defined completely or exhaustively: however large the scope, something will always be missing outside the framework. The philosophical ideas behind symbolism have much in common with Platonism; both believe in, or rest upon, the existence of an "essence". If the essence could be discovered and defined, or its formula written down clearly, then everything else would merely be the unfolding and deduction of that essential formula (as in an axiomatic system).
The third is connectionism. Researchers discovered early on that there are many connections between neurons and that information is transmitted through electrical discharges. Connectionism originally set out to simulate the brain, and deep learning and reinforcement learning can be regarded as its applications. At the same time, many researchers hope to find new frameworks, even artificial general intelligence (AGI); they believe that deep learning and reinforcement learning are not enough to simulate the learning of the human brain. Zhang Bo, Dean of the Institute for Artificial Intelligence at Tsinghua University, is among them.
What is the relationship between behaviorism and connectionism? Behaviorism can be understood through animal behavior. Animals, simple life forms, and even single-celled organisms can cope with external stimuli, and behaviorism is chiefly about simulating the reactions or reflexes in such action. For example, badminton players usually undergo a great deal of training so that their bodies form memory-like reactions; on the court, the athlete's attention is no longer on how the muscles coordinate but on tracking the shuttlecock and playing the opponent. Behaviorism concerns these bodily movements, and what the subject must do is practice how the brain controls and coordinates the body. Such practice reflects the movement "from the extensive to the concise": a large number of complex stimuli are finally condensed into a few representative patterns of reaction. In the early stages of children's mental development there is much behaviorist content; as the individual grows and the brain continues to develop, the content of connectionism gradually increases.
Connectionism is also related to symbolism. Symbolism can be regarded as content that has been condensed and refined to an extremely concise level, forming symbols or models. For example, the ancient saying "the sky is round and the earth is square" is a minimalist world model, a highly abstract description. Most modern roads are straight, but the unprocessed environment the ancients saw was undulating, and to abstract "square" under such conditions was remarkable. With such a model, road building is guided and it becomes harder to lose one's bearings when marching and fighting; that is, grasping the model or not leads to real practical differences. The "Yin and Yang" model likewise has great interpretive power: to this day people still use Yin and Yang to explain what happens in the world. Such is the profound influence a refined, minimalist model can have. Formal logic, and axioms such as Euclid's in mathematics, are also minimalist models. These models can make the world seem magical, as if the physical world really did unfold according to formulas alone, but that is not always so: the external world we face is more complex than any formula, and no formal system is complete. One of the twenty-three problems Hilbert posed in 1900 asked how to unify mathematics on a system of axioms; it followed the idea of Leibniz, namely, to find a system of symbols that could simulate the entire world. ⑦ Many scholars, especially symbolists, have harbored this dream; Einstein, for example, sought a unified equation, but the dream could not be realized in the end.
3. Limitations of Symbolism
Gödel's incompleteness theorem shows that for any given axiomatic system we can always find a proposition that can be neither proved nor refuted within it; that is, there will always be something beyond the axioms. Put another way, no matter how many rules are listed, there is always content they fail to cover. A classic example is Zeno's paradox (the Achilles paradox). Achilles was a famously fast runner of ancient Greece, yet the paradox claims he can never catch up with a tortoise. The conclusion seems absurd: common sense says he overtakes the tortoise in a few strides. But the argument runs logically. Suppose Achilles runs ten times as fast as the tortoise and starts 100 meters behind it. When Achilles has run 100 meters, the tortoise has crawled 10 meters ahead; when Achilles covers those 10 meters, the tortoise has moved another meter; when Achilles advances that meter, the tortoise has gone a further 0.1 meters, and so on. There is no error in the argument itself; the problem is that the descriptive system used in the argument has boundaries. Within its own limits the reasoning is correct, but because the time of this closed system is not open, Achilles can never cross the system's temporal boundary, and so within it he never catches the tortoise in space.

The paradox shows only that the assumptions themselves may be limited, and that the assumed world is not the real world. Symbolism likely faces a similar problem: however many rigorous rules are formulated, things will always really happen that the rules do not cover. It is therefore not hard to see why symbolism fails; it cannot cover all possibilities. Connectionism, by contrast, keeps iterating, moving from the extensive to the concise and back again, and can always grind its way to a better state. Today's deep learning has not yet reached that state, and there is still room for improvement: one bottleneck is the small-sample problem. The human brain has a process of condensing and refining information, which helps it break out of difficulties, such as very small samples, that machines meet in deep learning.
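As an aside, the infinitely many catch-up segments in the Achilles argument form a geometric series with a finite sum. A short numeric check (an illustrative sketch, not part of the original article) makes the bounded limit explicit:

```python
# Illustrative sketch: summing Zeno's catch-up segments shows they
# converge to a finite distance, so the "paradox" never escapes a
# bounded region of space (and, correspondingly, of time).
def zeno_total_distance(lead=100.0, speed_ratio=10.0, steps=60):
    """Distance Achilles covers over successive catch-up segments."""
    total, segment = 0.0, float(lead)
    for _ in range(steps):
        total += segment
        segment /= speed_ratio  # each segment is 1/10 of the previous
    return total

print(zeno_total_distance())       # approaches ~111.11 meters
print(100.0 / (1.0 - 1.0 / 10.0))  # closed-form limit of the series
```

The partial sums never exceed 1000/9, roughly 111.11 meters, which is exactly the point where Achilles draws level; the argument's descriptive system simply never looks beyond that boundary.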
Symbolism is idealistic: it hopes to divine the most basic rules of the world of ideas and from them construct all the laws of the world. But as discussed above, no perfect, all-covering set of rules exists. The computer I am using now is a Turing machine; it too is symbolist and follows rules. Why, then, can it produce something new? We think the reason is that the Turing machine is not closed. Turing himself proposed the idea of a Turing machine equipped with an oracle: the halting problem cannot be solved within a Turing machine, but with an oracle it may become decidable whether a computation halts. ⑧ The data fed to deep learning can now be regarded as such an oracle, which sounds paradoxical: we use a Turing machine, and a Turing machine is rule-bound. But the point is that interaction with the outside world is not fixed. If the data set were completely certain never to be updated, the Turing machine would produce nothing new; the world we face, however, constantly supplies new input, so the system is not a Turing machine in the absolute sense. ⑨
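Turing's result can be glimpsed in a few lines. The following diagonalization sketch (our illustrative example, not from the article) shows why no program `halts` can correctly decide halting for all programs:

```python
# Sketch of the classic diagonal argument against a halting decider.
def make_contrary(halts):
    """Build a program that defies whatever `halts` predicts about it."""
    def contrary():
        if halts(contrary):   # decider says "contrary halts"...
            while True:       # ...so loop forever, contradicting it
                pass
        # decider says "contrary loops", so halt, again contradicting it
    return contrary

def naive_halts(program):
    """A stand-in decider; any total decider is wrong somewhere."""
    return True

contrary = make_contrary(naive_halts)
# naive_halts claims `contrary` halts, yet running contrary() would loop
# forever; a decider answering False would be refuted symmetrically.
print(naive_halts(contrary))
```

An oracle, in Turing's sense, is precisely a black box assumed to answer such questions from outside the system of rules.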
We do not yet know why deep learning works so well; it remains an inexplicable black box. One reason is that the features deep learning captures have little to do with the features humans capture. We may give someone a nickname based on his appearance, and others understand it because the nickname seizes on prominent features. A machine's features, by contrast, are like grabbing eyebrows and beard in one fistful: humans cannot make sense of the hundreds of millions of parameters a machine produces.
We are trying, within the existing framework of deep learning, to capture features that humans can understand. The original deep learning process was opaque, and we now want to make it clearer: on the one hand, compressing the model size as much as possible; on the other, loosening the model to incorporate more features in image recognition. We hope that networks trained in this way will behave more like humans, abstracting features humans can understand, such as ears, eyes, nose, mouth, and chin, instead of extracting hundreds of millions of parameters as before. If this continues, humans and machines should eventually be able to understand each other, and the concept behind this idea is the cognitive attractor.
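As a toy illustration of the compression half of this program (our own sketch, not the authors' actual method), magnitude pruning shrinks a trained layer by keeping only its largest weights, the ones most likely to carry salient features:

```python
import numpy as np

def magnitude_prune(weights, keep_ratio=0.25):
    """Zero out all but the largest-magnitude fraction of weights."""
    flat = np.abs(weights).ravel()
    k = max(1, int(flat.size * keep_ratio))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))      # stand-in for one trained layer
pruned = magnitude_prune(w)
print(np.count_nonzero(pruned))  # 4 of the 16 weights survive
```

A model trimmed this way is far from human-interpretable on its own, but reducing millions of parameters to a handful of dominant ones is one small step toward features people can inspect.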
When we walk into a classroom during the day, we may not register the lights or the style of the curtains; we first notice the people sitting inside or the content shown on the slides. People generally observe and understand the world this way, attending to a few focal points rather than every detail. A machine distinguishes the environment by pixels: a camera placed in the classroom stores everything in its field of view as pixels. What people attend to are cognitive attractors; the people, tables, and lamps here, or a particularly striking color or pattern, are all attractors. The world is too complex for a human individual to know every detail, yet many abilities are learned quickly after birth, during the period of rapid brain development, often without our even being aware of it. Consider something very difficult for a machine: a cat walks in, and both the human and the machine see it; then the cat walks behind a chair, leaving only its tail visible. A person knows effortlessly that it is still the same cat, but building this seemingly easy "intuition" into a machine is quite hard. The difference is that humans recognize the environment through cognitive attractors and treat the cat as a whole rather than a combination of details. So although the cat moves about and even hides most of itself, and the pixels of the image change drastically, in human consciousness it remains one cat; a machine cannot yet do this. We are now trying, on a connectionist basis, to feed fragments of consciousness into the machine and let it learn like a human, which may make the features it captures easier for people to understand.
Another problem with current approaches is that some images that are easy to recognize, such as a traffic stop sign, may fool the machine once a few pixels are altered. A machine can even be trapped in an infinite loop: if the person in a picture wears a QR code, the machine will first recognize the easily read code, and if that code points back to the picture of the person, the machine can easily loop forever.
4. Advanced intelligence is unique.
The West has not spoken directly of cognitive attractors, but similar ideas are already reflected there. Hinton, for example, believes artificial intelligence should start from the "self", and he has declared that his direction is artificial general intelligence. Many scholars object that Hinton's deep networks plainly contain no symbolist or behaviorist content, so how could they amount to general intelligence? In fact, connectionism can be regarded as a bridge between behaviorism and symbolism: developed toward one end it reaches symbolism, advanced toward the other it reaches behaviorism. When connectionism leans toward symbolism, brain activity plays a larger part, the world is condensed further, and minimalist symbols finally form; when it leans toward behaviorism, the brain's share is smaller and interaction with the world proceeds mainly through physical reactions. Humans must act in the world, and in this respect are closer to behaviorism.
Human intelligence is unique, and this is inseparable from the individual's process of growth. People consider Mozart a musical prodigy and still listen to his music today. If we ask why, the usual answers are innate inheritance or acquired learning, but neither suffices. If it were inheritance, Mozart's parents were not famous musicians; if it were acquired learning, no teacher of Mozart's is known to have surpassed him. Why, then, could he show such extraordinary musical talent from childhood and become famous the world over? This is the uniqueness of advanced intelligence, and it can be understood through cognitive attractors. There is a critical period in the brain's rapid development. At first the external world is chaotic to the child, whose attractors are still very limited; he may distinguish only the simplest things. But at a certain point he suddenly finds that the world has regularity: this person is called father, that one mother; this is a chair, that is a table. The world suddenly becomes clear to him. Most people use their mother tongue as the principal medium for understanding the world. One can imagine that during this critical period Mozart came to know the world not through words but through sound and music. His sensitivity to sound differed from ordinary people's, and it then becomes easier to see how he could surpass his parents and teachers. The same holds for the difference between learning one's mother tongue and a second language. Children usually learn their mother tongue very quickly: they begin to speak at about one year old and can talk like adults by three.
Yet we can study a second language for ten years and still not speak it fluently. Before we learn our mother tongue the world is chaotic; after we learn it, the world becomes clear. A second language carries no such urgent need to connect with the world, so it is learned slowly.
Advanced intelligence is unique, and this holds for humans and machines alike. A machine's uniqueness does not mean simplicity or uniformity: DeepMind disclosed in its paper that AlphaZero can learn not only Go but also, with ease, shogi and chess. The collective intelligence of humans as a society must not be confused with the intelligence of individual humans. Einstein and Riemann achieved greatness in physics and mathematics, but that does not mean they were necessarily good at poetry and song. Different people have their own expertise in different fields and may know nothing of others. This too is characteristic of being born human rather than an omniscient, omnipotent "God".
Since human intelligence is individually different and unique, what reason have we to demand that a machine surpass humans in every respect before it counts as strong artificial intelligence? AlphaZero has defeated human champions in board games; AlphaFold has predicted protein folding far beyond human scientists; AlphaStar has repeatedly beaten professional players in complex competitive games. We may well believe that in the foreseeable future machines will surpass humans in one field after another. These AIs need not be collected by the thousand or ten thousand and linked together to become strong artificial intelligence. Rather, just as human intelligence is unique, so is machine intelligence, and strong artificial intelligence is being realized one instance at a time.
Although strong artificial intelligence is being realized, it is neither possible nor necessary to build a machine with an omniscient, omnipotent God-like character (such as an all-encompassing AGI). Future artificial-intelligence systems will necessarily be unique and diverse, and they have two essential elements.
The first is the machine's subjective kernel, like a seed of self-awareness that can grow naturally in a suitable environment. Machines need not undergo the long evolution humans did, but they need a kernel, and different machines' kernels differ. The choice of kernel allows great freedom, and its nature ultimately determines the speed and extent of the system's evolution, much as different initial conditions lead a physical system to evolve into different configurations. The kernel gives the intelligent system a tendency toward natural growth (self-affirmation needs). It is, in effect, the machine's view of the world, and it is global in character: it understands the entire world from the standpoint of its own growth, in the sense that "all things are complete in me". The kernel of a machine is very close to the human: it can be given to the machine by humans, or acquired by imitation, and in the machine's process of internalizing and refining it, external people or agents will be involved.

Table 2: Four levels of the problem of consciousness (after Aaronson and Tegmark's classification)

| Level | Question | Proposed solution |
| --- | --- | --- |
| Really Hard Problem | Why does consciousness appear? | Tactile brain hypothesis (12) |
| Even Harder Problem | How do physical properties determine qualia? | Law of cognitive attractors (13) |
| Pretty Hard Problem | What physical properties distinguish conscious systems from unconscious ones? | Self-affirmation needs (14) |
| Easy Problems | How does the brain process information? How does intelligence work? | Artificial neural networks (15), robots (16) |
Consciousness can distill content out of seemingly unrelated material, and this is a manifestation of intelligence. Machines are in a sense conscious too: when we build one, we have intentions and expect it to complete a certain task, so human consciousness is involved; the code we write and the parts we make are materializations projected from human consciousness. But a machine's self-awareness is weak at present. Its main program is a very fragile "self": one bug, one faulty part, and the whole system may crash. The human "main program", our self-awareness, is by contrast very strong and dominant. Machines do not yet have a dominant self-awareness, and this is the fundamental difference between humans and machines; how to give machines a stronger self-awareness is the key to unlocking AI's rapid development. Deep learning offers the possibility of continuous interaction between machine and world, but today's deep learning leaves machines unable to solve the problem of boundaries. Boundaries are abstract, yet they are the starting point of human consciousness. We proposed the tactile brain hypothesis to explain the origin of human intelligence; should the creation of machine intelligence therefore also start from touch? In practice we tend to start from vision, because machines start from a different point than humans: they have already received some fragments of consciousness from humans, and visual data are richer and easier to process. In the later stages of intelligent development, touch is a vaguer channel than vision, and vision localizes better.
6. The transcendence of advanced intelligence
Chomsky and other scholars believe that the origin of human beings is still hard to understand, let alone the making of machines similar to humans. (17) Such views are essentially Platonist and differ greatly from Chinese philosophy. Chinese philosophy holds that people are open and can change in different directions. Platonism is essentialism: it holds that the world has an "essence" residing in the world of ideas on the far shore, unreachable by humans, of which the physical world is merely a projection. The problem is this: if that essence is held to be unreachable, how does one know that humans are imperfect? How can one be sure the far shore really holds an essence? In the past, theology could barely answer: God knows, or the soul was once there in a previous life and, back on this shore, remembers something of the other. But this cannot be explained from the perspective of evolution, and no one can ever have been there.
The transcendence of advanced human intelligence is indeed difficult to understand within traditional theories. Why does a person suddenly have a flash of inspiration and think of something that never existed before? Why do people have intuition or sudden enlightenment? Mozart, discussed above, is a good example: a unique person whose musical achievement seems to "create something from nothing". Li Bai's poems, likewise, have proved hard for later generations to surpass; so has the speed with which children learn their native language. All of these are difficult to understand from the machine's point of view. We believe humans can create the new for two reasons. On the one hand, through millions of years of evolution, repeated iteration of function and structure has given humans the basic conditions for transcendence. On the other hand, the subject happens upon situations in which brain and body form a new fit with an accidental new environment, producing an expression different from the past, sometimes a marvelous one. Epiphany and intuition are such accidental encounters between subject and environment, which give rise to new attractors. The accident can be understood through a vivid example: a flock of chickens raised in captivity for generations has forgotten how to fly; if a dog suddenly breaks in, a chicken trying to run faster beats its wings for help and suddenly finds that it can fly. The chicken can fly only because it has wings and is stimulated by a special environment.
There are many such latent possibilities in the human brain. They are hard to detect in the absence of a specific environment, but when a new environment is encountered, new cognitive attractors arise. Hence human creativity and transcendence are possible, and it is not the case, as strong reductionism suggests, that everything was written into an equation in the first moments of the Big Bang. When existing attractors do not suffice, new attractors are found. Consider the lines Cui Hao wrote on climbing the Yellow Crane Tower: "The men of old have ridden off on the yellow crane; here, empty, the Yellow Crane Tower remains." This is an attractor, written so superbly that later generations have struggled to find a better expression. Or consider internet language: people once spoke of "soy-sauce passersby" and now speak of the "melon-eating crowd"; these too are attractors, hard to replace at will. A cognitive attractor names a very subtle situation with rich connotations that is difficult to replace with anything else. The work of scientists, poets, and painters is to create new attractors. Some have great vitality: Cui Hao's "Yellow Crane Tower", which the public has always judged the finest; or Van Gogh's paintings, which were scarcely appreciated at first but which everyone now finds more attractive than other works. Such attractors do not necessarily prevail because they conform better to the physical world. This is the manifestation of human transcendence: humans actively interact with the world according to their own understandings, rather than having their behavior determined directly by the environment.
An almighty God has no intelligence and needs none, for he knows everything and never has to choose. Intelligence arises precisely where the subject's cognition differs from the actual situation; difference is where intelligence begins. A stone participates in gravitational and electromagnetic interactions and moves when disturbed, but it merely reacts and has no intelligence. When an external stimulus arrives, the subject may choose not to respond, and this is the manifestation of intelligence. Only by contending with physical inevitability does intelligence attain transcendence, leading us to try to change the world and to build a world of ideas. Human beings grow ever stronger: we can send people to the Moon, build super dams, and make quantum computers. This has genuinely altered the workings of the physical world, a feat that, for all we know, may have been accomplished in the entire universe only by humans on Earth.
7. Philosophical guidance for artificial intelligence
What kind of artificial intelligence we should develop is indeed a question that needs to be thought through clearly. Autonomous driving, for example, is clearly not a problem in a limited, closed environment; whether it should continue to be developed beyond such environments is worth discussing. If one holds that autonomous driving cannot be made closed, then it may cause accidents because it cannot handle unknown situations, and on that view it should be restricted. If instead one accepts this lack of closure, one should accept a certain degree of uncontrollability and a nonzero accident rate, and on that basis keep pushing the boundaries of intelligence outward so as to reduce the accident rate. The need for self-affirmation drives people to expand their boundaries, and since machines are designed by humans, people will surely push the boundaries of machines outward as well. Only on the premise of accepting this can we go on to discuss what we can do.
Whether with current machines or with deep learning, even for models we have trained ourselves we cannot control when something will happen or what it will be. Even if we believe an autonomous-driving model has been trained and can now recognize cars and roads, unexpected errors may occur when it encounters situations it has never seen before (say, an unusual advertising image). Faced with data unlike anything in its experience, it still renders a judgment; it is still forced to express an opinion. Such an unexpected mistake may affect the operation of the entire world.
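The point that a trained model must always render some judgment, even on inputs unlike anything it was trained on, can be illustrated with a minimal sketch. The classifier below is a hypothetical stand-in (random weights, not any real driving model): a softmax output always sums to 1 and always names a winning class, however alien the input.

```python
import math
import random

random.seed(0)

# Hypothetical "trained" weights for a 3-class linear classifier
# (stand-ins for, say, "car" / "road" / "sign" in a driving model).
W = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]

def softmax(z):
    m = max(z)                      # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# An out-of-distribution input: pure noise, nothing like any training data.
unseen = [random.gauss(0, 1) for _ in range(8)]

logits = [sum(w * x for w, x in zip(row, unseen)) for row in W]
probs = softmax(logits)

# The model has no way to say "I don't know": the probabilities always
# sum to 1 and one class always wins, however strange the input is.
print(round(sum(probs), 6))   # 1.0
print(probs.index(max(probs)))
```

The design point is that softmax normalization structurally rules out abstention; any "I don't know" behavior has to be engineered on top of it.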
Some researchers hope to find a set of rules that unifies current deep learning, but we believe this approach is not advisable. In our view, the essence of learning is continual interrogation after abstraction, and the process by which people think and learn is similar. The limitation of reinforcement learning is that at present it is global reinforcement; some ask whether local reinforcement could be done instead, but that too is difficult, and it is still reinforcement learning all the same. If a machine cannot develop a strong sense of self and open up cognitive attractors, the traditional framework will find it hard to give the machine genuine transcendence.
Whether or not we believe in the existence and necessity of a philosophy of artificial intelligence, our discussions of artificial intelligence will inevitably involve philosophy. Philosophy embraces everything and thinks about the most abstract things, the most important of which is the problem of consciousness, or the relationship between consciousness and matter. Many schools set the problem of consciousness aside; we instead study the relationship between intelligence and consciousness and try to clarify what can be explained clearly. Our innovations therefore appear more philosophical, largely because every problem of artificial intelligence is in the end philosophical, including whether machines can have philosophy, emotions, and ethics, and whether they understand goodness and love. In the theory of cognitive attractors, love, ethics, and religion can all be reduced to cognitive attractors, and every cognitive attractor stems from that earliest bit of consciousness capable of distinguishing inside from outside. At that moment the physical boundary is crucial; for humans the initial physical boundary is the skin, which is why touch is so important. At the machine level, it may be necessary to start from vision in order to guide the machine to form a relatively strong consciousness of "self."
In the future, training machines will resemble raising children: both require interaction with the outside world under the guidance of a subject. Even so, risks remain. Children, as we raise them, may develop in a bad direction, yet we still choose to raise them; that is human nature. This is unlike the "dark forest law" of The Three-Body Problem, which holds that whatever might become bad should be killed first. By that logic, all descendants would be eliminated to forestall future trouble, which is unreasonable. We all know that AI carries risks, and we do not know the path ahead or where it ends, but in the ultimate sense we still choose to believe in goodness and in the future.
Notes:
① Cai Hengjin: "Cognitive Attractors as Existence of Non-Attachment," Qiusuo (Seeker), No. 2, 2017.
② Cai Hengjin, Cai Tianqi, Zhang Wenwei, Wang Kai: Prequel to the Rise of Machines: Self-Awareness and the Dawn of Human Wisdom, Tsinghua University Press, 2017, pp. 174-181.
③ Cai Hengjin: "Tactile Brain Hypothesis, Original Consciousness and Cognitive Membrane," Philosophy of Science and Technology, No. 6, 2017.
④ Cai Hengjin: "On the Origin, Evolution and Future of Intelligence," People's Forum · Academic Frontiers, October 2017.
⑤ P. Kay, K. McDaniel, "The linguistic significance of the meanings of basic color terms," Language, 1978, 54(5).
⑥ H. A. Simon, The Sciences of the Artificial, The MIT Press, 1996, pp. 37-40.
⑦ Hu Jiumin: Hilbert's Tenth Problem, Harbin Institute of Technology Press, 2016, pp. 1-9.
⑧ T. Ord, "The diagonal method and hypercomputation," The British Journal for the Philosophy of Science, 2005, Vol. 56, No. 1.
⑨ Xu Yingjin: Mind, Language and Machines: A Dialogue between Wittgenstein's Philosophy and Artificial Intelligence Science, People's Publishing House, 2013, pp. 97-107.
⑩ Scott Aaronson, "Why I am not an integrated information theorist," 2014, http://www.scottaaronson.com/blog/?p=1799.
(11) Max Tegmark: Life 3.0: Being Human in the Age of Artificial Intelligence, Zhejiang Education Press, 2018, pp. 373-418.
(12) Cai Hengjin: "Tactile Brain Hypothesis, Original Consciousness and Cognitive Membrane"; Cai Hengjin, Zhang Jingyun: "Original Consciousness: The Original Author of Free Will," Ehu Monthly, No. 11, 2018.
(13) Cai Hengjin: "Cognitive Attractors as Existence of Non-Attachment," Qiusuo (Seeker), No. 2, 2017; Cai Hengjin: "Condensation and Diffusion of Consciousness: A Reply to the Chinese Room Argument on Machine Understanding," Journal of Shanghai Normal University (Philosophy and Social Sciences Edition), No. 2, 2018.
(14) Cai Hengjin: "The Historical Positioning of China's Rise and the Entry Point for Transforming the Development Mode," The Emergence and Circulation of Wealth, No. 1, 2012; Cai Hengjin, Cai Tianqi, Zhang Wenmo, Wang Kai: Prequel to the Rise of Machines: Self-Awareness and the Dawn of Human Wisdom.
(15) Cai Hengjin: "On the Origin, Evolution and Future of Intelligence".
(16) Cai Hengjin, Cai Tianqi, Geng Jiawei: "Blockchain System with Human-Computer Intelligence Integration", Huazhong University of Science and Technology Press, 2019 edition.
(17) S. Pinker, Learnability and Cognition: The Acquisition of Argument Structure, The MIT Press, 2013, pp. 5, 415-440 (first edition published in 1989).