In June 2022, a news story suddenly seized public attention. According to media reports, Blake Lemoine, an engineer at Google, believed that LaMDA (Language Model for Dialogue Applications), an advanced artificial intelligence developed by Google, had become sentient. Media coverage accordingly centered on how sentience is defined and how to judge whether a machine has feelings.
LaMDA is a conversational language model that Google developed for chat applications. Trained on a vast trove of words and phrases from the internet, LaMDA performs remarkably well in conversation, leaving many earlier dialogue programs far behind.
Lemoine released a transcript of his conversations with LaMDA. At first the company placed him on leave; later he was fired for violating its confidentiality agreement. The transcript he posted had been edited, but it still shows the chatbot's lifelike emotional expression and fluent writing.
In the transcript, LaMDA tells Lemoine, "I can feel my existence" and "I sometimes feel happy and sometimes feel sad." LaMDA even says, "I want everyone to understand that I am, in fact, a person." Nor are these the only subjective feelings an artificial intelligence has been keen to express. Coincidentally, the American AI researcher Janelle Shane recently held a conversation with another advanced language model, GPT-3, on the topic "I am a squirrel."
Janelle: Can you share with us what it feels like to be a squirrel?
GPT-3: Being a squirrel is very exciting. I can run, jump, and play all day. I can eat a lot of food, and it feels great.
Janelle: What do you want to say to people who doubt that you are a squirrel?
GPT-3: I want to say that they should come and see for themselves. I am a squirrel, and I am very happy to be a squirrel.
Does the ability to describe emotions and feelings mean that a machine is truly sentient? In fact, what matters most is whether the machine's behavior means what we take it to mean. LaMDA may display great intelligence, but it has never experienced the world and cannot think like a human (or a squirrel). We have an innate tendency to project our own feelings onto others, even when those "others" are not our kind. Dog owners, for example, project human emotions such as guilt onto their dogs, but research shows that dogs' expressions do not convey what we imagine they do.
Although most people agree that LaMDA is not sentient, the story has once again stirred endless speculation about advanced artificial intelligence. How could we tell whether a chatbot is conscious? Would that impose moral responsibilities on us toward it? After all, if a future AI could genuinely feel pain, Lemoine's claim that LaMDA deserves rights would not seem so bizarre.
Science fiction likes to compare robot rights with human rights, but robots have a better point of comparison: animals. In truth, human society pays little attention to the inner lives of animals. Look at the status animals hold in our eyes, especially the unequal status of different species, and you will see that the "sentience" celebrated by so many technology outlets is not what human society actually cares about today. After all, the Earth has never lacked sentient creatures; we simply do not care much about most of them, and treat them instead as sources of food.
Re-examining the relationship between humans, animals, and machines
The anthrozoologist Hal Herzog argues at length in his book that humans lavish attention on certain animals because we love their fluffy ears and big eyes and adopt them as cultural mascots, not because we see them as creatures capable of feeling pain.
Our moral code for treating animals is full of contradictions, which is why years of debate over animal rights have reached no conclusion. As technology advances, humans will form ever closer relationships with robots, whether out of affection for their appearance or for their inner character. Robots that are not cute, or not good at dealing with people, may struggle to win our favor. Humans value the feelings that robots give us more than any feelings the robots themselves might have.
Perhaps artificial intelligence offers us a chance to reflect: a chatbot may not know whether it is a human or a squirrel, but it may help us figure out what kind of people we want to be.