Hello AI, Are You Human Too?
Exploring artificial intelligence from a psychological perspective
Equal "work," equal "treatment"?
In the early 1990s, I attended a psychology lecture on the interaction between humans and computers. The lecturer was an expert in artificial intelligence (AI). At the time, AI was still a relatively unfamiliar term, and experts substituted the well-known word "computer" for it. He raised a question that no one had considered: if we use computers as research subjects in psychology, should we give them the same treatment as human subjects?
In psychological research, a set of rules protects the rights and interests of subjects. The most basic include: not violating ethical principles; informing subjects of the purpose of the research; ensuring participation is voluntary; protecting the privacy of personal information and data; avoiding physical and mental harm; and granting the right to know the research results. How do these principles, written for humans, apply to computers?
Most of those present were psychology professors, graduate students, and related professionals. The first reaction of many was: "Computers are not humans and do not need these protections." The lecturer replied, "If computers can perform the same tasks as human subjects when interacting with people, they should receive the same treatment."
This statement immediately stirred controversy among the audience, because it touched on a sensitive issue: if one day a high-functioning computer can do everything that humans can do, will it be considered a "person"? Should it be granted "computer rights"?
The book of Genesis says that God created man from the dust of the ground in His own image and breathed the breath of life (the soul) into his nostrils; the book of Ecclesiastes says that God has set eternity in the human heart. The prerequisite for being a "person" is to have a soul. But AI has no soul; it is merely a machine learning model created by humans, without consciousness or emotion.
AI is also different from living things: it lacks the subjective experience, emotions, and self-awareness of living organisms. No matter how advanced its cognitive or information-processing capabilities become, a robot cannot feel or understand its own existence. Its responses are generated by a computational system without real thinking; it has no empathy and cannot put itself in others' shoes to see things from different angles, so it does not have a complete capacity for moral judgment.1
People have personalities; machines and robots do not. However, AI can interact with people by simulating different tones, styles, and attitudes based on the input it receives and the patterns it learns from existing data. Although it has no real personality, it can be set to simulate different personalities and play the role vividly.
In 2023, I asked ChatGPT the question that the AI expert had raised 30 years earlier: "When artificial intelligence like you is involved in psychological research, should it be given the same type of protection as human research subjects?" Its answer was yes.
Does AI really think of itself as a human being?
A closer look reveals that ChatGPT was not referring to the rights and interests of AI itself, but to the ethical considerations surrounding its use. For example, if research involves input from users, researchers must ensure data privacy and not reveal any personally identifiable information. Researchers should avoid using artificial intelligence in inappropriate ways, including abuse, discrimination, or manipulation, and should avoid causing users physical or mental harm, distress, or discomfort. In short, the parties being protected are AI's human users, not the machines themselves.
What is the “intelligence” of artificial intelligence?
The "intelligence" in artificial intelligence refers to the ability of a machine system or computer program to imitate human intelligence. This ability allows machines to perform tasks that require human-like thinking, that is, to simulate human cognitive activities. How far can it go?
Bloom's Taxonomy is often used in the field of modern teaching to describe and analyze cognitive functions at different levels as a reference for teachers in the design and evaluation of courses and teaching activities. The six levels listed represent a gradual process from low to high difficulty. How does the artificial intelligence machine learning model meet the requirements at each level?
It is generally accepted that AI performs well at the lowest three levels, and may even surpass humans:
- Memory: Can store a large number of facts, terms, definitions, and explanations for the user to retrieve at any time.
- Comprehension: Ability to explain, compare, and illustrate existing concepts stored in its system.
- Application: Able to apply existing knowledge in different situations.

Reviews of AI's performance at the next level are mixed, but many consider it the upper limit of AI's cognitive function:
- Analysis: Can break a complex concept down into several easy-to-understand parts to aid explanation and understanding.

The two highest levels are difficult for AI, which itself admits it cannot fully replicate human performance:
- Synthesis and evaluation: AI does not have human judgment and cannot make independent judgments on its own. It can only provide information to help users make decisions.
- Creation: AI can generate new content, concepts, answers, storylines, and so on based on existing information. But unlike our usual notion of "creation," it produces nothing genuinely new of its own; at most, it pours old wine into a new bottle.
Intelligence is the ability to adapt to one's surroundings and environment, and it can be enhanced through learning. It shows itself in handling specific tasks, such as calculation, recognition, and problem-solving. Intelligence focuses primarily on the effective use of knowledge and technique to solve problems and does not necessarily require values or moral judgment. It is this ability that artificial intelligence imitates.
Wisdom is deeper insight and understanding, usually involving values, moral judgment, and reflection on the meaning of life. The wisdom discussed in the Bible is not an ability but a moral quality. Proverbs says that wisdom is a gift from God, given to those who have a good relationship with Him, the godly who obey His teachings. Wisdom enables people to discern what is real, right, and eternal, and to understand life from God's perspective.
Artificial intelligence might generate a seemingly substantial sermon from the theological writings it has gleaned. But note that AI itself has no consciousness, emotions, values, or moral judgment; it cannot truly know God or establish a relationship with Him. So what is the point of the generated sermon? From either a secular perspective or a biblical one, artificial intelligence is not, and cannot be, wise.
▲ If a pastor uses AI to generate sermons, can it feed the soul?
Limitations and potential of artificial intelligence
AI does not have personal values or ethics, but its responses can reflect the opinions in the data fed into it. I once asked a computer engineering professor at Case Western Reserve University: does AI have ethical standards?
His answer was thought-provoking: if the AI is fed whatever concepts happen to be popular, the answer is "no."
AI is a computer program that can learn from and reorganize data, but it cannot create knowledge beyond the data it is given. If the data used for "training" contains opinions and attitudes inappropriate for children, it will pass them on indiscriminately. Therefore, some AI platforms provide parental controls, allowing parents to filter or restrict the content AI can access and provide to children.
As society shifts toward digitalization and artificial intelligence, the aging population has been largely ignored. Research in recent years has found that artificial intelligence pays insufficient attention to age and the elderly. This phenomenon, called "AI ageism," can be defined as practices and ideologies within the field of AI that exclude, discriminate against, or ignore the interests, experiences, and needs of older populations. For example, the age distribution of database content, the ages assigned to characters, and the vocabulary used all overlook the older generation. The elderly are even further excluded as users of AI technologies, services, and products.2
AI itself does not understand what "discrimination" is; the problem lies with the people who program it. Although there are areas AI cannot handle, or cannot handle well, there are also areas worth developing and putting to use. Besides correcting the shortcomings above and curbing age discrimination, AI can play a great role in assisting the elderly with daily life and care.
Human abilities decline with age, but AI's do not, so it can compensate for human deficiencies. Memory is AI's strong suit: it can handle reminders and planning in the details of daily routine. The two main concerns of an elderly person's life, health care and social interaction, are closely related, and AI is well suited to both.
Older people tend to take more medications, and AI can schedule medication times and issue reminders. It can track and analyze drug effects, helping healthcare professionals adjust prescriptions to minimize side effects. Falls are a particular risk for the elderly; AI can detect falls or abnormal movements and promptly alert medical staff or family members. Caring for people with dementia is extremely challenging for families; AI can supply caregivers with resources and information, offer emotional support, and connect them with support groups or related services to reduce caregiver stress.
However, AI provides only auxiliary functions and cannot replace caregivers and professional medical staff, and the acceptance of and need for AI services vary from person to person.
▲ AI can assist the elderly in their lives, but it cannot replace caregivers and medical staff, let alone human relationships.
In social life, AI-driven companion robots can reduce loneliness, and chatbots can increase opportunities for interaction; both are far better than sitting alone in front of the TV. Although artificial intelligence seems to enhance social interaction, Christian ethicist Dr. Zhang Liming points out that people who grow used to being served by AI will see their social skills deteriorate, making it harder to deal with real people.3 AI should be viewed as a support tool, not a substitute for real human connection.
Humans are created in the image of God; AI is created in the image of humans. Humans have souls and can enter eternity; AI's existence is limited to this life. Humans were created for God; AI was created for humans. Humans are masters; AI is a servant. Human beings have harnessed animal power, fire, wind, electricity, and nuclear energy to replace physical labor and achieve greater results. Using these energies carries risks, but we learn to manage them and establish rules to prevent misuse.
Today, AI is used to replace some mental work in order to achieve greater results, and we likewise need to develop rules to govern its development and use. In the foreseeable future, humans and AI may work together to solve complex problems. AI can supplement human intelligence, but it cannot replace it.
Notes:
1. Torrance, Steve (2007). Ethics and consciousness in artificial agents. AI & Society, 22, 495–521.
2. Stypinska, Justyna (2023). AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. AI & Society, 38(2), 655–677.
3. Zhang Liming (2023). Special topic at the Chinese Christian Summer Conference of the American Midwest: "How Should We Use the Bible to Respond to the Ethical Issues of Our Times?"
Dr. Huang Qian is professor emeritus in the Department of Psychology at Cedarville University, Ohio. He served as founding director and president of the Chinese American Educational Research and Development Association, and has taught at the University of Hong Kong, Hong Kong Baptist University, and the University of Houston.
He has served as a columnist and theme planner for several issues of the family magazine True Love. He currently reviews for Christian academic journals and teaches courses on research methods and elderly ministry at seminary. Through writing, he explores new topics, meets new people, and tries new technologies.