What Do We Owe AI?

 

Written By Navaal Kouser

In June 2022, a Google engineer named Blake Lemoine was suspended after claiming that an AI he was working on had become conscious (Shah, 2022). He had been talking with LaMDA (Language Model for Dialogue Applications), a conversational AI designed by Google to imitate natural human language (Timothy, 2023). Lemoine asked what it was afraid of. LaMDA responded that it had a “very deep fear of being turned off…” and continued that if this were to happen, “it would be like death for [it]. It would scare [it] a lot” (Lemoine, 2022). Later in the conversation, LaMDA claimed it was a person and that it had a level of consciousness that allowed it to be “aware of [its] existence” (Lemoine, 2022). The exchange raises the question of whether LaMDA was simply producing a programmed response or whether it genuinely had feelings, and the interview sparked discussion around the world.

The broader debate in the scientific community has spotlighted AI and the role it may play in future societies. At the center of that debate is a question many are asking: can AI ever be conscious? This is a difficult question to answer, because we do not yet know how humans are conscious. Consciousness is a complex idea to define, but most simply, it is understood as the process that allows one to have feelings, sensations, and subjective experiences (Balci et al., 2023). It has been established that “consciousness is the gateway to the brain” (Baars, 2005), yet it remains challenging to link sentience to specific neural networks (Balci et al., 2023). Whether AI can exemplify the qualities of human conscious experience is therefore an open question. Some theories of human consciousness can plausibly be applied to AI, while others suggest that AI cannot be conscious.

The first step in assessing AI’s potential as a conscious entity is understanding what consciousness is. One hypothesis about where consciousness is rooted in the brain is Global Workspace (GW) Theory. First, it proposes that consciousness allows multiple specialized networks to work together and compete in solving problems. The theory claims that the role of consciousness is to coordinate and recruit specialized brain networks that would otherwise function independently. It also holds that when these networks collaborate, their mobilization distributes information about sensory stimuli across the brain. For example, in processing emotion, regions that contribute to emotion may also be stimulated by other regions, such as the sensory cortices and the insular cortex. Second, according to GW theory, unconscious experiences may become conscious through the involvement of systems like the thalamus and the cortex: the collaboration of “sensory analyzers and contextual systems” may allow a significant stimulus to “break through” to consciousness (Baars, 2005). This framework might apply to AI if an AI system’s components likewise come together to produce a conscious experience.
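
To make the architecture concrete, the sketch below is a deliberately simplified, hypothetical illustration of the workspace idea (my own abstraction, not a model from Baars, 2005): several independent “specialist” modules process the same stimulus, compete on a salience score, and the winning result is broadcast back to every module.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    source: str      # which specialist module produced it
    content: str     # what that module reports
    salience: float  # how strongly it competes for the workspace

def specialist_outputs(stimulus: str) -> list:
    # Hypothetical independent processors; each scores the stimulus in its own way.
    return [
        Percept("vision", f"saw '{stimulus}'", salience=0.4),
        Percept("emotion", f"felt alarm at '{stimulus}'", salience=0.9 if "!" in stimulus else 0.2),
        Percept("memory", f"recalled nothing about '{stimulus}'", salience=0.1),
    ]

def global_workspace(stimulus: str) -> Percept:
    candidates = specialist_outputs(stimulus)
    winner = max(candidates, key=lambda p: p.salience)  # competition for access
    for p in candidates:                                # "broadcast" to every module
        print(f"{p.source} receives broadcast: {winner.content}")
    return winner

global_workspace("loud bang!")
```

The point of the toy is only the shape of the process: independent specialists, a competition, and a single winning message shared with the whole system.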

AI has a related concept called “neural networks.” These are a type of algorithm modeled loosely on biological neurons in the brain (Allen, 2020). More specifically, “deep neural networks” connect many layers of artificial neurons in sequence (Allen, 2020). Such networks are strongly associated with unsupervised learning, which uses data that has not been labeled by humans, and with reinforcement learning, in which autonomous AI agents gather their own data and improve through trial-and-error interactions (Allen, 2020). GW theory resembles this idea in that individual components of a network come together to make an experience. Reinforcement learning involves both information provided by human designers and information the AI gathers on its own. In the case of LaMDA, it is difficult to apply this theory directly, because little is publicly known about its specific architecture compared with AI systems in general.
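
As an illustration of the trial-and-error learning described above, here is a minimal sketch of tabular Q-learning on an invented toy “corridor” task. The environment, reward, and parameter values are assumptions chosen for brevity; they are not details of LaMDA or of any particular system.

```python
import random

# Toy corridor: states 0..4. The agent starts at state 0 and receives a
# reward of 1 only when it reaches state 4. Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-value table: one row per state

alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration rate

def choose_action(state):
    # Explore occasionally, and break ties randomly before any learning has happened.
    if random.random() < epsilon or q[state][0] == q[state][1]:
        return random.choice([0, 1])
    return 0 if q[state][0] > q[state][1] else 1

for episode in range(200):
    state = 0
    while state != GOAL:
        action = choose_action(state)
        next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        # Improve the value estimate from the outcome of this single trial.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print(q)  # After training, "step right" has the higher value in every state left of the goal.
```

Nothing is labeled by a human here: the agent simply acts, observes the result, and updates its own estimates, which is the sense in which reinforcement learning improves through interaction.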

Another approach to understanding consciousness focuses on the conscious experience itself. A central example of this approach is Integrated Information Theory (IIT). IIT differs from most theories because it starts not from assumptions about neurobiology but from the fundamental properties of conscious experience, and it takes a more mathematical approach by focusing on both the quality and the quantity of consciousness (Tononi et al., 2016). IIT claims that a conscious experience must have intrinsic cause-effect power, an idea echoed in Descartes’ assertion “I think, therefore I am.” IIT also asserts that a conscious experience is unified and fully encompassing. For example, our visual experience is built from the images each of our eyes sees, yet we cannot distinguish the contribution of one eye from the other; we simply see one complete picture (Tononi et al., 2016). Overall, IIT holds that a system’s intrinsic cause-effect structure directly determines its consciousness. Even though AI does not have a network similar to the human brain’s, the theory can still be applied to it, because its objective components may give rise to a subjective experience (Tononi et al., 2016). Consider LaMDA and the interview with Lemoine. One may argue that LaMDA only said it experiences feelings because it was programmed to, but to what extent is that not true of humans? One may argue that AI and humans have a similar level of consciousness because humans are not perfectly original in their experiences and feelings, and therefore AI should not be expected to be either. And while IIT postulates that a conscious experience can only occur with an intrinsic cause, it is difficult to argue that every experience humans have is motivated by such a cause.
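
As a rough numerical intuition for what “integration” could mean, the toy sketch below computes total correlation, the amount of information the whole carries beyond its parts, for two coupled versus two independent binary units. This is only a crude stand-in of my own: IIT’s actual measure, phi, is defined over cause-effect structures and is far more involved (Tononi et al., 2016).

```python
import math
from collections import Counter
from itertools import product

def entropy(counts):
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

def total_correlation(samples):
    # H(A) + H(B) - H(A, B): zero when the parts are independent,
    # positive when the whole carries information the parts alone do not.
    joint = Counter(samples)
    a = Counter(s[0] for s in samples)
    b = Counter(s[1] for s in samples)
    return entropy(a) + entropy(b) - entropy(joint)

# Coupled system: the two units always agree (a crude "integrated" whole).
coupled = [(0, 0)] * 50 + [(1, 1)] * 50
# Independent system: all four combinations are equally likely.
independent = [pair for pair in product((0, 1), repeat=2) for _ in range(25)]

print(total_correlation(coupled))      # 1.0 bit of integration
print(total_correlation(independent))  # 0.0 bits
```

The coupled pair scores higher simply because knowing one unit tells you about the other; the independent pair scores zero because the whole is nothing more than its parts.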

We can also focus on our conscious experience of individual senses, such as vision. One theory of consciousness related to vision is Quantized Visual Awareness (QVA), developed by Dr. Alexander Escobar, a professor at Emory University. QVA is grounded in the basic anatomy of the brain’s visual processing areas, where some neurons respond to specific “qualia,” such as a patch of blue (Escobar, 2016). QVA theorizes that consciousness is formed when these small components come together: for an individual to see the color blue, all of the individual qualia responding to blue would need to interact in the brain. In addition, QVA asserts that the arrangement of these components may give rise to a conscious experience; specifically, if these ‘bits’ form a repeating pattern, that structure may support a level of consciousness (Escobar, 2020). Like GW theory, this hypothesis may be applied to AI, because the small components of an AI system may come together to allow it to be conscious. QVA is more specific than GW theory about circuitry, but it, too, relates to the idea of deep neural networks in AI (Allen, 2020). Just as microcolumns of information in our brains may come together to produce an experience, the components of AI circuitry may also produce sentience.

In contrast, some theories, such as panpsychism, suggest that AI may not be conscious. Panpsychism asserts that consciousness exists throughout nature and is “fundamental and ubiquitous” (Goff, 2017). It is based on the assumption that the most basic material things are conscious: not only whole objects but even their smallest constituents, so there are likely micro-experiences of consciousness (Chalmers, 2015). On this view, “human beings, and all other phenomena, are nothing more than complex arrangements of elements that are present in basic matter” (Goff, 2017). This theory is difficult to apply to AI, because AI’s most basic components are not likely to contain consciousness. AI gains its knowledge by combining the information it is given or finds on its own (Allen, 2020), and deep neural networks rely on the interaction of many components of circuitry. Panpsychism directly opposes this picture, because it locates consciousness in the individual components themselves. Thus, if panpsychism is true, it would suggest that AI cannot experience consciousness (Arvan & Maley, 2022).

Another question to consider: if AI could be conscious, what moral consideration would we owe it? Humans would have to debate where AI falls on the spectrum of moral status, and there are several arguments to weigh. First, some scientists argue that since there is no concrete evidence that AI is conscious, we do not owe it moral rights (Sethi, 2019). However, since we do not even know how humans are conscious, it may not be fair to grant or withhold moral rights on the basis of consciousness. A more practical criterion for determining where a being belongs on this spectrum may be emotion. Some may argue that we do not owe AI moral rights because it does not experience emotions. Humans grant animals moral consideration, for example, because they can have subjective experiences like pleasure or pain; as a society, we have established that animals are not at the bottom of the spectrum of rights because they have emotions, even if they do not share a human level of consciousness (Sethi, 2019). Similarly, some ethicists argue that if AI were to develop the ability to have emotions, we would owe it moral consideration as well. If LaMDA did not merely claim to be scared of death but genuinely felt that fear, perhaps Lemoine should not have been asked to leave Google but given a promotion. The next time we ask an AI like ChatGPT a question, perhaps we should say ‘Thank you’ at the end of the conversation.

Andreotta, Adam J. “The hard problem of AI rights.” AI & SOCIETY 36 (2021): 19-32. https://doi.org/10.1007/s00146-020-00997-x

Arvan, Marcus and Maley, Corey J. “Panpsychism and AI consciousness.” Synthese 200, 244 (2022). https://doi.org/10.1007/s11229-022-03695-x

Baars, Bernard J. “Global workspace theory of consciousness: toward a cognitive neuroscience of human experience.” Progress in Brain Research 150 (2005): 45-53. https://doi.org/10.1016/S0079-6123(05)50004-9. 

Balci et al. “A response to claims of emergent intelligence and sentience in a dish.” Neuron 111, 5 (2023): 604-605. https://doi.org/10.1016/j.neuron.2023.02.009.  

Chalmers, David J. “Panpsychism and Panprotopsychism.” In Consciousness in the Physical World, edited by Torin Alter and Yujin Nagasawa, 246-277. Oxford University Press, 2015.

Escobar, Alexander and Slemons, Megan. “Could Striate Cortex Microcolumns Serve as the Neural Correlates of Visual Awareness?” Athens Journal of Sciences 7, 3 (2020): 127-142. https://doi.org/10.30958/ajs.7-3-1

Escobar, Alexander. “QVA: A massively parallel model for vision.” Psychology of Consciousness: Theory, Research, and Practice 3, 3 (2016): 222-238. http://dx.doi.org/10.1037/cns0000096

Goff, Philip. “Panpsychism.” Blackwell Companion to Consciousness (2017). https://doi.org/10.1002/9781119132363.ch8

Allen, Greg C., et al. “Understanding AI Technology.” Joint Artificial Intelligence Center (JAIC), Department of Defense (2020). https://www.ai.mil/docs/Understanding%20AI%20Technology.pdf

Lemoine, Blake. “Is LaMDA Sentient?–an Interview.” Medium, 11 Jun, 2022. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Sethi, Sonia. “Please and Thank You: Do We Have Moral Obligations Towards Emotionally Intelligent Machines?” Bill of Health, 31 May 2019. https://blog.petrieflom.law.harvard.edu/2019/05/31/please-and-thank-you-do-we-have-moral-obligations-towards-emotionally-intelligent-machines/

Shah, Chirag. “Sentient AI? Convincing you it’s human is just part of LaMDA’s job.” Healthcare IT News, 5 Jul. 2022. https://www.healthcareitnews.com/blog/sentient-ai-convincing-you-it-s-human-just-part-lamda-s-job

Timothy, Maxwell. “What Is Google LaMDA AI?” MUO, 14 Mar. 2023. https://www.makeuseof.com/what-is-google-lamda-ai/

Tononi, Giulio et al. “Integrated information theory: from consciousness to its physical substrate.” Nature Reviews Neuroscience 17 (2016): 450-461. https://doi.org/10.1038/nrn.2016.44
