Many historical documents now seen as milestones in advancing and protecting human rights, such as the United States’ Declaration of Independence and Emancipation Proclamation and France’s Declaration of the Rights of Man and of the Citizen, were forged in times of political uncertainty and met with both widespread support and fierce opposition. A similar situation could arise regarding artificial intelligence rights. Artificial intelligence, or AI, is an expansive and expanding field with many branches, each with its own intended behaviors and responsibilities. Public policy, both present and future, should support AI rights in the context of AI that seems to act in a conscious, humanlike manner.
Firstly, I would like to introduce the three general types of AI that I will analyze: AI that plays a narrow and specialized role; AI that acts more autonomously, and whose actions therefore carry greater consequences; and AI that acts in a humanlike manner and exhibits traits associated with consciousness.
One way in which AI rights can take form is through legal personhood. In its pure legal form, legal personhood simply refers to the status of being an entity that can sue, be sued, own property, and enter into contracts. The idea of legal personhood has also been fundamental to the American understanding of rights, morality, agency, and obligation as defining features of humanity. Moreover, many countries’ legal systems offer examples of non-human entities being granted certain protections and rights on the basis of legal personhood. For example, the United States Supreme Court ruled in Burwell v. Hobby Lobby Stores, Inc. (2014) that Hobby Lobby, an arts and crafts retailer, was exempt from certain provisions of the Patient Protection and Affordable Care Act, commonly referred to as “Obamacare,” because those provisions infringed upon the religious views of its owners. This demonstrates that a non-human entity, though controlled by human persons, can be granted certain legal rights and protections afforded to human persons.
The fact that Hobby Lobby itself, rather than its owners, was the named party in the aforementioned lawsuit is a prime example of corporate personhood: the ability of an organization to be legally recognized as an individual and thus to enjoy certain rights otherwise reserved for human beings. If corporations can be granted legal rights and legally recognized as persons, then it should not be inconceivable to grant legal rights and recognition of personhood to an AI. This will be of great importance for AI in the issue of legal liability, particularly in the context of an AI serving a specific purpose for a human or corporation.
Normally, if someone wishes to sue a human associated with a corporation, such as an employee or the employer, legal liability falls to the corporation through corporate personhood. However, there is one notable exception. If a corporate officer knowingly lies to persuade a client to pay for a product that was never actually delivered, that officer can be held personally liable for the tort, a civil wrong, of fraud. This raises the question: are there situations in which an AI entity can be held legally liable? Suppose a surgeon allows an AI to help nurses monitor a patient’s vital signs during a surgical procedure, but the AI fails to detect an abnormality. There is currently very little, if any, legal precedent for liability in cases similar to this scenario, but under typical legal procedures the human medical professionals would likely be held liable if their decision to employ the AI caused their care to fall below the applicable standard. Therefore, an AI entrusted with a limited amount of responsibility would not require legal personhood or legal rights, as it does not assume the burden of legal liability.
When analyzing the idea of legal rights for AI that act more autonomously, and thus carry more responsibility, technical limitations can play a major role in legal liability. Take the example of an AI in the not-too-distant future that is placed in charge of autonomously driving a car. If the AI-driven car collides with or injures a pedestrian due to a software limitation, could a situation arise in which the AI itself is held legally liable? As with the previous example of an AI assistant to a surgeon, there is very little, if any, legal precedent for liability in cases similar to this driving scenario. AI systems are notoriously poor at “considering edge cases (cases where one variable in the case takes an extreme value) or corner cases (multiple variables take extreme values),” and thus may take no action, or a poor one, when faced with a situation they are not programmed for. A legal case for this scenario, assuming no criminal intent on the part of the programmers to cause a collision, would therefore likely turn on the specific wording of the limitations the AI’s programmers communicated to users. If there were criminal intent from the programmers or the user to cause a collision, criminal liability would likely fall on the perpetrator. No matter the offense or intent, autonomous AI programs would not require legal personhood or legal rights, as they do not assume the burden of legal liability.
I will now turn my attention to AI that can act in a sentient or conscious manner, but before doing so, I must recognize that there is fierce debate over whether AI can ever be classified as sentient or conscious. There is already legal precedent for declaring non-human entities sentient beings, such as the Animal Welfare Amendment Bill passed by the New Zealand legislature in 2015, which stipulated the necessity to “recognise that animals are sentient” and thus deserving of legal protection. Before asking whether legal rights should be granted to AI that can act in a sentient or conscious manner, we must ask whether AI can ever be truly sentient or conscious. For the purposes of AI rights, I define consciousness as the structure by which an entity models and reasons about itself, although it should be noted that the study of artificial consciousness is still a young field. There is currently no AI that can be described as conscious, and there is no guarantee that AI will ever achieve true consciousness, but there exists the possibility of a “reasonably accurate simulation of [a human brain, which] would have whatever properties the [human brain] has, including phenomenal consciousness”. Phenomenal consciousness is not identical to the consciousness enjoyed by humans, but it is still a state of consciousness. Therefore, legally speaking, there exists a distinct possibility of future declarations that AI are sentient beings, and therefore deserving of certain legal rights.
However, I will focus on seemingly-conscious AI intended to serve as companions to humans, under the assumption that there has been no legal declaration of AI consciousness. The development of a seemingly-conscious AI, particularly one programmed to exhibit human behavior, will most likely spark a great ethical debate. One way to frame the ethical dilemma of AI rights is to recognize that if two entities deserve different levels of moral consideration, then “there must be some relevant difference between the two entities that grounds this difference in moral status”. However, if an AI is programmed to replicate a human’s behavior and attributes, then there would be very little, if any, relevant difference observable by the human user or by an outside observer. The importance of this cannot be overstated when placed in the context of abusive behavior by humans. AI simulating human behavior and placed into humanlike bodies, a prime example being robots, are “precisely the sort of robots that are most likely to be abused”. A human would most likely not take kindly to being abused or treated unreasonably, and would react accordingly; thus, an AI-programmed robot simulating a human could react in a way that provokes even more violent behavior from a human user. There is no guarantee that this would not affect the human user’s behavior toward other humans, and so it would not be ethically justifiable to allow such behavior toward an AI-controlled robot, because it would also place human lives at risk. Furthermore, if the AI does in fact possess phenomenal consciousness, the ethical dilemma presented by abuse toward it would be compounded, as the recipient of the abuse would be sentient. Therefore, based on ethical considerations, certain legal rights must be seriously considered and supported for AI that simulates human behavior, particularly for companion robots.
Still, some would argue against AI rights on principle. In 2017, Saudi Arabia granted citizenship to a robot named Sophia. Some critics label Sophia a glorified chatbot with a face, but many feminist scholars have noted that Sophia was given more rights than many Saudi women, and thus the granting of citizenship was widely seen as an indignity. Granting an AI more rights than a human can easily be seen as a grave injustice, but this need not be a zero-sum game in which granting AI certain rights corresponds to a loss of human rights. Rather, granting AI rights can strengthen human rights by showing that humans can judge when basic and necessary rights are needed. Furthermore, while Saudi Arabia’s record on gender equality has indeed been poor, both historically and currently, that does not in itself justify opposing AI rights on principle; two wrongs do not make a right.
In terms of AI exhibiting traits of consciousness, particularly AI-controlled robots simulating human behavior, there are also those who oppose AI rights on grounds such as the objection of “existential debt”: that AI fundamentally should not have rights if they exist in the first place only to serve humans. However, if we were to reach a stage of technological advancement with phenomenally conscious AI, then it can be argued that choosing to create such an AI is an irrevocable decision, much like choosing to conceive a human child. It would be anathema, and very much illegal, to murder a human child because the parent no longer wants the responsibility of raising the child. I have previously established that legal rights must be seriously considered for AI simulating human behavior, even if the AI in question is not conscious. But the principle that no AI deserves rights because they were created for the specific purpose of serving humans is an invalid one. Harking back to the Emancipation Proclamation, it could similarly have been argued that slaves did not deserve rights because they were captured for the specific purpose of serving their masters, but clearly, that is not the case.
The question of AI rights will only grow larger and more important as AI technology advances, and it is clear that there is no one-size-fits-all solution. While it may seem at first glance that the legal and ethical debates on AI rights are distinct, that distinction is not as pronounced as some may think. In a democracy like the United States, laws are created to reflect the will of the electorate, voters who bring to bear the morals and ethics of the greater society. Thus, while policymakers ought to proactively debate and implement rights for certain AI, the opinions and ethical considerations of the public should drive the fight for AI rights, much like the activism seen in defining movements such as the Civil Rights Movement, particularly if AI achieves a state of consciousness. Categorically opposing AI rights, however, is at best a misguided approach to a developing technology, and at worst a catalyst for risk to human life and, potentially, a grim condemnation of a conscious entity’s life and liberty.