By Sarah Siddiqui
I. INTRODUCTION
The rapid advancement of artificial intelligence (AI) in robotics challenges traditional legal notions of personhood, rights, and autonomy. As machines develop more sophisticated cognitive capabilities, the boundary between human and humanoid robot rights becomes increasingly ambiguous, forcing society to confront issues once confined to the realm of science fiction.[1] Among the most controversial of these questions is whether humanoid robots, which exhibit increasing levels of intelligence and autonomy in decision-making, should be granted rights under the law.
This debate over robot rights has gained significant prominence following the granting of citizenship to the humanoid robot Sophia by Saudi Arabia in 2017.[2] Although largely symbolic, this gesture exemplifies the growing recognition of robots as legal subjects capable of engaging with the human legal system.
Although AI robots are not yet advanced enough to warrant rights, they are likely to become more complex and autonomous, and at some point these advancements may cross a threshold at which granting rights becomes a pressing issue. Given this potential, it is important to address the legal and ethical dimensions of such a future now, to prepare society for the challenges that may arise.
This article argues that robots, despite their technological and analytical sophistication, should not be granted legal rights comparable to those of human beings. While the increasing autonomy of robots challenges established legal frameworks of personhood and rights, it does not justify extending legal recognition to non-human entities. However, this does not suggest that robots should be entirely excluded from legal consideration. Similar to how animals and corporations have been granted limited rights, it may be appropriate to consider narrow, functional forms of legal status for robots in specific contexts. Even so, these rights should be clearly distinguished from those grounded in human personhood and should not be interpreted as a basis for moral equivalence or full legal rights.
Part II of this article examines the legal and philosophical foundations of rights. Part III explores the case for granting robots rights. Part IV presents the legal and ethical arguments against extending rights to robots. Part V analyzes the concept of legal personhood in the context of AI and robotics. Part VI discusses intellectual property considerations. Finally, Part VII concludes the article, reaffirming the necessity of preserving the integrity of human rights.
II. DEFINING RIGHTS AND ROBOTS
To evaluate whether robots should be granted rights, it is essential to first examine the concepts of “rights” and “robots”, as both terms carry complex meanings that guide the legal analysis.
A. What Are Rights?
In legal discourse, rights refer to protections or entitlements granted by a legal system, often rooted in principles of justice and autonomy.[3] These rights are typically associated with individuals who possess the capacity to make autonomous decisions, experience suffering, and contribute to society in meaningful ways.[4] The principles of rights in Western legal thought are tied to the concept of personhood, which is typically attributed to human beings.[5] Legal rights protect individuals from harm and ensure their participation in societal affairs, including freedoms such as expression, due process, and the protection of personal property.
However, legal systems have also extended certain protections and rights to entities that do not possess full personhood, such as animals, which are protected from cruelty, and corporations, which are granted legal personhood for purposes such as owning property and entering contracts. These examples suggest that legal rights can be applied more flexibly than traditional frameworks of human moral agency might imply.
A critical distinction exists between moral and legal rights. Moral rights stem from inherent autonomy and dignity, while legal rights are institutional constructs that may or may not align with moral philosophy.[6] Human rights are justified by the capacity for consciousness, moral agency, and the ability to understand the social contract.[7] Thus, rights reflect a being’s participation in the moral community. Yet, legal systems also recognize rights in individuals who may not always possess these capacities, such as infants or those in comas. This indicates that the attribution of rights is not solely dependent on cognitive ability, but may also derive from factors such as potentiality, vulnerability, and societal inclusion.
Therefore, extending rights to robots would not simply require meeting existing standards of personhood and moral agency but might also require rethinking the criteria by which legal rights are assigned.
B. What Are Robots?
Robots are mechanical systems designed to perform specific tasks, either autonomously or semi-autonomously, typically through programming or AI.[8] Although the term “robot” historically referred to simple mechanical devices (such as a Roomba or factory robot), advancements in AI have led to the creation of increasingly sophisticated machines capable of mimicking human behavior.[9] These robots vary in their levels of autonomy, ranging from simple machines with pre-programmed responses to advanced systems capable of learning and decision-making.
Relevant to this debate is the distinction between robots as tools and robots as entities capable of exhibiting human-like behaviors, such as communication, emotional response, and problem-solving. Robots such as Sophia, created by Hanson Robotics,[10] can engage in complex interactions with humans, but their behaviors stem from human-designed algorithms. Advances in AI now allow some robots to make autonomous decisions and simulate emotional responses; these emotions are not genuine in the human sense, yet their realistic presentation challenges traditional assumptions about consciousness and agency. This raises important questions about whether such entities, despite lacking sentience, might be considered for limited legal recognition or protection.
This issue of autonomy is further complicated by the way robots are designed and function. Even the most advanced AI systems, while able to learn and adapt, do so within predefined parameters and constraints that are set by their human creators.[11] A robot’s actions, whether in a healthcare setting, factory, or human interaction, are responses to inputs.[12] These responses are learned behaviors shaped by programming, not by any intrinsic understanding of the environment.[13] Unlike humans, robots do not have the ability to reflect upon their actions or subjective experiences. This fundamental difference challenges the notion of extending rights to robots, as the core of legal rights is rooted in the ability to make ethically grounded, autonomous decisions, which robots are incapable of.
III. THE CASE FOR GRANTING ROBOTS RIGHTS
As AI and robotics continue to advance, a growing number of ethicists and scholars argue that certain robots may merit legal rights, if not equivalent to human rights, at least some form of legal recognition or protection. This argument is rooted in both philosophical and pragmatic considerations, with proponents suggesting that the increasing intelligence, autonomy, and human-like interaction of robots challenge traditional legal and ethical categories.
One of the most cited arguments in favor of robot rights stems from the idea of functional equivalence. Scholars such as David Gunkel argue that if a machine behaves in ways indistinguishable from a human, responding to stimuli, expressing emotions, and making decisions, it may be ethically inconsistent to deny it certain forms of recognition simply because its internal experience differs or is non-existent.[14]
Others, like Joanna Bryson, have explored the idea that granting robots legal status could serve practical purposes, such as managing liability, defining responsibility, and structuring human-robot interactions in public and commercial spaces.[15] For example, if a robot causes harm, say, a self-driving car in an accident, it may be useful to attribute certain legal responsibilities to the robot as a “legal agent” to streamline judicial or insurance processes.
Another strand of the argument revolves around human psychology and ethics. If robots increasingly mimic emotional responses and social cues, humans may naturally develop emotional attachments and moral concern toward them. Kate Darling, a researcher at the MIT Media Lab who studies human-robot interaction, has examined how people treat robotic pets and companions with empathy, even when they know these machines are not sentient.[16] This raises ethical questions about whether continued exploitation or abuse of such emotionally expressive machines could desensitize people or encourage harmful behaviors toward living beings.
Some futurists and philosophers have even suggested a precautionary principle: as AI progresses, we may one day create entities that do possess consciousness or sentience, and we would be morally obligated to protect their rights, just as we are for humans and animals. In this view, developing a framework for robot rights helps society prepare for more complex forms of artificial life.
While these arguments vary in scope and intent, from legal pragmatism to speculative ethics, they all challenge the traditional view that rights should be based solely on biological personhood or human consciousness. As robots become more embedded in society, failing to address their legal and ethical status may lead to inconsistency or injustice.
IV. THE CASE AGAINST GRANTING ROBOTS RIGHTS
While granting rights to robots may seem compelling, given their increasing autonomy, several arguments challenge the notion that robots should be recognized as legal persons or bear rights comparable to those of human beings.
A. Robots as Tools, Not Persons
At the core of the argument against granting rights to robots is the premise that robots, regardless of their technological sophistication, remain tools created by humans for specific purposes.[17] Robot behavior, whether making decisions or engaging in conversation, is driven by programming, machine learning algorithms, and data processing, rather than by true consciousness or understanding.[18] The apparent “intelligence” exhibited by robots like Sophia or other AI-driven machines is simply the result of vast amounts of data processing, fine-tuned models, and predetermined responses, which are guided by their creators.[19] However, as machine learning technology evolves, future robots may move beyond predetermined responses, acquiring the ability to autonomously generate answers and make decisions based on learned patterns, rather than just following their creators’ instructions.
Notably, the autonomy of robots remains confined to the parameters set by their human creators. While robots may display behaviors indistinguishable from those of humans in certain contexts, they lack the underlying consciousness, self-awareness, and subjective experience that would qualify them for rights typically reserved for persons. Legal rights are often tied to the capacity for sentience, the ability to experience pleasure, pain, or suffering, and to moral agency, the ability to make decisions based on a complex understanding of ethics and society. Animals, which are often granted limited rights, may lack full moral agency or complex ethical reasoning, but they do possess sentience, which forms the foundation of their moral consideration. Robots, despite their ability to perform complex tasks or make decisions, experience neither sentience nor suffering. As such, they lack the fundamental qualities, consciousness and subjective experience, that justify the extension of legal rights.
From a legal perspective, granting rights to robots would not only be unnecessary but could also introduce significant complications. Rights are designed to protect individuals who can suffer harm, assert their autonomy, and contribute meaningfully to the social fabric. Robots, as creations of human design, are not subject to harm in the way humans are. Their existence and operation do not depend on the same ethical considerations that guide human interactions and societal participation. Consequently, extending legal rights to robots would undermine the principle that rights safeguard human dignity and autonomy.
B. The Risk of Undermining Human Rights
A more profound concern is that granting rights to robots could undermine the very foundation of human rights. Historically, human rights have been grounded in the unique dignity of human beings, who are recognized as moral agents with the capacity for reason, empathy, and self-determination. These qualities, integral to social contract theory, form the basis of moral and legal systems that protect individuals from harm, exploitation, and abuse.[20] The concept of rights hinges on the notion that human beings possess intrinsic worth, and these rights exist to preserve that worth within a social and legal context. Granting rights to robots could blur the concept of personhood, merging human beings, who possess inherent moral capacities, with machines that operate based on algorithms and lack conscious thought.[21] The legal system, designed to safeguard the rights of sentient beings, might struggle to differentiate between human persons and non-human entities, potentially leading to a legal paradox in which a machine is entitled to rights without any of the intrinsic moral qualities that justify them.[22]
The evolving nature of AI further complicates this issue. As AI becomes increasingly sophisticated, questions arise about whether certain forms of AI, particularly those that simulate emotional responses or engage in complex decision-making processes, should be granted rights simply because of their increased functionality.[23] Robots like Sophia, which can mimic expressions of thought and emotion, may lead to confusion about what constitutes “being alive” or “conscious.”[24] However, the complexity of a machine’s behavior does not equate to a subjective experience of consciousness. Robots may be programmed to simulate emotions or self-awareness, but these are not genuine experiences. Rather, they are simulations designed to enhance interaction with humans. While it is possible to grant robots a limited set of rights, such as rights to protection from harm or the right to be treated ethically in certain contexts, extending full human rights to robots would be inappropriate. Such rights should remain reserved for beings with genuine consciousness, emotional depth, and moral responsibility, qualities that robots fundamentally lack.
C. First Amendment and Other Rights: Inapplicability to Robots
Another area of contention is the potential extension of First Amendment rights, particularly the right to free speech, to robots. The argument for granting robots free speech rights often hinges on the idea that robots capable of generating content, whether through text, speech, or images, should receive the same protections as human beings under the First Amendment.[25] If a robot can express ideas or engage in discourse, proponents argue, it should have the legal right to do so without censorship or restriction. However, this perspective overlooks a critical distinction: robots do not create content in the same way that humans do.
For example, when a robot like Sophia posts on Twitter, the content it generates isn’t the product of independent thought, reflection, or personal expression.[26] Instead, it is the output of complex algorithms designed to simulate human conversation. AI systems process vast amounts of data and generate responses based on pre-programmed patterns or learned behavior. While these responses may appear coherent and human-like, they are fundamentally distinct from human communication, which arises from lived experience, emotions, and personal perspectives.
Granting robots First Amendment rights would be problematic, as it would treat them as equal participants in public discourse, even though they lack understanding of the content they produce. It would be nonsensical to argue that a machine can “speak” with the same intent or purpose as a human being, who has the capacity to consider the consequences of their speech. In this context, granting robots the same rights to free expression as human beings fails to recognize the foundational difference between autonomous, conscious individuals and programmed tools.
Furthermore, expanding the scope of robot rights to include protections such as privacy, due process, or the right to vote presents additional complications.[27] For instance, robots do not possess a subjective experience of privacy because they do not have personal thoughts, feelings, or desires to protect. Similarly, extending due process rights to robots could raise questions about whether robots should be allowed to challenge legal decisions in court, even though they are not capable of moral reasoning or making independent judgments. Granting robots the right to vote could be particularly problematic, as a future population of millions of robots might overwhelm human voters, potentially skewing elections and undermining the democratic process. This raises significant concerns about robot participation in political decision-making, especially when robots lack the lived experience and subjective understanding that inform human political judgment.
V. LEGAL PERSONHOOD FOR ROBOTS: A MISGUIDED PATH
In recent years, some have proposed extending the concept of legal personhood to robots as a potential means of recognizing their rights. Legal personhood, typically reserved for humans and certain legal entities like corporations, grants the ability to hold rights and responsibilities within the law.[28] Corporations, despite being non-human entities, enjoy various rights under the law, such as the ability to own property, enter contracts, and engage in litigation. This model has often been cited as a precedent for potentially recognizing robots as persons under the law. However, the extension of legal personhood to robots raises profound concerns and represents a significant departure from the foundational concepts of personhood, ethics, and legal frameworks that have been developed over centuries.
Before exploring the implications of granting legal personhood to robots, it is crucial to understand what “personhood” means and why it is foundational to our legal and social systems.[29] In its traditional form, personhood refers to the recognition of an individual as a bearer of rights, capable of having legal standing, engaging in social contracts, and being accountable for their actions within society.[30] This recognition is closely linked to a variety of human qualities: consciousness, moral agency, sentience, and self-awareness. These characteristics, alongside a person’s ability to make choices based on reason and emotional understanding, form the basis for the recognition of rights and the protection of individual dignity under the law.
Legal personhood for robots, however, would require a fundamental redefinition of what it means to be a “person.”[31] Currently, the idea of personhood is built on the understanding that an entity must be capable of experiences such as pain, pleasure, or self-preservation, which creates ethical responsibility in human interactions. Robots, despite their growing capabilities, do not possess subjective experiences. While there is research into creating “ethical” robots, this is still based on predefined algorithms, not genuine moral agency. Even if robots can simulate ethical decisions, they do not possess the subjective understanding or ethical reasoning that humans develop through experience and empathy.
Although robots and AI systems can simulate human behaviors, such as conversing or making decisions based on predefined criteria, they lack the inner mental and emotional experiences humans have. Some recent claims, such as the assertion by a Google engineer that an AI chatbot could be sentient, point to the possibility that AI could one day exhibit internal states that resemble consciousness.[32] However, even if AI approaches these characteristics in the future, it would not automatically justify the extension of legal personhood. Sentience, as we understand it, involves more than the simulation of emotions or internal states; it requires subjective experience, the ability to suffer or feel joy, and moral agency. Until AI reaches this threshold, if it ever does, it remains a tool, a set of programmed algorithms, rather than a sentient being deserving of personhood.
Thus, extending legal personhood to robots would not only blur the lines between machines and humans but could also weaken the legal and ethical frameworks that ensure human dignity and individual rights. Personhood, under its current legal conception, is not something that can be conferred to non-sentient entities without creating profound philosophical and legal contradictions.
While corporations are often cited as an example of non-human entities possessing legal personhood, it is important to recognize the fundamental difference between corporations and robots.[33] Corporations are legal constructs that are designed to function within a market system.[34] They are treated as persons for practical reasons, allowing them to own property, enter contracts, and sue or be sued.[35]
However, corporations are not autonomous entities in the same sense as robots. They are organizations with human stakeholders who have legal responsibility and control over the corporation’s actions. Essentially, corporate personhood serves the needs of human society, ensuring that businesses can operate effectively within a legal framework. Corporations are human-created and human-managed entities that serve human economic and social needs, with their legal personhood existing to facilitate business operations and human enterprise within society.
By contrast, robots are not designed to serve human economic or social interests directly, nor are they operated by human stakeholders in the same way. A robot, especially one with AI, is an autonomous machine whose operations are determined by its programming and algorithms. The robot itself does not possess a stake in human society, nor does it contribute to social or economic structures in the way corporations do. Granting legal personhood to a robot, without the same human oversight or control mechanisms that govern corporate structures, introduces a significant ethical and legal gap.
One major challenge in extending legal personhood to robots is the lack of a social contract. In democratic societies, legal personhood is based on the concept of a social contract, where individuals, whether human or corporate, participate in society by upholding mutual responsibilities and contributing to the public good. This contract is built on the assumption that individuals or organizations have a stake in society’s well-being, including responsibilities to other citizens and a vested interest in the preservation of laws and ethical norms.[36] Human persons, for example, are governed by laws that protect human rights and duties in exchange for societal participation. Corporations, as legal persons, also operate under a similar system, but they are created to serve human economic interests.
Robots, on the other hand, do not partake in this social contract in the same way. They have no inherent interest in society’s well-being, nor do they make decisions based on moral or ethical principles. Even advanced AI systems, capable of making decisions autonomously, do so based on algorithms and data inputs, completely devoid of the moral judgment or ethical reasoning that humans possess.[37] A robot’s actions are dictated by its design, not by a conscious understanding of social obligations or the consequences of its actions.
Although there is a growing body of work on “ethical” robots, including research into machine ethics and moral decision-making frameworks, these efforts remain limited because machines do not possess consciousness or genuine moral agency. Their “ethics” are programmed, not felt, or understood.[38]
Extending legal personhood to robots would not only be misplaced but could also create a moral hazard in which entities devoid of moral responsibility are granted rights, resulting in significant legal and social complications. For example, if robots were granted personhood, who would be liable for a robot’s actions: its creators, its owners, or the robot itself? This ambiguity creates a legal grey area in which accountability becomes murky.
Granting legal personhood to robots could have far-reaching consequences. One concern is that it could create a legal hierarchy between robots and human beings, undermining the dignity and value of human life. If robots are granted rights like humans, it could also set a precedent for an expansion of rights to other non-human entities, including animals and natural environments, even though attempts to grant animals full personhood have largely failed. Animals hold limited rights, but they lack the moral agency that grounds human personhood. Extending similar rights to robots, entities that lack even sentience, presents greater ethical challenges.
Moreover, granting legal personhood to robots could shift focus and resources away from more pressing human rights issues, like protecting vulnerable populations and ensuring AI is regulated with human well-being in mind. The focus could shift from safeguarding the interests of people to maintaining the legal status of machines, entities that do not require the same protections in the first place. This could weaken the framework that has been carefully constructed to address social justice and the preservation of human dignity.
VI. INTELLECTUAL PROPERTY AND ROBOTICS
As robots and AI systems become increasingly autonomous, questions of intellectual property (IP) ownership pose significant legal and ethical challenges. Traditionally, IP law assumes that creative and inventive works originate from human authors or inventors. Copyright, patent, and trademark protections are grounded in human intention, creativity, and accountability. Robots, however, produce outputs, including text, images, designs, or inventions, using algorithms rather than conscious thought or creative intention. Therefore, current legal frameworks do not recognize robots as authors or inventors.
Under existing law, any work produced by an AI system is typically attributed to the human who created, trained, or directed the system. This approach maintains human responsibility and prevents robots from becoming holders of IP rights, which would imply a form of legal personhood. Granting robots IP rights would drastically shift the foundation of IP law, raising questions about ownership, accountability, and the distribution of economic benefits. It could also create incentives for corporations to use AI-generated IP as a means of consolidating power while distancing themselves from responsibility.
Although some scholars propose that highly autonomous AI systems could merit limited authorship recognition, doing so risks blurring the boundary between tool and creator. Maintaining human-centered authorship ensures that robots remain instruments serving human innovation rather than independent rights-bearing entities. As AI evolves, IP law may need targeted reforms, such as clearer rules for AI-assisted creation, but these should reinforce, not erode, the principle that legal and moral rights belong to humans, not machines.
VII. CONCLUSION
The idea of granting rights to robots presents profound legal and ethical challenges, especially as machines become increasingly sophisticated. While arguments advocating for robot rights often highlight their autonomy and decision-making capabilities, this article contends that robots, as creations of human design, do not fulfill the essential criteria for legal personhood or moral agency. Advanced AI systems, such as Sophia, may simulate human behavior, but they remain tools devoid of consciousness, self-awareness, and the capacity to experience suffering. Extending rights to robots could inadvertently undermine the foundational principles of human rights, introduce unnecessary legal complexities, and dilute the concept of personhood. As AI technology continues to evolve, it is imperative for society to focus on regulating and ensuring the ethical use of robots, reserving legal recognition for humans, and maintaining the integrity of human rights.
Recent discussions emphasize the ethical considerations surrounding AI and robot rights. For instance, a group of scientists and philosophers has called for a reevaluation of AI consciousness and its ethical implications, including whether robots should possess rights. They argue that dismissing the potential for AI to develop consciousness could lead to unnecessary suffering and frustration. This debate is complicated by the lack of consensus on the definition of consciousness, even among animals. These discussions highlight the need for ongoing dialogue and careful consideration as AI technology advances.
Considering these developments, it is crucial to approach the issue of robot rights with caution. While the potential for AI to achieve consciousness remains a topic of debate, current AI systems lack the subjective experiences that underpin moral and legal considerations. Therefore, the focus should remain on ensuring that AI and robotics are developed and utilized in ways that prioritize human welfare and ethical standards, without extending legal personhood to machines that do not possess the qualities inherent in human beings.
[1] Thomas Frey, The Evolution of Robots: The Blurring Lines Between People and Machines, FUTURISTSPEAKER.COM (Nov. 26, 2024), https://futuristspeaker.com/artificial-intelligence/the-evolution-of-robots-the-blurring-lines-between-people-and-machines/.
[2] Zara Stone, Everything You Need To Know About Sophia, The World’s First Robot Citizen, FORBES (Nov. 7, 2017, 12:22 PM), https://www.forbes.com/sites/zarastone/2017/11/07/everything-you-need-to-know-about-sophia-the-worlds-first-robot-citizen/.
[3] Rights and Privileges, in APPLIED ETHICS PRIMER, https://caul-cbua.pressbooks.pub/aep/chapter/rights-and-privileges/.
[4] Id.
[5] Visa A.J. Kurki, A Theory of Legal Personhood (OXFORD UNIVERSITY PRESS 2019), https://doi.org/10.1093/oso/9780198844037.003.0001.
[6] Andrew Fagan, Human Rights, INTERNET ENCYCLOPEDIA OF PHILOSOPHY (last visited April 9, 2026), https://iep.utm.edu/hum-rts/.
[7] Celeste Friend, Social Contract Theory, INTERNET ENCYCLOPEDIA OF PHILOSOPHY (last visited April 9, 2026), https://iep.utm.edu/soc-cont/.
[8] Rahul Awati, What Is a Robot?, SEARCHENTERPRISEAI (Oct. 16, 2025), https://www.techtarget.com/searchenterpriseai/definition/robot.
[9] Tim Mucci, The History of AI, IBM THINK (Oct. 21, 2024), https://www.ibm.com/think/topics/history-of-artificial-intelligence.
[10] Sophia, Hanson Robotics (last visited April 6, 2026), https://www.hansonrobotics.com/sophia/.
[11] Ana Valenzuela et al., Automation in Marketing and Consumption: How Artificial Intelligence Constrains the Human Experience, 9 J. ASS’N CONSUMER RES. 241 (2024), https://doi.org/10.1086/730709.
[12] Dmytro Humennyi, Robotics in Healthcare: A Comprehensive Overview, N-IX (Aug. 30, 2024), https://www.n-ix.com/robotics-in-healthcare/.
[13] What Is Artificial Intelligence (AI)?, PALO ALTO NETWORKS (last visited Apr. 9, 2026), https://www.paloaltonetworks.com/cyberpedia/artificial-intelligence-ai.
[14] David J. Gunkel, The Machine Question: Critical Perspectives on AI, Robots, and Ethics 57 (MIT PRESS 2012).
[15] Joanna J. Bryson et al., Of, for, and by the People: The Legal Lacuna of Synthetic Persons, 25 ARTIF. INTELL. & L. 273, 275 (2017).
[16] Jordi Pérez Colomé, Kate Darling, Robot Expert: “We Shouldn’t Laugh at People Who Fall in Love with a Machine. It’s Going to Be All of Us”, MIT MEDIA LAB (June 7, 2023), https://www.media.mit.edu/articles/kate-darling-robot-expert-we-shouldn-t-laugh-at-people-who-fall-in-love-with-a-machine-it-s-going-to-be-all-of-us/.
[17] Abeba Birhane et al., Debunking Robot Rights Metaphysically, Ethically, and Legally, 29 FIRST MONDAY (Apr. 1, 2024), https://dx.doi.org/10.5210/fm.v29i4.13628.
[18] Christopher Collins et al., Artificial Intelligence in Information Systems Research: A Systematic Literature Review and Research Agenda, 60 INT’L J. INFO. MGMT. 102383 (2021).
[19] Janna Anderson & Lee Rainie, Concerns About Human Agency, Evolution and Survival, PEW RESEARCH CENTER (Dec. 10, 2018), https://www.pewresearch.org/internet/2018/12/10/concerns-about-human-agency-evolution-and-survival/.
[20] Celeste Friend, Social Contract Theory, INTERNET ENCYCLOPEDIA OF PHILOSOPHY (last visited Apr. 9, 2026), https://iep.utm.edu/soc-cont/.
[21] Ana Luize Corrêa Bertoncini & Mauricio C. Serafim, Ethical Content in Artificial Intelligence Systems: A Demand Explained in Three Critical Points, 14 FRONT. PSYCHOL. 1074787 (2023).
[22] Olaf Witkowski & Eric Schwitzgebel, The Ethics of Life as It Could Be: Do We Have Moral Obligations to Artificial Life?, 30 ARTIFICIAL LIFE 193 (2024), https://doi.org/10.1162/artl_a_00371.
[23] Sayed Fayaz Ahmad et al., Impact of Artificial Intelligence on Human Loss in Decision Making, Laziness, and Safety in Education, 10 HUM. & SOC. SCI. COMM. 311 (2023), https://doi.org/10.1057/s41599-023-01787-8.
[24] Johan F. Hoorn & Ivy S. Huang, The Media Inequality, Uncanny Mountain, and the Singularity Is Far from Near: Iwaa and Sophia Robot versus a Real Human Being, 181 INT’L J. HUM.-COMPUTER STUD. 103142 (2024), https://doi.org/10.1016/j.ijhcs.2023.103142.
[25] John Frank Weaver, Why Robots Deserve Free Speech Rights, SLATE (Jan. 16, 2018), https://slate.com/technology/2018/01/robots-deserve-a-first-amendment-right-to-free-speech.html.
[24] Karina Parikh, Artificial Intelligence: Should Robots Have Rights?, AVASANT (Oct. 2020), https://avasant.com/report/artificial-intelligence-should-robots-have-rights/.
[26] Daniel Estrada, Sophia and Her Critics: The Ethics of Human Likeness — Part 1, MEDIUM (June 18, 2018), https://medium.com/@eripsa/sophia-and-her-critics-5bd22d859b9c.
[27] Toni M. Massaro & Helen Norton, SIRI-OUSLY? Free Speech Rights and Artificial Intelligence, 110 NW. U. L. REV. 1169 (2016).
[28] Johan Hermansson, Structuring Concepts of Legal Personhood: On Legal Personhood as a Cluster Property, 51 REVUS 1 (2023), https://doi.org/10.4000/revus.9933.
[29] Id.
[30] Macarena Montes Franceschini, Traditional Conceptions of the Legal Person and Nonhuman Animals, 12 ANIMALS 2590 (2022), https://doi.org/10.3390/ani12192590.
[31] Andrea Bertolini & Francesca Episcopo, Robots as Legal Subjects? Disentangling the Ontological and Functional Perspective, 9 FRONT. ROBOT. AI (2022), https://doi.org/10.3389/frobt.2022.842213.
[32] Leonardo De Cosmo, Google Engineer Claims AI Chatbot Is Sentient: Why That Matters, SCI. AM. (July 12, 2022), https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/.
[33] Siina Raskulla, Hybrid Theory of Corporate Legal Personhood and Its Application to Artificial Intelligence, 3 SN SOC. SCI. 78 (2023), https://doi.org/10.1007/s43545-023-00667-x.
[34] Id.
[35] David K. Millon, The Ambiguous Significance of Corporate Personhood, 2 STAN. AGORA ONLINE J. LEGAL PERSPS. 39 (2001), https://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=1648&context=wlufac.
[36] Celeste Friend, Social Contract Theory, INTERNET ENCYCLOPEDIA OF PHILOSOPHY (last modified July 18, 2017), https://iep.utm.edu/soc-cont/.
[37] Joe McKendrick & Andy Thurai, AI Isn’t Ready to Make Unsupervised Decisions, HARVARD BUS. REV. (Sept. 15, 2022), https://hbr.org/2022/09/ai-isnt-ready-to-make-unsupervised-decisions.
[38] Brian Hutler et al., Designing Robots That Do No Harm: Understanding the Challenges of Ethics for Robots, 4 AI & ETHICS 463 (2023), https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10108783/.