sdoc-col logo

Praxis: The Online Publication of The McCarthy Institute

By Obren Manjencich. Obren Manjencich is a second-year law student at the Sandra Day O’Connor College of Law, where he serves as an Associate Editor for the Sports and Entertainment Law Journal. With a strong focus on Intellectual Property and Sports & Entertainment law, he is committed to building a career at the intersection of these dynamic fields. Obren has a particular interest in trademark prosecution, licensing and branding. In addition to trademarks, Obren is also keen to explore and engage with other forms of intellectual property, including copyrights, patents, and trade secrets. In addition to his IP studies, Obren is deeply interested in the legal implications of emerging technologies, including blockchain, AI, and quantum computing. His interest in these areas lies at the crossroads of these technologies with privacy, data protection, and security.

I. Introduction

    A. Issue

    As Artificial Intelligence (“AI”) evolves in use cases, it is becoming a divisive tool. While some laud its ability to streamline tasks, others hold it in contempt for exacerbating existing problems. Regardless of perspective, AI currently operates in a “wild west” state with multiple companies vying for supremacy. This race to be the best has exposed critical pitfalls that have yet to be addressed, prompting calls for Congress to place guardrails on the technology through regulation. This paper explores the definition of AI, the history of Congressional regulation of emerging technologies, the challenges posed by AI, and the arguments for and against federal oversight.

    B)        Defining Artificial Intelligence

    AI is defined as “a field of science concerned with building computers and machines that can reason, learn, and act in such a way that would normally require human intelligence or that involves data whose scale exceeds what humans can analyze.”[1] The technology is broken down into categories and types based on system capabilities and functionalities. It encompasses three main categories[2] according to what it can do and how it operates within specific constraints or domains.

    i. Weak AI

      Also known as “Narrow AI,” Weak AI is the only form of AI that exists today. It can be trained to perform a single or specific task and cannot perform beyond its predefined scope, such as Amazon’s Alexa.

      ii. Strong AI

      Also known as “General AI,” Strong AI is a theoretical concept that uses prior knowledge and skills to accomplish tasks in a different context without the need for additional human training.

      iii. Super AI

      Super AI is strictly theoretical and, if ever realized, would surpass human cognitive abilities in reasoning, learning, and decision-making.

      1. Types

        The four types of AI are often defined by their level of cognitive sophistication and ability to learn from experiences. They reflect how AI systems are structured and how they handle tasks.

        a. Reactive Machines

          Limited to reacting to different kinds of stimuli based on pre-programmed rules, this type does not use memory, so it cannot learn new data. An example is IBM’s Deep Blue.

          b.   Machine Learning

          Considered the basis of modern AI, this type uses memory to improve over time by learning from new data, typically through an artificial neural network or other training models.

          c.  Theory of Mind

          Usually referred to as Artificial General Intelligence, this type of AI currently does not exist but could emulate the human mind and decision-making capabilities.

          d. Self-Aware

          Referred to as Superintelligence, this is a theoretical AI that, if realized, would possess self-awareness, along with the intellectual and emotional capabilities of a human.

          II. Historical Context Congressional Regulation of Technology

            Since the inception of the internet and the World Wide Web, advancements like cloud computing have driven rapid innovation. Cloud computing refers to the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the internet (“the cloud”) to offer faster innovation, flexible resources, better data security, and cost efficiency.[3]

            In 2018, Congress passed the Clarifying Lawful Overseas Use of Data Act (Cloud Act)[4]  following a legal dispute between the U.S. government and Microsoft over access to emails stored on a Microsoft server in Ireland. The Act establishes a legal framework for law enforcement to access data stored overseas on servers owned by U.S. companies. The regulation and overall accessibility of personal data has led to a unique juxtaposition. While the Act is a critical tool for investigating serious crimes, it raises concerns about government access to personal data stored outside U.S. borders. Ultimately, regulation provides a framework to balance the advantages of cloud computing, like data protection, with obligations around legal compliance and oversight.

            III. Current state of AI Technology

            A. Advancements and Applications

              While many view AI as a new and emerging technology that will continue to play a significant role in our lives, many would be surprised by how often they interact with AI without realizing it. AI’s seamless integration can be observed through a few everyday examples. First, AI has transformed customer service. AI-backed virtual assistants and chatbots have enabled companies to streamline support systems by providing around the clock answers to questions.[5] Second, facial recognition utilizes AI. Facial recognition software is a form of AI that captures facial features, creating a pattern of those features, thus resulting in the ability to identify a face.[6] Third, video games, specifically virtual reality and augmented reality games, use AI in various ways.[7] For example, AI can analyze personal behaviors, recognize gestures, and track objects. One final example is self-driving cars. Specifically, AI is being used through sensors to generate detailed maps and make informed decisions.[8]

              B.       Key Stakeholders AI

              Explainable AI outlines the role of stakeholders in the technology. It is defined as a set of processes and methods that allows human users to comprehend and trust the results created by machine learning algorithms.[9] This framework may offer some utility in the future, but it is presently not feasible for most systems. There are four distinct communities of stakeholder’s imperative to future developments to the field from the Explainable AI perspective.[10]  

              Developers: Consisting of professionals from large corporations, small and medium enterprises, the public sector, and academia,[11] they are invested in the quality and reliability of AI models. Developers’ main goal is to ensure robust system testing, debugging, and evaluating to improve application performance.

              Theorists: Composed of those accountable for advancing AI, theorists are focused on pushing the boundaries and limitations of AI knowledge rather than practical applications of it.

              Ethicists: Ethicists consist of policymakers, commentators, and critics from a wide range of disciplines, including social science, law, journalism, and politics. They are primarily concerned with the fairness, accountability, and transparency of AI systems.

              Users:  Users are people who use AI.

              IV. Current Legislative Frameworks

              A. Frameworks in U.S.

                Currently, there are no federal laws specifically regulating AI in the United States, but there are federal laws that apply to AI systems.[12] former President Biden issued an executive order to advance “trustworthy AI.”[13]  Provisions included sharing test results and other critical information with the U.S. government, protecting Americans from AI-enabled fraud and deception through the development of a best practices guide for AI-generated content. This established an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, and various measures regarding protecting Americans’ privacy.

                            In response, the Cybersecurity and Infrastructure Security Agency (CISA) developed its own roadmap focusing on four primary goals: enhancing cybersecurity capabilities, ensuring AI systems are protected from cyber-based threats, and deterring the malicious use which threaten the critical infrastructure Americans rely on every day.[14] These goals consist of Cyber Defense – using AI tools to defend against both traditional and emerging AI-based threats, Risk Reduction and Resilience – supporting secure, risk-aware adoption of AI systems for critical infrastructure organizations, Operational Collaboration – sharing AI-related threat and risk information with the public and critical infrastructure sectors, and Agency Unification – integrating AI responsibly across CISA’s operations.

                Additionally, the Department of Defense released its 2023 Data, Analytics, and Artificial Intelligence Adoption Strategy, building on earlier versions from 2018 and 2020.  The strategy prioritized improving decision-making by leveraging AI to enhance the speed, quality, and accuracy of decisions, ensuring U.S. warfighters maintain a competitive edge.

                Lastly, in July 2024, the National Institute of Standards and Technology released its  “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” to fulfill former President Biden’s Executive Order.[15] This framework outlines various risks posed by generative AI ranging from data privacy, to dangerous, violent, or hateful content, to information integrity.[16] It provided possible solutions that could also function as starting points for future AI regulation. Some are as simple as establishing terms of use and terms of service for generative artificial intelligence (GAI) systems to implanting fact-checking techniques to verify the accuracy of information produced. For context, GAI, can generate text, images, or other forms of media. GAI systems learn the patterns and structures of input training data, and then generate new data with similar characteristics.

                B. International AI laws

                i. European Efforts

                While the United States may be lacking in federal AI laws, the European Union has taken affirmative steps towards dealing with AI through the Artificial Intelligence Act, General Data Protection Regulation, and the Digital Services Act.

                First, the Artificial Intelligence Act assigns four types of risk categories to AI applications.[17] The first category is unacceptable risk, which bans AI systems that pose serious dangers, like government social scoring. The second category is high-risk, including tools that rank job applicants, and assigns them specific rules. The third is limited risk, which are subject to lighter transparency obligations, such as ensuring end users are aware they are interacting with AI. Lastly, there is minimal risk, which are mostly left unregulated. The majority of the Act’s obligations fall on developers of high-risk systems regardless of whether they are located in the European Union.

                              Second, the General Data Protection Regulation (GDPR) is designed to provide a clear set of rules related to the protection of natural persons’ personal data in terms of processing and free movement.[18] There are several articles within the GDPR that affect AI systems. One of those is Article 5:  Principles Relating to Processing of Personal Data. This article establishes propositions such as data minimization and purpose limitation.[19] Data minimization is the principle that a data controller should limit the collection of personal information to what is directly relevant and necessary to accomplish a specified purpose.[20] Purpose limitation is a principle that personal data should only be collected for specified and legitimate purposes and not further processed in a manner that is incompatible with those purposes.[21] The measures in this section of the Act pressure AI developers to perform in a more responsible and ethical manner with regard to individual’s rights.

                Another provision of the Act that affects AI systems is Article 35: Data Protection Impact Assessment. This article clarifies that if a type of processing could pose a high risk to people’s rights and freedoms, the organization must assess how this processing will impact personal data protection before proceeding.[22]A DPIA occurs in three different scenarios: when there is a large-scale automated evaluation of personal data that affects individuals’ legal rights or has significant impacts on them; when special categories of data, like health information, or data about criminal convictions are processed on a large scale; and when there’s large-scale monitoring of publicly accessible areas. Understandably, an AI system could touch any of these three situations. In doing so, AI developers would need to be aware of the implications, adding another layer of scrutiny and accountability for AI systems. 

                Lastly, the Digital Services Act regulates online intermediaries and platforms with the goal of preventing illegal and harmful activities and the spread of disinformation.[23] Several provisions within the Act have implications for AI systems. The transparency requirements of the Act allow users and regulators to understand how online platforms operate, particularly regarding algorithmic decision-making. Many AI systems, especially those used in content moderation and recommendation, operate using complex algorithms that can be difficult to understand. Transparency requirements help demystify these processes, allowing users to know how decisions are made regarding what content is removed or flagged and address concerns about bias in AI decision-making. When users understand how AI algorithms work, the company fosters trust in the system. Transparency can reassure users that AI is being used responsibly and ethically. Ultimately, the DSA’s transparency obligations for algorithms are part of a broader regulatory framework addressing AI usage. Compliance with these regulations is crucial for platforms utilizing AI technologies, ensuring they meet legal and ethical standards.

                 Ultimately, these laws may lead to restrictions or complete bans on many types of AI. A recent analysis found that EU laws would result in a ban for all existing GAI products. As a result, Facebook has refused to release their AI products in the EU, potentially stifling economic development.

                ii. China’s Efforts

                 As arguably the United States’ biggest rival in the technology sphere, China has also taken affirmative steps to address the future of Artificial Intelligence. Specifically, China’s Personal Information Protection Law (PIPL), enacted in 2021, and the Data Security Law (DSL), also enacted in 2021, shape how personal data may be used, thereby affecting the availability and accuracy of datasets for AI systems.

                  The PIPL is aimed at protecting the personal information of people in Mainland China, referring to the continental landmass under direct control of the People’s Republic of China (PRC) [24] The law applies not only to organizations and individuals who process personally identifiable information (PII) in China but also those who process data of Chinese individuals’ PII outside of the country.[25] Personal information is defined broadly in Article 4 of the law as, “…various kinds of information related to identified or identifiable natural persons recorded by electronic or other means, excluding the information processed anonymously.”[26] The article also clarifies that processing means “…the collection, storage, use, processing, transmission, provision, publication, and erasure of personal information.”[27]

                   The PIPL also requires providing notice to data subjects. Notice must be provided in various situations, including: the purpose and method of collecting and/or processing PII; the transfer of data subjects’ PII to cloud service providers; processing of PII by third parties on behalf of the organization; and cross-border transfers of PII to recipients outside the country.[28] The PIPL also requires consent before collecting and processing the PII of data subjects.[29] Article 28 of the law identifies sensitive categories of personal information such as biometrics, medical health, financial accounts, and religious beliefs. [30]

                  Similarly to the EU’s GDPR, Article 55 of the PIPL requires data impact assessments in five instances: (1) processing sensitive personal information; (2) making use of personal information to make automatic decisions; (3) entrusting others to process personal information, providing other personal information processors with personal information, and disclosing personal information; (4) providing personal information to overseas parties; and (5) other personal information processing activities that have a significant impact on individuals’ rights and interests.[31] Moreover, Article 56 notes three considerations that are evaluated in the data assessment.[32] First is whether the purpose and method of processing personal information are legitimate, justifiable, and necessary.[33] Second is the impact on individuals’ rights and interests and the security risks.[34] Last  is whether the security protection measures taken are legitimate, effective, and appropriate to the degree of risks. [35]

                  There are other notable articles of the law. Article 39 notes that if a personal information processor shares an individual’s data with a party outside of China, they must inform the individual about the recipient’s details, the purpose and method of processing, the type of data, and how the individual can exercise their rights.[36] The processor must also obtain the individual’s separate consent.[37] Article 47 identifies the situations where a personal information processor must delete personal information, and if they fail to do so, the individual can request its deletion.[38] Article 49 includes what happens in the event of death, as close relatives may access, copy, correct, or delete the deceased’s personal information for legitimate reasons unless the deceased made other arrangements prior to their death.[39] Article 66 addresses the legal liability for violations of the PIPL, distinguishing minor violations from more serious violations.[40]

                   The Data Security Law (DSL) is slightly different in terms of its scope and purpose than the PIPL but is complementary to China’s data governance landscape and AI. For starters, the DSL aims to regulate data processing, ensure data security, promote data development and use, protect the rights of individuals and organizations, and safeguard the country’s sovereignty, security, and development.[41] It also distinguishes data as “core data” and “important data.” Core data refers to data related to national security, key economic sectors, important public services, and major public interests whereas important data is left relatively undefined.[42] The DSL is seen as a counter to the United States’ CLOUD Act, which gives U.S. law enforcement agencies the ability to demand companies under U.S. jurisdiction to produce the data requested regardless of where it is being stored.[43] The DSL, however, makes this much more complicated. Article35highlights that if public security or national security agencies need data for national security or crime investigations, they must follow legal procedures and get approval before obtaining the data.[44] Relevant organizations and individuals must cooperate. Following suit, Article 36 clarifies China’s gatekeeping role, noting that Chinese authorities will handle data requests from foreign law enforcement or judicial bodies based on Chinese laws, international treaties, or mutual agreements.[45] No organization or individual in China can provide data stored within China to foreign authorities without the approval of the relevant Chinese authorities.[46]

                 In conclusion, China’s PIPL and DSL significantly shape the regulation of AI by establishing a legal framework that prioritizes data privacy, security, and accountability. These laws are crucial in ensuring that AI systems, which rely on vast amounts of data, operate within clear boundaries regarding personal data protection and national security. The PIPL enforces strict guidelines on data collection, processing, and consent, compelling AI developers to adopt more transparent and privacy-conscious practices, especially when dealing with sensitive information. Meanwhile, the DSL addresses the security risks associated with data management, requiring companies to adopt robust measures to protect sensitive data and comply with regulations on data localization.

                      While these laws help safeguard individual privacy and national interests, they also impose significant compliance burdens on AI developers and businesses, potentially slowing innovation and increasing operational costs. The regulations’ broad scope and potential for vague interpretations may create uncertainties, particularly for international companies seeking to operate in China or collaborate with local firms. Nevertheless, these legal frameworks push AI development toward more ethical and responsible practices, aligning technological advancement with societal values.

                 Ultimately, China’s PIPL and DSL serve as critical tools in balancing the rapid growth of AI with the need for responsible data governance, though their impact on global AI innovation and cross-border data flows remains an area of ongoing debate. As the regulatory landscape evolves, these laws will likely continue to influence not only domestic AI practices but also global standards in the data-driven future.

                V. Challenges Presented by AI

                A. Consequences of Rapid Advancements

                i. Environmental Concerns

                AI is already taking a toll on the environment in terms of energy consumption, as most GAI models are trained and run through cloud computing.[47] Coincidentally, U.S. companies own 70% of the cloud computing market, with the bulk of it split between three tech giants: Amazon, Microsoft, and Google. [48] ChatGPT, a GAI model, takes ten times the amount of energy to answer a question than a Google search.[49] Put another away, a ChatGPT search requires 2.9 watt-hours of electricity, and a Google search only requires 0.3 watt-hours.[50] Research estimates that the overall increase in data center power consumption from AI will reach 200 terawatt-hours per year between 2023 and 2030.[51] Analysts expect AI to represent roughly 19% of the power demand for data centers.[52] As a result, CO2 emissions would produce a social cost of $125 to 140 billion.[53] This will likely change with the greening of the grid.

                ii. Vulnerabilities

                AI is a work in progress and, as a result, is exposed to a plethora of vulnerabilities. These vulnerabilities can be broken down into various categories, including integrity and confidentiality risks.

                Integrity risks refer to the potential for attacks that could cause systems to produce results that are not intended by designers, implementers, and evaluators.[54] Data poisoning attacks, a type of integrity risk, occur when an attacker disrupts the data used to train a machine learning algorithm. They allow an attacker to affect how the trained algorithm performs during testing and use, either by reducing its accuracy or causing it to give wrong results in certain cases. It undermines the training process, leading to models that make inaccurate predictions or decisions. It can affect the overall performance and reliability of the system, potentially causing harm in critical applications like healthcare or finance.

                Evasion attacks, another type of integrity risk, happen when an attacker tries to make a trained model give incorrect outputs while the system is running. These attacks involve the attacker manipulating the input or query to the trained model. Evasion attacks are usually classified as untargeted (trying to get any wrong answer) or targeted (trying to get a specific wrong answer). These attacks are concerning in situations like autonomous driving or security systems, where incorrect results could lead to serious consequences.

                              Confidentiality risks involve the accidental exposure of training data or details about the neural model’s design. Their risk category can be broken down into sub-types. First, prompt injection occurs when an attacker crafts specific inputs to trick the model into producing outputs that it normally would not generate. Arguably the most important confidentiality risk occurs when a person inputs sensitive data into a prompt, resulting in that data becoming integrated into a model’s training set and potentially becoming available to others. Second, “jailbreak” attacks force LLMs to produce outputs that go beyond the limits set by their designers, allowing them to bypass safeguards meant to prevent harmful responses. While these two subtypes of confidential risks sound similar, their end goals are very different. Prompt injection produces misleading or harmful outputs by embedding instructions within the prompt, but the model still operates within its usual framework.the goal of a jailbreak is to gain features or responses that the model is designed to restrict, often leading to inappropriate or dangerous content. Ultimately, these confidentiality risks can result in a loss of trust and reputational damage to an AI system.

                B. Balancing Innovation with Societal Impacts

                i. The Spread of Inequality Across Sectors

                The use of AI in the financial services industry has led to unequal and biased treatment. In 2019, a team of researchers from University of California-Berkeley found that lenders who used algorithms to assist in generating decisions on loan pricing discriminated against borrowers of color, resulting in a collective overcharge of $765 million a year for home and refinance loans.[55] Additionally, a Connecticut federal district court ruled that a company, whose tenant screening algorithm used data regarding criminal records, discriminated against tenants of protected classes and was liable for claims brought under the Fair Housing Act.[56]

                Moreover, AI algorithms may exacerbate racial disparities in education. developers may input historical data into the technology that ultimately replicates pre-existing biases that models may be trained to believe are accurate. For example, Nevada is one of six states where every district uses the program “Infinite Campus” to track attendance, behavior, and grades.[57] It supports an “early-warning system” that employs a machine-learning algorithm to assess the likelihood that students whose data is in the system will or will not graduate.[58] While tools like this are intended to assist education in improving outcomes for students, predictive analytics tend to rate racial minorities as less likely to succeed academically.[59] This is due to the fact that race is used as a risk factor in these algorithms and is treated as an indicator of success or failure based on the historical performance of students with those racial identities.[60]

                ii. Privacy Concerns

                AI systems present significant privacy risks like those of existing technologies, but they are exacerbated by minimal constraints on data collection.[61] AI systems consume vast amounts of data through means that are not transparent. As a result, it is basically impossible to escape the systematic forms of digital surveillance that span many aspects of life.[62] Additionally, there is a chance that other individuals may use data and, consequently, AI tools for the wrong reasons. Generative AI systems that are trained with data scraped from across the internet, have the potential to memorize personal information. This results in spear phishing, which refers to deliberately targeting people for the purposes of committing identity theft or fraud.[63]

                While data minimization and purpose limitation are elements of the GDPR and privacy law altogether, questions remain about the way regulators may use these concepts operationally. For example, how would a regulator determine whether a company has collected too much information? While the answer may be clear in some situations, it becomes more difficult when a company could argue that they use data for a number of tasks, thereby justifying their data collection. Another argument that may be advanced by the companies is by using more data, the more accurate their algorithms would become.

                VI. The Case for Congressional Regulation of AI

                A. Ensuring Ethical Uses

                i. Deepfake Pornography

                Deepfakes are media content generated by AI technologies that are designed to deceive the public[64] through machine-learning algorithms and facial mapping software, which insert data into unauthorized digital content.[65] The emergence of deepfakes on the internet, particularly pornography, is cause for concern, as this type of content can ruin lives. Last year, the hashtag #ProtectTaylorSwift was trending after AI-generated nude images of her spread across the internet, including the social media platform X.[66] The images were taken down across various platforms, and X disabled searches tied to Swift’s name days later.[67] but damage was already done.

                While Swift, a global superstar, has the command for platforms to take quick action and stop any attempt to harm her reputation, this is not the case for non-celebrities affected by this trend. For instance, AI-generated nude images of a 14-year-old girl were spread across her high school in New Jersey.[68] in Seattle, a teenage boy used AI to create and distribute similar fake images of teen girls at a suburban high school.[69] The impact of these situations  is debilitating. Oftentimes, victims have no knowledge who created the images, and viewers may not know they are fake, leading to isolation and mistrust of those around them.[70] Moreover, victims can experience depression or anxiety in addition to the compounding emotional damage that results from reputational harm.[71] From a legal perspective, several states have passed legislation to tackle this growing problem, varying in scope. For example, some states criminalize nonconsensual deepfake porn, while others only provide victims with the ability to sue perpetrators for damages.[72] Currently, there is no federal law prohibiting this content.[73]

                The rise of AI-generated deepfake pornography necessitates robust federal legislation to protect individual rights, particularly regarding privacy and consent. Such laws are essential to mitigate risks of exploitation, harassment. and defamation, addressing the profound psychological harm these technologies can inflict on victims. A cohesive federal framework would ensure consistent protections across states, empowering individuals with legal recourse while fostering responsible innovation in AI.

                ii. Deepfake Political Misinformation

                Deepfake videos also pose a significant danger in the political environment. The ramifications are already apparent. During an election in Slovakia, fake audio surfaced days before voters went to vote, in which a candidate discussed rigging votes and raising the price of beer.[74] As a result, the pro-Western party lost to one led by a pro-Russian politician.[75] Audio clips or videos like the previous example highlight the ability of these deepfakes to subvert elections.

                American lawmakers in 27 states introduced bills to regulate deepfakes in elections within the first six weeks of the year.[76] Many of these bills focus on transparency by proposing mandates that campaigns and candidates put disclaimers on AI-generated media.[77] Others aim to ban deepfakes within a certain window before an election.[78] However, regulating these videos, whether they are at the state level or federal level, poses a difficult balancing act with the First Amendment. Dan Weiner, director of the elections and government program at the Brennan Center for Justice at New York University School of Law, highlights this challenge saying, “It is important to remember that under the First Amendment, even if something is not true, generally speaking you can’t just prohibit lying for its own sake.”[79]

                 Nonetheless, to safeguard democratic integrity, it is imperative to federally regulate misinformation deepfakes, particularly those aimed at subverting elections. This regulation must delicately balance the necessity of protecting electoral processes with First Amendment rights, ensuring that free expression is preserved while preventing harmful distortions of truth that threaten civic discourse and democratic elections.

                B. Economic Implications

                i. Automation

                The rapid advancement of AI and automation is transforming industries at an unprecedented pace, leading to profound implications for the workforce. The 118-day SAG-AFTRA strike serves as a poignant illustration of these challenges, as workers in the entertainment sector voiced legitimate concerns over job security in the age of AI, among other issues such as fair compensation and residual increases.[80] The strike underscores the urgent need for federal regulation of AI to protect workers of all kinds and ensure a just transition as technological innovation reshapes the job landscape.

                As the strike reveals, workers are not merely resisting change but are advocating for their right to participate meaningfully in a transforming economy. Their demands highlight the necessity of federal intervention to establish regulations that govern the use of AI in a manner that prioritizes human employment. It is imperative that policymakers address the potential for mass job loss, ensuring that technological advancement does not come at the expense of human livelihoods. A robust regulatory framework could facilitate the development of policies aimed at retraining workers, creating new job opportunities, and ensuring that the benefits of AI are distributed equitably across society.

                Regulation can foster innovation that complements human labor rather than completely replacing it. By incentivizing the development of AI tools designed to enhance human creativity and productivity, the government can help cultivate a landscape in which technology and talent coexist harmoniously. This approach not only safeguards existing jobs but also encourages the emergence of new roles that leverage AI’s capabilities while preserving the uniquely human aspects of creativity and storytelling.

                ii. Infringing on Intellectual Property Rights

                As AI continues to evolve and permeate various industries, the need for regulation becomes increasingly clear, particularly in addressing the economic ramifications of potential infringements on intellectual property rights, such as copyright. As AI systems are capable of generating content that closely mimics or outright replicates copyrighted material, the risk of infringement has escalated, raising profound economic implications. Businesses that rely on creative content—such as publishers, musicians, and artists—are particularly vulnerable, as their revenue streams hinge on the protection of their intellectual property. Without stringent regulatory measures, the proliferation of AI-generated works could dilute the value of original creations, undermining the incentives that drive innovation and artistic expression.

                Moreover, the ambiguity surrounding authorship and ownership of AI-generated content complicates the enforcement of copyright laws. For example, Kristina Kashtanova initially received a copyright registration for her comic book Zarya of the Dawn, which used AI-generated images.[81]  After learning that AI was used, the U.S. Copyright Office cancelled the original certificate and gave Kashtanova a partial registration. The Copyright Office cited the use of Midjourney, an AI image creation platform, as not being the product of human authorship therefore ineligible for protection.

                As a result, current legal frameworks have only begun to address whether the AI itself, its developers, or the users should hold rights to the output, leading to uncertainty and potential disputes. In August 2023, a federal judge rejected an attempt to copyright an artwork that was created through AI.[82] Stephen Thaler, an inventor, listed his computer system as the work’s owner, arguing a copyright should have been issued and transferred to him.[83] After the U.S. Copyright Office repeatedly rejected his request, he decided to sue the agency’s director.[84] Thaler appealed the decision seeking registration, but the Court of Appeals affirmed the denial in March.[85] This ruling underscores the complex challenges that AI poses to copyright law, and potentially to other areas of intellectual property. Without clear legal guidance, the uncertainty surrounding AI-generated content not only threatens the livelihoods of human creators but may also discourage investment in creative industries, as stakeholders grow wary of legal risks. To address these concerns, comprehensive regulations are crucial. They can help protect intellectual property rights, promote economic stability, and ensure that advancements in AI support, rather than undermine, creativity and innovation.

                iii. National Security

                a. Cybersecurity

                With AI becoming increasingly embedded in critical infrastructure, financial systems, and everyday applications, the potential for cybersecurity vulnerabilities grows exponentially, as each new integration introduces complex interdependencies that can be exploited. These systems, whether they control transportation networks, healthcare databases, or personal devices, are often interconnected, creating a web of potential entry points for attacks. For example, a breach in one area—such as a hospital’s AI-driven patient management system—could cascade, impacting other systems reliant on the same data or technology. This interconnectedness makes it essential to develop regulations that not only address individual systems but also consider the broader ecosystem in which they operate.

                Furthermore, without clear guidelines and oversight, companies may prioritize innovation over security, leading to lax practices that heighten risks. This tendency to prioritize rapid deployment and competitive advantage often results in a culture where security measures are viewed as impediments rather than essential components of development. In such an environment, companies might rush to market with AI products that have not undergone thorough security assessments, leaving them vulnerable to exploitation by cybercriminals. Comprehensive regulations can address these challenges by establishing standardized security protocols that all organizations must follow. By mandating regular security audits, vulnerability assessments, and compliance checks, regulations can ensure that AI systems are built with security as a foundational element. This not only protects the organizations themselves but also safeguards users and the broader community from potential fallout due to security breaches.

                b. Supply Chain Vulnerabilities

                The proliferation of AI technologies has significantly transformed various sectors, yet it concurrently exposes critical vulnerabilities in supply chains. Congressional regulation of AI is imperative to mitigate risks that could undermine national security and economic stability. Firstly, AI systems often rely on complex, global supply chains that are susceptible to disruptions, whether from cyberattacks, geopolitical tensions, or natural disasters. Such vulnerabilities can lead to catastrophic failures in essential services and industries. For instance, an AI-driven logistics network may malfunction due to compromised data integrity, resulting in significant economic losses and jeopardizing public safety. Moreover, the rapid development of AI technologies outpaces existing regulatory frameworks, leaving gaps that malicious actors may exploit. By establishing a comprehensive regulatory framework, Congress can ensure that AI systems are designed with robust security measures, thereby enhancing the resilience of supply chains against potential threats. Lastly, federal oversight would facilitate collaboration between public and private sectors, fostering innovation while ensuring compliance with safety and ethical standards. As AI continues to reshape the economic landscape, proactive legislative action is crucial to safeguard the integrity and reliability of our supply chains, ultimately protecting both consumers and national interests.

                VII. The Case Against Congressional Regulation of AI

                A. Stifling Innovation

                i. Freedom of Expression

                AI regulation raises concerns about the First Amendment and freedom of expression. As a tool for communication and creativity, AI enables new forms of self-expression, and regulation could stifle this by limiting the development and sharing of ideas. The subjective nature of regulation risks censorship, suppressing certain viewpoints and undermining democratic discourse. Additionally, AI-generated content reflects diverse inputs, complicating responsibility and potentially leading to a chilling effect on creators. Instead of regulation, promoting ethical guidelines and self-regulation can protect free expression while addressing societal concerns.

                ii. Compliance Costs

                Regulating AI can impose heavy compliance costs that disproportionately burden smaller firms and startups, creating barriers to innovation. Large corporations, with more resources, can absorb these costs, consolidating market power and stifling competition. This limits diverse contributions to AI development, as only well-funded companies can navigate complex regulations. Moreover, rigid regulations risk becoming outdated, hindering creativity in a fast-evolving field. Instead of heavy regulation, fostering self-regulation and ethical practices among developers may better encourage innovation and competition. A lighter regulatory approach can promote a thriving, diverse AI ecosystem, ensuring broader access and societal benefits.

                VIII.  Alternatives To Regulation

                1. Instituting a Precautionary Principle

                The precautionary principle in AI offers a proactive approach to innovation, focusing on identifying and mitigating risks like bias, privacy violations, and ethical concerns before they become societal issues. Unlike rigid regulation, it encourages responsible development by embedding ethics and risk assessments into AI design. While the precautionary principle has led to technology bans in the past, a balanced approach is needed to ensure both innovation and safety in the face of rapidly evolving AI.

                B. Existing Soft Law Governance

                Soft Law, which includes non-binding agreements and principles, has become the dominant form of AI governance due to its flexibility. Between 2001 and 2019, 634 Soft Law programs were created, with 90% enacted from 2017 to 2019.[86] The public sector played a key role, crafting 36% of these programs, highlighting Soft Law as a viable alternative to strict AI regulation.[87]

                IX. Conclusion

                              AI is undeniably here to stay, evolving rapidly and presenting both opportunities and challenges. Like any emerging technology, it is imperfect, and the pursuit of excellence can lead to unintended consequences alongside existing issues. While alternatives to formal regulation exist, they often lack strong enforcement mechanisms and depend on ethical participation. In the long run, comprehensive regulation may be necessary to ensure responsible development and use of AI. Simply relying on the industry’s adherence to ethical standards is insufficient; a robust regulatory framework will be essential for navigating the complexities and risks posed by AI technologies.


                  [1] Google, What is Artificial Intelligence (AI)?,https://cloud.google.com/learn/what-is-artificial-intelligence#types-of-artificial-intelligence.

                  [2]  IBM, Understanding the different types of artificial intelligence, https://www.ibm.com/think/topics/artificial-intelligence-types.

                  [3] MICROSOFT, What Is Cloud Computing?,https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-cloud-computing. (last visited Apr. 20, 2025).

                  [4] KITEWORKS, Demystifying The US CLOUD Act, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-cloud-computing. (last visited Apr. 20, 2025).

                  [5] Bernard Marr, 15 Amazing Real-World Applications Of AI Everyone Should Know About, FORBES,https://www.forbes.com/sites/bernardmarr/2023/05/10/15-amazing-real-world-applications-of-ai-everyone-should-know-about/  (May 12, 2023, 4:45 PM) .

                  [6] Microsoft, What Is Facial Recognition? MICROSOFT, https://azure.microsoft.com/en-us/resources/cloud-computing-dictionary/what-is-face-recognition.

                  [7] Nandini Agarwal, Does Virtual Reality (VR) Or Augmented Reality (AR) Use Artificial Intelligence (AI)?, MEDIUM(Oct. 16, 2023),https://medium.com/@irusubacklink/does-virtual-reality-vr-or-augmented-reality-ar-use-artificial-intelligence-ai-c46089ce2614.

                  [8] Sitaram Sharma, How Is AI in Self-Driving Cars Advancing The Automotive Industry Towards Excellence? APPVENTUREZ, https://www.appventurez.com/blog/ai-in-self-driving-cars (Sept. 16, 2024).

                  [9] IBM, What Is Explainable AI?https://www.ibm.com/topics/explainable-ai.

                  [10] Alun Preece et al., Stakeholders In Explainable AI, ARVIX, https://ar5iv.labs.arxiv.org/html/1810.00184.

                  [11] Lumenova, Who Is Accountable for AI: The Role Of Stakeholder Engagement In Responsible AI (Sept. 10, 2024),https://www.lumenova.ai/blog/responsible-ai-accountability-stakeholder-engagement/.

                  [12] WHITECASE, AI Watch: Global Regulatory Tracker – United States (May 13, 2024),https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states.

                  [13] Press Release, White House, President Biden Issues Executive Order On Safe, Secure, And Trustworthy Artificial Intelligence (Oct. 30, 2023).

                  [14] U.S. Cybersecurity and Infrastructure Security Agency, 2023-2024 CISA Roadmap for AI (2023), https://www.cisa.gov/sites/default/files/2023-11/2023-2024_CISA-Roadmap-for-AI_508c.pdf.

                  [15] U.S. Department of Commerce, National Institute of Standards and Technology, AI Risk Management Framework (2023), https://www.nist.gov/itl/ai-risk-management-framework.

                  [16] U.S. Department of Commerce, National Institute of Standards and Technology, NIST AI 600-1: A Taxonomy and Framework for Artificial Intelligence (2023), https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf.

                  [17] European Commission, High-Level Summary on the Artificial Intelligence Act, https://artificialintelligenceact.eu/high-level-summary/.

                  [18] GDPR.eu, Article 1 – Subject-Matter and Objectives, https://gdpr-info.eu/art-1-gdpr/ (last visited Oct. 27, 2024).

                  [19] GDPR.eu, Article 5 – Principles Relating to Processing of Personal Data, https://gdpr-info.eu/art-5-gdpr/.

                  [20] European Data Protection Supervisor, Glossary: Data Minimization, https://www.edps.europa.eu/data-protection/data-protection/glossary/d_en.

                  [21] Data Protection Commission, Principles of Data Protection, https://www.dataprotection.ie/en/individuals/data-protection-basics/principles-data-protection

                  [22]General Data Protection Regulation,, Article 35 – Data protection impact assessment (2018), https://gdpr-info.eu/art-35-gdpr/.

                  [23] European Commission, The Digital Services Act, https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en (last visited Oct. 27, 2024).

                  [24] University of Illinois, China’s Personal Information Protection Law, U. OF ILL. (Oct. 18, 2021), https://www.vpaa.uillinois.edu/resources/policies/u_of_i_system_and_international_privacy_laws/china_s_personal_information_protection_law#:~:text=PIPL%20was%20enacted%20on%20August,effective%20on%20November%201%2C%202021.

                  [25] UC BERKELEY, China Privacy Law (Nov. 2021), https://oercs.berkeley.edu/privacy/international-privacy-laws/china-privacy-law.

                  [26] Personal Information Protection Law, art. 4, https://personalinformationprotectionlaw.com/PIPL/category/general-provisions/ (last visited Feb. 1, 2025).

                  [27] Id.

                  [28] UC Berkeley, supra note 25.

                  [29] Id.

                  [30] Personal Information Protection Law, art. 28, https://personalinformationprotectionlaw.com/PIPL/article-28/ (last visited Feb. 1, 2025).

                  [31] Personal Information Protection Law, art. 55, https://personalinformationprotectionlaw.com/PIPL/article-55/ (last visited Feb. 1, 2025).

                  [32] Personal Information Protection Law, art. 56, https://personalinformationprotectionlaw.com/PIPL/article-56/ (last visited Feb. 1, 2025).

                  [33] Id.

                  [34] Id.

                  [35] Id.

                  [36] Personal Information Protection Law, art. 39, https://personalinformationprotectionlaw.com/PIPL/article-39/ (last visited Feb. 1, 2025).

                  [37] Id.

                  [38] Personal Information Protection Law, art. 47, https://personalinformationprotectionlaw.com/PIPL/article-47/, (last visited Feb. 1, 2025).

                  [39] Personal Information Protection Law, art. 49, https://personalinformationprotectionlaw.com/PIPL/article-49/ (last visited Feb. 1, 2025).

                  [40] Personal Information Protection Law, art. 66, https://personalinformationprotectionlaw.com/PIPL/article-66/ (last visited Feb. 1, 2025).

                  [41] Zhonghua renmin gongheguo shuju anquan fa (中华人民共和国数据安全法) [Data Security Law of the People’s Republic of China] (promulgated by the Standing Comm. Nat’l People’s Congress, June. 10, 2021, effective Sept. 1, 20210), art. 1, 2021, http://www.npc.gov.cn/englishnpc/c2759/c23934/202112/t20211209_385109.html (China).

                  [42] Id.art. 21.

                  [43] Christian Perez, Why China’s New Data Security Law Is a Warning for the Future of Data Governance, FOREIGN POLICY (Jan. 28, 2022), https://foreignpolicy.com/2022/01/28/china-data-governance-security-law-privacy/.

                  [44] Zhonghua renmin gongheguo shuju anquan fa (中华人民共和国数据安全法) [Data Security Law of the People’s Republic of China] (promulgated by the Standing Comm. Nat’l People’s Congress, June. 10, 2021, effective Sept. 1, 20210), art. 35, 2021, http://www.npc.gov.cn/englishnpc/c2759/c23934/202112/t20211209_385109.html (China).

                  [45] Id. art. 36..

                  [46] Id.

                  [47] Isabella Bousquette, The AI Boom Is Here. The Cloud May Not Be Ready, WALL STREET JOURNAL (July 10, 2023 12:00 PM), https://www.wsj.com/articles/the-ai-boom-is-here-the-cloud-may-not-be-ready-1a51724d.

                  [48] Samuel Hammond, The Scramble for AI Computing Power, AMERICAN AFFAIRS https://americanaffairsjournal.org/2024/05/the-scramble-for-ai-computing-power/.

                  [49] John Ramos, Power demands of AI computing could put power grid under major strain, CBS (June 26, 2024 6:29 PM),https://www.cbsnews.com/sanfrancisco/news/power-demands-of-ai-computing-could-put-power-grid-under-major-strain/.

                  [50] GOLDMAN SACHS, AI is poised to drive 160% increase in data center power demand, (May 14, 2024), https://www.goldmansachs.com/insights/articles/AI-poised-to-drive-160-increase-in-power-demand.

                  [51] Id.

                  [52] Id.

                  [53] Id.

                  [54] Bill Scherlis, Weaknesses and Vulnerabilities in Modern AI: Integrity, Confidentiality, and Governance, SEI(Aug. 5, 2024),https://insights.sei.cmu.edu/blog/weaknesses-and-vulnerabilities-in-modern-ai-integrity-confidentiality-and-governance/.

                  [55] Patrick Sisson, Housing discrimination goes high tech, CURBED(Dec. 17, 2019 6:12 PM),https://archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination.

                  [56]  Id.

                  [57] Hoang Pham et al., How Will AI Impact Racial Disparities in Education?, STANFORD CTR. FOR RACIAL JUSTICE (June 29, 2024), https://law.stanford.edu/2024/06/29/how-will-ai-impact-racial-disparities-in-education/.​Stanford Report

                  [58] Id.

                  [59] Id.

                  [60] Id.

                  [61] Katharine Miller, Privacy in an AI Era: How Do We Protect Our Personal Information?, STANFORD (Mar. 18, 2024), https://hai.stanford.edu/news/privacy-ai-era-how-do-we-protect-our-personal-information.

                  [62] Id.

                  [63] Id.

                  [64] Northwestern University Buffett Institute for Global Affairs, The Rise of AI and Deepfake Technology (2023), https://buffett.northwestern.edu/documents/buffett-brief_the-rise-of-ai-and-deepfake-technology.pdf.

                  [65] Id.

                  [66] Sophie Maddocks, What We Keep Getting Wrong about Deepfakes Like the fake Taylor Swift Nudes, MSNBC (Jan. 29, 2024 5:56 PM), https://www.msnbc.com/opinion/msnbc-opinion/ai-deepfake-nudes-taylor-swift-x-image-sexual-abuse-rcna136221.

                  [67]Id.

                  [68] Haleluya Hadero, Teen Girls are Being Victimized by Deepfake Nudes. One Family is Pushing for More Protections, ASSOCIATED PRESS (Dec. 2, 2023 2:37 PM), https://apnews.com/article/deepfake-ai-nudes-teen-girls-legislation-b6f44be048b31fe0b430aeee1956ad38.

                  [69] Id.

                  [70] Halle Nelson, Taylor Swift and the Dangers of Deepfake Pornography, NSVRC (Feb. 7, 2024), https://www.nsvrc.org/blogs/feminism/taylor-swift-and-dangers-deepfake-pornography.

                  [71] Id.

                  [72] Hadero, supra note 68

                  [73] Nelson, supra note 70

                  [74] Shannon Bond, AI Fakes Raise Election Risks As Lawmakers And Tech Companies Scramble To Catch Up, NATIONAL PUBLIC RADIO(Feb. 8, 2024 5:00 AM), https://www.npr.org/2024/02/08/1229641751/ai-deepfakes-election-risks-lawmakers-tech-companies-artificial-intelligence.

                  [75]Id.

                  [76] Id.

                  [77] Id.

                  [78]Id.

                  [79] Id.

                  [80] Kalia Richardson, One Year After The Actors’ Strike, AI Remains A Persistent Threat, ROLLINGSTONE (July 14, 2024), https://www.rollingstone.com/tv-movies/tv-movie-features/actors-strike-sag-aftra-ai-one-year-later-1235059882/.

                  [81] Letter from Robert J. Kasunic, Associate Register of Copyrights and Director of Registration Policy and Practice U.S. Copyright Office to Van Lindberg, Taylor English Duma LLP (Feb. 21, 2023).

                  [82] Zachary Small, As Fight Over A.I. Artwork Unfolds, Judge Rejects Copyright Claim, N.Y. TIMES (Aug. 21, 2023), https://www.nytimes.com/2023/08/21/arts/design/copyright-ai-artwork.html.​

                  [83] Id.

                  [84] Id.

                  [85] Thaler v. Perlmutter, 130 F.4th 1039 (D.C. Cir. 2025)

                  [86] Carlos Ignacio Gutierrez & Gary Marchant, How Soft Law Is Used in AI Governance, BROOKINGS (May 27, 2021), https://www.brookings.edu/articles/how-soft-law-is-used-in-ai-governance/.​

                  [87] Id.