Evolving AI Regulation and the Legal Challenges of Its Use

It’s been almost one year since the release of ChatGPT, the spark that ignited the market-transforming AI craze. And while governments scramble to develop regulations to make AI safer, more secure, and trustworthy, leaders of technology behemoths, including Alphabet CEO Sundar Pichai, Microsoft president Brad Smith, and Sam Altman (CEO of OpenAI, then Microsoft, then OpenAI once again), are throwing their support behind AI regulation, traditionally one of the most significant headwinds and headaches for any new technology.

So why would tech leaders support regulation? At its core, a set of well-thought-out rules would give AI firms the confidence to invest in products they know won’t be regulated out of existence.

Cooperating with regulators also gives companies a voice in shaping the rules and makes a single, cohesive framework more likely than a patchwork of disparate AI laws across various states. According to Darrell West, a senior fellow at the Brookings Institution’s Center for Technology Innovation, “The worst-case scenario for businesses is having 50 different sets of rules at the state level. It would be expensive to devise different software for Idaho as opposed to Illinois.” Put another way, a known regulatory landscape is invariably easier to navigate than an environment of “known unknowns.”

The proliferation of AI governance efforts this year, at nearly every level, is a natural extension of the rapid deployment of AI and the industry’s reorientation around it. The Biden administration’s executive order is an opening round, not meant to be comprehensive or final. Still, it sets a significant policy agenda as other bodies, including Congress and aligned international partners, consider next steps.

The European Union is already well on its way to crafting AI legislation. The EU first proposed its AI rules in April 2021, taking a tiered approach based on the level of risk a technology poses: the greater the risk, the stricter the regulation. High-risk applications would need to be assessed by regulators before hitting the market. Foundation models that power generative AI systems would need to be registered in an EU database. Other forms of AI would need to meet transparency requirements that help users understand their capabilities.

China has also introduced its own AI regulations. In August, the country finalized rules requiring companies operating China-facing AI platforms to submit their algorithms for review by the appropriate government entities. Platforms designed for use outside China, however, are not subject to the same rules.

Generative artificial intelligence raises unique legal questions about how data is used and how content will be regulated.

Generative AI tools, including large language models (LLMs) such as ChatGPT and image-generation software, are powerful business tools. Still, they raise questions about how data is used in AI models and how the law applies to things like text creation or computer-generated images; most lawsuits about generative AI center on data use, copyright infringement, and privacy concerns. It’s important to note that, according to a recent US district court decision and the US Copyright Office, content generated entirely by AI, without human authorship, is not copyrightable. AI and AI-generated images were also among the major issues behind the recent SAG-AFTRA strike. One can reasonably ask whether these decisions contributed, at least in part, to the studios’ eventual concessions on the use of AI.

If you’re a software company, your use of AI in creating your solutions will inevitably come under heavy scrutiny from anyone conducting due diligence, whether in the context of funding, M&A, or customer RFPs. How confident would you be answering the question: how much of your product is not protected by copyright?

Where generative AI fits into the current legal landscape.

Generative AI may be new to the market, but existing laws have substantial implications for its use, and courts are now working out how those laws should apply. There are infringement and right-of-use issues, uncertainty about the ownership of AI-generated works, and questions about whether unlicensed content may be used to train algorithms, and whether users may direct these tools to invoke other creators’ copyrighted and trademarked works by name without their permission.

Litigation is already happening. In a case filed at the end of 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms, alleging that the platforms used the artists’ original works without a license to train their AI in those artists’ styles. This allowed users to generate works that may be insufficiently transformative of the artists’ existing, protected works and that would, as a result, be unauthorized derivative works. Considerable infringement penalties may apply if a court finds the AI works unauthorized and derivative.

Similar cases filed in 2023 claim that companies trained AI tools using “data lakes” containing thousands, or even millions, of unlicensed works. Getty, an image-licensing company, filed a lawsuit against the creators of Stable Diffusion, alleging improper use of its photos in violation of the copyright and trademark rights in its watermarked photograph collection.

In these cases, the legal system is being asked to clarify what constitutes a “derivative work” under intellectual property law. The outcome may well turn on the interpretation of the fair use doctrine, which allows copyrighted work to be used without the owner’s permission “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,” and for transformative uses of the copyrighted material in a manner for which it was not intended.

This isn’t the first time technology and copyright law have collided. In Authors Guild v. Google, Google successfully defended its scanning of millions of books to build a searchable database, arguing that the copying was a transformative use. For the time being, that decision remains good precedent.

Our take

Businesses should evaluate their transaction terms to ensure they are adequately protected, particularly where the other party uses generative AI or LLMs. To start, they should ask generative AI platforms for terms of service confirming that the platform has appropriately licensed the training data fed to its algorithms. They should also demand broad indemnification against intellectual property infringement arising from the AI company’s failure to properly license its input data.

They should also consider adding disclosures to their vendor, customer, and even non-disclosure agreements whenever either party uses generative AI or LLMs, so that both sides understand and protect the intellectual property rights at stake. Confidentiality provisions in vendor and customer contracts can include AI-specific language prohibiting receiving parties from entering confidential information into the text prompts of AI tools.

Companies that use generative AI, or that work with vendors that do, should keep their legal counsel abreast of the scope and nature of that use, as the law will continue to evolve rapidly. If you need help or have questions, please drop us a line at +1 212 545 8022 or visit our website at www.rooney.law.

 

Read more insights from the Rooney Law team here.

 

© 2023 Rooney Law. All rights reserved. Rooney Law PC is an international corporate law firm. In accordance with the common terminology used in professional service organizations, reference to a “partner” means a person who is a partner or equivalent in such a law firm. Similarly, reference to an “office” means an office of any such law firm. This may qualify as “Attorney Advertising” requiring notice in some jurisdictions. Prior results do not guarantee a similar outcome. Nothing herein shall create an attorney-client relationship between the reader and Rooney Law PC.
