The ability of machines to learn and improve without explicit instructions has the potential to revolutionize many industries, but businesses that use AI must be aware of the legal and operational risks that come with it.

Key Legal and Operational Risks for Enterprise AI

By Louis Lehot, Foley & Lardner

6 min read · May 1, 2023


The adoption of artificial intelligence tools in the enterprise is set to accelerate as venture capitalists and large corporations alike deploy billions of dollars toward creating and releasing new foundation models. Following ChatGPT’s rapid uptake, arguably the fastest adoption of any new technology application in history, there is no turning back. Fasten your seatbelts and prepare for the disruption. The ability of machines to learn and improve without explicit instructions has the potential to revolutionize many industries, from healthcare to finance, up the courtroom steps, and ultimately into the judge’s chambers. But with great power comes great responsibility, and businesses that use AI must be aware of the legal and operational risks that come with it.

Potential for AI in the Enterprise

As evidenced by the billions of dollars in new capital being deployed to OpenAI, Anthropic, Stability AI, and other startups, new businesses that take large data pools and monetize them within foundation models offering premium services will disrupt consumer and enterprise applications forever. Creators of intellectual property have the opportunity to unlock untapped revenue streams. Developers and enterprises will pay for these premium services, potentially on a volumetric basis. At last, enterprises will be able to consume advanced analytics at scale.

Public markets reward the enterprises out in front (e.g., Nvidia) and punish those that look likely to be replaced by ChatGPT and its progeny (e.g., BuzzFeed).

A recent study published by MIT suggests that the deployment of AI in the enterprise is rapidly accelerating across functions and departments, landing and expanding at ever greater velocity. A recent PitchBook Data report estimated the value of the AI and machine learning market at $197.5 billion in end-user spending in 2022 and forecast that spending will double by 2025.

Amid the hype cycle, in a somewhat stunning development, an unexpected group of bedfellows spanning academia, the entrepreneur community, “big tech,” and old-economy corporate executives came together under the banner of the Future of Life Institute to publish an open letter and policy recommendations demanding that governments around the world mandate a moratorium requiring all AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4. The moratorium was demanded to protect against profound risks to society and humanity, given the lack of planning for, or management of, machines that no one, not even their creators, can understand, predict, or reliably control. The “pause” is intended to give AI labs and independent experts time to jointly develop and implement shared safety protocols for advanced AI design and development, protocols that would be rigorously audited and overseen by independent outside experts to ensure safety beyond a reasonable doubt. Simply put, the open letter demands that humanity design AI governance to ensure that humans control machines rather than ceding control to them.

While the open letter has yet to lead to governmental action, entrepreneurs, executives, and investors would do well to consider resolving ownership questions regarding underlying intellectual property and mitigating the obvious legal and operational risks of AI development and deployment as they design the roadmap forward.

Fundamental Questions of IP Ownership

Artificial intelligence engines are ingesting terabytes of data in the form of text, video, audio, and images (the “inputs”), running large language models and algorithms on this data, and then generating responses to queries (the “outputs”). Because the inputs inform the outputs, a debate is raging about who owns the intellectual property created by the AI engines. Who is the creator? Is it the author of the original content the AI engine used to train itself, or the designer of the engine that created the outputs? Another question before the courts is whether the outputs can benefit from copyright protection at all, given that there is no human creator. Does the AI training process infringe copyrights in other works? Do AI outputs infringe copyrights in other works? Is the practice of scraping copyrighted data and using it to train AI engines that create outputs a “fair use” protected under U.S. copyright law? Lawsuits from authors and artists demanding compensation are being filed across the United States, and the consequences of losing could be significant.

Legal Risks of AI

One of AI’s most significant legal risks is the potential for bias. AI systems are only as good as the data they are trained on; if that data is biased, the AI system will be biased as well. This can lead to outcomes that violate anti-discrimination laws. For example, an AI hiring system trained on historical data that reflects biased hiring practices may perpetuate that bias and discriminate against certain groups.
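To make the point concrete, here is a minimal, purely illustrative sketch (the data, field names, and group labels are hypothetical) of how a team might check whether historical hiring data already encodes a disparity that a model trained on it could learn to reproduce:

```python
from collections import defaultdict

# Hypothetical historical hiring records; groups and outcomes are illustrative.
historical_applicants = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": False},
    {"group": "B", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": False},
]

# Tally hires and totals per group.
counts = defaultdict(lambda: {"hired": 0, "total": 0})
for record in historical_applicants:
    counts[record["group"]]["total"] += 1
    counts[record["group"]]["hired"] += int(record["hired"])

# Compare historical selection rates; a large gap is a signal that a model
# trained on this data may simply learn and perpetuate the disparity.
for group, tally in counts.items():
    rate = tally["hired"] / tally["total"]
    print(f"Group {group}: historical selection rate {rate:.0%}")
```

A gap surfaced this early, before any model is trained, is far cheaper to address than a discrimination claim after deployment.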

Another legal risk of AI is the potential for violating privacy laws. AI systems often require access to large amounts of data. If that data includes personal information, businesses must comply with relevant privacy laws, such as the General Data Protection Regulation (GDPR) in the European Union or the California Consumer Privacy Act (CCPA) in the United States.

Because both legislation and regulation lag behind advancements in AI, it is also difficult for companies to prepare for the regulatory measures that will no doubt be coming. Those measures will raise significant legal concerns as they roll out over time, and companies developing and deploying AI should be prepared to make substantial shifts if necessary.

Operational Risks of AI

In addition to legal risks, the operational risks associated with AI must also be considered. One of the biggest is the potential for unexpected outcomes. AI systems can be complex and challenging to understand, and they may produce surprising results that harm a business’s operations or reputation. There have already been very public examples of well-known AI systems making poor recommendations or diagnoses, with significant consequences.

There is also the risk of user error. AI systems must still be operated and ultimately controlled by humans, so that risk will always exist. The results can range from an embarrassing situation for a company to substantial harm to its reputation, business operations, or customers.

Another operational risk is the potential for cyberattacks. AI systems can be vulnerable to attacks by hackers who may attempt to manipulate the system for their gain or to disrupt the business’s operations. This can lead to data breaches, financial losses, or other types of damage.

Mitigating Risks

Businesses should take several steps to mitigate AI’s legal and operational risks. First, they should ensure that their AI systems are designed with fairness, accountability, and transparency in mind. This may involve developing guidelines for the collection and use of data, testing the system for bias, and providing explanations for the decisions made by the system.
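As one illustration of what “testing the system for bias” can look like in practice, the sketch below applies the “four-fifths rule” heuristic, sometimes used in U.S. employment contexts, to a model’s selection decisions. The decisions, group labels, and 0.8 threshold are hypothetical, and this is only one of many possible tests, not a substitute for legal review:

```python
def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs -> {group: selection rate}."""
    totals, selected = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def four_fifths_check(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below 80% of the top group's rate."""
    rates = selection_rates(decisions)
    highest = max(rates.values())
    return {group: (rate / highest) >= threshold for group, rate in rates.items()}

# Hypothetical model decisions: (group, was the candidate advanced?)
model_decisions = [("A", True), ("A", True), ("A", False),
                   ("B", True), ("B", False), ("B", False)]
print(four_fifths_check(model_decisions))
# {'A': True, 'B': False} -> group B falls below the threshold and would
# warrant further review under this heuristic.
```

Running a check like this periodically, and documenting the results, also feeds naturally into the regular audits described below.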

Second, businesses should ensure they comply with relevant laws and regulations, such as anti-discrimination and privacy laws. This may involve conducting regular audits of the AI system to ensure compliance and providing training to employees on the legal and ethical implications of AI.

Third, businesses should develop robust cybersecurity measures to protect their AI systems from cyberattacks. This may involve encryption and multi-factor authentication, conducting regular vulnerability assessments, and training employees on cybersecurity best practices.

Finally, consideration should be given to licensing “inputs” and compensating the artists and authors whose works inform the outputs. Watch this space; it could change who the winners and losers are.

Businesses that use AI must be aware of the legal and operational risks that come with it. By designing their AI systems with fairness, accountability, and transparency in mind, complying with relevant laws and regulations, developing robust cybersecurity measures, and fairly compensating creators, businesses can mitigate these risks and reap the benefits of this powerful technology.

Louis Lehot is a partner and business lawyer with Foley & Lardner, based in the firm’s Silicon Valley, San Francisco and Los Angeles offices, where he is a member of the private equity and venture capital, mergers and acquisitions and transactions practices, as well as the technology team.
