Although Artificial Intelligence (AI) has promised to transform not just the workplace but human society as a whole, introducing AI tools into any workplace without adequate safeguards brings considerable risk. AI is still an emerging technology, so the legal and regulatory landscape remains unsettled, and its use can create unintended consequences.
The National Institute of Standards and Technology (NIST) is correct to state in its Artificial Intelligence Risk Management Framework (NIST, 2023) that “AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur.” Very large businesses have been caught unawares by these risks, so every organization, large or small, should consider implementing a policy and procedure that addresses these issues before deploying AI.
For the purposes of this review, the definition of AI is “applications or other software tools that simulate human intelligence to generate responses, work products, or perform tasks.”
Privacy and security
AI tools present a risk when sensitive information is collected and stored. Information stolen from AI systems gives cybercriminals a means to socially engineer or directly attack individuals, so an AI system that holds sensitive information must be safeguarded against compromise and unauthorized access.
In May 2023, Amazon settled with the Federal Trade Commission after it was discovered, among other things, that Ring home camera video footage was being used to train AI algorithms without the knowledge or consent of Ring consumers. In addition, Amazon was accused of sharing this footage with employees and contractors who were not authorized to view the content. See: Decision and Order, Ring LLC, FTC Case No. 1:23-cv-1549, FTC Act, 15 U.S.C. § 53(b).
However, even a secure AI system can present legal risks. Google faces a class action lawsuit over the use of its flagship AI program, Gemini. The complaint accuses Google of “secretly” turning on Gemini AI to track consumers’ private communications without their consent. According to the complaint, Google, through Gemini AI, has access to “the entire history of its users’ private communications, including literally every email and attachment sent and received in their Gmail accounts.” See: Thele v. Google, No. 5:25-cv-09704 (Northern District of California filed November 11, 2025).
Deployment
The deployment of any algorithmic software, including AI, can present significant risks to any business. One of the most famous pre-AI algorithmic disasters happened to Knight Capital in 2012, when its trading software caused a $440 million loss to the company in 30 minutes. That loss was approximately three times Knight Capital’s annual earnings.
The problem was a fairly routine oversight: new code was not deployed to all production servers. Because the mistake was not caught when the code was activated, Knight Capital inadvertently executed trades totaling roughly 397 million shares during the first morning the new code was live. The resulting loss ultimately led to Knight Capital being acquired by a rival.
More recently, McDonald’s was forced to scuttle its drive-through AI implementation in 2024 due to serious problems with deployment. The IBM AI system, Automated Order Taker, had a high failure rate that frustrated customers, was widely reported, and required significant human intervention to correct. The high-profile shortfall at McDonald’s is hardly unique; in general, AI appears to deliver a low return on investment.
In fact, failure to see a return on AI investment appears to be extremely, maybe even startlingly, common. The Massachusetts Institute of Technology (MIT) reported that as many as “95% of organizations are getting zero return” on their AI investment. MIT attributes some of the blame to poor contextual learning and misalignment with day-to-day operations. See: “The GenAI Divide: State of AI in Business 2025,” MIT NANDA (2025).
Oversight
Another risk with AI-driven technology arises when the AI is deployed effectively at first but, over time, causes losses because of data changes the technology does not account for. To be sure, this problem also exists with natural persons. However, the nature of AI machine learning means serious trouble can develop when organizations assume the AI is functioning correctly while the system is, in fact, making serious errors. The term for this situation is “concept drift”: the data the model relies on changes in ways that invalidate the model’s predictions.
The clearest example of this is Zillow Offers, through which the real estate giant was purchasing and reselling homes directly. The AI appeared to be working appropriately in the early stages of the program. However, beginning in 2021, the automated system bought homes at inflated prices on a massive scale. Zillow was abruptly forced to exit the business in 2021, with losses estimated at $1 billion and an approximately 25% reduction in its workforce.
Zillow’s model could not adjust to changes in the real estate market (probably brought about by the COVID-19 restrictions) and continued to assume price growth in areas where market prices were actually falling. This failure to adjust or account for new variables is an extremely common problem with machine-learning AI, and ongoing performance monitoring needs to be part of any AI deployment. In this respect, managing AI is not much different from evaluating human performance in the workplace.
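To make the monitoring point concrete, below is a minimal sketch of one way an organization might watch for concept drift: compare the model's recent prediction error against the error observed when the model was validated, and flag the model for human review when the gap grows too large. The function, threshold, and figures are hypothetical illustrations (written here in Python); they are not drawn from Zillow's system or any vendor's product.

# Minimal, hypothetical drift check. Names, threshold, and numbers are
# illustrative only; they do not describe Zillow's system or any vendor tool.
from statistics import mean

def drift_suspected(baseline_errors, recent_errors, tolerance=1.5):
    """Return True when recent prediction error (e.g., |predicted price - sale price|
    as a fraction of the sale price) has grown well beyond the validation-era error."""
    return mean(recent_errors) > tolerance * mean(baseline_errors)

# Example: average pricing error was about 4% at launch but about 10% on recent sales.
baseline = [0.03, 0.04, 0.05, 0.04]
recent = [0.09, 0.11, 0.08, 0.12]

if drift_suspected(baseline, recent):
    print("Concept drift suspected: pause automated decisions and escalate for human review.")

Even a check this simple, run on a regular schedule, gives the organization a defined trigger for human intervention rather than an open-ended assumption that the AI is still working as intended.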
Intellectual property rights
Questions over the use and misuse of copyrighted works to train AI systems are among the most heavily litigated AI issues today. These lawsuits are complex, with no easy answer as to when the use of copyrighted works constitutes intellectual property infringement.
In 2025, one federal judge ruled that the use of lawfully acquired books for AI training was “quintessentially transformative” and thus protected by the fair use doctrine, but that any acquisition of copyrighted materials that was not legal (such as the use of pirated books) was not protected and therefore constituted infringement. Because statutory damages can be as much as $150,000 per infringed work, the size of the potential exposure drove the parties into settlement negotiations.
This area of the law is very dynamic and subject to significant changes in the coming years. See: Bartz v. Anthropic PBC, No. 3:24-cv-05417 (Northern District of California filed August 19, 2024).
Regulations
As expected, U.S. state legislatures are rushing to regulate the use of AI in the marketplace. As is common with these regulations, a state’s political leaning bears little relationship to whether regulatory legislation is on the books. Just a few samples of the state laws regulating the use of AI include:
Arkansas
Arkansas law states that the person who provides any inputs or directives to a generative AI tool is the owner of the generated content, provided the content does not infringe on existing copyrights or IP rights.
California
California explicitly states that a defendant who developed, modified, or used AI alleged to have caused harm to a plaintiff cannot assert the AI’s autonomous actions as a defense.
Maine
Maine passed a law prohibiting any person from using an AI chatbot to engage with a consumer in a manner that may mislead or deceive a reasonable consumer into believing they are engaging with a human. The law provides a safe harbor defense if the consumer is notified in a clear and conspicuous manner that they are not engaging with a human being.
Montana
Montana made it an unfair trade practice to intentionally publish, perform, distribute, transmit, or make available to the public a digital voice depiction or digital visual depiction for commercial use without authorization.
Some of these laws are part of a state’s broader privacy statutes and impose additional requirements for, and restrictions on, data processing, including processing by AI systems.
If using AI, be prepared
Before purchasing and deploying an AI solution, organizations should have a policy and procedure in place to help ensure key risks are managed. Some of the considerations in a policy should include:
- If sensitive, confidential, or statutorily protected information will be processed and/or stored by the AI system, what controls are in place to safeguard the data from unauthorized access?
- What quality assurance and continuous testing controls are in place to ensure the AI is ready to be deployed, and what monitoring will verify that the AI continues to function as intended?
- If applicable, what are the controls to avoid the use or generation of content that could violate or misuse the intellectual property rights of another?
- What is the organization’s exit strategy from the AI system if its use is no longer acceptable?
Controlling risk in AI systems will be a major discipline for organizations in the coming years, with AI technology still maturing and some of its initial promises not meeting expectations.
Organizations should not ignore the potential benefits that AI solutions may provide, but they should also be aware that the use of AI in the workplace carries very real risks that should be studied before, during, and after implementation.