In our previous post, “A Primer on Artificial Intelligence and the Law in 2023,” we briefly discussed how the federal government is preparing for legislation and regulation regarding Artificial Intelligence (“AI”) through provisions of the National Artificial Intelligence Initiative Act of 2020 (the “Act”). While no comprehensive federal statute regulating AI has been signed into law, regulatory agencies must contend with the emerging technology under existing laws.

Executive Order 13859, “Maintaining American Leadership in Artificial Intelligence,” directed the Office of Management and Budget (“OMB”) to issue guidance regarding agency regulation and oversight of technologies and industries that use AI. The OMB issued that guidance in November 2020 as “Guidance for Regulation of Artificial Intelligence Applications,” which struck a fairly permissive tone, stating:

Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth. Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth. While narrowly tailored and evidence-based regulations that address specific and identifiable risks could provide an enabling environment for U.S. companies to maintain global competitiveness, agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America’s position as the global leader in AI innovation.

The Federal Trade Commission (“FTC”) issued guidance in April 2020, “Using Artificial Intelligence and Algorithms,” to help businesses better understand how to comply with existing regulations while using AI technology. The FTC noted that while AI technology has rapidly developed in recent years, automated decision-making practices have been regulated for many decades, such as under the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA). The FTC gave concrete guidance to follow when using AI technology, including:

  • don’t deceive consumers about how you use automated tools;
  • if you deny consumers something of value based on algorithmic decision-making, explain why;
  • if you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance;
  • if you might change the terms of a deal based on automated tools, make sure to tell consumers;
  • make sure that your AI models are validated and revalidated to ensure that they work as intended, and do not illegally discriminate; and
  • protect your algorithm from unauthorized use.

While no comprehensive federal regulatory scheme is in place regarding AI technologies, companies seeking to use AI technology in their businesses within the U.S. should be careful to comply with all existing laws and regulations applicable to AI and automated decision-making practices. One way to do that, as a matter of best practice, is to follow the Fair Information Practice Principles of Notice, Choice, and Consent, as is standard in any good privacy program.

Check back here for updates on AI regulatory developments as we continue to monitor this rapidly changing space. Taft’s Privacy and Data Security attorneys can assist in answering any questions or advising on how to manage, train, and mitigate risks associated with privacy, data management, and cybersecurity, as well as what to do after a breach or other cyberattack occurs. Stay tuned to our Taft Privacy and Data Security Insights or download our app for more news and information.