Enforcement activity surged in 2025, with landmark judgments and settlements—some reaching eight and nine figures—targeting issues such as ad tracking, analytics, wiretapping, text messaging, data subject rights, and sensitive data collection. This aggressive trend shows no signs of slowing as we move into 2026.

Taft continues to help its clients find the right answers, in their specific context, for addressing these risks. Building on our year-end post, here are some issues you may want to consider as you take on the new year.

Continue Reading Your 2026 Privacy, Security, and Artificial Intelligence Checklist

As 2025 comes to a close, we asked several members of Taft’s Privacy and Data Security practice group to share their thoughts on what should be on a client’s “wish list” for the holiday season, or on a list of resolutions for 2026.

Here are their thoughts for businesses looking not only to meet the requirements of new laws and mitigate existing risks, but also to seize the opportunity to maximize the impact of technology and unleash the power in their data.

Continue Reading Closing Out 2025: Key Privacy & Data Security Updates from Taft

Colorado legislators have approved a five-month delay for the implementation of the Colorado Artificial Intelligence Act (the Act), moving the start date from Feb. 1, 2026, to June 30, 2026.

The decision follows a special legislative session called amid concerns over compliance costs, industry lobbying, and fiscal impacts on businesses and the state. Colorado Budget Director Mark Ferrandino indicated that the law could cost the state alone between $2.5 million and $5 million annually to implement, and Colorado Governor Jared Polis suggested the amount could reach $6 million per year. The Act, originally designed to address risks of algorithmic discrimination in sectors such as employment, housing, and lending, will now give both lawmakers and businesses more time to clarify its provisions and prepare compliance programs.

Continue Reading Colorado Gives Businesses Breathing Room Before AI Act Takes Effect

Early on July 1, the U.S. Senate voted 99-1 to halt an effort to impose a 10-year moratorium on state regulation of artificial intelligence. The vote removed the AI provision from President Trump’s “Big, Beautiful Bill,” which had evolved from a full moratorium on state AI regulation for the next decade to its most recent iteration, which would have required states to adopt the ban in order to receive federal broadband funding over the next five years.

Yesterday, Sen. Marsha Blackburn of Tennessee and Sen. Ted Cruz of Texas attempted to revise the AI ban to address current regulations. According to media reporting, efforts toward banning state AI regulation broke down amid concerns that the language was overly broad and could adversely impact existing laws concerning privacy, consumer protection, and child safety.

Continue Reading US States Can (And Will) Continue To Regulate Artificial Intelligence … for Now

Last week, I had the privilege to attend one of the Midwest’s largest artificial intelligence conferences dedicated to AI developers, users, and enthusiasts: Cincy AI Week. During the three-day event, which brought together over 950 local professionals, I spoke on a panel entitled “Managing Risk in the Age of AI and Automation.”

Here are six important observations I shared during that panel:

Continue Reading Cybersecurity in the Era of Generative and Agentic AI: Six Observations

The Google Threat Intelligence Group revealed a chilling reality: nation-states are weaponizing AI tools like Gemini for sophisticated cyberattacks. This new frontier of AI-powered fraud demands immediate attention from business leaders and general counsel, who stand at the confluence of technology, data security, and governance.

Recent Incidents and the Evolving Sophistication of These Attacks

Generative AI, like the tools used by these cybercriminals, can create highly convincing text, images, voice recordings, and even video interactions that are nearly impossible to distinguish from genuine content. In the report Adversarial Misuse of Generative AI, the Google Threat Intelligence Group explains how actors from more than 20 countries have used Google’s generative AI tool, Gemini, for nefarious purposes, including cyber espionage, destructive computer network attacks, and attempts to influence online audiences in a deceptive, coordinated manner.

Continue Reading AI-Powered Fraud: Immediate Action Steps to Protect Companies from Next-Generation Payment Scams

Hard to believe, but 2025 will be here before you know it. And what goes best with a new year? A countdown list!

Last week, I spoke at the Dayton Bar Association’s Corporate Counsel Section on the Top 10 legal technology issues that in-house counsel should have on their radar for 2025.

Continue Reading Top 10 Technology Issues to Watch for in 2025

Special thanks to Taft summer associate Tanner Wilburn for his significant contributions to this post. 

On July 12, 2024, the European Union’s Artificial Intelligence Act (AI Act) was published in the EU Official Journal.

This comprehensive legislation establishes the first risk-based regulatory framework for AI systems, with far-reaching implications for businesses using AI. The AI Act is effective August 2, 2024, with the enforcement of the majority of its provisions commencing on August 2, 2026.

Continue Reading The EU AI Act – What Businesses Need to Know

Artificial intelligence, referred to as “AI” for short, has had an outsized impact on nearly every aspect of human existence. If that sounds like an overstatement, it’s not: machine learning systems and generative AI tools have now been integrated into sectors as varied as healthcare, government services, industry, and education. In 2023, more than 50% of US companies reported using AI for cybersecurity and fraud management, and 97% of business owners expressed enthusiasm that AI platforms like ChatGPT will help their businesses. Several cities and municipalities have adopted protocols for how local government may use and rely upon AI as part of day-to-day duties.

Unsurprisingly, the law has lagged well behind the impressive speed of AI’s ballooning technological development. This notwithstanding, various governmental agencies, legislative bodies, and courts have begun to assemble a regulatory regime that may help answer the million-dollar question in this brave new world: who, or what, is liable when AI goes wrong?

Continue Reading Artificial Intelligence, Real Liability: Who’s on the hook when things go wrong?

On October 6, 2023, Snap Inc. and Snap Group Ltd. (collectively, “Snap”) received a preliminary enforcement notice from the U.K. Information Commissioner’s Office (ICO) due to a potential failure to properly assess the privacy risks posed by its generative AI chatbot, My AI.

Continue Reading Snap Receives Preliminary Enforcement Notice Related to Privacy Risks Posed by AI Chatbot