Does Artificial Intelligence Need a Warning Label?

Warning labels are everywhere, from the serious “Beware of Dog” sign on a neighbor’s fence or the skull-and-crossbones on a carton of some toxic substance to the ironic notice never to iron or steam fabrics while they are being worn – and everything in between.

If the recent announcement from the U.S. Department of Commerce on new guidance marking the 270th day since the White House issued an executive order on AI is any indication, artificial intelligence needs a warning label of its own.

The narrative around AI vacillates between breathless anticipation of the brave new technological world to come and the inevitable robot apocalypse that will destroy humanity. But the reality is that artificial intelligence is elbowing its way into the global economy on multiple levels, from natural language processing to business intelligence to healthcare and beyond. And governments are hurrying to catch up with oversight.

The Commerce Department release celebrates actions taken since the administration laid out oversight objectives and guidance for using AI, but parts of it read like the pitch deck of a science fiction film looking for studio investors.

Imagine you’re settling into your seat at the multiplex, bucket of popcorn and cold drink at the ready, as the coming attractions begin. The lights dim and a deep baritone voice speaks:

“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software.”

But wait, a hero emerges with the answer.

“These guidance documents and testing platform will inform software creators about these unique risks and help them develop ways to mitigate those risks while supporting innovation.”

Sounds like the foundation for a good techno-thriller, maybe even a blockbuster. In reality, those are the words of Under Secretary of Commerce for Standards and Technology and National Institute of Standards and Technology Director Laurie E. Locascio in the Commerce Department release.

One of the big targets is dual-use foundation models, tools that are used across a broad range of tasks and are called dual-use because of their potential for both benefit and harm. This Dr. Jekyll and Mr. AI twist is a classic plot device, but its familiarity does not make it any less unsettling.

The guidance also includes a section on managing the risks of generative AI, including the use of chatbots or virtual assistants for customer service issues. It lists a full dozen risks associated with generative AI, including the production of misinformation, disinformation, hate speech and other harmful content, as well as lowered barriers to cybersecurity attacks.

If this all sounds a little ominous, that’s because it is.

Industry and governments have made great strides in understanding the risks of the widespread use of AI, and the efforts outlined in the Commerce Department release are a solid beginning. But it’s hard not to hear the menacing music and breathless narration of that movie preview, in which AI threatens to severely disrupt business at the very least and, at worst, upend economies around the world.

If nothing else, the Commerce report shows that artificial intelligence needs some rules and regulations, along with a cooperative effort among governments, stakeholders and the public. A warning label wouldn’t be a bad idea either.