UL Solutions introduces a new standard to fill the AI regulation gap: ‘Innovation without safety is failure’
- In today’s CEO Daily: UL Solutions CEO Jennifer Scanlon speaks with Diane Brady about the company’s new UL 3115 standard
- The big leadership story: How to earn $18.4 million without working a single day
- The markets: Conditions are tough right now
- Plus: All the latest news and office chatter.
(SeaPRwire) – Good morning. For more than 120 years, UL has affixed its mark to products ranging from string lights to toaster cords, conveying a simple promise: This product won’t cause harm. Last week, the $3 billion-a-year safety science firm issued a new certification for AI-integrated products for the first time. As UL Solutions CEO Jennifer Scanlon told me: “Innovation without safety amounts to failure.”
Hardly any technology has advanced as swiftly with so little regulatory oversight as AI. (The fragmented patchwork of emerging state laws only adds to the confusion.) This week, attention is focused on OpenClaw, an autonomous virtual agent that has sparked a new craze in China. It received a nod from Nvidia CEO Jensen Huang during his developers conference this week, where he unveiled NemoClaw and dubbed the OpenClaw framework “the next ChatGPT.”
Can private-sector safety standards achieve what Washington has not: establishing guardrails for fast-developing technologies with potentially far-reaching impacts? The UL mark already appears on roughly 22 billion products globally each year. This latest standard, UL 3115, assesses whether an AI-powered product is safe, resilient, and effectively governed, with “human control” maintained throughout the product’s lifecycle. “Regardless of whether there’s government regulation on this, our clients are approaching us because they require broader protections and guarantees,” Scanlon told me. “They’re eager to have at least one standard they can follow, which gives them confidence in how they’re positioning themselves ahead of their customers.”
UL’s core strength lies in functional safety. As Scanlon puts it: “When you turn on the radio in your car, you don’t want the brakes to suddenly engage. So how is that embedded software being tested and validated? They’re integrating AI into toys. How can we ensure those toys are safe for children?”
That’s why UL’s AI Center of Excellence set out to apply its safety protocols to the realm of AI-integrated physical products. “We begin with an investigative framework, which is a preliminary step toward ensuring safety. This involves our engineers and scientists collaborating with clients to understand their concerns and perceived challenges—then approaching these from a scientific angle: What other risks should you be aware of?”
“In the case of AI-embedded products, they started examining: How transparent is the algorithm? What degree of bias is present in these algorithms? How reliable is the training data? And if some of that training data is inaccurate, how do you remove it from the learning model? What form of human oversight and verification—those critical final checks—is in place? What are those processes?”
So far, two products have received AI certification: Qcells’ Energy Management System, an AI-powered control engine for data centers, and the Omniconn Platform 4.0, a smart building solution. It’s one piece of the puzzle in a world where leaders strive to balance speed with safety.
Contact CEO Daily via Diane Brady at diane.brady@.com
This article is provided by a third-party content provider. SeaPRwire (https://www.seaprwire.com/) makes no warranties or representations regarding its content.