I lead a lab where engineers strive to break my life’s work. It’s the only way to prepare for quantum threats

The first time I gave my credit card to a security lab, it came back compromised: not physically damaged, but broken into. Within 10 minutes, the engineers had uncovered my PIN.

This happened in the early 1990s, when I was a young engineer starting an internship at a company that helped build the smart card industry. I trusted my card was secure and the system worked. But watching strangers casually extract information meant to stay secret and protected was a shock. It also marked the moment I realized how fragile security can be, and the devastating impact breaches could have on individuals, global businesses, and governments.

Most people assume security is about creating something unbreakable. In reality, it’s about understanding exactly how something fails, under what conditions, and how quickly. That’s why today, I run labs where engineers are paid to attack the very chips my company designs. They measure power fluctuations, inject electromagnetic signals, fire lasers, and strip away layers of silicon. Their job is to act like criminals and hostile nation-states on purpose—because the only honest way to build trust is to first try to destroy it.

To someone outside the security field, this approach sounds counterintuitive. Why spend years designing secure hardware just to invite people to tear it apart? The answer is simple: Trust that’s never been tested isn’t trust—it’s an assumption. Assumptions fail quietly at first, and they do so at the worst possible moment.

Over the past three decades, I’ve watched secure chips evolve from a specialized technology into invisible infrastructure. Early in my career, most of my work focused on payment cards. Convincing banks and payment networks that chips were safer than magnetic stripes wasn’t easy. Back then, there were fears about surveillance and tracking. Few recognized these chips were becoming digital passports—proving identity, authenticating devices, and deciding what could and couldn’t be trusted on a network.

Today, secure chips sit quietly inside credit cards, smartphones, cars, medical devices, home routers, industrial systems, and national infrastructure. Most people never notice them, which is often seen as a sign of success. But that invisibility also creates risk. When security fades from view, it’s easy to forget it needs to keep evolving.

At its core, a secure chip does one key thing: It protects a secret—a cryptographic identity that proves a device is genuine. All other security measures build on this foundation. When a phone unlocks, a car talks to a charging station, a medical sensor sends data to a hospital, or a software update reaches a device in the field, all these actions depend on that secret staying hidden.
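The role of that hidden secret can be sketched with a toy challenge-response exchange (my illustration in Python, not any real chip’s API): the device proves it knows its key without ever transmitting it, because only someone holding the same secret can compute the matching response.

```python
# Toy challenge-response sketch. All names here are illustrative;
# real secure elements implement this in hardware, not Python.
import hmac
import hashlib
import secrets

DEVICE_SECRET = secrets.token_bytes(32)  # provisioned into the chip at manufacture

def device_respond(challenge: bytes) -> bytes:
    """The chip proves it knows DEVICE_SECRET without revealing it."""
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def verifier_check(challenge: bytes, response: bytes, known_secret: bytes) -> bool:
    """The backend recomputes the expected response and compares in constant time."""
    expected = hmac.new(known_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)  # fresh nonce for every authentication
response = device_respond(challenge)
assert verifier_check(challenge, response, DEVICE_SECRET)
```

A fresh random challenge each time is what stops an eavesdropper from simply replaying an old response.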

The challenge is that chips don’t just store secrets—they use them. They calculate, communicate, and respond. The moment they do this, they start leaking information. Not because they’re poorly designed, but because physics can’t be negotiated. Power use shifts, electromagnetic emissions change, timing varies. With the right tools and expertise, these signals can be measured and interpreted.
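A toy example of the kind of leak this creates (my illustration, not an attack from our labs): a naive PIN check that exits at the first wrong byte runs longer the more leading bytes are correct, and that timing difference lets an attacker recover a secret one byte at a time.

```python
# Illustrative timing side channel: the step count stands in for
# execution time, which an attacker would measure externally.
def naive_compare(secret: bytes, guess: bytes) -> tuple[bool, int]:
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps  # early exit leaks the mismatch position
    return len(secret) == len(guess), steps

secret = b"4921"
_, steps_far = naive_compare(secret, b"0000")    # wrong at byte 1 -> fast
_, steps_close = naive_compare(secret, b"4900")  # wrong at byte 3 -> slower
assert steps_close > steps_far  # the duration itself reveals progress
```

Constant-time primitives, such as Python’s `hmac.compare_digest`, exist precisely to close this channel by doing the same amount of work regardless of where the guess goes wrong.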

This is what happens in our attack labs every day. Engineers “listen” to chips much like an electricity provider can guess your daily routine from power usage. They stress-test devices until they behave unexpectedly. They introduce faults and observe how chips respond. From these observations, they learn how attackers think, where information escapes, and how defenses need to be redesigned.

Quantum computing enters this picture without drama or science fiction. It doesn’t change what attackers want—they still seek secrets. What it changes is how fast they can get them. Problems that would take classical computers thousands of years could be solved in minutes or seconds once sufficient quantum power exists. The target remains the same; the timeline disappears.
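The arithmetic behind that shift can be sketched with standard textbook figures (a back-of-envelope illustration, not a prediction of real hardware timelines): Grover’s algorithm quadratically speeds up brute-force key search, effectively halving a symmetric key’s strength, while Shor’s algorithm breaks RSA and elliptic-curve cryptography outright.

```python
# Back-of-envelope comparison of classical vs. quantum key search.
# Grover's algorithm finds a key in an unsorted space of 2^n candidates
# in roughly 2^(n/2) steps, versus 2^n for classical exhaustive search.

def brute_force_steps(key_bits: int) -> int:
    return 2 ** key_bits          # classical exhaustive search

def grover_steps(key_bits: int) -> int:
    return 2 ** (key_bits // 2)   # quantum search, quadratic speedup

# A 128-bit key drops to the effective strength of a 64-bit one,
# which is why 256-bit symmetric keys are the post-quantum recommendation:
assert grover_steps(128) == brute_force_steps(64)
assert grover_steps(256) == brute_force_steps(128)
```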

This is why static security fails. Any system designed to be secure once and left untouched is already aging toward obsolescence. If a system is never attacked, it will eventually fail—because the world around it doesn’t stand still. Attack techniques evolve, tools get cheaper and more powerful (especially with AI), and knowledge of successful attacks spreads globally, encouraging others to copy them.

Many organizations make the same mistake: They assume they’ll see threats coming and wait for visible breaches or public incidents before acting. With quantum, this logic breaks down. The first groups with meaningful quantum capability won’t announce it—they’ll use it quietly. In fact, this is already happening with Harvest Now, Decrypt Later (HNDL) attacks, in which large amounts of encrypted data are collected today to be decrypted once quantum computers mature. By the time attacks become obvious, the damage is done.

That reality is why governments and regulators are acting now. Across industries, requirements are emerging for systems to become quantum-resilient within set timelines. This isn’t driven by theory or hype—it’s driven by the fact that updating cryptography, hardware, and infrastructure takes years, while exploiting weaknesses takes moments.

When I walk through our labs today, what stands out most isn’t the sophistication of the tools, but the discipline of the process. Access is tightly controlled, engineers are vetted and audited, and every experiment is documented. This isn’t curiosity-driven hacking—it’s structured, repeatable testing to find weaknesses early, while there’s still time to fix them. Every successful attack feeds into a stronger design.

This is what leaders, system owners, and policymakers need to understand: Security doesn’t fail suddenly. It fails quietly, long before anyone notices. Preparing for quantum threats isn’t about predicting the exact moment of a breakthrough—it’s about accepting that once it happens, there’s no grace period. The only responsible approach is to assume your systems will be attacked and ensure that happens under controlled conditions, before someone else chooses the timing for you.