AI’s Educational Role Depends on Human Guidance


As graduation ceremonies celebrate the promise of a new generation, a key question arises: Will AI render their education meaningless?

Many CEOs believe so. They foresee a future in which AI replaces engineers, doctors, and educators. Meta’s CEO recently suggested AI will replace the mid-level engineers responsible for the company’s code. NVIDIA’s CEO has even proclaimed coding obsolete.

While Bill Gates acknowledges that AI’s rapid development is “profound and even a little bit scary,” he also praises its potential to democratize access to elite knowledge. He, too, envisions AI replacing coders, doctors, and teachers, offering free, high-quality medical advice and tutoring.

Despite the hype, AI cannot “think” independently or act without human input—at least for now. Indeed, whether AI enhances or diminishes learning depends on a critical decision: Will we allow AI to merely predict patterns, or will we require it to explain, justify, and remain anchored in the laws of our world?

AI requires human judgment, not only to oversee its output but also to integrate scientific guidelines that provide direction, grounding, and interpretability.

Recently, the physicist Alan Sokal likened AI chatbots to a decent student taking an oral exam. “When they know the answer, they’ll tell it to you, and when they don’t know the answer they’re really good at bullsh*tting,” he said at an event at the University of Pennsylvania. According to Sokal, a user who lacks sufficient knowledge of a topic may not detect a chatbot’s “bullsh*t.” This, to me, perfectly encapsulates AI’s supposed “knowledge.” It mimics understanding by predicting plausible word sequences, but it has no real grasp of the underlying concepts.

This is why “creative” AI systems have sparked debates about whether large language models genuinely understand cultural nuances. When teachers worry that AI tutors could impede students’ critical thinking, or doctors flag algorithmic misdiagnoses, they are pointing at the same issue: Machine learning excels at pattern recognition but lacks the in-depth knowledge gained from systematic human experience and the scientific method.

That is where a knowledge-guided approach to machine learning offers a potential solution. It emphasizes incorporating human knowledge directly into the learning process. PINNs (Physics-Informed Neural Networks) and MINNs (Mechanistically Informed Neural Networks) are two examples. Although the names may sound complex, the concept is straightforward: AI improves when it adheres to established rules, whether physical laws, biological systems, or social dynamics. That means humans are still needed, not just to apply knowledge but to create it. AI performs best when it learns from us.
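To make the idea concrete, here is a minimal sketch of the PINN recipe in PyTorch. Everything in it (the Newtonian-cooling law, the coefficients, the tiny network) is an illustrative assumption of mine, not a model from any study. The point is simply that a physics residual is added to the ordinary data-fitting loss, so the network is penalized whenever its predictions violate the governing equation.

```python
# Minimal PINN sketch (illustrative, not from any cited work).
# The network learns u(t) governed by Newtonian cooling:
#     du/dt = -k * (u - u_env)
# Loss = data misfit + physics residual, so sparse, noisy data
# cannot pull the model away from the law it must obey.
import torch
import torch.nn as nn

torch.manual_seed(0)
K, U_ENV = 0.3, 20.0  # assumed cooling rate and ambient temperature

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# A few noisy "measurements" of the true solution u(t) = u_env + 60*exp(-k*t)
t_data = torch.tensor([[0.0], [1.0], [4.0]])
u_data = U_ENV + 60.0 * torch.exp(-K * t_data) + 0.5 * torch.randn_like(t_data)

# Collocation points where the ODE itself is enforced (no labels needed)
t_phys = torch.linspace(0.0, 10.0, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(3000):
    opt.zero_grad()
    loss_data = ((net(t_data) - u_data) ** 2).mean()     # fit the data
    u = net(t_phys)
    du_dt = torch.autograd.grad(u, t_phys, torch.ones_like(u),
                                create_graph=True)[0]
    loss_phys = ((du_dt + K * (u - U_ENV)) ** 2).mean()  # obey the ODE
    (loss_data + loss_phys).backward()
    opt.step()
```

The same pattern carries over to MINNs: swap the cooling law for whatever mechanistic equations describe the system at hand.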

I see this in my own work with MINNs. Instead of letting an algorithm guess from past data alone, we program it to follow established scientific principles. Consider a local farm growing flowers for essential oils. For this kind of business, bloom timing is crucial: Harvesting too early or too late reduces essential oil potency, hurting both quality and profitability. A purely data-driven AI might waste time analyzing irrelevant patterns. A MINN, by contrast, starts with plant biology, employing equations that link heat, light, frost, and water to blooming in order to make timely, financially relevant predictions. But it succeeds only because it is grounded in the physical, chemical, and biological world, and that grounding comes from science, which humans develop.
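I cannot reproduce the farm model here, but a common mechanistic ingredient for bloom timing is heat accumulation, often expressed as growing degree days. The sketch below is a deliberately simplified, hypothetical stand-in: the base temperature and bloom threshold are made-up placeholders, and it ignores the light, frost, and water terms a real MINN would include. It shows the kind of biological rule the learning component is built around, not the trained model.

```python
# Hypothetical growing-degree-day (GDD) bloom model (illustrative only).
# Bloom is predicted once heat accumulated above a base temperature
# crosses a crop-specific threshold; both constants are placeholders.
import numpy as np

BASE_TEMP_C = 5.0   # assumed developmental base temperature
BLOOM_GDD = 450.0   # assumed heat accumulation needed for bloom

def predict_bloom_day(daily_mean_temps_c: np.ndarray) -> int | None:
    """Return the first day index when accumulated GDD reaches the threshold."""
    gdd = np.cumsum(np.maximum(daily_mean_temps_c - BASE_TEMP_C, 0.0))
    crossed = np.nonzero(gdd >= BLOOM_GDD)[0]
    return int(crossed[0]) if crossed.size else None

# Example: a synthetic spring that warms from 4°C to 22°C over 120 days
temps = np.linspace(4.0, 22.0, 120)
print(predict_bloom_day(temps))  # first day the threshold is crossed, or None
```

In a MINN, a rule like this anchors the predictions; the neural network then learns the structure that the simple equation misses.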

Imagine applying this approach to cancer detection: Breast tumors give off heat because of increased blood flow and metabolism, and a purely predictive AI could analyze numerous thermal images and flag tumors from data patterns alone. A MINN, however, such as one recently developed for this task, uses body-surface temperature data and integrates the laws of bioheat transfer directly into the model. Instead of guessing, it understands how heat moves through the body, which lets it identify what is wrong, what is causing it, and precisely where it is, all from the physics of heat flow through tissue. In one instance, a MINN predicted a tumor’s location and size to within millimeters, relying entirely on how cancer disrupts the body’s heat signature.
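For readers curious what “integrating bioheat transfer laws” can look like in code, here is a sketch of a physics residual built from a steady-state, one-dimensional Pennes-style bioheat equation. It is my simplified illustration, not the published model, and every coefficient is an assumed placeholder; a residual like this would be added to the training loss exactly as in the cooling example above.

```python
# Sketch of a Pennes-style bioheat residual (illustrative placeholders).
# Steady-state 1-D form assumed here:
#     k * T'' + w_b * c_b * (T_a - T) + q = 0
# where q is the metabolic heat source; a tumor appears as an elevated q
# that perturbs the surface temperatures the network is fit to.
import torch

K_TISSUE = 0.5     # thermal conductivity, W/(m*K) (assumed)
WB_CB = 2000.0     # perfusion times blood heat capacity, W/(m^3*K) (assumed)
T_ARTERIAL = 37.0  # arterial blood temperature, degrees C

def bioheat_residual(net: torch.nn.Module, x: torch.Tensor,
                     q: torch.Tensor) -> torch.Tensor:
    """Residual of the steady Pennes equation at leaf points x with source q(x)."""
    x = x.requires_grad_(True)
    t = net(x)
    dt = torch.autograd.grad(t, x, torch.ones_like(t), create_graph=True)[0]
    d2t = torch.autograd.grad(dt, x, torch.ones_like(dt), create_graph=True)[0]
    return K_TISSUE * d2t + WB_CB * (T_ARTERIAL - t) + q
```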

The key takeaway is simple: Humans remain essential. As AI advances, our role isn’t disappearing but evolving. Humans need to “call bullsh*t” when an algorithm produces something strange, biased, or incorrect. This isn’t just a limitation of AI; it’s a strength of humans. It means our knowledge also needs to expand so we can guide the technology, keep it in check, ensure it functions as intended, and benefit people in the process.

The real danger isn’t that AI is becoming smarter; it’s that we might stop using our intelligence. If we treat AI as an unquestionable source, we risk forgetting how to question, reason, and recognize when something doesn’t make sense. Fortunately, the future doesn’t have to unfold this way.

We can build systems that are transparent, interpretable, and rooted in the accumulated human knowledge of science, ethics, and culture. Policymakers can fund research into interpretable AI. Universities can educate students who combine domain knowledge with technical skills. Developers can adopt frameworks like MINNs and PINNs that require models to adhere to reality. And all of us—users, voters, citizens—can demand that AI serve science and objective truth, not just correlations.

After more than a decade of teaching university-level statistics and scientific modeling, I now concentrate on helping students understand how algorithms function “under the hood” by learning the systems themselves, rather than using them by rote.

This approach is vital today. We don’t need more users clicking “generate” on black-box models. We need people who can understand the AI’s logic, its code and math, and catch its “bullsh*t.”

AI will not render education irrelevant or replace humans. But we might replace ourselves if we forget how to think independently and why science and deep understanding matter.

The choice is not whether to reject or embrace AI. It’s whether we’ll stay educated and smart enough to guide it.