What happens when artificial intelligence moves faster than our ability to understand, verify and trust it?
In this episode of Tech Talks Daily, I talk with eSentire’s Alexander Feick, a cybersecurity veteran who has spent more than a decade working at the intersection of complex systems, risk, and emerging technology. Alex leads eSentire Labs, where his team investigates how to secure new technologies before they quietly become critical parts of modern enterprise infrastructure.
Our conversation is about a current and uncomfortable reality. AI is being embedded into workflows, products and decision-making systems at a pace that most organizations are unprepared for.
Alex explains why many AI failures aren’t caused by malicious models or dramatic breaches, but by broken ownership, invisible dependencies, and a lack of continuous verification. These are not technical problems. They are organizational blind spots that quietly increase risk over time.
We also explore the ideas behind Alex’s recently published book on trust and AI, which he made available for free because of the speed at which real AI errors were already overtaking theory.
From prompt injection and model drift to the dangers of treating non-deterministic systems as if they were predictable software, Alex explains why generative AI requires a fundamentally different security mindset. He draws a clear distinction between chatbot AI and embedded AI, pinpointing the moment when trust quietly shifts from people to systems that cannot take responsibility.
The discussion delves deeper into what trust actually means in an AI-driven organization. Alex argues that trust must be continually earned, measured and monitored, not assumed after a successful pilot. Verification, not generation, becomes the real work, and leaders who fail to recognize this shift risk scaling errors faster than they can contain them. We also talk about why he turned his book into an AI consultant, what that experiment revealed about the limits of models, and why human responsibility can’t be automated away.
This is an informed, practical conversation for leaders, technologists, and anyone deploying AI in real organizations. If AI becomes part of the way decisions are made where you work, how confident are you that someone actually owns the outcome?
Useful links
Subscribe to the Tech Talks Daily podcast
#Trust #Verification #Ownership #Age #Alexander #Feick #eSentire


