Today’s episode is a conversation with Bret Kinsella, recorded while he was in Las Vegas for CES and preparing to take the AI stage. Bret brings a rare combination of long-term perspective and hands-on experience. As general manager of Fuel iX at TELUS Digital, he operates generative AI systems at a scale most companies will never see, processing trillions of tokens and delivering measurable business results for global organizations. That vantage point gives him a clear view of both the promise of generative AI and the uncomfortable truths that many teams are still avoiding.
Together we explore why generative AI breaks so many of the assumptions security teams have relied on for decades. Bret explains why these systems are probabilistic rather than deterministic, and how that one shift creates what he calls a limitless attack surface.
Users are no longer limited to predefined buttons or workflows, and output is no longer limited to a fixed database. The same prompt can pass or fail depending on subtle changes, making single-pass testing and checkbox fulfillment dangerously misleading. If you’ve ever wondered why an AI system feels safe one day and unpredictable the next, this conversation provides an informed explanation.
We also investigate why focusing only on the model misses the real risk. Bret makes a strong case that the model is just one part of a much larger system made up of system prompts, connected data sources, tools, and guardrails. Change any of these elements and behavior changes. This is why automated, continuous red teaming has become inevitable.
Bret shares how TELUS Digital’s Fortify AI attack model uncovered hundreds of vulnerabilities in a matter of hours, far beyond what human teams could realistically discover on their own. Yet automation is not the end of the story. The final decisions still depend on people understanding the context, trade-offs, and business impact.
Throughout the discussion we return to a simple but uncomfortable idea. AI safety is not something you determine after implementation. It requires a different mindset, broader testing, iterative validation, and ongoing human judgment. For leaders moving from experimentation to real-world implementation, this episode is a clear look at what responsible progress actually requires.
As more organizations rush to deploy agents and autonomous systems in 2026, are we really prepared for software that learns, adapts, and occasionally surprises us? What does that mean for the way you test and trust AI within your own company?
Useful links
Subscribe to the Tech Talks Daily podcast