AI is transforming the way teams work. But it’s not just the tools that matter. What matters is what happens to thinking when those tools do the heavy lifting, and whether managers notice it before the gap widens.
There is a common pattern across sectors. AI-enabled work looks polished. The reports are clean. The analyses are structured. But when someone asks the team to defend a decision rather than summarize it, the room goes quiet. The output is there, but the reasoning is not owned.
For David, the COO of a mid-sized financial services company, the problem came to light during quarterly planning. Multiple teams presented the same compelling statistic about regulatory timelines, but it turned out to be incorrect. It came from an AI-generated summary that combined outdated guidance with a recent policy draft. No one had checked it. No one had questioned it. It just sounded good.
“We weren’t lazy,” David told us. “We just didn’t have a process that asked us to look twice.”
Through our work advising teams navigating AI adoption (Jenny as an executive coach and learning and development designer, Noam as an AI strategist), we have seen a clear distinction: there are teams where AI flattens performance, and teams where it deepens it. The difference is not whether AI is allowed. It is whether judgment is designed back into the work.
The good news is that teams can adopt practices that shift them from producing answers to making decisions. This way of working doesn’t slow things down. It raises performance where it matters most, and it protects the judgment that no machine can replace.
1. The fact audit: question the AI’s output
AI produces fluent language. That’s exactly what makes it dangerous: if the output sounds authoritative, people stop checking it. It’s a pattern often called “workslop”: AI-generated output that looks polished but lacks the substance to hold up to scrutiny. Critical thinking gets stronger when teams learn to treat AI output as unverified input, not a final source.
David didn’t punish the teams that got the stat wrong. He redesigned the process. Before any strategic analysis could move forward, teams had to conduct a fact audit: identify the AI-generated claims and validate them against primary sources such as regulatory filings, official announcements, or verified reports. The mandate was not about catching mistakes. It was about building a reflex.
Over six months, the quality of planning inputs improved significantly. Teams started identifying uncertainty themselves, before anyone asked.
The World Economic Forum’s 2025 Future of Jobs Report underlines this: in high-stakes decisions, AI should augment human judgment, not replace it. Embedding that principle in daily work is not optional. It is a competitive advantage.
Try this: Start with three. Don’t overhaul the entire process at once. Ask each team member to highlight three AI-generated claims in their next deliverable and trace them back to a source. Keep it lightweight; the habit matters more than the volume.
2. The fit audit: demand context-specific thinking
AI follows standard best practices. That’s by design. But generic advice rarely wins in a specific situation. The true test of critical thinking is not whether an answer sounds smart, but whether it fits.
Rachel, managing partner at a global consulting firm, noticed it right away. Her teams relied on AI to draft client recommendations, and the results were consistently competent but painfully interchangeable. “Improve stakeholder communications. Build organizational resilience,” she told us. “It could have been written for anyone. It was written for no one.”
She introduced a simple checkpoint. Before any recommendation could go out, the team had to answer one question in writing: Why does this solution work here, and not for our last three clients? They had to explicitly tie each suggestion to the client’s constraints, the firm’s methodology, and the real stakeholder landscape.
The shift was immediate. Teams began rejecting the generic AI language and replacing it with reasoning of their own. Client presentations became sharper. Debate replaced easy consensus.
Gallup’s 2025 workplace data shows why this matters at scale. Nearly a quarter of employees now use AI every week to consolidate information and generate ideas, but effective use requires strategic integration, not just access. Managers are the ones who set that standard.
Try this: Make it verbal. Written fit audits are good, but you can also ask a team member to explain their recommendation out loud, in a five-minute stand-up or a quick team check-in. Misalignment surfaces fast when people can’t hide behind polished text.
3. The asset audit: make human contributions visible
Here’s what most managers miss: Even when employees think critically, that thinking is invisible. If it doesn’t surface, it won’t be recognized and it won’t be developed.
Marcus, a VP of strategy at a technology company, started requiring a short “decision log” alongside each quarterly business review. Not a summary of what the AI produced; an account of what the team decided to do with it.
The questions were simple: What assumptions did you challenge? What did you revise? What did you reject, and why? One regional manager used the log to surface something the AI had completely missed: the tension between short-term revenue goals and long-term customer retention. She rewrote the analysis framework to expose that trade-off. The review became a strategic conversation rather than a status update.
“It changed what we were looking for,” Marcus said. “We stopped evaluating the output. We started evaluating the judgment.”
McKinsey’s research confirms the stakes: heavy users of AI report needing higher-level cognitive and decision-making skills more than technical skills. As AI handles the routine work, the human contribution becomes the real competitive advantage. Making it visible is not just good management. It’s strategy.
Try this: Keep the log short, just three to five bullet points. What was the AI input? What did the team change? What was the final call, and why? The goal isn’t documentation for its own sake: it’s making the thinking something the team can see, discuss, and learn from.
4. The process audit: capture how the team thinks
Critical thinking deepens when people can trace their own reasoning: not just the final output, but the process that shaped it. Without that trace, every project starts from scratch; with it, the team builds institutional knowledge.
Sarah, a partner at a professional services firm, began requiring a brief process description before each client presentation. Not a summary of the final product; a trail: what prompts were used, what sources were checked, where the framing shifted and why.
After each presentation, team members wrote a short individual reflection: Where did my thinking change during this process? Over time, the artifacts became a shared learning tool. Teams could see which prompts yielded superficial results, which revisions added real value, and how collaboration shaped the final judgment.
“It turned experimentation into something reusable,” Sarah told us. “Every project used to feel like starting over. Now we build on what we have already created.”
The result wasn’t just better work. It was a team that got sharper and faster together.
Try this: Create a shared tracker. Keep it simple: a shared document, a Notion page, or even a Slack channel. Record what prompt was used, what worked, what didn’t, and what you would try next. No slides, no pressure. The goal is to normalize small bets and shared learning in real time.
Critical thinking with AI
AI is only as powerful as the people who use it deliberately. The best teams don’t win because they have the fastest tools. They win because they have built habits of judgment.
They question what sounds good. They demand context over consensus. They make their thinking visible and learn from it.
Managing critical thinking in the AI era does not require banning tools or lowering standards. It requires clarity about where the thinking lives.
Drawing that line between what AI should handle and what should remain human is one of the defining responsibilities of leadership right now. AI is changing the way work is done. Management determines how people think while they do it.