Yesterday I had one of those calls that starts like small talk and ends like a strategy session you wish you’d recorded. The leader I spoke with runs a large team at a consumer-facing tech company (keeping it anonymous on purpose). They’ve been tasked with driving AI adoption company-wide. Not “run a pilot.” Not “explore.” Company-wide.

And they’ve actually made real progress: clear pillars (employee productivity, product innovation, customer service efficiency), early wins in customer support automation, and a foundation for shipping AI features repeatedly, not just once.

Then came the most interesting moment: they brought up a term I hadn’t heard before. “AI brain fry.” The idea is simple and slightly uncomfortable: the people pushing hardest for AI (often executives) may be the ones least exposed to the day-to-day cognitive load of using it. AI creates an explosion of optionality. Optionality creates decisions. Decisions create context switching. And context switching, at scale, is exhausting.

This is why “training” alone doesn’t work, especially for leadership teams. Most enablement is designed for individual contributors: write better prompts, summarize docs, draft emails. Useful, but incomplete. Executives don’t need more tactics; they need a new operating rhythm for how decisions get shaped.
The punchline from the call: adoption often slows the higher you go. These leaders care about the org, but their days are meetings, judgment calls, and people-work. It’s hard for them to make the time to use AI for themselves. And, ironically, that’s exactly where AI can help most. But you need to design for it, make the time for it, and iterate.

If you’re trying to “drive AI adoption,” consider this your reminder: you’re not deploying software. You’re redesigning how humans make decisions.

Alex
As an AI Coach, Advisor, and Agent Builder, I help organizations and business leaders harness the power of artificial intelligence to boost productivity and streamline operations. I help organizations navigate the transformative landscape of AI: educating teams, identifying operational and strategic opportunities for AI, and creating a framework for the safe and transparent use of data in the organization.
I saw the future yesterday. In a blurry screenshot of my laptop, taken with my phone from across the room. Here's what happened. Claude Desktop (the app that launched in January with Code and Cowork) quietly added something new: Dispatch. It's a feature that lets your phone talk to your desktop. Not just send messages. Actually operate your computer. I paired my phone with my desktop through a QR code in the app's left-hand menu, tapped Dispatch, and typed: "Get the last...
I’ve been on a lot of calls lately with private equity firms trying to figure out AI. Not “should we use AI” calls. That ship has sailed. These are the harder conversations: where do we actually start, what’s worth paying for, and how do we get our teams to use this stuff consistently? Three calls this week. Three very different firms. And yet the same five themes kept surfacing. I think they apply well beyond PE, to enterprises and to non-profits. Build vs. Buy is the wrong question (until...
There's a specific kind of anxiety that comes with being in AI right now. It's not the fear of being left behind. It's the accumulation of "I should really learn that"... the Substack you flagged, the YouTube video someone texted you, the X thread with 200 likes you saved and never opened. For a while, my "learning system" was a graveyard of browser tabs and starred emails. I knew things were there. I just couldn't find them. And the more they piled up, the less I actually learned because...