The "AI Adoption" Trap: Usage Isn't Impact
Last week, a senior leader at a large investment firm opened our Zoom call with what usually sounds like a win: "We've got high usage across the firm."

Everyone has access. People are familiar with the tools. A few internal workflows are already connected.

Great. I love talking with people for whom AI is genuinely embedded and working. I quickly pegged this as a "trading notes" kind of call, not a request for help from me.

And then, about twenty minutes in, he shared his concern: "But the usage data isn't telling us anything useful anymore."

"Do people use AI?" is a beginner question. It's the question you ask in month two, when you're just trying to prove the investment wasn't wasted. The more revealing questions (the ones that actually drive ROI) are different:

How do they use it?

For which workflows?

With what context?

And are they producing better work, faster, with less risk?

This firm had reached the awkward middle. They weren't ready to drop serious money on niche "AI-for-finance" platforms that promise the moon. They didn't want an internal engineering team that turns every good idea into a six-month IT project. But they had dozens of smart people with real ideas and no operating model to turn those ideas into repeatable tools.

The example we kept circling back to: investment materials.

Early-stage writeups are getting close to 90% quality with the right prompts and structure. That part is actually working. But later-stage outputs (long memos, 50-slide decks, multiple revision rounds) are where time goes to die. Not because people are slow. Because the organization lacks reusable building blocks: shared context, prompt libraries, consistent templates, and a lightweight feedback loop.

My bias (and it's a "business first, AI second" bias) is that the next step here isn't a grand rollout or another all-hands training.

It's small groups of people who already work together, meeting every other week for a focused sprint. One workflow. Clear inputs and outputs. A draft "recipe." Then iterate, live, until it holds up under real deadline pressure. If the culture doesn't support homework, run it office-hours style. Do the old school things that work: screen-share, build it together, ship something usable before people leave the room.

Here's the punchline: AI transformation isn't tool adoption. It's workflow authorship.

And authorship needs time, structure, and someone keeping the experiment from dying in everyone's inbox.

That's where the real ROI starts showing up.

Alex

Alex Talks AI

As an AI Coach, Advisor, and Agent Builder, I help organizations and business leaders harness artificial intelligence to boost productivity and streamline operations. I help teams navigate the transformative landscape of AI: educating people, identifying operational and strategic opportunities, and creating a framework for the safe and transparent use of data in the organization.
