From my experience, and from more and more conversations I’ve been having lately, there is a scene playing out in organizations right now…
Leadership is asking what the AI strategy is (and where they can find ROI).
The IT team is evaluating security, procurement, governance, ecosystems, permissions, compliance, and stack alignment.
Employees are trying to figure out whether any of these tools will actually help them get through their day faster without making life more complicated.
And the end result?
While the discussion is happening, half the employees are quietly using their own free ChatGPT account anyway.
That is not me being flippant. That is the reality: I see it over and over.
You should see the sheepish grins on people’s faces when I ask the crowd from the stage, “How many of you are using your personal ChatGPT for work?”
It is also why so many organizations feel like they are “using AI” without actually getting very far.
Three Groups, Three Very Different Priorities
One group is worried about protecting the organization.
Another group is just trying to get their work done.
A third group wants to know whether this will create real value, support intelligent standardization, and move the needle in a meaningful way.
None of those groups are wrong.
That is the important part.
The IT team is not wrong for caring about security, governance, access, procurement, approved tools, vendor relationships, and where this all fits into the broader tech stack. That is their job. If something goes sideways, they are the ones who have to clean it up.
End users are not wrong for wanting something clear, useful, and not overwhelming. They are busy. They have deadlines. They are not looking to become amateur AI analysts on the side. Most of them are asking a pretty simple question: Will this actually help me do my job better, or is this one more thing I am supposed to learn and work into my day?
Leadership is not wrong for focusing on standardization, scalability, and ROI. They should be. They are not paying for AI because it is interesting (and paying for it includes affording time for staff to learn, implement, and experiment). They are paying for it because they want better output, stronger execution, more consistency, better decision-making, time savings that actually lead somewhere, and ideally, some level of competitive advantage.
The Real Problem: Everyone Is Having a Different Conversation
The problem is not that these groups disagree.
The problem is that they are often having entirely different conversations.
That is where things start to break down.
The IT team says, “We need to be thoughtful.”
Users hear, “You are making this harder than it needs to be.”
Users say, “We just want something that works.”
IT hears, “You do not care about risk.”
Leadership says, “Show me the business case.”
Everybody else hears, “You are on your own until you can turn this into a spreadsheet.”
And once that gap opens up, people start solving the problem for themselves.
How Shadow AI Starts
That is when shadow AI starts.
An employee uses a personal account.
A department head starts testing tools without telling anyone.
Someone in marketing loves one platform.
Someone in operations swears by another.
Someone in sales heard from a friend that a different tool is better.
Someone on the leadership team read one article and now wants everybody on the same thing by Friday.
This is how organizations end up with AI chaos dressed up as innovation.
The irony is that most of this does not happen because people are resisting AI.
It happens because they are trying to use it.
They are trying to make sense of it in the absence of a common language, shared priorities, and a practical path forward.
Organizations Do Not Just Need AI Access. They Need AI Translation.
That is why I keep coming back to the same point: most organizations do not just need AI access. They need AI translation.
They need someone in the room who understands what IT is protecting, what users are dealing with, and what leadership is actually trying to accomplish.
Because if you do not bring those groups together, one of two things usually happens.
Either the organization moves too slowly, talks itself in circles, and never gets past the planning stage.
Or it moves too fast, creates confusion, overwhelms employees, and ends up with a mess of overlapping tools, wasted money, inconsistent usage, and no real standard.
Neither of those is adoption.
What a Better Approach Looks Like
So what does a better approach look like?
What IT Needs to Understand About Users
First, IT has to understand that most users are not asking for a grand AI vision. They are asking for relief.
They want help writing, summarizing, organizing, preparing, analyzing, brainstorming, and communicating. They do not want a maze of tools and permissions and optionality dropped in their lap. They do not want to sit through ten explanations of model architecture. They do not care what this morning's tech blog says about why not to use a specific platform. They want to know what they are allowed to use, what it is good for, what it is not good for, and how it can make their day easier without creating risk.
That is not laziness. That is practicality.
What Users Need to Understand About IT
Second, users need to understand that IT is not being difficult for the fun of it.
If there is a security issue, a privacy issue, a data leak, a compliance problem, or a procurement mess, IT is the team that owns the consequences. They are not worried about governance because they like red tape. They are not exploring different avenues and platforms because they are big nerds. They are worried about it because one careless decision can create real exposure for the organization.
That matters.
What Both Groups Need to Understand About Leadership
Third, both groups need to understand that leadership is not impressed by AI for AI’s sake.
Leadership does not care that someone saved twenty minutes writing a draft if that twenty minutes just disappears into the ether. They care what happened next. Did that extra time help improve client service? Did it help move a project forward faster? Did it help the company sell more, support better, reduce friction, create consistency, or make smarter decisions?
That is the shift.
The real value conversation is not “AI saved me ten minutes.”
It is “AI helped us redirect time and energy toward something that matters.”
That is the conversation leadership wants.
The Questions That Actually Bring People Together
So how do you bring these groups together?
You start by acknowledging that they are all solving for different things.
Then you stop treating AI as one giant conversation.
Instead, you make it specific.
- What are the top five use cases employees actually need help with?
- What are the red lines IT needs respected?
- What would leadership consider a meaningful win in the next 90 days?
- What needs to be standardized, and what can remain flexible?
- Where do you want experimentation, and where do you want consistency?
Those questions are far more useful than abstract discussions about “our AI journey.”
Where Momentum AI Fits
This is also why I believe a lot of organizations need a bridge between the technical conversation and the human one.
That is a big part of what we do through Momentum AI.
Not in the sense of swooping in with a canned answer or pushing one platform for everyone. It is about helping organizations cut through the noise, understand the motivations in the room, and create an AI approach that people can actually use, support, and measure.
Because the truth is, the best AI tool for an organization is not always the one that looks best in a demo.
It is often the one that fits the organization well enough to be adopted clearly, safely, and consistently.
That is a very different standard.
And it is the one more organizations need to use.
Alignment Is the Real AI Strategy
If your users are overwhelmed, your rollout is too complicated.
If your IT team is panicked, you’re going to take a long time to move forward.
If leadership cannot see the connection between usage and business value, your AI effort is too fuzzy.
That does not mean the answer is to pull back.
It means the answer is to get everybody speaking the same language.
IT team, meet your users.
Users, meet leadership.
Leadership, meet reality.
AI adoption is about more than choosing a platform and running with it.
It is an alignment decision.
And until organizations start treating it that way, a lot of very smart people are going to keep having very different conversations while employees quietly keep using whatever free tool they can find.
