When AI Tools Sit Unused, the Problem Is Usually People
Most AI rollouts fail quietly. The software gets purchased, the licences get activated, and then almost nothing changes. Staff revert to their old workflows. Managers stop asking about it. Six months later, someone in finance asks why the organisation is paying for tools nobody uses.
The technical side of AI adoption is rarely the hard part. The hard part is changing how people work, what they trust, and what they bother to learn. That is where an AI champions program earns its keep.
An AI champions program is a structured approach to identifying, training, and empowering a small group of employees who become internal advocates and practical guides for AI adoption. They are not IT staff or external consultants. They are people from finance, operations, customer service, legal - wherever the actual work happens - who understand both the tools and the specific problems their colleagues face every day.
This article explains how to build one that works.
What an AI Champion Actually Does
The role is often misunderstood. An AI champion is not a trainer who runs one-off workshops and disappears. They are not a help desk. They are not responsible for making everyone an AI expert.
What they actually do is narrower and more useful:
- Spot opportunities - They notice where AI could reduce friction in their team's daily work, because they do that work themselves
- Translate capability into context - They can explain what a tool does in terms that make sense to their colleagues, not in vendor language
- Reduce the activation energy - They sit near the people who need help, which means a quick question gets answered in a minute rather than a ticket getting logged and forgotten
- Feed intelligence back up - They tell leadership what is actually working, what is not, and where resistance is coming from
The last point matters more than most organisations realise. Without champions embedded in teams, leadership is flying blind. They get adoption metrics that measure logins, not actual use.
Selecting the Right People
Picking the wrong champions is the most common mistake. Organisations tend to default to whoever seems most enthusiastic about technology, or whoever is easiest to spare for the role. Neither criterion produces good champions.
The qualities that actually predict success are:
Credibility with peers. If colleagues already respect someone's judgement, they are far more likely to follow their lead on trying something new. A champion nobody listens to is just overhead.
Genuine curiosity about the tools. Not enthusiasm for AI as a concept, but willingness to actually sit with a tool, break it, figure out its limits, and form honest opinions. Champions who oversell capabilities destroy trust fast.
Communication skills. They need to explain things clearly to people who are sceptical, busy, or both. This is harder than it sounds.
Tolerance for ambiguity. AI tools change constantly. Champions need to be comfortable saying "I don't know, but I'll find out" without losing credibility.
Practically speaking, aim for one champion per team or business unit, and at least one champion for every 20-25 staff. Smaller organisations might start with three or four champions covering multiple areas. The ratio matters less than having genuine coverage across the functions where you want adoption to happen.
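The sizing guideline above is simple enough to sketch as a calculation. The team names and headcounts below are purely illustrative, and the 25-staff divisor is just the upper end of the 20-25 range mentioned above:

```python
# A rough sizing sketch: at least one champion per team, and at least
# one for every 20-25 staff. Team names and headcounts are hypothetical.
import math

RATIO = 25  # upper end of the 20-25 staff-per-champion guideline

def champions_needed(team_headcounts):
    """Return a per-team champion count: headcount / RATIO, minimum one."""
    return {team: max(1, math.ceil(n / RATIO))
            for team, n in team_headcounts.items()}

teams = {"finance": 18, "operations": 42, "customer service": 60, "legal": 7}
print(champions_needed(teams))
# operations needs 2 (42/25 rounds up); the smaller teams need 1 each
```

Note that small teams still get a champion each under this rule, which is the point: coverage across functions matters more than hitting an exact ratio.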
Structuring the Program
An AI champions program needs enough structure to be consistent, but not so much that it becomes a bureaucratic burden on people who still have day jobs.
Initial Training
Start with two to three days of focused onboarding. This should cover:
- The specific tools the organisation has deployed, with hands-on time
- The organisation's AI policy and governance framework (if you do not have one, build it before you launch champions)
- How to identify good use cases versus poor ones
- How to handle data privacy questions - champions will get these constantly
- Basic prompt engineering for the tools in use
Avoid generic AI literacy courses. Champions need to know your tools, your data environment, and your specific constraints.
Ongoing Support
Champions need a channel to ask each other questions and escalate issues. A dedicated Slack or Teams channel works fine. More important is a regular cadence - a 45-minute fortnightly call where champions share what they are seeing in their teams, what is working, and what questions they cannot answer. This call should include someone from IT or whoever manages the AI stack, and ideally someone from leadership who can act on what they hear.
Recognition and Incentives
Champions are taking on additional responsibility. This needs to be acknowledged explicitly, not just with a mention in a team meeting. Options include:
- Formal recognition in performance reviews
- Access to more advanced training and certifications
- A small time allocation - even two hours per week protected from other work - to focus on the champion role
- Involvement in AI strategy decisions, which many people find genuinely motivating
If champions feel like they are doing extra work for no recognition, they burn out or disengage. Either outcome kills the program.
A Concrete Example: How One Professional Services Firm Did It
A mid-sized accounting firm with around 180 staff rolled out an AI writing and research assistant across the business. Initial adoption was poor - about 15% of staff were using it regularly after two months, mostly within the marketing team, which had pushed for it in the first place.
They identified six champions across their audit, tax, advisory, and operations teams. Each champion spent three days in structured onboarding, then returned to their teams with a simple brief: find two or three tasks where the tool could save time, test it properly, and share what they found.
Within six weeks, the audit team champion had developed a specific workflow for drafting management letters that cut the time spent on first drafts by roughly 40%. She ran a 90-minute session with her team showing exactly how she used it, including where it got things wrong and how she caught errors. Because she was a senior auditor they respected, and because she was honest about the tool's limitations, the team engaged with it seriously.
By month four, overall adoption had risen to 58% of staff using the tool at least weekly. More importantly, the firm had a clear picture of which use cases were generating real value and which were not - intelligence that came directly from the champions network.
The program cost roughly two weeks of staff time across the six champions, plus the initial training investment. The return on that investment was measurable within a quarter.
Managing Resistance
Champions will encounter resistance. Some of it is reasonable - people have legitimate concerns about job security, data privacy, and whether the tools are actually good enough to be worth learning. Some of it is inertia. Champions need to be prepared for both.
The most effective approach is to stop trying to convince sceptics with general arguments about AI and start with specific, local evidence. "Here is what this tool did for Sarah in your team last week, and here is exactly how she used it" is far more persuasive than any statistic about productivity gains.
For deeper resistance rooted in fear about job security, that is a conversation for managers and leadership, not champions. Champions should not be put in the position of having to defend organisational strategy. Their job is to help people use tools effectively, not to manage change anxiety that requires a different kind of response.
Measuring Whether It Is Working
An AI champions program needs to be evaluated honestly. Useful metrics include:
- Active adoption rates - the percentage of staff using specific tools at least weekly, tracked by team so you can see where champions are having impact
- Use case documentation - how many practical workflows have champions identified and documented? This is a tangible output
- Champion retention - are the people you invested in still in the role six months later? High turnover signals the program is not supporting them properly
- Qualitative feedback - regular short surveys to both champions and their teams about what is and is not working
What to avoid measuring: completion of training modules, number of logins, or any metric that can be gamed without actual behaviour change. These numbers look good in reports and tell you almost nothing useful.
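The first metric above - weekly active use tracked by team - is easy to compute from whatever usage export your tools provide. The log format, roster, and names below are hypothetical; the point is that the metric counts people who actually used the tool in a given week, not logins or licence activations:

```python
# A minimal sketch of the "active adoption" metric: the fraction of staff
# in each team who used a tool at least once in a given ISO week.
# The usage-log format and roster here are illustrative assumptions.
from datetime import date
from collections import defaultdict

def weekly_adoption(usage_log, roster, week):
    """usage_log: iterable of (user, day) usage events.
    roster: {user: team} for all staff, including non-users.
    week: (iso_year, iso_week) tuple.
    Returns {team: fraction of that team active during the week}."""
    active = defaultdict(set)
    for user, day in usage_log:
        if day.isocalendar()[:2] == week and user in roster:
            active[roster[user]].add(user)
    team_sizes = defaultdict(int)
    for user, team in roster.items():
        team_sizes[team] += 1
    return {t: len(active[t]) / team_sizes[t] for t in team_sizes}

roster = {"ana": "audit", "ben": "audit", "cas": "tax", "dev": "tax"}
log = [("ana", date(2024, 3, 4)), ("ben", date(2024, 3, 5)),
       ("cas", date(2024, 3, 6))]
print(weekly_adoption(log, roster, (2024, 10)))
# audit: 1.0 (both members used it), tax: 0.5 (one of two)
```

Because the denominator is the full roster rather than registered users, the number cannot be inflated by licence activations - which is exactly the gaming that the paragraph above warns against.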
What to Do Next
If your organisation has deployed AI tools and is not seeing the adoption you expected, an AI champions program is one of the most cost-effective interventions available. It does not require large budgets or external consultants to run indefinitely - it requires identifying the right people, giving them real support, and treating the role seriously.
Here are practical first steps:
- Audit your current adoption - Get honest data on who is actually using your AI tools, how often, and for what. This tells you where to focus.
- Identify two or three potential champions - Look for credible, curious people in the teams with the lowest adoption. Have a direct conversation about the role before you announce anything publicly.
- Build or review your AI governance framework - Champions cannot answer policy questions if there is no policy. This needs to exist before champions start fielding questions.
- Design a structured onboarding - Generic training will not do the job. Build something specific to your tools and your context.
- Commit to the support infrastructure - The fortnightly call, the communication channel, the recognition. Without these, the program runs out of steam within three months.
If you want to talk through how to structure an AI champions program for your specific organisation, the team at Exponential Tech works with Australian businesses on exactly this kind of practical AI strategy. Reach out at exponentialtech.ai.