Every L&D leader is being asked the same question right now: do our people have the AI skills they need? Most cannot answer it with any confidence — not because the question is unanswerable, but because very few organisations have done the foundational work of mapping which AI skills exist across their workforce and which ones are missing.
This guide is a practical, step-by-step walkthrough for L&D professionals who want to build a credible, actionable AI skills map. It covers what AI skills actually are, how to design a mapping framework that holds up under scrutiny, how to collect data without overwhelming your people, and — critically — how to turn the results into decisions rather than a slide deck that gets filed away.
Step 1: Understand What "AI Skills" Actually Means
Before you can map AI skills, you need a working definition. The term is used so loosely in the market that it has become almost meaningless. When a CEO says "our people need AI skills," they might mean anything from basic digital literacy to advanced prompt engineering to the ability to evaluate AI model outputs for bias. These are very different things, and conflating them leads to training programmes that miss the mark entirely.
For practical L&D purposes, AI skills can be organised into four distinct layers:
Foundational AI literacy is the baseline. It covers conceptual understanding of how AI systems work, what large language models are, how they generate outputs, and what their limitations are. Every employee in your organisation needs some version of this — not because they will all use AI tools daily, but because they will all be affected by AI decisions and need to be able to evaluate them critically.
Operational AI proficiency covers the ability to use AI tools effectively in a specific job context. This includes prompt engineering, the ability to structure tasks for AI assistance, knowing when to trust AI output and when to verify it, and understanding how to handle AI-generated content responsibly. This layer is role-dependent — a marketing manager's operational AI proficiency looks very different from a finance analyst's.
AI judgment and critical evaluation is the layer most organisations neglect. It covers the ability to assess the quality, reliability, and ethical implications of AI outputs. Can your people tell when an AI is hallucinating? Can they identify when a model has been trained on biased data? Can they make sound decisions about when AI assistance is appropriate and when human judgment must take precedence? This is where AI literacy becomes genuinely strategic.
AI collaboration and workflow integration is the most advanced layer and the one that will define competitive advantage over the next five years. It covers the ability to redesign workflows around AI capabilities, to identify where AI can augment human work most effectively, and to manage the human-AI collaboration dynamic in a team context.
Understanding which layer matters most for which role is the first and most important decision in any AI skills mapping exercise. If you skip this step and go straight to deploying a generic AI literacy survey, you will collect data that tells you very little about actual capability gaps.
Step 2: Define Your Scope Before You Measure Anything
One of the most common mistakes in AI skills mapping is trying to measure everything at once. The result is a sprawling exercise that takes months, produces data of questionable quality, and leaves L&D teams with a 200-row spreadsheet they do not know how to act on.
A more effective approach is to define scope deliberately before you begin. Ask three questions:
Which roles are in scope for this mapping exercise? Not every role needs a deep AI skills assessment in the first round. Start with the roles where AI adoption is already happening or where the business has made a strategic commitment to AI integration. These are the roles where skill gaps will have the most immediate impact — and where the data will be most actionable.
What is the business objective driving this mapping exercise? Are you trying to identify training needs for an upcoming AI tool rollout? Are you building a board-level AI readiness report? Are you trying to comply with the EU AI Act's Article 4 requirement for "sufficient AI literacy"? The answer shapes everything — the depth of assessment required, the format of the output, and the stakeholders who need to see the results.
What will you do with the data once you have it? This question sounds obvious but is rarely asked explicitly. If you cannot describe the specific decisions that the AI skills map will inform, you are not ready to run the mapping exercise. The most useful AI skills maps are built backwards from the decisions they need to support.
Step 3: Build Your AI Skills Framework
Once you have defined scope and objective, you need a framework — a structured list of the specific AI skills you will assess, organised by role and proficiency level. This is the intellectual core of the entire exercise, and it deserves serious investment.
A good AI skills framework has three characteristics. First, it is specific enough to be actionable. "AI literacy" is not a skill — it is a category. The skills in your framework should be specific enough that you can design a training intervention to address a gap in each one. "Ability to write effective prompts for text summarisation tasks" is a skill. "AI literacy" is not.
Second, it is calibrated to your organisation's actual AI tool stack. A framework built around generic AI concepts will not tell you whether your people can use the specific tools your organisation has deployed or is planning to deploy. Where possible, anchor your framework to the AI tools that are already in use or in procurement — Copilot, ChatGPT Enterprise, Gemini for Workspace, or whatever your stack looks like.
Third, it is differentiated by role and level. A junior analyst and a senior manager need different AI skills, even if they use the same tools. Your framework should specify not just what skills matter, but what proficiency level is expected for each role. This is what allows you to identify gaps rather than just measuring average capability.
For most organisations, building a robust AI skills framework from scratch is a significant undertaking. A faster and more reliable approach is to use a validated framework as a starting point and adapt it to your context. Selectic's AI Readiness Assessment is built on a research-backed competency model that covers all four layers described above and can be configured to reflect your specific tool stack and role structure.
Step 4: Choose Your Assessment Method
With a framework in place, you need to decide how you will collect data on where your people currently stand. There are three main approaches, each with different trade-offs.
Self-assessment surveys are fast and scalable, but they are also the least reliable method for measuring actual AI skills. Research consistently shows that people overestimate their own competence in areas they do not fully understand — a phenomenon known as the Dunning-Kruger effect. In the context of AI skills, this is particularly pronounced: employees who have used ChatGPT a few times often rate themselves as highly proficient, while employees who have thought more deeply about AI limitations rate themselves lower. Self-assessment data is useful for measuring confidence and willingness to adopt AI tools, but it should not be the primary basis for identifying skill gaps.
Manager assessments are slightly more reliable than self-assessments for operational skills, but they introduce their own biases. Managers tend to rate their team members based on general performance rather than specific AI skill proficiency, and they often lack the technical knowledge to assess AI judgment and critical evaluation skills accurately.
Structured skills assessments — scenario-based tests that present employees with realistic AI-related tasks and measure their ability to complete them — are the most reliable method for measuring actual AI skills. They take longer to design and administer than surveys, but the data quality is substantially higher. A well-designed AI skills assessment can distinguish between employees who understand AI concepts in theory and those who can apply them in practice — a distinction that self-assessment surveys almost never capture.
For most organisations, the most effective approach is a combination: use a structured assessment to measure actual proficiency, and layer a short confidence survey on top to understand how employees feel about their AI skills. The gap between actual proficiency and self-assessed confidence is itself a valuable data point — it tells you where you have employees who are capable but not yet confident (a training and communication challenge) versus employees who are confident but not yet capable (a more serious risk).
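This proficiency-versus-confidence comparison is straightforward to operationalise once both scores are on a common scale. The sketch below is purely illustrative: the names, scores, and the 60-point threshold are assumptions for the example, not taken from any specific assessment tool.

```python
# Illustrative sketch: cross-tabulating assessed proficiency against
# self-reported confidence to surface the two groups described above.
# The 60-point threshold and all data are invented for illustration.

THRESHOLD = 60  # illustrative cut-off on a 0-100 scale

def classify(proficiency, confidence):
    """Place one employee into a proficiency/confidence quadrant."""
    capable = proficiency >= THRESHOLD
    confident = confidence >= THRESHOLD
    if capable and not confident:
        return "capable, not confident"   # training/communication challenge
    if confident and not capable:
        return "confident, not capable"   # the more serious risk
    return "aligned"

team = [
    {"name": "A", "proficiency": 78, "confidence": 45},
    {"name": "B", "proficiency": 40, "confidence": 85},
    {"name": "C", "proficiency": 70, "confidence": 72},
]

for person in team:
    print(person["name"], "->", classify(person["proficiency"], person["confidence"]))
```

In practice the threshold would come from your framework's expected proficiency levels rather than a single fixed number, but the quadrant logic is the same.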
If you are looking for a validated, scenario-based approach to AI skills assessment, the Selectic Skills Mapping tool is designed specifically for this use case — it delivers an automated skill matrix for any competency, including AI skills, across any team or role structure.
Step 5: Run the Assessment and Collect Clean Data
The quality of your AI skills map depends entirely on the quality of the data that goes into it. There are several practical steps that make a significant difference to data quality.
Communicate the purpose clearly before you launch. Employees who do not understand why they are being assessed will either disengage or game the assessment. A clear, honest communication — explaining that the purpose is to identify training needs, not to evaluate individual performance — dramatically improves both participation rates and data quality. If employees believe the results will be used to make decisions about their employment, they will not answer honestly.
Ensure anonymity at the individual level where possible. For most AI skills mapping exercises, the unit of analysis should be the team or role, not the individual. Reporting results at the individual level creates anxiety and reduces honest participation. Aggregate data — "the marketing team has a significant gap in AI output evaluation skills" — is more actionable than individual scores and less threatening to collect.
Set a realistic completion window. Assessments that are left open indefinitely get low completion rates. A two-week window with a clear deadline and one reminder typically produces the best balance of completion rate and data quality. Longer windows allow procrastination; shorter windows create resentment.
Plan for non-participation. In most organisations, 15–25% of employees will not complete an AI skills assessment regardless of how well it is communicated. Plan for this in advance — decide what you will do with incomplete data sets and how you will handle roles where participation is too low to draw meaningful conclusions.
Step 6: Analyse the Results — What to Look For
Once you have data, the temptation is to report average scores by team or department and call it done. This is the minimum viable analysis, and it misses most of the value in the data.
The most useful analyses go beyond averages and look at the distribution of skills within teams. A team where everyone has moderate AI skills is in a very different position from a team where half the members are highly proficient and half have no AI skills at all — even if the average score is identical. The second scenario suggests a knowledge transfer opportunity; the first suggests a more uniform training need.
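The "identical average, different distribution" point is easy to demonstrate with basic descriptive statistics. In this minimal sketch, both teams average 60, but the spread tells two very different stories (all scores are invented for illustration):

```python
# Two teams with the same average score but very different skill
# distributions. The standard deviation reveals the difference the
# average hides. Scores are invented for illustration.
from statistics import mean, stdev

uniform_team = [55, 58, 60, 62, 65]      # everyone moderately skilled
polarised_team = [15, 20, 95, 90, 80]    # half novices, half experts

for label, scores in [("uniform", uniform_team), ("polarised", polarised_team)]:
    print(f"{label}: mean={mean(scores):.0f}, spread={stdev(scores):.0f}")

# A high spread flags a knowledge-transfer opportunity; a low spread
# flags a uniform training need.
```

Any spreadsheet can produce the same two numbers; the point is to report spread alongside the mean whenever you present team-level results.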
Look specifically for gaps between foundational literacy and operational proficiency. Employees who score high on AI literacy but low on operational proficiency are ready for practical, tool-specific training. Employees who score low on foundational literacy are not yet ready for operational training — they need conceptual grounding first. Confusing these two groups leads to training programmes that are either too advanced or too basic for the people in the room.
Pay particular attention to AI judgment and critical evaluation scores. This is consistently the weakest layer in most organisations' AI skills profiles, and it is also the layer most directly linked to AI risk. Employees who cannot evaluate AI outputs critically are the ones most likely to propagate AI errors, act on AI hallucinations, or make decisions based on biased model outputs. Gaps in this layer should be treated as a risk management priority, not just a training need.
Finally, look at the relationship between AI skills and AI tool adoption. In most organisations, there is a strong correlation between proficiency and adoption — employees who have stronger AI skills use AI tools more frequently and more effectively. But there are often pockets of high proficiency with low adoption (suggesting tool access or workflow integration barriers) and low proficiency with high adoption (suggesting employees are using AI tools without the skills to use them safely). Both patterns warrant attention.
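Both mismatch patterns can be flagged mechanically once you have a proficiency score and a usage measure per person. The cut-offs and data below are illustrative assumptions, not recommended values:

```python
# Flagging the two mismatch patterns described above: high proficiency
# with low adoption, and high adoption with low proficiency.
# Scores, usage counts, and cut-offs are illustrative assumptions.

def flag(proficiency, weekly_uses, skill_cutoff=60, usage_cutoff=5):
    if proficiency >= skill_cutoff and weekly_uses < usage_cutoff:
        return "capable but not adopting: check tool access and workflow fit"
    if proficiency < skill_cutoff and weekly_uses >= usage_cutoff:
        return "adopting without the skills: safety and quality risk"
    return None  # proficiency and adoption are aligned

employees = [("Ana", 82, 1), ("Ben", 35, 12), ("Cy", 75, 9)]
for name, prof, uses in employees:
    result = flag(prof, uses)
    if result:
        print(name, "->", result)
```

Remember the caveat from the "what not to measure" section below: usage data on its own is an adoption signal, not a proficiency measure. It only becomes useful when paired with assessed skill, as here.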
For a deeper look at how AI skills data connects to measurable business outcomes, see our article on The Direct Link Between AI Skills and Tangible Year-End Financial Results.
Step 7: Turn the Map into Action
An AI skills map that does not lead to action is a waste of everyone's time. The final and most important step is translating the data into a concrete learning and development plan.
Prioritise by impact, not by gap size. The largest skill gaps are not always the most important ones to address. A significant gap in a skill that is rarely used in practice is less urgent than a smaller gap in a skill that is central to daily work. Prioritise training interventions based on the business impact of the skill, not just the size of the gap.
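One simple way to put this principle into practice is to weight each gap by a business-impact rating before ranking. The skills, gap sizes, and 1-to-5 impact weights below are invented for illustration; in a real exercise the weights would come from stakeholder input:

```python
# Illustrative prioritisation: rank skill gaps by gap size multiplied by
# a business-impact rating (1-5), rather than by gap size alone.
# All skills, gaps, and weights are invented for illustration.

gaps = [
    # (skill, gap in percentage points, business impact 1-5)
    ("Evaluating AI output for errors", 18, 5),
    ("Advanced prompt chaining",        35, 2),
    ("Prompting for summarisation",     22, 4),
]

ranked = sorted(gaps, key=lambda g: g[1] * g[2], reverse=True)
for skill, gap, impact in ranked:
    print(f"{skill}: priority score {gap * impact}")

# Note that the largest raw gap (35 points) ranks last once business
# impact is factored in.
```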
Design role-specific learning pathways. Generic AI training programmes — "AI for everyone" courses that cover the same content for all employees — are almost always a poor investment. The data from your AI skills map should allow you to design differentiated learning pathways: a foundational literacy programme for employees who need conceptual grounding, an operational proficiency programme for employees who are ready for tool-specific training, and a judgment and critical evaluation programme for employees who need to develop more sophisticated AI assessment skills.
Build in measurement from the start. The only way to know whether your training interventions are working is to measure AI skills again after the training has been delivered. Build a re-assessment into your learning pathway design — not as an afterthought, but as a core component. The gap between pre-training and post-training scores is the most direct measure of training effectiveness available to L&D teams. For a practical framework on how to measure and report this, see our article on Moving Beyond the 'Smile Sheet': Why Assessments Are the Ultimate Measure of Training Effectiveness.
Report results in business language. L&D teams that present AI skills data in terms of competency scores and training completion rates will struggle to get executive attention. L&D teams that present the same data in terms of AI adoption rates, risk exposure, and projected productivity impact will get budget and support. Translate your AI skills map into business language before you present it to leadership — and use the ROI framework to make the case for investment. Our ROI of Learning service is designed specifically to help L&D teams make this translation.
What Not to Measure: Common Mistakes in AI Skills Mapping
Just as important as knowing what to measure is knowing what to avoid. Several common approaches to AI skills mapping produce data that looks useful but is not.
Do not measure AI tool usage as a proxy for AI skills. The number of times an employee opens Copilot or ChatGPT tells you nothing about whether they are using those tools effectively. Usage data is a measure of adoption, not proficiency. It can be a useful leading indicator, but it should never be the primary basis for an AI skills assessment.
Do not rely on manager nominations to identify "AI champions." The employees who are most enthusiastic about AI tools are not necessarily the ones with the strongest AI skills — and the employees with the strongest AI skills are not always the most visible ones. Skills mapping based on manager nominations introduces significant selection bias and misses the employees who are quietly developing strong AI capabilities without broadcasting it.
Do not use a single assessment to cover all four skill layers. A 10-question survey cannot reliably measure foundational literacy, operational proficiency, AI judgment, and workflow integration simultaneously. Either accept that you are measuring one layer at a time, or invest in a multi-component assessment that is designed to cover all four layers with appropriate depth.
Do not skip the role-specific calibration step. A framework that applies the same proficiency expectations to all roles will produce data that is technically accurate but practically useless. The insight that "the average employee scored 62% on AI skills" tells you almost nothing. The insight that "customer service managers are meeting the expected proficiency level for their role, while finance analysts are 23 percentage points below the expected level for theirs" tells you exactly where to focus.
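The role-calibrated comparison in that example looks like this in miniature. The expected levels and observed averages are invented to mirror the example above; real expected levels would come from your framework:

```python
# Illustrative role-calibrated gap analysis: compare each role's average
# score against the expected proficiency level for that role, rather
# than against a single organisation-wide benchmark. Figures invented.

expected = {"customer service manager": 55, "finance analyst": 75}
observed = {"customer service manager": 58, "finance analyst": 52}

for role, target in expected.items():
    gap = target - observed[role]
    if gap <= 0:
        print(f"{role}: meeting the expected level")
    else:
        print(f"{role}: {gap} percentage points below the expected level")
```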
The EU AI Act Dimension
For organisations operating in the European Union, AI skills mapping is no longer just a best practice — it is increasingly a compliance requirement. Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure that their staff have "sufficient AI literacy" to understand the capabilities and limitations of the AI systems they work with.
What "sufficient AI literacy" means in practice is still being defined by regulators, but the direction of travel is clear: organisations will need to be able to demonstrate that they have assessed their workforce's AI literacy, identified gaps, and taken steps to address them. An AI skills map is the foundational document for this compliance exercise.
For a detailed guide to what the EU AI Act means for HR and L&D teams, see our article The EU AI Act: A Practical HR Guide.
How Long Does an AI Skills Mapping Exercise Take?
The honest answer is: it depends on the scope and the method. A targeted assessment of a single team or department, using a validated assessment tool, can be completed in two to three weeks from kick-off to results. A full-organisation AI skills mapping exercise, covering all roles and all four skill layers, typically takes two to three months when done properly.
The most common cause of delay is not the assessment itself — it is the time spent building the framework and getting stakeholder alignment on scope and objectives. Organisations that invest in this upfront work consistently complete their mapping exercises faster and produce more actionable results than those that rush to the assessment phase.
If you want to move quickly, the most effective shortcut is to use a validated, pre-built AI skills framework rather than building one from scratch. Selectic's AI Readiness Assessment gives you a research-backed starting point that can be configured to your organisation's context in days rather than months.
Conclusion: The Map Is Not the Territory
An AI skills map is a tool, not an end in itself. The organisations that get the most value from AI skills mapping are the ones that treat it as a living document — something that is updated as AI tools evolve, as the workforce changes, and as the business's AI strategy develops — rather than a one-time exercise that produces a report and is then forgotten.
The AI skills landscape is changing faster than any other skills domain in the history of L&D. The skills that matter most today will be different from the skills that matter most in eighteen months. Building the organisational capability to continuously assess, map, and develop AI skills is not just a training project — it is a strategic capability that will determine how effectively your organisation navigates the AI transition.
Start with a clear framework. Measure what matters. Act on the gaps. And build in the measurement infrastructure to know whether your interventions are working. That is the L&D approach to AI skills mapping — and it is the approach that will produce results that last.
Ready to start mapping AI skills in your organisation? Book a demo with the Selectic team and see how our AI Readiness Assessment can give you a complete picture of your workforce's AI capabilities in days, not months.
