The conversation about AI skills has matured considerably since 2023. The early debate — whether AI would replace workers — has given way to a more nuanced and more useful question: which specific skills determine whether a person can work effectively alongside AI, and which skills will become the defining differentiators of professional performance over the next five years?
This catalog attempts to answer that question systematically. It draws on the emerging consensus from workforce research, competency frameworks developed by leading organisations, and the practical experience of L&D teams that have been running AI readiness assessments and skills mapping programmes at scale. It is organised not as a list of tools or platforms, but as a structured taxonomy of the underlying competencies that determine AI performance — the skills that persist regardless of which specific AI tools a person uses, and which will remain relevant as the tools themselves continue to evolve.
The catalog is intended to be useful for three audiences. For HR and L&D leaders, it provides a reference framework for designing AI skills programmes, conducting gap analyses, and communicating about AI readiness in terms that are specific enough to be actionable. For managers and team leads, it provides a vocabulary for discussing AI skill development with their teams and identifying where development investment will have the highest return. For individual professionals, it provides a map of the competency landscape — a way to understand where they stand today and where they need to develop to remain competitive in an AI-augmented world.
Why a Catalog, Not a Course List
Before diving into the taxonomy, it is worth explaining the choice of format. Most resources on AI skills take the form of course recommendations — "take this Coursera course," "complete this LinkedIn Learning path." That approach has its place, but it conflates the skill with the learning intervention designed to develop it. A catalog of AI skills is more fundamental: it describes the competencies themselves, independent of any particular learning pathway.
This matters for several reasons. First, it allows organisations to assess current capability before prescribing development — to understand where the gaps actually are rather than assuming that everyone needs the same training. This is the core logic behind AI readiness assessments: measure first, then act. Second, it allows for calibration by role and context. The AI skills that matter for a software engineer are different from those that matter for a financial analyst, a marketing manager, or a manufacturing supervisor. A catalog that describes competencies at a sufficient level of specificity can be adapted to different role contexts in a way that a generic course recommendation cannot.
Third, and perhaps most importantly, a competency-based catalog provides a stable framework for tracking development over time. Specific AI tools change rapidly — the tool that is central to a role today may be superseded in eighteen months. But the underlying competencies — the ability to critically evaluate AI outputs, to collaborate effectively with AI systems, to understand the ethical implications of AI-assisted decisions — are more durable. Investing in developing these competencies, rather than just training on specific tools, is what produces lasting competitive advantage. This is a theme we explore in depth in our article on the direct link between AI skills and year-end financial results.
The Four Domains of AI Competency
The catalog is organised around four domains, each of which represents a distinct dimension of AI competency. These domains are not sequential — they develop in parallel — but they do have a rough hierarchy: the foundational domain provides the conceptual basis for everything else, while the advanced domains represent the capabilities that differentiate high performers in AI-augmented roles.
Domain 1: AI Foundations — Understanding the Technology
AI Foundations is the domain of conceptual understanding. It encompasses the knowledge and mental models that allow a person to work with AI systems intelligently — to understand what they are doing, why they behave as they do, and what their limitations are. This domain is foundational not because it is the most important, but because without it, the other domains cannot develop properly. A person who does not understand how large language models work cannot reliably evaluate their outputs. A person who does not understand the concept of training data bias cannot identify when an AI system is producing biased results.
1.1 AI Literacy
AI literacy is the entry-level competency in this domain. It encompasses a working understanding of what AI is and is not — the ability to distinguish between narrow AI (systems designed to perform specific tasks) and general AI (a theoretical future capability), to understand the basic mechanics of how machine learning systems learn from data, and to have a conceptual grasp of why AI systems sometimes fail in ways that are surprising or counterintuitive.
At a practical level, AI literacy means being able to read about AI developments in the press or in industry publications and evaluate them critically — to distinguish between genuine capability advances and marketing claims, to understand what a benchmark result does and does not tell you about real-world performance, and to have an informed opinion about the implications of AI developments for one's own field.
AI literacy is the minimum threshold for productive participation in organisational conversations about AI strategy, AI adoption, and AI risk. Employees who lack it are not just less productive with AI tools — they are a liability in those conversations, because they cannot evaluate claims or contribute meaningfully to decisions.
1.2 Understanding AI Outputs and Limitations
This competency goes one level deeper than general literacy. It encompasses the ability to understand, at a practical level, why AI systems produce the outputs they do — and specifically, why they sometimes produce outputs that are wrong, misleading, or harmful.
The key concepts in this competency include: hallucination (the tendency of large language models to generate plausible-sounding but factually incorrect content); distributional shift (the tendency of AI systems to perform poorly on inputs that differ significantly from their training data); adversarial vulnerability (the susceptibility of AI systems to inputs specifically designed to cause them to fail); and emergent behaviour (the tendency of large AI systems to exhibit capabilities and failure modes that were not explicitly programmed and were not anticipated by their developers).
Understanding these phenomena is not an academic exercise. It is a practical prerequisite for using AI tools safely and effectively in professional contexts. A professional who does not understand hallucination will not know to verify AI-generated factual claims. A professional who does not understand distributional shift will not know to be cautious when applying an AI system to a context that differs from its training environment.
1.3 AI Ethics and Responsible Use
This competency encompasses the ability to identify and reason about the ethical dimensions of AI use in professional contexts. It includes understanding the concepts of algorithmic bias and fairness, data privacy and consent, transparency and explainability, accountability for AI-assisted decisions, and the environmental costs of large-scale AI deployment.
At a practical level, this competency means being able to identify situations where AI use raises ethical concerns — where the use of AI-generated content without disclosure might be misleading, where an AI-assisted decision might be discriminatory, where the collection of data to train or fine-tune an AI system might violate privacy expectations — and to navigate those situations appropriately.
This competency is increasingly important in the context of the EU AI Act and other emerging AI regulatory frameworks. As we explored in our article on the EU AI Act and what it means for HR, organisations that deploy AI systems in high-risk contexts have specific obligations around transparency, human oversight, and documentation — and employees in those organisations need the competency to understand and fulfil those obligations.
Domain 2: AI Operations — Using the Technology Effectively
AI Operations is the domain of practical capability. It encompasses the skills that determine whether a person can actually use AI tools to accomplish real work tasks — not just in a generic sense, but in the specific context of their role and the specific tools available to them. This is the domain where most AI skills training currently focuses, and for good reason. But operational capability without foundational understanding is fragile — it produces people who can use AI tools when they work as expected, but who are lost when they encounter edge cases, failures, or novel situations.
2.1 Prompt Engineering and AI Communication
Prompt engineering is the skill of communicating with AI systems in ways that produce useful outputs. At its most basic level, this means understanding how to formulate requests clearly and specifically — providing sufficient context, specifying the desired format and length of the output, and iterating on prompts when the initial output is not what was needed.
At a more advanced level, prompt engineering encompasses techniques such as chain-of-thought prompting (asking the AI to reason step by step through a problem before producing an answer), few-shot prompting (providing examples of the desired output format), role prompting (asking the AI to adopt a specific perspective or expertise), and structured prompting (using templates or frameworks to ensure consistent output quality).
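To make these techniques concrete, here is a minimal sketch of how role prompting, few-shot examples, and a chain-of-thought instruction can be combined into a single template. The `build_prompt` helper is hypothetical and provider-independent: it only assembles a prompt string, which you would then send to whichever generative AI tool you use.

```python
# Illustrative only: a hypothetical helper combining role, few-shot,
# and chain-of-thought prompting into one template string.

def build_prompt(role: str, examples: list[tuple[str, str]], task: str) -> str:
    """Compose a prompt using role, few-shot, and chain-of-thought techniques."""
    lines = [f"You are {role}.", ""]                      # role prompting
    for question, answer in examples:                     # few-shot examples
        lines += [f"Input: {question}", f"Output: {answer}", ""]
    lines += [
        f"Input: {task}",
        # chain-of-thought instruction
        "Think step by step, then give the final answer as 'Output: ...'",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    role="a financial analyst who writes concise summaries",
    examples=[("Q3 revenue rose 12%", "Revenue grew strongly in Q3 (+12%).")],
    task="Operating costs fell 5% while headcount stayed flat",
)
```

The value of a template like this is consistency: the same structure can be reused across a team, which is what "structured prompting" means in practice.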
The importance of prompt engineering varies significantly by role and use case. For roles that involve heavy use of generative AI tools — content creation, code generation, data analysis, research synthesis — it is a core operational competency. For roles where AI use is more peripheral, it is a useful but not critical skill. This is one reason why skills mapping is so important before designing AI skills programmes: the relative priority of different competencies varies enormously by role.
2.2 AI-Augmented Workflow Design
This competency encompasses the ability to redesign work processes to incorporate AI tools effectively — to identify which tasks in a workflow are good candidates for AI augmentation, to design the human-AI handoffs that determine where AI output ends and human judgment begins, and to build the verification and quality control steps that ensure AI-augmented work meets the required standard.
This is a higher-order operational competency than prompt engineering, because it requires not just the ability to use AI tools but the ability to think systematically about how work should be organised in an AI-augmented environment. It is also a competency that has significant implications for team and organisational performance — a team whose members individually know how to use AI tools but have not collectively redesigned their workflows to incorporate AI effectively will capture only a fraction of the potential productivity gains.
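The handoff logic described above can be sketched in a few lines. This is an illustrative toy, not a real pipeline: `ai_draft` and `needs_human_review` are hypothetical stand-ins for an actual model call and an actual quality check, but the shape — an AI step followed by an explicit routing decision between auto-approval and human review — is the core of AI-augmented workflow design.

```python
# Illustrative sketch of a human-AI handoff: an AI drafting step followed
# by a routing rule that decides where human judgment takes over.

from dataclasses import dataclass

@dataclass
class WorkItem:
    text: str
    status: str = "new"

def ai_draft(item: WorkItem) -> str:
    # Hypothetical stand-in for a real model call.
    return f"[AI draft] summary of: {item.text}"

def needs_human_review(draft: str, high_stakes: bool) -> bool:
    # Route all high-stakes items, plus anything flagged, to a person.
    return high_stakes or "UNCERTAIN" in draft

def process(item: WorkItem, high_stakes: bool) -> WorkItem:
    draft = ai_draft(item)
    if needs_human_review(draft, high_stakes):
        item.status = "awaiting human review"   # handoff: human judgment takes over
    else:
        item.status = "auto-approved"           # low-risk path, AI output accepted
    item.text = draft
    return item

print(process(WorkItem("quarterly board report"), high_stakes=True).status)
# prints "awaiting human review"
```

The design point is that the routing rule is explicit and reviewable, rather than left to each individual's ad-hoc judgment — which is exactly what distinguishes a redesigned workflow from individual tool use.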
2.3 AI Tool Selection and Evaluation
This competency encompasses the ability to evaluate AI tools against specific use case requirements — to understand the trade-offs between different tools, to assess the quality and reliability of AI outputs in a given context, and to make informed decisions about which tools to use for which tasks.
This competency is increasingly important as the AI tool landscape becomes more complex. The number of AI tools available to knowledge workers has exploded in the last two years, and the quality and reliability of those tools varies enormously. Professionals who can evaluate tools critically — who understand what benchmarks do and do not tell you about real-world performance, who can design practical tests of tool capability for their specific use cases, and who can make cost-benefit assessments that account for the time cost of learning and integrating new tools — are significantly more effective at capturing value from AI than those who simply adopt whatever tool is currently most hyped.
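A practical capability test of the kind described above can be very simple. The sketch below is illustrative: `tool_a` and `tool_b` are hypothetical stand-ins for real tool integrations, and the test cases and checks are toy examples, but the pattern — run every candidate tool over the same role-specific cases and compare pass rates — is the substance of the competency.

```python
# Illustrative sketch of a practical tool evaluation: run each candidate
# tool over the same role-specific test cases and compare pass rates.
# tool_a and tool_b are hypothetical stand-ins for real integrations.

def tool_a(prompt: str) -> str:
    return prompt.upper()    # stand-in behaviour

def tool_b(prompt: str) -> str:
    return prompt.title()    # stand-in behaviour

test_cases = [
    # (input, expectation check) pairs drawn from the actual use case
    ("summarise q3 results", lambda out: "Q3" in out),
    ("draft client email", lambda out: "EMAIL" in out),
]

def pass_rate(tool) -> float:
    passed = sum(1 for prompt, check in test_cases if check(tool(prompt)))
    return passed / len(test_cases)

scores = {"tool_a": pass_rate(tool_a), "tool_b": pass_rate(tool_b)}
```

Even a crude harness like this is more informative than published benchmarks, because the test cases come from the evaluator's own work rather than from a generic leaderboard.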
2.4 Data Literacy for AI
This competency encompasses the foundational data skills that are prerequisites for effective AI use in data-intensive roles. It includes the ability to understand and evaluate the data that AI systems are trained on or applied to, to identify data quality issues that might affect AI performance, to interpret statistical outputs and uncertainty estimates, and to understand the basics of how data is collected, stored, and processed in organisational systems.
Data literacy for AI is distinct from general data analysis skill — it is specifically about the data-related knowledge that is needed to use and evaluate AI systems effectively, rather than the ability to conduct original data analysis. It is a prerequisite for competencies like AI output evaluation and AI-augmented workflow design in data-intensive contexts.
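One concrete instance of this competency is checking whether live inputs still resemble the data an AI system was profiled on. The sketch below is illustrative, with an arbitrary threshold: it flags possible distributional shift when the mean of incoming values sits far outside the training-data distribution.

```python
# Illustrative data-quality check: flag possible distributional shift when
# live inputs drift far from the training-data profile. The z_limit
# threshold is an arbitrary assumption for the sketch.

def profile(values: list[float]) -> tuple[float, float]:
    mean = sum(values) / len(values)
    variance = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, variance ** 0.5

def shift_flag(train: list[float], live: list[float], z_limit: float = 2.0) -> bool:
    train_mean, train_std = profile(train)
    live_mean, _ = profile(live)
    # Flag if the live mean sits well outside the training distribution.
    return abs(live_mean - train_mean) > z_limit * train_std

print(shift_flag([10, 12, 11, 13, 12], [25, 27, 26]))  # prints True: clearly shifted
```

The point is not the statistics — any analyst could do better — but that a professional with this competency knows such a check is needed before trusting an AI system on unfamiliar data.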
Domain 3: AI Judgment — Evaluating and Overseeing the Technology
AI Judgment is the domain of critical evaluation and human oversight. It encompasses the skills that determine whether a person can reliably identify when AI outputs are wrong, misleading, or inappropriate — and make good decisions about when to use, modify, or reject AI-generated content. This domain is arguably the most important for organisations operating in regulated industries, customer-facing environments, or any context where errors have significant consequences.
3.1 AI Output Verification and Fact-Checking
This competency encompasses the ability to systematically verify AI-generated content for factual accuracy — to identify claims that require verification, to use appropriate sources and methods to verify them, and to calibrate the level of verification effort to the risk profile of the use case.
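The calibration idea can be made concrete with a small sketch. The risk tiers, fractions, and sourcing rules below are illustrative assumptions, not a standard; the point is that verification effort is set by the risk profile of the use case rather than applied uniformly.

```python
# Illustrative sketch of risk-calibrated verification: higher-risk use
# cases get more claims checked and stricter sourcing requirements.
# The tiers and thresholds are assumptions for the example.

RISK_TIERS = {
    "internal brainstorm": {"check_fraction": 0.1, "require_primary_source": False},
    "client deliverable":  {"check_fraction": 0.5, "require_primary_source": True},
    "regulatory filing":   {"check_fraction": 1.0, "require_primary_source": True},
}

def verification_plan(claims: list[str], use_case: str) -> dict:
    tier = RISK_TIERS[use_case]
    n_to_check = max(1, round(len(claims) * tier["check_fraction"]))
    return {
        # In practice, claims would be prioritised by impact, not list order.
        "claims_to_verify": claims[:n_to_check],
        "require_primary_source": tier["require_primary_source"],
    }

plan = verification_plan(["claim A", "claim B", "claim C", "claim D"], "regulatory filing")
```

For a regulatory filing, every claim is checked against a primary source; for an internal brainstorm, a light spot-check suffices. That proportionality is the skill.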
The challenge of AI output verification is that it requires the professional to have sufficient domain knowledge to recognise when an AI output might be wrong — which means that this competency is highly domain-specific. A financial analyst verifying AI-generated financial analysis needs different knowledge and methods than a lawyer verifying AI-generated legal research. This is one reason why generic AI skills training has limited effectiveness for this competency: it can teach the general principle of verification, but the specific skills needed to verify outputs in a given domain can only be developed through domain-specific practice.
3.2 Bias Recognition and Fairness Assessment
This competency encompasses the ability to identify bias in AI outputs — to recognise when an AI system is producing outputs that systematically favour or disfavour particular groups, perspectives, or outcomes, and to understand the sources of that bias and its implications for the use case.
Bias in AI systems can arise from many sources: biased training data, biased problem formulation, biased evaluation metrics, or biased deployment contexts. Recognising bias requires both technical understanding (knowing how bias can enter AI systems) and domain knowledge (knowing what fair and unbiased outputs should look like in a given context). This competency is particularly important in HR contexts — where AI is increasingly used in recruiting, performance evaluation, and compensation decisions — and in any context where AI outputs are used to make decisions that affect people. For more on this, see our article on recruiting tests best practices.
3.3 Risk Assessment for AI-Assisted Decisions
This competency encompasses the ability to assess the risk profile of AI-assisted decisions — to understand the potential consequences of AI errors in a given context, to identify the appropriate level of human oversight for different types of AI-assisted decisions, and to design decision processes that maintain appropriate human accountability even when AI is doing much of the analytical work.
This competency is closely related to the AI ethics competency in Domain 1, but it is more operational — it is about the practical skill of risk assessment in specific decision contexts, rather than the conceptual understanding of AI ethics in general. It is a competency that is particularly important for managers and senior professionals who are responsible for high-stakes decisions and who need to understand how the introduction of AI into their decision processes changes the risk profile of those decisions.
3.4 Regulatory Compliance for AI Use
This competency encompasses the ability to understand and apply the regulatory requirements that govern AI use in a given industry and jurisdiction. As the regulatory landscape for AI becomes more complex — with the EU AI Act, sector-specific regulations in finance, healthcare, and other industries, and evolving data protection requirements — this competency is becoming increasingly important for professionals in regulated industries.
At a practical level, this means understanding which AI use cases in one's role or organisation are subject to regulatory requirements, what those requirements are, and how to ensure that AI use in those contexts is compliant. It also means staying current with regulatory developments — which requires ongoing learning rather than a one-time training intervention.
Domain 4: Human-AI Collaboration — Working Alongside the Technology
Human-AI Collaboration is the domain of interpersonal and behavioural skills that determine how effectively a person works alongside AI systems — and alongside other people in AI-augmented work environments. This domain is often overlooked in AI skills frameworks that focus primarily on technical competencies, but it is increasingly recognised as a critical determinant of AI performance at both the individual and team level.
4.1 Adaptive Learning and AI Tool Adoption
This competency encompasses the ability to learn new AI tools and capabilities quickly and effectively — to approach new AI tools with curiosity rather than anxiety, to develop practical proficiency through experimentation, and to continuously update one's AI skill set as the technology evolves.
This is fundamentally learning agility applied to the specific context of AI tool adoption. Learning agility has long been recognised as a predictor of professional performance in rapidly changing environments, but it has specific characteristics in the AI context: the pace of change in AI tools is faster than in most other technology domains, the gap between early adopters and late adopters is larger, and the consequences of falling behind are more severe.
4.2 AI-Augmented Communication and Collaboration
This competency encompasses the ability to communicate effectively about AI-assisted work — to be transparent with colleagues, clients, and stakeholders about when and how AI has been used, to manage the expectations of others about what AI can and cannot do, and to collaborate effectively with colleagues who have different levels of AI fluency.
This competency is increasingly important as AI use becomes more widespread and the norms around AI disclosure and transparency are still being established. Professionals who can navigate these norms effectively — who know when to disclose AI use, how to frame AI-assisted outputs appropriately, and how to build trust with colleagues and clients in an AI-augmented work environment — have a significant advantage over those who cannot.
4.3 Human Oversight and Accountability Maintenance
This competency encompasses the ability to maintain appropriate human accountability for AI-assisted decisions and outputs — to resist the tendency to over-delegate to AI systems, to maintain the critical distance needed to evaluate AI outputs objectively, and to take responsibility for the quality and consequences of AI-assisted work.
This is a competency that is easy to describe but difficult to develop, because it runs counter to some natural human tendencies. Research on automation bias — the tendency to over-rely on automated systems even when they are producing incorrect outputs — suggests that maintaining appropriate human oversight of AI systems requires active effort and deliberate practice. Organisations that want to develop this competency in their workforce need to create the conditions for it: clear accountability structures, explicit norms around AI use, and regular opportunities to practise critical evaluation of AI outputs.
4.4 Cross-Functional AI Collaboration
This competency encompasses the ability to work effectively with colleagues across different functions and disciplines on AI-related projects — to communicate about AI in ways that are accessible to non-technical audiences, to understand the perspectives and constraints of different stakeholders in AI initiatives, and to contribute productively to cross-functional teams working on AI adoption, AI governance, or AI-enabled product development.
This competency is particularly important for professionals in roles that sit at the intersection of technical and non-technical domains — product managers, data analysts, HR business partners, and others who need to translate between the language of AI technology and the language of business and people. It is also increasingly important for senior leaders who need to make strategic decisions about AI investment and adoption without necessarily having deep technical expertise themselves.
Applying the Catalog: A Framework for L&D Leaders
The catalog above describes 15 distinct AI competencies across four domains. For L&D leaders designing AI skills programmes, the challenge is not to develop all 15 competencies in all employees — that would be neither feasible nor necessary. The challenge is to identify which competencies are most important for which roles, to assess current capability against those competencies, and to design targeted development interventions that address the most critical gaps.
This is the core logic of the AI readiness assessment approach that Selectic has developed: start with a systematic assessment of current capability across the four domains, calibrated to the specific role context of each employee. Use the assessment results to build a skills map that shows where the gaps are — at the individual level, the team level, and the organisational level. Then design development interventions that target the specific competencies where the gaps are largest and the business impact of closing them is highest.
As we explored in our article on how to map AI skills from an L&D perspective, the mapping process itself is valuable — it forces a structured conversation about which AI competencies actually matter for different roles, and it creates a shared vocabulary for discussing AI skill development across the organisation.
The final step — and the one that is most often neglected — is measurement. Designing a development programme and delivering it is not enough. The question that matters for the business is whether the programme actually improved the AI competencies it was designed to develop — and whether those improvements translated into better performance outcomes. This requires the kind of rigorous ROI measurement that goes beyond completion rates and satisfaction scores to measure actual competency change. As we argued in our article on moving beyond the smile sheet, this is the measurement standard that L&D functions need to adopt if they want to be taken seriously as strategic partners rather than cost centres.
The AI Skills That Will Matter Most in 2027 and Beyond
Looking ahead, the AI competency landscape will continue to evolve. Several trends are worth noting for L&D leaders planning programmes that need to remain relevant over a multi-year horizon.
The first trend is the increasing importance of judgment and oversight competencies relative to operational competencies. As AI tools become more capable and more widely adopted, the ability to use them at a basic level will become a commodity skill — a baseline expectation rather than a differentiator. The competencies that will differentiate high performers will increasingly be the judgment and oversight competencies in Domain 3: the ability to evaluate AI outputs critically, to identify bias and error, and to maintain appropriate human accountability for AI-assisted decisions.
The second trend is the increasing importance of domain-specific AI competency relative to generic AI literacy. As AI tools become more deeply embedded in specific professional domains — legal AI, financial AI, medical AI, engineering AI — the most valuable AI skills will be those that combine deep domain knowledge with AI fluency. Generic AI literacy will remain important as a foundation, but the professionals who will command the highest value in the labour market will be those who can apply AI effectively in their specific domain, which requires both the generic competencies described in this catalog and deep domain expertise.
The third trend is the increasing importance of human-AI collaboration competencies as AI systems become more agentic — capable of taking actions autonomously, not just generating outputs for human review. As AI agents become more prevalent in the workplace, the competency of human oversight and accountability maintenance will become more critical, and more difficult to exercise. The professionals who develop this competency now — who practise maintaining critical distance from AI outputs and taking responsibility for AI-assisted decisions — will be better positioned to exercise it effectively as AI systems become more capable and more autonomous.
For organisations that want to stay ahead of these trends, the implication is clear: invest in building the full range of AI competencies described in this catalog, not just the operational skills that are most immediately visible. The AI readiness assessment is the right place to start — it provides the baseline measurement that makes it possible to track progress and demonstrate the return on that investment over time. For more on how to structure that investment and measure its impact, see our articles on AI skills and financial results and the top 5 AI readiness assessments in 2026.
Related reading:
- How to map AI skills from an L&D perspective
- The direct link between AI skills and year-end financial results
- The top 5 AI readiness skills assessments in 2026
- The top 10 AI learning platforms in 2026
- Moving beyond the 'Smile Sheet': why assessments are the ultimate measure of training effectiveness
