"All this he saw, for one moment breathless and intense, vivid on the morning sky; and still, as he looked, he lived; and still, as he lived, he wondered."

The Jobs of AI: Italy’s effort to create a framework (and its impact on learning)

On April 30th, the new UNI 11621-8:2026 was presented, a norm carrying an ambitious title somewhere along the lines of “Professional role profiles relating to Artificial Intelligence”. The norm is part 8 of a series on role profiles for ICT in the context of non-regulated professional activities, and it comes in the wake of European efforts to regulate the landscape around Artificial Intelligence. Its structure follows the European e-Competence Framework.

The timing of this norm is crucial, of course, and it’s been subject to debate, with some professionals arguing that drafting a norm on these topics is like trying to take a picture of a flowing river. They aren’t wrong. Still. The regulation comes at a time when the labour market demands AI-specific professionals, but still lacks shared frameworks for recognising and evaluating them, let alone training them. In this vacuum, the technical-voluntary regulatory instrument presents itself as a pragmatic response: it does not regulate the profession in the legal sense of the term, but defines its boundaries to guide the qualification of individuals, the organisations that hire them, and — at least implicitly — the training system that prepares them.

For those working in training, let this be the starting point to understand what the norm does, what it doesn’t do, and what questions it leaves open.


1. The methodological approach: strengths and questionable choices

1.1. The reference framework: EQF, e-CF and third generation profiles

The standard explicitly states that it follows the principles of the European Qualifications Framework (EQF) and is based on UNI EN 16234-1, the European e-Competence Framework. This is a welcome choice for consistency: adopting shared descriptors (knowledge, skills, autonomy, and responsibility) means making the profiles essentially portable across different training contexts — formal, informal, and non-formal — and comparable at the European level.

If I understand this correctly, the defined profiles are classified as third generation, distinguishing them from the second-generation European profiles of UNI 11621-2, such as the well-known Chief Information Officer. Appendix C clarifies the correlation between the two generations, but does so with a significant caveat: the correlation, as the norm states, “is indicative and therefore does not identify the skills and knowledge of the specific third-generation profiles with those of the respective second-generation profiles”. The correlation, as we’ll see later, reveals something interesting from a critical perspective: the second-generation Developer category alone covers six of the twelve third-generation profiles. This speaks volumes about the differentiation that has occurred in the AI sector compared to the previous generation: what was once a generic developer has fragmented into highly specialised roles. While this is a useful argument for those who want to justify the need for differentiated training paths, I have already spoken several times about fragmentation as a form of labour control, always directing you to read this book.

In practical terms, the AI profiles of this standard are new entities, not simple specialisations of pre-existing ones, even if they are associated with figures such as Developer, Data Scientist or ICT Security Specialist. For my fellow trainers who specialise in ICT, this has an immediate practical consequence: courses built on the second-generation profiles cannot be automatically updated to the new standard, and detailed mapping work is required.

1.2. The table structure: a rich but very heavy model

Each profile is presented through a table divided into: brief definition, mission, expected results (divided into Final Responsible, Executor, Contributor), main tasks, skills assigned by the e-CF with relative level, abilities/knowledge (with S and K codes), and area of application of the KPIs.

We saw this when we covered part 7 of the Italian norm, on BIM professionals: while this architecture has the obvious merit of granularity, it can become very heavy. On the one hand, each profile is described at a level of detail unusual for a technical standard, even providing the formulas for calculating performance indicators in a way that scares the hell out of me. Those involved in instructional design can find here a very rich basis for building curricula, assessment tests, and competency rubrics. I find, however, that the limitation is at least as large as the merit: the model’s complexity risks making it accessible only to certification hunters (there’s a specific sector of hell for them), and not to the education system as a whole. A computer science degree programme manager or an ITS Academy coordinator is unlikely to derive immediate operational guidance from directly reading the prospectuses without interpretive support. The regulation does not provide application guides for training, and this gap is significant. Hence, my current effort to clarify what’s inside the norm.

1.3. The definition of “AI professional”: a clear but not tension-free boundary choice

The norm defines an AI professional as someone who “exercises or participates in an economic activity aimed at the design, development, training, integration, evaluation, or supply of systems, services, or works based on artificial intelligence techniques.” An explicit note excludes the power user, understood as the mere expert user of existing systems, who doesn’t fall within the scope of the norm.

This is an understandable choice from a regulatory perspective, but it raises non-trivial training issues: in the real market, the distinction between those who create and those who use AI is increasingly blurred for reasons that are inherently connected to the nature of the matter at hand. Figures such as a business consultant who implements third-party AI solutions, or an industry professional who integrates generative AI tools into operational workflows, occupy a grey area that the norm tends to exclude but that cannot be overlooked in practical terms. Widespread AI literacy training, which does not aim at creating labelled professionals but at fostering a conscious and critical use, remains outside the scope of this regulation, and this must be taken into account.

But who are these guys anyway?

2. The twelve professional figures: a critical overview

Here’s the list:

  1. Chief AI Officer (CAIO, not to be confused with TIZIO);
  2. AI Consultant (yeah, seriously);
  3. AI Product Manager;
  4. AI Prompt Engineer (again, seriously);
  5. AI Algorithm Engineer;
  6. AI Deep Learning Engineer;
  7. AI Data Engineer;
  8. AI Data Scientist;
  9. AI Security Specialist;
  10. AI Machine Learning Engineer;
  11. AI Natural Language Processing Engineer (who’s going to need larger business cards);
  12. AI Research Scientist.

It might be ’cause I just came back from Billund, but I picture them like this, and the creepy song “Everything is Awesome” plays in a loop inside my head.

2.1. The logic of classification: reading axes

The twelve figures make more sense if you read them along three axes:

  • Strategic-operational axis. The first three roles (Chief AI Officer, AI Consultant, AI Product Manager) have a predominantly strategic, governance, and business-interface orientation. The Chief AI Officer is the top executive, of course: they manage the organisation’s AI strategy, chair the ethics committees, and report to the Board of Directors. The AI Consultant supports organisations in adopting AI, a role we might consider akin to the Change Manager. The AI Product Manager governs the lifecycle of AI-based products. All three require transversal skills in addition to the technical ones, and this is reflected in the e-competences assigned: governance, risk management, innovation management.
  • Technical-specialist axis. The core group (figures 4–11) covers technical specialisations: from the Prompt Engineer, a new and distinctive role in the era of Large Language Models, to algorithm, deep learning, machine learning, and NLP engineers, through data-oriented roles (Data Engineers and Data Scientists) and security (AI Security Specialists). This group is the operational heart of the AI sector.
  • Research axis. Lonely figure 12, the AI Research Scientist, has an autonomous position oriented towards the production of scientific knowledge, academic publication, and collaboration with the international community. It is the figure with the highest e-CF level (A.7 and A.9 at level 5) and the furthest from the direct production cycle, though my opinion has always been that those in the strategic axis have a duty to contribute to research. But hey, that’s just me.

2.2. Choices deserving critical attention

2.2.1. The AI Prompt Engineer as an independent role

The creation of a dedicated profile for prompt engineering is a risky choice, and I think it came after some pressure from the market. It recognises a specialisation that has emerged only in recent years, with a set of specific skills (prompt-chain design, guardrail management, PromptOps, computational fairness assessment) that no pre-existing standard has ever codified. Plus, everybody wants to be a Prompt Engineer.

Yeah, you guessed the soundtrack.

It’s risky because it’s a rapidly evolving field, and what is currently a specialisation could be partially absorbed by automated tools or other roles within a few years. In other words, it could be that prompt engineering is a currently needed skill because the tools suck, but their further development will allow people to use them as intended, through natural language. The norm itself, in describing continuous professional development as necessary for all profiles, implicitly acknowledges this volatility.

2.2.2. The distinction between AI Data Engineer and AI Data Scientist

The standard keeps the two profiles separate, in line with industry practice: the Data Engineer is responsible for data infrastructure (pipelines, ETL, dataset quality), while the Data Scientist is responsible for analysis and modelling. However, they both share many core skills, and the standard itself highlights that they collaborate closely.

For training, this raises a relevant question: does it make sense to create completely separate curricula, or is a common core with subsequent specialisations more appropriate?
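The question can be made tangible by treating each profile’s competence list as a set and looking at the intersection. The codes below are invented placeholders, not the actual S/K lists of the two profiles — a sketch of the reasoning, not of the norm:

```python
# Hypothetical competence-code sets for the two profiles
# (placeholder codes, NOT the actual S/K lists from UNI 11621-8).
data_engineer  = {"S001", "S004", "S007", "K002", "K005", "K009"}
data_scientist = {"S001", "S004", "S012", "K002", "K011", "K014"}

common_core   = data_engineer & data_scientist  # candidate shared module
de_specialism = data_engineer - common_core     # pipeline/ETL track
ds_specialism = data_scientist - common_core    # modelling/analysis track

print(sorted(common_core))  # ['K002', 'S001', 'S004']
```

If the intersection turned out to be large on the real lists, that would be an argument for a common core with branching specialisations rather than two parallel curricula.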

2.2.3. The AI Research Scientist in a professional standards document

The role of the scientific researcher is traditionally governed by academic evaluation systems (publications, citations, impact factor, being somebody’s nephew, that kind of thing) that have their own rationale, different from that of professional certification. Including it in a standard on ICT profiles is an original choice to say the least, which introduces a tension: the KPIs proposed for this figure (number of publications in A-level journals, increase in the h-index, participation in international keynotes) belong to an evaluation ecosystem — the academic bibliometric one — which is distant and sometimes in conflict with the logic of the professional environment.

2.2.4. Missing figures

A critical commentary cannot ignore what isn’t there. The standard does not include profiles explicitly oriented towards AI ethics as an autonomous specialisation (ethical issues are transversal to all profiles but do not constitute a dedicated role), AI communication and journalism, or specialised AI training (AI trainers or AI educators). See for instance roles such as AI Ethics Lead, Responsible AI Officer, or Trust & Safety Specialist in large tech companies (Google, Microsoft, Meta). These figures exist in the market and in academic curricula, and their absence suggests that the classification, although broad, does not claim to be exhaustive, as the norm itself recognises in point 4.1.


3. The common thread: trustworthy AI, regulatory compliance, and sustainability

A horizontal reading of the twelve profiles reveals three themes that recur systematically, almost like a common genetic code.

The first is compliance with the AI Act (EU Regulation 2024/1689) and the UNI CEI ISO/IEC 42001 management standard. Each profile includes compliance with current legislation among its main tasks, and the KPIs of almost all roles include regulatory compliance indicators. This attention reflects the historical moment: the regulation was created when the AI Act was already in force, and it couldn’t have been otherwise. For trainers, this means that any preparation programme for these roles must include a regulatory literacy component that, until a few years ago, was absent from technical curricula. Nobody wants to study technical norms. They’re going to have to.

Possibly with just a spoonful of sugar…

The second cross-cutting theme is computational and environmental sustainability. Each profile includes energy-efficiency indicators (kWh per inference, CO₂eq reduction) and refers to GreenOps practices. This is a significant and relatively new addition to ICT professional standards: the ecological dimension is incorporated into the technical professional profile not as an option, but as a requirement.
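To make the idea concrete, here is a minimal sketch of how a GreenOps-style indicator of this kind might be computed. The grid-intensity factor and the workload numbers are my own illustrative assumptions, not values from the norm:

```python
# Sketch of two sustainability indicators of the kind the norm describes.
# The grid carbon-intensity factor (kg CO2eq per kWh) is an illustrative
# assumption; real reporting would use the local grid's published factor.

def kwh_per_inference(total_kwh: float, n_inferences: int) -> float:
    """Average energy cost of a single inference."""
    return total_kwh / n_inferences

def co2eq_kg(kwh: float, grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Estimate emissions from energy use (kg CO2-equivalent)."""
    return kwh * grid_intensity_kg_per_kwh

# Example: a model serving 2,000,000 requests on 500 kWh.
energy_per_req = kwh_per_inference(500.0, 2_000_000)  # 0.00025 kWh
emissions = co2eq_kg(500.0)                           # 200.0 kg CO2eq
```

A “CO₂eq reduction” KPI would then compare two such estimates across reporting periods; the arithmetic is trivial, but putting it in every profile is what makes it auditable.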

The third theme is explainability and transparency (XAI). Every technical profile requires explainable AI skills and the production of auditable documentation (Model Cards, registers, reports). Regulatory pressure is evident here too, but recognising explainability as a distinct professional skill — and not simply a system characteristic — is a conceptually relevant step. And I love it.

Let me give an example of how explainability and transparency are relevant to the figure of the Chief AI Officer, just to be clear. Among their knowledge requirements, K011 — “Communication to stakeholders, reporting and explainability” — goes as far as to require them to “prepare standardised templates for explanations, with examples and application guidelines”, explicitly mentioning interactive dashboards and narrative reports as tools. These explanations are among the expected results for which the Officer is ultimately responsible, through the enterprise AI Registry/Model Cards and the Board of Directors report (see R05). In other words, they must maintain a corporate registry of AI systems and produce standardised documentation for each model, reporting to the Board of Directors.

Among their main tasks, CP03 requires “issuing policies and procedures and ensuring non-technical explanations for AI decisions”, so it is not enough for the system to be technically explainable: the Officer must ensure that the explanations are understandable even to those without technical skills. Moreover, CP10 calls for “publishing the annual report on the ethical/social impact of AI and the declaration of compliance with current legislation before deployment”, aligning the report with frameworks such as ALTAI and IEEE 7000.

On the skills side, S009 — “Transparency, Documentation & AI Registry” — is particularly detailed: it requires the ability to “generate non-technical explanations of AI systems’ decision-outputs for users and regulators”, to define differentiated disclosure levels for different audiences, and to “selectively publish the AI Registry for high-risk systems.” Connected to that, S019 on post-market monitoring requires the detection of drifts and performance degradation of the models in operation, integrating the data into composite governance indices.

Finally, among the KPIs, KPI11 measures “human-verified oversight (HITL)”: the percentage of critical decisions with human oversight verified by third parties out of all critical decisions. This indicator translates transparency into a concretely measurable and auditable metric.
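Translated into code, the metric is just a ratio. The decision-record structure below is my own assumption — the norm defines the indicator, not an implementation:

```python
# Minimal sketch of the KPI11 idea: share of critical decisions that had
# third-party-verified human oversight. The data model (a list of decision
# records) is an illustrative assumption, not part of the norm.

def hitl_oversight_rate(decisions: list) -> float:
    """Percentage of critical decisions with verified human oversight."""
    critical = [d for d in decisions if d["critical"]]
    if not critical:
        return 0.0
    verified = sum(1 for d in critical if d["oversight_verified"])
    return 100.0 * verified / len(critical)

decisions = [
    {"critical": True,  "oversight_verified": True},
    {"critical": True,  "oversight_verified": False},
    {"critical": True,  "oversight_verified": True},
    {"critical": False, "oversight_verified": False},  # ignored: not critical
]
rate = hitl_oversight_rate(decisions)  # two of three critical → ~66.7%
```

The interesting part is not the division but the denominator: someone has to decide, and document, what counts as a “critical decision” before the number means anything.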


4. Implications for training: some considerations

The standard is not a teaching document and does not claim to be one. However, those designing AI training courses cannot ignore it, for at least three reasons.

The first is the Number of the Beast: certifications. Law 4/2013 — to which this provision is explicitly linked — allows professional associations to certify their members based on technical standards, and they will. Of course they will. We know the drill from there.

The second reason concerns the granularity of the profiles as a design tool. The “Skills/Knowledge” sections of each table — with the S and K codes — offer detailed lists of competencies that can be used as a starting point for building learning objectives, assessment tests, and rubrics. This isn’t an immediate process, but the raw material is available.
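A minimal sketch of what that raw material could become in an instructional designer’s hands. The structure is mine; apart from S009 and K011, which are quoted earlier in this post, the content is illustrative:

```python
# Sketch: turning a profile's S/K codes into the skeleton of a syllabus.
# S009 and K011 are quoted from the Chief AI Officer discussion above;
# the learning objectives are illustrative, derived by the designer.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LearningUnit:
    code: str                # S/K code from the profile table
    description: str         # competence as stated in the norm
    objectives: List[str] = field(default_factory=list)  # designer's work

syllabus = [
    LearningUnit(
        code="S009",
        description="Transparency, Documentation & AI Registry",
        objectives=["Produce a Model Card for a given system",
                    "Define disclosure levels for distinct audiences"],
    ),
    LearningUnit(
        code="K011",
        description="Communication to stakeholders, reporting and explainability",
        objectives=["Draft a standardised explanation template"],
    ),
]

for unit in syllabus:
    print(unit.code, "->", len(unit.objectives), "objectives")
```

The point is the traceability: every assessment item can point back to a code in the standard, which is exactly what a certification scheme under Law 4/2013 will ask for.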

The third reason is more critical: the standard captures a historic moment, and the speed of change in AI is such that some of the skills codified today could be obsolete or redefined within a few years. The standard itself requires periodic revisions, but technical standardisation is inevitably slower than the technology market. Those who train AI professionals must therefore use the standard as a guiding reference, not as a sacred text, maintaining the ability to continuously update content and methodologies.

The only problem is that if you’re too fast, you risk becoming your own villain.

5. The Conclusions So Far

UNI 11621-8:2026 is an ambitious document. It proposes a systematic classification of twelve AI professional roles with a level of detail unprecedented in Italian ICT sector regulation. Its link to the EQF, the e-CF, and the AI Act ensures its consistency with European reference frameworks. The choice to integrate issues such as ethics, sustainability, and regulatory compliance across the board reflects a maturity in the sector that was not present in previous regulatory generations.

However, there remain issues that a single regulation cannot resolve. The gap between the descriptive granularity of the profiles and their translatability into concrete professional paths is real and requires mediation. The rapid pace of technological evolution challenges the stability of any classification. The exclusion of expert users and hybrid roles leaves out a significant portion of real-world work with AI.

For teachers and trainers, this standard is a valuable tool provided it is used with critical awareness: as a map, not as a territory.

Let’s start exploring, then.