Earlier this week I wrote about KPIs and how we’re using them wrong, but I fear I left something out: how KPIs are used in education. Nowhere is the distortion of goals by poorly chosen KPIs more visible than in the classroom. And no one can teach us how to break the system better than students.
Gaming the Metrics: how does that work?
In teaching BIM, AI, and other digital tools, I often see students approach assignments not with the intent to solve a problem or test a strategy, but to optimise for the assessment.
It’s okay. I’m not blaming you. It’s a rational response.
If you tell someone they’ll be evaluated on technical complexity, they will prioritise that—even if it sabotages usability, collaboration, or conceptual clarity. When a grading rubric emphasises “parametric richness,” students pile on constraints. If the focus is on visual output, they produce glossy renderings with no functional backbone. If the benchmark is a complete dataset, they’ll cram the model full of placeholder properties—empty shells of metadata that look impressive in a spreadsheet but offer no actual insight. I could go on for hours.
This isn’t laziness. It’s adaptation. And this is the classic problem of metric gaming.
In a digital design context, it leads to a surface-level performance of understanding rather than actual competence. Students quickly learn that complexity is easier to simulate than quality, especially when the evaluation system doesn’t account for context.
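To make the dataset example concrete, here is a minimal sketch, in Python, of how a “completeness” KPI gets gamed. Everything in it is hypothetical (the property names, the placeholder list, the scoring functions); it illustrates the failure mode, not anyone’s actual rubric.

```python
# A minimal sketch of a naive "data completeness" KPI of the kind a
# grading rubric might encode. All names and values are hypothetical.

# Strings that often stand in for real data. Treating "0" as a placeholder
# is itself an assumption; in a real model it could be a legitimate value.
PLACEHOLDERS = {"", "TBD", "N/A", "-", "0"}

def completeness_score(properties: dict) -> float:
    """The metric as a rubric might state it: share of non-empty properties."""
    if not properties:
        return 0.0
    filled = sum(1 for v in properties.values() if str(v).strip() != "")
    return filled / len(properties)

def informative_score(properties: dict) -> float:
    """A stricter check: share of properties carrying a non-placeholder value."""
    if not properties:
        return 0.0
    real = sum(1 for v in properties.values()
               if str(v).strip() not in PLACEHOLDERS)
    return real / len(properties)

# A wall element padded with placeholder metadata: it looks complete
# in a spreadsheet export, but none of it is usable downstream.
wall = {
    "FireRating": "TBD",
    "AcousticClass": "N/A",
    "Manufacturer": "-",
    "ThermalTransmittance": "0",
    "LoadBearing": "TBD",
}

print(completeness_score(wall))  # 1.0: the KPI is fully satisfied
print(informative_score(wall))   # 0.0: no property carries real information
```

The first score is exactly what the rubric asked for, and the placeholders satisfy it perfectly; the second at least asks whether a value means anything. Neither can tell you whether those parameters belong in the model at all; that still takes judgment.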
This is particularly risky when we’re teaching tools that will be used in high-stakes environments:
- an information model cluttered with unnecessary parameters becomes a liability when it needs to be handled by third parties;
- an automated process trained on irrelevant data may end up automating the wrong task altogether;
- a dashboard full of green lights may hide critical blind spots.
When we reward the wrong outcomes, we train future professionals to mimic efficiency without developing the capacity to interrogate goals or evaluate impact. In doing so, we create a culture of digital performance—impressive models, empty of strategy.
As educators, we need to stop asking: “Did they use the tool correctly?” and start asking:
“Did they make the right decisions about when, how, and why to use it at all?”
This is why KPIs in digital education, if we must have them, should prioritise reasoning, decision-making, and clarity of intent—not just output. Digital education is not about producing the most complex object. It’s about helping students develop the critical mindset to know which complexity serves a purpose—and which is just noise. This has never been more relevant amid the rising obsession with catching students who are “cheating with AI”.