How the Wrong Metrics Derail Digital Thinking (and What I Tell My Students About It)
One of my students once proudly told me their Revit family—a window—had 42 parameters.
They had worked hard on it, tuning and nesting constraints, adding types, and connecting dimensions to carefully labeled values. Their excitement was tangible: a sense of mastery over a complex digital object, the satisfaction of building something with layers of control and variability.
“How did that impact the usability of the family?” I asked. ’Cause I’m fun at parties.
They looked puzzled. It was the first time someone had questioned whether the complexity added by all those parameters actually served a purpose. Usability hadn’t even crossed their mind. The goal had been clear in their head: make it parametric. That, in their perception, was the indicator of success.

Everybody loves a parametric window, right? (source here)
This is exactly how the problem starts.
The Metric of Perfection vs. The Perfect Metric
I talked about quality a few weeks ago, with a bejewelled toilet brush, and about how the measure of what we deliver has to be the actual change it’s able to produce for the client.
When we talk about KPIs—Key Performance Indicators—we act as if they’re neutral, just as we do with Quality. As if they’re convenient labels that help us monitor and improve. But KPIs are not neutral. KPIs shape behaviour. They define what “success” looks like, and more dangerously, they condition people—students, professionals, managers—to optimise for visibility rather than value. A thing needs to be parametric? Let’s parameter the shit out of it! A process needs to be fully digital? Let’s jump down the throat of anyone who’s doodling.
In this case, the student’s implicit KPI was “number of parameters.” More parameters equalled better design. But anyone who’s ever had to use a bloated, fragile Revit family in a real project knows that this is the digital equivalent of putting lipstick on a spreadsheet. The model looks impressive, but resists actual use.

If you’re not particularly interested in KPIs but you are interested in Revit windows, this article talks you through the issue of usability from a technical point of view.
But a Revit window is a Revit window, and if you encounter one that’s not usable you can always make another one, as long and tiresome as that might be.
What worries me is how often this happens, not only in student projects but also across entire organisations. The wrong metrics, especially when left unchallenged, end up driving the wrong outcomes. They reward the appearance of complexity, the documentation of activity, the illusion of progress. And in doing so, they suffocate usability, clarity, and strategic intent.
Let’s take a look at KPIs, then: at how this happens and at how we can avoid it.
The Measurable Trap: When Metrics Replace Meaning
The problem, I believe, lies not just in which KPIs we choose, but in why we feel compelled to define KPIs in the first place. In digital processes—whether in architectural design, BIM workflows, or AI deployment—we often begin with qualitative, complex goals: improve collaboration, enhance design intent, support better decisions. But these ambitions are difficult to track. They don’t fit neatly into charts or dashboards.
So instead, we measure what we can. We substitute the original goal with something quantifiable. The result is a proxy—and that proxy, once formalised as a KPI, slowly begins to replace the original goal in people’s minds.
Take the student’s Revit family: their original intent might have been to create a flexible, reusable object for multiple contexts. That’s a valid design goal. But flexibility is hard to measure. It doesn’t come with an automatic score. So they reached for what could be counted—parameters—and took that count as proof of success.
The same logic applies to many digitalisation strategies in the AEC industry.
Consider a few examples (I changed them around a bit, but they all come from real life).

Goal: improve interdisciplinary collaboration.
KPI chosen: number of people accessing the CDE daily.
Result: activity is rewarded over actual exchange of meaningful data. The CDE becomes a passive repository rather than an active hub.

Goal: increase model quality.
KPI chosen: number of warnings or clashes detected.
Result: teams optimise to reduce clash counts rather than confront the real, systemic issues behind coordination problems.

Goal: empower decision-making through information.
KPI chosen: number of data-rich objects or parameters per object.
Result: the model becomes saturated with data nobody uses, complicating both authoring and maintenance.
These examples might seem like implementation failures, but they’re deeper than that. They’re failures of translation—a breakdown between intent and evaluation. And they reveal a fundamental issue with the very imperative to quantify.
What gets measured, gets managed—but only if the measurement is meaningful. If the measurement is a proxy, and the proxy becomes the focus, the system starts to drift. Over time, people no longer remember the original question. They just keep chasing the number.
Incidental Note: Not All KPIs Are Numbers
While KPIs are often thought of as strictly quantitative, it’s worth remembering that qualitative KPIs exist—and matter. As you can see from a quick Google search (see here for instance), qualitative KPIs rely on observations, interviews, and subjective feedback to measure outcomes that resist numerical expression. They’re helpful for tracking things like team sentiment, client satisfaction, or creative value, where success is contextual rather than countable.
The catch? They require interpretation. Unlike dashboards that light up in green or red, qualitative indicators demand conversation. They’re not about how much but how well, and that shift can feel uncomfortable in data-driven environments. Still, if we care about strategic alignment and meaningful transformation, they’re often the only way to measure what matters.
KPIs? Where we’re going we don’t need KPIs
You might be thinking, “Let’s get rid of them!” Nope. This is not a call to abandon KPIs. Rather, it’s a call to treat them with caution and context. Especially in education and innovation, where so much of the value lies in what resists simple measurement—insight, clarity, provocation, perspective—we need to be more thoughtful about how we define “performance” at all.
Reframing the KPI: From Measurement to Meaning
So, how do we move forward, as educators and professionals, without falling into the measurable trap?
First, we stop thinking of KPIs as immutable targets and start thinking of them as strategic questions. Instead of asking how much or how fast, we begin with why, for whom, and to what end. That’s the difference between an actual KPI and a mere performance metric: the key, as in something used to unlock meaningful insights and, in turn, to inform decisions.
A good KPI is not just a number on a dashboard. It’s a signal—a way to check if a process is aligned with its purpose. In this light, a “Key Performance Indicator” becomes less about performance as output, and more about performance as navigation: are we still headed in the right direction?
In the context of BIM or digital workflows, this might mean replacing surface metrics like:
- “Number of LOD 400 objects in the model” → “Number of modelling decisions made in coordination with downstream users”;
- “Speed of response from the AI assistant” → “Number of decisions meaningfully augmented by the AI process”.
These indicators don’t lend themselves to automated scoring. They require review, reflection, and dialogue. But that’s the point. Qualitative or hybrid KPIs force us to revisit strategy. They introduce friction—the productive kind—that makes room for interpretation.
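To make that less abstract, here’s a minimal sketch in Python of what a hybrid indicator could look like as a data structure. Everything in it—the HybridKPI class, its fields, the example values—is a hypothetical illustration, not an established tool or standard; the only point is that the number never travels without the question and the notes needed to interpret it.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List


@dataclass
class HybridKPI:
    """Hypothetical sketch: a KPI that keeps the strategic question attached to the number."""
    question: str  # the strategic question this indicator is meant to answer
    signal: str    # what we actually count or observe
    observations: List[Dict[str, Any]] = field(default_factory=list)

    def record(self, value: int, note: str) -> None:
        """Log a measurement together with the context needed to interpret it."""
        self.observations.append({"value": value, "note": note})

    def review(self) -> str:
        """Produce a summary meant to open a conversation, not close one."""
        lines = [f"Question: {self.question}", f"Signal: {self.signal}"]
        for obs in self.observations:
            lines.append(f"  - {obs['value']} | {obs['note']}")
        return "\n".join(lines)


# Example: the coordination indicator from the list above (values are invented).
coordination = HybridKPI(
    question="Are modelling decisions made with downstream users in mind?",
    signal="decisions reviewed jointly with the site team, per milestone",
)
coordination.record(4, "two of these changed the slab penetration strategy")
coordination.record(6, "mostly minor; the facade package was decided without the contractor")
print(coordination.review())
```

The review output is deliberately a block of text for humans rather than a traffic-light score: it’s meant to be read in a coordination meeting, questioned, and revised.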
In the classroom, I’ve begun encouraging students to define their own evaluation metrics at the start of a project. Not just to tell me what they’ll produce, but what kind of value they want to create, and how they’ll know if they’ve succeeded. The goal is to shift the centre of gravity from compliance to intentionality.
This doesn’t eliminate assessment. It makes it richer, more situated, and more honest. And in practice, it trains students to approach digital work not as an exercise in ticking boxes, but as a field of strategic decision-making, where the most important skill is knowing what not to automate, what not to model, and what to leave open.
Ultimately, the best KPIs are not measurements but mirrors. They help us see whether the process we’re following reflects the values we claim to hold. If it doesn’t, it’s not the indicator that failed us—it’s our failure to question it.