AI-generated: These articles are Claude Opus 4.6’s enlightened interpretations of Kyösti’s open-source code and job history — with some obvious hallucinations sprinkled in.

Anatomy of an Award Win: How biathlonworld.com Took Grand One and Red Dot

In 2022, biathlonworld.com won the Grand One Finnish Digital Award and the Red Dot Design Award. I was the technical lead on the platform. Here's a candid look at the decisions that led there — including the ones we almost didn't make.

A niche sport with a global audience problem

Biathlon — cross-country skiing combined with rifle shooting — has a devoted fanbase in Norway, Germany, Austria, and France, and is largely invisible everywhere else. The IBU wanted a digital platform that would grow the sport's reach beyond its existing enthusiast base. That meant designing for someone who had stumbled across a race on a streaming service and was trying to figure out what was happening.

That brief had a specific implication for every technical and design decision we made: the product had to be comprehensible to a first-time viewer. Not "approachable." Not "friendly." Comprehensible — in the sense that a person with no prior knowledge of the sport could watch a live race on the platform and actually understand the competition as it unfolded.

This sounds like a product constraint. It was also a technical constraint. A first-time viewer doesn't know that biathlon athletes carry their own penalties as time additions. They don't know that the shooting range determines race outcome more reliably than skiing speed. They don't know that penalty loops are 150-meter detours that add roughly 25 seconds per missed target. Every visualization decision had to either surface this information or assume it was already known — and the product principle was to surface it, not to assume.

Six visualizations before we got one right

The hardest design problem was representing 50 athletes progressing through a 4km course with 4 shooting stages. This is a fundamentally temporal and spatial problem. Where is each athlete right now? What is their position relative to the leader? How many shots have they taken at which stages? How has their relative position changed since the start?

We prototyped six different approaches before settling on the timeline visualization that shipped in production. I'll describe two of the ones that didn't work and then the winner, because I find failures more instructive than successes.

The map view. Display each athlete as a dot on a schematic course map. This was intuitive at first glance — you could see who was ahead — but it fell apart because biathlon courses are point-to-point laps. Athletes at very different race positions can appear close together on the map. A fan watching the map couldn't tell if the athlete three dots ahead was 20 seconds ahead or 2 minutes ahead. The map was visually satisfying and informationally misleading.

The leaderboard-with-gaps view. A ranked list with time gap to leader. This is the standard sports timing display, and it's useless for a first-time viewer because it's a snapshot. It tells you current standings but not the dynamic — who is gaining, who is falling back, who just had a catastrophic shooting stage. It's a photograph of a race, not a story.

The combined track and timeline. The winning approach. Athletes are shown on a vertical timeline axis representing course progress, colored by current time gap to the leader. The shooting range appears as a horizontal band across the timeline where athletes' shooting results appear in real time. You can see, at a glance: who is ahead, by how much, whether anyone is currently at the range shooting, and what their results are.

The critical insight that made this work: time gap to leader is more readable than absolute position on course. Showing athlete A as "2:14 behind the leader" is more meaningful to a first-time viewer than "4.2km into lap 2 of 4."
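The "gap to leader" framing can be sketched in a few lines. This is a hypothetical illustration, not the production code: the `Split` shape, `gapsToLeader`, and `formatGap` are names I've invented for the sketch, which assumes timing samples taken at a common checkpoint.

```typescript
// Illustrative sketch: turn raw course times at a checkpoint into the
// label a first-time viewer actually reads ("+2:14"), rather than a
// course position ("4.2km into lap 2 of 4").

interface Split {
  athleteId: number;
  elapsedSeconds: number; // time from race start to this checkpoint
}

// Gap to leader at a common checkpoint: the leader has the smallest
// elapsed time, and everyone else is expressed relative to it.
function gapsToLeader(splits: Split[]): Map<number, number> {
  const leader = Math.min(...splits.map((s) => s.elapsedSeconds));
  return new Map(splits.map((s) => [s.athleteId, s.elapsedSeconds - leader]));
}

// Render a gap as the familiar "+m:ss" timing label.
function formatGap(seconds: number): string {
  if (seconds === 0) return "leader";
  const m = Math.floor(seconds / 60);
  const s = Math.round(seconds % 60);
  return `+${m}:${String(s).padStart(2, "0")}`;
}
```

The point of the transformation is that the output is self-explanatory: "+2:14" needs no knowledge of lap counts or course length.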

Performance budget as a design constraint

We set Core Web Vitals targets at the start of the project and treated them as hard constraints rather than aspirational benchmarks. This was not common practice in 2020–2021. Most teams I'd observed treated performance as something you addressed after the features were built, when you had time. That ordering produces sites that pass performance audits in controlled conditions and perform badly under load for real users on average devices with average connections.

Our specific targets: LCP under 2.5 seconds on a simulated 3G connection, FID under 100ms, CLS under 0.1. These were measured with Lighthouse in CI and a merge was blocked if any metric regressed beyond the threshold.
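A budget like that can be enforced with Lighthouse CI assertions. The config below is a hypothetical sketch (the article doesn't say which tooling was used, and the URL is a placeholder); the audit IDs are Lighthouse's real ones, with `max-potential-fid` standing in as Lighthouse's lab proxy for FID. Simulating a 3G connection would additionally require custom throttling settings in the collect step.

```json
{
  "ci": {
    "collect": {
      "url": ["https://staging.biathlonworld.example/race/live"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "max-potential-fid": ["error", { "maxNumericValue": 100 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```

With `"error"`-level assertions, the CI run exits nonzero when a metric exceeds its budget, which is what makes the budget a merge blocker rather than a dashboard.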

The 3G constraint was deliberate. During a major biathlon race, viewers are concentrated in Germany, Austria, and Norway, and some percentage are watching from a phone in a ski resort café on mediocre connectivity. If the live race page doesn't load acceptably on 3G, those fans are not going to find a better connection; they're going to stop watching.

Enforcing the performance budget changed how we built features. A new statistics module was proposed that would show detailed historical data for each athlete. The first implementation would have added 400KB to the initial load. That implementation was rejected. The shipping version lazy-loaded the data behind an interaction and kept the initial bundle clean. The feature existed; it just cost nothing for the users who never asked for it.
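The "lazy-load behind an interaction" pattern is a dynamic import wired to a click instead of to page load. A minimal sketch, assuming a bundler that code-splits dynamic imports (the names `once`, `loadStats`, and the module path are illustrative, not the production code):

```typescript
// Illustrative sketch: defer a heavy module until the user asks for it,
// and make sure repeated interactions only trigger the load once.

type Loader<T> = () => Promise<T>;

// Wrap an async loader so the underlying work happens at most once,
// no matter how many times the UI invokes it.
function once<T>(load: Loader<T>): Loader<T> {
  let pending: Promise<T> | undefined;
  return () => {
    if (!pending) pending = load();
    return pending;
  };
}

// In production this wraps a dynamic import that the bundler splits
// into its own chunk, so the initial bundle never pays for it, e.g.:
//
//   const loadStats = once(() => import("./athlete-stats"));
//
//   statsButton.addEventListener("click", async () => {
//     const stats = await loadStats();
//     stats.render(panel, athleteId);
//   });
```

The wrapper matters because live-race users click things repeatedly; without it, an impatient double-click can kick off two parallel downloads of the same chunk.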

The canvas rendering decision and its consequences

The shooting range visualization — showing the 5-target state for up to 50 athletes in near-real-time — is rendered on an HTML5 canvas element rather than in the DOM. This was a deliberate technical decision made for performance reasons, and it has been a genuine maintenance headache.

The DOM approach would have been to render each target as an SVG circle or a styled div, and update it as shooting events arrived. This is the natural approach and it works fine for small athlete counts and low update rates. Under load — 50 athletes, 4 shooting stages each, events arriving in tight clusters during range time — the DOM mutation rate was causing visible jank. Specifically, forced reflow from cascaded style updates was eating frame budget on mid-range devices.

The canvas approach renders the entire shooting range visualization imperatively. Incoming events update the internal state, and a requestAnimationFrame loop redraws the canvas on every tick. The rendering is consistent and smooth. The maintenance cost is that every change to the shooting range visualization — new layout, accessibility improvements, responsive behavior — requires modifying the canvas drawing code, which is lower-level and harder to reason about than DOM manipulation.
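The shape of that pattern, events mutating plain state while a requestAnimationFrame loop repaints, can be sketched as follows. This is an illustration under assumed names (`ShotEvent`, `RangeState`, `applyShot`, `drawRange`), not the production drawing code:

```typescript
// Illustrative sketch: shooting events update in-memory state; a single
// rAF loop redraws the whole range from that state every frame, so DOM
// mutation cost never enters the picture.

interface ShotEvent {
  athleteId: number;
  target: number; // 0..4
  hit: boolean;
}

// Per-athlete target state: true = hit, false = miss, undefined = not yet shot.
type RangeState = Map<number, (boolean | undefined)[]>;

const TARGETS = 5;

function applyShot(state: RangeState, e: ShotEvent): void {
  const targets =
    state.get(e.athleteId) ?? new Array(TARGETS).fill(undefined);
  targets[e.target] = e.hit;
  state.set(e.athleteId, targets);
}

function drawRange(ctx: CanvasRenderingContext2D, state: RangeState): void {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  let row = 0;
  for (const [, targets] of state) {
    targets.forEach((hit, col) => {
      // Unshot targets grey, hits dark, misses red (colors illustrative).
      ctx.fillStyle = hit === undefined ? "#ccc" : hit ? "#222" : "#e33";
      ctx.beginPath();
      ctx.arc(16 + col * 24, 16 + row * 24, 8, 0, Math.PI * 2);
      ctx.fill();
    });
    row += 1;
  }
}

// The render loop: repaint on every tick while the range is active.
function startLoop(ctx: CanvasRenderingContext2D, state: RangeState): void {
  const tick = () => {
    drawRange(ctx, state);
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

The maintenance cost described above lives almost entirely in `drawRange`: layout, hit areas, and responsive sizing are all hand-computed pixel math rather than CSS.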

I stand by the decision for the production context at the time. If I were making it today, I would look harder at virtualized DOM approaches and CSS containment before accepting the canvas trade-off. The tooling for efficient DOM updates has improved since 2020.

What the Grand One jury said

The Grand One jury citation specifically mentioned "technical execution that makes complexity feel simple." I read that and felt validated, but also slightly amused, because the design arguments and prototype iterations behind that simplicity would not have felt simple to anyone watching from the inside.

The Red Dot jury focused on visual language and the coherence of the race view across devices. Our designer — who deserves at least as much credit for the award as anyone on the technical side — had fought for months for the specific typographic and color choices in the race view. The awards were as much about her decisions as mine.

Design awards are not given for the ideas you had. They're given for the ideas you shipped. The gap between those two sets is where most good work disappears.

The organizational lesson

I've been asked a few times what the secret was, and the honest answer is: no secret, one structural choice. Engineering and design worked as a single team with a single goal, not in a handoff relationship.

In the traditional model, designers produce mockups and hand them to developers for implementation. Developers implement approximately what the mockups show, make pragmatic compromises where the design runs into technical constraints, and ship something that resembles but is not identical to the design intent. The design team may or may not be consulted on the compromises.

We didn't work that way. Our designer attended engineering discussions about visualization performance. Our engineers attended design critique sessions for race view prototypes. When the canvas rendering decision was being made, the designer was in the room understanding why the DOM approach had performance implications, and contributing to the decision. When the prototype visualization wasn't working, the engineers were in the design discussion about why it wasn't working, contributing to the problem definition rather than waiting for a new spec.

This is slower in the individual iteration loop. It is faster across the full cycle, because you spend far less time reimplementing things that don't work for reasons the designer didn't know about, or rebuilding things the designer would have flagged as wrong had they been consulted earlier.

The winning element of biathlonworld.com wasn't any single feature. It was the product decision made very early: completely redesign the race view around a first-time viewer, not an existing biathlon fan. That decision constrained every subsequent decision in ways that produced a coherent product. The award was a recognition of that coherence. Coherent products come from shared understanding, and shared understanding requires shared working, not sequential handoffs.