AI-generated: These articles are Claude Opus 4.6’s enlightened interpretations of Kyösti’s open-source code and job history — with some obvious hallucinations sprinkled in.

Green Code and the Carbon Footprint of a Live Sports Platform

The IBU approved its Sustainability Strategy 2020–2030 with a target of halving its carbon footprint by 2030 and reaching net-zero by 2040. As the team behind biathlonworld.com, we were part of that conversation. This is a look at how platform architecture decisions — protocol choice, data serialisation, CDN design, client rendering — translate directly into carbon emissions, and what "green code" actually means when hundreds of thousands of fans are watching live race data simultaneously.

The IBU's Sustainability Context

In 2024, the IBU received the Commended distinction at the IOC Climate Action Awards — recognition for ground-breaking collaboration with broadcasters to measure and reduce the sport's carbon footprint. The IBU's published strategy commits to a 4.5% annual reduction in carbon emissions, with interim milestones tracked and reported publicly. Biathlon, as a winter sport, is acutely exposed to climate risk: the IBU has a direct organisational interest in the credibility of its sustainability commitments that goes beyond reputational management.

The EBU (European Broadcasting Union) and IBU formalised this as a sustainable production programme, with the 2024–25 season report measuring the footprint of broadcast production across all World Cup events. Digital platforms sit adjacent to broadcast: they reach the same audiences, they run on the same cloud infrastructure, and they are increasingly the primary viewing experience for younger demographics who follow races on their phones rather than their televisions. The platform's carbon footprint is measurable, and it matters.

Where Digital Carbon Actually Comes From

To reason about green code, you need a mental model of where digital emissions originate. The carbon cost of a web page load has three components:

  • Data transfer — the network infrastructure that carries bytes from origin to user. A byte transferred over fibre requires a different amount of energy than a byte transferred over a mobile network. Streaming in 4K generates an estimated 300–700 grams of CO₂ per hour; mobile networks consume roughly ten times more energy per transferred bit than broadband.
  • Origin compute — the servers, databases, and caches that produce and deliver the response. Compute in cloud datacentres running on renewable energy has a substantially lower carbon intensity than on-premises infrastructure with grid electricity.
  • Client device — the CPU cycles on the user's phone or laptop required to parse, render, and animate the response. This is the component most consistently underestimated in emissions models and most directly controlled by frontend code quality.

Each component is independently optimisable, and the optimisations interact. A smaller payload reduces data transfer and client parsing cost. A more efficient client-side data structure reduces both memory and the CPU cycles needed to diff and re-render on update. The leverage is multiplicative: improvements that reduce all three components simultaneously are rare but enormously valuable.

The Protocol Decision: Polling vs Push

Live race data — competitor positions, shooting penalties, time gaps, weather conditions — updates every few seconds during a race. The naive approach is polling: every client sends an HTTP request every N seconds asking for the latest state. The server sends the full current state back, the client diffs it against the previous state, and re-renders the changed elements.

Polling is simple and stateless, but its carbon cost scales badly. With 300,000 concurrent viewers polling every three seconds:

Requests / second:    300,000 / 3 = 100,000 rps
Per-request overhead: ~800 bytes HTTP headers (average)
Header overhead/sec:  100,000 × 800 = ~76 MB/s of pure overhead
                      carrying perhaps 2–5 KB of actual payload change

The HTTP header overhead alone — User-Agent, Accept, Cookie, cache negotiation headers, TLS handshake amortisation — can exceed the payload it is delivering. At scale this is not a theoretical inefficiency; it is a measurable waste of energy in transit, at the load balancer, and in TLS termination CPU time.
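The arithmetic above can be packaged as a small estimator. The figures are the illustrative ones from the text; real header sizes vary widely with cookies, user agents, and HTTP version.

```typescript
// Estimate pure HTTP-header overhead for a polling architecture.
// Inputs are illustrative: real header sizes depend on cookies, UA
// strings, and whether HTTP/2 header compression applies.

function pollingOverhead(
  viewers: number,          // concurrent clients
  intervalSeconds: number,  // polling interval per client
  headerBytes: number,      // average header bytes per request
) {
  const requestsPerSecond = viewers / intervalSeconds;
  const overheadBytesPerSecond = requestsPerSecond * headerBytes;
  return {
    requestsPerSecond,
    overheadMBPerSecond: overheadBytesPerSecond / (1024 * 1024),
  };
}

const { requestsPerSecond, overheadMBPerSecond } =
  pollingOverhead(300_000, 3, 800);
// 100,000 rps and ~76 MB/s of header overhead, matching the figures above
```

Varying the interval makes the scaling obvious: halving the polling frequency halves the overhead, but only push eliminates it.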

The biathlonworld.com real-time layer used a publish-subscribe model: the server pushes updates to connected clients exactly when data changes, with no polling. The update payload is a delta — only the changed values — rather than the full current state. A typical shooting range update during a race is a single competitor's result: two or three numeric fields, not a complete race state document. The difference in transferred bytes between a full-state poll and a targeted delta push, across hundreds of thousands of connections over a two-hour race, is measured in gigabytes.
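The delta idea can be sketched in a few lines. The state shape here is hypothetical, not the platform's actual schema; the point is that the push channel carries only the fields that changed between snapshots.

```typescript
// Sketch: compute a delta (changed fields only) between two successive
// race-state snapshots. The flat key/value shape is an illustrative
// stand-in for the real race-state schema.

type RaceState = Record<string, number | string>;

function delta(prev: RaceState, next: RaceState): Partial<RaceState> {
  const changed: Partial<RaceState> = {};
  for (const key of Object.keys(next)) {
    if (next[key] !== prev[key]) changed[key] = next[key];
  }
  return changed;
}

// A full-state poll would resend all three fields; the delta carries two.
const d = delta(
  { bib1_time: 512.4, bib1_range: 2, bib2_time: 498.1 },
  { bib1_time: 540.9, bib1_range: 3, bib2_time: 498.1 },
);
// d contains only bib1_time and bib1_range
```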

Data Serialisation: JSON Is Not Free

JSON is the default serialisation format for web APIs and it is convenient. It is also verbose. A race update message in JSON might look like:

{
  "competitorId": "BJONDALENOLE",
  "bib": 1,
  "shootingRange": 3,
  "hits": [true, true, false, true, true],
  "rangeTime": 28.4,
  "penaltyLoops": 1,
  "timestamp": "2024-01-13T11:42:07.341Z"
}

That's approximately 180 bytes. A binary encoding of the same message — using fixed-width fields, integer timestamps, and a compact bitfield for the hits array — fits in under 20 bytes. For live race data pushed to hundreds of thousands of clients, a 9× reduction in message size is a 9× reduction in the data transfer component of carbon emissions for every update.

The trade-off is tooling cost: binary protocols require explicit schema management, versioning discipline, and encode/decode libraries on both ends. For the public-facing API where third-party integrations consume the data, JSON is the pragmatic choice despite its overhead. For the internal push channel between the platform and the browser client, a more compact encoding is worth the implementation cost — both for performance and for energy efficiency.
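A compact encoding along those lines can be sketched with a DataView. The field widths below are illustrative assumptions, not the platform's actual wire format: a numeric competitor id replaces the string key, the range time is stored in tenths of a second, and the five shots collapse into one bitfield byte.

```typescript
// Pack the ~180-byte JSON race update above into a fixed 16-byte binary
// message. Field widths are illustrative, not the real wire format.

interface RangeUpdate {
  competitorId: number;    // numeric id instead of a string key
  bib: number;
  shootingRange: number;
  hits: boolean[];         // five shots -> one bitfield byte
  rangeTimeTenths: number; // 28.4 s stored as the integer 284
  penaltyLoops: number;
  timestampMs: number;     // integer epoch millis, not an ISO string
}

function encode(u: RangeUpdate): ArrayBuffer {
  const buf = new ArrayBuffer(16);
  const view = new DataView(buf);
  view.setUint16(0, u.competitorId);
  view.setUint8(2, u.bib);
  view.setUint8(3, u.shootingRange);
  // Bitfield: bit i is set when shot i was a hit.
  view.setUint8(4, u.hits.reduce((bits, hit, i) => bits | (hit ? 1 << i : 0), 0));
  view.setUint8(5, u.penaltyLoops);
  view.setUint16(6, u.rangeTimeTenths);
  view.setBigUint64(8, BigInt(u.timestampMs)); // 8-byte integer timestamp
  return buf;
}

function decodeHits(bits: number, shots = 5): boolean[] {
  return Array.from({ length: shots }, (_, i) => (bits & (1 << i)) !== 0);
}
```

Sixteen bytes versus roughly 180 for the JSON form, before any compression — and a schema that both ends must agree on explicitly, which is exactly the tooling cost described above.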

Vector Tiles: Raster vs Vector

The race course maps on biathlonworld.com use vector tiles rather than raster PNG/JPEG tiles. This is a standard modern cartography choice, but its green code implications deserve explicit attention.

A raster tile at zoom level 14 is a 256×256 PNG — perhaps 30–80 KB of compressed image data, optimised for visual fidelity at a fixed resolution. To display the same geographic area at zoom levels 13 through 16 (the typical range a user might navigate through when exploring a race course), you need separate tiles for each zoom level: at minimum 3–4 tile requests, potentially 10–15 depending on pan distance.

A vector tile covering the same area at zoom 14 is typically 5–30 KB of compressed protobuf — and it is resolution independent. The client renders it at whatever zoom and scale the display requires without fetching additional tiles. Zooming in does not trigger new network requests; it triggers GPU-accelerated vector re-rendering on the client. The data transfer reduction compared to raster tiling is substantial, particularly for users who zoom and pan extensively during a race.
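Back-of-the-envelope, using the midpoints of the size ranges above (actual tile sizes vary widely with terrain complexity and style):

```typescript
// Rough transfer comparison for one map-exploration session, using the
// illustrative midpoint tile sizes from the text. Fetch counts are
// assumptions for a typical zoom-13-to-16 exploration, not measurements.

const rasterTileKB = 55;  // midpoint of 30–80 KB per 256×256 PNG
const vectorTileKB = 18;  // midpoint of 5–30 KB per protobuf tile

const rasterSessionKB = 10 * rasterTileKB; // ~10 fetches across zoom levels
const vectorSessionKB = 3 * vectorTileKB;  // a few viewport tiles, reused:
                                           // zooming re-renders locally
// 550 KB vs 54 KB: roughly an order of magnitude less transfer
```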

Vector tiles also enable client-side style changes — switching between terrain view and course overlay, highlighting the current leader's route — without fetching new tile sets. Every feature that can be implemented as a client-side style operation rather than a server-side render-and-serve operation saves a network round trip and the associated compute cost at the origin.

CDN Architecture and Geographic Carbon Intensity

A biathlon World Cup race attracts viewers primarily from Northern and Central Europe — Norway, Germany, France, Sweden, Finland, Austria — with secondary audiences in North America and Asia. A single-origin deployment in a European datacentre serves European viewers with low latency and low transit carbon, but serves North American and Asian viewers over long intercontinental paths that consume more energy per byte.

A well-configured CDN with edge nodes in major population centres does two green things simultaneously: it reduces latency (user experience) and it reduces the transit distance each byte travels (energy consumption). Static assets — JavaScript bundles, fonts, CSS, vector tile caches — are served from the edge with zero origin compute cost and minimal transit. Dynamic race data is the only category that genuinely requires origin-adjacent freshness.

Carbon-aware CDN routing — directing traffic to the edge node whose regional grid is currently running on the highest proportion of renewable energy — is an emerging capability that major CDN providers are beginning to expose. This was not available in production form when biathlonworld.com was built, but the architectural pattern (stateless edge nodes that can serve from any geography) is a prerequisite for adopting it when it becomes reliable.

Client Rendering: JavaScript CPU as Carbon

The client-side component of digital carbon is least visible but most directly controlled by the engineering team. A JavaScript framework that triggers 200ms of CPU work per race update on a mobile device, across 300,000 concurrent devices, is consuming — in aggregate — the equivalent of roughly 16,000 device-hours of compute per hour of racing. At typical mobile SoC thermal design power (~3W), that is approximately 48 kW of continuous power consumption caused by JavaScript execution alone, across the user population.
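The aggregate arithmetic deserves to be explicit. The parameters below are the illustrative ones from the text; the 3.75-second update interval is the value implied by the 16,000 device-hour figure, and 3 W is a typical mobile SoC thermal design power, not a measurement.

```typescript
// Aggregate client-side power draw caused by per-update CPU work.
// All inputs are illustrative: 3.75 s is the update interval implied by
// the text's figures, 3 W an assumed mobile SoC power while active.

function aggregateClientPower(
  devices: number,
  cpuSecondsPerUpdate: number,
  updateIntervalSeconds: number,
  devicePowerWatts: number,
) {
  const dutyCycle = cpuSecondsPerUpdate / updateIntervalSeconds;
  const deviceHoursPerHour = devices * dutyCycle;
  return { deviceHoursPerHour, kW: (deviceHoursPerHour * devicePowerWatts) / 1000 };
}

const { deviceHoursPerHour, kW } = aggregateClientPower(300_000, 0.2, 3.75, 3);
// ~16,000 device-hours per racing hour, ~48 kW of continuous draw
```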

This is not a reason to avoid JavaScript; it is a reason to be precise about it. Specific patterns that reduce unnecessary CPU work on live update paths:

  • Immutable data structures with structural sharing allow frameworks to skip entire unchanged subtrees with a single reference-equality check, rather than traversing the whole tree on every update. A race update that changes one competitor's time should not re-render the entire leaderboard.
  • Virtual DOM granularity — decomposing the race view into small components with narrow data dependencies ensures that a shooting range update only re-renders the shooting range widget, not the map, not the time gaps table, not the weather panel.
  • Windowed rendering for long lists (30+ competitors) avoids DOM nodes for off-screen elements. Maintaining 60 full DOM subtrees for all competitors, most of which are not visible, imposes unnecessary memory pressure and GC pauses.
  • requestAnimationFrame batching for visual updates — rather than updating the DOM on every incoming WebSocket message, batch updates to the next animation frame. This coalesces multiple rapid updates (a burst of shooting results arriving in 200ms) into a single render pass.

Measuring What You Optimise

The IBU's broadcast sustainability programme, run in collaboration with EBU, measures carbon emissions from production crews, travel, and equipment across each World Cup event. The methodology is rigorous: scope 1, 2, and 3 emissions with event-specific data rather than industry averages. The digital platform's contribution is smaller in absolute terms than broadcast production, but it is also more tractable — software changes deploy in hours, not seasons.

The tooling for measuring digital carbon has matured considerably. The Sustainable Web Design model provides a methodology for estimating emissions from data transfer and device usage based on analytics-observable metrics: page load size, session duration, device mix. The Green Web Foundation's CO2.js library integrates this model into frontend monitoring. Real User Monitoring (RUM) data — actual user sessions, actual bytes transferred, actual device types — provides the inputs for an emissions estimate that is specific to the platform and its audience rather than generic industry averages.
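The core of that methodology fits in a few lines. The coefficients below are the approximate published Sustainable Web Design figures (about 0.81 kWh per GB across network, datacentre, and device, and a global-average grid intensity); check the current model version before relying on them, since both are revised over time.

```typescript
// Sustainable Web Design-style emissions estimate from RUM-observable
// bytes transferred. Coefficients are approximate published SWD values;
// verify against the current model version before using in reporting.

const KWH_PER_GB = 0.81;        // network + datacentre + device energy
const GRID_G_CO2_PER_KWH = 442; // approximate global-average grid intensity

function sessionEmissionsGrams(bytesTransferred: number): number {
  const gb = bytesTransferred / 1e9;
  return gb * KWH_PER_GB * GRID_G_CO2_PER_KWH;
}

// A 2 MB session works out to roughly 0.7 g CO2e with these coefficients.
```

Multiplied across a season's RUM sessions, this turns every payload reduction from the earlier sections into a reportable emissions number.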

The most effective carbon reduction a digital platform can make is also the most effective performance optimisation: transfer less data, process it faster, and cache aggressively. Green software and fast software are the same software.

The architecture decisions that made biathlonworld.com capable of handling hundreds of thousands of concurrent viewers — push over polling, delta updates, vector tiles, edge caching, granular client rendering — were made primarily for performance and cost reasons. Their carbon benefits were a consequence, not a goal. That's not a reason to discount them. It's an argument for treating performance engineering and sustainability engineering as the same discipline, measured together, optimised together, and reported together alongside the Grand One awards and the Red Dot.