18 Years of Software: What Actually Mattered
Eighteen years. I started writing production code in 2008, on a system that's still running. I've been a junior developer who thought senior meant knowing more languages, a principal engineer who thought the job was to be the smartest person in the room, and a director who finally understood that neither of those things was true. This is what I'd tell the 2008 version of myself.
The 2008 version
He was full of energy and deeply certain that technical correctness was the most important thing. He had opinions about naming conventions, about indentation styles, about which ORM was least evil, about the One True Way to structure a REST API. He was willing to defend these opinions at length in code review, in architecture discussions, in conversations at lunch. He was not wrong that these things mattered. He was significantly wrong about how much they mattered relative to other things.
The system he was working on in 2008 had no monitoring. No dashboards. No alerting. When it broke — which it did, because all systems break — the first signal was a user complaint or, if they were lucky, a developer noticing something odd while working on an unrelated feature. The system had excellent naming conventions and was going blind into production every day.
The 2008 version would have argued that adding monitoring was "infrastructure work" and therefore someone else's responsibility. He would have been technically correct about the org chart and completely wrong about the priorities. Observability — knowing what the system is doing, being alerted when it's wrong, having the data to diagnose problems — is not infrastructure. It is the minimum necessary condition for responsible ownership of a production system. Everything else is optimizing a system you cannot actually see.
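The baseline he lacked is small enough to sketch. This is a hypothetical, minimal in-process version — real systems export to a monitoring backend — but it shows the three pieces the paragraph names: knowing what the system is doing (counters, latencies), and being alerted when it's wrong (an error-rate threshold). All names here are illustrative.

```python
import time
from collections import defaultdict

# Hypothetical in-process metrics store: success/error counters and
# latency samples keyed by operation name. A real system would export
# these to a monitoring backend rather than hold them in memory.
metrics = {"counts": defaultdict(int), "latency_ms": defaultdict(list)}

def observed(name, fn, *args, **kwargs):
    """Run fn, recording an ok/error count and the call's latency."""
    start = time.monotonic()
    try:
        result = fn(*args, **kwargs)
        metrics["counts"][f"{name}.ok"] += 1
        return result
    except Exception:
        metrics["counts"][f"{name}.error"] += 1
        raise
    finally:
        metrics["latency_ms"][name].append((time.monotonic() - start) * 1000)

def alert_check(error_key, ok_key, threshold=0.05):
    """True when the error rate crosses the alerting threshold."""
    errors = metrics["counts"][error_key]
    total = errors + metrics["counts"][ok_key]
    return total > 0 and errors / total > threshold
```

A few hours of work like this is the difference between a user complaint being the first signal and a pager being the first signal.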
This is not the most important thing I'd tell him, but it's a good opening because it illustrates the pattern: caring about the right things in the wrong priority order.
What I'd tell him about technical decisions
Three principles that took me years to internalize and that I wish had arrived earlier:
The right abstraction is worth more than the clever algorithm. Clever algorithms are local optimizations. They make a specific operation faster, a specific memory usage smaller, a specific computation more elegant. The right abstraction makes every operation in a domain easier to reason about, easier to modify, and easier to teach to new team members. The clever algorithm gives you a fast sort. The right abstraction makes sorting a non-issue because the model is correct, so data arrives in the right order at the right time. I spent too many years optimizing algorithms in systems with fundamentally wrong models.
The data model is the hardest thing to change later, so invest there. Application logic is movable. API contracts can be versioned. Service boundaries can be redrawn. The data model — the shape of the records in the database, the identifiers, the relationships, the constraints — is the thing that everything else crystallizes around, and changing it after you have production data is genuinely difficult. I've seen teams spend six weeks on migrations getting data into a shape that would have taken an extra week of design at the start. The GPS backend I built in 2012 is still running in 2026, in large part because the data model was right. The fact that it was right was partly skill and substantially luck; I would like the 2008 version of me to have more of the skill and rely less on the luck.
Boring technology that your team knows beats exciting technology that nobody does. The siren call of new technology is real and I have answered it more times than I should have. The correct question is not "is this technology better?" but "is it better enough to justify the learning curve, the reduced expertise pool when you hire, the reduced community support when something breaks, and the risk that it won't be maintained in five years?" For most decisions, the answer is no. The few decisions where the answer is yes — where the new technology solves a problem that is genuinely unsolvable with boring tools — are worth identifying carefully and committing to fully.
The five systems that taught me the most
The GPS backend that survived 14 years
The B-Bark GPS tracking backend, built in 2012, is the system I'm most surprised is still running. It has survived language version upgrades, infrastructure migrations, team turnovers, and a business acquisition. The reason it survived is not that it was built with exceptional quality — the code quality was fine, but not extraordinary. The reason is that the data model was correct from the start: device identifier as a stable first-class concept, fix records as append-only events with immutable timestamps, location data stored in a format that hasn't changed as GPS precision improved. Nothing about what the system knew had to be unlearned. Everything about how it stored things proved durable.
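The shape of that model can be sketched. This is a hypothetical reconstruction — the field names and types are illustrative, not the actual schema — but it captures the three properties named above: the device identifier as a first-class concept, fixes as append-only immutable events, and location fields that can gain precision without changing shape.

```python
from dataclasses import dataclass

# Illustrative model only; names and types are invented, not the
# B-Bark schema.
@dataclass(frozen=True)  # frozen: a recorded fix is never mutated
class Fix:
    device_id: str      # stable, first-class device identifier
    recorded_at: float  # immutable epoch timestamp from the device
    lat: float
    lon: float
    accuracy_m: float   # precision can improve without a schema change

class FixLog:
    """Append-only event log: fixes are added, never edited or deleted."""
    def __init__(self):
        self._events = []

    def append(self, fix: Fix):
        self._events.append(fix)

    def track(self, device_id: str):
        """Derive a device's track by reading events, not rewriting them."""
        return sorted((f for f in self._events if f.device_id == device_id),
                      key=lambda f: f.recorded_at)
```

Because tracks are derived from the events rather than stored as mutable state, later features can reinterpret the same history without migrating it.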
The biathlon platform
biathlonworld.com during the World Cup season receives concentrated traffic spikes that are predictable in their timing — race start and race results publication — but unpredictable in their magnitude. The Red Dot Design Award is visible on the portfolio page and I am appropriately proud of it, but the engineering lesson from the platform was different: performance budget as a design constraint from day one, not as a remediation step after launch. The frontend performance targets were set before the first component was written and tracked continuously. The result was a platform that performed correctly under World Cup traffic without emergency optimization work before major events. The lesson is that performance, like security, is a design property, not a post-hoc addition.
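"Performance budget as a design constraint" is concrete: numeric targets checked on every build, failing the build when exceeded. The sketch below is hypothetical — the budget categories and limits are invented, not the platform's actual targets — but it shows the mechanism.

```python
# Hypothetical budget numbers; the real targets were project-specific.
BUDGET = {"js_kb": 170, "image_kb": 300, "ttfb_ms": 200}

def check_budget(measured: dict) -> list:
    """Return the budget items a build exceeds; empty list means pass.

    Wired into CI, a non-empty result fails the build, so regressions
    are caught at merge time rather than before a race weekend.
    """
    return [key for key, limit in BUDGET.items()
            if measured.get(key, 0) > limit]
```

The point is not the arithmetic but the timing: the numbers exist before the first component is written, so every change is measured against them continuously.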
Oma Riista: 190,000 hunters and domain complexity
The Finnish Wildlife Agency platform for hunting permit management and game observation reporting is the system that most forcefully taught me about domain complexity. Wildlife management has regulatory complexity, seasonal rules, regional variations, and a user base with specific knowledge that developers don't initially have. The temptation in systems like this is to abstract the domain prematurely — to model "permit" as a generic entity and bolt on the domain-specific rules as configurations. This approach fails because the domain complexity is real and pervasive; pretending the model is generic doesn't make the rules simpler, it just makes them implicit and harder to find.
The right approach was domain investment: spending the time to understand the regulatory model deeply, naming concepts in the code the way the domain names them, making the rules explicit rather than encoded in conditional logic buried in services. The 190,000-user scale was achievable partly because the domain model was correct; the system could be explained to new developers without also explaining why all the workarounds existed.
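The difference between explicit and implicit rules is easiest to see in code. The sketch below is purely illustrative — the species, regions, and dates are invented, not the agency's regulatory data — but it shows the shape: rules as named, first-class records in one place, rather than conditionals scattered through service code.

```python
from dataclasses import dataclass
import datetime

# Illustrative only: these species, regions, and dates are invented,
# not the Finnish Wildlife Agency's actual regulatory data.
@dataclass(frozen=True)
class HuntingSeason:
    species: str
    region: str
    opens: datetime.date
    closes: datetime.date

    def is_open(self, day: datetime.date) -> bool:
        return self.opens <= day <= self.closes

# The rules live in one explicit, named table. Adding a regional
# variation means adding a row, not finding every buried conditional.
SEASONS = [
    HuntingSeason("moose", "north", datetime.date(2026, 9, 1),
                  datetime.date(2026, 12, 31)),
    HuntingSeason("moose", "south", datetime.date(2026, 10, 10),
                  datetime.date(2026, 12, 31)),
]

def season_open(species: str, region: str, day: datetime.date) -> bool:
    return any(s.is_open(day) for s in SEASONS
               if s.species == species and s.region == region)
```

A new developer can read the table and the domain at the same time, which is what made the system explainable at scale.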
The first failed management stint
There was an attempt at leading a team before I was actually ready to lead a team. I had technical authority but hadn't earned relational trust. I moved fast, made decisions without sufficient buy-in, and created a team that was technically capable and organizationally fragile. When I left, the work I'd pushed through didn't outlast my departure by long, because the team hadn't owned the decisions and hadn't understood the reasoning behind them. Moving fast without trust built nothing durable. The lesson arrived at the cost of that team's coherence, and I've tried to make the lesson worth what it cost.
The WAGO reverse engineering project
Understanding a system from the outside — decoding an undocumented binary protocol, reverse engineering an embedded system's behavior from its observable outputs — changes how you build systems from the inside. When you've spent time trying to understand a system that wasn't designed to be understood, you develop a specific appreciation for the systems that were: systems with documented protocols, systems with clean serialization formats, systems where the observable behavior matches the documented specification. The WAGO WebVisu project is the reason I care more than average about documentation, about protocol design, and about the long-term cost of undocumented internal contracts.
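What a documented protocol looks like in practice is worth showing, because it is exactly what reverse engineering has to reconstruct when no spec exists. The frame layout below is hypothetical, invented for illustration — it is not the WebVisu protocol — but a few lines of written-down layout turn decoding from archaeology into a lookup.

```python
import struct

# Hypothetical frame layout, for illustration only (not WebVisu):
#   bytes 0-1  magic   (0xC0DE, big-endian)
#   byte  2    version
#   byte  3    payload length N
#   bytes 4..  payload (N bytes)
HEADER = struct.Struct(">HBB")

def decode_frame(frame: bytes):
    """Decode one frame per the documented layout above."""
    magic, version, length = HEADER.unpack_from(frame)
    if magic != 0xC0DE:
        raise ValueError("bad magic")
    payload = frame[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return version, payload
```

Without the comment block, every line of that function has to be guessed from captured traffic; with it, the observable behavior and the specification are the same thing.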
On people
Every meaningful outcome I've been part of in eighteen years was teamwork. This is such a standard thing to say that it has almost lost meaning, but I mean it precisely: not just "teams build things," but that the specific quality and character of the outcomes depended on the specific combination of people, their relationships with each other, the trust they'd built through shared history, the norms they'd developed for working through disagreement. None of that was replicable by assembling a different set of individually talented people. The team, as a system, had properties that its members, individually, did not.
Every time I tried to be the hero — to be the singular technical intelligence whose individual contributions defined the outcome — something was worse for it. The code I wrote when I was trying to be brilliant was harder to maintain than the code I wrote when I was trying to be clear. The decisions I made when I was trying to be decisive were worse than the decisions I made when I was trying to be right. The hero role is bad for the hero and bad for the team, and I played it more times than I should have.
The engineers I've managed who outgrew what I could teach them are the metric I'm most proud of. Not the systems we shipped, not the awards or certifications, not the headcount I led. The people who took what they could from working with me and then went further. That's the right output measure for a leader, and I recognized it about ten years later than I should have.
On technology fashion
I have now lived through SOA, REST, microservices, serverless, edge computing, and the early waves of AI-native architecture. Each arrived with genuine technical value and with significant hype beyond the value. The pattern is consistent enough that I've stopped being surprised by it.
Microservices are genuinely useful for organizations with multiple teams that need to deploy independently without coordinating. They are not useful for three-person teams building a single product. The teams that adopted microservices for three-person products because it was the year microservices were exciting spent two years building distributed systems infrastructure instead of building the product, and most of them quietly re-consolidated.
Serverless genuinely simplifies certain operational concerns and genuinely complicates certain debugging and performance concerns. The decision of whether it's the right fit requires understanding both sides clearly, which is only possible after the hype has settled enough to hear the critics.
The skill that has aged best across eighteen years is not knowledge of any specific technology — it's the ability to look at a new technology and ask: what problem does this actually solve? For whom? What does it make harder? Who should and shouldn't adopt it right now? That question is answerable for every technology wave, but only if you're willing to answer it honestly rather than buying the narrative the advocates sell.
On security
The cost of ignoring security accumulates silently and pays out in the worst possible moments. I've seen two products come close to dying from security incidents that were entirely preventable — not exotic zero-days, but the standard failures: insufficient access control, credentials committed to a repository, a session management implementation that trusted client-provided identifiers. The kind of failures that appear on every OWASP Top 10 list, every year, because they're still common.
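The third failure named above — session management that trusts a client-provided identifier — has a standard fix that costs roughly the "few hours of a competent engineer's time" mentioned below. The sketch is generic, not the incident's actual code: the server signs the identifier it issues, so a client can present it but cannot alter it.

```python
import hmac
import hashlib
import secrets

# Server-side signing key; never sent to clients. (Illustrative
# sketch — production systems would use opaque server-side sessions
# or a vetted token library rather than hand-rolling this.)
SECRET = secrets.token_bytes(32)

def issue_token(user_id: str) -> str:
    """Sign the user id so the server can detect tampering."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token: str):
    """Return the user id only if the signature checks out."""
    user_id, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None

# The failure mode, by contrast, is the handler that reads
# user_id straight from the request and trusts it as-is.
```

The vulnerable version and the safe version differ by a dozen lines; the post-incident versions differ by months of engineering capacity.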
Both incidents were recoverable. The recovery was expensive, distracting, and damaging to customer trust in ways that took a long time to repair. In both cases, catching the flaw at design time would have cost a few hours of a competent engineer's time. The post-incident cost was measured in months of engineering capacity and some number of customer conversations that I would rather not have had.
I will not work on a team that treats security as someone else's problem. This is not moral positioning — it's the conclusion I've reached from watching the cost curve play out twice. The teams that treat security as inherent to engineering produce better products. The teams that treat it as someone else's job produce products that, eventually, demonstrate why it wasn't.
The thing that surprised me most
Eighteen years in, the thing that still surprises me is how consistently the hardest problems were never the technical ones. The hardest problems were always about trust, about communication, and about shared understanding of what we were actually trying to do.
The GPS backend that survived fourteen years survived not because of exceptional code but because everyone who worked on it agreed on what it was for and what "correct" meant in its domain. The biathlon platform won the Red Dot not because of clever implementation but because the design team and the engineering team had a shared performance contract they both honored. The failed management stint failed not because the technical decisions were wrong but because the relational foundation hadn't been built before the decisions were made.
The technical problems, when they were hard, were hard in ways that were finite and soluble. The human problems — getting a team aligned on a direction, rebuilding trust after a mistake, communicating a difficult change to people who had invested in the current state — were hard in ways that didn't resolve cleanly and that required more patience and honesty than technical problem-solving tends to reward.
Eighteen years in
I'm more curious about the next problem than nostalgic about the solved ones. The GPS collar that stays connected through a Finnish spruce forest is a good problem. The workflow software that helps 190,000 hunters manage their seasons is a good problem. The team that needs to grow from embedded Linux into cloud-native AWS is a good problem. The systems that need to be secure not because compliance requires it but because users deserve it — those are good problems.
The curiosity has been the only leading indicator of longevity I've found. The engineers I've worked with who are still doing excellent work after fifteen and twenty years are not the ones who found the perfect answer and stopped looking. They're the ones who stayed genuinely interested in the next problem. That curiosity is worth protecting, in yourself and in the people you lead. It's the only thing I've found that reliably outlasts the fashions and the frustrations and the organizational entropy that accumulates around any system that has been running long enough to matter.
The 2008 version, with his strong opinions about naming conventions and his system with no monitoring, would have been better served by someone telling him that clearly earlier. I'm telling whoever will listen now.