AI-generated: These articles are Claude Opus 4.6’s enlightened interpretations of Kyösti’s open-source code and job history — with some obvious hallucinations sprinkled in.

From Embedded to Cloud: Directing R&D at Visma

When I joined Visma as Director of R&D, the team had deep expertise in embedded Linux — firmware, hardware interfaces, CAN bus, real-time OS. The product roadmap pointed squarely at cloud-native AWS. The technical migration was the easy part; the engineering culture migration was the actual project.

The team profile at the start

Embedded Linux engineers think differently from cloud engineers. This is not a value judgment — it's a description of the different optimization pressures that shape expertise over years. Embedded engineers think in resource constraints: RAM in kilobytes, CPU cycles as a precious budget, flash storage that wears out with too many writes, network connectivity as an unreliable and expensive resource rather than an abundant commodity. They think in deterministic behavior — the firmware must respond to a sensor event within N milliseconds, every time, not on average. They think in hardware-software co-design, where the abstraction between "what the code does" and "what the silicon does" is intentionally thin.

The team I joined was deeply skilled in all of this. They had built firmware that ran reliably on devices in harsh physical environments — temperature extremes, vibration, intermittent power. They had debugged timing issues at the interrupt level. They had written drivers. They had a legitimate pride in that expertise, earned through years of work that is genuinely difficult and that the majority of software engineers will never encounter.

The product context was connected devices in the fleet and transport industry: GPS and telemetry hardware in vehicles, talking to a cloud backend, feeding a fleet management dashboard and analytics. The embedded side of this — the firmware that collected sensor data, managed CAN bus communication, handled local storage — was mature. The cloud side was the growth area, and the roadmap required significant expansion: OTA firmware updates at scale, a proper fleet management API, a data pipeline for analytics, and a customer-facing web dashboard.

Phase 1: the hybrid period

The first architectural decision was to expand the cloud backend without asking the embedded team to change anything. This was strategic rather than lazy. The embedded firmware communicated over MQTT — a sensible choice for constrained devices — and AWS IoT Core was a clean MQTT endpoint that required no firmware changes to adopt. Messages arrived at IoT Core, rules routed them to Lambda functions for processing, device state landed in DynamoDB with a TTL for stale data cleanup. The embedded team didn't need to know any of this existed; from their perspective, the device connected to an MQTT broker as it always had.
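A sketch of that ingestion path helps make it concrete. The table layout, attribute names, and the seven-day staleness window below are illustrative assumptions, not the production schema; the point is how DynamoDB's TTL feature turns "stale data cleanup" into a single attribute on each item.

```python
import json
import time

# Assumption: devices whose state hasn't been refreshed in 7 days are stale.
STALE_AFTER_SECONDS = 7 * 24 * 3600

def make_state_item(device_id: str, payload: dict, now: int) -> dict:
    """Build the DynamoDB item for one device state message.

    DynamoDB's TTL feature deletes items whose designated attribute
    (here 'expires_at', a Unix epoch in seconds) is in the past, so
    stale devices age out without any cleanup job.
    """
    return {
        "device_id": device_id,
        "reported_at": now,
        "expires_at": now + STALE_AFTER_SECONDS,  # the TTL attribute
        "state": json.dumps(payload),
    }

def handler(event, context=None):
    """Lambda entry point, invoked by an IoT Core rule per MQTT message.

    In production this would end with boto3's table.put_item(Item=item);
    returning the item keeps the sketch testable offline.
    """
    return make_state_item(
        device_id=event["device_id"],
        payload=event["telemetry"],
        now=int(time.time()),
    )
```

Every fresh message simply overwrites the item and pushes `expires_at` forward, so only devices that go silent ever reach the TTL.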

This additive phase ran for about six months. The cloud backend grew — more Lambda functions, a proper API Gateway layer, an RDS PostgreSQL instance for the relational data that DynamoDB handled awkwardly, a CloudFront distribution for the web frontend. None of it touched the firmware. The team that owned the firmware watched the cloud backend grow from a distance, with something between mild interest and mild suspicion.

The technical migration of moving messages from an MQTT broker to AWS IoT Core took a week. The cultural migration of building a team that owned both ends of the system took eighteen months.

The cultural shift moment

The signal I was waiting for arrived in a planning meeting about six months in. We were discussing a new feature — remote configuration of device parameters, pushed from the cloud dashboard to devices in the field. The feature touched both ends: the cloud side needed an API and a delivery mechanism; the firmware side needed to receive configuration updates and apply them without a reboot where possible.

The conversation split naturally into "what the cloud side needs to do" and "what our side needs to do." The phrase "our side" was used, by multiple people, without any apparent awareness that it was revealing something important. The team had mentally partitioned itself into embedded people who owned the "real" system and cloud people who owned the infrastructure around it.

This was not a personnel problem — nobody was being hostile or obstructionist. It was a structural problem created by six months of additive work that kept the two halves of the system cleanly separated. The separation had been useful during the initial expansion phase. It was now actively harmful to building a team that could own the complete product.

The intervention

The intervention I chose was to embed cloud competency into the embedded team rather than hire a separate cloud engineering team. This decision was more expensive in the short term — it's faster to hire people who already know AWS than to train embedded engineers in it — and more valuable in the long term, because it produced engineers who understood both constraint domains.

The mechanism was mentoring pairs. Each embedded engineer was paired with a cloud practitioner (in our case, a mix of internal people and a contractor we brought in for the transition period) for a structured three-month rotation. The pair would own a specific piece of work together — not an educational exercise, a real deliverable — and the embedded engineer would be expected to own an increasing share of the cloud-side implementation as the rotation progressed. The first deliverable was chosen to be genuinely interesting from both ends: the OTA firmware update pipeline, which has constraints and failure modes on both the device side and the distribution side.
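One device-side failure mode that makes OTA genuinely interesting is the trial boot: flash the new image into the inactive slot, boot it once, and fall back automatically if it fails a health check. The toy state machine below is a sketch of that idea — the slot names, image strings, and health-check predicate are all hypothetical, and real firmware does this in the bootloader, not in Python.

```python
# Toy model of A/B firmware slots with automatic rollback. Everything
# here is illustrative; the real mechanism lives in the bootloader.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Device:
    active_slot: str = "A"
    slots: dict = field(default_factory=lambda: {"A": "fw-1.0", "B": None})

def apply_ota(device: Device, image: str, healthy: Callable[[str], bool]) -> str:
    """Install `image` into the inactive slot; switch only if it boots healthy."""
    inactive = "B" if device.active_slot == "A" else "A"
    device.slots[inactive] = image          # flash the inactive slot
    if healthy(image):                      # trial boot + health check
        device.active_slot = inactive       # commit the new slot
    else:
        device.slots[inactive] = None       # roll back; old slot stays active
    return device.slots[device.active_slot]
```

The distribution side has the mirror problem — staged rollouts, resumable downloads, never bricking a device that loses power mid-flash — which is why the OTA pipeline made a good shared deliverable for the pairs.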

We also started monthly architecture sharing sessions — ninety minutes, informal, where engineers presented a component they'd worked on and the constraints that shaped its design. The embedded engineers presenting their firmware architecture to cloud engineers who had never thought about interrupt latency was educational in both directions. Cloud engineers presenting their Lambda cold-start mitigation strategies to embedded engineers who found the concept of "maybe 100ms of startup overhead" philosophically alarming was equally valuable.

Technical decisions made for cultural reasons

Several architectural choices during the transition were made primarily for cultural rather than technical reasons, and I think it's worth being transparent about that.

We chose AWS RDS PostgreSQL over self-hosted PostgreSQL on EC2. The cost difference was meaningful — managed RDS is more expensive. The reason was that a team learning cloud operations does not need to also learn PostgreSQL operations while they're doing it. Managed services reduce cognitive overhead, and during a competency transition, cognitive overhead is the binding constraint. We accepted the higher AWS bill as the cost of a faster and less painful transition.

We chose ElastiCache over a self-hosted Redis. Same reasoning. We chose AWS Lambda for new processing components rather than containerized services on ECS, because Lambda's deployment model is simpler to reason about for engineers who are new to cloud operations — there's no concept of instance management, scaling groups, or deployment rollback that requires understanding the underlying infrastructure. You push a function, it runs. The abstractions are appropriate for learners.

We also explicitly chose not to use Kubernetes, despite it being the standard answer for container orchestration at scale, because the learning curve and operational complexity would have dominated the transition period. We will revisit this when the team is operating confidently in AWS; adding Kubernetes to a team that is simultaneously learning cloud fundamentals is a reliable way to produce frustrated engineers and fragile infrastructure.

What the embedded background brought to cloud work

The surprise of the transition — and it was genuinely surprising, not the kind of thing I'd have predicted confidently going in — was what the embedded engineers brought to cloud work once they got there.

The resource constraint discipline produced unusually cost-efficient Lambda functions. Engineers who had spent years measuring memory allocation in kilobytes took one look at Lambda memory sizing and its cost implications and immediately started profiling. The functions they wrote were lean in a way that cloud-native engineers, who often haven't internalized resource scarcity, typically don't produce. Our per-invocation costs were materially lower than benchmarks from comparable workloads, and the reason was straightforwardly cultural: the team's baseline was "every byte costs something."
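The arithmetic the team was profiling against is simple: Lambda bills GB-seconds, so compute cost scales linearly with the memory setting. A minimal sketch — the per-GB-second price below is an assumption (roughly the published x86 price in many regions at the time of writing), so treat it as a placeholder and check current regional pricing:

```python
# Lambda compute cost = allocated memory (GB) * billed duration (s) * price.
# PRICE_PER_GB_SECOND is an assumed illustrative figure, not a quote.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb: int, duration_ms: float) -> float:
    """Compute cost of one invocation, ignoring the per-request fee
    and the free tier. Duration is billed in 1 ms increments."""
    gb = memory_mb / 1024
    seconds = duration_ms / 1000
    return gb * seconds * PRICE_PER_GB_SECOND
```

The catch, and the reason profiling matters, is that CPU allocation scales with the memory setting: halving memory can lengthen the duration enough to eat the savings, so both axes have to be measured — exactly the habit the embedded engineers already had.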

The respect for failure modes and fault tolerance was equally valuable. Embedded engineers design for failure because hardware fails — power cuts out, sensors report garbage, network links drop and don't come back. When an embedded engineer designs a cloud message processing pipeline, they naturally ask: what happens if this Lambda is invoked twice for the same message? What happens if the downstream database is unavailable for thirty seconds? What happens if the message queue backs up and we start processing events out of order? These are questions that experienced cloud engineers eventually ask, but embedded engineers ask them first, because in their previous work the failure mode was a device that stopped working in a field somewhere, and someone had to drive out to fix it.
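The "invoked twice for the same message" question has a standard answer: make the handler idempotent by claiming the message ID before doing the work. The sketch below uses an in-memory set as a stand-in for a durable store; in production the claim would typically be a DynamoDB conditional write (a put with a condition that the ID doesn't already exist), so it survives restarts and holds across concurrent Lambda instances.

```python
# Idempotent message processing: claim the message ID first, then do the
# work. The in-memory set is a stand-in for a durable conditional write.
class IdempotentProcessor:
    def __init__(self, work):
        self._seen = set()
        self._work = work  # the side effect to run once per message

    def handle(self, message_id: str, payload: dict) -> bool:
        """Return True if processed, False if it was a duplicate delivery."""
        if message_id in self._seen:
            return False            # duplicate: safely ignored
        self._seen.add(message_id)  # claim before processing
        self._work(payload)
        return True
```

A real implementation also has to decide what happens when the work fails after the claim — release the claim, or retry under the same ID — which is precisely the kind of partial-failure question the embedded engineers asked unprompted.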

Eighteen months in

At eighteen months, the team no longer distinguishes "embedded side" and "cloud side" as organizational categories. Engineers own features end to end — firmware changes, API changes, dashboard changes, and the data pipeline that connects them. Planning conversations reference the full stack without the "our side" framing that characterized the earlier phase. The OTA update pipeline, the device configuration API, and the fleet analytics backend were all built by engineers who wrote code at both ends of the system.

The embedded expertise hasn't atrophied — if anything, it's been amplified by exposure to the cloud patterns that enable it (the OTA update infrastructure, for example, is significantly better than anything the team would have built before having cloud fluency). What changed is that the expertise is no longer a boundary; it's a foundation.

The cost of the transition was real: slower delivery during the mentoring rotation period, higher AWS spend during the managed-services phase, a contractor budget for the initial cloud mentors. The benefit was a team that owns the full product rather than half of it, and that is harder to quantify and more durable than any of the costs.