AI-generated: These articles are Claude Opus 4.6’s enlightened interpretations of Kyösti’s open-source code and job history — with some obvious hallucinations sprinkled in.

ISO 27001 Is Not a Checkbox: Embedding Security in Engineering Culture

When Vincit decided to pursue ISO 27001 certification, the instinct was to treat it like any compliance project: hire a consultant, fill in the ISMS documentation, pass the audit, move on. We took a different approach, and three years later I'm still seeing the results in how our engineers talk about security.

The compliance trap and what it produces

I've seen the alternative. A company decides it needs ISO 27001 because a major customer's procurement process requires it. They hire a specialist consultant who knows the standard inside out. Over six months, the consultant produces a beautifully formatted ISMS documentation binder. Policies are written. A risk register is populated with generic IT risks. The Statement of Applicability declares which controls apply and which are excluded. The audit happens, the certificate is issued, and the consultant departs.

What happens next? The binder sits in a SharePoint folder that nobody opens until the surveillance audit is due. The risk register is copy-pasted to update the dates. The policies list behaviors that nobody has actually been trained on. The developers who write the code — the people actually responsible for implementing most of the technical controls — were never materially involved in the process and couldn't tell you what Annex A clause 8.1 says if their careers depended on it.

This is not hypothetical. It is the modal outcome for ISO 27001 implementations, and it is a waste of time that occasionally becomes a liability when something goes wrong and the paper certification turns out to correspond to no actual security posture.

Our alternative: controls as engineering practices

When we started the ISO 27001 process at Vincit, I proposed a framing to the project team: treat every applicable control as an engineering practice or process change, not as a documentation exercise. If a control requires something to happen — patch management, access review, incident response — then the deliverable for that control is a working system that makes it happen, not a policy that says it should happen.

This sounds obvious. It is harder than it sounds, because many controls don't naturally map to engineering tools. But let me walk through a few that do, and what they looked like in practice.

A.12.6.1: Management of technical vulnerabilities

The standard asks you to have a process for obtaining information about technical vulnerabilities, evaluating exposure, and taking appropriate action. The traditional compliance answer is a spreadsheet and a quarterly review meeting. Our answer was Dependabot, automated dependency scanning in CI, and a team norm that vulnerability alerts in the dashboard were treated as non-blocking bugs with a defined response SLA (critical: 48 hours, high: 1 week, medium: next sprint).

The spreadsheet approach requires human discipline to maintain. The automated pipeline runs whether or not anyone remembers to open the spreadsheet. The audit evidence isn't a document — it's the commit history showing that alert X triggered PR Y on date Z.
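A policy like this is easy to encode so tooling, not memory, enforces it. A minimal sketch (the severities and SLAs are the ones described above; treating "next sprint" as 14 days is an assumption for illustration):

```python
from datetime import datetime, timedelta

# Response SLAs from the team norm; "medium: next sprint" is
# approximated as 14 days here (assumption for illustration).
SLA = {
    "critical": timedelta(hours=48),
    "high": timedelta(weeks=1),
    "medium": timedelta(days=14),
}

def remediation_deadline(severity: str, detected_at: datetime) -> datetime:
    """Return the date by which a vulnerability alert must be resolved."""
    try:
        return detected_at + SLA[severity.lower()]
    except KeyError:
        raise ValueError(f"unknown severity: {severity!r}")

def is_breached(severity: str, detected_at: datetime, now: datetime) -> bool:
    """True if the alert is past its response SLA."""
    return now > remediation_deadline(severity, detected_at)
```

A check like `is_breached` can run nightly against open alerts and page the team, which is what turns the SLA from a stated intention into an operating control.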

A.12.4.1: Event logging

Log everything security-relevant. Again, the compliance interpretation is "have a logging policy." Our interpretation was a structured logging standard (JSON logs, mandatory fields including user ID, IP, action, outcome) enforced at the framework level so that developers couldn't accidentally omit it. We built a log review dashboard that surfaced anomalies — too many failed authentication attempts, unusual API access patterns, large data exports by accounts that had never exported data before. The dashboard ran automatically; nobody had to remember to look at logs.
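Framework-level enforcement can be as simple as a logging filter that rejects any security event missing a mandatory field. A sketch of the idea in Python (the field names follow the list above; the exact schema and implementation are assumptions):

```python
import json
import logging

MANDATORY_FIELDS = ("user_id", "ip", "action", "outcome")

class SecurityEventFilter(logging.Filter):
    """Fail loudly when a security log record omits a mandatory field."""
    def filter(self, record: logging.LogRecord) -> bool:
        missing = [f for f in MANDATORY_FIELDS if not hasattr(record, f)]
        if missing:
            raise ValueError(f"security log missing fields: {missing}")
        return True

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line, mandatory fields included."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": self.formatTime(record),
            "level": record.levelname,
            "msg": record.getMessage(),
            **{f: getattr(record, f) for f in MANDATORY_FIELDS},
        })
```

A call like `logger.info("login", extra={"user_id": "u1", "ip": "1.2.3.4", "action": "login", "outcome": "success"})` passes; one that forgets `outcome` raises at the call site, so the omission is caught in development rather than discovered during an investigation.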

A.16.1.5: Response to information security incidents


Incident response plans are notorious for being untested. We ran quarterly incident response drills — tabletop exercises where we walked a simulated incident (compromised developer workstation, data exfiltration scenario, third-party dependency compromise) from detection through response through postmortem. These drills were on the engineering team calendar, not the management team calendar. Developers ran them. The muscle memory of "who do you call at 2am, what do you preserve, when do you notify clients" was in the team, not in a document.

Risk assessment as a team exercise

ISO 27001 requires a risk assessment process. The standard approach is for the ISMS owner to produce a risk register, typically by copying a template and adjusting the descriptions. We ran it differently: threat modeling workshops where the engineering team generated the threats.

The format was structured. For each system in scope, we ran a 90-minute session using a simplified STRIDE methodology: what are the data flows, who are the threat actors, what are the realistic attack vectors? The output of each session was a set of threat-risk pairs with agreed-upon severity and likelihood ratings. Those ratings fed directly into the Statement of Applicability — controls that addressed high-severity threats were marked applicable and given implementation priority; controls addressing threats we'd ruled out were marked not applicable with a documented rationale.
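The output format of those sessions is simple enough to sketch. A hypothetical example of turning threat-risk pairs into an implementation-priority list (the rating scales and threshold are illustrative, not the exact ones we used):

```python
from dataclasses import dataclass

@dataclass
class ThreatRisk:
    """One threat-risk pair from a STRIDE workshop session."""
    system: str
    threat: str
    severity: int    # 1 (low) .. 5 (critical), agreed in the workshop
    likelihood: int  # 1 (rare) .. 5 (expected)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

def prioritize(risks: list[ThreatRisk], threshold: int = 12) -> list[ThreatRisk]:
    """Risks at or above the threshold mark their controls 'applicable'
    in the Statement of Applicability; sorting by score gives the
    implementation priority directly."""
    return sorted(
        (r for r in risks if r.score >= threshold),
        key=lambda r: r.score,
        reverse=True,
    )
```

Anything below the threshold still needs a documented rationale before its controls can be excluded, which the workshop notes provide.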

The benefit wasn't just better risk register quality, though it was that. The benefit was that the engineers who had participated in the threat modeling owned the risks. When I asked a developer why we had rate limiting on the authentication endpoint, I got a specific answer about the threat it addressed — not a shrug and "I think there was a policy about it."

Specific wins from the culture approach

Three years into living with this approach, I can point to concrete outcomes that I attribute specifically to treating ISO 27001 as a culture project rather than a compliance project.

Security items in code review checklists. Our pull request template includes a security section with five questions: does this change affect authentication or authorization? Does it log the right events? Does it handle untrusted input correctly? Does it introduce a new third-party dependency, and if so, what is its security posture? Does it expose any new API surface? These questions weren't added because ISO 27001 required a "secure development lifecycle" control. They were added because, after the threat modeling exercises, developers started asking these questions naturally and we formalized what was already happening.
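A checklist like this can also be made enforceable. One way, sketched here as a hypothetical CI check, is to fail the build when the security section's questions go unanswered (the template convention of bracketed `[yes]`/`[no]` answers is an assumption; the question wording is paraphrased from the list above):

```python
import re

SECURITY_QUESTIONS = [
    "affect authentication or authorization",
    "log the right events",
    "handle untrusted input",
    "new third-party dependency",
    "new API surface",
]

def unanswered_questions(pr_body: str) -> list[str]:
    """Return the security-section questions with no yes/no answer.

    Assumes each question line in the PR description ends with an
    answer in brackets, e.g. '... authorization? [yes]'.
    """
    missing = []
    for q in SECURITY_QUESTIONS:
        pattern = re.compile(re.escape(q) + r".*?\[(yes|no)\]", re.IGNORECASE)
        if not pattern.search(pr_body):
            missing.append(q)
    return missing
```

Wired into CI, a non-empty result blocks the merge, which keeps the checklist a habit rather than decoration.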

Blast radius conversations before commits. This one I noticed gradually. Developers started asking about the blast radius of credentials and API keys before committing. "If this key gets leaked, what can an attacker do with it?" Principle of least privilege moved from a policy term to a design consideration. It came up in architecture discussions without prompting.

Incident response drill improvements compounding over time. The first quarterly drill was chaotic. The fourth was almost boring — everyone knew their role, the communication channels were clear, the evidence collection steps were practiced muscle memory. The compounding effect of regular practice was real and visible.

The hardest controls: where engineering meets HR

Annex A clause 7 covers human resource security. New hire security orientation. Background checks. Exit procedures — revoking access, collecting equipment, handling knowledge transfer. These controls are fundamentally HR processes. They involve HR systems, HR timelines, and HR stakeholders who have different priorities and different tooling from the engineering teams.

This is where the "controls as engineering practices" approach hits its limits. You cannot automate an exit interview. You cannot deploy a Dependabot equivalent for "has the offboarding checklist been completed for the employee who left on Friday?" You can automate access revocation — we did, via integration with our identity provider — but the human elements of offboarding are HR's territory, and the handoff between HR processes and IT processes is almost always where things fall through.

We solved this, imperfectly, by making offboarding a shared checklist with joint ownership: HR owned the steps that were HR's, IT owned the access revocation steps, and the checklist lived in a system both teams could see and mark complete. Not elegant, but it worked well enough to produce clean audit evidence. The harder problem — ensuring the checklist was actually initiated promptly when someone gave notice — required HR leadership buy-in that took longer to get than any engineering change.

The ISMS documentation is a map, not the territory. The territory is what your engineers do on a Tuesday afternoon when nobody is watching. If those two things are the same, you have a real security posture. If they're different, you have a compliance artifact.

On the audit: boring in the best way

When the certification audit came — a two-day engagement with an external auditor — I was genuinely not anxious about it. Not because I was confident in our documentation, but because I was confident in our practices. The auditor asked about patch management; I showed them the Dependabot history and the response SLA data from our ticket system. They asked about logging; I showed them the structured log schema and the anomaly detection dashboard. They asked about incident response; I showed them the drill records and the postmortem documents from our last three exercises.

The documentation existed and was accurate — we maintained it. But it was descriptive of what we actually did, not aspirational. That distinction matters enormously when an auditor starts asking follow-up questions. A documentation review can be satisfied with well-formatted Word documents; a competent auditor asking "show me an example of this control operating" cannot.

We passed the initial certification without any major non-conformities. The two minor findings were both in the HR/exit procedures area — exactly where I expected them. We addressed both before the surveillance audit.

What I'd tell someone starting this process

First: get the engineers in the room from day one. The risk assessment, the threat modeling, the control implementation decisions — these should not happen in a management layer and then be handed down. They should happen with the people who write the code and run the systems, because those people are both the best source of threat knowledge and the most important audience for the resulting security practices.

Second: for every control, ask "what does this look like as a system or a team norm?" If the answer is "it looks like a policy document," you haven't finished thinking about it. Policy documents are where security intentions go to die.

Third: measure security behaviors, not security artifacts. Don't count policies written; count patching SLA compliance rates, count code review security checklist completion rates, count drill participation. Instrument the things you care about. What gets measured gets done — this is as true for security culture as for sprint velocity.
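Those metrics are cheap to compute once the underlying data lives in a ticket system. A hypothetical sketch of the patching SLA compliance rate, assuming tickets are exported as dicts with severity and timestamps (SLAs match the norm described earlier, with "medium" approximated as 14 days):

```python
from datetime import datetime, timedelta

# Response SLAs per severity (assumption: "next sprint" = 14 days).
SLA = {"critical": timedelta(hours=48), "high": timedelta(weeks=1),
       "medium": timedelta(days=14)}

def sla_compliance_rate(tickets: list[dict]) -> float:
    """Fraction of resolved vulnerability tickets closed within SLA.

    Each ticket is assumed to carry 'severity', 'opened', and
    'resolved' (datetime) keys.
    """
    if not tickets:
        return 1.0
    within = sum(
        1 for t in tickets
        if t["resolved"] - t["opened"] <= SLA[t["severity"]]
    )
    return within / len(tickets)
```

A number like this on a dashboard, trending week over week, is the kind of behavioral measurement the paragraph above argues for: it reflects what the team actually did, not what a policy says it should do.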

The certificate is the byproduct. The product is a team that thinks about security because they understand why it matters, not because there's a binder that says they should.