Security as a Product Value: Building It In From Day One
I've seen two modes of product security: the bolted-on kind, where a security audit happens six weeks before launch and produces a list of surprises, and the built-in kind, where engineers ask "what's the blast radius of this?" before opening a PR. The distance between those two modes is almost entirely cultural, not technical.
The cost curve of late security
The numbers are industry estimates and vary by source, but the curve itself is real and consistent: finding a security issue in the design phase costs roughly one hour of one engineer's time — a conversation, a whiteboard change, a revised data flow diagram. The same issue found in code review costs perhaps six hours — reading the code, understanding the context, explaining the finding, implementing the fix, re-reviewing. The same issue found after deployment costs sixty or more hours — incident triage, impact assessment, patch development, deployment, customer communication, post-mortem, control improvement documentation.
This is the same general curve as any defect in software development, and it makes the same argument for moving security left that has been made for quality generally under Agile and DevOps. The argument is not new. What's new, in my experience, is that most engineering organizations still haven't internalized it for security specifically, even when they have internalized it for functional correctness.
The reason, I think, is that functional defects are immediately visible: the feature doesn't work, the test fails, the user reports a bug. Security defects are often invisible until they're exploited, and even then the connection between the defect and the compromise may not be obvious from the incident report. The feedback loop that drives learning and improvement doesn't fire in the same way.
What building it in actually looks like
The phrase "security by design" gets repeated often enough that it has lost specificity. Here is what it concretely means in a development process I'd consider functional:
Threat modeling in the same sprint that defines the feature. Not in a separate "security workshop" three months before launch, and not as an afterthought when the implementation is complete. When a feature is written up in a ticket and broken into tasks, a threat model for that feature is completed before any code is written. This doesn't need to be a formal STRIDE analysis with a full data flow diagram — for most features, a 30-minute conversation asking "who is the attacker, what do they want, what can they do, what stops them?" produces enough to define the security acceptance criteria.
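The four-question conversation can be captured as a structured record attached to the ticket. A minimal sketch in Python — the field names and the example content are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    """Lightweight per-feature threat model, stored alongside the ticket."""
    feature: str
    attacker: str            # who is the attacker?
    goal: str                # what do they want?
    capabilities: str        # what can they do?
    mitigations: list[str]   # what stops them?
    accepted_risks: list[str] = field(default_factory=list)

    def to_acceptance_criteria(self) -> list[str]:
        # Each mitigation becomes a verifiable security AC on the story.
        return [f"Verify: {m}" for m in self.mitigations]

tm = ThreatModel(
    feature="Password reset",
    attacker="Anonymous internet user",
    goal="Take over an account via a forged or guessed reset link",
    capabilities="Can submit arbitrary email addresses, replay links",
    mitigations=[
        "reset token is cryptographically random and single-use",
        "reset endpoint is rate-limited per IP and per account",
    ],
)
print(tm.to_acceptance_criteria())
```

The point of the structure is not the code; it's that the mitigations list converts mechanically into the security acceptance criteria described below, so the threat model and the definition of done stay in sync.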
Security acceptance criteria alongside functional acceptance criteria. Every feature story has a definition of done that includes both. "User can reset their password" has functional AC: the user receives an email, the link expires after N minutes, the old password is invalidated. It also has security AC: the reset token is cryptographically random and single-use, the email address is not leaked in error messages, the endpoint is rate-limited. The security AC is written before implementation and verified before the story is closed.
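The password-reset security AC above can be made concrete with a short sketch. This is an illustration under assumptions, not a production implementation — a real service would store tokens in a datastore rather than a module-level dict, and the function names are hypothetical:

```python
import secrets
import time

RESET_TOKEN_TTL_SECONDS = 15 * 60  # "link expires after N minutes"

# token -> (user_id, issued_at); stands in for a real datastore
_active_tokens: dict[str, tuple[str, float]] = {}

def issue_reset_token(user_id: str) -> str:
    # secrets draws from the OS CSPRNG; 32 bytes ~ 256 bits of entropy,
    # satisfying "cryptographically random"
    token = secrets.token_urlsafe(32)
    _active_tokens[token] = (user_id, time.time())
    return token

def redeem_reset_token(token: str):
    """Return the user_id if the token is valid, else None.

    Unknown and expired tokens return the same value, so the caller can
    emit one generic error message and avoid leaking whether an email
    address (or token) exists.
    """
    record = _active_tokens.pop(token, None)  # pop => single-use
    if record is None:
        return None
    user_id, issued_at = record
    if time.time() - issued_at > RESET_TOKEN_TTL_SECONDS:
        return None
    return user_id
```

Note that rate limiting, the third security AC, lives at the endpoint layer and isn't shown here; the sketch covers only the token properties.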
Automated SAST and DAST in CI. Static analysis (SAST) runs on every PR — tools like Semgrep, Bandit for Python, or govulncheck for Go. Dynamic analysis (DAST) runs against a staging environment on every merge to main. Neither replaces human judgment, but both catch the mechanical errors that humans are bad at catching consistently: SQL injection patterns that survived code review, dependency versions with known CVEs, insecure deserialization patterns.
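To make "mechanical errors that humans are bad at catching consistently" concrete, here is a toy static check — nothing like a real SAST tool, just the mechanical core of one rule — that flags f-strings passed to a call named `execute`, the classic shape of SQL built by interpolation instead of bound parameters:

```python
import ast

def find_fstring_sql(source: str) -> list[int]:
    """Return line numbers where an f-string is passed to .execute(...).

    Real tools (Semgrep, Bandit) match far more variants of this
    pattern; this sketch shows only the single most common one.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == "execute"):
            for arg in node.args:
                if isinstance(arg, ast.JoinedStr):  # f-string literal
                    findings.append(arg.lineno)
    return findings

bad = 'cur.execute(f"SELECT * FROM users WHERE id = {user_id}")'
good = 'cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))'
print(find_fstring_sql(bad))   # → [1]  (flagged)
print(find_fstring_sql(good))  # → []   (parameterized query, clean)
```

A human reviewer will miss this pattern some fraction of the time; the automated check misses it never, which is the whole argument for running it on every PR.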
The security champion model
The security champion program is the organizational mechanism I've seen work most consistently in organizations of roughly 10–50 engineers. The structure is simple: one engineer per team, rotating on a set cadence (quarterly or semi-annual rotations work well), spends a dedicated half-day per week on security-related work. They have access to the security team's communication channel — in most organizations, a Slack workspace or equivalent. They attend a monthly sync with other champions and the security team.
The key distinction is security amplifier, not security gatekeeper. A gatekeeper model — where features must be approved by the security champion before proceeding — creates bottlenecks, resentment, and security theater (champions learn to approve things quickly to keep the process moving). An amplifier model — where the champion is a resource for their team, available to help with threat models, to review unusual authentication flows, to advise on crypto choices — creates genuine capability growth distributed across teams.
The selection criterion for champions matters more than the program design. The most effective security champions I've worked with were not the engineers with the most security knowledge; they were the engineers with the most credibility with their peers and the most curiosity about how systems fail. Security knowledge can be taught. Credibility and curiosity are harder to develop.
Psychological safety for security
This is the piece that most security program designs skip, and it is not a soft concern. If engineers are afraid to report potential vulnerabilities — because reporting means they wrote insecure code, and insecure code means they're bad engineers, and being labeled a bad engineer has career consequences — then vulnerabilities go unreported until they're serious. The reporting failure mode is harder to detect than the technical failure mode and more damaging.
The mechanism that addresses this is blameless post-mortems applied consistently to security incidents. When a vulnerability is found or a security incident occurs, the post-mortem asks: what conditions made this possible? what would make it impossible or more detectable? what can the system do differently? It does not ask: who wrote this code, and why did they not know better?
Blameless post-mortems don't mean no consequences for deliberate misconduct. They mean that honest mistakes in complex systems are analyzed as system failures, not personnel failures — because that's what they are, and analyzing them as personnel failures produces worse outcomes.
The blameless norm also applies to proactive reporting. An engineer who notices a potential XSS vector in a component they didn't write should be able to report it without creating a political problem for the team that owns that component. The recognition that goes to the engineer who reports a finding before it's exploited should be equal to or greater than the negative attention that goes to the team whose code contained the finding.
Metrics that actually changed behavior
Three metrics moved the needle in my experience; many others produced dashboards that nobody looked at.
Time-to-patch for critical CVEs, on a public board visible to leadership. Not a private operations metric — a visible organizational commitment. When the clock on a critical CVE is visible to the CTO and the engineering directors, patch prioritization happens differently than when it's managed in a private ops queue. The public visibility changes the political economy of "we'll get to it next sprint."
Security items in sprint velocity, treated like any other work. If security work — patching, threat modeling, security debt items — is tracked separately from feature work and not counted in velocity, it will always be deprioritized. If it's treated as equivalent story points, it competes fairly for capacity. This is an organizational commitment, not a tooling change; it requires product and delivery leadership to agree that security work has equal standing with feature work.
Percentage of new features with threat model completed before coding starts. This one requires a definition of "threat model" that's lightweight enough to be done consistently — a form with five questions, completed in 30 minutes, stored in the ticket. Track the completion rate. When it's below 80%, ask why. Usually the answer is either that the ticket was poorly defined (engineering can't threat model something vague) or that the team is under time pressure (a prioritization conversation, not a security conversation).
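The completion-rate metric is simple enough to compute from ticket data. A sketch, assuming hypothetical ticket fields (`type`, `threat_model_at`, `coding_started_at` as timestamps) — the actual fields depend on your tracker:

```python
def threat_model_completion_rate(tickets: list[dict]) -> float:
    """Share of feature tickets whose threat model predates coding."""
    features = [t for t in tickets if t["type"] == "feature"]
    if not features:
        return 1.0  # nothing to model counts as fully compliant
    done = [t for t in features
            if t.get("threat_model_at") is not None
            and t["threat_model_at"] <= t["coding_started_at"]]
    return len(done) / len(features)

tickets = [
    {"type": "feature", "threat_model_at": 100, "coding_started_at": 200},
    {"type": "feature", "threat_model_at": None, "coding_started_at": 300},
    {"type": "bug", "coding_started_at": 150},  # bugs aren't counted
]
rate = threat_model_completion_rate(tickets)
print(f"{rate:.0%}")  # → 50% — below the 80% line, so ask why
```

Note the ordering check: a threat model completed after coding started is counted as missing, which is the behavior the metric's definition requires.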
What doesn't work
Annual security training as a compliance checkbox. Engineers who click through 45 minutes of video about phishing and SQL injection once a year do not become more security-aware. They become more familiar with the concept of annual security training. The learning retention at 12 months is essentially zero. Monthly 15-minute sessions on specific, relevant topics — a recent CVE in a dependency your team uses, an anonymized post-mortem of a security incident in your industry — have materially better retention at a fraction of the annual time cost.
Penetration tests as the primary security mechanism. Pen tests are valuable for validating that security controls work as designed. They are not a substitute for building controls in the first place. An organization that relies on annual pen tests to find vulnerabilities is an organization that accepts having undetected vulnerabilities for most of the year. The pen test should be finding the gaps in a mature program, not discovering that the program doesn't exist.
CISO as the person everyone routes security questions to. This doesn't scale past about 20 engineers, and at 50 engineers it's a bottleneck that actively degrades the organization's security posture by making security feel like someone else's problem. The CISO's job is to set the program design and maintain the security culture; the security knowledge must be distributed across product teams to be effective at scale.
The product and commercial angle
B2B customers increasingly evaluate security posture during procurement. Security questionnaires — the hundred-question spreadsheets that enterprise procurement teams send before signing a contract — have become table stakes in most industries. ISO 27001 certification shortens the procurement conversation: instead of answering 100 questions, you provide the certificate and the scope statement, and most questions are answered by reference.
This makes ISO 27001 a sales enabler, not just a compliance cost. The certification doesn't guarantee your product is secure; it guarantees that your organization has a systematic approach to managing information security risks. That's what enterprise buyers are actually evaluating, and a certificate from an accredited body is more credible than a self-assessment of equivalent content.
The more durable commercial angle is "secure by default" as a product differentiator. Products that leak data, get compromised, or require customers to configure their own security controls carry operational and reputational costs for buyers. Products that are secure in their default configuration, that make the safe choice the easy choice, that produce audit logs without customer configuration — these products are genuinely easier to operate and govern at scale. In B2B, that translates directly into procurement preference and into reduced support burden post-sale.
Where most companies are stuck
Security is owned by a team and not embedded in product teams. This is the most common configuration, and it produces the most common failure mode: security as a function that product teams route around until they can't. Fixing this requires an organizational structure change — not a tool purchase, not a training program, not a new policy document. The security function must have a mandate that includes the authority to require security practices from product teams, and product teams must have the capability to execute on those requirements without routing everything to the security team.
That is a harder problem than buying a SAST tool. It's also the only problem that, when solved, actually improves security outcomes at scale. Everything else is optimization around the edges of a structural limitation. The organizations that have genuinely built security into product development all share one characteristic: security work is not a function, it's a value — and the difference between those two things is visible in every planning meeting and every post-mortem and every hire.