DevSecOps Model


To move forward in DevSecOps, leaders must ask themselves three key questions:

  • Where is my organization now?
    This assesses the company's current competencies.
  • Where do I want my organization to be?
    Define a state of “good” for the company in terms of the competitive landscape.
  • How do we get there?
    Identify initiatives that bridge the gap between the two.

To answer these questions, a maturity model is recommended: it presents a prescriptive point of view on a defined domain. A common – yet simple – model might be based on stages, as illustrated below:

  • Beginner
    This phase marks the beginning of the DevSecOps journey. Most important is a shift in culture and mindset that emphasizes sharing and collaboration across technical disciplines, and a desire to improve performance as a team. This is the foundation of DevSecOps.
  • Intermediate
    In this stage, organizations are consistently releasing software but may experience bottlenecks, performance issues, and some team friction. While security controls are shifting earlier in the development process, much of the security-related work is still done towards the end of the process, which can slow down release cycles and result in lower quality code.
  • Advanced
    In this stage, organizations are highly efficient and productive, releasing high quality, secure software on a regular basis to a reliable platform. Security checkpoints are embedded throughout the software development lifecycle.
  • Expert
    These are DevSecOps practices employed by the most cutting-edge organizations. These organizations release high-quality code multiple times per day. Security controls are deeply embedded throughout the SDLC, and security has ceased to be a siloed domain. A key aspect of this stage of maturity is a very high level of process automation across Development, Operations, and Security.

DevOps targeted improvements in the speed and quality of writing and running software by encouraging collaboration and shared responsibility between Dev and Ops teams.

The increased velocity of DevOps teams introduced complications that slow down DevOps lifecycles:

  • Security issues are overlooked
    as DevOps teams focus on functional and non-functional (e.g., performance) requirements.
  • Security is a bottleneck (or ignored)
    as security teams are separated into a dedicated silo, with their own tools, culture, and processes.

These developments call for integrating security more deeply into the DevOps lifecycle. DevSecOps is the logical next stage of that evolution, breaking down the silos between security and DevOps teams and realizing the full potential of the movement.

DevSecOps is the next evolution of DevOps, not a departure from it.

Competencies

DevSecOps – just like DevOps – is a philosophically defined way of collaborating, and it requires a set of competencies for its fulfillment:

  1. People & Culture
    Encompasses organizational structure, communication styles, values, incentives, behaviors, leadership, and individual and team health.
  2. Plan & Develop
    Encompasses how work is prioritized, how much work is planned versus unplanned, how much work is new feature development versus paying down technical debt, and how much risk assessment and code validation factors into the earliest stage of the development process.
  3. Build & Test
    Covers testing processes and automation, quality assurance, code scanning techniques, and build and signature validation (a minimal build-validation sketch follows this list).
  4. Release & Deploy
    This competency focuses on deployment strategies and release frequency, automation of the deployment process, and validation and remediation of deployment issues.
  5. Operate
    Covers infrastructure as code, capacity planning, scaling and reliability, chaos testing and red teaming, patching, and disaster recovery.
  6. Observe & Respond
    Focuses on Service Level Objectives (SLOs), vulnerability and misconfiguration scanning, security monitoring, user experience monitoring, incident management, and post-mortems.
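
The build and signature validation named under Build & Test can start with something as simple as checking a produced artifact against a published checksum before it is promoted. The sketch below is a minimal, hypothetical illustration in Python; the artifact path and expected digest are placeholders, not values from this model:

    # Minimal, hypothetical build-validation step: compare a build artifact's
    # SHA-256 digest against a published checksum before promoting it.
    # The artifact path and expected digest are placeholders.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Return the hex SHA-256 digest of a file, read in chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as handle:
            for chunk in iter(lambda: handle.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def validate_build(artifact: Path, expected_digest: str) -> bool:
        """True if the artifact matches the checksum published alongside the build."""
        return sha256_of(artifact) == expected_digest

    if __name__ == "__main__":
        artifact = Path("dist/app-1.0.0.tar.gz")  # hypothetical artifact name
        expected = "<published sha256 digest>"    # placeholder value
        print("build artifact valid:", validate_build(artifact, expected))

In a real pipeline, a check of this kind would typically sit next to cryptographic signature verification of release artifacts or container images.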

The Model

Combining the stages and competencies described above into a matrix yields a maturity model that provides guiding metrics for maturity shifts:

  • People & Culture
    Beginner: functional teams siloed; high inter-team friction; nascent onboarding processes; burnout common.
    Intermediate: silos breaking down; embracing experimentation & transparency; onboarding process exists; burnout openly discussed.
    Advanced: continuous collaboration across teams; blameless culture; comprehensive onboarding process; burnout quickly addressed.
    Expert: cross-functional teams aligned to products and services; high-trust, experimentation and learning culture; burnout rare.
  • Plan & Develop
    Beginner: risk and security not considered; high technical debt; excessive bug fix work; code not validated.
    Intermediate: limited risk assessment; moderate technical debt; moderate bug fix work; some code validation.
    Advanced: threat modeling and risk assessments; low technical debt; low bug fix work; all code validated.
    Expert: extensive threat modeling/risk assessment; minimal technical debt; new feature focus; all code validated automatically.
  • Build & Test
    Beginner: manual testing; no code scanning; no build/signature validation; limited core functionality testing.
    Intermediate: partial test automation; partial code scanning; partial build/signature validation; partial core functionality testing.
    Advanced: high test automation; dynamic code scanning; significant build/signature validation; significant core functionality testing.
    Expert: complete test automation; comprehensive dynamic code scanning; comprehensive build/signature validation; comprehensive core functionality testing.
  • Release & Deploy
    Beginner: manual deployments; large, infrequent releases; no deployment security posture criteria; difficult to remediate failed deployments.
    Intermediate: partial deployment automation; medium-sized, monthly releases; basic deployment security posture criteria; acceptable failed-deployment remediation times.
    Advanced: high deployment automation; small, weekly releases; detailed deployment security posture criteria; fast failed-deployment remediation times.
    Expert: full deployment automation; numerous daily releases; automated handling of failing deployments; bias to fast-forward fixes.
  • Operate
    Beginner: manual provisioning/configuration; long capacity planning cycles; manual scaling; single availability zone; no chaos testing or red teaming; poor patching hygiene; no disaster recovery strategy.
    Intermediate: partial configuration/provisioning automation; OpEx-based capacity planning; partial auto-scaling; multi-availability zone/region; basic chaos testing or red teaming; basic patching hygiene; basic DR strategy.
    Advanced: extensive configuration/provisioning automation; capacity planning based on seasonality/growth; significant auto-scaling; multiple cloud providers / high availability; significant chaos testing & red teaming; fast patching; comprehensive DR strategy.
    Expert: all infrastructure configurations and instructions instantiated as code; capacity planning based on granular usage trends/predictions; comprehensive auto-scaling; multiple cloud providers / very high availability; continuous chaos testing & red teaming; patching SLA; DR plans tested often.
  • Observe & Respond
    Beginner: no SLOs formed; no vulnerability/misconfiguration scanning; no security metrics defined; siloed telemetry; user journeys unknown; excessive MTTD and MTTR; no post-mortems.
    Intermediate: basic SLOs formed; partial vulnerability/misconfiguration scanning; some security metrics defined & visible; some common observability data sets; basic understanding of user experience; moderately high MTTD and MTTR; basic post-mortems.
    Advanced: SLOs & error budgets favored; significant vulnerability/misconfiguration scanning; security metrics defined & visible for most services; common observability data platform; detailed user journey visibility; moderate-to-low MTTD and MTTR; detailed post-mortems.
    Expert: SLOs & error budgets drive decisions; extensive vulnerability/misconfiguration scanning; security metrics defined & visible for 100% of services; standardized metadata model; complete user journey visibility; very low MTTD and MTTR; clear, blameless post-mortems.

By assessing an organization's stage in each competency (e.g., on a spider-web chart) against the defined goal, the required improvements become visible. These improvements can then be turned into a roadmap or project plan, with the intermediate stages serving as milestones on the way to the defined target.

Therefore, an improvement from Cultural Beginner to Cultural Advanced requires a milestone for Cultural Intermediate.

Example

  • Target 1: Cultural Intermediate by Q2/2023, driven by Fabian.
  • Target 2: Cultural Advanced by Q4/2023, driven by Teddy.
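
As a minimal sketch (an illustration, not part of the model itself), the gap assessment behind such targets can be scripted by putting the four stages in order and comparing a current assessment against the target for each competency. All current and target values below are hypothetical:

    # Gap assessment sketch: the competency names match the matrix above;
    # the current/target stage assignments are purely illustrative.
    STAGES = ["Beginner", "Intermediate", "Advanced", "Expert"]

    current = {
        "People & Culture": "Beginner",
        "Plan & Develop": "Intermediate",
        "Build & Test": "Intermediate",
        "Release & Deploy": "Beginner",
        "Operate": "Intermediate",
        "Observe & Respond": "Beginner",
    }

    target = {
        "People & Culture": "Advanced",
        "Plan & Develop": "Advanced",
        "Build & Test": "Expert",
        "Release & Deploy": "Intermediate",
        "Operate": "Advanced",
        "Observe & Respond": "Advanced",
    }

    def milestones(competency: str) -> list[str]:
        """All stages between the current stage (exclusive) and the target (inclusive)."""
        start = STAGES.index(current[competency])
        end = STAGES.index(target[competency])
        return STAGES[start + 1 : end + 1]

    for competency in current:
        steps = milestones(competency)
        if steps:
            print(f"{competency}: {current[competency]} -> {target[competency]} via {steps}")

Every intermediate stage returned for a competency is a candidate milestone on the roadmap, mirroring the two targets listed above.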

DevSecOps Value Drivers

To maximize the value generated as part of maturity shifts, the drivers in the matrix should be rated in terms of productivity metrics, customer metrics, costs, and revenue. These value drivers are described below:

  • Faster, more agile delivery and reduced time to market
    DevSecOps enables organizations to deliver applications to market faster and to iterate on revenue-impacting applications more frequently and with confidence, protecting and growing revenue. The integration of security into DevOps workflows eliminates potential bottlenecks and accelerates organizations’ efficiency and agility.
  • Improved security posture and reduced risk
    DevSecOps integrates security stakeholders and security practices into all phases of the software development lifecycle and the operation of services in production. Greater collaboration, trust, and transparency among Dev, Sec, and Ops teams results in lower-risk software.
  • Reduced operational and development costs
    The fast feedback loops of DevSecOps practices streamline the software development lifecycle and eliminate the vast majority of issues before they reach production environments. Incidents that do occur are resolved very quickly.
  • Improved customer experiences and satisfaction
    By producing higher quality and more secure software, DevSecOps increases the value organizations provide to their customers. Customers also value more frequent enhancements and upgrades to their services. Finally, customer satisfaction is also boosted when organizations are able to observe systems from the end-users’ perspective and have visibility into end-to-end customer journeys.
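
One possible way to operationalize the rating mentioned above (an assumption, not something prescribed by the model) is a simple weighted score per planned maturity shift, combining the four value drivers. The scores and weights below are hypothetical inputs an organization would supply:

    # Hypothetical weighted rating of the four value drivers for one planned
    # maturity shift (e.g., "Build & Test: Intermediate -> Advanced").
    # Scores (1-5) and weights are illustrative; only the driver names come
    # from the text above.
    drivers = {
        "Faster delivery / reduced time to market": {"score": 4, "weight": 0.3},
        "Improved security posture / reduced risk": {"score": 5, "weight": 0.3},
        "Reduced operational and development costs": {"score": 3, "weight": 0.2},
        "Improved customer experience and satisfaction": {"score": 4, "weight": 0.2},
    }

    # A single figure that can be compared across candidate maturity shifts
    # when building the roadmap.
    weighted_rating = sum(d["score"] * d["weight"] for d in drivers.values())
    print(f"Weighted value rating: {weighted_rating:.1f} / 5")

Shifts with the highest ratings are natural candidates for the earliest milestones on the roadmap.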