    One Step at a Time - a 6-Month Plan to Live and Breathe DORA

    By Gradum Team · 14 min read

    It’s 02:13 and your on-call phone won’t stop vibrating. A core payments service is timing out. A cloud status page is “investigating.” The business wants answers in minutes. Regulators want a notification in hours. Your incident channel fills with guesses, screenshots, and “does anyone know who owns this vendor?” questions.

    That’s the moment DORA (the EU Digital Operational Resilience Act) is designed for—and the moment most organizations realize they don’t have a system, they have good intentions.

    What you’ll learn

    • What DORA is (and what it’s not) in 2025–2026, with scope clarity
    • A practical 6‑month DORA compliance plan you can execute with a professional team
    • How to build incident reporting that can meet DORA’s timelines (including the “4-hour” reality)
    • How to turn resilience testing and third‑party oversight into repeatable operations
    • The counter-intuitive move that makes DORA programs faster (and less painful)

    DORA in 2025: what it is, who’s in scope, and why a 6‑month plan works

    Answer-first: DORA is Regulation (EU) 2022/2554, a binding EU framework that requires financial entities to prove they can withstand, respond to, and recover from ICT disruptions (including cyber incidents and third-party outages). It applies from January 17, 2025, and it covers both financial firms and certain critical ICT providers. A 6‑month plan works because DORA is less about “one-time compliance” and more about building an operating rhythm across governance, incidents, testing, and vendors.

    DORA (Digital Operational Resilience Act) is not a generic cybersecurity standard. It’s a financial-sector operational resilience regulation with specific expectations: defined accountability by the management body, standardized incident reporting, mandated testing, and enforceable third‑party oversight.

    It also helps to disambiguate the acronym: you may see “DORA” used in other disciplines (for example, process/energy systems). In this article, “DORA” means EU financial sector digital operational resilience.

    Who is in scope (entities and relationships)

    DORA applies to 20 types of financial entities, including (non-exhaustive): credit institutions (banks), payment institutions, electronic money institutions, investment firms, insurance and reinsurance undertakings, and crypto-asset service providers. It also creates an oversight regime for Critical ICT Third‑Party Providers (CTPPs) such as major cloud and data service providers designated by the European Supervisory Authorities (ESAs).

    Why a 6‑month plan (and not a 6‑week scramble)

    You can’t “policy” your way out of DORA. The hard part is operational: evidence collection, incident classification, testing schedules, vendor mapping, and repeatable reporting.

    Key Takeaway:
    DORA compliance is the ability to demonstrate resilience—not just believe you’re resilient.

    Evidence: DORA entered into force in January 2023 and applies from January 17, 2025; it covers 20 entity types and roughly 22,000 EU-regulated financial entities (research summary; ESAs/EU context) [1][5].


    Month 1: set ICT risk governance and a measurable baseline

    Answer-first: In month 1, you establish DORA ownership, define your ICT risk management framework, and create a baseline of “what exists today” across assets, services, risks, and controls. Your goal is not perfect documentation—it’s a defensible scope, clear accountability, and a prioritized gap list tied to DORA requirements. If you can’t name your critical services and dependencies, you can’t meet DORA’s incident, testing, or third‑party obligations.

    Step-by-step (what to actually do)

    1. Appoint accountable owners.
      DORA expects the management body to oversee ICT risk management. Translate that into your org chart: who signs off, who runs, who audits.

    2. Define “critical or important functions” (CIFs).
      Build a short list of services where downtime creates material impact (customer harm, market impact, regulatory breach). Keep it tight—many teams over-scope and stall.

    3. Create an ICT asset + dependency map (minimum viable).
      For each CIF, capture:

      • primary application(s)
      • infrastructure (including cloud services)
      • data flows (high level)
      • third parties (including subcontractor chains where visible)
      • current RTO/RPO assumptions
    4. Run a DORA gap assessment against the RTS/ITS topics.
      You’re looking for control gaps in:

      • access controls, encryption, logging
      • vulnerability management
      • backup and recovery
      • business continuity and disaster recovery alignment
      • third‑party oversight mechanisms
    5. Pick 5–10 metrics you’ll track monthly (a minimal sketch follows this list).
      Example metrics:

      • % CIFs with named business owner + technical owner
      • % CIFs with mapped third parties
      • mean time to detect (MTTD) trend (even if rough)
      • patching SLA adherence for critical vulnerabilities
      • backup restore test success rate
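
    To make the month-1 baseline tangible, here is a minimal Python sketch of how a team might represent CIF records and compute the first two metrics. The schema, field names, and sample services are illustrative assumptions, not a prescribed DORA format.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class CIF:
        """One critical or important function (illustrative schema)."""
        name: str
        business_owner: str | None = None
        technical_owner: str | None = None
        third_parties: list[str] = field(default_factory=list)
        rto_minutes: int | None = None  # current recovery time objective assumption

    def pct(part: int, whole: int) -> float:
        """Percentage helper; avoids division by zero on an empty CIF list."""
        return round(100 * part / whole, 1) if whole else 0.0

    cifs = [
        CIF("payments-api", "Head of Payments", "Payments SRE lead",
            third_parties=["cloud-provider-a", "card-network-b"], rto_minutes=30),
        CIF("customer-onboarding", business_owner="COO"),  # gaps left on purpose
    ]

    owned = sum(1 for c in cifs if c.business_owner and c.technical_owner)
    mapped = sum(1 for c in cifs if c.third_parties)
    print(f"% CIFs with named owners:         {pct(owned, len(cifs))}%")
    print(f"% CIFs with mapped third parties: {pct(mapped, len(cifs))}%")
    ```

    Starting from a structure like this, each remaining metric is one query, and the same records feed the dependency map and the gap backlog.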

    A pattern to watch for (experience signal, without hype)

    A common failure mode in early DORA programs is treating “governance” as a slide deck deliverable. The better move is to make governance tangible: named owners, meeting cadence, and a living backlog of remediation items tied to CIFs.

    Pro Tip (mini-checklist): Month 1 deliverables

    • DORA RACI (who does what) approved
    • CIF list (v1) with owners
    • Dependency map for top CIFs (v1)
    • Gap assessment backlog with priorities
    • Metrics dashboard (even if simple)

    Evidence: Batch 1 Regulatory Technical Standards (RTS) and Implementing Technical Standards (ITS) were finalized by the ESAs on January 17, 2024, covering ICT risk frameworks and incident reporting foundations (research summary) [1].


    Month 2: build incident management to hit the 4h/72h/1‑month clock

    Answer-first: In month 2, you operationalize incident detection, classification, escalation, and reporting so you can meet DORA’s timelines for major incidents. This means defining what “major” means for you, instrumenting detection and logging, and rehearsing a reporting workflow—not just writing a policy. The output is a working incident “pipeline” from alert → triage → classification → regulatory notification → post-incident report.

    What “good” looks like under DORA

    You need three things working together:

    1. Detection and logging you can trust
      If logs are scattered across tools and teams, you’ll miss the moment when an issue becomes “reportable.”

    2. A classification model aligned to DORA
      Your responders need a fast way to decide: is this “major,” potentially major, or not? The organization also needs consistency so you’re not over-reporting noise or under-reporting risk.

    3. A prebuilt reporting workflow
      Reporting is a process, not an email. Decide who drafts, who reviews, who approves, and how evidence is attached.

    Build the workflow (practical sequence)

    • Draft an incident severity matrix (see the classification sketch after this list).
      Include service impact, data impact, customer impact, and third‑party involvement.
    • Create a “regulator-ready” incident record template.
      Include timestamps, systems impacted, CIF linkage, containment actions, and initial hypotheses.
    • Set escalation triggers.
      Example: “Any CIF disruption lasting >X minutes” or “any confirmed compromise in a CIF environment.”
    • Run a tabletop exercise focused on speed.
      Don’t test “perfect response.” Test whether you can produce a coherent initial report quickly with known facts and explicit unknowns.
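
    To show how a severity matrix and escalation triggers become a fast, consistent triage decision, here is a minimal Python sketch. The thresholds mirror the example figures cited in the evidence note below (>5% of users, €100,000+ losses) but are illustrative assumptions; calibrate yours against the actual RTS classification criteria.

    ```python
    from datetime import datetime, timezone

    # Illustrative thresholds only; calibrate against the RTS classification criteria.
    USER_IMPACT_THRESHOLD = 0.05       # more than 5% of users affected
    LOSS_THRESHOLD_EUR = 100_000       # estimated economic impact
    CIF_DOWNTIME_TRIGGER_MIN = 15      # "any CIF disruption lasting >X minutes"

    def classify_incident(users_affected_pct: float,
                          estimated_loss_eur: float,
                          cif_downtime_min: float,
                          confirmed_compromise_in_cif: bool) -> str:
        """Fast triage decision: 'major', 'potentially major', or 'minor'."""
        if (users_affected_pct > USER_IMPACT_THRESHOLD
                or estimated_loss_eur >= LOSS_THRESHOLD_EUR
                or confirmed_compromise_in_cif):
            return "major"
        if cif_downtime_min > CIF_DOWNTIME_TRIGGER_MIN:
            return "potentially major"  # escalate now, reassess as facts arrive
        return "minor"

    # Stamp the decision so "time to classify" becomes a measurable KPI later.
    decision = classify_incident(0.07, 40_000, 25, False)
    print(decision, "at", datetime.now(timezone.utc).isoformat())
    ```

    In practice, the stamped decision and its inputs go straight into the regulator-ready incident record.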

    Key Takeaway:
    Your DORA incident program is only as fast as your approval chain.

    Evidence: DORA incident reporting timelines commonly referenced in the provided research include initial notification within 4 hours, intermediate updates within 72 hours, and a final/root-cause report within 1 month for major incidents (research summary) [1][5][8]. The research also notes example “major incident” thresholds such as impacting >5% of users or losses of €100,000+ (as cited in the summary material) [1][5].


    Month 3: implement resilience testing (annual basics + TLPT readiness)

    Answer-first: In month 3, you convert “we test things” into a risk-based resilience testing program that DORA can recognize: annual baseline testing for everyone, and threat-led penetration testing (TLPT) readiness for entities that will be designated. The goal is repeatability: defined scope, independence, remediation tracking, and evidence retention. Testing is not a one-off project—it’s a calendar.

    Design your testing stack

    Think in layers, from continuous to periodic:

    • Continuous/ongoing: vulnerability scanning, log monitoring, control checks
    • Quarterly or semiannual: backup restore drills, access reviews for CIF systems
    • Annual: vulnerability assessments, scenario exercises, DR tests
    • Triennial (where required): TLPT for designated critical entities

    How to execute month 3 without chaos

    1. Create a 12–18 month testing roadmap (a scheduling sketch follows this list).
      Put dates on the calendar. If it isn’t scheduled, it won’t happen.

    2. Define test scope using CIFs.
      Test what matters most. Tie every test to a service and expected operational outcome.

    3. Set remediation SLAs and tracking.
      Testing without closure is theater. Put findings into a backlog with owners and deadlines.

    4. Plan for third-party-involved tests.
      DORA pushes you to include vendor dependencies where relevant. That requires contractual readiness (month 4), but planning starts now.
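
    As a concrete illustration of step 1, the sketch below expands per-test cadences into a dated calendar. The test names and cadences are assumptions borrowed from the layered stack above; adjust both to your risk profile and designation status.

    ```python
    from datetime import date

    # Cadence in months per test type (illustrative, echoing the layers above).
    CADENCE_MONTHS = {
        "vulnerability-scan": 1,
        "backup-restore-drill": 3,
        "access-review-cif": 6,
        "dr-scenario-exercise": 12,
    }

    def roadmap(start: date, horizon_months: int = 18) -> list[tuple[date, str]]:
        """Expand cadences into a dated 12-18 month testing calendar."""
        entries = []
        for test, every in CADENCE_MONTHS.items():
            m = every
            while m <= horizon_months:
                total = start.month - 1 + m  # months since January of the start year
                entries.append((date(start.year + total // 12, total % 12 + 1, 1), test))
                m += every
        return sorted(entries)

    for when, test in roadmap(date(2025, 1, 1)):
        print(when.isoformat(), test)
    ```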

    Experience signal (common pattern)

    Teams that treat penetration testing as the whole “testing program” often miss operational resilience basics—like restore tests that prove you can actually recover under pressure. DORA cares about both.

    Pro Tip: Minimum viable “test evidence pack”

    • Test scope and method
    • Independence (who executed)
    • Findings and severity rationale
    • Remediation plan + dates
    • Retest/closure proof

    Evidence: The research summary states that DORA testing expectations include annual basic testing and TLPT every 3 years for designated entities, with results reported and remediated (research summary) [1][3][7][9].


    Month 4: fix third‑party risk—contracts, monitoring, and exit plans

    Answer-first: In month 4, you make third‑party risk manageable by mapping vendor dependencies for CIFs, standardizing DORA-ready contract clauses, and implementing ongoing monitoring and exit strategies. DORA is explicit that ICT third parties are not “someone else’s risk”—they are part of your operational resilience perimeter. Your objective is simple: no critical service should depend on a vendor you can’t audit, can’t monitor, and can’t exit.

    What DORA changes about vendor management

    Traditional vendor management often focuses on procurement checklists. DORA forces operational reality:

    • You need visibility into who supports your critical services.
    • You need rights in contracts (audit access, incident notification duties, subcontractor controls).
    • You need options (exit strategies and workable offboarding).

    Build the third-party program in 4 weeks

    1. Map vendors to CIFs (start with top 10).
      Don’t attempt a perfect enterprise inventory on day one. Start with the services regulators would care about most.

    2. Standardize a DORA contract addendum (a gap-check sketch follows this list).
      Common clause categories referenced in the research include:

      • audit and access rights
      • incident notification and cooperation
      • testing participation
      • subcontractor transparency
      • business continuity commitments
      • exit/termination assistance
    3. Implement continuous monitoring for cyber posture where appropriate.
      The research highlights market solutions positioned around DORA, including third‑party monitoring capabilities (e.g., SecurityScorecard’s third‑party cyber risk posture view) [Learning 6].

    4. Define exit plans that are operational, not theoretical.
      Exit strategy = data portability + replacement path + timeline + testing of the exit steps.
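
    One way to make the contract addendum auditable is to track, per vendor, which clause categories are actually present in the signed contract. A minimal sketch, with hypothetical vendor names and illustrative clause keys:

    ```python
    # Clause categories from the addendum checklist above (illustrative keys).
    REQUIRED_CLAUSES = {
        "audit_access", "incident_notification", "testing_participation",
        "subcontractor_transparency", "continuity_commitments", "exit_assistance",
    }

    # Vendor -> clauses confirmed in the signed contract (sample data).
    contracts = {
        "cloud-provider-a": {"audit_access", "incident_notification",
                             "continuity_commitments"},
        "card-network-b": set(REQUIRED_CLAUSES),  # fully covered
    }

    for vendor, present in contracts.items():
        missing = sorted(REQUIRED_CLAUSES - present)
        print(f"{vendor}: " + ("OK" if not missing else "GAPS: " + ", ".join(missing)))
    ```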

    Key Takeaway:
    Under DORA, a “critical vendor” is any vendor whose failure becomes your reportable incident.

    Evidence: The research summary notes an EU ICT third‑party landscape of 1,000+ providers, and that oversight fees for Critical ICT Third‑Party Providers (CTPPs) may be up to €1 million annually (delegated fees regulation context in the summary) [1]. It also cites supply chain attacks as a major concern for the financial sector (referencing Verizon DBIR in the research summary) [7].


    Month 5–6: operationalize DORA with evidence, metrics, and “runbooks that run”

    Answer-first: Months 5 and 6 are where DORA stops being a project and becomes an operating model: automated evidence collection where feasible, internal audit readiness, measurable resilience KPIs, and rehearsed runbooks across incidents, testing, and vendors. You’re aiming for “continuous compliance,” not a last-minute scramble before audits or supervisory reviews. This is also where you reduce human workload by making your tools produce the proof.

    Turn obligations into routines

    By now, you should have:

    • governance (month 1)
    • incident workflows (month 2)
    • testing calendar (month 3)
    • vendor controls (month 4)

    Months 5–6 connect them into repeatable operations.

    1) Build a single “DORA evidence register”

    Create an evidence map that links (a staleness-check sketch follows this list):

    • each DORA requirement area
    • your control/process
    • your artifact (policy, log report, test report, contract clause)
    • an owner
    • refresh frequency (monthly, quarterly, annually)
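
    Here is a minimal sketch of such a register in Python, including a staleness check so owners see overdue artifacts before an audit does. The rows, owners, and cadences are illustrative sample data.

    ```python
    from datetime import date, timedelta

    # (requirement area, artifact, owner, refresh cadence in days, last refreshed)
    REGISTER = [
        ("incident-reporting", "major-incident log export", "SecOps", 30, date(2025, 5, 2)),
        ("testing", "annual DR test report", "SRE", 365, date(2024, 3, 1)),
        ("third-party", "vendor contract addendum", "Procurement", 365, date(2025, 1, 10)),
    ]

    def stale(today: date):
        """Yield artifacts whose refresh window has lapsed."""
        for area, artifact, owner, cadence_days, last in REGISTER:
            due = last + timedelta(days=cadence_days)
            if due < today:
                yield area, artifact, owner, due

    for area, artifact, owner, due in stale(date(2025, 6, 1)):
        print(f"[STALE] {area}: '{artifact}' (owner: {owner}, was due {due})")
    ```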

    2) Automate evidence capture (where it is safe and reliable)

    Manual screenshot-hunting is where programs die. The deep research learnings describe automation approaches (for example, automated “cybersecurity evidence engine” concepts) that gather evidence from systems and workflows and benchmark readiness in real time, including integrations via Slack and Teams (Research learnings) [Learning 2]. Even if you don’t use that platform, the operating principle matters: evidence should be a byproduct of operations.

    3) Define “resilience KPIs” regulators and executives both understand

    Examples (a small computation sketch follows):

    • time to classify incidents (from detection to “major/not major” decision)
    • time to initial notification readiness (draft + approval)
    • % CIFs with tested recovery steps in last 12 months
    • vendor concentration for CIFs (single-provider exposure)
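
    The first KPI, for instance, falls straight out of timestamped incident records. A small sketch with made-up timestamps:

    ```python
    from datetime import datetime
    from statistics import mean

    # (detected_at, classified_at) pairs from incident records (sample timestamps).
    incidents = [
        (datetime(2025, 4, 3, 2, 13), datetime(2025, 4, 3, 2, 41)),
        (datetime(2025, 4, 19, 14, 5), datetime(2025, 4, 19, 15, 2)),
    ]

    minutes = [(classified - detected).total_seconds() / 60
               for detected, classified in incidents]
    print(f"mean time to classify: {mean(minutes):.0f} min")
    ```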

    Pro Tip: The “two-speed” documentation rule

    • Speed 1: responder notes (fast, messy, timestamped)
    • Speed 2: regulatory narrative (slower, reviewed, consistent)

    You need both. Confusing them creates delays.

    Evidence: The research summary emphasizes DORA’s shift from reactive operational risk handling to proactive, technology-centric resilience, including incident reporting, testing, and third-party oversight as mandatory pillars [3][4]. The research learnings describe continuous readiness assessment and automated reporting aligned to DORA-style requirements (Research learnings) [Learning 2].


    The Counter-Intuitive Lesson I Learned

    Answer-first: The counter-intuitive lesson is that the fastest path to “living and breathing DORA” is to stop treating compliance as documentation, and start treating it as instrumentation. When your systems, workflows, and tools continuously generate evidence, your policies become lighter—and your operational resilience becomes real. In practice, you win by engineering repeatable loops (detect → decide → report → learn → test) rather than producing perfect narratives.

    While synthesizing DORA guidance and vendor approaches, I kept seeing one theme: organizations underestimate the drag of manual compliance work. They assume the hard part is “knowing the rules.” It isn’t.

    The hard part is proving you did the work, consistently, across the year.

    What to do instead (borrow this playbook)

    1. Reduce the number of “special” DORA activities.
      If an activity only happens “for compliance,” it gets postponed. Embed it into BAU: incident records become your reporting artifacts; restore tests become part of change management.

    2. Make evidence collection boring.
      Modern compliance platforms describe guided tasks, automatic documentation capture, and real-time reporting integrations (e.g., via Slack/Teams) designed to cut manual collection effort [Learning 2]. The specific tooling is optional; the model is powerful.

    3. Aim for “auditability by default.”
      If you can’t reconstruct what happened from logs, tickets, and timelines, you’re one outage away from confusion.

    Key Takeaway:
    DORA maturity is when resilience evidence is produced automatically as part of normal operations.

    Evidence: The deep research learnings explicitly describe platforms and workflows built around automated evidence capture, continuous monitoring, and dynamic reporting to maintain DORA readiness (Research learnings) [Learning 2].


    FAQ: DORA compliance plan questions (2025–2026)

    Answer-first: These FAQs clarify scope, timelines, and practical implementation choices for a 6‑month DORA compliance plan. They are written for quick extraction and direct use in planning discussions. If you need one guiding principle: start with critical services, then expand—proportionality is real, but only if you can justify it.

    Is DORA mandatory or “best practice”?

    DORA is a binding EU regulation for in-scope financial entities, unlike voluntary assurance frameworks such as SOC 2 (Industry analysis) [Learning 4].

    When did DORA start applying?

    The research summary states DORA applies from January 17, 2025 (research summary) [1][2][4][7][10].

    What are DORA’s core pillars?

    From the research summary: ICT risk management, incident reporting, digital operational resilience testing, third‑party risk oversight, and information sharing (frequently bundled but distinct) [1][3][5].

    Do we have to do TLPT (threat-led penetration testing)?

    Not every entity will, but the research summary indicates TLPT is expected every 3 years for designated critical entities, while others follow proportionate, risk-based testing (research summary) [1][7][9].

    What incident timelines do we need to be ready for?

    The research summary cites 4 hours for initial notification readiness, 72 hours for intermediate updates, and 1 month for final/root cause reporting for major incidents [1][5][8].

    How does DORA relate to NIS2?

    The research summary notes that NIS2 broadens critical infrastructure resilience, but DORA provides finance-specific rules and is treated as the specialized regime for the financial sector (research summary) [9].

    What’s the biggest mistake teams make in DORA programs?

    Over-scoping. They inventory everything before defining critical services and reporting workflows. Start with CIFs and make progress visible.

    Are there meaningful penalties for noncompliance?

    Member States define penalties to be effective, proportionate, and dissuasive (specific caps vary by national law, distinct from NIS2’s 2% standard) (as summarized in the provided material) [4][10].

    Key Terms (mini‑glossary)

    • DORA (Digital Operational Resilience Act): Regulation (EU) 2022/2554, requiring ICT operational resilience across the EU financial sector.
    • ICT risk management framework: The governance, policies, processes, and controls used to identify and reduce technology risk.
    • CIF (Critical or Important Function): A business service whose disruption would materially impact customers, markets, or compliance.
    • Major incident: An ICT incident meeting defined severity thresholds requiring regulatory reporting under DORA.
    • RTS (Regulatory Technical Standards): Detailed technical rules issued to specify how DORA requirements must be implemented.
    • ITS (Implementing Technical Standards): Standardized templates and procedures (e.g., reporting formats) supporting consistent implementation.
    • TLPT (Threat-Led Penetration Testing): Advanced, intelligence-led testing simulating real attackers, required for certain entities on a cycle.
    • CTPP (Critical ICT Third‑Party Provider): A technology provider designated for EU-level oversight due to systemic importance.
    • Exit strategy: A tested plan to transition away from an ICT provider without unacceptable disruption.
    • Evidence register: A mapped inventory linking DORA obligations to controls and the artifacts proving they operate.

    Conclusion
    Back to 02:13: the fastest way out of that kind of night isn’t heroics—it’s preparation you can prove. With a 6‑month DORA compliance plan, you build the core operating loops: governance that assigns ownership, incident workflows that meet the clock, testing that validates reality, and vendor controls that prevent “surprise dependencies.”

    If you want more practical, implementation-focused guidance on DORA readiness and operational resilience, explore the resources at Gradum.io and turn this plan into your team’s monthly execution cadence.

    Run Maturity Assessments with GRADUM

    Transform your compliance journey with our AI-powered assessment platform

    Assess your organization's maturity across multiple standards and regulations including ISO 27001, DORA, NIS2, NIST, GDPR, and hundreds more. Get actionable insights and track your progress with collaborative, AI-powered evaluations.

    100+ Standards & Regulations
    AI-Powered Insights
    Collaborative Assessments
    Actionable Recommendations
