Sunday, July 6, 2025

CMMC and CMMI Levels

CMMC (Cybersecurity Maturity Model Certification) and CMMI (Capability Maturity Model Integration) are both frameworks that aim to improve the practices of organizations, but they focus on different aspects and are structured differently. Below is a comparison of CMMC levels 1 to 5 and CMMI levels 1 to 5:

Overview of CMMC Levels

CMMC is specifically designed for the US defence sector, ensuring that contractors and subcontractors meet specific cybersecurity requirements to protect Controlled Unclassified Information (CUI). The original CMMC model (version 1.0) consisted of five maturity levels, each with a set of practices and processes; note that CMMC 2.0, announced by the DoD in November 2021, streamlined the model to three levels (Foundational, Advanced, and Expert). The five-level structure is described below for comparison with CMMI's five levels.

CMMC Levels:

  • Level 1: Basic Cyber Hygiene

    • Focus: Basic safeguarding measures.
    • Practices: Implementing basic security practices such as using antivirus software, regularly updating systems, and providing security awareness training to personnel.
  • Level 2: Intermediate Cyber Hygiene

    • Focus: Intermediate controls.
    • Practices: A structured implementation of security measures, including documentation and management of cybersecurity practices and policies.
  • Level 3: Good Cyber Hygiene

    • Focus: Protecting CUI.
    • Practices: Comprehensive security practices aligned with NIST SP 800-171, including access controls, incident response, and risk management.
  • Level 4: Proactive

    • Focus: Advanced security practices.
    • Practices: Enhanced security measures, continuous monitoring, and proactive identification and mitigation of cybersecurity risks.
  • Level 5: Advanced/Progressive

    • Focus: Cutting-edge capabilities.
    • Practices: Continuous improvement and advanced security practices, including automated response and threat intelligence integration.
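As a rough illustration of how the levels above might be used in tooling, the five-level model can be encoded as a simple lookup. This is a sketch only: the information-type-to-level mapping (`MIN_LEVEL_FOR`) is a simplified assumption for demonstration, not official DoD guidance.

```python
# Illustrative encoding of the original five-level CMMC model.
# The information-type-to-level mapping is a simplified assumption,
# not DoD guidance.
CMMC_LEVELS = {
    1: "Basic Cyber Hygiene",
    2: "Intermediate Cyber Hygiene",
    3: "Good Cyber Hygiene",
    4: "Proactive",
    5: "Advanced/Progressive",
}

# Hypothetical minimum levels per information type (assumption for illustration).
MIN_LEVEL_FOR = {
    "FCI": 1,  # Federal Contract Information
    "CUI": 3,  # Controlled Unclassified Information (NIST SP 800-171 alignment)
}

def required_level(info_type: str) -> tuple[int, str]:
    """Return the (level number, level name) assumed for an information type."""
    level = MIN_LEVEL_FOR[info_type]
    return level, CMMC_LEVELS[level]

print(required_level("CUI"))  # (3, 'Good Cyber Hygiene')
```

A lookup like this lets compliance tooling report the minimum assumed level for a contract's data types, but the real determination depends on the contract and the current CMMC 2.0 rules.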

Overview of CMMI Levels

CMMI is a process improvement framework applicable across various industries, focusing on enhancing organisational capabilities and performance. CMMI also consists of five maturity levels, each representing a progression in process maturity.

CMMI Levels:

  • Level 1: Initial

    • Focus: Ad-hoc processes.
    • Characteristics: Processes are unpredictable and reactive; success depends on individual efforts rather than established processes.
  • Level 2: Managed

    • Focus: Basic project management.
    • Characteristics: Processes are planned, documented, and monitored; there is a focus on managing project performance and ensuring that commitments are met.
  • Level 3: Defined

    • Focus: Standardized processes.
    • Characteristics: Processes are well-defined and tailored to the organisation; there is a focus on process improvement and consistency across projects.
  • Level 4: Quantitatively Managed

    • Focus: Data-driven management.
    • Characteristics: Processes are controlled using statistical and quantitative techniques; performance is measured and managed quantitatively.
  • Level 5: Optimising

    • Focus: Continuous process improvement.
    • Characteristics: Focus on continuous improvement through innovative technologies and techniques; there is an emphasis on proactive process optimisation.

Comparison Summary

| Aspect  | CMMC                                               | CMMI                                                       |
|---------|----------------------------------------------------|------------------------------------------------------------|
| Purpose | Improve cybersecurity maturity for DoD contractors | Enhance overall process maturity across various industries |
| Focus   | Cybersecurity practices                            | Process improvement and capability                         |
| Levels  | 5 levels focused on cybersecurity maturity         | 5 levels focused on process maturity                       |
| Level 1 | Basic cybersecurity hygiene                        | Initial (ad-hoc processes)                                 |
| Level 2 | Intermediate controls, documentation               | Managed (basic project management)                         |
| Level 3 | Good cyber hygiene, protecting CUI                 | Defined (standardized processes)                           |
| Level 4 | Proactive, continuous monitoring                   | Quantitatively Managed (data-driven)                       |
| Level 5 | Advanced, continuous improvement                   | Optimising (continuous process improvement)                |


Summary

While the original CMMC model and CMMI both consist of five maturity levels, they differ significantly in their focus and objectives. CMMC is specifically tailored for cybersecurity in the defence sector, while CMMI is broader and applicable to various industries for process improvement. Understanding these frameworks and their respective levels can help organisations navigate compliance, enhance their practices, and improve overall performance.

DORA, NIS2, EU AI Act, and CMMC

DORA, NIS2, the EU AI Act, and CMMC are regulatory frameworks and directives that aim to enhance security, accountability, and governance in different sectors. Here’s a brief overview of each:

DORA (Digital Operational Resilience Act)

DORA (Regulation (EU) 2022/2554) is EU legislation that strengthens the digital operational resilience of the financial sector; it entered into force in January 2023 and has applied since 17 January 2025. It establishes a comprehensive framework for managing information and communication technology (ICT) risks, ensuring that financial institutions can withstand, respond to, and recover from all types of disruptions and threats. Key aspects include:

  • Risk Management: Financial entities must implement robust risk management frameworks for their ICT systems.
  • Incident Reporting: Requirements to report significant ICT incidents to relevant authorities.
  • Testing: Regular testing of digital operational resilience is mandated.
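The incident-reporting obligation above can be sketched as a minimal data structure. The field names and the toy materiality check below are illustrative assumptions; the actual reporting templates and significance criteria are set out in regulatory technical standards, not in this sketch.

```python
# Minimal sketch of an ICT incident record for DORA-style incident reporting.
# Field names and the significance check are illustrative assumptions,
# not the official reporting template or thresholds.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IctIncident:
    entity_name: str
    description: str
    detected_at: datetime
    clients_affected: int
    services_disrupted: list[str] = field(default_factory=list)

    def is_significant(self, client_threshold: int = 1000) -> bool:
        """Toy materiality check: real DORA criteria are defined by regulators."""
        return self.clients_affected >= client_threshold or bool(self.services_disrupted)

incident = IctIncident(
    entity_name="Example Bank",
    description="Payment gateway outage",
    detected_at=datetime.now(timezone.utc),
    clients_affected=2500,
    services_disrupted=["instant payments"],
)
print(incident.is_significant())  # True
```

Structuring incident data this way makes it straightforward to apply whatever classification thresholds the competent authority actually prescribes.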

NIS2 (Directive on Security of Network and Information Systems)

NIS2 (Directive (EU) 2022/2555) is an update to the original NIS Directive, aimed at enhancing cybersecurity across the EU; member states were required to transpose it into national law by 17 October 2024. It expands the scope to include more sectors and introduces stricter supervisory measures. Key components include:

  • Wider Scope: Applies to more entities, including medium and large companies in essential and important sectors.
  • Risk Management: Establishes security requirements for network and information systems.
  • Incident Notification: Obligates organisations to notify authorities of significant incidents.

EU-AI (Artificial Intelligence Act)

The EU AI Act (Regulation (EU) 2024/1689) is a regulatory framework adopted by the EU to ensure that AI technology is used safely and ethically; it entered into force on 1 August 2024, with its obligations phasing in over the following years. It categorises AI systems based on risk levels and outlines requirements accordingly. Key features include:

  • Risk-Based Classification: AI systems are classified into minimal, limited, high, and unacceptable risk categories.
  • Compliance Requirements: High-risk AI systems face stringent requirements for transparency, accountability, and robustness.
  • Prohibition of Certain AI Practices: Certain AI applications, deemed harmful, are banned.
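The risk-based classification can be sketched as a simple lookup from use case to tier. The specific use-case assignments below are simplified assumptions for illustration only, not a legal determination under the Act:

```python
# Sketch of the AI Act's risk-based classification as a lookup.
# The use-case-to-tier assignments are simplified assumptions for illustration.
RISK_TIERS = ("minimal", "limited", "high", "unacceptable")

USE_CASE_TIER = {
    "spam_filter": "minimal",
    "chatbot": "limited",              # transparency obligations
    "cv_screening": "high",            # employment context
    "social_scoring": "unacceptable",  # prohibited practice
}

def classify(use_case: str) -> str:
    """Return the assumed risk tier; unknown use cases default to 'minimal'."""
    return USE_CASE_TIER.get(use_case, "minimal")

def is_permitted(use_case: str) -> bool:
    """Everything except the 'unacceptable' tier may be deployed, with conditions."""
    return classify(use_case) != "unacceptable"

print(classify("cv_screening"), is_permitted("social_scoring"))  # high False
```

In practice the tier depends on the system's intended purpose and context of use, so a real implementation would evaluate criteria rather than a fixed table.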

CMMC (Cybersecurity Maturity Model Certification)

CMMC is a certification framework developed by the Department of Defense (DoD) in the United States aimed at improving the cybersecurity posture of contractors and subcontractors. It includes several maturity levels that organisations must achieve to be eligible for DoD contracts. Key elements include:

  • Maturity Levels: Ranged from Level 1 (basic cyber hygiene) to Level 5 (advanced/progressive) in the original model; CMMC 2.0 consolidates these into three levels.
  • Assessment: Requires third-party assessments to validate compliance levels.
  • Focus on Protection: Emphasises protecting controlled unclassified information (CUI) within the supply chain.

These frameworks reflect a growing emphasis on cybersecurity, operational resilience, and responsible use of technology in various domains. Understanding these regulations is crucial for organisations operating within affected sectors, particularly in terms of compliance and risk management.


Mapping DORA, NIS2, EU-AI, and CMMC

Mapping these four frameworks involves comparing and contrasting their objectives, scope, focus areas, and compliance requirements.

Here’s a structured overview to illustrate their similarities and differences:

1. Objective

  • DORA: Enhance the digital operational resilience of financial entities against ICT risks.
  • NIS2: Improve cybersecurity across essential and important sectors within the EU.
  • EU-AI: Ensure the ethical and safe use of artificial intelligence technologies.
  • CMMC: Establish a maturity model for cybersecurity practices among DoD contractors to protect sensitive information.

2. Scope

  • DORA: Primarily focuses on the financial sector, including banks, insurance companies, and investment firms.
  • NIS2: Covers a wide range of sectors, including energy, transport, health, and digital infrastructure, applicable to medium and large entities.
  • EU-AI: Applies to all AI systems used within the EU, affecting both public and private sectors across various industries.
  • CMMC: Specifically targets contractors and subcontractors working with the US Department of Defense.

3. Risk Management and Compliance

  • DORA: Requires robust risk management frameworks, incident reporting, and regular resilience testing.
  • NIS2: Mandates the implementation of security measures, incident notification, and risk management practices.
  • EU-AI: Introduces a risk-based classification system for AI, with compliance requirements varying by risk level.
  • CMMC: Enforces a tiered maturity model, requiring third-party assessments and adherence to specific security practices.

4. Key Focus Areas

  • DORA: ICT risk management, operational resilience testing, and incident reporting.
  • NIS2: Network and information systems security, incident response, and sector-wide coordination.
  • EU-AI: Transparency, accountability, robustness of AI systems, and the prohibition of harmful AI practices.
  • CMMC: Cybersecurity practices to protect controlled unclassified information (CUI) across supply chains.

5. Enforcement and Penalties

  • DORA: Enforcement mechanisms through national competent authorities with potential penalties for non-compliance.
  • NIS2: National authorities will enforce compliance, with penalties for significant breaches.
  • EU-AI: Non-compliance can lead to fines and sanctions, with strict enforcement measures for high-risk AI systems.
  • CMMC: Certification assessments dictate eligibility for contracts, with the potential for loss of contract for non-compliance.

6. Stakeholders

  • DORA: Financial institutions, regulators, and ICT service providers.
  • NIS2: Public administrations, essential service providers, and digital service providers.
  • EU-AI: AI developers, users, and regulatory bodies.
  • CMMC: DoD contractors, subcontractors, and cybersecurity assessors.
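Putting the scoping criteria above together, a toy applicability check might look like the following. The trigger conditions are deliberately simplified assumptions and are not legal advice:

```python
# Toy applicability check: which of the four frameworks might be in scope
# for an organisation, based on the attributes discussed above.
# The trigger conditions are simplified assumptions, not legal advice.
def frameworks_in_scope(org: dict) -> set[str]:
    scope = set()
    if org.get("sector") == "financial":
        scope.add("DORA")
    if org.get("eu_essential_or_important_entity"):
        scope.add("NIS2")
    if org.get("uses_ai_in_eu"):
        scope.add("EU AI Act")
    if org.get("us_dod_contractor"):
        scope.add("CMMC")
    return scope

fintech = {"sector": "financial", "uses_ai_in_eu": True}
print(sorted(frameworks_in_scope(fintech)))  # ['DORA', 'EU AI Act']
```

Real scoping assessments turn on detailed statutory definitions (entity size, sector annexes, contract clauses), so a check like this is only a first-pass screen.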

Summary

The mapping of DORA, NIS2, EU-AI, and CMMC highlights a common theme of enhancing security and resilience in the face of evolving threats and technologies. Each framework addresses specific sectors and risks, reflecting the increasing importance of regulatory compliance in maintaining operational integrity and security across diverse industries. Understanding these frameworks is essential for organisations seeking to navigate the complex landscape of compliance and risk management.

Digital Operational Resilience Act (DORA) - Roles

The Digital Operational Resilience Act (DORA) establishes a regulatory framework aimed at ensuring that financial institutions within the European Union can withstand, respond to, and recover from all types of ICT-related disruptions. The Act outlines various roles and responsibilities that are essential for achieving its objectives. Below is an overview of the key roles defined under DORA:

1. Financial Entities

  • Definition: Financial entities include banks, insurance companies, investment firms, payment service providers, e-money institutions, and other entities engaged in financial activities.
  • Responsibilities:
    • Establish and maintain a robust ICT risk management framework.
    • Implement measures to ensure operational resilience against ICT risks.
    • Report significant ICT incidents to the relevant authorities as per the guidelines.

2. ICT Service Providers

  • Definition: These are third-party service providers offering ICT services to financial entities, such as cloud service providers, data centres, and software providers.
  • Responsibilities:
    • Ensure the security and resilience of the services provided to financial entities.
    • Comply with the requirements set forth in DORA regarding operational resilience.
    • Facilitate cooperation with financial entities in the event of incidents.

3. Competent Authorities

  • Definition: National regulatory bodies designated by EU member states to supervise the financial entities within their jurisdiction.
  • Responsibilities:
    • Oversee and enforce compliance with DORA among financial entities and ICT service providers.
    • Monitor the operational resilience and risk management practices of financial institutions.
    • Provide guidance and support to entities in implementing DORA requirements.

4. European Supervisory Authorities (ESAs)

  • Definition: Comprises three authorities: the European Banking Authority (EBA), the European Securities and Markets Authority (ESMA), and the European Insurance and Occupational Pensions Authority (EIOPA).
  • Responsibilities:
    • Develop technical standards, guidelines, and recommendations to support the implementation of DORA across the EU.
    • Facilitate coordination among national competent authorities.
    • Issue reports and analysis on the state of operational resilience within the financial sector.

5. Incident Reporting Authorities

  • Definition: Authorities responsible for receiving and managing reports of significant ICT incidents from financial entities.
  • Responsibilities:
    • Assess the reported incidents and coordinate responses as necessary.
    • Ensure that lessons learned from incidents are communicated to financial entities to enhance resilience.

6. Board of Directors and Senior Management

  • Definition: Leadership within financial entities responsible for overall governance and decision-making.
  • Responsibilities:
    • Set the strategic direction for ICT risk management and operational resilience.
    • Ensure that sufficient resources are allocated to implement and maintain resilience measures.
    • Oversee the effectiveness of the organisation's ICT risk management framework.

Summary

The roles defined under DORA are pivotal for enhancing the digital operational resilience of the financial sector within the EU. By delineating responsibilities among financial entities, ICT service providers, regulatory bodies, and management, DORA aims to create a robust framework that promotes effective risk management practices and incident response strategies. Understanding these roles is essential for organisations to navigate compliance requirements effectively and strengthen their operational resilience against ICT disruptions.

EU Data Act - Key Objectives and Provisions

The EU Data Act (Regulation (EU) 2023/2854) is a regulation, originally proposed by the European Commission, that establishes a framework for the governance and use of data within the European Union. The primary objective of the Data Act is to promote access to and sharing of data, fostering innovation and enhancing the data economy in Europe. It is part of the broader European strategy for data, which seeks to create a single market for data to unlock the potential of data for businesses, public authorities, and citizens.

Key Objectives of the EU Data Act

  1. Facilitate Data Sharing:

    • Promote the sharing of data between businesses, between businesses and public authorities, and among public authorities to enhance collaboration and innovation.
  2. Enhance Data Availability:

    • Encourage the availability of data generated by devices, applications, and services, ensuring that users have control over their data and can share it with third parties.
  3. Support Innovation:

    • Foster a data-driven economy by enabling access to data for research, development, and the creation of new services and products.
  4. Improve Data Governance:

    • Establish clear rules and responsibilities regarding data sharing and access, including the rights and obligations of data holders, users, and other stakeholders.
  5. Ensure Fair Access:

    • Create a level playing field for businesses, particularly smaller companies and start-ups, by ensuring fair access to data and preventing data lock-in by dominant platforms.

Key Provisions of the EU Data Act

  1. Data Access and Sharing:

    • Provisions to facilitate access to and sharing of data between different stakeholders, particularly focusing on the rights of individuals and businesses to share their data securely.
  2. Data Portability:

    • Enhancements to existing data portability rights, enabling users to transfer their data between service providers easily.
  3. Public Sector Data:

    • Encouragement for public authorities to make non-personal data available for reuse, especially datasets that can contribute to public interest objectives (e.g., environmental data, health data).
  4. Conditions for Data Sharing:

    • Establishes conditions under which data sharing can occur, including fair remuneration for data sharing and the establishment of data-sharing agreements.
  5. Technical and Organisational Measures:

    • Guidelines for implementing technical and organisational measures to ensure the security and privacy of shared data.
  6. Enforcement and Compliance:

    • Mechanisms for supervision and enforcement, including the establishment of competent authorities within member states to oversee compliance with the Data Act.
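The data access and portability provisions imply that products should be able to emit user data in a machine-readable, transferable format. A minimal sketch, assuming a hypothetical JSON bundle schema of my own devising:

```python
# Sketch of a machine-readable data export, in the spirit of the Data Act's
# access and portability provisions. The schema and field names are
# illustrative assumptions, not a format prescribed by the Act.
import json

def export_user_data(user_id: str, device_records: list[dict]) -> str:
    """Bundle a user's device-generated data into a portable JSON document."""
    bundle = {
        "subject": user_id,
        "format_version": "1.0",  # hypothetical version tag
        "records": device_records,
    }
    return json.dumps(bundle, sort_keys=True)

exported = export_user_data("user-42", [{"sensor": "thermostat", "reading": 21.5}])
reimported = json.loads(exported)
print(reimported["subject"])  # user-42
```

Using an open, self-describing format such as JSON is one way to avoid the data lock-in the Act targets, since the same bundle can be handed to a third-party service without proprietary tooling.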

Relationship with Other Data Regulations

The EU Data Act is designed to complement existing data protection regulations, such as the General Data Protection Regulation (GDPR), and other initiatives, including the Digital Services Act (DSA) and the Digital Markets Act (DMA). By establishing a clear framework for data sharing and access, the Data Act aims to enhance the overall data landscape in the EU, promoting innovation while safeguarding individual rights and privacy.

The EU Data Act officially came into force on January 11, 2024, but the majority of its provisions will become applicable starting September 12, 2025. 

Key dates and implications:
  • Entry into force: The Data Act was published in the Official Journal of the EU on December 22, 2023, and entered into force on January 11, 2024.
  • Applicability date: While the Act is in force, its core rules and obligations become applicable on September 12, 2025.
  • Purpose: The Data Act aims to create a fairer data economy in the EU by regulating data access and use, particularly for connected devices and cloud services.
  • Impact: It will change how businesses handle data, requiring them to provide users with access to data generated by their products and services, and allowing users to share that data with third parties.
  • Preparation: Organisations should prepare by reviewing their data handling practices and making the adjustments needed to comply with the new rules by September 12, 2025.

https://digital-strategy.ec.europa.eu/en/policies/data-act

Summary

The EU Data Act represents a significant step towards creating a more open and competitive data economy in Europe. By facilitating data sharing and improving access to data, the Act aims to drive innovation, enhance collaboration across sectors, and support the digital transformation of the European economy. Understanding the provisions and objectives of the Data Act is crucial for businesses and public authorities operating in the EU to align their data practices with the regulatory framework.

EU AI Act - Roles, Key Articles and Mapping with ISO 42001

    The EU AI Act establishes several key roles and responsibilities to ensure the effective implementation of its provisions and to promote the safe and ethical use of artificial intelligence throughout the European Union. Here are the primary roles defined under the EU AI Act:

    1. Providers

    • Definition: Providers are individuals or organisations that develop or place AI systems on the market or put them into service within the EU. This includes both developers and manufacturers of AI systems.
    • Responsibilities:
      • Ensure compliance with the requirements of the EU AI Act, particularly for high-risk AI systems.
      • Maintain proper documentation and provide necessary information about the AI system to users and authorities.
      • Conduct risk assessments and implement risk management measures.

    2. Users

    • Definition: Users (termed "deployers" in the final text of the Act) are individuals or organisations that use AI systems in their operations, regardless of whether they develop the AI system themselves or acquire it from a provider.
    • Responsibilities:
      • Ensure that the use of the AI system complies with the EU AI Act and any applicable national regulations.
      • Provide feedback on the AI system's performance and any potential risks encountered during its use.
      • Implement necessary oversight to ensure that the AI system operates within specified parameters.

    3. Notified Bodies

    • Definition: Notified bodies are independent organisations designated by EU member states to assess the conformity of high-risk AI systems with the requirements of the EU AI Act.
    • Responsibilities:
      • Carry out conformity assessments of high-risk AI systems.
      • Issue certificates of conformity for systems that meet the required standards.
      • Provide guidance to providers on compliance and best practices.

    4. National Authorities

    • Definition: Each EU member state will designate national authorities responsible for the supervision and enforcement of the EU AI Act.
    • Responsibilities:
      • Monitor compliance with the AI Act within their jurisdiction.
      • Handle incident reporting and investigations related to AI systems.
      • Facilitate cooperation among member states to ensure a unified approach to AI regulation.

    5. European Artificial Intelligence Board (EAIB)

    • Definition: The EAIB is a body established under the EU AI Act to facilitate cooperation and coordination among member states and ensure consistent implementation of the Act; it is complemented by the European AI Office within the Commission.
    • Responsibilities:
      • Provide guidance on the interpretation and enforcement of the AI Act.
      • Develop and update technical standards and guidelines related to AI systems.
      • Foster collaboration among regulators, stakeholders, and industry representatives.

    6. Compliance and Enforcement Authorities

    • Definition: These are specific authorities that may be established at the national level to manage compliance, monitoring, and enforcement of the EU AI Act.
    • Responsibilities:
      • Investigate potential non-compliance issues and impose penalties where necessary.
      • Support national authorities in enforcing the provisions of the Act.
      • Manage incident reporting and contribute to the ongoing monitoring of AI systems.

    Summary

    The roles defined under the EU AI Act are crucial for ensuring that AI systems are developed, deployed, and used in a manner that is safe, ethical, and compliant with regulatory standards. These roles facilitate accountability and enable effective oversight throughout the lifecycle of AI technologies, from development through to post-market monitoring. Understanding these roles is essential for organisations engaging with AI to ensure they adhere to the requirements set forth by the Act.


    The EU AI Act is structured into numerous articles that outline its provisions, requirements, and obligations. Below is a simplified overview of key articles; note that this numbering follows an earlier draft summary and does not match the final adopted text in every case (in Regulation (EU) 2024/1689, for example, Article 5 covers prohibited AI practices and Article 6 sets the classification rules for high-risk systems).

    Key Articles of the EU AI Act

    1. Article 1: Subject matter and scope

      • Defines the purpose of the regulation and the AI systems it applies to, clarifying that it covers both public and private sector uses of artificial intelligence.
    2. Article 2: Definitions

      • Provides definitions for key terms used throughout the Act, such as "artificial intelligence," "high-risk AI systems," and "provider."
    3. Article 3: Relationship with other Union legal acts

      • Clarifies how the EU AI Act interacts with other EU legislation and regulations, ensuring consistency across legal frameworks.
    4. Article 4: Fundamental rights and safety

      • Emphasises the need to respect fundamental rights and ensure safety when developing and deploying AI systems.
    5. Article 5: AI systems classification

      • Introduces a risk-based classification system for AI systems, categorising them into unacceptable risk, high risk, and low or minimal risk categories.
    6. Article 6: Prohibited AI practices

      • Lists specific AI applications that are prohibited due to their potential to harm individuals or society, such as social scoring by governments and real-time biometric identification in public spaces.
    7. Article 7: High-risk AI systems

      • Defines the criteria for identifying high-risk AI systems, which will be subject to stricter regulatory requirements.
    8. Article 8: Requirements for high-risk AI systems

      • Outlines the specific requirements that high-risk AI systems must meet, including risk management, data governance, transparency, and human oversight.
    9. Article 9: Governance and management

      • Discusses the governance structures that providers must establish to ensure compliance with the AI Act and proper management of AI systems.
    10. Article 10: AI systems requirements

      • Specifies additional requirements for high-risk AI systems pertaining to the robustness, accuracy, and reliability of the systems.
    11. Article 11: Transparency and information to users

      • Mandates that providers ensure transparency regarding AI systems, including providing users with information about their capabilities and limitations.
    12. Article 12: Human oversight

      • Emphasises the necessity of human oversight in the operation of high-risk AI systems to ensure safety and ethical considerations.
    13. Article 13: Conformity assessment

      • Describes the procedures for assessing conformity of high-risk AI systems with the provisions of the Act before they are placed on the market.
    14. Article 14: Post-market monitoring

      • Outlines requirements for ongoing monitoring of AI systems after they have been deployed to ensure continued compliance and performance.
    15. Article 15: Responsibilities of users

      • Defines the responsibilities of users of AI systems, including ensuring that the systems are used in accordance with the AI Act.
    16. Article 16: National competent authorities

      • Establishes the role of national authorities in the implementation and enforcement of the AI Act.
    17. Article 17: European Artificial Intelligence Board

      • Introduces the EAIB, tasked with facilitating cooperation among member states and providing guidance on the interpretation of the Act.
    18. Article 18: Codes of conduct

      • Encourages the development of voluntary codes of conduct for AI systems not classified as high-risk to promote best practices.
    19. Article 19: Reporting obligations

      • Sets out the obligations for providers and users to report incidents or issues related to AI systems to the relevant authorities.
    20. Article 20: Penalties

      • Discusses the enforcement mechanisms and penalties for non-compliance with the AI Act.

    Summary

    The articles of the EU AI Act lay the groundwork for a comprehensive regulatory framework aimed at promoting the safe and ethical use of artificial intelligence across the EU. By establishing clear definitions, requirements, and governance structures, the Act seeks to mitigate risks associated with AI while fostering innovation and public trust in AI technologies. Understanding these articles is crucial for organisations involved in the development or use of AI to ensure compliance with the regulatory landscape.

EU AI Act Mapping with ISO 42001

ISO/IEC 42001 is a standard for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS). The EU AI Act, on the other hand, is a legislative framework that aims to ensure the safe and ethical use of artificial intelligence within the EU.

Below is a mapping of ISO 42001 clauses with relevant articles from the EU AI Act. This mapping aims to demonstrate the alignment between the standard’s requirements and the regulatory provisions of the AI Act.

Mapping Table of ISO 42001 Clauses with EU AI Act Articles

| ISO 42001 Clause | Relevant EU AI Act Article | Description |
|---|---|---|
| 1. Scope | Article 1: Subject matter and scope | Establishes the purpose and scope of the AI management system and the AI Act's application. |
| 2. Normative References | Article 2: Definitions | Provides definitions relevant to AI management and the scope of the AI Act. |
| 3. Terms and Definitions | Article 2: Definitions | Aligns on the terminology used in AI, ensuring clarity and understanding. |
| 4. Context of the Organisation | Article 17: Risk Management | Encourages organisations to assess the context in which they operate, including risk factors related to AI. |
| 5. Leadership | Article 9: Governance | Emphasises the need for strong governance structures in the use of AI. |
| 6. Planning | Article 5: AI Systems Classification | Involves planning for compliance based on the risk classification of AI systems. |
| 7. Support | Article 11: Transparency and Information to Users | Addresses the need for support mechanisms, ensuring transparency and communication with users of AI systems. |
| 8. Operation | Article 10: AI Systems Requirements | Outlines operational processes and requirements for developing and deploying AI systems. |
| 9. Performance Evaluation | Article 14: Post-Market Monitoring | Involves evaluating the effectiveness and compliance of AI systems after deployment. |
| 10. Improvement | Article 13: Conformity Assessment | Focuses on continuous improvement and the need for periodic assessments to ensure compliance with the AI Act. |
| Annex A (Guidance) | Article 18: Codes of Conduct | Provides guidance on best practices and codes of conduct applicable to AI systems. |
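For programmatic use, the mapping table above can be encoded as a dictionary. The article labels follow this document's own summary, which simplifies the numbering of the adopted regulation:

```python
# The ISO 42001 clause-to-article mapping above, as a queryable dictionary.
# Article numbers follow this document's simplified summary of the AI Act.
ISO42001_TO_AI_ACT = {
    "1 Scope": "Article 1",
    "2 Normative references": "Article 2",
    "3 Terms and definitions": "Article 2",
    "4 Context of the organisation": "Article 17",
    "5 Leadership": "Article 9",
    "6 Planning": "Article 5",
    "7 Support": "Article 11",
    "8 Operation": "Article 10",
    "9 Performance evaluation": "Article 14",
    "10 Improvement": "Article 13",
}

def articles_for(clause_prefix: str) -> list[str]:
    """Return mapped article(s) for clauses whose name starts with the prefix."""
    return [a for c, a in ISO42001_TO_AI_ACT.items() if c.startswith(clause_prefix)]

print(articles_for("9 "))  # ['Article 14']
```

A lookup like this is useful when building a compliance traceability matrix, where each AIMS control needs a pointer to the regulatory obligation it supports.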

Summary

This mapping illustrates how the ISO 42001 standard can align with the regulatory framework established by the EU AI Act. Both frameworks emphasise the importance of governance, risk management, transparency, and continuous improvement in the development and deployment of AI systems. Organisations can benefit from implementing ISO 42001 to ensure compliance with the EU AI Act while enhancing their AI governance and management practices.
