Requirement 11: Test security of systems and networks regularly


  • This requirement covers the use of vulnerability scanning, penetration testing, file integrity monitoring and intrusion detection to ensure that weaknesses are identified and addressed.

  • 2022 Payment Security Report
  • Figure 26. Global state of PCI DSS compliance: Requirement 11

  • Full compliance: On average, only 60% of organizations across the globe maintained compliance with Requirement 11. The percentage of organizations that kept all controls in place increased by 8.2 pp. While full compliance improved significantly, it remains the requirement with the lowest performance and sustainability across the PCI DSS.

    Control gap: The control gap declined by nearly half (43%), from a high of 13.2% to 7.4%. This is a much-needed improvement, due in part to a substantial 10.0-pp performance improvement on Test Procedure 11.3.2.a (internal penetration test) and gains on Control 11.2 (internal and external vulnerability scans).

    Compensating controls: The use of compensating controls increased modestly, from a relatively low 3.9% to 5.8% of organizations requiring them to meet this key requirement.

    Control 11.3 was compensated the most (by 3.5% of organizations), followed by Controls 11.1 and 11.5 (both by 1.7%).

  • Figure 27. Requirement 11 control performance

    • The ongoing trouble with Requirement 11

      Many issues contribute to the poor performance of Requirement 11. Some are basic, while others are systemic, undermine the sustainable effectiveness of controls under Requirement 11, and require comprehensive diagnosis and remediation. The following basic issues can be avoided or corrected with relatively little effort.

      • Cases where Control 11.3.3 had a larger gap than the external or internal penetration testing itself often point to an organization performing a penetration test but then failing to mitigate the findings, or being unable to do so. Some organizations receive vulnerability scan and penetration test reports that they don’t understand or are unsure where to start mitigating. This can be solved by improving the team’s skillset through education and training
      • Some organizations apply an incorrect interpretation of the requirements, such as Control 11.3.3— Exploitable vulnerabilities found during penetration testing are corrected and testing is repeated to verify the corrections—where the word “exploitable” is incorrectly interpreted as “high-risk vulnerabilities”
      • Numerous organizations have yet to achieve a medium level of maturity of their internal security testing processes and capabilities. In larger organizations, removing the silos between teams and completing the integration between various critical activities, such as vulnerability scanning, penetration testing, security incident and log management, vendor management, etc., require attention

      See page 134, where we review issues related to Requirement 11 in more detail.

  • Requirement 11: Test security of systems and networks regularly

    The goal

    The goal of PCI DSS Key Requirement 11 is to develop and maintain a sustainable capability to effectively verify the security posture of all system components across the CDE, using automated network scanning and penetration testing tools as well as manual methods to detect network and application vulnerabilities, and to rectify those vulnerabilities based on a formal risk-assessment framework.

    This goal includes complete integration with all related PCI DSS Key Requirements for the establishment of an effective, integrated series of control systems, and the development and ongoing improvement of all related capabilities, processes, documentation, tools and training needed to achieve < Quantitatively managed/Optimized > maturity of this key requirement by < insert date >.

    Goal applicability and scope considerations

    • Testing scope: Security testing of all in-scope networks and IT system components across the CDE, including wireless access points, internal and external vulnerability scanning, internal and external penetration testing, segmentation testing, and cloud environments (service providers)
    • Security tools: Configuration, use and maintenance of network scan applications, penetration testing tools, change-detection tools (file-integrity monitoring), automated monitoring tools (IDS/IPS, NAC, wireless)
    • Process: Documented vulnerability management program, including network and application vulnerability management procedures, penetration testing methodology, wireless access point assessments, security alert configuration standard, incident response process

    Goal requirements:

    Some of the primary conditions necessary to achieve the goal

    • Capability—program management: Develop and document a comprehensive vulnerability management program that covers the entire scope of the requirement to effectively support the achievement of the requirement goal
    • Capability—testing scope: Create the ability to effectively sustain periodic security testing of all in-scope components, including after significant changes to the network or systems. Test for the presence of wireless (Wi-Fi) access points, and detect and identify all authorized and unauthorized wireless access points. Maintain mechanisms to detect suspicious or anomalous network traffic in real time, using intrusion-detection and/or intrusion-prevention techniques to detect and/or prevent intrusions into the network
    • Documentation and processes: Maintain effective standard operating procedures with clearly articulated performance standards. Regularly train and educate staff on how to follow the documented procedures

    Strong dependencies and integration with other key requirements

    • Requirement 6: Strong dependency and integration with secure systems and software
    • Requirement 2: Integration with application of secure configurations
    • Requirement 10: Integration with logging and monitoring requirements
    • Requirement 1: Testing of network security components
    • Requirements 3 & 4: Testing of components that store, process and transmit account data

    Short-term objectives

    • Capability: Effective communication of a complete vulnerability management program document to all stakeholders involved in the planning and delivery of this requirement (training, education and awareness) to support capacity and capability planning and ongoing project management efforts
    • Project planning: The commitment of resources, confirmation of roles and responsibilities, and scheduling of all tasks that support the effective and timely execution and achievement of all objectives and the goal

    Long-term objectives

    • Integrate: Improve the integration between all in-scope security testing and monitoring components
    • Maturity: Achieve and maintain high-capability maturity on security vulnerability management and incident response

    Common constraints

    • Capacity: Lack of capacity to scan large internal networks while maintaining real-time environment and system availability; lack of resource capacity planning to manage the workload of this requirement
    • Competence: Misinterpretation of compliance requirements, lack of education and awareness. Operating without a well-defined, documented vulnerability management program
    • Capability: Failure to manage the scheduling and completion of tasks as a project, with proper planning and timely execution
    • Legal constraints: Business and technical constraints due to legislation around cryptography and software
    • Business critical: Highly sensitive, business-critical systems where the risk of unplanned downtime outweighs the risk posed by software vulnerabilities, preventing scans and penetration tests from being conducted
  • On measuring and reporting sustainable control effectiveness: Requirement 11

    Various reasons exist for the prolonged poor compliance performance of PCI DSS Key Requirement 11, including failure to maintain firm process and capability control to perform the required actions. We reviewed this on pages 64 through 67 and 102 through 106 of the 2020 PSR. Another reason for the low performance of the network scan and penetration testing controls is the presentation of evidence of compliance. Organizations continue to complete and “successfully” pass their PCI DSS compliance assessments, despite creating and presenting evidence of compliance “just in time.” This behavior doesn’t demonstrate the ability and commitment to rapidly detect and correct controls that fall out of place. It often demonstrates lack of intent to address the root causes of weak control performance. This continues to happen, primarily because the metrics for the evaluation and reporting of sustainable control effectiveness and continuous improvement are not explicitly included in the PCI DSS assessment procedures within and across PCI DSS requirements (for DSS v3.2.1 and prior versions).

    For most of the PCI DSS controls, evidence of compliance can be produced and submitted to an assessor just in time for review as part of an annual compliance validation assessment. For example, policies and standards can be updated relatively quickly (and superficially), and a signature from management easily obtained to indicate that documents were internally reviewed and approved. Employees who need to receive security and awareness training can be subjected to a quick and superficial security training and awareness session a week before the arrival of the assessor. Similarly, insecure system configurations, weak passwords and poor vendor default settings in critical components can be corrected just prior to the assessor’s arrival and the finalization of the Report on Compliance (ROC). While this behavior from assessed organizations is certainly not ideal, it is widespread.

    QSAs rightfully frown upon receiving evidentiary documents clearly created for the purpose of “passing” a compliance assessment, since such documents don’t demonstrate commitment to meeting the intent of PCI DSS. That intent includes maintaining a sustainable and effective control environment, an essential condition for the protection of payment card data. QSAs are trained to follow the specifications for assessment validation included in the PCI DSS Assessment Procedures. Despite an observed lack of control effectiveness and sustainability, it can be difficult for a QSA to disqualify evidence presented by organizations. There are various reasons for this, which we discuss in further detail below.

    Why has Requirement 11 consistently been the lowest-performing key requirement for more than a decade—both in terms of maintaining full compliance and control gap? Controls 11.2 and 11.3 are some of the few requirements that involve external entities to produce evidence of compliance. Meeting the security testing procedures of Controls 11.2 and 11.3 requires documented proof that network and application vulnerability scans and penetration test procedures were initiated and concluded within the required timeframe and by qualified people. For various reasons, organizations continue to miss completing these time-sensitive requirements, such as producing reports that substantiate that all requirements were met for quarterly network vulnerability scans and annual penetration tests, conducted in time and after substantial changes were made to the CDE. High-risk vulnerabilities must also be corrected in time. Failing this, organizations are unable to produce a fully populated network scan or penetration testing report after the fact—as evidence that the controls were in place within the required timeframe. Penetration tests and vulnerability scan reports are technical and detailed; they include numerous timestamps and dates that record the exact period in which the control actions (scans and pentests) were executed. Fabrication of evidence is never an option; it’s a clear violation of compliance assessment requirements and, when discovered, will result in the immediate termination of the assessment, as well as other repercussions.
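    The timestamp checks described above can be sketched as a simple validation script. This is a minimal illustration, not an assessment tool: the scan dates, the record format and the 92-day gap threshold are all assumptions for the example, not PCI DSS specifications.

```python
from datetime import date

# Hypothetical evidence records: completion dates of quarterly external
# vulnerability scans, as they would be read from the scan reports.
scan_dates = [date(2021, 1, 15), date(2021, 4, 10),
              date(2021, 7, 30), date(2021, 11, 2)]

def quarterly_gaps(dates, max_gap_days=92):
    """Return intervals between consecutive scans that exceed the
    allowed gap (roughly one calendar quarter; an assumed threshold)."""
    ordered = sorted(dates)
    return [(a, b) for a, b in zip(ordered, ordered[1:])
            if (b - a).days > max_gap_days]

for start, end in quarterly_gaps(scan_dates):
    print(f"Gap of {(end - start).days} days between {start} and {end}")
```

    Run against the sample dates above, the check flags the intervals where evidence of a timely scan is missing; a real program would also verify who performed the scan and whether rescans followed substantial CDE changes.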

    • Despite the existence of payment card brand compliance programs for 20 years, some organizations still assume that achieving annual compliance is all, or most, of what it takes to protect payment card data.

  • But achieving data security and compliance success is more than just avoiding failure of compliance validation assessments.

    This situation highlights a larger issue—a weakness that has existed within the PCI DSS assessment procedures since the introduction of the PCI security regulation. The intent of PCI DSS is to ensure that security controls are effective and remain in place. However, organizations are not compelled to provide evidence of “sustainable control effectiveness” as part of their compliance validation assessments for individual requirements. Procedures for evaluating and reporting the effectiveness of any particular control, and its sustainability based on influences from its control environment, are not included in the Standard (DSS v3.2.1 and prior versions). Organizations are only compelled to acknowledge control environment sustainability when ticking the checkbox in part 3a of the Attestation of Compliance (AOC): “I have read the PCI DSS and I recognize that I must maintain PCI DSS compliance, as applicable to my environment, at all times.” The compliance status recorded in the ROC is the implicit evidence of this “sustainable effectiveness.” A ROC primarily records that controls were present within the control environment; it does not include a record of the actual control performance over time. As mentioned, many security controls can be out of place for several months and corrected just in time to pass the annual validation assessment. Therefore, a typical ROC does not report the actual level of assurance of controls present within and across the CDE. A ROC is unlikely to mention any increase in the risk exposure to account data and the reduced effectiveness of the CDE as a consequence of security controls not being in place for prolonged periods prior to the validation assessment.

    Organizations tend to document the minimum amount of evidence required in accordance with what is specified in the PCI DSS assessment procedures. The required evidence documentation does not specifically compel the assessed entity, nor the QSA, to report that a security control was operating as required and in place throughout the duration of its relevant control period and the typical 12-month period preceding the annual compliance validation. As explained before, with the exception of a handful of requirements, evidence that a control was temporarily not in place is often not recorded in the final ROC, and subsequently not a critical factor included in the criteria for organizations to “pass” their validation assessment.

    PCI DSS v3.2.1 assessment procedures don’t include explicit stipulations for the proactive evaluation and reporting of security control sustainability and effectiveness individually per requirement. No defined procedure is included for how control effectiveness should be measured and documented, or what minimum documentation should be submitted as evidence as part of a compliance validation assessment. For example, very few specifications are included in the PCI DSS to report the date when a control was discovered to be not in place, the number of days the control was not in place, and the date when remediation activities were completed for the control to be back in place, operational and functioning as intended—not merely present. (Exceptions include Control 10.8.1, which applies to service providers only, and Designated Entities Supplemental Validation (DESV) control A3.3.1.1.) Including this data is invaluable for recording and reporting the actual performance of the control environment. It vastly improves individual and team responsibility and accountability to ensure that controls are operating in a manner that meets the objective and intent of the requirement. At an individual control level, it also brings much-needed visibility to how controls that are not in place negatively impact the effectiveness of the control environment, not to mention the value of such data during post-breach forensic investigations.
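    The days-not-in-place metric described above can be captured with a trivially small record structure. The control numbers, dates and field names below are hypothetical illustrations of the kind of data an organization could track; nothing here is prescribed by the Standard.

```python
from datetime import date

# Hypothetical control-outage log: for each control, the date it was
# discovered to be not in place and the date remediation restored it.
outages = [
    {"control": "11.2.1", "detected": date(2021, 3, 1), "restored": date(2021, 3, 19)},
    {"control": "11.3.3", "detected": date(2021, 6, 7), "restored": date(2021, 8, 30)},
]

def days_not_in_place(record):
    """Number of days the control was out of place before remediation."""
    return (record["restored"] - record["detected"]).days

for r in outages:
    print(f'Control {r["control"]}: {days_not_in_place(r)} days not in place')
```

    Even this minimal record supports the reporting described above: it shows when a control fell out of place, how long it stayed out, and when it was restored, rather than merely that it was present at assessment time.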

    This is an important gap that PCI DSS v4.0 will help close by introducing a customized approach to controls, emphasizing ongoing assessments, and making other changes to the compliance procedures. How much it helps largely depends on the specifications and procedures for evaluating control effectiveness included within the PCI DSS validation assessment procedures. To be truly useful as an indicator of data security, a PCI DSS ROC should adequately express a level of assurance for security controls. This requires setting minimum criteria for the quality of evidence accepted, and specifying metrics to evaluate the strength, validity and reporting of the actual performance of individual controls or control systems. The extent to which updated requirements in PCI DSS will support this should be more evident in 2024, when the new PCI DSS v4.0 requirements become effective.