The Automation of PCI Reports on Compliance

If you really want to piss off the PCI Security Standards Council (PCI SSC), show them how you are writing your Reports on Compliance (RoCs) automatically. You’ll find yourself in remediation very quickly.

But why? What possible difference could it make that you say things the same way from report to report as long as the validation of the controls was performed correctly?

For example, let’s take Requirement 2.2.b: “Examine policies and interview personnel to verify that system configuration standards are updated as new vulnerability issues are identified, as defined in Requirement 6.1.”

To validate that this is in place, you must perform the following three processes:

  1. Identify the policy documentation verified to define that system configuration standards are updated as new vulnerability issues are identified;
  2. Identify the personnel interviewed for this testing procedure; and
  3. For the interview, summarize the relevant details discussed that verify that the process is implemented.

So, for 1., if you have mapped all relevant documentation to the PCI requirements (which you should) in your RoC Writing Tool (RWT), this will simply be an automated regurgitation of the document names and, hopefully, section numbers. If not, you have the relevant documents already summarised in Section 4.10.
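To put a shape on it, here is a minimal sketch of that regurgitation. The data model and names are mine for illustration, not any particular RWT’s:

```python
# Hypothetical requirement-to-documentation mapping, maintained once in the RWT.
DOCUMENT_MAP = {
    "2.2.b": [
        ("Secure Configuration Standard", "s. 4.2"),
        ("Vulnerability Management Policy", "s. 3.1"),
    ],
}

def documents_for(requirement: str) -> str:
    """Return the documentation wording for a testing procedure, or flag a gap."""
    docs = DOCUMENT_MAP.get(requirement, [])
    if not docs:
        return "No documentation mapped - flag as a gap."
    return "; ".join(f"{name} ({section})" for name, section in docs)

print(documents_for("2.2.b"))
# Secure Configuration Standard (s. 4.2); Vulnerability Management Policy (s. 3.1)
```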

For 2., you should already have your personnel mapped against system groupings in your asset register, so again, this is just a regurgitation. If not, you have the relevant group(s) already summarised in Section 4.11.
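The same trick again, assuming a made-up asset register structure with owners attached to each group:

```python
# Hypothetical asset register excerpt: each group carries the people to interview.
ASSET_GROUPS = {
    "Linux web servers": ["J. Smith (Head of Infrastructure)"],
    "Windows domain controllers": ["A. Jones (AD Administrator)"],
}

def interviewees_for(groups: list[str]) -> list[str]:
    """Collect the personnel to interview for the asset groups in scope."""
    names: list[str] = []
    for group in groups:
        names.extend(ASSET_GROUPS.get(group, []))
    return names

print(interviewees_for(["Linux web servers", "Windows domain controllers"]))
```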

For 3., this is where the SSC is looking for a true narrative, but all validation relevant to 2.2.b is performed in the same way for each system type, so as long as the QSA and their client are actually doing their jobs properly, the contents of this narrative will be basically the same:

For [asset group]:

QSA interviewed [personnel], examined [documents], and obtained [validation evidence] for [client] [name of vulnerability management process] and [name of configuration standard process], as well as examined production configurations for [sample(s)].

…and so on for each distinct and relevant asset grouping.
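Filling that template is nothing more than string substitution; a sketch with purely illustrative values follows:

```python
# The narrative template above, with the bracketed placeholders as format fields.
NARRATIVE = (
    "QSA interviewed {personnel}, examined {documents}, and obtained {evidence} "
    "for {client} {vuln_process} and {config_process}, as well as examined "
    "production configurations for {samples}."
)

# Example values are invented; in practice they come from the mappings above.
print(NARRATIVE.format(
    personnel="J. Smith",
    documents="Secure Configuration Standard s. 4.2",
    evidence="change tickets CR-101 to CR-105",
    client="Acme Ltd",
    vuln_process="vulnerability management process",
    config_process="configuration standard review",
    samples="web-01, web-02",
))
```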

A huge advantage of this is that for any asset type you add, the list of related DSS Requirements and the related validation evidence all become pre-defined, mandatory action items, each assigned to an individual. Assuming you have also defined a compliance goal, you will also have due dates. Further still, with a correctly defined hierarchy you have the beginning of true project management.
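In code terms, something like the following, where the requirement-to-evidence mapping and the 30-day buffer are invented for illustration:

```python
from datetime import date, timedelta

# Hypothetical mapping: which requirements and evidence apply to each asset type.
REQUIREMENTS_BY_ASSET_TYPE = {
    "linux_server": {
        "2.2.b": ["configuration standard", "vulnerability process interview notes"],
        "10.2": ["audit log samples"],
    },
}

def action_items(asset_type: str, owner: str, compliance_goal: date) -> list[dict]:
    """Generate pre-defined, owned, dated action items for a newly added asset type."""
    items = []
    for req, evidence in REQUIREMENTS_BY_ASSET_TYPE.get(asset_type, {}).items():
        for item in evidence:
            items.append({
                "requirement": req,
                "evidence": item,
                "owner": owner,
                # Crude sketch: everything due 30 days before the compliance goal.
                "due": compliance_goal - timedelta(days=30),
            })
    return items

for item in action_items("linux_server", "J. Smith", date(2025, 12, 31)):
    print(item)
```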

Imagine that: with asset management done well, you have an up-front list of EVERYTHING required to achieve compliance, along with full accountability for the collection of the necessary validation evidence. As the target organisation collects and uploads the evidence, the RoC is writing itself in the background, giving full insight into the gaps and the level-of-effort indicators required either to adjust resources or to justify technology or outsourcing investments.
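A sketch of how that background gap report could fall out of the same mappings (all data here is made up):

```python
# Pre-defined action items and the evidence uploaded so far (hypothetical).
ACTIONS = [
    {"requirement": "2.2.b", "evidence": "configuration standard", "owner": "J. Smith"},
    {"requirement": "2.2.b", "evidence": "interview notes", "owner": "J. Smith"},
    {"requirement": "10.2", "evidence": "audit log samples", "owner": "A. Jones"},
]
UPLOADED = {("2.2.b", "configuration standard")}

def gap_report(actions: list[dict], uploaded: set[tuple[str, str]]) -> dict:
    """Split action items into complete vs outstanding based on uploaded evidence."""
    outstanding = [a for a in actions if (a["requirement"], a["evidence"]) not in uploaded]
    return {
        "complete": len(actions) - len(outstanding),
        "outstanding": len(outstanding),
        "gaps": outstanding,
    }

print(gap_report(ACTIONS, UPLOADED))
```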

Now imagine how simple APIs could accept information from end-point technologies like A/V for FIM, or from centralised management stations for logging or IDS, or how a simple agent could report against running services / listening ports / registry settings for an operating system, and you are starting to perform the validation itself automatically. Not against PCI requirements, but against your full gamut of corporate policies and standards, all of which are mapped against your asset types. And not annually, but all day, every day.
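As one deliberately simplified example of that agent idea, a host check could compare the ports actually listening against the baseline in the hardening standard for that asset type. The allowed ports are an assumption, and psutil is a third-party library, not part of any PCI tooling:

```python
import psutil  # third-party: pip install psutil (may need elevated rights on some OSes)

# Hypothetical baseline from the hardening standard for this asset type.
ALLOWED_PORTS = {22, 443}

def port_findings() -> list[str]:
    """Report listening TCP ports that are not in the approved baseline."""
    findings = []
    for conn in psutil.net_connections(kind="tcp"):
        if conn.status == psutil.CONN_LISTEN and conn.laddr.port not in ALLOWED_PORTS:
            findings.append(f"Unexpected listener on port {conn.laddr.port}")
    return findings

if __name__ == "__main__":
    for finding in port_findings():
        print(finding)
```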

Forget PCI compliance, this is the type of Continuous Compliance Validation everyone needs, regardless of the data type, compliance regime, or industry sector.

Difficult? Yes. Simple? Always.