The Automation of PCI Reports on Compliance

If you really want to piss off the PCI Security Standards Council (PCI SSC), show them how you are writing your Reports on Compliance (RoCs) automatically. You’ll find yourself in remediation very quickly.

But why? What possible difference could it make that you say things the same way from report to report as long as the validation of the controls was performed correctly?

For example, let’s take Requirement 2.2.b: “Examine policies and interview personnel to verify that system configuration standards are updated as new vulnerability issues are identified, as defined in Requirement 6.1.”

In order to validate that this is in place, you must perform the following three steps:

  1. Identify the policy documentation verified to define that system configuration standards are updated as new vulnerability issues are identified;
  2. Identify the personnel interviewed for this testing procedure, and;
  3. For the interview, summarize the relevant details discussed that verify that the process is implemented.

So, for 1., if you have mapped all relevant documentation to the PCI requirements (which you should) in your RoC Writing Tool (RWT), this is simply an automated regurgitation of the document names and, hopefully, section numbers. If not, you already have the relevant documents summarised in Section 4.10.

For 2., you should already have your personnel mapped against system groupings in your asset register, so again, this is just a regurgitation. If not, you already have the relevant group(s) summarised in Section 4.11.

For 3., this is where the SSC is looking for a true narrative, but all validation relevant to 2.2.b is performed in the same way for each system type, so as long as the QSA and their client are actually doing their jobs properly, the contents of this narrative will be basically the same:

For [asset group]:

QSA interviewed [personnel], examined [documents], and obtained [validation evidence] for [client] [name of vulnerability management process] and [name of configuration standard process], as well as examined production configurations for [sample(s)].

…and so on for each distinct and relevant asset grouping.
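
To make the point concrete, here is a minimal Python sketch of how that narrative falls straight out of data you have already mapped. The field names, asset group, and template wiring are hypothetical illustrations, not any particular RWT’s schema:

NARRATIVE_TEMPLATE = (
    "QSA interviewed {personnel}, examined {documents}, and obtained "
    "{evidence} for {client} {vuln_process} and {config_process}, "
    "as well as examined production configurations for {samples}."
)

# Hypothetical per-asset-group mappings pulled from the asset register / RWT.
ASSET_GROUPS = {
    "Linux web servers": {
        "personnel": "the Head of Infrastructure",
        "documents": "Configuration Standard v2.3, section 4",
        "evidence": "change tickets raised from vulnerability alerts",
        "samples": "web01 and web02",
    },
}

def narrative_for(group, client, vuln_process, config_process):
    """Render the 2.2.b narrative for one asset group from pre-mapped data."""
    data = ASSET_GROUPS[group]
    return NARRATIVE_TEMPLATE.format(client=client, vuln_process=vuln_process,
                                     config_process=config_process, **data)

if __name__ == "__main__":
    print(narrative_for("Linux web servers", "Acme Ltd",
                        "vulnerability management process",
                        "configuration standard process"))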

A huge advantage of this is that for any asset type you add, the list of related DSS Requirements and the related validation evidence all become pre-defined and mandatory action items, assigned to an individual. Assuming you have also defined a compliance goal, you will also have due dates. Even further, with a correctly defined hierarchy you also have the beginning of true project management.

Imagine that: with asset management done well, you have an up-front list of EVERYTHING required to achieve compliance, along with full accountability for the collection of the necessary validation evidence. As the target organisation collects and uploads the evidence, the RoC is writing itself in the background, giving full insight into the gaps and the level-of-effort indicators needed to either adjust resources or justify technology or outsourcing investments.
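
To illustrate the mechanics, here is one way, purely as a sketch (the requirement mapping, owner, and lead time below are invented for the example, not taken from the DSS or any tool), that adding a single asset type could expand into dated, owned evidence-collection tasks:

from datetime import date, timedelta

# Hypothetical mapping: asset type -> applicable requirements and the
# validation evidence each one needs (illustrative entries only).
REQUIREMENTS_BY_ASSET_TYPE = {
    "Linux web server": [
        ("2.2.b", "updated configuration standard and interview notes"),
        ("10.2", "audit log samples from the central log server"),
    ],
}

def tasks_for_new_asset(asset_type, owner, compliance_goal, lead_days=30):
    """Expand one new asset type into dated, owned evidence-collection tasks."""
    due = compliance_goal - timedelta(days=lead_days)
    return [{"requirement": req, "evidence": ev, "owner": owner, "due": due}
            for req, ev in REQUIREMENTS_BY_ASSET_TYPE[asset_type]]

if __name__ == "__main__":
    for task in tasks_for_new_asset("Linux web server", owner="j.smith",
                                    compliance_goal=date(2026, 3, 31)):
        print(task)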

Now imagine how simple APIs could accept information from end-point technologies like A/V or FIM, or from centralised management stations for logging or IDS, or how a simple agent could report against running services / listening ports / registry settings for an operating system, and you are starting to perform the validation itself automatically. Not just against PCI requirements, but against your full gamut of corporate policies and standards, all of which are mapped against your asset types. And not annually, but all day, every day.
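
As a thought experiment only, Linux-specific and pointed at an imaginary console endpoint, the agent side of that could start out as little more than this:

import json
import socket
import subprocess
import urllib.request

CONSOLE_URL = "https://compliance.example.internal/api/agent-report"  # imaginary endpoint

def listening_ports():
    """Parse `ss -lntH` output into a sorted list of listening TCP ports (Linux only)."""
    out = subprocess.run(["ss", "-lntH"], capture_output=True, text=True, check=True)
    ports = set()
    for line in out.stdout.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        # Local address is the 4th column, e.g. "0.0.0.0:22" or "[::]:443".
        ports.add(int(fields[3].rsplit(":", 1)[1]))
    return sorted(ports)

def report():
    """POST this host's state to the central console for baseline comparison."""
    payload = json.dumps({"host": socket.gethostname(),
                          "listening_ports": listening_ports()}).encode()
    req = urllib.request.Request(CONSOLE_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # will only succeed if such a console actually exists

if __name__ == "__main__":
    print(listening_ports())  # in practice, run report() from a scheduler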

Forget PCI compliance, this is the type of Continuous Compliance Validation everyone needs, regardless of the data type, compliance regime, or industry sector.

Difficult? Yes. Simple? Always.

Continuous Compliance Validation: Why The PCI DSS Will Always Fall Short

Just about everyone who writes on information security has had ample fodder from the numerous high-profile breaches, myself included. Some blame the PCI standards or the card brands themselves, some blame the retailers for not doing enough, and those who are a little more charitable just blame the thieves.

In the end, it’s not about blame, it’s about learning the lesson, making the necessary adjustments, and moving on responsibly. Unfortunately, this will NOT include being able to move on from credit cards or from the PCI DSS v3.1 any time soon, so organisations wanting to avoid becoming the next Target (excuse the pun) had better pay more attention to their enterprise-wide security program, not just their annual compliance ‘projects’.

Just as importantly, they need to pay VERY close attention to innovation in the payment / authentication space, and advances in more real-time security measures / technologies.

Nothing in the PCI DSS is anything other than a bare minimum; it represents just enough security for the card brands to say they are doing what they can. But any organisation that thinks this is enough will eventually lose data, and I for one have no sympathy.

You can look at every single requirement and come up with two choices: 1) good enough for PCI, and 2) appropriate for the business. Nine times out of ten, the second option is more difficult to implement, but in almost every instance it is both easier to maintain and more secure.

For example:

PCI DSS Requirements 1.X are all about networking, firewalls, segmentation and the like, and while they stress that every service/protocol/port must have a business justification, they do not state specifically that every individual in-scope device must have least-privilege inbound and outbound rules applied:

  1. 1.1.6.a – Verify that firewall and router configuration standards include a documented list of all services, protocols and ports, including business justification for each
  2. 1.2.1.a – Examine firewall and router configuration standards to verify that they identify inbound and outbound traffic necessary for the cardholder data environment.
  3. 1.2.1.b – Examine firewall and router configurations to verify that inbound and outbound traffic is limited to that which is necessary for the cardholder data environment.

Yes, we can infer it means each device (especially 1.2.1.b), and yes, it’s the right thing to do, but no QSA can enforce anything that is not specifically written within the standard. If they had just replaced “the cardholder data environment” with “each in-scope system”, DSS Section 1 would be VERY different, and would instil a significantly better security posture.

However, if they DID change it to least privilege for every device, would it actually be possible to implement and maintain? The same goes for more robust configuration standards (DSS Section 2) or real-time logging (DSS Section 10): what should be done is very different from what the DSS requires.

In answer to the question: yes, it is possible, and it all boils down to one thing: baselines.

Security is not about crunching big data to determine patterns; that’s only truly relevant in forensics, when it’s already too late. Real security is knowing exactly what something SHOULD look like when performing normally, and reporting everything outside of that. Keep it simple, or it cannot be monitored, maintained, or measured, but the PCI DSS can never go this far.

Hypothetically, if you knew every running service, listening port, and permitted connection each in-scope device should maintain to perform its function, then anything NOT on that list should be investigated. That’s a baseline. Security would dictate that you have alerts based on these anomalies for all systems, not a sample of them, and certainly not once a year (point-in-time).

How difficult would it be to automate this process so that EVERY system (not just PCI ones) reports back on a daily/weekly/monthly – or ANY period of time less than a year! – basis to a centralised management console to perform the baseline comparisons? Then what’s to stop you comparing the device’s listening ports to firewall rule sets to make sure they are properly defined? Or comparing them against enterprise policies and standards, or known business data flows?
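
The comparison itself is conceptually trivial, little more than a set difference. A hedged sketch, with the observed and baselined data assumed rather than collected from anywhere real:

def compare_to_baseline(observed_ports, baseline_ports):
    """Return (unexpected, missing) listening ports for one device."""
    observed, baseline = set(observed_ports), set(baseline_ports)
    return sorted(observed - baseline), sorted(baseline - observed)

if __name__ == "__main__":
    # Example: a web server baselined for SSH and HTTPS only.
    unexpected, missing = compare_to_baseline(observed_ports=[22, 443, 3389],
                                              baseline_ports=[22, 443])
    print("Investigate:", unexpected)   # [3389] - listening but not in the baseline
    print("Possibly broken:", missing)  # []     - baselined but not observed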

Not one organisation or security vendor is doing this properly, at least not that I have seen, and not yet. Some vendors do bits of it, but the last thing you want to do is patch together a bunch of separate, non-integrated systems, as the effort to do so will usually outweigh the risk mitigation and ruin the cost-to-benefit ratio.

However, none of this can happen until you have centralised and accurate asset management, and seeing as the PCI DSS just added that as a requirement in v3.0, most organisations have a long way to go before they can ever achieve this ultimate in security: continuous compliance validation.

PCI – Going Beyond the Standard: Part 22, Compliance Validation

This is often where an assessment starts going truly pear-shaped. Mostly because of assumptions, but a large chunk of what makes validation of compliance so difficult is the lack of mechanisms to make it anything other than a manual process. There is no requirement for automation in PCI, so going beyond the standard is very simple.

First, let’s start with the worst assumption: that sampling is a right. It’s not; it’s a privilege, and one you have to earn. Until you can show your assessor that you have ALL of the following in place, sampling isn’t even an option:

  1. Formalised and Robust Configuration Management – Unless you can show that you have a VERY good handle on configuring ‘like systems’ in an identical fashion, sampling cannot be performed. All web servers, app servers, DB servers etc. must start out exactly the same. From installation from a known-good base image, to configuration of applications, to testing through change control prior to promotion into production, there is no room for ad hoc here.
  2. Centralised Management and Maintenance – You must be able to show your QSA that you have the ability to KEEP your like-systems identical, so if you have a centralised console where this can be displayed, so much the better. WSUS for Windows, or CiscoWorks for Cisco network devices, for example, can centrally display pretty much everything you need to know about the systems: OS, version, patches and so on.
  3. Centralised Logging and Monitoring – An extension to management: log baselines, like-system thresholds, incident response and so on must be centralised, or every different monitoring process must be examined individually.

PCI does not require any of the three facets above to be centralised, not even logging, so if none of them are in place, there is no sampling.

An operating system has 12 validation points (access control, password settings, logging and so on), an application has 9, and a DB has 7. In theory, a single system could therefore require 28 separate pieces of evidence to validate its compliance.

What if you had 100 of these systems? That’s 2,800 separate pieces of evidence.

I have expounded many times the concept of security being simple, not easy, but simple. Well, there is no simple without centralisation, and it’s VERY difficult to achieve simple without some form of automation.

For PCI, a screenshot of the AV settings [for example] will suffice, but this can involve the collection of MANY screenshots. How much better to have a centralised management station that shows these settings across all systems in one place? The same applies to logging, access control mechanisms, FIM, running services / listening ports (i.e. configuration standards), and almost every other validation requirement at the system level.

In the hundreds of assessments with which I have in some way been involved, I would estimate that between 25% and 40% of an assessment’s effort relates to the gathering of validation evidence. That’s in year one; it’s actually greater in subsequent years, even though the effort to get to the point of validation SHOULD have been less.

But that’s really the point here, and the point of the SSC’s recent supplemental on staying compliant: if you treat validation of compliance as a project, you are wasting a significant amount of resources AND you are no closer to actually being secure.

Compliance requires no automation, but without it you have no continuous compliance validation. PCI requires no centralisation, but without that you simply cannot manage what you have efficiently and effectively.

Validation is easy when your security program is simple, because the management of your program is simple. Spending weeks collecting evidence for compliance could not be a more ridiculous use of your time.

Simple is cheaper, more efficient, easier to manage and measure, and above all, more secure. Validation of compliance falls out of the back of a security program done well, so work on that first.


Continuous Compliance Validation

Annual Validation is Dead, It’s Time for Continuous Compliance

As you probably know, the PCI DSS is a minimum set of security controls that must be in place around anything that transmits, stores, or processes cardholder data. That’s probably why the card brands and the SSC get so irritated that even this basic set of good practices is so hard to achieve.

That said, unless you have a way of monitoring and maintaining your compliance within these baselines, it’s not only VERY difficult to stay compliant (let alone secure), it makes validation of your compliance an annual nightmare of gathering screenshots, log samples, and so on. I have estimated that validation of controls can take up to 25% of the entire annual assessment cycle.

This is a tremendous loss of resource time, and does nothing for your ROI, so why DOES the PCI DSS only require an annual point-in-time validation, and not validation of continuous compliance? Yes, you are accountable to stay compliant at all times, but you only have to validate it once a year, and – if you’ve earned it – on only a sample of your systems.

The answer is, they simply cannot go that far. Continuous Compliance is far more difficult than achieving PCI compliance, and is firmly in the realms of good security practice. They can enforce minimums; they cannot enforce more than that and still get the necessary acceptance.

So what IS Continuous Compliance? “It is the near real-time notification of a variation from your baselined norms.” Or to put it another way: once you know what something should look like normally, you want to know if it changes from that.

For example, the PCI DSS specifies a dozen or so validation points for an operating system: business justification for all listening ports, access control, logging, FIM and so on. Once a year, you have to show your assessor that these validation points meet the DSS requirements, and that’s it for the YEAR! All too often, systems fall out of compliance within a matter of days.

Instead, what I propose is that you automate (as much as possible) the collection of that validation data, and compare it not only to the PCI DSS requirement minimums, but to ALL of your compliance / regulatory / internal policy standards. And not yearly, but hourly, daily, weekly, whatever makes sense. Wouldn’t you rather show your assessor a green checkmark for ALL of your systems than a dozen screenshots for a mere sample?
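
To illustrate the idea, here is a sketch in which the same collected state is scored against every regime you care about in one pass. The collector and the baselines are stand-ins, not a real integration:

# Stand-in baselines: each regime or internal policy is just another yardstick
# the same collected state is compared against (names and checks are invented).
BASELINES = {
    "PCI DSS": {"av_enabled": True, "fim_enabled": True},
    "Internal policy": {"av_enabled": True, "fim_enabled": True,
                        "logging_forwarded": True},
}

def collect(host):
    """Stand-in for whatever agent or API actually gathers the validation data."""
    return {"av_enabled": True, "fim_enabled": True, "logging_forwarded": False}

def evaluate(host):
    """Score one host's collected state against every baseline in one pass."""
    state = collect(host)
    results = {}
    for regime, required in BASELINES.items():
        failures = [key for key, value in required.items() if state.get(key) != value]
        results[regime] = "PASS" if not failures else "FAIL: " + ", ".join(failures)
    return results

if __name__ == "__main__":
    # Run hourly/daily from a scheduler rather than once a year.
    for regime, verdict in evaluate("web01").items():
        print(regime, "->", verdict)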

If this can be configured for just 50% of your in-scope devices, your entire annual validation burden will be reduced by 30% or more. Plus, you also have a very convincing addition to your compensating controls for lack of FIM or AV (if applicable).

Best of all, you are now doing security as it was meant to be done: enterprise-wide, and Business As Usual.

Any operating system experts out there want to help me put this together?

[If you liked this article, please share! Want more like it? Subscribe!]