Been Breached? The Worst is Yet to Come, Unless…

The information security sector is rife with negativity and pronouncements of doomsday, and while this title is no better, this blog is not meant to scare, but to provide an alternative view of the worst-case scenario: a data breach and the resulting forensics investigation. The fact remains that if your data is online, and someone has the necessary skill-set and wants it badly enough, they are going to get it. So the sooner you prepare yourself for the inevitable, the better you will be able to prevent a security event from becoming a business-crippling disaster.

By the time you make your environment as hack-proof as humanly possible, the chances are you have spent far more money than the data you’re trying to protect was worth, which in security equates to career suicide. Instead, you are supposed to base your security posture on the only thing that matters; a business need, then maintain your security program with an on-going cycle of test > fix > test again.

Unfortunately what happens in the event of a breach is that you are told what was broken and how to fix it from a technical perspective. This is analogous to putting a plaster / band-aid on a gaping wound. You’re not actually fixing anything. A forensics investigation, instead of being seen as the perfect opportunity to re-examine the underlying security program, is seen as an embarrassment to be swept under the carpet as soon as possible. Sadly, valuable lessons are lost, and the organisation in question remains clearly in the sights of the attackers.

For example, let's say a breach was caused by an un-patched server. The first thing you do is fix the server and get it back online, but all you have done is fix the symptom, not the underlying cause:

  1. How did you not KNOW your system was vulnerable? – Do you not have vulnerability scanning and penetration testing as an intrinsic part of a vulnerability management program?
  2. How did you not know your system wasn't patched? – Are patch management and an on-going review of the external threat landscape not also part of your vulnerability management program?
  3. Did the breach automatically trigger a deep-dive examination of your configuration standards to ensure that your base image was adjusted accordingly?
  4. Did you fix EVERY ‘like’ system or just the ones that were part of the breach?
  5. Did your policy and procedure review exercise make ALL necessary adjustments in light of the breach, ensuring that individual accountability and the requisite security awareness training were updated accordingly?
  6. Were Incident Response, Disaster Recovery and Business Continuity Plans all updated to incorporate the lessons learned?

And perhaps the most important part of any security program: is the CEO finally paying attention? Ultimately this was their fault for not instilling a culture of security and individual responsibility, so if THIS doesn't change, nothing will.

If the answer is no to most of these, you didn't just fail to close the barn door after the horse bolted, you left the door wide open AND forgot to get your horse back!

Most breaches are not the result of a highly skilled and concerted attack, but of attackers taking advantage of systemic neglect on the part of the target organisation, i.e. MOST organisations with an Internet presence! Therefore, organisations that can work towards security from the policies up, and from the forensics report down, have a distinct advantage over those who do neither.

[Ed. Written in collaboration with Voodoo Technologies; Voodoo Technology, Ltd.]


Forget the Systems, Only the Data Matters

As a Director of a team of 28, I tried very hard to instil a culture of both self-reliance and innovation. This could be summarised by the phrase: "Don't come to me with problems, come to me with solutions." I have tried as much as possible to build that into my blog posts as well.

Not this time however, this one’s just me babbling.

My theory stems from the fact that there is no such thing as 100% secure. That with the right motivation, skill, and time, a hacker will get in. Anywhere. The hackers in question spend a significant amount of effort mapping the target systems to eventually find the weak spot(s), and because the environment rarely changes, their end goal is always achievable.

The analogy used most often in security is one of a castle. You build up many layers of defence (thick walls, moat, arrow-slits, battlements etc.) and your most precious possessions are held in the most secure room in the centre of it. However, because that castle can only change very slowly, a concerted attack will eventually result in the loss of the ‘crown jewels’.

All it takes is time.

However, all of these defences are really just a means to an end; it's the data itself that's the only thing that matters. The real problem therefore lies not so much in the systems, but in their predictability. Spending money and resources on more and more ways to protect the systems is just building higher walls. Eventually you have to stop, and eventually someone is going to break them down. And to take the analogy one stage further, the higher the walls, the more fragile they become (see Insecurity Through Technology).

So what can we do when the rising interest in privacy, and the ongoing train-wreck that is PCI, are causing a tidal wave of new products and services all claiming to be the missing link in your security program? Oddly enough (given my dislike of buzz-phrases), the only one that makes sense in the context of this blog is cloud-based services, where scalability, redundancy and resilience are generally built into the platform from the beginning. A system goes down and you plug in a new one.

But how about taking this one stage further? Don't just replace when something breaks; instead, change as a matter of course! From firewalls, to servers, to encryption, even as far as location, change something in your environment to negate as much of the hacker's reconnaissance as possible. For every benefit there will be at least one, or even several, reasons to keep things the same, but the benefits are extensive (a rough sketch of what routine rotation might look like follows the list below):

  1. Security – The entire premise of this blog; if you change things frequently, the bad guys are less able to keep up and the rewards become less and less worth the effort. Back to building your fence higher than your neighbour's.
  2. Simplicity – To even think about replacing a system outside of a disaster recovery scenario, everything you do has to be simple. There is no security without simplicity.
  3. Business Transformation / Competitive Advantage – I contend that in terms of competitive advantage in the Information Age, any head start will be closed in a matter of weeks / months, not years / decades. Any organisation that has the capability to quickly change aspects of their environment clearly has a thorough understanding of their business processes. Understanding is knowledge, the correct application of knowledge is wisdom, or in this case; appropriate transformation.
  4. Business Continuity – Most organisations have distinct gaps between their continuity needs, and their ability to meet them. Even if Incident Response and Disaster Recovery processes are tested annually, only an organisation that makes significant changes frequently has the well-honed skill-set to meet or exceed the continuity plan goals. Practice, in this case, can indeed make perfect. Perfect enough anyway.
  5. Innovation – Only from simple and well-known can innovation be truly effective. When you’re not worrying about how to keep things running and can focus on what else you could be doing with what you have, you are free to be either more creative, or recover quicker from your mistakes. Too often the inability to adjust begets the fear to even try.
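To illustrate the idea (and nothing more), here is a rough sketch of a weekly rotation driver. The rotation actions are hypothetical stubs, and a real implementation would call your own provisioning, key-management or cloud tooling:

```python
# A rough sketch only: the rotation actions below are hypothetical stubs; a real
# implementation would call your own provisioning / key-management tooling.
import datetime
import random

# Hypothetical catalogue of "change as a matter of course" actions.
ROTATION_ACTIONS = {
    "reimage_web_server": "Rebuild one web server from the known-good base image",
    "rotate_tls_keys": "Re-issue TLS certificates / rotate encryption keys",
    "refresh_firewall_rules": "Re-apply and re-review the firewall rule base",
    "failover_exercise": "Swap primary and standby for one service",
}

def perform(action: str) -> None:
    """Stub: in practice this would call provisioning or cloud APIs."""
    print(f"{datetime.date.today()}: performing '{ROTATION_ACTIONS[action]}'")

def weekly_rotation(seed: int) -> None:
    """Pick one action per week so the environment never sits still."""
    rng = random.Random(seed)              # deterministic per week, so it is auditable
    action = rng.choice(sorted(ROTATION_ACTIONS))
    perform(action)

if __name__ == "__main__":
    week_number = datetime.date.today().isocalendar()[1]
    weekly_rotation(seed=week_number)
```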

As I stated previously, there are probably more reasons that this theory is completely unsustainable than there are apparent benefits, but I don’t think that means it’s not worth a try. Humans tend to overcomplicate things and then get lost in the detail, but with simplicity comes the freedom to focus on what really matters; the data from which all of your knowledge springs.


PCI – Going Beyond the Standard: Part 22, Compliance Validation

This is often where an assessment starts going truly pear-shaped. Mostly because of assumptions, but a large chunk of what makes validation of compliance so difficult is the lack of mechanisms to make it anything other than a manual process. There is no requirement for automation in PCI, so going beyond the standard is very simple.

First, let’s start out with the worst assumption; that sampling is a right. It’s not, it’s a privilege, and one you have to earn. Until you can show your assessor that you have ALL of the following in place, sampling isn’t even an option:

  1. Formalised and Robust Configuration Management – Unless you can show that you have a VERY good handle on configuring 'like systems' in an identical fashion, sampling cannot be performed. All web servers, app servers, DB servers etc. must start out exactly the same. From installation from a known-good base image, to configuration of applications, to testing through change control prior to promotion into production, there is no room for ad hoc here.
  2. Centralised Management and Maintenance – You must be able to show your QSA that you have the ability to KEEP your like-systems identical, so if you have a centralised console where this can be displayed, so much the better. WSUS for Windows, or CiscoWorks for Cisco network devices, for example, can centrally display pretty much all you need to know about the systems: OS, version, patches and so on.
  3. Centralised Logging and Monitoring – An extension of centralised management: log baselines, like-system thresholds, incident response triggers etc. must be centralised, or every different monitoring process must be examined individually.

PCI does not require any of the three facets above to be centralised, not even logging, so if none of these things are in place, there is no sampling.

An operating system has 12 validation points (access control, password settings, logging and so on), applications have 9, and a DB has 7. In theory, a single system could therefore require 28 separate pieces of evidence to validate its compliance.

What if you had 100 of these systems? That's 2,800 individual pieces of evidence.

I have expounded many times the concept of security being simple, not easy, but simple. Well, there is no simple without centralisation, and it’s VERY difficult to achieve simple without some form of automation.

For PCI, a screen shot of the AV settings [for example] will suffice, but this can involve the collection of MANY screenshots. How much better to have a centralised management station that shows these settings across all systems in one place? The same applies to logging, access control mechanisms, FIM, running services / listening ports (i.e. configuration standards) and almost every other validation requirement at the system level.
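To make the point concrete, here is a minimal sketch of what that kind of centralised view could look like, assuming each host exports its key settings as a small JSON file to a central share. The directory, file layout and field names are illustrative assumptions, not any particular product's format:

```python
# A minimal sketch, assuming each host exports its settings (AV status, log
# forwarding, FIM agent state, etc.) as a small JSON file into a central share;
# the file layout and field names here are illustrative only.
import json
from pathlib import Path

EVIDENCE_DIR = Path("/srv/compliance/evidence")   # hypothetical central share
REQUIRED = {"av_enabled": True, "log_forwarding": True, "fim_agent": True}

def check_host(report: dict) -> list[str]:
    """Return the list of settings that deviate from the required baseline."""
    return [key for key, expected in REQUIRED.items()
            if report.get(key) != expected]

def main() -> None:
    # One line per host, instead of one screenshot per host per setting.
    for path in sorted(EVIDENCE_DIR.glob("*.json")):
        report = json.loads(path.read_text())
        failures = check_host(report)
        status = "OK" if not failures else f"FAIL ({', '.join(failures)})"
        print(f"{path.stem:<20} {status}")

if __name__ == "__main__":
    main()
```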

In the hundreds of assessments with which I have in some way been involved, I would estimate that between 25% and 40% of an assessment's effort is related to the gathering of validation evidence. That's in year one; it's actually greater in subsequent years, even though by then the effort to get to the point of validation SHOULD have been less.

But that’s really the point here, and the point of the SSC’s recent supplemental on staying compliant; if you treat validation of compliance as a project, you are wasting a significant amount of resources AND you are no closer to actually being secure.

Compliance requires no automation, but without it you have no continuous compliance validation. PCI requires no centralisation, but without that you simply cannot manage what you have efficiently and effectively.

Validation is easy when your security program is simple, because the management of your program is simple. Spending weeks collecting evidence for compliance could not be a more ridiculous use of your time.

Simple is cheaper, more efficient, easier to manage and measure, and above all, more secure. Validation of compliance falls out of the back of a security program done well, so work on that first.


PCI – Going Beyond the Standard: Part 18, File Integrity Monitoring (FIM)

With the exception of Role Based Access Control (RBAC), File Integrity Monitoring (FIM) is the only PCI requirement that achieves security in its purest form; prevention of, or alerts on, deviation from a known-good baseline.

Firewalls / routers (DSS Req. 1.x) are almost there, configuration standards (DSS Req. 2.x) are even closer, anti-virus (DSS Req. 5.x) is basically pointless, and logging (DSS Req. 10.x) could not be further off the mark. But FIM, assuming you have interpreted the requirements correctly, has real benefit when combined with the other requirements done well (especially configuration standards).

Unfortunately, even to this day, everyone associates Tripwire with the FIM requirement. If you can believe it, their NAME was included in version 1.0 of the PCI DSS! Of course, Tripwire licensing fees went through the roof as a result, and I've pretty much hated them for it ever since. Now they're jumping on the Security Information and Event Management (SIEM) bandwagon and doing as bad a job of it as everyone else.

PCI DSS v1.0 – "10.5.5 Use file integrity monitoring/change detection software (such a Tripwire) on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert)."

Couldn’t even spell “as” correctly, but I digress.

As in all DSS requirements, the first question you must ask yourself is; "What is the intent of…?" In this case, the intent of FIM is to ensure all of your critical files (both operating system and application) do not change without authorisation (i.e. outside of a known change control scenario). Basically it's seen by the SSC as a back-up for anti-virus (malware being the primary cause of unauthorised file changes), but in my opinion, the correct implementation of FIM, configuration standards, and baselined logging more or less negates AV altogether (see Annual Validation is Dead, it's Time for Continuous Compliance; Continuous Compliance Validation: Why The PCI DSS Will Always Fall Short; and PCI – Going Beyond the Standard: Part 10, Anti-Virus for a little more background).

PCI DSS Requirement 11.5 states; "11.5 Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorised modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly." so it should be clear already how to go above and beyond: perform the checks more frequently than weekly. For a start, weekly is ridiculous, a lot can happen in a week, but the requirement's very limited benefit is compounded by the fact that there is no guidance on WHAT changes you should be looking for.

FIMs can usually detect everything from file existence, size, permissions, hash values and so on, but these are only making checks against themselves from a previous 'run'. Therefore one of the best ways to go WAY above PCI minimums is to compare the files to a central database of known-good configs directly from the operating system vendor themselves. Microsoft has a database of the latest and greatest system files (DLLs, EXEs etc.) against which you can run comparisons, and it should be relatively simple to add the baselines from each application you install on top.
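As a rough illustration of the idea, here is a minimal sketch of such a baseline comparison, assuming the known-good hashes (whether from a vendor catalogue or your own gold image) have already been collected into a local JSON file. The monitored paths and file format are assumptions for illustration, not a replacement for a proper FIM agent:

```python
# A minimal sketch of baseline file-integrity checking; the baseline file format
# and monitored paths are assumptions for illustration only.
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("fim_baseline.json")           # {"<path>": "<sha256>", ...}
MONITORED = [Path("/etc"), Path("/usr/local/bin")]  # illustrative scope

def sha256(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def snapshot() -> dict[str, str]:
    """Hash every regular file under the monitored directories."""
    return {str(p): sha256(p)
            for root in MONITORED
            for p in root.rglob("*") if p.is_file()}

def compare(baseline: dict[str, str], current: dict[str, str]) -> None:
    for path, digest in current.items():
        if path not in baseline:
            print(f"NEW      {path}")
        elif baseline[path] != digest:
            print(f"CHANGED  {path}")
    for path in baseline.keys() - current.keys():
        print(f"MISSING  {path}")

if __name__ == "__main__":
    if BASELINE_FILE.exists():
        compare(json.loads(BASELINE_FILE.read_text()), snapshot())
    else:                                   # first run: record the known-good state
        BASELINE_FILE.write_text(json.dumps(snapshot(), indent=2))
```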

Now let's get REALLY crazy: what if you could then compare which files SHOULD be there as a result of a comprehensive configuration standard / hardening guide, and ensure that everything is as it should be? Against the CIS Security Benchmarks, for example? Some FIMs (or similar agents) can also check Windows registry and GPO settings, so you can not only make sure everything is configured correctly per your approved standard(s), you can also automatically report against a significant number of other validation requirements (password complexity, access groups, log settings, time sync settings and so on).

And of course, the best way to blow PCI minimums out of the water is to compare a system against baselines stored centrally against each asset. These are system x’s available services, listening ports, permitted established connections and so on. Now you not only have configuration management, you have both policy and compliance validation built in automatically. Not once a year, but all day every day.
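A minimal sketch of that kind of per-asset check is below, using the third-party psutil package to list the local listening TCP ports and compare them against a hypothetical approved baseline taken from the asset's configuration standard:

```python
# A minimal sketch using the third-party psutil package; the per-asset baseline
# of permitted listening ports is an assumption, stored here as a simple set.
import psutil

# Hypothetical approved baseline for this asset (from its configuration standard).
APPROVED_LISTENING_PORTS = {22, 443, 8443}

def listening_ports() -> set[int]:
    """Collect every locally listening TCP port."""
    return {conn.laddr.port
            for conn in psutil.net_connections(kind="inet")
            if conn.status == psutil.CONN_LISTEN}

def main() -> None:
    current = listening_ports()
    unexpected = current - APPROVED_LISTENING_PORTS
    missing = APPROVED_LISTENING_PORTS - current
    if not unexpected and not missing:
        print("Listening ports match the approved configuration standard.")
    for port in sorted(unexpected):
        print(f"UNEXPECTED listener on port {port}")
    for port in sorted(missing):
        print(f"Expected service on port {port} is not listening")

if __name__ == "__main__":
    main()
```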

OK, I went WAY too far there, but hopefully you get the point. FIM is not just something you throw on a system because PCI demands it, you do it because it’s integral to how you do real security properly.

Yes Tripwire can do more than PCI asks for, but now you have just another management station to configure, maintain and monitor. FIM done well must integrate with all your other systems to have the necessary context, and FIM is only relevant if you have configured your systems correctly in the first place.

Finally, FIM should NEVER be seen as a stand-alone, end-point product, it can and should be a lot more than that.

PCI – Going Beyond the Standard: Part 17, Vulnerability Scanning & Penetration Testing

Far too often, security is seen as a project, especially if PCI compliance is the goal. The requirements for vulnerability scanning and penetration testing are therefore seen as just another tick-in-a-box and their significant benefits lost.

External vulnerability scanning is the only requirement which must be outsourced and run by an approved scanning vendor (ASV, list here); the other requirements (internal vulnerability scanning, external penetration testing and internal penetration testing) can be run by internal resources IF, and ONLY if, you can adequately demonstrate the requisite skill-sets in-house.

Of course, in order to save money, it is very tempting to skate by on the bare minimum, and unfortunately some security vendors (including QSAs) will allow you to do just that. Which is a shame, almost to the point of being irresponsible, as no other requirements give you a truer indication of your actual security posture than these.

Think of it this way; the bad guys use the EXACT same techniques to break into your systems that the good guys use to tell you what’s wrong. The ONLY differences between a hacker and an ethical hacker are intent and moral code, the skill-sets and mind-sets are the same.

Between vulnerability scanning and penetration testing, you have roughly 50% of your vulnerability management program sewn up. Patch management, risk management etc. make up the rest. However, the trick that's almost always done poorly – if at all – is the integration of vulnerability management with asset management and change control. Any change to your environment should have appropriate vulnerability management processes around it, from a quick directed scan to a full-blown credentialed penetration test, and all should be in line with agreed configuration standards (as defined against each asset).
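As a rough sketch of that integration, the snippet below shows a directed scan being kicked off from a change ticket, assuming nmap is installed locally; the ticket identifier, target address and report naming are purely illustrative:

```python
# A minimal sketch, assuming nmap is installed and that change tickets carry the
# affected asset's address; the ticket structure and output naming are illustrative.
import subprocess
from datetime import datetime

def directed_scan(ticket_id: str, target: str) -> str:
    """Run a service-detection scan against the asset touched by a change ticket."""
    report = f"scan_{ticket_id}_{datetime.now():%Y%m%d}.xml"
    subprocess.run(
        ["nmap", "-sV",              # service/version detection
         "-oX", report,              # XML report attached to the change record
         target],
        check=True,
    )
    return report

if __name__ == "__main__":
    # Example: change CHG-1042 modified the host at 10.0.5.20
    print(directed_scan("CHG-1042", "10.0.5.20"))
```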

Going above and beyond PCI in scanning and pen. testing is relatively simple, but it’s not cheap in terms of resource cost. It also demands a maturity of process and a significant shift in culture to accept the ‘overhead’, but it’s more than worth it:

1. External Vulnerability Scanning – No choice but to use an ASV, but you should choose a vendor that provides 2 things at either no, or little, extra cost; Monthly scans (PCI requires quarterly), and unlimited directed scans (against single IPs, or subnets). Performed correctly, monthly scans and directed scans initiated by change control processes go significantly above and beyond. Note: For PCI do NOT open your external firewall/routing devices to your ASV’s IP addresses. Why would you decrease your security posture to test your security posture? Just run one scan for PCI, and THEN open your firewalls so that scanners can do a more thorough job. Keep these profiles separate, one for PCI only, one for your entire business.

2. Internal Vulnerability Scanning – You can do this yourself, and I've lost count of the number of clients running basic installations of Nessus, but unless you have significant expertise in how to configure it AND understand the results, don't do it. For a start, any good QSA will fail you for lack of expertise, but do you really have the time to keep it up to date? Again, running internal scans monthly and as directed by change control goes above and beyond. Having two scan profiles is also a nice feature, but if the scan engine is capable of doing more than just rattle the windows (in the ubiquitous house analogy) and can actually perform a deeper scan / reconnaissance, then you have knocked this one out of the park. PCI compliance is never security, so do internal scanning as far above PCI minimums as you can afford.

3. External Penetration Testing – PCI requires that you attempt to break in (without breaking) via your Internet-facing presence, but poor guidance on what the test should consist of, combined with enormous price-compression in pen. testing services, means that this effort is usually more automated than I would consider appropriate. A pen. test is supposed to be a person with the necessary skills trying for days on end to discover ways into your systems. This is rarely the case now, but is EXACTLY what you should be doing. The Internet is where most breaches originate (it used to be internal), so having a VERY robust security posture from the-outside-in is of paramount importance. Do NOT skimp on this one.

PCI calls for annual pen. tests, and to go above and beyond you need to perform these more frequently. This should not be an enormous cost, and most pen. test vendors can provide an infinitely scalable service based on scope and call-off days.

4. Internal Penetration Testing – Same premise as the external pen. test, but this time from the inside. PCI requires that this test simulate an attacker ‘plugging in’ where the admins sit and seeing what they can do from scratch. Above and beyond is therefore very simple; give the pen. tester FULL access to the environment, as well as credentials to go even further where appropriate. Like scanning, you have one test for PCI, then another test for your business.

There will be times when a simple vuln. scan of a system that has undergone change is not sufficient, so having a directed pen. test process available for critical business changes is very important.

None of the above processes should be a stand-alone concept; they should be very tightly integrated with risk assessment, change control and asset management processes to be truly effective. Vulnerability management represents the end of each cycle of your security program (Plan > Do > Check > Act > Repeat), and ensures that your security posture always remains in line with your business goals.

It bears repeating: do NOT skimp on this requirement; you will pay far more when you have to clean up the mess after a breach.