OWASP Top 10 2017: Logging & Monitoring Makes the Hall of Shame

Fact #1: There is no effective incident response without logging and monitoring;

Fact #2: There is no effective disaster recovery without incident response; and

Fact #3: There is no effective business continuity without disaster recovery.

Therefore logging and monitoring should be a fundamental aspect of every security program, regardless of organisation size. So why is it performed so universally poorly? Don’t organisations want to stay in business?!

It’s not like EVERY STANDARD ON THE PLANET has it as a prerequisite! Well, except for these obscure ones:

  • ISO 27001 – A.12.4 Logging and monitoring
  • COBIT – F.10 Monitoring and Alert Services for Security-related Events
  • NIST – Anomalies and Events (DE.AE)
  • PCI DSS – Requirement 10: Track and monitor all access to network resources and cardholder data
  • …and so on

So you can imagine my surprise and delight when OWASP – more commonly known for coding vulnerabilities – singled this out as one of their Top 10 for 2017. Yes, it barely snuck in at number 10, but there it is, finally in the light of day.

Unfortunately, OWASP isn’t exactly up there with the NISTs of the world, so the importance of this is probably lost on most. I mean, the PCI DSS uses the OWASP Top 10 [loosely] as one of its “industry accepted best practice” sources, which is actually why a lot of people have heard of OWASP in the first place.

So now what? What difference is this going to make?

Well, very little, probably. If you don’t understand by now just how important centralised logging and monitoring is, you probably never will. And if you’re in a position where this makes a difference (i.e. you work in technology or cybersecurity), the only time your organisation will care is when it suffers a loss. Then I’m sure you’ll start to care too, as you’re updating your CV/resume.

Honestly, I really don’t know where I’m going with this blog. It was either write about this or the bloody GDPR again. But it’s really the privacy regulations that are beginning to drive things like this forward. Record keeping, data breach notifications, accountability and so on all have an enormous impact on how we will be running our businesses, and logging is intrinsic to them all.

In my consulting practice I very rarely use the word ‘recommend’, and I try never to mention the names of security control vendors except as examples. So while the due diligence is yours in terms of finding the right logging solution for your organisation’s needs, I HIGHLY recommend that you start looking.

I’m sure there are some out there, but I’ve yet to see one argument for not performing logging and monitoring, and I’m willing to bet there are no valid ones. The problem, like with most things in security these days, is that the name just isn’t sexy enough. Perhaps if we wrapped it in a brand new acronym like ‘Episode Reply & Adversity Restoration (ERAR)’, as I did in Froud on Fraud’s Top 10 Cybersecurity Technologies to Implement in 2017, it would get more attention?

Whatever it takes…

[If you liked this article, please share! Want more like it, subscribe!]

Been Breached? The Worst is Yet to Come, Unless…

The information security sector is rife with negativity and pronouncements of doomsday, and while the title is no better, this blog is not meant to scare, but to provide an alternative view of the worst case scenario: a data breach and the resulting forensics investigation. The fact remains that if your data is online, and someone has the necessary skill-set and wants it badly enough, they are going to get it. So the sooner you prepare yourself for the inevitable, the better you will be able to prevent a security event from becoming a business-crippling disaster.

By the time you make your environment as hack-proof as humanly possible, the chances are you have spent far more money than the data you’re trying to protect was worth, which in security equates to career suicide. Instead, you are supposed to base your security posture on the only thing that matters: a business need, then maintain your security program with an ongoing cycle of test > fix > test again.

Unfortunately what happens in the event of a breach is that you are told what was broken and how to fix it from a technical perspective. This is analogous to putting a plaster / band-aid on a gaping wound. You’re not actually fixing anything. A forensics investigation, instead of being seen as the perfect opportunity to re-examine the underlying security program, is seen as an embarrassment to be swept under the carpet as soon as possible. Sadly, valuable lessons are lost, and the organisation in question remains clearly in the sights of the attackers.

For example, let’s say a breach was caused by an unpatched server. The first thing you do is fix the server and get it back online, but all you have done is fix the symptom, not the underlying cause:

  1. How did you not KNOW your system was vulnerable? – Do you not have vulnerability scanning and penetration testing as an intrinsic part of a vulnerability management program?
  2. How did you not know your system wasn’t patched? – Are patch management and ongoing review of the external threat landscape not also part of your vulnerability management program?
  3. Did the breach automatically trigger a deep-dive examination of your configuration standards to ensure that your base image was adjusted accordingly?
  4. Did you fix EVERY ‘like’ system, or just the ones that were part of the breach? (A simple sketch of this check follows this list.)
  5. Did your policy and procedure review exercise make ALL necessary adjustments in light of the breach, ensuring that individual accountability and the requisite security awareness training were updated?
  6. Were Incident Response, Disaster Recovery and Business Continuity Plans all updated to incorporate the lessons learned?
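
To make question 4 concrete, here is a minimal sketch of the kind of check I mean, assuming nothing more than a CSV export from your asset inventory (the file name, column names, role and version values are all made up for illustration):

```python
import csv

# Hypothetical inputs: an asset inventory export and the version that
# contains the fix for the exploited vulnerability.
INVENTORY_CSV = "asset_inventory.csv"   # columns: hostname, role, installed_version
FIXED_VERSION = "2.4.58"                # illustrative value only
AFFECTED_ROLE = "web-server"            # the 'like' systems in question

def version_tuple(version):
    """Turn '2.4.58' into (2, 4, 58) for a simple numeric comparison."""
    return tuple(int(part) for part in version.split("."))

still_vulnerable = []
with open(INVENTORY_CSV, newline="") as inventory:
    for row in csv.DictReader(inventory):
        if row["role"] != AFFECTED_ROLE:
            continue
        if version_tuple(row["installed_version"]) < version_tuple(FIXED_VERSION):
            still_vulnerable.append(row["hostname"])

if still_vulnerable:
    print("Like-systems still running a vulnerable version:")
    for host in sorted(still_vulnerable):
        print(f"  {host}")
else:
    print("All like-systems are at or above the fixed version.")
```

If your asset inventory can’t even produce that export, that’s a finding in itself.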

And perhaps the most important question of any security program: is the CEO finally paying attention? Ultimately this was their fault for not instilling a culture of security and individual responsibility, so if THIS doesn’t change, nothing will.

If the answer is no to most of these, you didn’t just fail to close the barn door after the horse bolted, you left the door wide open AND forgot to get your horse back!

Most breaches are not the result of a highly skilled and concerted attack, but of attackers taking advantage of systemic neglect on the part of the target organisation, i.e. MOST organisations with an Internet presence! Therefore, organisations that can work towards security from the policies up, and from the forensics report down, have a distinct advantage over those who do neither.

[Ed. Written in collaboration with Voodoo Technology, Ltd.]

Forget the Systems, Only the Data Matters

As a Director of a team of 28, I tried very hard to instil a culture of both self-reliance and innovation. This could be summarised by the phrase: “Don’t come to me with problems, come to me with solutions.” I have tried as much as possible to build that into my blog posts as well.

Not this time, however; this one’s just me babbling.

My theory stems from the fact that there is no such thing as 100% secure. That with the right motivation, skill, and time, a hacker will get in. Anywhere. The hackers in question spend a significant amount of effort mapping the target systems to eventually find the weak spot(s), and because the environment rarely changes, their end goal is always achievable.

The analogy used most often in security is one of a castle. You build up many layers of defence (thick walls, moat, arrow-slits, battlements etc.) and your most precious possessions are held in the most secure room in the centre of it. However, because that castle can only change very slowly, a concerted attack will eventually result in the loss of the ‘crown jewels’.

All it takes is time.

However, all of these defences are really just a means to an end; it’s the data itself that’s the only thing that matters. The real problem therefore lies not so much in the systems, but in their predictability. Spending money and resources on more and more ways to protect the systems is just building higher walls. Eventually you have to stop, and eventually someone is going to break them down. And to take the analogy one stage further, the higher the walls, the more fragile they become (see Insecurity Through Technology).

So what can we do when the rising interest in privacy, and the ongoing train-wreck that is PCI, is causing a tidal wave of new products and services all claiming to be the missing link in your security program? Oddly enough (given my dislike of buzz-phrases), the only one that makes sense in the context of this blog is cloud-based services, where scalability, redundancy and resilience are generally built into the platform from the beginning. A system goes down and you plug in a new one.

But how about taking this one stage further? Don’t just replace when something breaks, instead change as a matter of course! From firewalls, to servers, to encryption, even as far as location, change something in your environment to negate as much of the hacker’s reconnaissance as possible. For every benefit, there will be at least one, or even several, reasons to keep things the same, but the benefits are extensive:

  1. Security – The entire premise of this blog: if you change things frequently, the bad guys are less able to keep up and the rewards become less and less worth the effort. Back to building your fence higher than your neighbour’s.
  2. Simplicity – To even think about replacing a system outside of a disaster recovery scenario, everything you do has to be simple. There is no security without simplicity.
  3. Business Transformation / Competitive Advantage – I contend that in terms of competitive advantage in the Information Age, any head start will be closed in a matter of weeks / months, not years / decades. Any organisation that has the capability to quickly change aspects of their environment clearly has a thorough understanding of their business processes. Understanding is knowledge, the correct application of knowledge is wisdom, or in this case, appropriate transformation.
  4. Business Continuity – Most organisations have distinct gaps between their continuity needs, and their ability to meet them. Even if Incident Response and Disaster Recovery processes are tested annually, only an organisation that makes significant changes frequently has the well-honed skill-set to meet or exceed the continuity plan goals. Practice, in this case, can indeed make perfect. Perfect enough anyway.
  5. Innovation – Only from simple and well-known can innovation be truly effective. When you’re not worrying about how to keep things running and can focus on what else you could be doing with what you have, you are free to be either more creative, or recover quicker from your mistakes. Too often the inability to adjust begets the fear to even try.

As I stated previously, there are probably more reasons that this theory is completely unsustainable than there are apparent benefits, but I don’t think that means it’s not worth a try. Humans tend to overcomplicate things and then get lost in the detail, but with simplicity comes the freedom to focus on what really matters; the data from which all of your knowledge springs.

[If you liked this article, please share! Want more like it, subscribe!]

PCI – Going Beyond the Standard: Part 22, Compliance Validation

This is often where an assessment starts going truly pear-shaped. Mostly because of assumptions, but a large chunk of what makes validation of compliance so difficult is the lack of mechanisms to make it anything other than a manual process. There is no requirement for automation in PCI, so going beyond the standard is very simple.

First, let’s start out with the worst assumption: that sampling is a right. It’s not, it’s a privilege, and one you have to earn. Until you can show your assessor that you have ALL of the following in place, sampling isn’t even an option:

  1. Formalised and Robust Configuration Management – Unless you can show that you have a VERY good handle on configuring ‘like systems’ in an identical fashion, sampling cannot be performed. All web servers, app servers, DB servers etc. must start out exactly the same. From installation from a known-good base image, to configuration of applications, to testing through change control prior to promotion into production, there is no room for ad hoc here (a simple like-system check is sketched after this list).
  2. Centralised Management and Maintenance – You must be able to show your QSA that you have the ability to KEEP your like-systems identical, so if you have a centralised console where this can be displayed, so much the better. WSUS for Windows, or CiscoWorks for Cisco network devices, for example, can centrally display pretty much all you need to know about the systems: OS, version, patches and so on.
  3. Centralised Logging and Monitoring – An extension of centralised management: log baselines, like-system thresholds, incident response triggers and so on must be centralised, or every different monitoring process must be examined individually.
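
To illustrate points 1 and 2, here is a minimal sketch of the like-system check referenced above. It assumes you already pull each host’s configuration files to a central store overnight (the directory layout and file name are invented for the example); any host whose config hash differs from the rest of the fleet is drift that needs explaining before sampling is even a conversation:

```python
import hashlib
import json
from collections import Counter
from pathlib import Path

# Hypothetical layout: configs pulled nightly to a central store,
# one directory per host, e.g. collected_configs/web01/httpd.conf
CONFIG_STORE = Path("collected_configs")
CONFIG_NAME = "httpd.conf"   # example file; repeat for every file that matters

def config_hash(path):
    """SHA-256 of a collected configuration file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

hashes = {
    host_dir.name: config_hash(host_dir / CONFIG_NAME)
    for host_dir in CONFIG_STORE.iterdir()
    if (host_dir / CONFIG_NAME).is_file()
}

if not hashes:
    raise SystemExit("No collected configs found - nothing to compare.")

# Treat the most common hash as the fleet's intended state; anything
# else is drift that needs explaining (and fixing) before you sample.
baseline, _ = Counter(hashes.values()).most_common(1)[0]
drifted = sorted(host for host, digest in hashes.items() if digest != baseline)

print(json.dumps({"baseline_sha256": baseline, "drifted_hosts": drifted}, indent=2))
```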

PCI does not require any of the three facets above to be centralised, not even logging, so if none of these things is in place, there is no sampling.

An operating system has 12 validation points (access control, password settings, logging and so on), applications have 9, and a DB has 7. In theory, a single system could therefore require 28 separate pieces of evidence to validate its compliance.

What if you had 100 of these systems? That’s 2,800 separate pieces of evidence.

I have expounded many times the concept of security being simple, not easy, but simple. Well, there is no simple without centralisation, and it’s VERY difficult to achieve simple without some form of automation.

For PCI, a screenshot of the AV settings [for example] will suffice, but this can involve the collection of MANY screenshots. How much better to have a centralised management station that shows these settings against all systems in one place? The same applies to logging, access control mechanisms, FIM, running services / listening ports (i.e. configuration standards) and almost every other validation requirement at the system level.
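
As a sketch of what ‘one place’ could look like without buying anything, the snippet below uses pywinrm to ask each Windows host for its Defender status and writes a single CSV your assessor can read in one sitting. The host names, credentials and output file are placeholders, and if you run a different AV product you would swap in that product’s own query:

```python
import csv
import json

import winrm  # pip install pywinrm

# Placeholder values - substitute your own host list and a read-only account.
HOSTS = ["web01.example.internal", "web02.example.internal"]
USER, PASSWORD = "svc-evidence", "change-me"

# Windows Defender example; a different AV product needs its own query.
PS_QUERY = (
    "Get-MpComputerStatus | "
    "Select-Object AntivirusEnabled, RealTimeProtectionEnabled, AntivirusSignatureLastUpdated | "
    "ConvertTo-Json"
)

with open("av_evidence.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["host", "av_enabled", "realtime_protection", "signatures_updated"])
    for host in HOSTS:
        session = winrm.Session(host, auth=(USER, PASSWORD), transport="ntlm")
        result = session.run_ps(PS_QUERY)
        status = json.loads(result.std_out)
        writer.writerow([
            host,
            status.get("AntivirusEnabled"),
            status.get("RealTimeProtectionEnabled"),
            status.get("AntivirusSignatureLastUpdated"),
        ])
```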

In the hundreds of assessments with which I have in some way been involved, I would estimate that between 25% and 40% of an assessment’s effort is related to the gathering of validation evidence. That’s in year one; it’s actually greater in subsequent years, when the effort to get to the point of validation SHOULD have been less.

But that’s really the point here, and the point of the SSC’s recent supplemental on staying compliant: if you treat validation of compliance as a project, you are wasting a significant amount of resources AND you are no closer to actually being secure.

Compliance requires no automation, but without it you have no continuous compliance validation. PCI requires no centralisation, but without that you simply cannot manage what you have efficiently and effectively.

Validation is easy when your security program is simple, because the management of your program is simple. Spending weeks collecting evidence for compliance could not be a more ridiculous use of your time.

Simple is cheaper, more efficient, easier to manage and measure, and above all, more secure. Validation of compliance falls out of the back of a security program done well, so work on that first.

PCI – Going Beyond the Standard: Part 18, File Integrity Monitoring (FIM)

With the exception of Role Based Access Control (RBAC), File Integrity Monitoring (FIM) is the only PCI requirement that achieves security in its purest form: prevention of, or alerts on, deviation from a known-good baseline.

Firewalls/routers (DSS Req. 1.x) are almost there, configuration standards (DSS Req. 2.x) are even closer, anti-virus (DSS Req. 5.x) is basically pointless, and logging (DSS Req. 10.x) could not be further off the mark. But FIM, assuming you have interpreted the requirements correctly, has real benefit when combined with the other requirements done well (especially configurations).

Unfortunately, even to this day, everyone associates Tripwire with the FIM requirement. If you can believe it, their NAME was included in version 1.0 of the PCI DSS! Of course, Tripwire licensing fees went through the roof as a result, and I’ve pretty much hated them for it ever since. Now they’re jumping on the Security Information and Event Management (SIEM) bandwagon and doing as bad a job of it as everyone else.

PCI DSS v1.0 – “10.5.5 Use file integrity monitoring/change detection software (such a Tripwire) on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).”

Couldn’t even spell “as” correctly, but I digress.

As with all DSS requirements, the first question you must ask yourself is, “What is the intent of…?” In this case, the intent of FIM is to ensure all of your critical files (both operating system and application) do not change without authorisation (i.e. outside of a known change control scenario). Basically it’s seen by the SSC as a back-up for anti-virus (malware being the primary cause of unauthorised file changes), but in my opinion, the correct implementation of FIM, configuration standards, and baselined logging more or less negates AV altogether (see Annual Validation is Dead, it’s Time for Continuous Compliance; Continuous Compliance Validation: Why The PCI DSS Will Always Fall Short; and PCI – Going Beyond the Standard: Part 10, Anti-Virus for a little more background).

PCI DSS Requirement 11.5 states: “11.5 Deploy a change-detection mechanism (for example, file-integrity monitoring tools) to alert personnel to unauthorised modification of critical system files, configuration files, or content files; and configure the software to perform critical file comparisons at least weekly.” So it should already be clear how to go above and beyond: perform the checks more frequently than weekly. For a start, weekly is ridiculous, a lot can happen in a week, but the requirement’s very limited benefit is compounded by the fact that there is no guidance on WHAT changes you should be looking for.

FIMs can usually detect everything from file existence, to size, permissions, hash values and so on, but these checks are only made against the files’ own state from a previous ‘run’. Therefore one of the best ways to go WAY above PCI minimums is to compare the files to a central database of known-good versions directly from the operating system vendor themselves. Microsoft, for example, has a database of the latest and greatest system files (DLLs, EXEs etc.) against which you can run comparisons, and it should be relatively simple to add the baselines from each application you install on top.
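
For the sake of illustration, here is the basic baseline mechanic described above in a few lines of Python; a real FIM product does far more, and going beyond the minimum would mean swapping the locally generated baseline for one built from the vendor’s known-good files. The watched directory and baseline location are placeholders:

```python
import hashlib
import json
from pathlib import Path

WATCHED_DIR = Path("/etc")                 # placeholder: point at your critical files
BASELINE_FILE = Path("fim_baseline.json")  # store this somewhere the host cannot alter it

def snapshot(root):
    """Map every regular file under root to its SHA-256 digest."""
    return {
        str(path): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

current = snapshot(WATCHED_DIR)  # run with enough privilege to read the files

if not BASELINE_FILE.exists():
    # First run: record the known-good state, ideally taken from a clean
    # build of your approved configuration standard.
    BASELINE_FILE.write_text(json.dumps(current, indent=2))
    print(f"Baseline created with {len(current)} files.")
else:
    baseline = json.loads(BASELINE_FILE.read_text())
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    modified = sorted(f for f in current.keys() & baseline.keys() if current[f] != baseline[f])
    if added or removed or modified:
        # In real life this raises an alert into your monitoring, not a print.
        print(json.dumps({"added": added, "removed": removed, "modified": modified}, indent=2))
    else:
        print("No deviations from the approved baseline.")
```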

Now let’s get REALLY crazy: what if you could then compare what files SHOULD be there as a result of a comprehensive configuration standard / hardening guide, and ensure that everything is as it should be? Against the CIS Security Benchmarks, for example? Some FIMs (or similar agents) can also check Windows registry and GPO settings, so you can not only make sure everything is configured correctly per your approved standard(s), you can also automatically report against a significant number of other validation requirements (password complexity, access groups, log settings, time synch settings and so on).

And of course, the best way to blow PCI minimums out of the water is to compare a system against baselines stored centrally against each asset: system X’s available services, listening ports, permitted established connections and so on. Now you not only have configuration management, you have both policy and compliance validation built in automatically. Not once a year, but all day, every day.
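
As a minimal sketch of that last idea (the baseline file format and host name are invented for the example), you could compare what is actually listening on a system against the centrally stored list of what SHOULD be listening:

```python
import json

import psutil  # pip install psutil

# Hypothetical per-asset baseline, pulled from your central store, e.g.:
# {"hostname": "web01", "approved_listening_ports": [22, 443]}
with open("web01_baseline.json") as f:
    baseline = json.load(f)

approved = set(baseline["approved_listening_ports"])

# Every TCP socket currently in the LISTEN state on this host
# (may need elevated privileges on some platforms).
actual = {
    conn.laddr.port
    for conn in psutil.net_connections(kind="tcp")
    if conn.status == psutil.CONN_LISTEN
}

print("Listening but not approved:", sorted(actual - approved))   # candidate policy violations
print("Approved but not listening:", sorted(approved - actual))   # possibly a broken service
```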

OK, I went WAY too far there, but hopefully you get the point. FIM is not just something you throw on a system because PCI demands it, you do it because it’s integral to how you do real security properly.

Yes, Tripwire can do more than PCI asks for, but now you have yet another management station to configure, maintain and monitor. FIM done well must integrate with all your other systems to have the necessary context, and FIM is only relevant if you have configured your systems correctly in the first place.

Finally, FIM should NEVER be seen as a stand-alone, end-point product; it can and should be a lot more than that.