OWASP Top 10 2017: Logging & Monitoring Makes the Hall of Shame

Fact #1: There is no effective incident response without logging and monitoring;

Fact #2: There is no effective disaster recovery without incident response; and

Fact #3: There is no effective business continuity without disaster recovery.

Therefore logging and monitoring should be a fundamental aspect of every security program, regardless of organisation size. So why is it performed so universally poorly? Don’t organisations want to stay in business?!

It’s not like EVERY STANDARD ON THE PLANET has it as a prerequisite! Well, except for these obscure ones:

  • ISO 27001 – A.12.4 Logging and monitoring
  • COBIT – F.10 Monitoring and Alert Services for Security-related Events
  • NIST – Anomalies and Events (DE.AE)
  • PCI DSS – Requirement 10: Track and monitor all access to network resources and cardholder data
  • …and so on

So you can imagine my surprise and delight when OWASP – more commonly known for coding vulnerabilities – singled this out as one of their Top 10 for 2017. Yes, it barely snuck in at number 10, but there it is, finally in the light of day.

Unfortunately, OWASP isn’t exactly up there with the NISTs of the world, so the importance of this is probably lost on most. I mean, the DSS uses [loosely] the OWASP Top 10 as one of its “industry accepted best practice” providers, which is actually why a lot of people have even heard of OWASP in the first place.

So now what? What difference is this going to make?

Well, very little probably. If you don’t understand by now just how important centralised logging and monitoring is, you probably never will. If you’re in a position where this makes a difference (you’re in technology or cybersecurity), then the only time your organisation will care is when your business suffers a loss. Then I’m sure you’ll start to care as you’re updating your CV/resume.

Honestly, I really don’t know where I’m going with this blog. It was either write about this or the bloody GDPR again. But it’s really the privacy regulations that are beginning to drive things like this forward. Record keeping, data breach notifications, accountability and so on all have an enormous impact on how we will be running our businesses, and logging is intrinsic to them all.

In my consulting practice I very rarely use the word ‘recommend’, and I try never to mention the names of security control vendors except as examples. So while the due diligence is yours in terms of finding the right logging solution for your organisation’s needs, I HIGHLY recommend that you start looking.
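To make ‘centralised’ concrete while you do that due diligence, here is a minimal sketch – an illustration only, with a made-up collector hostname and application name – of an application shipping its security-relevant events off-host to a central syslog/SIEM collector using nothing but the Python standard library. The point is that the evidence ends up somewhere an attacker on the box cannot quietly delete it.

    import logging
    import logging.handlers

    # Hypothetical central collector – swap in your own syslog/SIEM endpoint.
    CENTRAL_COLLECTOR = ("logs.example.internal", 514)

    logger = logging.getLogger("payments-app")
    logger.setLevel(logging.INFO)

    # Ship events off-host so a compromised server (or a failed disk) cannot erase the evidence.
    handler = logging.handlers.SysLogHandler(address=CENTRAL_COLLECTOR)
    handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)

    logger.info("login succeeded: user_id=%s src_ip=%s", "12345", "203.0.113.7")
    logger.warning("5 failed logins in 60s: user_id=%s src_ip=%s", "67890", "198.51.100.9")

Shipping the events is the easy half; the monitoring half means someone, or something, actually watches what arrives.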

I’m sure there are some out there, but I’ve yet to see one argument for not performing logging and monitoring, and I’m willing to bet there are no valid ones. The problem, like with most things in security these days, is that the name is just not sexy enough. Perhaps if we wrapped it up in a brand new acronym like ‘Episode Reply & Adversity Restoration (ERAR)’, as I did in Froud on Fraud’s Top 10 Cybersecurity Technologies to Implement in 2017, it would get more attention?

Whatever it takes…



WPA2 / KRACK, and the Coming Storm of Marketing BS!

This is going to be my shortest blog ever, because basically it’s just a warning: IGNORE THE MARKETING BULLSHIT AND THE DOOMSDAY JOURNALISTS!

Every time there is an outbreak of malware, or a new vulnerability exposed, or a protocol deprecated, the marketing departments of every security vendor go into overdrive. Their only goal: to make more money. Not to help, not to provide sound advice so that people don’t make bad decisions based on FUD, and not even because they know what the Hell they’re talking about.

Just money.

And the newspapers do what they do best: create panic with little to no understanding of the subject.

Yes, WPA2 has likely been broken, but because of the integrity of the researcher who discovered it we won’t have any information about it until later today. Which means we currently have no idea of the impact.

Apparently this is the guy you need to be watching: http://www.mathyvanhoef.com/

So here is what I would be doing right now if I were you:

  1. Determine what the impact would be on your organisation if WPA2 were truly broken;
  2. Update EVERY relevant device, as by now most of the bigger manufacturers should have a patch or a workaround;
  3. Tell your entire employee base NOT to panic, but they too should update their home computers (anti-malware etc.), mobile devices and home routers;
  4. Update your incident response plan to cover any issues.

The one thing you should NOT do is be part of the problem! Don’t spread rumours, spread facts, and be part of the SOLUTION! Share this blog if you want, or at least articles like it.

The security industry is rapidly becoming a bunch of used car salesmen; let’s each do our part to get THIS one right.


Been Breached? The Worst is Yet to Come, Unless…

The information security sector is rife with negativity and pronouncements of doomsday, and while this title is no better, this blog is not meant to scare, but to provide an alternative view of the worst case scenario: a data breach and resulting forensics investigation. The fact remains that if your data is online, and someone has the necessary skill-set and wants it badly enough, they are going to get it. So the sooner you prepare yourself for the inevitable, the better you will be able to prevent a security event from becoming a business-crippling disaster.

By the time you make your environment as hack-proof as humanly possible, the chances are you have spent far more money than the data you’re trying to protect was worth, which in security equates to career suicide. Instead, you are supposed to base your security posture on the only thing that matters: a business need, then maintain your security program with an on-going cycle of test > fix > test again.

Unfortunately what happens in the event of a breach is that you are told what was broken and how to fix it from a technical perspective. This is analogous to putting a plaster / band-aid on a gaping wound. You’re not actually fixing anything. A forensics investigation, instead of being seen as the perfect opportunity to re-examine the underlying security program, is seen as an embarrassment to be swept under the carpet as soon as possible. Sadly, valuable lessons are lost, and the organisation in question remains clearly in the sights of the attackers.

For example, let’s say a breach was caused by an un-patched server. The first thing you do is fix the server and get it back online, but all you have done is fix the symptom, not the underlying cause:

  1. How did you not KNOW your system was vulnerable? – Do you not have vulnerability scanning and penetration testing as an intrinsic part of a vulnerability management program?
  2. How did you not know your system wasn’t patched? – Are patch management and on-going review of the external threat landscape not also part of your vulnerability management program?
  3. Did the breach automatically trigger a deep-dive examination of your configuration standards to ensure that your base image was adjusted accordingly?
  4. Did you fix EVERY ‘like’ system or just the ones that were part of the breach?
  5. Did your policy and procedure review exercise make ALL necessary adjustments in light of the breach to ensure that individual accountability and requisite security awareness training was adjusted?
  6. Were Incident Response, Disaster Recovery and Business Continuity Plans all updated to incorporate the lessons learned?

And perhaps the most important part of any security program: Is the CEO finally paying attention? Ultimately this was their fault for not instilling a culture of security and individual responsibility, so if THIS doesn’t change, nothing will.

If the answer is no to most of these, you didn’t just not close the barn door after the horse bolted, you left the door wide open AND forgot to get your horse back!

Most breaches are not the result of a highly skilled and concerted attack, but of attackers taking advantage of systemic neglect on the part of the target organisation; i.e. MOST organisations with an Internet presence! Therefore, organisations that can work towards security from the policies up, and from the forensics report down, have a distinct advantage over those who do neither.


PCI – Going Beyond the Standard: Part 24, Disaster Recovery (DR) & Business Continuity Management (BCM)

You may be wondering why I would put this after Governance, seeing as that seems to bring everything together, and you may also be wondering why I did not include Disaster Recovery (DR) in the same post as Incident Response (IR), which everyone else always does.

They would be good questions, and my reasoning is relatively simple: you cannot HAVE Business Continuity Management (BCM) without Governance, so that must be formalised first; DR represents the detailed processes summarised in the BCM; and IR is the feed INTO the DR/BCM, not the output from it.

To put it another way; the Business Continuity Plan (BCP) details what must be done, in what order, and how quickly to save the business, DR puts that plan into effect, and IR would have uncovered the inciting incident that brought both the BCP and DR plans into play in the first place.

Assuming that made any sense, the question is: What if I don’t HAVE a BCP?

I am surprised every time I ask a client for a BCP and don’t get one. Mostly because I’m not too bright, but partly because it makes absolutely no sense to me that ANY organisation in any industry sector, anywhere in the world would not make such a simple effort to help themselves STAY in business. While both DR and BCP represent what amounts to contingency planning and will hopefully never have to be invoked (assuming your IR is top notch of course), NOT having a plan is nothing short of irresponsible.

There are several well known standards related to Business Continuity, and for obvious reasons they encompass more than just IT systems:

  1. ISO 22301:2012: Societal security — Business continuity management systems – Requirements
  2. ISO 22313:2012: Societal security — Business continuity management systems – Guidance
  3. ISO/IEC 27031:2011: Information security – Security techniques — Guidelines for information and communication technology [ICT] readiness for business continuity
  4. NIST Special Publication 800-34 Rev. 1, Contingency Planning Guide for Federal Information Systems
  5. ANSI/ASIS SPC.1-2009 Organizational Resilience: Security, Preparedness, and Continuity Management Systems

Unfortunately the ISO stuff will set you back a few hundred quid, so start with the NIST / ANSI stuff to get yourself familiar enough with the concept to at least ask the right questions.

For DR, start with mapping out all of your business processes and asset dependencies. If you don’t know how things fit together, you’ll have no idea how to put them back in place. Clearly, if your asset management processes are not robust, you can’t even begin the mapping process, so get that done first.

Once you have mapped out your business processes, it’s a relatively simple task to organise all of your procedural documentation into how you reestablish all the moving parts. You have all that, right? So whether you have full redundancy in all things, hot swap, warm spares or a whole host of other DR clichés, how you get your systems back online boils down to a series of easily followed instructions.

From an IT perspective, all the BCP does is tell you in which order to bring those systems back online and in what timeframe. It should be needless to say – but it isn’t – that the plan and all of its moving parts must be tested on an annual basis, otherwise even explicit instructions will not get your response times to an optimal state.
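As an illustration of that ‘order and timeframe’ point – with invented systems, dependencies and RTOs, so a sketch rather than a recipe – the recovery sequence falls straight out of the asset dependency map you built earlier:

    from graphlib import TopologicalSorter  # Python 3.9+

    # Hypothetical asset dependency map: each system lists the systems it depends on.
    dependencies = {
        "payments-app":  {"database", "auth-service"},
        "auth-service":  {"database", "directory"},
        "database":      {"storage-array"},
        "directory":     {"storage-array"},
        "storage-array": set(),
    }

    # Illustrative Recovery Time Objectives (hours) agreed with the business, not IT.
    rto_hours = {"storage-array": 2, "database": 4, "directory": 4, "auth-service": 6, "payments-app": 8}

    # A dependency must be back online before anything that relies on it.
    for system in TopologicalSorter(dependencies).static_order():
        print(f"restore {system:<14} within {rto_hours[system]}h of invoking the plan")

The real plan will obviously be a set of documents and assignments rather than a script, but if you cannot produce something this simple from your asset inventory, your mapping is not finished.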

No aspect of security should be performed half-arsed; DR and BCP processes are no exception. Even within the field of security BCP is a speciality, and making the plan simple and appropriate is a talent more than a skill. Expect to pay a lot for these services, but rest assured it is money well spent.

PCI – Going Beyond the Standard: Part 20, Incident Response (IR)

First, you may be asking why this blog does not include Disaster Recovery (DR) and Business Continuity Management (BCM, which governs the entire IR / DR process). Because the PCI DSS section 12.10.x is almost entirely related to IR (with the exception of a VERY brief nod to DR / BCP, below in brackets), I will handle DR / BCP separately in the series (post 23 in fact).

“12.10.1 – Create the incident response plan to be implemented in the event of system breach. Ensure the plan addresses the following, at a minimum:

    • Roles, responsibilities, and communication and contact strategies in the event of a compromise including notification of the payment brands, at a minimum
    • Specific incident response procedures
    • Business recovery and continuity procedures [This is the only requirement in the DSS that goes beyond the protection of CHD.]
    • Data backup processes
    • Analysis of legal requirements for reporting compromises
    • Coverage and responses of all critical system components
    • Reference or inclusion of incident response procedures from the payment brands.”

With regard to Incident Response, I put it this way: “What’s the point of being in business, if you don’t intend staying in business?”, and: “Good incident response is what prevents a security event from becoming a business crippling disaster.”

It makes absolutely no sense to me that organisations who basically depend on IT for significant chunks of income (which is most of them), have very little idea how to stop bad things from happening in the first place, let alone fix things when they go wrong. Of course, no incident response is going to predict an earthquake at the datacenter, but the organisations I’ve seen don’t even perform log monitoring properly, let alone consider the impact of acts of nature.

What does the development of a good incident response plan start with? Yep, a good policy. From there you agree on an appropriate Risk Assessment / Business Impact Analysis process, which in turn provides you with everything you need to determine whether you have any control gaps (after a gap analysis) and – if you’ve done it properly – a good indication of what your incident response and disaster recovery plans should entail.

There is no appropriate IR without an understanding of the business goals. If you have a 4 hour Recovery Time Objective (RTO), your IR will be significantly more robust than one where you can take a week to be back online. Yes, I know that RTOs (and RPOs – Recovery Point Objectives – for that matter) are DR terms, but if your incident response cannot detect a business crippling event in good time, then neither of those DR goals is an option for you.

When setting up your IR program, the most important word to keep in mind is ‘baseline’. Without a baseline, you don’t have much of a concept of what constitutes an incident in the first place. Only a baseline can give you both context and relevance.

From your baselined system configuration standards (DSS 2.x), to AV (DSS 5.x), to logging (DSS 10.x), to scanning (DSS 11.1.x, and 11.2.x), to FIM (DSS 11.5.x), you have many available inputs into your IR program, none of which will be of the slightest help if you don’t know what they SHOULD look like.
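By way of a trivial illustration only (not a substitute for a proper FIM or SIEM tool, and with purely hypothetical paths and file names), a baseline check is nothing more than comparing what you have against what you recorded when things were known-good:

    import hashlib
    import json
    from pathlib import Path

    def sha256(path: Path) -> str:
        # Hash in chunks so large files don't exhaust memory.
        digest = hashlib.sha256()
        with path.open("rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def check_against_baseline(baseline_file: str, watch_dir: str) -> list[str]:
        # The baseline is a simple {relative_path: sha256} mapping captured when the system was known-good.
        baseline = json.loads(Path(baseline_file).read_text())
        current = {str(p.relative_to(watch_dir)): sha256(p)
                   for p in Path(watch_dir).rglob("*") if p.is_file()}

        exceptions = []
        for name, digest in current.items():
            if name not in baseline:
                exceptions.append(f"NEW FILE: {name}")
            elif baseline[name] != digest:
                exceptions.append(f"MODIFIED: {name}")
        exceptions.extend(f"MISSING: {name}" for name in baseline if name not in current)
        return exceptions

    # Hypothetical usage – anything returned here is an exception to the norm for IR to investigate:
    # for finding in check_against_baseline("baseline.json", "/etc"):
    #     print(finding)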

That’s all IR is: a process whereby an exception to the norm is investigated, and appropriate action taken.

In each of my individual going-beyond-the-standard blogs related to the above DSS requirements, I have stressed the importance of baselining (well, except AV perhaps). The reason I did so is that they all lead up to this. I don’t care how well you have done ANY of the previous requirements, unless you can bring the outputs all together into a comprehensive process of taking action, all you have is a bunch of data to give to your forensics investigator.

You’ll notice though that I did not say a CENTRAL process, because while having a 24X7 Security Operations Centre to manage all of this would be ideal, it’s rarely practical, even if it involves an outsourced managed service provider (MSP). However, having the correct assignments and procedures to MANAGE the response is of utmost importance, and the details of this plan will vary considerably from company to company.

No, IR is not easy, but there is simply too much information and help out there for this difficulty to be any sort of excuse. And no, there is not much in this blog that actually provides guidance, but if this makes SENSE, then you have at least got enough to begin to ask the right questions.