COVID-19 Will Change Forever How We Look at Business Continuity / Crisis Management

The effects of COVID-19 on businesses are already unprecedented. It's also going to get worse before it gets better, and I don't just mean the ridiculous demand for toilet roll. While I am not very good at thinking in 'futuristic' terms, even I already know that the businesses that manage to survive will have no choice but to fundamentally change how they do what they do.

Permanently.

Well, those for whom data and electronic communications are the primary keys to their business model, that is. Face-to-face stuff (e.g. brick-and-mortar retail) is a whole other ball game and way beyond my ken.

From tele-working, to business travel / commuting, to the communication / collaboration technologies in use, the impact of this global phenomenon will be dissected and analysed for decades. The 'old ways' of working (9-5, bum-on-seat, Mon-Fri) could [and I think should] largely disappear if, and ONLY if, the lessons learned are taken on board. Every business is a series of functions, and it should not be of primary importance where the person who performs those functions is, or even who that person is.

This is the mistake most organisations make, and while the impact of something like COVID-19 has never been part of any BCP I've ever seen, we could certainly have extrapolated and prepared for events like it. Here in London, for example: TfL goes on strike and the daily commute takes an enormous hit; people take 3 to 4 weeks of annual leave in a row; critical locations suffer long-term power outages; and so on. All of these things, and many more like them, point to what is now required but almost universally absent.

But while there are literally hundreds of articles on how to DO business continuity in the face of COVID-19, they are ALL too little, too late. It's not the security industry's fault, however; it's the fault of every senior leadership team that saw every aspect of security from incident response onwards as nothing more than a paperwork exercise. Or worse, chose to remain ignorant of the right way forward.

Ignorance is a choice.

All that said, this blog is not actually about business continuity planning per se (that's not really my forte); it's more about 'crisis management', and how the LACK of it has made the COVID-19 pandemic worse for everyone, especially those in the medical professions.

At its heart, crisis management (and by extension, business continuity planning) is about four things:

  1. An understanding of the business’s individual functions;
  2. An understanding of how those functions are performed;
  3. An understanding of who performs those functions; and
  4. Appropriate communication.

In other words, if what you do:

  1. and how you do it is known and documented; AND
  2. is assigned to the appropriate and accountable resources.

…then all you have to worry about is the ongoing communication. Yes, the implementation of appropriate technology(ies) is relevant, but that should really be a one-off exercise plus ongoing maintenance.
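
To make those ingredients concrete: even a spreadsheet will do, but below is a minimal sketch in Python (every name, document and URL in it is hypothetical, not something this post prescribes) of a 'function registry' that records what each function is, where its procedure is documented, and who is accountable for it – so that when people suddenly become unavailable, the only remaining problem really is communication.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessFunction:
    """One business function: what it is, how it is done, and who does it."""
    name: str
    procedure_doc: str                                    # where 'how we do it' is written down
    owner: str                                            # the accountable person
    deputies: list = field(default_factory=list)          # who steps in if the owner is unavailable
    contact_channels: list = field(default_factory=list)  # how to reach them in a crisis

# A tiny, entirely hypothetical registry.
registry = [
    BusinessFunction(
        name="Payroll run",
        procedure_doc="https://wiki.example.com/payroll-runbook",
        owner="jane.doe",
        deputies=["john.smith"],
        contact_channels=["email", "mobile"],
    ),
]

def functions_at_risk(registry, unavailable_people):
    """Flag functions whose owner AND all deputies are unavailable."""
    unavailable = set(unavailable_people)
    return [
        f.name for f in registry
        if f.owner in unavailable and all(d in unavailable for d in f.deputies)
    ]

print(functions_at_risk(registry, {"jane.doe", "john.smith"}))  # -> ['Payroll run']
```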

Clearly this is not happening as a matter of course. Very few organisations have been adequately proactive in communicating to their employees what COVID-19 is, what its impact could be, and what to do about it. Almost everything that has happened to date has been reactive, ad hoc, and ineffective.

You think maybe this is a little unfair? That it's not the employer's responsibility to keep their workforce both informed and safe in the face of a pandemic? Tell me, who is better placed to do that? The Government? The newspapers? Your doctor?

It is my contention, and the real point of this blog [finally], that it’s the employers who should take the lead in these situations, because even Governments don’t have the level of influence over people that employers do. Of course everyone should follow what the Government and reputable experts say in these scenarios (CDC for example), but it’s the employers who have the most effective access to, and authority over, the lion’s share of the population.

They also have the best chance, by far, of heading off the rampant ignorance that leads to wearing a plastic bag over your head and other irretrievably stupid things that are still going on!

Not convinced? Think about it for a second. In the UK [for example] there are ~66 million people, ~half of whom are gainfully employed by ~2 million employers. If you exclude the public sector and the self-employed, you’re left with ~1 million employers with multiple employees.

I have long maintained that our employers have taken over the role of the communities of old (albeit very poorly):

  • Your and your family’s very livelihood (read Maslow’s Hierarchy of Needs) is largely dependent on them. Even your sense of identity;
  • You spend more than a third of your waking life either at work or getting to and from it;
  • A huge chunk of your interpersonal interactions are a result of your place of work (I married an ex-colleague for example (much to her regret)).

Virtually everyone has a laptop/desktop, mobile phone, or both. Whether they are work-supplied or personally-owned makes no difference: your employer has direct and personalised access to you. They also have the 'power' to MAKE you listen/read/respond and ACT in accordance with their mandates.

Now imagine if your employer implemented [or had access to] a service that provided not only the most up-to-date information from all of the reputable and relevant sources, but detailed instructions on what each employee should be doing at any given time. Would these millions of people, now armed against ignorance, not significantly 'flatten the curve'? Imagine almost one HALF of the population influencing and protecting the other half, even if it's only against themselves.

Bottom line: I believe organisations not only have a responsibility to keep their employees both informed and safe, they should also be held accountable for it (up to and including regulation). It is, after all, in everyone's best interests, including the employers themselves. It just makes sense, even if you're mercenary enough to only see this from a financial perspective.

Eventually I'll write up more specifics on how every organisation can put something like this in place, but now is not the time. All I ask is that you pay particular attention to how YOU are managing to perform your duties while stuck at home, because if you still can't do it the next time around, you'll have failed yourself and your employer equally.

Everyone, please stay safe, informed, and help out where you can, even if it’s by staying in the house.


Been Breached? The Worst is Yet to Come, Unless…

The information security sector is rife with negativity and pronouncements of doomsday, and while this title is no better, this blog is not meant to scare, but to provide an alternative view of the worst-case scenario: a data breach and the resulting forensics investigation. The fact remains that if your data is online and someone with the necessary skill-set wants it badly enough, they are going to get it. So the sooner you prepare yourself for the inevitable, the better you will be able to prevent a security event from becoming a business-crippling disaster.

By the time you make your environment as hack-proof as humanly possible, the chances are you have spent far more money than the data you're trying to protect was worth, which in security equates to career suicide. Instead, you are supposed to base your security posture on the only thing that matters – a business need – and then maintain your security program with an on-going cycle of test > fix > test again.

Unfortunately what happens in the event of a breach is that you are told what was broken and how to fix it from a technical perspective. This is analogous to putting a plaster / band-aid on a gaping wound. You’re not actually fixing anything. A forensics investigation, instead of being seen as the perfect opportunity to re-examine the underlying security program, is seen as an embarrassment to be swept under the carpet as soon as possible. Sadly, valuable lessons are lost, and the organisation in question remains clearly in the sights of the attackers.

For example, let's say a breach was caused by an un-patched server. The first thing you do is fix the server and get it back online, but all you have done is fix the symptom, not the underlying cause:

  1. How did you not KNOW your system was vulnerable? – Do you not have vulnerability scanning and penetration testing as an intrinsic part of a vulnerability management program?
  2. How did you not know your system wasn't patched? – Are patch management and on-going review of the external threat landscape not also part of that same program? (A minimal example of such a check follows this list.)
  3. Did the breach automatically trigger a deep-dive examination of your configuration standards to ensure that your base image was adjusted accordingly?
  4. Did you fix EVERY ‘like’ system or just the ones that were part of the breach?
  5. Did your policy and procedure review exercise make ALL necessary adjustments in light of the breach, so that individual accountability and the requisite security awareness training were updated accordingly?
  6. Were Incident Response, Disaster Recovery and Business Continuity Plans all updated to incorporate the lessons learned?
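
On points 1 and 2 in particular, 'knowing' is something you can largely automate. Purely as an illustration – not something the questions above prescribe – here is a minimal Python sketch that asks a couple of Debian/Ubuntu hosts, over SSH, whether they have packages awaiting updates. The hostnames are made up, and a real vulnerability management program would feed the results into scanning, ticketing and reporting rather than printing them to a console.

```python
import subprocess

# Hypothetical hosts we are accountable for; in reality this list would come
# from the asset inventory underpinning the vulnerability management program.
HOSTS = ["web01.example.com", "web02.example.com"]

def pending_updates(host: str) -> list[str]:
    """Return the names of packages with pending updates on a Debian/Ubuntu host."""
    result = subprocess.run(
        ["ssh", host, "apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    )
    # Skip apt's 'Listing...' header and keep only the package names.
    return [line.split("/")[0] for line in result.stdout.splitlines()[1:] if "/" in line]

if __name__ == "__main__":
    for host in HOSTS:
        outstanding = pending_updates(host)
        status = f"{len(outstanding)} packages awaiting patches" if outstanding else "up to date"
        print(f"{host}: {status}")
```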

And perhaps the most important part of any security program: is the CEO finally paying attention? Ultimately this was their fault for not instilling a culture of security and individual responsibility, so if THIS doesn't change, nothing will.

If the answer is no to most of these, you didn't just fail to close the barn door after the horse bolted, you left the door wide open AND forgot to get your horse back!

Most breaches are not the result of a highly skilled and concerted attack, but of attackers taking advantage of systemic neglect on the part of the target organisation – i.e. MOST organisations with an Internet presence! Organisations that work towards security from the policies up, and from the forensics report down, therefore have a distinct advantage over those that do neither.


PCI – Going Beyond the Standard: Part 24, Disaster Recovery (DR) & Business Continuity Management (BCM)

You may be wondering why I would put this after Governance, seeing as that seems to bring everything together, and you may also be wondering why I did not include Disaster Recovery (DR) in the same post as Incident Response (IR), which everyone else always does.

They would be good questions, and my reasoning is relatively simple: you cannot HAVE Business Continuity Management (BCM) without Governance, so that must be formalised first; DR represents the detailed processes summarised in the BCM; and IR is the feed INTO the DR/BCM, not the output from it.

To put it another way: the Business Continuity Plan (BCP) details what must be done, in what order, and how quickly, in order to save the business; DR puts that plan into effect; and IR would have uncovered the inciting incident that brought both the BCP and DR plans into play in the first place.

Assuming that made any sense, the question is: what if I don't HAVE a BCP?

I am surprised every time I ask a client for a BCP and don’t get one. Mostly because I’m not too bright, but partly because it makes absolutely no sense to me that ANY organisation in any industry sector, anywhere in the world would not make such a simple effort to help themselves STAY in business. While both DR and BCP represent what amounts to contingency planning and will hopefully never have to be invoked (assuming your IR is top notch of course), NOT having a plan is nothing short of irresponsible.

There are several well known standards related to Business Continuity, and for obvious reasons they encompass more than just IT systems:

  1. ISO 22301:2012: Societal security – Business continuity management systems – Requirements
  2. ISO 22313:2012: Societal security – Business continuity management systems – Guidance
  3. ISO/IEC 27031:2011: Information technology – Security techniques – Guidelines for information and communication technology [ICT] readiness for business continuity
  4. NIST Special Publication 800-34 Rev. 1: Contingency Planning Guide for Federal Information Systems
  5. ANSI/ASIS SPC.1-2009: Organizational Resilience: Security, Preparedness, and Continuity Management Systems

Unfortunately the ISO stuff will set you back a few hundred quid, so start with the NIST / ANSI stuff to get yourself familiar enough with the concept to at least ask the right questions.

For DR, start with mapping out all of your business processes and asset dependencies. If you don’t know how things fit together, you’ll have no idea how to put them back in place. Clearly, if your asset management processes are not robust, you can’t even begin the mapping process, so get that done first.
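
Purely as an illustration of that mapping (the asset names below are invented, and a toy dictionary is no substitute for proper asset management), Python's standard-library graphlib can turn a dependency map straight into a valid recovery order:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical asset-dependency map: each asset lists what it depends on.
# In practice this falls out of your asset management / CMDB data.
dependencies = {
    "payments-app":  {"payments-db", "message-queue"},
    "payments-db":   {"storage-array"},
    "message-queue": {"core-network"},
    "storage-array": {"core-network"},
    "core-network":  set(),
}

# A valid recovery order: every asset appears only after the things it needs.
restore_order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(restore_order))
# e.g. core-network -> storage-array -> message-queue -> payments-db -> payments-app
```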

Once you have mapped out your business processes, it's a relatively simple task to organise all of your procedural documentation around how you re-establish all the moving parts. You have all that, right? So whether you have full redundancy in all things, hot swaps, warm spares, or a whole host of other DR clichés, how you get your systems back online boils down to a series of easily followed instructions.

From an IT perspective, all the BCP does is tell you in which order to bring those systems back online and in what timeframe. It should go without saying – but it doesn't – that the plan and all of its moving parts must be tested at least annually, or even the most explicit instructions will never get your response times to an optimal state.
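
To illustrate that 'order and timeframe' point: the numbers below are entirely made up (real ones come out of your Business Impact Analysis and your annual tests), but a few lines of Python are enough to compare each system's target Recovery Time Objective (RTO) against what the last DR test actually achieved.

```python
# Hypothetical recovery targets vs. what the last DR test actually achieved (minutes).
# Real figures come from the Business Impact Analysis and annual test results.
systems = {
    "core-network": {"rto": 30,  "last_test": 20},
    "payments-db":  {"rto": 60,  "last_test": 95},
    "payments-app": {"rto": 120, "last_test": 110},
}

# Recover in order of urgency (shortest RTO first) and flag anything whose
# tested recovery time blows its target.
for name, s in sorted(systems.items(), key=lambda kv: kv[1]["rto"]):
    verdict = "OK" if s["last_test"] <= s["rto"] else "MISSED RTO"
    print(f"{name}: target {s['rto']} min, last test {s['last_test']} min -> {verdict}")
```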

No aspect of security should be performed half-arsed, and DR and BCP processes are no exception. Even within the field of security, BCP is a speciality, and making the plan simple and appropriate is a talent more than a skill. Expect to pay a lot for these services, but rest assured it is money well spent.