Who’s Making Cybersecurity So Complicated?!

One of the goals of this blog, as well as the ultimate goal of my career, is to simplify all aspects of cybersecurity. Well, maybe not all. I have no idea how to simplify a penetration test (or even perform one), or encryption mechanisms, but I’ve got the high-level stuff covered! 🙂

From my perspective, cybersecurity is already simple. You would hope so, it's what I do, but that's not actually what I meant. What I meant is that every aspect of cybersecurity must be simple for it to be effective security in the first place. There is no room for complicated. It must also be accessible to everyone who needs it, regardless of their current role or previous experience.

It is therefore the job of every cybersecurity professional to make this stuff easy, but clearly we are not doing a very good job. In fact, I would go as far as to say that there are certain elements that seem to go out of their way to make things difficult!

What / who are these elements, and why are they doing it?

  1. No offence, but Element 1 is You: while you may not be a security expert, you are every bit as responsible for security as those who are the experts. Ignorance of your responsibilities is no excuse, and if your organisation does not provide you with the necessary training, demand that they do. Unless you've lived in a hole for the last 10 years, you have seen the headlines related to data breaches. You really don't want to be the cause of one.

  2. Which is the ideal segue into Element 2: Senior Management. If they don't care about security, there's a very good chance you don't care either (see Element 1). If cybersecurity is not in the Top 5 priorities of your BoD / CEO, then you likely have an entirely ineffectual security program, if you even have one at all. There is nothing more difficult and seemingly complicated than starting something from the very beginning, but start you must.

  3. Element 3 is, of course, Lawyers / Regulators. Not that they do this on purpose; it's that they just can't help themselves. The language of the law is practically incomprehensible to the rest of us, yet it has to be lawyers who write every contract, regulation, and [of course] law out there. Combine their legalese with something you already don't understand [cybersecurity], and you're left scratching your head in frustration. Or worse, avoiding it altogether.

  4. And the worst of the bunch, Element 4: Security Vendors. This is the one that is truly reprehensible. How many of you, for example, know what Cloud Access Security Brokers (CASBs) are? Or User and Entity Behavioral Analytics (UEBA)? What about Intelligence-Driven Security Operations Center Orchestration Solutions? No, me neither. What I DO know is that you don't need ANY of these things until such time as your risk assessment TELLS you that you need them! You have that process well oiled, right?

Of all the horrendous clichés out there, my favourite is 'Back to Basics'. Cybersecurity is simple; bloody difficult, but simple. Anything that complicates it can be effectively ignored until such time as you're ready for it. You will never get there by buying technology, and you will never get there until you get the basics right.

Luckily the basics are the cheapest things to fix. All you have to do is get your CEO to care, formalise your Governance, and get all of your policies and procedures in place.

Simple, right?

OK, that was facetious, but if you think any of these things is complicated, you're just not asking the right people the right questions.

[If you liked this article, please share! Want more like it, subscribe!]

Your Privacy is a Currency, Spend it Wisely

Whatever side you are on in the whole privacy debate, you have probably heard variants of the following two arguments:

  1. I don’t care if the Government happen to read my emails while looking for bad guys; I have nothing to hide, and I feel safer knowing they are doing something.

  2. There is no evidence that mass digital surveillance has any positive impact on the reduction of crime or terrorism, so my individual right to privacy (UDHR, Article 12) is more important.

Privacy-is-everything advocates will say things like: “Saying you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say”, or “You can’t give away the rights of a minority, even if you vote as a majority.”

Privacy-as-a-currency advocates will counter with things like: “Saying mass surveillance has no proven benefit is like saying laws are ineffective; you have no idea how many crimes were prevented for fear of being caught”, or “The minority has no right to impose their will on the majority when personal safety is at stake.”

It makes no difference what side you are on; I will not change your mind, and you will not change mine, but we each must pay the same cost for the conveniences and functionality we have come to expect. And accept the responsibility for our choices.

The Internet and now mobile devices have completely changed the way we do business, interact with family and friends, buy stuff, and according to Ian Morris in “The Decline and Fall of Empires”, they will even ‘help’ change our biology;

“As social development rises ever higher, revolutions in genetics, computing, robotics and nanotechnology are beginning to feed back into our biology, transforming what it means to be human.”

Yet we somehow have this expectation that both the Internet and mobile devices are human rights in and of themselves, that we can do whatever we want on them and through them, yet still have an expectation of privacy. Governments aside, how can we be so naive?

From my overly simplistic perspective, the world is made up of three kinds of people:

  1. The Good – We don’t have to worry about the good; their lives are spent taking care of whatever it is they care about, which is always in line with established societal norms / laws, regardless of the area of influence (i.e. immediate family, community, country, or global).

  2. The Bad – They care nothing for societal norms; they want, so they take. They care nothing for your right to privacy, and outside of instances of gross incompetence, fall almost entirely within your ability to point fingers if you are a victim. IF you can catch them.

  3. The Ordinary – Basically decent, perhaps with a little ‘moral flexibility’ thrown in, who may not like the Bad guys, but understand them enough not to be shocked when they do bad things. These are the majority, and the smarter ones prepare for the worst-case scenario.

Laws and rights are written to protect everyone, but not everyone can be protected in the same way. I have contended many times that the more ‘out there’ that’s known about me, the less someone else can pretend to BE me. My life’s story is the equivalent of a public ledger, and any anomalies are immediately obvious. This is true for my blogs, my social media, my payment history, and hopefully, even my identity itself.

Of course, there are many people who, quite literally, think I’m 100% wrong, an idiot, or both.

Whatever course YOU choose cannot be seen entirely within the context of your rights, especially the ones you are spending every moment you are online.

[Ed. Found this, thought it was well done; Amazing Mind Reader Reveals His ‘Gift’]

The Investigatory Power Bill, Why Are We So Nervous?

This will be my first blog where I am going to a) plead ignorance; b) ask for your input; and c) actually listen to someone else’s opinion (potentially).

[gasp!]

I have just spent all day today going over parts of The Investigatory Powers Bill (IPB), the Regulation of Investigatory Powers Act (RIPA) of 2000, as well as listening to Theresa May’s statement on Parliamentlive.tv, all in an effort to find SOME evidence that it could “…allow Government to ban end-to-end encryption, technology powering iMessage and WhatsApp“.

OK, so that’s just according to The Independent, but The Guardian has their “Investigatory powers bill: snooper’s charter to remain firmly in place“, The Telegraph has their “Internet firms to be banned from offering unbreakable encryption under new laws“, and so on.

All I could find in the IPB was this:

189 – Maintenance of technical capability

(4) The obligations that may be imposed by regulations under this section include, among other things—

(c) obligations relating to the removal of electronic protection applied by a relevant operator to any communications or data;

(e) obligations relating to the handling or disclosure of any material or data.

…as well as a reference to encryption as it relates to RIPA, which states:

21 – Lawful acquisition and disclosure of communications data.

(4) In this Chapter “communications data” means any of the following—

(b) any information which includes none of the contents of a communication (apart from any information falling within paragraph (a)) and is about the use made by any person—

So, first my ignorance: I do not speak Governmental legalese, so I have no idea if the vagueness of this is just a way of saying ‘we can do anything we want’, or if it’s an established-by-precedent way of saying ‘this is all we can do’.

I have also not read the whole thing; it’s 300 pages long and makes the PCI DSS look like the last Harry Potter book.

Which brings me to the second part: your input. There will be those who have read not only the IPB, but the RIPA, the Communications Data Bill of 2012, as well as the Data Retention and Investigatory Powers Act of 2014. You are also likely to fall [mostly] into one of only two categories: for, and against.

I would love to hear reasoned thoughts from both, or at least be pointed to an unbiased Cliffs Notes version of each!

Finally, listening to someone else’s opinion: anyone who has been nice / bored enough to read my blogs over the last 2.5 years will not have read even one where I was in any way unsure of my opinion / stance. Even when it comes to security (what’s the font for sarcasm?).

In this case, I am 100% on the fence, mostly because of (a) above, but partially because any talk of ‘investigatory powers’ or ‘interception of communications’ will have significant impact on privacy, and the implementation of my real interest: Identity Management.

While my thoughts on privacy itself are public record, the impact these Governmental powers will have on putting true Identity Management into effect is far from clear to me. There will be no secure mobile payments, no Internet of Things, and no hiding from your wife if there is something in the middle capable of ‘reading’ your communications. Not because I don’t trust the Government, but because anything THEY have access to will eventually be available to the bad guys.

We work within established rules of decency, they don’t (the bad guys that is).

Basically, please help, all comments / thoughts welcomed.

No Mr. Big Data Service Provider, You CANNOT Do That!

In a very interesting presentation at the 2015 ISC2 EMEA Congress in Munich, Dr Lucas Feiler posited that any big data analytics performed, whether internally or outsourced, is going to attract significant legal challenges related to privacy. And even if the challenges CAN be resolved, it will likely be in ways that make something of a mockery of the EU General Data Protection Regulation (GDPR)’s intent.

Getting around privacy regulations will involve token human interaction (i.e. smoke and mirrors) where none is desired. In areas that need to be dominated by AI and the resulting automated decisions (insurance, for example), adding the human element to avoid the appearance of prejudiced results will probably be standard until the algorithms become smart enough to be considered ‘reasonable’ (my absolute favourite legal term, right up there with ‘appropriate’).

Human interaction is not desired by those doing the analysis, mind you; we may think otherwise.

While not in place …yet, the European Council aims to adopt the GDPR in 2017 and have it in full effect after a “two-year transition period“. While 4 years may sound like a long time, when you consider the following statistics (taken from www.unifiedsocial.com), you can only imagine how difficult it will be to clean up the mess if organisations don’t start following the regulation now:

  • The data volume in the enterprise is estimated to grow 50x year-over-year between now and 2020

  • 35 zettabytes (that’s 35,000,000,000,000,000,000,000 bytes) of data generated annually by 2020

  • According to estimates, the volume of business data worldwide, across all companies, doubles every 1.2 years
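
That last statistic is just compound growth, and it is easy to sketch what it implies: if data doubles every 1.2 years, volume after t years is the initial volume times 2^(t/1.2). A minimal sketch (the function name and figures are mine, for illustration):

```python
def projected_volume(initial: float, years: float, doubling_period: float = 1.2) -> float:
    """Data volume after `years`, assuming it doubles every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

# Over the roughly four years until the regulation is in full effect,
# data doubling every 1.2 years grows about tenfold:
growth_factor = projected_volume(1.0, 4.0)  # ~10x
```

In other words, by the time the regulation bites, organisations could be sitting on an order of magnitude more data than they hold today.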

Granted, the vast majority of this data will be in the form of cat videos and Kardashian tweets, but that still leaves an extraordinary amount of YOUR data sitting on servers just waiting to be mined, manipulated, and analysed in ways we cannot even imagine. We cannot imagine them because they have not been INVENTED yet, and that’s the Holy Grail for any organisation, and the impetus behind big data analytics in the first place; How to manipulate the data they have into the development of new revenue streams.

To put that another way; How to take the data they have on you already and present it back to you in a way that makes you spend more money.

I’m actually not against this per se; Amazon are already doing a mild version of it with their “Frequently Bought Together” and “Customers Who Bought This Item Also Bought” sections, but can you imagine how much data they have on their more than 1.5 MILLION servers across 17 global regions?

The card brands and Facebook can predict within a two-week window whether or not you’re going to get divorced; how much other data do THEY have? Or Google?

But can the GDPR actually make a difference? Probably; it has a VERY big stick, and you know how lawyers love their class action suits!

Look at GDPR CHAPTER II, PRINCIPLES, Article 5 – Principles relating to personal data processing:

Personal data must be:

(a) processed lawfully, fairly and in a transparent manner in relation to the data subject;

(b) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; (…);

(c) adequate, relevant and not excessive in relation to the purposes for which they are processed (…);

Etc…

…and now CHAPTER VIII, REMEDIES, LIABILITY AND SANCTIONS, Article 79a – Administrative fines:

The supervisory authority (…) may impose a fine that shall not exceed 1 000 000 EUR or, in case of an undertaking, 2 % of its total worldwide annual turnover of the preceding financial year, on a controller or processor who, intentionally or negligently:

(a) processes personal data without a (…) legal basis for the processing or does not comply with the conditions for consent pursuant to Articles 6, 7, 8 and 9;

If you LOSE personal data the fines can be as much as 5% of worldwide annual turnover.
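
Reading the quoted draft text as "the greater of the flat amount and the turnover percentage" (my interpretation of the draft, not legal advice), the maximum exposure is simple arithmetic. The function name and example figures below are illustrative:

```python
def max_admin_fine(annual_turnover_eur: float, data_loss: bool = False) -> float:
    """Upper bound on the administrative fine for an undertaking, per the
    draft Article 79a text quoted above. Assumes the applicable cap is the
    greater of the flat EUR 1m amount and the turnover percentage (2%, or
    the 5% cited for losing personal data) -- an interpretation, not legal
    advice."""
    flat_cap = 1_000_000.0
    pct = 0.05 if data_loss else 0.02
    return max(flat_cap, pct * annual_turnover_eur)

# An undertaking with EUR 500m worldwide annual turnover:
max_admin_fine(500_000_000)                  # EUR 10m under the 2% cap
max_admin_fine(500_000_000, data_loss=True)  # EUR 25m under the 5% cap
```

Even for a mid-sized undertaking, the percentage caps dwarf the flat EUR 1m figure, which is exactly the point of the stick.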

Will that make a difference?

I hope so.


Is Authentication of Identity Even Possible?

Before I can answer that question, I need to define what I think Identity is. Too often authentication is used interchangeably with identity, but that’s like saying a bank account and money are the same thing.

In its most basic terms, authentication is the WHAT-of-you; identity is the WHO-of-you. You can authenticate via password to log into your computer or buy a cup of coffee, but if you want a mortgage, considerably more background information is required. I could give you 5 usernames & passwords, 5 forms of biometrics, and 5 different hardware tokens, and you would still not know to any degree of certainty if I’m good for a loan.

Example: Two people are standing in front of you, one’s a stranger and one’s a close friend. You know [for the sake of this hypothetical] that they are both who they say they are, but do you feel equally comfortable lending them your car?

I would assume the answer is no, you would NOT be comfortable loaning a stranger your car, so what’s the difference? Trust, pure and simple. You trust your friend because you know WHO they are, not WHAT they are.

Unfortunately, you will never be able to know everyone on the planet as well as your friends, so how can you assure a sufficient level of trust to do business of any sort? Currently, authentication is enough, but it’s almost entirely one-way. If you want to buy something on the Internet, YOU have to complete the login details (often including a permanent account), you have to enter all of your payment details, and you have to accept the risk that the merchant will send the goods as promised.

With an identity, built over the course of time and receiving input from many sources, every individual and every organisation can build a demonstrable level of trust so that both sides have the assurance they need to conclude the transaction. Fraud in e-commerce is rampant because we simply don’t have this 2-way assurance.

From the individual side: Credit score, confirmation of available funds, payment history, and any number of other factors can build a Trust Assurance Score (TAS), and it will be up to both the buyer and the seller to agree on the level of score required to complete a purchase. e.g. on a scale of 1 – 100 (100 being a perfect TAS), the merchant might require a score of 5 to sell you the ubiquitous cup of coffee, but a score of 50 to rent you a car, and a score of at least 75 to grant a mortgage.

From the merchant side: Time in business, corporate credit rating, ratings and reviews and so on can build their TAS, so you can decide up front the level of risk you are prepared to accept to conduct the business at hand.
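
The two-sided check described above is trivial to express once the scores exist. Everything in this sketch (apart from the TAS concept, which is the post's own) is hypothetical:

```python
def transaction_approved(buyer_tas: int, merchant_tas: int,
                         merchant_requires: int, buyer_requires: int) -> bool:
    """Two-way trust check: the deal proceeds only when each party's
    Trust Assurance Score meets the threshold the other side demands."""
    return buyer_tas >= merchant_requires and merchant_tas >= buyer_requires

# Coffee: the merchant requires a TAS of 5; the buyer wants a merchant TAS of 20.
transaction_approved(buyer_tas=60, merchant_tas=80,
                     merchant_requires=5, buyer_requires=20)   # True
# Mortgage: the lender requires at least 75 -- this buyer falls short.
transaction_approved(buyer_tas=60, merchant_tas=80,
                     merchant_requires=75, buyer_requires=20)  # False
```

The hard part, of course, is not the comparison but everything feeding into it, which is exactly where the challenges below come in.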

Clearly there are many challenges with this: How do you build a rating in the first place (the young and new businesses should not be unfairly disadvantaged)? How do you provide instant access to this rating without exposing all of the detailed information behind it? How do you tie in the level of authentication required to even request a TAS? And so on.

I’m not proposing a way to fix this; I’m simply trying to demonstrate that the reason we don’t HAVE identity built into transaction authentication is that these issues have not been addressed yet. And until we have identity built into transactions, we won’t have the levels of trust required to make significant change. Payments, for example, will move from plastic to mobile, but authentication (even multi-factor) is not enough to significantly reduce fraud.

I suspect blockchains (the technology behind cryptocurrencies) have a big chunk of the answer, but I can’t even conceive of how this will be done. I just know it needs to be.