Information Security vs Privacy

Information Security vs Privacy, are the Lines Blurring?

My original title was “Data Security vs Data Protection[…]”, but an unfortunate number of people see these as pretty much the same thing, even interchangeable. Then I chose Cybersecurity instead of Data Security but that doesn’t cover all forms/formats of personal data, so I finally had to settle on Information Security.

As for Data Protection, it's not, in and of itself, Privacy, and so on…

But you see the problem already? If we can’t even agree on common terminology, how are we expected to ask the right people the right questions in order to solve our problems? But I digress…

For the purposes of this blog I have chosen the following definitions of ‘Information Security’ and ‘Privacy’:

  • Information Security – “…is the practice of preventing unauthorised access, use, disclosure, disruption, modification, inspection, recording or destruction of information.”; and
  • Privacy – “…is the ability of an individual or group to seclude themselves, or information about themselves, and thereby express themselves selectively.”

It should be immediately obvious that these are NOT the same thing. Significant overlap, yes, but as always, security is just an enabler. Security does not dictate the goals of a business, it enables them; security does not give you privacy, it enables you to have it. A personal trainer does not make you healthy, s/he provides guidance in ONE aspect of your health goals. You still have to eat better, drink less, stop smoking, reduce stress and so on.

But now there seems to be an expectation that security people should also be privacy experts (I’m not saying they can’t be, but I actually don’t know any). Because GDPR is a big deal and ‘data protection’ is seen as the same as ‘data security’, everyone is looking to security people for guidance. Would you hire a fat personal trainer?

Take me for example: I have spent a large chunk of the last 2 years learning more about privacy (and GDPR in particular), yet I still consider myself 99.9% a security guy. I have even written fairly extensively on both privacy (personal opinion) and GDPR (hopefully accurately), but once again, neither of these things is what I DO. Privacy is not a core competence of security (just look at the CISSP CBKs).

But, and to the point of this blog, can a 'security guy' keep doing just security in the brave new world post-May 25th? The short answer is of course yes, if that's all they want, but are they doing their careers any favours? And what about their clients? Can a security expert without at least a foundation in privacy really perform their function appropriately? For security to enable anything, it needs context, and privacy is now a major factor of that context for any business.

In other words, has privacy now become so important, that any field with a significant impact on it must revise its training syllabus? And given that information security has such a significant overlap with privacy, are security people best placed to take on a bigger role in providing privacy guidance?

The answer, as with everything else, is: it depends. A business has to be able to find the appropriate help, and the 'expert' has to have the appropriate skillset. There is no standard here, and only the people [on both sides of the equation] who educate themselves should be making any decisions. Should.

In reality, most organisations don’t even have in-house security expertise, let alone privacy expertise, so where is this guidance supposed to come from? I now think that security folks are very well placed to begin taking on a larger privacy mantle. I even believe that security folks who don’t get a foundation in privacy are severely limiting their careers. Could you imagine hiring a CISO who hasn’t even read the GDPR?

Information Security and Privacy will never merge completely; they are just too big and too different. But the lines are indeed blurring.

[If you liked this article, please share! Want more like it, subscribe!]

Privacy

The Right to Privacy: Don’t Tell Me I Have to Care!

I've already written on the subject of privacy several times, and will likely be regurgitating a lot of what I've said previously, but an article I read last week really pissed me off: Three Reasons Why the "Nothing to Hide" Argument is Flawed. It's exactly this kind of absolutist nonsense [from both sides of the privacy 'debate'] that makes true progress so bloody difficult.

Their first point is: "1) Privacy isn't about hiding information; privacy is about protecting information, and surely you have information that you'd like to protect." This is backed up by several metaphors, one of which is "Do you close the door when you go to the bathroom?" Seriously? Even the Universal Declaration of Human Rights qualifies the right to privacy with the word 'arbitrary':

“Article 12 – No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

Every other treatise [that I've read] on privacy has a similar qualifier, which clearly implies that there can be very good reasons for 'interference'. This is further supported by the fact that privacy is only a fundamental right, not an absolute right.

Their second point is: "2) Privacy is a fundamental right and you don't need to prove the necessity of fundamental rights to anyone." If you've never read anything about privacy, you would think that a fundamental right is immutable and incontestable. It's not. As Recital 4 of the GDPR phrases it: "The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality."

In other words, your right to privacy must be put into context with EVERYONE else's OTHER rights. For example: if I believed, hypothetically, that 'mass surveillance' increases the safety of myself and my family, then your demand for privacy-first puts my loved ones directly in harm's way. Therefore, my absolute (or 'unalienable') rights to what Americans call 'life, liberty and the pursuit of happiness' are more important than you not being seen with your trousers around your ankles.

But then they go big and say: “We change our behavior when we’re being watched, which is made obvious when voting; hence, an argument can be made that privacy in voting underpins democracy.“, which is a ridiculous stretch. Democracy through a “cohesion produced by a homogenous people.”? Sure. Democracy through a ‘consensus on fundamental principles’? Absolutely. Democracy through “privacy in voting”? Get a bloody grip.

And their final point, "3) Lack of privacy creates significant harms that everyone wants to avoid.", is basically true. But their example, "You need privacy to avoid unfortunately common threats like identity theft, manipulation through ads, discrimination based on your personal information, harassment, the filter bubble, and many other real harms that arise from invasions of privacy.", makes it sound like organisations and governments are forcing us to put this stuff online. WE have the choice about what personal data we expose online, and while there absolutely should be [more] checks and balances against governments overstepping their bounds, and organisations like Google should be completely transparent in their dealings, we are the ones giving our personal data away in exchange for convenience.

You've probably heard the quote by Snowden: "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."

If that’s true, I could argue that what most people actually do online is little different from someone who cuts out their tongue. Regardless of whether we have the RIGHT to privacy, it does not mean we HAVE privacy, and certainly not online. If it’s online, it’s exposed, so you have two choices:

  1. Don't put it online, so no more online banking, Facebook, Amazon, and so on; or
  2. Put online only the things you don't care about losing (e.g. no nude selfies), or can protect in other ways (e.g. insure your bank accounts)

To one degree or another we all trade our privacy for functionality. We all want the convenience of online banking, shopping, communication, and all the world's knowledge at our fingertips. But did you really think this was free? Our right to privacy is both a privilege and a currency, which means you have a responsibility to protect it and a responsibility to spend it wisely, respectively. Both of these responsibilities require you to NOT be ignorant, to educate yourselves and not rely on others to do it for you.

But in the end it has to remain a CHOICE! The 'privacy-first' side of the debate will NEVER agree with the 'nothing-to-hide' side, but like every fundamental right we have (and yes, democracy itself), this choice will be determined by the majority. So even though, as Snowden said, "[…] the majority cannot vote away the natural rights of the minority.", the opposite is equally true: "The wishes of the minority cannot outweigh the wishes of the majority." To put it another way, if a person wants total privacy, then they should have the right to have it, but not if that conflicts with the rights of others.

What very few people address is the fact that my definition of privacy may be different from yours. You may think 'secrecy' is the best way to privacy, but I think 'hiding in plain sight' is more appropriate in the Information Age. The more that is known about me, the less likely it is that someone can pretend to BE me.

I could go on bitching, but there’s no point. I will not change your mind, and you will not change mine. The only difference is that I’m not going to try to shame you for your opinions, or even LACK of opinion. We choose the things we care about, and NO ONE can care about everything. As long as your decisions are not based on ignorance of the subject, do as you wish.

[If you liked this article, please share! Want more like it, subscribe!]

GDPR Deadline

GDPR: May 25th is NOT a Deadline!

It seems there are only two ways to sell GDPR products and services:

  1. Tell everyone they are going to get fined €20M or 4% of their annual revenue; and
  2. Tell everyone that they only have until May 25th to get compliant or they’re in big trouble

These are both utter nonsense.

While the monster fines are a theoretical possibility (per Article 83), I would hope by now that you know they will be reserved for the VERY worst offenders. If you don't, read this from the UK's Information Commissioner herself: GDPR – sorting the fact from the fiction. My favourite quote being:

"Issuing fines has always been and will continue to be, a last resort. Last year (2016/2017) we concluded 17,300 cases. I can tell you that 16 of them resulted in fines for the organisations concerned."

And not one of these 16 (0.09% of the total!) was anywhere near the maximum of £500K, so forget the damned fines!! Unless of course you work for a bunch of total scumbags like Keurboom, then I hope you get completely reamed.

Anyway, so here we are, less than 3 months away from May 25th, and the ‘deadline’ for compliance is the most prevalent scare tactic!

“Get compliant before May 25th or else!!” “Deadline fast approaching!!” “Trust me, I’m a certified practitioner!!”

The thing is, “or else” what, exactly? What do you think is going to happen on May 25th? That your supervisory authority is going to be banging on your door with cries of “Article 30!! Show us your records!!“? Do you expect to receive hundreds of requests for access from people who know even less about GDPR than almost anyone reading this blog? Do you think you’ll suddenly be the subject of a class action suit?

Do you think your supervisory authority even knows who you ARE at this point? [No offence]

I’ll tell you what’s going to happen on May 25th …not a bloody thing different. It will be business as usual.

However, what WILL happen from May ONWARDS is a gradual increase in how the GDPR is enforced in each member state. Guidance from supervisory authorities will increase in line with the real-world issues they face; certification mechanisms will be released, forcing all organisations to at least review and consider them; the general public will gradually come to expect the heightened protection mechanisms and vilify those organisations that are not up to speed; and so on.

To put this another way; Data Protection law is not going away and cannot be ignored. By anyone. In fact, in light of things like AI/ML, Big Data and the Internet of Things, data protection is only going to become more embedded in everything we do. It has to, and you need to keep up with it.

So the more time that passes, the fewer excuses you will have for doing nothing, regardless of the size / type / industry vertical in which your business operates. In the UK, for example, you are already 20 years too late to be proactive: the DPA has been out since 1998, and compliance with it would have covered the lion's share of the GDPR, which itself has been out for almost 2 years.

While I can sympathise with organisations fumbling around but doing their best, I have little sympathy for organisations who have done nothing. It’s these folks who should be the most concerned, not for May 25th, but every day after it.

Not one organisation out there is incapable of doing these 6 things before the ‘deadline’. Not to completion perhaps, but a good chunk:

  1. Find out where all your personal data is; – [even crappy questionnaires and interviews will get you most of the way there]
  2. Map that data to the business processes that created it; – [HR, Sales, Marketing and so on… see the sketch after this list]
  3. Agree on which business processes should continue as they are, which should change, and which should stop altogether;
  4. Get rid of all instances of personal data that do not support the agreed business processes;
  5. Obtain appropriate guidance on the lawful basis(es) for processing what’s left; and
  6. Commit, in writing, at the Board level, to achieving full compliance
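
For illustration only (and very much not an official Article 30 template), here is a minimal sketch of what the output of steps 1, 2 and 5 might look like if you captured it in code rather than a spreadsheet. Every field name and entry below is a made-up assumption, not a prescribed format:

```python
# Hypothetical personal-data inventory sketch; field names and entries are
# illustrative assumptions, not an official Article 30 record template.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataInventoryEntry:
    data_category: str            # e.g. "employee bank details"
    business_process: str         # e.g. "HR - payroll"
    system_or_location: str       # e.g. "payroll SaaS, EU region"
    lawful_basis: Optional[str]   # e.g. "contract"; None = not yet agreed
    keep: bool = True             # does an agreed business process still need it?

inventory = [
    DataInventoryEntry("employee bank details", "HR - payroll",
                       "payroll SaaS", "contract"),
    DataInventoryEntry("prospect email addresses", "Marketing - newsletter",
                       "CRM export on a shared drive", None),
]

# Steps 4 and 5 triage: anything without an agreed lawful basis, or no longer
# supporting an agreed process, is a candidate for deletion or legal guidance.
for entry in inventory:
    if not entry.keep or entry.lawful_basis is None:
        print(f"Review or delete: {entry.data_category} ({entry.business_process})")
```

A spreadsheet does exactly the same job; the point is simply to have one place that maps every pool of personal data to a business process and a lawful basis, so the gaps are visible.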

While this is nowhere near a full demonstration of compliance, you have done 3 things that the ICO have every right to expect. You have:

  1. reduced your risk by minimising your threat exposure – you can’t lose or misuse what you don’t have;
  2. done your best to ensure that you are supporting the data subject’s rights – the whole point of this exercise; and
  3. MADE A BLOODY START!

I don't care if you only achieve full compliance 5 years from now, and it's unlikely the supervisory authorities will care either, if, and ONLY if:

  1. Your commitment is real;
  2. You have a plan; and
  3. You don’t get reported or breached

It's up to you to do ENOUGH now to make sure 3. doesn't happen; work on the rest when you can. Just make sure you can justify your timelines.

[If you liked this article, please share! Want more like it, subscribe!]

PIN on Mobile

PCI: Software-Based PIN Entry on COTS (a.k.a. PIN-on-Mobile)

Almost four YEARS ago I wrote Software PIN, the Rosetta Stone of Future Payments, then just over a year later I wrote; Mobile Authentication: Exceeding Card Present Security?

Just this month the SSC finally came out with their Software-Based PIN Entry on COTS Security Requirements v1.0.

[Ed. While I don’t have to wonder why PIN was my primary focus, I can see how pointless it was …almost. It just makes the delay on this standard that much more inexcusable.]

On with the story… Software PIN is more commonly referred to as PIN-on-Mobile (or the catchier PIN-on-Glass), and is the 'game-changing' technology that will "enable merchants to accept PIN-based payments with the PIN entered on a commercial off-the-shelf device, such as a consumer-grade mobile phone or tablet."

What has taken them so long to make what – from my jaded perspective – is the only move that will delay their inevitable demise? It’s not like there was some miraculous innovation in mobile or encryption technology in the last couple of years! Every requirement in the standard was available/achievable long before I even wrote my blogs. As were viable solutions for that matter.

I suspect there are lots of reasons why they were so slow, but chief amongst these has to be their complete inability to adapt to the fast-paced innovation rampant in the FinTech industry. Especially given their hopelessly antiquated technology. It's only their global adoption and sheer ubiquity that keeps them where they are. I blame the banks too; change for them means acceptance of liability.

Come to think of it, what an amazing coincidence that PSD2 – the biggest nail in the payment card's coffin since …well, ever – came out this month as well. Weird huh?

As far as I am concerned, PIN-on-Mobile was the card brands' last hold-out, and now they're done. Hopefully between the XYZ-Pays (ApplePay, SamsungPay etc.) and now the entry of cardholder PIN on [almost] any COTS device, big merchants / retail associations will finally have the balls to stand up for themselves.

How many millions have they spent in the US on EMV terminals, just to find out a few years later that it was not only entirely unnecessary, but that they're now tied into an investment that will leave them lagging behind the competition who were slower off the EMV block?

I know that's harsh, and we really have no right to judge. Have any of the following questions ever occurred to you?

  1. If I can use my phone to pay for something, why do I have to tie that payment to a branded card?;
  2. With all of the security requirements around the entry of a software PIN, why the Hell do I still have to use one? In other words, if it's that bloody difficult to secure it, why not use something else?; and
  3. Isn’t there a better way!?

If you’re like the majority of the population, these questions are more like:

  1. Why doesn’t MY bank support this?! (looking at YOU Barclay Business!), or more commonly; why would I use this service when I have a piece of plastic?;
  2. What’s wrong with PIN?; and
  3. [nothing]

The fact is that the lion’s share of the cashless transactions globally are performed by those who have never known a time before payment cards. We simply can’t imagine anything else and we don’t even notice their inconvenience. We also don’t see the costs imposed by the middlemen.

But let me ask you this; Would you ever go back to using a feature phone? I’ll [almost] guarantee that you had no idea what features you wanted in a phone until you used a smartphone for the first time. And now you can’t live without it. Hell, most of us can’t even put the damned things down!

The same thing WILL happen to payments, but not until consumer indifference is overcome by something shiny and new.

Frankly this blog is boring even to me, and I really have nothing more to say about payment innovation that I have not already said a hundred times. But I simply can't let anything so patently meaningless as PIN-on-Mobile go unanswered.

Innovation my arse.

[If you liked this article, please share! Want more like it, subscribe!]

AI

If AI is the Answer, You’ve Asked ALL the Wrong Questions

For those reading this who are cybersecurity professionals (and who else would read this crap?): in your entire career, have you ever come out of the back-end of a risk assessment and said, "We need Artificial Intelligence"?

Anyone?

I seriously doubt it, unless you happen to sell artificial intelligence, or more likely, you’re trying to pass off your product as artificial intelligence.

But let me just clarify before I continue whining: AI is as exciting as Hell, and I cannot WAIT to see what comes next. I am not in the 'Skynet' camp, and I even disagree with people a thousand times smarter than me. No, not my wife (this time), but the likes of Stephen Hawking, Bill Gates and Elon Musk, all of whom have issued their own warnings/predictions on the subject. I think AI is going to make our lives better in almost every way. Almost.

But not in cybersecurity at the organisation level. Not yet. Most businesses simply don’t have anywhere near the foundations in place to implement it appropriately, let alone effectively. Implementing any technology on top of broken processes and/or an indifferent security culture may only serve to make things worse.

I can see it working in the threat intelligence arena, where a behemoth like Alphabet – with their mind-boggling access to almost everything – can fund something like Chronicle. But this is just one small part of a security program, feeding into the age-old clichés of 'defence in depth' or 'layered security'. AI is certainly not the panacea those with a vested interest would have you believe. Basically, if you don't have the same access and deep pockets as Alphabet, you should probably be focusing on the hundreds of other things you should have done long before now.

And even if there were an AI 'appliance' that you could just plug-and-play on your network, do you honestly think the bad guys won't work out how to circumvent it with some AI tricks of their own? Regardless of the technology, the good guys always have to play by the rules and the bad guys will always do whatever it takes. This is not a fight we are EVER going to win, so stop trying. The only thing we can do, and the sole premise of my career, is to minimise the damage. Security folks are the definitive example of bringing a knife to a gunfight. But we will fight.

This is neither cynical, nor a cop-out, it’s reality, and spending money on a technology you’ll never understand, or maintain yourself, is not going to change that.

But none of this will stop organisations spending money on nonsense. On the one side you have product vendors, technology-centric consultants, hype in the press, and indifferent CEOs. On the other side, you have the age-old security basics and a very limited number of stubborn practitioners. It's not really that surprising that acronyms and the latest shiny things get all the attention, just unfortunate.

In fact, it's no different from 'get rich quick schemes' or 'diet pills': there are very few shortcuts to wealth and none to losing weight. Both involve getting off your lazy arse and doing something. So does security.

But most of all I simply can't abide vendors who try to fit every single problem into the one thing they can do. From operationalising the whole of GDPR with ISO 27001, to solving every limitation of digital payments with biometrics, the attraction of the silver bullet is just too much for some to resist. AI and machine learning are just the latest in a long line of empty promises.

Perhaps I'm no better; all I can do is help you implement the basics. But I'll guarantee that what I'm selling is a damned sight cheaper and significantly more permanent! 🙂

[If you liked this article, please share! Want more like it, subscribe!]