GDPR Vulture

Want on the GDPR Bandwagon? Be Qualified, or Stay the Hell Off!

First, what do I mean by ‘qualified’? I mean that the only people truly qualified to lead a GDPR project are lawyers specialising in privacy. That’s it.

EVERYONE else only has a part to play. Often a very significant part, but that’s it for them as well. A part.

I’m NOT saying that every single organisation has to make the significant investment in a privacy lawyer to meet the intent of GDPR. I’m saying that the only ones qualified to determine ‘intent’ in your organisation’s specific context, are privacy lawyers. No-one who is an expert in information technology, or cybersecurity, or any other subject is qualified …unless they are also a privacy lawyer.

To labour the point even further, a qualified person is never a Certified EU General Data Protection Regulation Practitioner …unless – you guessed it – they are also a privacy lawyer.

I’ve seen every type of vendor, from Cyber Insurance providers and cybersecurity consultants to single-function technology vendors, make the most ridiculous claims as to their suitability to ‘help’ with GDPR. All to make a bit more money while the GDPR bandwagon is rolling.

The prize so far goes to a consultant who maintains that the entire GDPR can be ‘operationalized’ under the ISO 27001 standard. Unfortunately this attitude is pervasive, as no organisation seems to want to share the opportunity with appropriate partners. The attitude of ‘land-the-gig-and-we’ll-work-out-how-to-deliver-it-later’ cannot apply here. GDPR is a law, one with significant penalties attached, so unless you really know what you’re doing, stick to what you know. And ONLY what you know.

For example, I can be [very] loosely categorised as a ‘cybersecurity expert’, so that limits my ability to help with GDPR to:

  1. Data Security – As I’ve said a few times now, of the 778 individual lines of the GDPR Articles, only 26 of them are related directly to data security. That’s only 3.34%. Yes, I can help you implement ISO 27001 to cover that 3.34% (a.k.a. “appropriate security and confidentiality”), but if GDPR is the only reason you have to implement ISO, don’t bother, you’ve missed the point;
  2. Secure Technology Implementation – GDPR is not about technology, but the implementation of GDPR will have significant technology implications. From collection of consent (Recital 32), to age identification (Recital 38), to the rights to erasure and rectification (Recital 39), technology will play a big role. All of this technology will require appropriate security wrappers in-line with demonstrable good security practices; and
  3. Governance Design and Implementation – Any organisation that has a Governance function already has a GDPR Implementation Team in place. Since there can be no true Governance without full departmental representation (Technology, Security, Legal, PMO, Sales, Marketing and so on), it follows that the Security team will have full understanding of GDPR’s impact from the Legal team. In turn, Technology and Security will have significant input to Legal’s decisioning, and it’s this ‘negotiation’ under the Governance umbrella that gives GDPR its ‘organisation specific context’.
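The 3.34% figure in point 1 is simple arithmetic; a quick sanity check, using the line counts as quoted in the post:

```python
# Sanity check on the article's figure: 26 security-related lines
# out of 778 total lines in the GDPR Articles (both counts as quoted here).
security_lines, total_lines = 26, 778
share = security_lines / total_lines * 100
print(f"{share:.2f}%")  # 3.34%
```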

This should be more than enough for any security consultant, but apparently it’s not enough for some consultants who want to replace Governance all by themselves. But, what’s wrong with partnering up with others to do the parts you absolutely should not touch? Is it not better to be really good at the one thing you do for a living and be part of a team of experts who can cover the other bases?

To put this another way: do you really want to ruin your reputation by lying to your clients now, or do you want to be the resource they turn to for every similar problem from this point forward? Do you want to sell used cars or be a trusted advisor?

GDPR, like security, is not complicated. It’s actually very simple, just BLOODY difficult to implement. There is not one individual who can simplify this for you, not even a privacy lawyer. So if you’re looking to implement GDPR, you can rest assured that anyone who is a) not a privacy lawyer, AND b) not part of a team of experts with collaborative skill-sets, AND c) trying to sell you something, should be listened to with caution.

As always, I am not going to lay the blame entirely at vendors’ feet; they too have a business to run. In the end, the only people who get the answers they need on GDPR are the ones asking the right questions.

You MUST do your homework!

[If you liked this article, please share! Want more like it, subscribe!]

Who’s Making Cybersecurity So Complicated?!

One of the goals of this blog, as well as the ultimate goal of my career, is to simplify all aspects of cybersecurity. Well, maybe not all. I have no idea how to simplify a penetration test (or even perform one), or encryption mechanisms, but I’ve got the high-level stuff covered! 🙂

From my perspective, cybersecurity is already simple. You would hope so, given that it’s what I do, but that’s not actually my point. My point is that every aspect of cybersecurity must be simple for it to be effective security in the first place. There is no room for complicated. It must also be accessible to everyone who needs it, regardless of their current role or previous experience.

It is therefore the job of every cybersecurity professional to make this stuff easy, but clearly we are not doing a very good job. In fact, I would go as far as to say that there are certain elements that seem to go out of their way to make things difficult!

What / who are these elements, and why are they doing it?


  1. No offence, but Element 1 is you. While you may not be a security expert, you are every bit as responsible for security as those who are. Ignorance of your responsibilities is no excuse, and if your organisation does not provide the necessary training, demand that it does. Unless you’ve lived in a hole for the last 10 years, you have seen the headlines related to data breaches. You really don’t want to be the cause of one.
  2. Which is the ideal segue into Element 2: Senior Management. If they don’t care about security, there’s a very good chance you don’t either (see Element 1). If cybersecurity is not in the Top 5 priorities of your BoD / CEO, then you likely have an entirely ineffectual security program, if you have one at all. There is nothing more difficult and seemingly complicated than starting something from the very beginning, but start you must.
  3. Element 3 is, of course, Lawyers / Regulators. Not that they do this on purpose; it’s that they just can’t help themselves. The language of the law is practically incomprehensible to the rest of us, yet it has to be lawyers who write every contract, regulation, and [of course] law out there. Combine their legalese with something you already don’t understand [cybersecurity], and you’re left scratching your head in frustration. Or worse, avoiding it altogether.
  4. And the worst of the bunch, Element 4: Security Vendors. This is the one that is truly reprehensible. How many of you, for example, know what Cloud Access Security Brokers (CASBs) are? Or User and Entity Behavioral Analytics (UEBA)? What about Intelligence-Driven Security Operations Center Orchestration Solutions? No, me neither. What I DO know is that you don’t need ANY of these things until such times as your risk assessment TELLS you that you need them! You have that process well oiled, right?

Of all the horrendous clichés out there, my favourite is ‘Back to Basics’. Cybersecurity is simple, bloody difficult, but simple. Anything that complicates it can be effectively ignored until such times as you’re ready for it. You will never get there by buying technology, and you will never get there until you get the basics right.

Luckily the basics are the cheapest things to fix. All you have to do is get your CEO to care, formalise your Governance, and get all of your policies and procedures in place.

Simple, right?

OK, that was facetious, but if you think any of these things is complicated, you’re just not asking the right people the right questions.

[If you liked this article, please share! Want more like it, subscribe!]

Your Privacy is a Currency, Spend it Wisely

Whatever side you are on in the whole privacy debate, you have probably heard variants of the following two arguments:

  1. I don’t care if the Government happen to read my emails while looking for bad guys, I have nothing to hide, and I feel safer knowing they are doing something.
  2. There is no evidence that mass digital surveillance has any positive impact on the reduction of crime or terrorism, so my individual right to privacy (UDHR, Article 12) is more important.

Privacy-is-everything advocates will say things like; “Saying you don’t care about the right to privacy because you have nothing to hide, is no different than saying you don’t care about free speech because you have nothing to say.”, or “You can’t give away the rights of a minority, even if you vote as a majority.”

Privacy-as-a-currency advocates will counter with things like; “Saying mass surveillance has no proven benefit is like saying laws are ineffective, you have no idea how many crimes were prevented for fear of being caught.”, or “The minority has no right to impose their will on the majority when personal safety is at stake.”

It makes no difference what side you are on; I will not change your mind, and you will not change mine, but we each must pay the same cost for the conveniences and functionality we have come to expect. And accept the responsibility for our choices.

The Internet and now mobile devices have completely changed the way we do business, interact with family and friends, buy stuff, and according to Ian Morris in “The Decline and Fall of Empires”, they will even ‘help’ change our biology;

“As social development rises ever higher, revolutions in genetics, computing, robotics and nanotechnology are beginning to feed back into our biology, transforming what it means to be human.”

Yet we somehow have this expectation that both the Internet and mobile devices are human rights in and of themselves, that we can do whatever we want on them and through them yet still have an expectation for privacy. Governments aside, how can we be so naive?

From my overly simplistic perspective, the world is made up of three kinds of people:

  1. The Good – We don’t have to worry about the good, their lives are spent taking care of whatever it is they care about, which is always in-line with established societal norms / laws, and regardless of the area of influence (i.e. immediate family, community, country, or global).
  2. The Bad – They care nothing for societal norms, they want, so they take. They care nothing for your right to privacy, and outside of instances of gross incompetence, fall almost entirely within your ability to point fingers if you are a victim. IF you can catch them.
  3. The Ordinary – Basically decent, perhaps with a little ‘moral flexibility’ thrown in, who may not like the Bad guys, but understand them enough not to be shocked when they do bad things. These are the majority, and the smarter ones prepare for the worst-case scenario.

Laws and rights are written to protect everyone, but not everyone can be protected in the same way. I have contended many times that the more ‘out there’ that’s known about me, the less someone else can pretend to BE me. My life’s story is the equivalent of a public ledger, and any anomalies are immediately obvious. This is true for my blogs, my social media, my payment history, and hopefully, even my identity itself.

Of course, there are many people who, quite literally, think I’m 100% wrong, an idiot, or both.

Whatever course YOU choose cannot be seen entirely within the context of your rights, especially the ones you are spending every moment you are online.

[Ed. Found this, thought it was well done; Amazing Mind Reader Reveals His ‘Gift’]

The Investigatory Powers Bill: Why Are We So Nervous?

This will be my first blog where I am going to a) plead ignorance; b) ask for your input; and c) actually listen to someone else’s opinion (potentially).


I have just spent all day today going over parts of The Investigatory Powers Bill (IPB) and the Regulation of Investigatory Powers Act (RIPA) of 2000, as well as listening to Theresa May’s statement on it, all in an effort to find SOME evidence that it could “…allow Government to ban end-to-end encryption, technology powering iMessage and WhatsApp”.

OK, so that’s just according to The Independent, but The Guardian has their; “Investigatory powers bill: snooper’s charter to remain firmly in place“, The Telegraph has their; “Internet firms to be banned from offering unbreakable encryption under new laws” and so on.

All I could find in the IPB was this;

189 – Maintenance of technical capability

(4) The obligations that may be imposed by regulations under this section include, among other things—

(c) obligations relating to the removal of electronic protection applied by a relevant operator to any communications or data;

(e) obligations relating to the handling or disclosure of any material or data.

…as well as a reference to encryption as it related to RIPA, which states;

21 – Lawful acquisition and disclosure of communications data.

(4) In this Chapter “communications data” means any of the following—

(b) any information which includes none of the contents of a communication (apart from any information falling within paragraph (a)) and is about the use made by any person—

So, first my ignorance: I do not speak Governmental legalese, so I have no idea if the vagueness of this is just a way of saying ‘we can do anything we want’, or an established-by-precedent way of saying ‘this is all we can do’.

I have also not read the whole thing; it’s 300 pages long and makes the PCI DSS look like the last Harry Potter book.

Which brings me to the second part: your input. There will be those who have read not only the IPB, but also RIPA, the Communications Data Bill of 2012, and the Data Retention and Investigatory Powers Act of 2014. You are also likely to fall [mostly] into one of only two categories: for, and against.

I would love to hear reasoned thoughts from both, or at least be pointed to an unbiased Cliffs Notes version of each!

Finally, listening to someone else’s opinion; anyone who has been nice / bored enough to read my blogs over the last 2.5 years will not have read even one where I was in any way unsure of my opinion / stance. Even when it comes to security (what’s the font for sarcasm?).

In this case, I am 100% on the fence (mostly because of a) above), but partially because any talk of ‘investigatory powers’ or ‘interception of communications’ will have significant impact on privacy, and on the implementation of my real interest; Identity Management.

While my thoughts on privacy itself are public record, the impact these Governmental powers will have on putting true Identity Management into effect is far from clear to me. There will be no secure mobile payments, no Internet of Things, and no hiding from your wife if there is something in the middle capable of ‘reading’ my communications. Not because I don’t trust the Government, but because anything THEY have access to will eventually be available to the bad guys.

We work within established rules of decency, they don’t (the bad guys that is).

Basically, please help, all comments / thoughts welcomed.

No Mr. Big Data Service Provider, You CANNOT Do That!

In a very interesting presentation at the 2015 ISC2 EMEA Congress in Munich, Dr Lucas Feiler posited that any big data analytics performed, whether internally or outsourced, is going to attract significant legal challenges related to privacy. And even if the challenges CAN be resolved, it will likely be in ways that make something of a mockery of the EU General Data Protection Regulation (GDPR)’s intent.

Getting around privacy regulations will involve token human interaction (i.e. smoke and mirrors) where none is desired. In areas that need to be dominated by AI and the resulting automated decisions (insurance, for example), adding the human element to avoid the appearance of prejudiced results will probably be standard until the algorithms become smart enough to be considered ‘reasonable’ (my absolute favourite legal term, right up there with ‘appropriate’).

Human interaction is not desired by those doing the analysis, mind you; we may think otherwise.

While not in place …yet, the European Council aims for adoption of the GDPR in 2017 and to have it in full effect after a “two-year transition period”. While 4 years may sound like a long time, when you consider the following statistics, you can only imagine how difficult it will be to clean up the mess if organisations don’t start following the regulation now:

  • The data volume in the enterprise is estimated to grow 50x year-over-year between now and 2020
  • 35 zettabytes (that’s 35,000,000,000,000,000,000,000 bytes) of data generated annually by 2020
  • According to estimates, the volume of business data worldwide, across all companies, doubles every 1.2 years
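To put that last doubling estimate in perspective, here is a minimal sketch of what compound growth at that rate looks like (the 1.2-year doubling period comes from the statistics quoted above; the function name and starting volume are illustrative only):

```python
# Rough projection of data volume under the "doubles every 1.2 years" estimate.
# The doubling period is the figure quoted in the article; the starting
# volume is an arbitrary illustrative value, not a real measurement.

def projected_volume(start_volume: float, years: float, doubling_period: float = 1.2) -> float:
    """Compound growth: volume doubles once every `doubling_period` years."""
    return start_volume * 2 ** (years / doubling_period)

# Growth factor over five years from a normalised starting volume of 1.0:
growth_5y = projected_volume(1.0, 5)
print(f"5-year growth factor: {growth_5y:.1f}x")  # roughly 18x
```

In other words, even at the most conservative of the three estimates, data holdings grow by an order of magnitude well inside a regulation’s transition period.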

Granted, the vast majority of this data will be in the form of cat videos and Kardashian tweets, but that still leaves an extraordinary amount of YOUR data sitting on servers just waiting to be mined, manipulated, and analysed in ways we cannot even imagine. We cannot imagine them because they have not been INVENTED yet, and that’s the Holy Grail for any organisation, and the impetus behind big data analytics in the first place; How to manipulate the data they have into the development of new revenue streams.

To put that another way; How to take the data they have on you already and present it back to you in a way that makes you spend more money.

I’m actually not against this per se, Amazon are already doing a mild version of it with their “Frequently Bought Together” and “Customers Who Bought This Item Also Bought” sections, but can you imagine how much data they have on their more than 1.5 MILLION servers across 17 global regions?

The card brands and Facebook can predict within a two-week window whether or not you’re going to get divorced; how much other data do THEY have? Or Google?

But can the GDPR actually make a difference? Probably, it has a VERY big stick, and you know how lawyers love their class action suits!

Look at GDPR CHAPTER II, PRINCIPLES, Article 5 – Principles relating to personal data processing:

Personal data must be:

(a) processed lawfully, fairly and in a transparent manner in relation to the data subject;

(b) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; (…);

(c) adequate, relevant and not excessive in relation to the purposes for which they are processed (…);


…and now CHAPTER VIII, REMEDIES, LIABILITY AND SANCTIONS, Article 79a – Administrative fines:

The supervisory authority (…) may impose a fine that shall not exceed 1 000 000 EUR or, in case of an undertaking, 2 % of its total worldwide annual turnover of the preceding financial year, on a controller or processor who, intentionally or negligently:

(a) processes personal data without a (…) legal basis for the processing or does not comply with the conditions for consent pursuant to Articles 6, 7, 8 and 9;

If you LOSE personal data, the fines can be as much as 5% of worldwide annual turnover.
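To see what that “VERY big stick” looks like in practice, here is a minimal sketch of the draft fine structure quoted above. One labelled assumption: the sketch treats the €1,000,000 figure and the turnover percentage as “whichever is greater”, which is the reading used in the final regulation’s wording, not necessarily the draft’s.

```python
# Illustrative calculation of the draft-GDPR administrative fines quoted above:
# up to EUR 1,000,000 or 2% of worldwide annual turnover for unlawful
# processing, and up to 5% of turnover for losing personal data.
# ASSUMPTION: the flat cap and the percentage are applied as
# "whichever is greater". This is a sketch, not legal advice.

def max_fine(turnover_eur: float, rate: float, floor_eur: float = 1_000_000) -> float:
    """Return the larger of the flat cap and the turnover-based fine."""
    return max(floor_eur, turnover_eur * rate)

print(max_fine(500_000_000, 0.02))  # 2% of EUR 500m turnover -> EUR 10,000,000
print(max_fine(500_000_000, 0.05))  # 5% for data loss -> EUR 25,000,000
```

Even for a modest business, the turnover-based figure dwarfs the flat cap very quickly, which is precisely the point.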

Will that make a difference?

I hope so.