Superstition in Security

Feeling lazy, this is a re-blog, but the last few weeks at work have made it especially relevant:

I’ve been composing this blog for several months now, and it started when I was thinking about how superstitions begin: it’s bad luck to walk under ladders, for example, or you’ll get seven years of bad luck if you break a mirror. And then it occurred to me that these superstitions were probably the only way to scare children into, or out of, certain behaviour.

Walking under ladders, well duh, things fall OFF ladders, so don’t walk under them. And mirrors used to be really, REALLY expensive, so telling children that breaking them would have horrific consequences makes a lot of sense (in a very negative way, of course). I’m surprised that playing with matches didn’t become a superstition, but then again, household matches were not readily available until the 1800s.

Unfortunately, these things have a way of sticking around long after the original cause is either meaningless or, worse, is twisted and perverted by those with a vested interest in the status quo. ‘Heretics’ were burned at the stake for suggesting that the Earth revolved around the Sun, and not the other way around, and ‘witches’ were killed in similarly horrific ways when they suggested that herbal remedies were better than leeches and other forms of bleeding. Priests and doctors, respectively, were very protective of their power.

Human nature has changed very little since then, only societal laws and the more progressive ‘norms’ keep the peace.

I have for years likened information security to insurance, in that no one wants to spend money on it, but they know it’s a cost of doing business. More recently I have likened security to the law, because it’s becoming so complex in terms of regulation / legislation / standards etc. that it’s often out of reach for the organisations and individuals who need it most.

Now I find myself likening security to superstition, because the way we’re going, it won’t be long before being in security carries the same stigma as being a tax auditor, a parking enforcer, or a lawyer. QSAs are almost there already because the entire concept of PCI is so limited, but there is no reason true security professionals should not be seen in the same light as those responsible for driving revenue growth or competitive innovation.

Security departments are something people go out of their way to avoid, or to circumvent. They are seen as the department-who-says-no, who will stifle innovation and good ideas, and generally do the one thing that would label them heretics: get in the way of revenue.

Nothing could be further from the truth, as no other department has the knowledge and DESIRE to do the things that make staying in business possible:

  1. Innovation: It’s the 2000s, and the vast majority of innovation now is in technology. Who else is better placed to pick the RIGHT technologies and ensure that innovation is implemented in a way that enhances the organisation and doesn’t just add risk?
  2. Business Transformation: Competitive advantage in the information age is now measured in weeks and months, not years; organisations without the ability to adjust critical business processes quickly and appropriately will be left behind. What other department has the knowledge of existing processes to enable those adjustments?
  3. Revenue Protection: Can you think of anything worse than seeing all your revenue disappear into the hands of regulators because your focus on selling failed to take into account that your processes for doing so were completely inappropriate? I completely understand the pressures, but revenue generation is not about doing what it takes, it’s about doing what’s right.
  4. Reputation Protection: I could have put this under revenue protection, but wanted to break it out because corporate reputation goes way beyond just revenue, and my OCD will not allow for an even number of bullet points. Damage to reputation through loss of data confidentiality, integrity, or availability (C.I.A.) can have long-term negative effects on a business; just ask CardSystems, who went from $25M per annum to out of business in less than a year after their breach.
  5. Infrastructure Investment Optimisation: OK, long title, but consider that the amount of money spent on PCI is already in the multi-billions, when a huge chunk of that could have been saved by adjustments in PROCESS. Technology purchase is the last resort of a true security professional.

I really don’t have an answer to HOW we can ensure our reputations remain unsullied, and there are a lot of so-called security experts out there giving the rest of us a bad name, but I think the worst thing to do is fall back on one of the phrases I hate most in this world: “It is what it is.”

Actions speak louder than words, and I will never stop trying to show my clients that security is something to be embraced, not avoided.

Forward this to all your friends or you’ll have 3 years of bad luck.


The Automation of PCI Reports on Compliance

If you really want to piss off the PCI Security Standards Council (PCI SSC), show them how you are writing your Reports on Compliance (RoCs) automatically. You’ll find yourself in remediation very quickly.

But why? What possible difference could it make that you say things the same way from report to report as long as the validation of the controls was performed correctly?

For example, let’s take Requirement 2.2.b: “Examine policies and interview personnel to verify that system configuration standards are updated as new vulnerability issues are identified, as defined in Requirement 6.1.”

In order to validate that this is in place you must perform the following three processes:

  1. Identify the policy documentation verified to define that system configuration standards are updated as new vulnerability issues are identified;
  2. Identify the personnel interviewed for this testing procedure, and;
  3. For the interview, summarize the relevant details discussed that verify that the process is implemented.

So, for 1., if you have mapped all relevant documentation to the PCI requirements (which you should) in your RoC Writing Tool (RWT), this will simply be an automated regurgitation of the document names and, hopefully, section numbers. If not, you have the relevant documents already summarised in Section 4.10.

For 2., you should already have your personnel mapped against system grouping in your asset register, so again, this is just a regurgitation. If not, you have the relevant group(s) already summarised in Section 4.11.

For 3., this is where the SSC is looking for a true narrative, but all validation relevant to 2.2.b is performed in the same way for each system type, so as long as the QSA and their client are actually doing their jobs properly, the contents of this narrative will be basically the same:

For [asset group]:

QSA interviewed [personnel], examined [documents], and obtained [validation evidence] for [client] [name of vulnerability management process] and [name of configuration standard process], as well as examined production configurations for [sample(s)].

…and so on for each distinct and relevant asset grouping.
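To make that concrete, here is a minimal sketch in Python of how such a narrative could be assembled automatically once the mappings exist. Every name and data structure below is hypothetical, invented purely for illustration rather than taken from any real RWT:

```python
# Minimal sketch: filling the 2.2.b narrative template from pre-mapped data.
# All structures and names here are hypothetical, not any real RWT's API.

NARRATIVE_2_2_B = (
    "QSA interviewed {personnel}, examined {documents}, and obtained "
    "{evidence} for {client} {vuln_process} and {config_process}, as well "
    "as examined production configurations for {samples}."
)

# Example mappings, as they might be exported from an asset register / RWT.
ASSET_GROUPS = {
    "Linux web servers": {
        "personnel": "the Head of Server Operations",
        "documents": "Hardening Standard v3.2 s.4 and Patch Policy v1.8",
        "evidence": "change tickets and configuration exports",
        "samples": "web-srv-01 and web-srv-07",
    },
}

def write_narratives(client, vuln_process, config_process, groups):
    """Return one 2.2.b narrative per asset group, ready for the RoC."""
    return {
        group: NARRATIVE_2_2_B.format(client=client,
                                      vuln_process=vuln_process,
                                      config_process=config_process,
                                      **details)
        for group, details in groups.items()
    }

for group, text in write_narratives("Acme Ltd",
                                    "vulnerability management process",
                                    "configuration standards process",
                                    ASSET_GROUPS).items():
    print(f"For {group}:\n{text}\n")
```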

A huge advantage of this is that for any asset type you add, the list of related DSS requirements and the related validation evidence all become pre-defined and mandatory action items, assigned to an individual. Assuming you have also defined a compliance goal, you will also have due dates. Even further, with a correctly defined hierarchy you also have the beginnings of true project management.

Imagine that: with asset management done well, you have an up-front list of EVERYTHING required to achieve compliance, along with full accountability for the collection of the necessary validation evidence. As the target organisation collects and uploads the evidence, the RoC is writing itself in the background, giving full insight into the gaps and the level-of-effort indicators required to either adjust resources or justify technology or outsourcing investments.
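As a rough sketch of that idea (the asset-type mapping, owner and dates below are all invented for illustration), the pre-defined action items practically generate themselves once an asset type and a compliance goal are known:

```python
# Sketch: turning an asset-type / requirement mapping into assigned, dated
# action items. The mapping, names and fields are illustrative only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ActionItem:
    asset_type: str
    requirement: str
    evidence: str
    owner: str
    due: date
    collected: bool = False

# Hypothetical: which DSS requirements and evidence apply to each asset type.
REQUIREMENTS_BY_ASSET_TYPE = {
    "Linux web server": [
        ("2.2.b", "configuration standard, interview notes"),
        ("6.1",   "vulnerability ranking procedure, scan reports"),
    ],
}

def plan(asset_type, owner, compliance_goal, lead_days=30):
    """Create the mandatory action items for a newly added asset type."""
    due = compliance_goal - timedelta(days=lead_days)
    return [ActionItem(asset_type, req, ev, owner, due)
            for req, ev in REQUIREMENTS_BY_ASSET_TYPE.get(asset_type, [])]

items = plan("Linux web server", "j.smith", compliance_goal=date(2016, 6, 30))
outstanding = [i for i in items if not i.collected]
print(f"{len(outstanding)} evidence items still needed before the RoC is complete")
```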

Now imagine how simple APIs could accept information from end-point technologies like A/V or FIM, or from centralised management stations for logging or IDS, or how a simple agent could report against running services / listening ports / registry settings for an operating system, and you are starting to perform the validation itself automatically. Not just against PCI requirements, but against your full gamut of corporate policies and standards, all of which are mapped against your asset types. And not annually, but all day, every day.
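A crude sketch of what one such agent-side check might look like, covering only a single control (listening TCP ports) and assuming the third-party psutil library plus a made-up baseline; a real agent would cover services, registry settings and much more, and would push its results to the compliance platform rather than print them:

```python
# Sketch: compare a host's listening TCP ports against the ports its
# corporate standard allows. Illustrative only; the baseline is invented.
import socket

import psutil  # third-party: pip install psutil

ALLOWED_TCP_PORTS = {22, 443}  # hypothetical baseline for this asset type

def listening_tcp_ports():
    """Return the set of TCP ports currently in the LISTEN state."""
    return {c.laddr.port for c in psutil.net_connections(kind="tcp")
            if c.status == psutil.CONN_LISTEN}

def validate(allowed=ALLOWED_TCP_PORTS):
    """Produce a small compliance record for this host."""
    unexpected = listening_tcp_ports() - allowed
    return {"host": socket.gethostname(),
            "compliant": not unexpected,
            "unexpected_ports": sorted(unexpected)}

if __name__ == "__main__":
    # Run on a schedule (or as a daemon) and ship the result to your
    # compliance platform's evidence API - all day, every day.
    print(validate())
```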

Forget PCI compliance, this is the type of Continuous Compliance Validation everyone needs, regardless of the data type, compliance regime, or industry sector.

Difficult? Yes. Simple? Always.


The Investigatory Powers Bill, Why Are We So Nervous?

This will be my first blog where I am going to a) plead ignorance; b) ask for your input; and c) actually listen to someone else’s opinion (potentially).


I have just spent all day today going over parts of The Investigatory Powers Bill (IPB) and the Regulation of Investigatory Powers Act (RIPA) of 2000, as well as listening to Theresa May’s statement on it, all in an effort to find SOME evidence that it could “…allow Government to ban end-to-end encryption, technology powering iMessage and WhatsApp“.

OK, so that’s just according to The Independent, but The Guardian has “Investigatory powers bill: snooper’s charter to remain firmly in place“, The Telegraph has “Internet firms to be banned from offering unbreakable encryption under new laws“, and so on.

All I could find in the IPB was this:

189 – Maintenance of technical capability

(4) The obligations that may be imposed by regulations under this section include, among other things—

(c) obligations relating to the removal of electronic protection applied by a relevant operator to any communications or data;

(e) obligations relating to the handling or disclosure of any material or data.

…as well as a reference to encryption as it relates to RIPA, which states:

21 – Lawful acquisition and disclosure of communications data.

(4) In this Chapter “communications data” means any of the following—

(b) any information which includes none of the contents of a communication (apart from any information falling within paragraph (a)) and is about the use made by any person—

So, first my ignorance: I do not speak Governmental legalese, so I have no idea if the vagueness of this is just a way of saying ‘we can do anything we want’, or an established-by-precedent way of saying ‘this is all we can do’.

I have also not read the whole thing; it’s 300 pages long and makes the PCI DSS look like the last Harry Potter book.

Which brings me to the second part: your input. There will be those who have read not only the IPB, but also RIPA, the Communications Data Bill of 2012, and the Data Retention and Investigatory Powers Act of 2014. You are also likely to fall [mostly] into one of only two categories: for, and against.

I would love to hear reasoned thoughts from both, or at least be pointed to an unbiased CliffsNotes version of each!

Finally, listening to someone else’s opinion: anyone who has been nice / bored enough to read my blogs over the last 2.5 years will not have read even one where I was in any way unsure of my opinion / stance. Even when it comes to security (what’s the font for sarcasm?).

In this case, I am 100% on the fence (mostly because of a) above), but partially because any talk of ‘investigatory powers’ or ‘interception of communications’ will have a significant impact on privacy, and on the implementation of my real interest: Identity Management.

While my thoughts on privacy itself are public record, the impact these Governmental powers will have on putting true Identity Management into effect is far from clear to me. There will be no secure mobile payments, no Internet of Things, and no hiding from your wife if there is something in the middle capable of ‘reading’ your communications. Not because I don’t trust the Government, but because anything THEY have access to will eventually be available to the bad guys.

We work within established rules of decency; they don’t (the bad guys, that is).

Basically, please help; all comments / thoughts welcomed.


No, Mr. Big Data Service Provider, You CANNOT Do That!

In a very interesting presentation at the 2015 ISC2 EMEA Congress in Munich, Dr Lucas Feiler posited that any big data analytics performed, whether internally or outsourced, is going to attract significant legal challenges related to privacy. And even if the challenges CAN be resolved, it will likely be in ways that make something of a mockery of the EU General Data Protection Regulation (GDPR)’s intent.

Getting around privacy regulations will involve token human interaction (i.e. smoke and mirrors) where none is desired. In areas that need to be dominated by AI and the resulting automated decisions (insurance, for example), adding the human element to avoid the appearance of prejudiced results will probably be standard until the algorithms become smart enough to be considered ‘reasonable’ (my absolute favourite legal term, right up there with ‘appropriate’).

Human interaction is not desired by those doing the analysis, mind you; we may think otherwise.

While not in place …yet, the European Council aims for adoption of the GDPR in 2017, with full effect after a “two-year transition period“. While four years may sound like a long time, when you consider the following statistics you can only imagine how difficult it will be to clean up the mess if organisations don’t start following the regulation now:

  • The data volume in the enterprise is estimated to grow 50x year-over-year between now and 2020
  • 35 zettabytes (that’s 35,000,000,000,000,000,000,000 bytes) of data generated annually by 2020
  • According to estimates, the volume of business data worldwide, across all companies, doubles every 1.2 years

Granted, the vast majority of this data will be in the form of cat videos and Kardashian tweets, but that still leaves an extraordinary amount of YOUR data sitting on servers just waiting to be mined, manipulated, and analysed in ways we cannot even imagine. We cannot imagine them because they have not been INVENTED yet, and that’s the Holy Grail for any organisation, and the impetus behind big data analytics in the first place: how to manipulate the data they have into the development of new revenue streams.

To put that another way: how to take the data they already have on you and present it back to you in a way that makes you spend more money.

I’m actually not against this per se; Amazon are already doing a mild version of it with their “Frequently Bought Together” and “Customers Who Bought This Item Also Bought” sections, but can you imagine how much data they have on their more than 1.5 MILLION servers across 17 global regions?

The card brands and Facebook can predict, within a two-week window, whether or not you’re going to get divorced; how much other data do THEY have? Or Google?

But can the GDPR actually make a difference? Probably; it has a VERY big stick, and you know how lawyers love their class action suits!

Look at GDPR CHAPTER II, PRINCIPLES, Article 5 – Principles relating to personal data processing:

Personal data must be:

(a) processed lawfully, fairly and in a transparent manner in relation to the data subject;

(b) collected for specified, explicit and legitimate purposes and not further processed in a way incompatible with those purposes; (…);

(c) adequate, relevant and not excessive in relation to the purposes for which they are processed (…);


…and now CHAPTER VIII, REMEDIES, LIABILITY AND SANCTIONS, Article 79a – Administrative fines:

The supervisory authority (…) may impose a fine that shall not exceed 1 000 000 EUR or, in case of an undertaking, 2 % of its total worldwide annual turnover of the preceding financial year, on a controller or processor who, intentionally or negligently:

(a) processes personal data without a (…) legal basis for the processing or does not comply with the conditions for consent pursuant to Articles 6, 7, 8 and 9;

If you LOSE personal data the fines can be as much as 5% of worldwide annual turnover.
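To put hypothetical numbers on that: for an undertaking turning over, say, €500 million a year (a figure picked purely for illustration), 2% of turnover is €10 million and 5% is €25 million.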

Will that make a difference?

I hope so.



The Analogies Project, We Should ALL Be Involved

I’m sure that in an earlier blog I stated that I would never use this medium to promote a vendor or specific product. I cannot find that quote so it clearly didn’t happen, and seeing as this promo is for something that’s actually not-for-profit, I don’t feel like a complete sell-out.

An analogy is defined as “a comparison between one thing and another, typically for the purpose of explanation or clarification”, and as such is an incredibly powerful tool for providing the context needed to understand something of which we have limited knowledge or experience. For example, the immortal (well, except for his death and all that) Douglas Adams used what to me was the funniest analogy of all time:

The ships hung in the sky in much the same way that bricks don’t.

I have used analogies throughout my blogs and my career, and frankly, any ‘security expert’ who DOESN’T use them is likely a poor consultant, or just starting out. Too many of us are horribly guilty of the Curse of Knowledge, and end up blaming our clients for what, in the end, can only be our own deficiencies.

In a conversation with Bruce Hallas, the founder and passionate driving force behind The Analogies Project, it was not surprising that two famous quotes from Einstein were used to perfectly summarise the issues faced by those giving, and those trying to receive, InfoSec services:

  1. “Insanity: doing the same thing over and over again and expecting different results.”, and;
  2. “If you can’t explain it simply, you don’t understand it well enough.”

And on further reflection, there’s this one that I have always loved by Alan Greenspan: “I know you think you understand what you thought I said, but I’m not sure you realize that what you heard is not what I meant.”

Any guidance we provide to our clients on information security is only as good as what is understood and retained. Imparted knowledge is meaningless without the listener’s understanding of it (knowledge = seeds, understanding = ploughed field, ooooh, an analogy!!). I have long maintained that the ultimate consultant is one who teaches, and there are no great teachers who do not take their audience’s individuality into account. You don’t explain where babies come from the same way to your 5-year-old child as you would to your teenager, would you?

Yes, your client must WANT to learn in the first place, and the constant fight against the lack of a security culture is not something we can fix by ourselves, but I firmly believe that a change in culture can only come with a true understanding of the benefits, and that will never be one-size-fits-all, even within the same organisation.

This is where The Analogies Project could truly shine. Having an analogy for a risk assessment is one thing, but having a series of analogies for receptionists, the C-level, and everyone in between, broken down by personal interest, sector applicability and so on, will provide usable experience to everyone. Giver and receiver.

I am signing on as a contributor and will be mentioning The Analogies Project in all of my subsequent training and InfoSec presentations (ISC2, ISACA, ISSA etc.). I urge you to do the same.

Go here to begin: