GDPR Compliance Step-by-Step: Part 2 – Data Discovery

In truth, this should be called Data Discovery & Asset Management, because there’s absolutely no point having one without the other. And these things should already be part of your standard practices anyway.

It’s 2018 and I can think of very few businesses that don’t count data among their most critical assets. Certainly none that bother to read my blog. So if data assets are that critical, why don’t you already KNOW where all of your personal data is? Why don’t you already have a record of who has access to it, and what they are doing with it?

Assuming yours is like 99.9% of businesses out there, you don’t have these mappings, and the reasons are as myriad as the businesses themselves. And maybe it didn’t matter that much up to now, but now it does. Very much so.

Under GDPR you are responsible for:

  1. Determining your lawful basis for processing for each of your separate business processes (both internal and client facing);
  2. Implementation of data subject rights in-line with 1. (erasure, portability etc.);
  3. Data minimisation (during collection and data retention);
  4. Data confidentiality, integrity, and availability (all to defensible levels);
  5. ‘Legitimising’ all transfers of, and responsibilities for, data to third parties

…and so on.

How exactly can you perform any of these things if you don’t KNOW what you have and where it is?

So assuming you agree with me so far and you need to get started, you have 3 choices:

  1. Run a series of interviews and questionnaires with all unique departmental stakeholders to manually track this stuff;
  2. Run some form of data discovery technology to find the data on end systems / databases / other file stores etc.; or
  3. Do both at the same time

Clearly 3. is the best option because a) you’ll need to involve the departmental stakeholders anyway, and b) no manual process will ever find the stuff you had no idea was even there (which will be a lot).

While I’m not going to go into step-by-step instructions on how to run the two main processes above – which are bespoke to each business – I will provide sufficient guidance for you to ask the right people the right questions:

Interviews and Questionnaires

At a bare minimum, you will need to collect this information from the non-IT departmental verticals (HR, Sales, Finance etc.):

  1. All categories of data received (name, phone number, home address etc.) both ‘direct’ and ‘ancillary’;
    i. Direct Personal Data (DPD, my term) is any data that by itself can identify the data subject – e.g. name, email address, mobile phone number, passport number etc.
    ii. Ancillary Personal Data (APD, again, my term) is any data that in and of itself cannot be used to identify the data subject, but a combination of APDs may do so, or its loss alongside the related DPD would make a breach significantly worse – e.g. salary, disciplinary actions, bonus payments, disability etc.;
  2. In which application, database, file store etc. this data is kept – not looking for mappings down to the asset tag, just a general understanding;
  3. Data retention related to each business process – e.g. all data categories related to the ‘payroll’ process must be retained for x years post termination;
  4. List of third parties – if applicable, from whom do you receive the relevant categories of data, and with whom do you share them;
  5. Other – Ideally you should also be able to determine, per business process, whether a) you are the controller or processor for each data category, and b) the data category is ‘mandatory’ (the process won’t work without it) or ‘non-mandatory’ (nice-to-have). A minimal sketch of such a record follows this list.
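Pulling those five answers together, here is a minimal sketch of a per-process inventory record. Every field name and value is an illustrative assumption, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class DataCategory:
    name: str            # e.g. "email address", "salary"
    kind: str            # "DPD" (directly identifying) or "APD" (ancillary)
    mandatory: bool      # will the process work without it?

@dataclass
class ProcessRecord:
    process: str                      # e.g. "payroll"
    role: str                         # "controller" or "processor"
    categories: list[DataCategory] = field(default_factory=list)
    stores: list[str] = field(default_factory=list)   # apps / databases / file stores
    retention: str = ""               # e.g. "x years post termination"
    received_from: list[str] = field(default_factory=list)  # third parties
    shared_with: list[str] = field(default_factory=list)

# An invented example entry:
payroll = ProcessRecord(
    process="payroll",
    role="controller",
    categories=[
        DataCategory("name", "DPD", mandatory=True),
        DataCategory("bank account number", "DPD", mandatory=True),
        DataCategory("salary", "APD", mandatory=True),
    ],
    stores=["HR application", "Finance file share"],
    retention="x years post termination",
    shared_with=["payroll bureau"],
)
```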

Assuming the non-IT stakeholders cannot answer the following questions, you will need to speak to those who can (from IT, InfoSec, relevant third parties etc.):

  1. Location of data sources – physical location, including end-system (e.g. host name of server, AWS instance ID, etc.), data centre, address and country;
  2. Format of data – e.g. ‘structured’ (databases, spreadsheets, tables etc.), and ‘unstructured’ (Word documents, PDFs, photos, etc.)
  3. Data ownership – someone has to be responsible;
  4. Security controls – e.g. encryption / anonymisation / pseudonymisation, and everything related to ISO 27001/NIST et al;
  5. Network and data flow diagrams – a ‘baseline’ of how personal data should flow in your environment
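On point 5., the ‘baseline’ can start life as nothing more than an explicit list of permitted flows; anything observed outside that list is a conversation you need to have. A minimal sketch, with invented flows:

```python
# Permitted personal-data flows: (source, destination, data category).
# Entirely illustrative -- your real baseline comes from the interviews above.
PERMITTED_FLOWS = {
    ("HR application", "payroll bureau", "salary"),
    ("web storefront", "payment gateway", "cardholder data"),
}

def flow_is_permitted(source: str, destination: str, category: str) -> bool:
    """Check an observed flow against the documented baseline."""
    return (source, destination, category) in PERMITTED_FLOWS

# A flow outside the baseline is either a missing interview answer
# or a genuine problem -- both worth knowing about.
print(flow_is_permitted("HR application", "marketing", "home address"))  # False
```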

Data Discovery

I’m really not sure how the phrase ‘data discovery’ got lumped together with ‘business intelligence’, but that’s the way it seems on Google at least. And no matter which data discovery technology vendor you ask, they can all, apparently, provide everything you need for GDPR compliance.

For those of you who remember PCI DSS v1.0 way back in December 2004, you will also remember the disgusting land-grab by every vendor whose technology or managed services fulfilled one of the 12 requirements. You will also remember the even more disgusting price-hikes by those technology vendors, especially by Tripwire, which Visa was dumb enough to list as an ‘example’ of file integrity monitoring.

So guess what’s happening now? Because every data protection regulation requires you to know what data you have, data discovery vendors are crawling all over each other to push their wares down your throat.

But not all data discovery tools are the same, and unless you can fully define what results YOU need from one, stick to the manual process until you do. These are a few of the questions to which you need answers:

  1. Do I need a permanent solution, or can a one-time, consultant-led discovery exercise suffice for now (a deliberately crude sketch of such a sweep follows this list)?;
  2. Should the solution be installed locally on my own server(s), delivered as an appliance, or cloud-based?;
  3. Can the solution perform discovery on end-systems, databases, AND traffic ‘on-the-wire’?;
  4. How will the solution handle encrypted files / databases / network protocols?;
  5. Does the solution need to accommodate cloud-based environments?;
  6. Do I need an agent-based or agent-less solution?;
  7. Should the solution be able to perform network discovery (NMAP-esque) and some semblance of asset management to guarantee coverage?;
  8. Do I need data-flow mapping as well?;
  9. Should the solution integrate with some form of compliance management tool?;
  10. Is the solution to be self-managed or outsourced?

…and the list goes on.
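To show why even option 1. from the earlier list beats doing nothing, here is the deliberately crude sketch of a file-store sweep for two DPD patterns. Real tools do vastly more (contextual validation, database and on-the-wire inspection etc.); the patterns and path here are assumptions for illustration only:

```python
import re
from pathlib import Path

# Two crude patterns -- real discovery tools use context, validation
# (e.g. Luhn checks for card numbers), and many more data categories.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK-style phone": re.compile(r"\b(?:\+44|0)\d{9,10}\b"),
}

def sweep(root: str) -> None:
    """Walk a file store and report files containing candidate personal data."""
    for path in Path(root).rglob("*.txt"):    # extend to .csv, .log etc. as needed
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue                           # unreadable file -- log it in practice
        for category, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {category}")

# sweep("/mnt/fileshare")  # hypothetical mount point
```

Even this naive sweep will surface files in places no interview answer mentioned, which is precisely the point.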

As I’ve stated many times over the almost 5 years I’ve been blogging: “No technology can fix a broken process, it can only make a good process better.” Data discovery is no different, and its many and very significant benefits will only be realised when you ask the right questions.

Don’t know the right questions? Find someone who does.

[If you liked this article, please share! Want more like it, subscribe!]

Change Control: Break the Vicious Cycle

Have you ever tried to fill a colander with water? Of course not, that would be ridiculous given that it’s full of holes. So why would you try to implement a security program without ensuring that whatever you fix does not get broken behind you?

Do you give your IT administrators permission to change the settings on your personal phone? Again, of course not, so why would you allow them to make significant changes to corporate assets without proper oversight?

While these analogies are flippant and geared toward emphasising my point, I would not be writing this blog if change control was not an enormously important issue. At best, poor change control causes unnecessary extra work; at worst, it can put you out of business. It’s bad enough that the bad guys want to break in; most organisations I have seen are making it easier for them from the inside.

The definition of change control is “…a systematic approach to managing all changes made to a product or system”, and its purpose is “…to ensure that no unnecessary changes are made, that all changes are documented, that services are not unnecessarily disrupted and that resources are used efficiently.” Sounds fair, right? No disruption? Efficient? Are these not good things?

The biggest issue is that change control requires not only planning, but extra effort. You have to fill out a form, send an email, or log into a GUI of some sort, all of which may take longer than making the change in the first place. Change control is time-consuming and can be seen as a bottleneck, both of which are no-nos in the rapid evolution towards more and more function. But what would you rather have: 1) an insecure service quickly, or 2) a secure service a very short time later?

Unfortunately, given that change control is a primary function of governance, few organisations have the oversight to implement change control well. So how can organisations perform this most critical of processes?

First, it has to be appropriate. There is little point in a 5-person company buying change control software, but larger organisations should not be using email and spreadsheets. As long as the right people are involved in making the change decisions, the process can be as formal or informal as is sustainable. If it is ever seen as a burden, it will be either circumvented or ignored altogether.

Often overlooked, but critical to change control success, are a few pre-requisites…

Change Control Pre-Requisites:

  1. Ensure that the asset register contains not only physical devices, but applications, CotS software, data stores, locations, unique skill-sets etc.;
  2. Assign business criticality and maximum data classification to all assets;
  3. Assign ownership to all assets;
  4. Map all assets to the business processes they support (note: these maps become assets in and of themselves); and
  5. Ensure that the change request form includes a list of the affected assets (a minimal sketch of such a register entry follows this list).
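The minimal sketch promised above: a register entry carrying these pre-requisites. All field names and values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str                    # host name, application, data store, or skill-set
    asset_type: str              # "server", "application", "CotS", "person", ...
    owner: str                   # pre-requisite 3: someone is responsible
    criticality: int             # pre-requisite 2: e.g. 1 (low) to 5 (critical)
    max_data_class: str          # pre-requisite 2: e.g. "public" .. "restricted"
    processes: list[str] = field(default_factory=list)  # pre-requisite 4

# An invented example entry:
hr_db = Asset(
    name="hr-db-01",
    asset_type="server",
    owner="Head of HR",
    criticality=4,
    max_data_class="confidential",
    processes=["payroll", "recruitment"],
)
```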

Change Control Form:

Every change request must, at a minimum, include the following:

  1. List of affected systems;
  2. Details related to affected users (if applicable);
  3. Criticality of change request;
  4. Indication of additional risk;
  5. Success criteria / test plan;
  6. Back-out or fix-forward plan; and
  7. Appropriate authorisation.

By mapping the affected asset to their corresponding business processes, their owners, and both their criticality and maximum data classification, you can automatically bring the right decision maker to bear to authorise the change.
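Assuming a register like the Asset sketch above (my assumption, not a mandate), that routing can be entirely mechanical:

```python
# Builds on the Asset dataclass and hr_db entry from the sketch above.
def approvers_for_change(affected: list[Asset]) -> tuple[set[str], int, str]:
    """Derive the required authorisers and the sensitivity of a change
    purely from the asset register -- no guesswork at request time."""
    owners = {a.owner for a in affected}
    peak_criticality = max(a.criticality for a in affected)
    # Illustrative ordering of classification labels, lowest to highest.
    order = ["public", "internal", "confidential", "restricted"]
    peak_class = max((a.max_data_class for a in affected), key=order.index)
    return owners, peak_criticality, peak_class

owners, crit, data_class = approvers_for_change([hr_db])
print(owners, crit, data_class)  # {'Head of HR'} 4 confidential
```

The point is that the decision data already exists in the register; the change process just reads it.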

Too often the business owners have little to no insight into technology changes when, in reality, they are the only ones who should be authorising them. IT and IS are, and have always been, business enablers, nothing more. First and foremost, change control needs to reflect the goals of the business. In the absence of governance, the above minimums are about the only way to see that this happens.

Of course, if you also link change control to your ticketing system and incident response processes you would have the Holy Grail, but baby steps…

[If you liked this article, please share! Want more like it, subscribe!]

In Security, Technology is Always the LAST Resort

The temptation to spend money to make something annoying just go away is almost irresistible. I’m not just talking about security now, this is a human condition. From get-rich-quick schemes, to diet pills, to online ‘dating’, we want instant gratification and / or results. Sadly we also expect the underlying cause of our issues to be miraculously fixed as part of the fee.

What do you mean “Get your fat arse off the couch and go for a walk!”, I paid you to make me thin!? There are no shortcuts to fitness, and there are no shortcuts in security.

But with phrases like ‘panacea’, ‘silver bullet’, and my personal favourite, ‘guaranteed hack-proof’, the cybersecurity industry is becoming one of the worst offenders. Money is clearly more important than good service to many security vendors, and to those expounding on their virtues.

And we’re letting them get away with it! Whether we’re lazy, don’t know the right questions to ask, or just don’t care is immaterial. Vendors will keep making useless products and we’ll keep buying them if things don’t change. Vendors have sold F.U.D. for years and we’re bringing only a few of them to task (FireEye for example).

The more complicated vendors can make security appear, the easier it is to sell their technology. At least that’s how it seems. There’s really no escaping that security must be simple to be effective: forget big data, use baselines; forget microsegmentation, just segment properly; forget user and entity behavioural analytics, fix your access control. In fact, ignore every acronym in the Gartner ‘Top 10 Technologies for Information Security in 2016‘ and focus on the basics; I’ll almost guarantee they aren’t addressed appropriately.

From policies and procedures, to change control, to vulnerability management, to incident response, worry about the base processes. They are not only more effective than any new technology, they are a damned sight more sustainable, more scalable, and cheaper!

One of the universal truths in security is that you cannot fix a broken process with technology, you can only make a good process even better. Better in terms of accuracy, speed, effectiveness, efficiency, long-term cost, you name it; the underlying process has to have worked beforehand.

Take incident response (IR) for example. If you have top-notch plans, a well-trained team, and robust vulnerability management, a technology that gives you earlier event warnings is of distinct value. As would technologies that reduce false-positives, automatically quarantine infected machines, supply greater forensic information up-front, and so on.

However, if your IR plans are crap, your team has no idea what to do, and your systems have not kept up with the threat landscape, no technology in the world will stop an event from becoming a business-crippling disaster.

Be honest, how many of you have:

  1. Firewalls but poor segmentation?
  2. Routers but no mapping of your business processes?
  3. Anti-Virus and no OS hardening?
  4. HSMs and no idea where all your data is?
  5. Centralised logging with no idea what ‘normal’ looks like?
  6. …and the list goes on.

How can you expect a new technology to help when you haven’t optimised what you already have?

There are of course exceptions to every rule, and in this case the exception is to buy an Asset Management System. Everything else you do in security has your assets at the core. Do this well and everything else becomes much easier.

[If you liked this article, please share! Want more like it, subscribe!]

[For a little more information on technology purchases, this may help; Security Core Concept 2: Security Control Choice & Implementation]

PCI – Going Beyond the Standard: Part 6, Asset Management

The thing with security is that there is always more than 1 top priority, so the trick is not to choose which comes first, it’s to get them ALL assigned and moving forward at the same time. There are simply too many interdependencies, and you will only avoid the inevitable road-blocks or analysis paralysis if you plan accordingly.

Asset Management is one of those top priorities, and is at the core of everything else you will ever do in the development, maintenance, and continuous improvement of your security program.

If you do it properly that is.

Prior to v3.0 of the DSS, the requirement for asset management only went so far as an understanding of every system type, its function, and how many of them you had. Basically a spreadsheet to support the sample sizes and PCI validation efforts. But this undermines the entire assessment process, as the whole point of an assessment is that you are able to make educated judgment calls. Knowing that you have 20 Windows web servers tells you nothing about the potential impact of their loss, for example.

I think everyone’s heard the famous mis-quote attributed to Peter Drucker: “If you can’t measure it, you can’t manage it.” But how do you measure the value of an asset? The answer, like everything else in security, is simple. Not easy, and pretty much never done well, but it IS simple:

“The value of each of your assets is directly related to the value of the data that flows through it.”; and

“The value of your data is directly related to its importance to your business.”

If you don’t know the above values you have a lot more problems than security.
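As a worked illustration of those two statements, with entirely invented values:

```python
# Invented data-category values -- financial, criticality score, whatever
# terms your business actually uses.
DATA_VALUE = {"public web content": 1, "customer PII": 4, "intellectual property": 5}

def asset_value(data_flowing_through: list[str]) -> int:
    """An asset is worth at least as much as the most valuable data it touches."""
    return max(DATA_VALUE[category] for category in data_flowing_through)

print(asset_value(["public web content"]))                     # 1
print(asset_value(["customer PII", "intellectual property"]))  # 5
```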

It does not matter whether or not the ‘value’ is in financial or criticality terms, what matters is that every other security process must directly reflect its relative importance to your organisation. Does a web server have more importance to an e-commerce only merchant than it does to a plague/nest/whoop of lawyers (or whatever their collective noun is)? Maybe, maybe not. Would you expend far more effort protecting your intellectual property than you would your public web content? Of course you would, unless you’re irretrievably stupid (my favourite quote from A Fish Called Wanda).

But what IS an asset? It’s not just your servers, network devices and software; it’s your locations, your vendors, your business processes, and just as importantly, your PEOPLE. Or more to the point, your people’s knowledge and skill-sets. There are many single points of failure in most organisations, and the one most often overlooked is the human factor.

Unless you include ALL of these things, none of the following business processes will be anywhere near as effective, and perhaps not even possible:

  1. Risk Assessment – No point trying to examine your risks if you don’t know what those risks are related to.
  2. Gap Analysis & Security Control Acquisition – A logical follow on from a risk assessment, what are the gaps you have to fill? Can you use existing assets?
  3. Change Control – How can you give appropriate attention to change requests if you have no indication of regulatory relevance, maximum data classification, or the business criticality?
  4. Automated / Continuous Compliance Validation – If [for example] you don’t have a list of all the running services and listening ports for each of your systems, how can you hope to automate the detection of policy / compliance violations? (a minimal sketch follows this list)
  5. Business Transformation – Try adjusting your business in the face of competition when you don’t know what you have and how it fits together.
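The sketch promised against point 4.: if the asset register holds each system’s approved services and listening ports, detecting a violation is a set difference. The approved lists here are invented:

```python
# Approved listening ports per host, as held in the asset register (invented).
APPROVED_PORTS = {"hr-db-01": {22, 5432}, "web-01": {22, 80, 443}}

def port_violations(host: str, observed: set[int]) -> set[int]:
    """Return ports that are listening but that the register says shouldn't be."""
    return observed - APPROVED_PORTS.get(host, set())

# The observed set would come from a scan or a local agent in practice.
print(port_violations("web-01", {22, 80, 443, 8080}))  # {8080}
```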

Quite simply, Asset Management is too important and too core to security to do it real justice in a blog. Suffice to say, it is one of the easiest ways to centralise the information required to support every other process used to manage your security program. It is because Asset Management is so overlooked by PCI that everything else is seen as being so difficult.

This is one of the few areas where I actually recommend you look into implementing technology: an Asset Management System (AMS), especially one that forms the core of a Governance, Risk and Compliance tool. Surprisingly few do.

[If you liked this article, please share! Want more like it, subscribe!]

Do You Really Need a Governance, Risk Management & Compliance (GRC) Framework?

The answer, as any good consultant will tell you, is “That depends.”

Usually that’s our way of saying we don’t know the answer. But then again, we don’t have to; we’re consultants, and it’s up to you to tell us more so we can now go GET the answer for you.

Like most things, GRC must start with a definition in order to apply context:

According to my old friend Wikipedia, GRC is “… the umbrella term covering an organization’s approach across these three areas.”

Which tells us absolutely nothing, so now we have to break it down:

Governance – Per my Security Core Concept 4: Governance & Change Control, governance is “…where the IT and business sides have conversations.”

Risk Management – “…is the set of processes through which management identifies, analyses, and, where necessary, responds appropriately to risks that might adversely affect realisation of the organisation’s business objectives.”

Compliance – “…means conforming with stated requirements (defined for example in laws, regulations, contracts, strategies and policies).”

Hopefully you are asking yourself why these 3 things were ever apart in the first place for us to even NEED GRC to bring them together.  Done properly, Risk Management is owned by Governance, who have already taken compliance into account while designing their overarching security framework.  In other words, if Governance had been doing their job correctly, the way they approach risk management would spit compliance out the back end.

To understand why this is not the case in an overwhelming percentage of businesses is to get back to how security is viewed in the first place: 1) Governance does not exist, or if it does, it has no authority; 2) Risk Management is woefully inadequate, and is certainly nowhere near a Plan > Do > Check > Act (PDCA) cycle; and 3) Compliance is seen as an annual project and not part of Business-as-Usual.

Despite the fact that GRC is a term that SHOULD be redundant, it is seen as a goal in and of itself, and in my view may detract from the business’s true end goal: staying in business responsibly, with IT as an enabler. The 4 Foundations of Security and the 6 Security Core Concepts lay down SOME of the groundwork necessary to design an effective security framework, but neither these, nor GRC, really get to the detail of how you begin this process.

You should start with an inventory of your assets, ALL of them; i.e. Asset Management.

There are a significant number of GRC tools and applications out there, and while I’m sure their intentions are good, they fall a long way short of providing the functionality necessary to do GRC well.

For a start, how can any GRC tool not begin with Asset Management? And I don’t just mean input from vulnerability scans or network enumeration tools, which are only a small part of what asset management entails. Assets are not just network devices and servers; assets are applications, processes, people, locations and so on, and without a good understanding of what these are, how can you perform a risk assessment, or monitoring, or incident response, or disaster recovery, or…

True asset management will include all of the following, and no GRC tool I know of can do it all:

  1. Front-End, Off-Line Audit and Data Collection Tool – inputting the information into the GRC tool is a laborious process, and not all information can be gathered while online. An offline assessment tool should be configured to run both your asset data collection processes and any compliance process you are subject to (PCI for example). This offline tool can be used by external and internal auditors alike to build the full asset picture.
  2. Integration of System Settings Policies – your policies will dictate your minimum security standards; passwords, access control, logging etc
  3. Integration of Data Classification Policies – if your systems are to be configured differently for different data classification levels, this will need to be defined
  4. Network Enumeration & Network Mapping – accept feeds from network mapping and enumeration tools in order to a) find nodes and make an initial stab at identifying them, and b) gather any other ad hoc information available.
  5. Vulnerability Scanning – accept feeds from scanning tools to ensure that a) all systems are covered in the scans, and b) systems meet both policy and security minimums.  Ideally, the GRC tool would also feed into the scanning tools to provide up-to-date scan profiles, and exception rules.
  6. Automated Collection of Validation Evidence – PCI requires an annual validation of compliance, and only against a sample of systems. Security done correctly will have continuous compliance (i.e. near real-time), and automated validation of requirements (access control, passwords, logging etc).  This could be achieved by either server based agents, or integration with AD/LDAP for credentialed remote procedure calls.
  7. Baselined System Profiles – it is not enough to know the OS, IP, hostname, location, owner etc. (the usual asset management minimums); you should have a record of its patch level, running services, listening ports, disk space, memory, even temperature. A baselined system can then be monitored for ANY anomalies.
  8. Firewall & Router Ruleset Validation – if you can feed a firewall or router ruleset into the system, you can a) compare it to the known business justifications (a minimal sketch of this follows the list), and b) compare it to the system profiles to ensure you have no rules without corresponding business processes, no running services on systems without corresponding rules, no insecure services, and so on. Ideally, you could even create and maintain your network diagrams from this.
  9. Change Control & Trouble Ticketing – the change control process should feed into the ‘GRC’ tool to ensure that all monitoring and alerting mechanisms are up to date and not triggering false positives. Alerts FROM the GRC tool should automatically create trouble tickets based on the data classification and system ‘sensitivity’ / priority.
  10. Ease of Use – there is no point having ANY system or process that is too difficult to set up, or impossible to maintain.
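The sketch promised against point 8.: once every firewall rule carries a documented business justification, finding the unjustified ones is trivial. Every rule and change reference below is invented for illustration:

```python
# Firewall rules as (source, destination, port) tuples.
RULES = [
    ("10.0.1.0/24", "10.0.2.10", 5432),
    ("0.0.0.0/0", "10.0.2.10", 3389),   # who approved RDP from anywhere?
]

# Business justifications on record, keyed by rule (change reference invented).
JUSTIFIED = {
    ("10.0.1.0/24", "10.0.2.10", 5432): "app tier to HR database (CR-1234)",
}

def unjustified_rules(rules, justified):
    """Return rules with no corresponding business justification on record."""
    return [rule for rule in rules if rule not in justified]

for rule in unjustified_rules(RULES, JUSTIFIED):
    print("No business justification:", rule)
```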

There are two main ways GRC vendors get you to use their product: 1) they ‘give’ you the software to use as part of a consultancy engagement, then charge you licensing fees if you want to keep the product after the engagement is complete; and 2) they sell you the product, set it up for free (or for a nominal charge), then hope you need to engage them as a managed service provider for ongoing maintenance.

I’m not saying either of these is bad, you just need to decide EXACTLY what it is you want from your GRC tool and perform your due diligence accordingly.

No GRC tool can do everything I described, so you must either buy several different systems and integrate them yourselves, or forget the GRC tool and run the above functionality in an operations centre.

There are vendors out there who do each individual aspect of the above well, and I will be talking to them about creating a consortium to bring their functionality together into a unified system. Call it GRC if you want, but it’s not security until it’s simple enough to implement, and cost-effective enough to add real business value.

Do your due diligence before you buy anything, and again, if you need help, ask.