Change Control

Change Control: Break the Vicious Cycle

Have you ever tried to fill a colander with water? Of course not; that would be ridiculous given that it’s full of holes. So why would you try to implement a security program without ensuring that whatever you fix does not get broken behind you?

Do you give your IT administrators permission to change the settings on your personal phone? Again, of course not, so why would you allow them to make significant changes to corporate assets without proper oversight?

While these analogies are flippant and geared toward emphasising my point, I would not be writing this blog if the issue of change control were not an enormously important one. At best, poor change control causes additional unnecessary work; at worst, you could be out of business. It’s bad enough that the bad guys want to break in; most organisations I have seen are making it easier for them from the inside.

The definition of change control is “…a systematic approach to managing all changes made to a product or system”, and its purpose is “…to ensure that no unnecessary changes are made, that all changes are documented, that services are not unnecessarily disrupted and that resources are used efficiently.” Sounds fair, right? No disruption? Efficient? Are these not good things?

The biggest issue is that change control requires not only planning, but extra effort. You have to fill out a form, send an email, or log into a GUI of some sort, all of which may take longer than making the change in the first place. Change control is time-consuming and can be seen as a bottleneck, both of which are no-nos in the rapid evolution towards more and more function. But what would you rather have: 1) an insecure service quickly, or 2) a secure service a very short time later?

Unfortunately, given that change control is a primary function of governance, few organisations have the oversight to implement change control well. So how can organisations perform this most critical of processes?

First, it has to be appropriate. There is little point in a 5-person company buying change control software, but larger organisations should not be using email and spreadsheets. As long as the right people are involved in making the change decisions, this process can be as formal or informal as is sustainable. If it is ever seen as a burden, it will be either circumvented or ignored altogether.

Often overlooked, but critical to change control success, are a few pre-requisites…

Change Control Pre-Requisites:

  1. Ensure that the asset register contains not only physical devices, but also applications, COTS software, data stores, locations, unique skill-sets etc.;
  2. Assign business criticality and maximum data classification to all assets;
  3. Assign ownership to all assets;
  4. Map all assets to the business processes they support (note: these maps become assets in and of themselves); and
  5. Ensure that the change request form includes a list of the affected assets.
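
To make these pre-requisites concrete, here is a minimal sketch (in Python, purely for illustration) of what a single asset register entry could capture; the field names and example values are my own assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Asset:
    """One entry in the asset register: devices, applications, COTS software,
    data stores, locations and unique skill-sets all qualify as assets."""
    asset_id: str
    asset_type: str                      # e.g. "server", "application", "skill-set"
    owner: str                           # pre-requisite 3: every asset has an owner
    criticality: str                     # pre-requisite 2: e.g. "low" / "medium" / "high"
    max_data_classification: str         # pre-requisite 2: highest classification handled
    business_processes: List[str] = field(default_factory=list)  # pre-requisite 4

# Example entry (illustrative values only)
payment_gateway = Asset(
    asset_id="APP-0042",
    asset_type="application",
    owner="Head of Payments",
    criticality="high",
    max_data_classification="cardholder data",
    business_processes=["online checkout", "refund processing"],
)
```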

Change Control Form:

Every change request must, at a minimum, include the following:

  1. List of affected systems;
  2. Details related to affected users (if applicable);
  3. Criticality of change request;
  4. Indication of additional risk;
  5. Success criteria / test plan;
  6. Back-out or fix-forward plan; and
  7. Appropriate authorisation.

By mapping the affected assets to their corresponding business processes, their owners, and both their criticality and maximum data classification, you can automatically bring the right decision maker to bear to authorise the change, as sketched below.
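
As a rough illustration of that routing, here is a minimal Python sketch in which a change request carries the seven minimums above and the approver is derived from the register entries of the affected assets; the “owner of the most critical affected asset approves” rule is an assumption for the example, not a mandated policy.

```python
from dataclasses import dataclass
from typing import Dict, List

CRITICALITY_RANK = {"low": 1, "medium": 2, "high": 3}

@dataclass
class ChangeRequest:
    """Minimum change request content, per the form above (fields 1-7)."""
    affected_assets: List[str]      # 1. list of affected systems
    affected_users: str             # 2. details of affected users (if applicable)
    criticality: str                # 3. criticality of the change request
    additional_risk: str            # 4. indication of additional risk
    test_plan: str                  # 5. success criteria / test plan
    backout_plan: str               # 6. back-out or fix-forward plan
    authorised_by: str = ""         # 7. appropriate authorisation, filled in on approval

def required_approver(request: ChangeRequest, register: Dict[str, dict]) -> str:
    """Return the owner of the most critical affected asset.

    `register` maps asset IDs to register entries like those sketched earlier
    (each carrying at least 'owner' and 'criticality'). The routing rule
    itself is illustrative; your governance body defines the real one."""
    affected = [register[a] for a in request.affected_assets if a in register]
    if not affected:
        raise ValueError("change request references assets not in the register")
    most_critical = max(affected, key=lambda a: CRITICALITY_RANK.get(a["criticality"], 0))
    return most_critical["owner"]

# Example: a change touching the payment gateway routes to its business owner.
register = {"APP-0042": {"owner": "Head of Payments", "criticality": "high"}}
cr = ChangeRequest(["APP-0042"], "checkout customers", "high",
                   "new TLS configuration", "smoke test checkout", "roll back config")
print(required_approver(cr, register))   # -> "Head of Payments"
```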

Too often the business owners have little to no insight into technology changes, when in reality they are the only ones who should be authorising them. IT and IS are, and have always been, business enablers, nothing more. First and foremost, change control needs to reflect the goals of the business. In the absence of governance, the above minimums are about the only way to see that this happens.

Of course, if you also link change control to your ticketing system and incident response processes you would have the Holy Grail, but baby steps…


In Security, Technology is Always the LAST Resort

The temptation to spend money to make something annoying just go away is almost irresistible. I’m not just talking about security now; this is a human condition. From get-rich-quick schemes, to diet pills, to online ‘dating’, we want instant gratification and/or results. Sadly, we also expect the underlying cause of our issues to be miraculously fixed as part of the fee.

What do you mean “Get your fat arse off the couch and go for a walk!”, I paid you to make me thin!? There are no shortcuts to fitness, and there are no shortcuts in security.

None.

But with phrases like ‘panacea’, ‘silver bullet’ and my personal favourite, ‘guaranteed hack-proof’, the cybersecurity industry is becoming one of the worst offenders. Money is clearly more important than good service to many security vendors, and to those expounding on their virtues.

And we’re letting them get away with it! Whether it’s because we’re lazy, don’t know the right questions to ask, or just don’t care is immaterial. Vendors will keep making useless products and we’ll keep buying them if things don’t change. Vendors have sold F.U.D. for years and we’re bringing only a few of them to task (FireEye, for example).

The more complicated vendors can make security appear, the easier it is to sell their technology. At least that’s how it seems. There’s really no escaping that security must be simple to be effective: forget big data, use baselines; forget micro-segmentation, just segment properly; forget user and entity behavioural analytics, fix your access control. In fact, ignore every acronym in the Gartner ‘Top 10 Technologies for Information Security in 2016’ and focus on the basics; I’ll almost guarantee they aren’t addressed appropriately.

From policies and procedures, to change control, to vulnerability management, to incident response, worry about the base processes. They are not only more effective than any new technology, they are a damned sight more sustainable, more scalable, and cheaper!

One of the universal truths in security is that you cannot fix a broken process with technology; you can only make a good process even better. Better in terms of accuracy, speed, effectiveness, efficiency, long-term cost, you name it, but the underlying process has to have worked beforehand.

Take incident response (IR) for example. If you have top-notch plans, a well-trained team, and robust vulnerability management, a technology that gives you earlier event warnings is of distinct value. As would technologies that reduce false positives, automatically quarantine infected machines, supply greater forensic information up-front, and so on.

However, if your IR plans are crap, your team has no idea what to do, and your systems have not kept up with the threat landscape, no technology in the world will stop an event from becoming a business-crippling disaster.

Be honest, how many of you have:

  1. Firewalls but poor segmentation?
  2. Routers but no mapping of your business processes?
  3. Anti-Virus and no OS hardening?
  4. HSMs and no idea where all your data is?
  5. Centralised logging with no idea what ‘normal’ looks like?
  6. …and the list goes on.

How can you expect a new technology to help when you haven’t optimised what you already have?

There are of course exceptions to every rule, and in this case the exception is to buy an Asset Management System. Everything else you do in security has your assets at the core. Do this well and everything else becomes much easier.


[For a little more information on technology purchases, this may help: Security Core Concept 2: Security Control Choice & Implementation]

PCI – Going Beyond the Standard: Part 6, Asset Management

The thing with security is that there is always more than one top priority, so the trick is not to choose which comes first, it’s to get them ALL assigned and moving forward at the same time. There are simply too many interdependencies, and you will only avoid the inevitable road-blocks or analysis paralysis if you plan accordingly.

Asset Management is one of those top priorities, and is at the core of everything else you will ever do in the development, maintenance, and continuous improvement of your security program.

IF you do it properly that is.

Prior to v3.0 of the DSS, the requirement for asset management only went so far as an understanding of every system type, its function, and how many of each you had. Basically a spreadsheet to support the sample sizes and PCI validation efforts. But this undermines the entire assessment process itself, as the whole point of an assessment is that you are able to make EDUCATED judgment calls. Knowing that you have 20 Windows web servers tells you nothing about the potential impact of their loss, for example.

I think everyone’s heard the famous mis-quote attributed to Peter Drucker: “If you can’t measure it, you can’t manage it.” But how do you measure the value of an asset? The answer, like everything else in security, is simple. Not easy, and pretty much never done well, but it IS simple:

“The value of each of your assets is directly related to the value of the data that flows through it.”; and

“The value of your data is directly related to its importance to your business.”

If you don’t know the above values you have a lot more problems than security.

It does not matter whether the ‘value’ is in financial or criticality terms; what matters is that every OTHER security process must directly reflect its relative importance to your organisation. Does a web server have more importance to an e-commerce-only merchant than it does to a plague/nest/whoop of lawyers (or whatever their collective noun is)? Maybe, maybe not. Would you expend far more effort protecting your intellectual property than you would your public web content? Of course you would, unless you’re irretrievably stupid.
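
One hedged way to operationalise those two statements: score each data type by its importance to the business, and let an asset inherit the highest score of any data flowing through it. The data types and scores below are invented purely for illustration; your classification policy defines the real scale.

```python
from typing import Dict, List

# Illustrative business-importance scores per data type.
DATA_VALUE: Dict[str, int] = {
    "public web content": 1,
    "customer PII": 4,
    "cardholder data": 5,
    "intellectual property": 5,
}

def asset_value(data_flows: List[str]) -> int:
    """An asset's value is driven by the most valuable data flowing through it."""
    return max((DATA_VALUE.get(d, 0) for d in data_flows), default=0)

# Example: a web server serving only public content scores low; the same
# platform handling cardholder data inherits the highest value.
print(asset_value(["public web content"]))                      # -> 1
print(asset_value(["public web content", "cardholder data"]))   # -> 5
```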

But what IS an asset? It’s not just your servers, network devices and software; it’s your locations, your vendors, your business processes, and just as importantly, it’s your PEOPLE (or more to the point, your people’s knowledge and skill-sets). There are many single points of failure in most organisations, and the one that’s most often overlooked is the human factor.

Unless you include ALL of these things, none of the following business processes will be anywhere near as effective, and perhaps not even possible:

  1. Risk Assessment – No point trying to examine your risks if you don’t know what those risks are related to.
  2. Gap Analysis & Control Acquisition – A logical follow on from a risk assessment, what are the gaps you have to fill? Can you use existing assets?
  3. Change Control – How can you give appropriate attention to change requests if you have no indication of regulatory relevance, maximum data classification, or the business criticality?
  4. Automated / Continuous Compliance Validation – If you don’t have a list of all the running services and listening ports on your systems, how can you hope to automate the detection of policy / compliance violations? (A minimal sketch follows this list.)
  5. Business Transformation – Try adjusting your business in the face of competition if you don’t know what you have and how it fits together.
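
To illustrate point 4, here is a minimal sketch of how a per-asset record of approved listening ports could drive automated violation detection; the record format and port numbers are invented for the example, and the detection of what is actually listening would come from your scanners or agents, not from this snippet.

```python
from typing import Dict, List, Set

# Approved baseline per asset: which listening ports policy allows.
# (Illustrative data; in practice this comes from the asset register.)
APPROVED_PORTS: Dict[str, Set[int]] = {
    "WEB-01": {443},
    "DB-01": {5432},
}

def compliance_violations(asset_id: str, observed_ports: List[int]) -> List[str]:
    """Compare observed listening ports against the approved baseline and
    report anything unexpected."""
    approved = APPROVED_PORTS.get(asset_id)
    if approved is None:
        return [f"{asset_id}: asset not in register, cannot validate"]
    return [f"{asset_id}: port {p} listening but not approved"
            for p in observed_ports if p not in approved]

# Example: a Telnet daemon appearing on the web server gets flagged.
print(compliance_violations("WEB-01", [443, 23]))
# -> ['WEB-01: port 23 listening but not approved']
```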

Asset Management is simply too important and too core to security to do it real justice in a blog, but suffice to say it is one of the easiest ways to centralise the information required to support every other process used to manage your security program. It is because Asset Management is so overlooked by PCI that everything else is seen as being so difficult.

This is one of the few areas where I actually recommend you look into implementing an Asset Management system, especially if it forms the core of a Governance, Risk and Compliance tool. Surprisingly few do.

Do You Really Need a Governance, Risk Management & Compliance (GRC) Framework?

The answer, as any good consultant will tell you, is “That depends.”

Usually that’s our way of saying we don’t know the answer, but then again, we don’t have to; we’re consultants, and it’s up to you to tell us more so we can now go GET the answer for you.

Like most things, GRC must start with a definition in order to apply context:

According to my old friend Wikipedia, GRC is “… the umbrella term covering an organization’s approach across these three areas.”

Which tells us absolutely nothing, so now we have to break it down:

Governance – Per my Security Core Concept 4: Governance & Change Control, governance is “…where the IT and business sides have conversations.”

Risk Management – “…is the set of processes through which management identifies, analyses, and, where necessary, responds appropriately to risks that might adversely affect realisation of the organisation’s business objectives.”

Compliance – “…means conforming with stated requirements (defined for example in laws, regulations, contracts, strategies and policies).”

Hopefully you are asking yourself why these 3 things were ever apart in the first place for us to even NEED GRC to bring them together.  Done properly, Risk Management is owned by Governance, who have already taken compliance into account while designing their overarching security framework.  In other words, if Governance had been doing their job correctly, the way they approach risk management would spit compliance out the back end.

To understand why this is not the case in an overwhelming percentage of businesses is to get back to how security is viewed in the first place: 1) Governance does not exist, or if it does, it has no authority; 2) Risk Management is woefully inadequate, and is certainly nowhere near the Plan > Do > Check > Act (PDCA) cycle; and 3) Compliance is seen as an annual project and not part of Business-as-Usual.

Despite the fact that GRC is a term that SHOULD be redundant, it is seen as a goal in and of itself, and in my view may detract from the business’s true end goal: staying in business responsibly, with IT as an enabler. The 4 Foundations of Security and the 6 Security Core Concepts lay down SOME of the groundwork necessary to design an effective security framework, but neither these nor GRC really get to the detail of how you begin this process.

You should start with an inventory of your assets, ALL of them; i.e. Asset Management.

There are a significant number of GRC tools and applications out there, and while I’m sure their intentions are good, they fall a long way short of providing the functionality necessary to do GRC well.

For a start, how can any GRC tool not begin with Asset Management? And I don’t just mean input from vulnerability scans or network enumeration tools, which are only a small part of what asset management entails. Assets are not just network devices and servers; assets are applications, processes, people, locations and so on, and without a good understanding of what these are, how can you perform a risk assessment, or monitoring, or incident response, or disaster recovery, or…?

True asset management will include all of the following, and no GRC tool I know of can do it all:

  1. Front-End, Off-Line Audit and Data Collection Tool – inputting the information into the GRC tool is a laborious process, and not all information can be gathered while online. An offline assessment tool should be configured to run both your asset data collection processes and any compliance process that you are subject to (PCI for example). This offline tool can be used by external and internal auditors alike to build the full asset picture.
  2. Integration of System Settings Policies – your policies will dictate your minimum security standards: passwords, access control, logging etc.
  3. Integration of Data Classification Policies – if your systems are to be configured differently for different data classification levels, this will need to be defined
  4. Network Enumeration & Network Mapping – accept feeds from network mapping and enumeration tools in order to a) find nodes and make an initial stab at identifying them, and b) gather any other ad hoc information available.
  5. Vulnerability Scanning – accept feeds from scanning tools to ensure that a) all systems are covered in the scans, and b) systems meet both policy and security minimums. Ideally, the GRC tool would also feed into the scanning tools to provide up-to-date scan profiles and exception rules.
  6. Automated Collection of Validation Evidence – PCI requires an annual validation of compliance, and only against a sample of systems. Security done correctly will have continuous compliance (i.e. near real-time) and automated validation of requirements (access control, passwords, logging etc.). This could be achieved by either server-based agents or integration with AD/LDAP for credentialed remote procedure calls.
  7. Baselined System Profiles – it is not enough to know the OS, IP, hostname, location, owner etc. (the usual asset management minimums); you should have a record of its patch level, running services, listening ports, disk space, memory, even temperature. Any anomaly against a baselined system can then be reported.
  8. Firewall & Router Ruleset Validation – if you can feed a firewall or router ruleset into this system, you can a) compare it to the known business justifications, and b) compare it to the system profiles to ensure you have no rules without corresponding business processes, no running services on systems without corresponding rules, no insecure services and so on (see the sketch after this list). Ideally, you could even create and maintain your network diagrams from this.
  9. Change Control & Trouble Ticketing – The change control process should feed into the ‘GRC’ tool to ensure that all monitoring and alerting mechanisms are up to date and not triggering false positives. Alerts FROM the GRC tool should automatically create trouble tickets based on the data classification and system ‘sensitivity’/priority.
  10. Ease of Use – there is no point having ANY system or process that is too difficult to set up, or impossible to maintain.
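
As a sketch of point 8, here is one way a firewall ruleset could be cross-checked against business justifications and a list of known-insecure services; the rule and justification formats are invented for the example and bear no relation to any real firewall export.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass
class FirewallRule:
    rule_id: str
    destination: str    # asset ID the rule points at
    port: int

# Business-justified flows, keyed by (asset ID, port). Illustrative only;
# in practice this mapping lives in the asset register / GRC tool.
JUSTIFIED_FLOWS: Set[Tuple[str, int]] = {("WEB-01", 443), ("DB-01", 5432)}
INSECURE_PORTS = {21, 23}   # e.g. FTP, Telnet

def ruleset_findings(rules: List[FirewallRule]) -> List[str]:
    """Flag rules without a business justification, and flag any rule that
    permits a known-insecure service."""
    findings = []
    for r in rules:
        if (r.destination, r.port) not in JUSTIFIED_FLOWS:
            findings.append(f"{r.rule_id}: no business justification for "
                            f"{r.destination}:{r.port}")
        if r.port in INSECURE_PORTS:
            findings.append(f"{r.rule_id}: insecure service on port {r.port}")
    return findings

# Example: a stray Telnet rule gets two findings.
print(ruleset_findings([FirewallRule("FW-17", "WEB-01", 23)]))
```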

There are two main ways GRC vendors get you to use their product: 1) they ‘give’ you the software to use as part of a consultancy engagement, then charge you licensing fees if you want to keep the product after the engagement is complete; and 2) they sell you the product, set it up for free (or for a nominal charge), then hope you come back and engage them as a managed service provider for ongoing maintenance.

I’m not saying either of these is bad, you just need to decide EXACTLY what it is you want from your GRC tool and perform your due diligence accordingly.

No GRC tool can do everything I described, so you must either buy several different systems and integrate them yourselves, or forget the GRC tool and run the above functionality in an operations centre.

There are vendors out there who do each individual aspect of the above well, and I will be talking to them about creating a consortium to bring their functionality together into a unified system. Call it GRC if you want, but it’s not security until it’s simple enough to implement, and cost-effective enough to add real business value.

Do your due diligence before you buy anything, and again, if you need help, ask.