Cloud Computing

Are Cloud Providers ‘Too Big to Fail’ – Let’s Hope So

In a rather ludicrously titled article (yes, even for me!), "'Too big to fail' cloud giants like AWS threaten civilization as we know it", the author nevertheless addresses an interesting point. And while I almost entirely disagree with the final conclusions, they represent a valid, if extreme, viewpoint. If those conclusions are a little self-serving, this can be forgiven in light of my own issues with some Cloud Providers.

The basic premise is that sales of traditional hardware (servers etc.) are dropping, while cloud-based and managed services are on the rise. With the corresponding drop in hardware-related skills (no demand), eventually we’ll all be dependent on one of the big providers (Amazon, Google & Microsoft).

This is apparently very bad, as: “If one of these goes down hundreds of thousands of other companies go down too.” This is the “interesting point” I referred to earlier; unfortunately, the reasoning presented simply makes no sense. Two examples provided are:

  1. power grid failures or natural disasters – with the fallout propagated worldwide; and
  2. AWS’ hiking of its UK prices post-Brexit as an example of how quickly customers could be affected.

First, suggesting that Google, Amazon or Microsoft have a single point of failure that could take them down globally is ridiculous. Second, with regard to price fluctuations, these are likely the result of organisations choosing a provider on price alone, without performing adequate due diligence. If you tried to save money by using a US-based provider, and didn’t write mitigating language into the contract, you are the one who left yourself exposed.

I’m really not picking on either the subject of the article or its author; I’m just using it to demonstrate my point. Cloud services, done PROPERLY, are the future. Or, without the stupid buzz-phrase: outsourced services over the Internet are the future of infrastructure management. The issue is that a lot of Cloud services are abysmal, and the due diligence performed by many organisations is nothing short of a disgrace.

But outsource they will, and they should. For example, how many organisations really want to hire dedicated teams to perform all of the following:

  1. Design Operating System hardening guides;
  2. Build and maintain servers;
  3. Install and configure all relevant security software/applications;
  4. Perform patching and vulnerability management;
  5. Encrypt data;
  6. Control access;
  7. Perform logging & monitoring;
  8. …and the list goes on.
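To make the scale of that list concrete: even the smallest item implies tooling that someone has to write, test and keep current. Here is a minimal sketch of the kind of hardening check a dedicated team would maintain — the two rules and the function name are illustrative only, not a real CIS-style baseline:

```shell
# audit_ssh: toy hardening check for an sshd-style config file.
# The two rules below are illustrative examples, not a real baseline.
audit_ssh() {
    conf="$1"
    fails=0

    # Rule 1: root login over SSH must be explicitly disabled.
    if ! grep -Eq '^PermitRootLogin[[:space:]]+no' "$conf"; then
        echo "FAIL: PermitRootLogin is not set to 'no'"
        fails=$((fails + 1))
    fi

    # Rule 2: the legacy SSH protocol 1 must not be enabled.
    if grep -Eq '^Protocol[[:space:]]+1' "$conf"; then
        echo "FAIL: legacy SSH protocol 1 is enabled"
        fails=$((fails + 1))
    fi

    [ "$fails" -eq 0 ] && echo "PASS: config meets baseline"
    return "$fails"
}
```

Run as `audit_ssh /etc/ssh/sshd_config`; the exit code is the count of failed rules. Now multiply this by every OS, application and rule in a real baseline, and the appeal of handing the whole job to a provider becomes obvious.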

Whilst finding a single cloud provider to take care of all this is almost impossible at this stage, that’s where it’s going. Only the economy of scale available to large providers can make these offerings cost-effective enough to be an option for non-enterprise businesses. And frankly, the only businesses that actually care about how data is made available are the ones being paid to make it happen for someone else.

The motivations behind the referenced article are rather simple to deduce: 1) they have a vested interest in selling hardware, and 2) they can make more money through the channel than through Cloud.

Fair enough, but the channel’s loss of market share, and their inability to pivot, are entirely their fault. They are now suffering because they have never tried to put their products into perspective. The rush to maximise profit margins came at the expense of making themselves a truly valuable partner.

If the channel had only put a consulting wrapper around their offerings, they could still be selling solutions instead of being stuck trying to flog pieces of metal and plastic.

Perhaps this article will make more sense now that they are feeling the pain: Attention Channels/Resellers, Don’t Forget Consulting Services!

[If you liked this article, please share! Want more like it, subscribe!]

4 thoughts on “Are Cloud Providers ‘Too Big to Fail’ – Let’s Hope So”

  1. David – totally with you on the theoretical premise of cloud and its potential, but I’m not sure where you’re going with your list of example tasks. Not sure if you’re saying those functions can be handed off to cloud providers? If the checklist approach works with vulnerability management, then I agree, but in more sensitive cases the org has an imperative to keep control over the OS. So they need to design their OS security standards in cooperation with ops and devops, and retain admin rights and RDP/ssh access for credentialed scans and incident investigation. A lot of orgs are taking the approach of shoving everything into cloud, such as Oracle 10g (yep – real world) and ‘PaaS’ing it all. That’s bad. Generally speaking, anything that requires lots of in-house local environmental knowledge can’t really be outsourced. That doesn’t mean it can’t go in the cloud, it just means it needs to be IaaS’d and not [S,P]aaS’d.

    • Hi Ian, many thanks again for taking the time to comment.

      I will somewhat disagree with you, but only in terms of semantics. Shockingly few organisations have the “ops and devops” necessary to design their own configuration standards in-house, so that can/should be outsourced. Once complete, the resulting standard MUST be owned entirely by the original org. Full accountability and liability.

      All other services related to the maintenance of the platform (access control, vulnerability scans etc.) must now be performed with full transparency. Management metrics, change control, incident response etc can all be outsourced, but must meet the defined goals of the original org.

      In other words, every function can be outsourced, but responsibility and accountability cannot. The issue is that no Cloud providers I’ve ever come across provide anywhere near this level of service, and few organisations ask the right questions. Lack of due diligence completely negates the potential benefits.

      • I _think_ we’re sync’d on the overall premise. Some services can be handed over completely, including OS support, others not; it’s just that the list of services that are completely outsourced should be rather small – for example I’m still not behind the whole SaaS and O365 show – messaging isn’t a simple area and I want full visibility of my email server please. Certainly the organisation maintains responsibility, yes, but the ‘response’ part of responsibility isn’t feasible if we lose control of the VMs.

        I did not suggest that ops and devops design standards – hell no, and if this were the case, then definitely outsource it, but also give up InfoSec because ‘so, it’s come to this’ 🙂 Few organisations ever take what could be called a close-to-advisable approach, so I’m not surprised if there are lots of folk out there who believe the whole vulnerability management show can be outsourced. But for the sake of useful dialog we need to stay all utopia and stuff, or delusional, not sure which.

        Security makes a first draft, which is basically ‘this is a list of stuff we want to do with Windows 2012 Server’, for example. Ops rationalise it based on the operational needs of BAU services – and if they’re clued up (which is rare, and getting rarer) they’ll kick out 50%+ of the standard. For example: Microsoft correctly leave some job scheduling features enabled by default because a lot of customers will use them, but an ever-prevalent automated attacker such as Conficker loves this; it uses it to be the ‘P’ in APT. The security standard therefore mandates disabling this, but ops kicks back with ‘lots of production services use this scheduling feature’ …and toys-out-of-pram proceedings ensue.

        The aforementioned is the only approach that works, but it’s also a utopian approach.

        I really think you’re aware of everything I’ve covered here, and I think you’re basing your premise on the real world, where in-house expertise is non-existent. If it’s non-existent, then literally nothing will happen with standards and linking them to a working TVM program. In this case, as you said, outsource it. A generic copy-n-paste from the CIS benchmarks is better than nothing. 🙂

If you think I'm wrong, please tell me why!
