Arguably the single biggest challenge for Cloud vendors is helping customers understand and justify the implications of handing over not only their data but their business processes to a third party, especially while the Cloud space has lacked maturity and standards.
And it’s an increasingly important decision: as Cloud becomes the “default” choice for many businesses, they need to understand where their data is and how safe it is.
Yes, Cloud Computing is still in its relative infancy, but it’s growing up fast. To hear a highly respected and influential Gartner analyst say that he rarely recommends anything but SaaS solutions to companies looking to change their email security service shows that the die is well and truly cast. It’s a similar picture in the archiving space. SaaS vendors are growing far faster than their on-premise counterparts, although SaaS still accounts for a small share of the overall market. And of course, with Microsoft’s strategic priority to transfer the on-premise dominance of Exchange into the cloud (with Office 365), it’s fairly clear that at some point in the future, all these technologies will be delivered to customers from the cloud.
It’s a matter of when, not if.
What’s surprising, however, is that there seems to be a two-stream approach to Cloud adoption: the haves and the have-nots. Those who have Cloud and those who don’t. Yet.
On the one hand, especially in the SMB and midmarket, cloud vendors are now dealing with a far more enlightened customer base. Many CIOs are now on their second or third cycle of purchasing cloud services. They have wised up to vendors who over-promise, or hide behind bogus SLAs, and they will have rejected out of hand any service that doesn’t do what it says on the tin. Their next decision may be based on a specific business or technical need, but more likely it will be based not simply on the service but on the vendor’s approach to delivering that service. In other words, it will be based largely on the vendor itself.
The second stream is convincing the have-nots, often larger enterprises, that their data is safe in the cloud. This is a slower-burning challenge, because these businesses often have massive legacy investments in on-premise IT resources, both in terms of tin and human capital. That makes a move to cloud technology not only a technological change in mindset but a cultural shift as well. But it doesn’t matter how big the organization is, the pressure on IT departments to reduce costs while delivering more value is the same. And most, if not all, roads lead to the cloud.
But IT departments needn’t fear: Jevons Paradox predicts that more IT will be required in the future, not less; it’s just going to be different to what they’re doing today. But that’s technology for you. When was the last time IT staff used their Windows 3.1 skills?
The danger here is that CIOs of large enterprises tend to ‘trust’ the biggest, most established technology brands, the ones with the deepest marketing pockets, best placed to “Cloudwash” their dated technologies. I use the term ‘danger’ because, when it comes to cloud, money can’t buy you trust. The big brands have whole shoals of fish to fry and are usually more interested in wooing consumers than in safeguarding the interests of customers and their data. For smaller, pure-play cloud vendors like Mimecast, this is ALL we do. And that means we can’t slip up. So we have to earn trust the hard way, and the only way: by building a history of excellence in delivering Cloud Services.
For those CIOs who’ve already made the leap of faith and are committed to a cloud strategy, we’re now hearing – anecdotally at least – that customer service and support has jumped up the purchasing priority list alongside cost. That is largely because customer support has been the single biggest pain point for consumers of cloud services over the last two years. Why? Because it is, arguably, the most underinvested business function in the cloud industry.
But of course, the economics of SaaS and cloud only work if you retain those customers for long periods. At Mimecast we retain over 98% of our customers. It goes without saying that the product has to work. But perhaps the key variable is our ability to look after our customers. To put it politely, the cloud industry has a patchy record in providing customer service.
To some extent, then, in the SMB and mid-market space, there will be a period of ‘natural selection’, where the new breed of cloud savvy IT purchasers weed out the suppliers whose service doesn’t match the promise, for whatever reason — unreliable product, unrealistic SLA, non-existent support, dodgy security protocols, or fudged solutions built on OEM arrangements or poorly integrated acquisitions. The cloud vendors who are playing the long game and investing properly where it matters will rise to the top through this process, and others will fall by the wayside. (In fact we’re already seeing this happening in the early part of 2012.)
For first-time purchasers and larger enterprises, though, we still have to help them with their trust issues, and we won’t achieve that by focusing on customer service excellence. Instead, we have to put our weight behind meaningful industry initiatives that can turn ‘trust’ from an intangible into a tangible purchasing criterion. One example is the Cloud Security Alliance’s Security, Trust and Assurance Registry, or STAR, which addresses the need for enterprises moving applications and data to the cloud, or consuming a provider’s services, to understand cloud provider security. Another is an organisation’s willingness to adhere to security standards such as ISO 27001. But providers remain hesitant to give up proprietary information, or expose themselves to exploitation. In fact, to date, only Mimecast, Microsoft and Solutionary have agreed to publish their STAR controls.
Transparency is clearly going to be a major factor in the success of cloud technology, particularly as a means of building confidence amongst enterprise CIOs that their data is safe and secure in the cloud. But while we will continue to embrace standards initiatives such as STAR and ISO 27001 that make trust a tangible factor, our growth in the mid-market will most likely come from good old-fashioned values, such as delivering strong after-sales support, and from sharing stellar recommendations from existing customers.
STAR launched in the fourth quarter of last year and its aim is to be a public repository of providers’ security controls. Providers who are STAR members can fill out either the CSA’s Consensus Assessments Initiative Questionnaire or the Cloud Controls Matrix framework questionnaire, both built according to the ISO 27001 standard, and ultimately agree to have that data published online and publicly accessible.
Over the past few weeks, I’ve personally had a sharp reminder of what life was like in IT before cloud and I thought I’d share it.
At home I run a Windows Home Server to store my media: primarily music, but also videos, pictures and files. Windows Home Server (WHS) is a really nice bit of kit for managing the home network, and for storing data it’s got a built-in drive replication system, so that hopefully you won’t lose anything if a disk fails. Pair that with Squeezecenter and you’ve got an audiophile-quality sound network for very little money.
Everything had been going swimmingly for about 3 years, until suddenly the other day it wouldn’t boot anymore… just a very loud ticking noise when I tried to turn it on. You know, the sound of a hard drive head graunching against the platter, the “click of death”. The sort of dreaded sound that, when you hear it, you know there isn’t going to be any data left…
Worse, because it wouldn’t boot I thought I’d lost the un-replicated system disk, the only disk without a backup. Was I going to lose some of our digital photos, music and videos? I was in heart attack (and divorce?) territory…
Thankfully I quickly discovered that it was one of my data disks that had failed, not the system disk. Unplugging it meant the system could boot again, and I was relieved to find that all my data was there. What a relief! But because the failed disk was 1.5TB and held nearly a quarter of all the data stored, the disk management service wouldn’t start, so I couldn’t add new disks or remove the failed one.
I was totally stuck.
But after a few hours lost to Googling and reconfiguring Windows services, I got the disk management service to start again, and then began the process of adding new disks and removing the missing one. Three days later, it finally completed.
I don’t want to be stuck in this situation again, so now that the storage is healthy, I need to do something about the OS drive, the only un-replicated drive, to prevent future heart attacks. But anyone who’s dealt with storage knows this isn’t a simple task… Migrating disks, repartitioning and building a RAID array are not for the faint-hearted. The RAID BIOS is characteristically unhelpful, and at each stage I’m unclear whether I’m going to lose my data or not. I refer back to the manual and restart the process multiple times to make sure I’m not overwriting the disk holding my data with the blank one. I’m pretty sure I haven’t as I sit here, four hours later, waiting for the array to build, 16% done.
But more than anything, what this process really reminded me of was how life used to be, before the Cloud. Endless technical issues, constant fiddling with hardware and software, and above all downtime. It’s no wonder that IT job satisfaction levels are at an all-time low:
Yet another worker described bosses who expect their employees to work late into the night if need be to fix problems and then be on the job the next day at the usual time. Even vacation time is no longer sacrosanct: one person said he expects to be contacted “more than a half dozen times” during his time off.
Imagine my server was not just a home server but an Exchange server, serving email to hundreds of users. The trivial downtime I experienced, with no access to my music and videos, would have been horrendous for any organisation that relies on email. And I remember it happening to my old company’s Exchange server; I still have the hard disk platter to prove it (makes a great coaster, BTW).
Naturally there are fears when handing off critical business services to a third party, and people are rightly hesitant. But do your due diligence right and you could end up never having the stress of fixing these types of problems again, often at an order of magnitude less cost.
That’s a question I’ve heard many times recently. Going to the Microsoft Worldwide Partner Conference in Washington helped me answer some of those nagging questions.
One of the problems with spending a long time away travelling to conferences, as I did in July, is that there is hardly any time to really cogitate over what you learn, and even less time to write about it. The only person I know who does this well is Ray Wang, which prompted another analyst to ask: does he sleep?
Now that I’m back, I’ve been thinking a lot about the Azure appliance and wondering whether it represents the future of all Microsoft products, not just Azure. Let me explain…
I think one of Microsoft’s biggest problems is its customers.
Well, not actually its customers but the way software is deployed to its customers: on-premise.
It’s shipped and customers configure it extensively for their environment. The code and configuration are co-mingled.
But before the software can be shipped it’s extensively tested, and throughout its life on-premise it’s continuously patched. This is an incredibly slow and time-consuming process because of the sheer number of configurations and hardware variations the software needs to run on. Millions and millions of combinations have to be tested, bugs found and fixed, and even then they don’t catch them all and have to patch often. It’s painful for everyone.
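To get a feel for the scale of the problem, here’s a back-of-the-envelope sketch. All the counts are hypothetical, purely to illustrate how quickly supported options multiply into a test matrix no one can fully cover:

```python
# Hypothetical counts of supported options for an on-premise product.
# The point is not the exact numbers but how fast they multiply.
os_versions = 6        # supported operating system releases
service_packs = 4      # patch levels per release
hardware_vendors = 20  # certified server vendors
driver_sets = 50       # storage / network driver combinations
locales = 40           # supported languages and regional settings

combinations = os_versions * service_packs * hardware_vendors * driver_sets * locales
print(f"{combinations:,} distinct environments to test against")
```

Even with these modest, made-up numbers the product runs to nearly a million distinct environments, and every new supported option multiplies the total again.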
One of the key tenets of the SaaS and Cloud business model is multi-tenancy. The reason analysts get so vexed about multi-tenancy being core to SaaS and Cloud is that it separates the code from the configuration. As the vendor upgrades the underlying code, it doesn’t change the customer’s configuration.
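A minimal sketch of what that separation can look like in practice. The tenant names, settings and function here are invented for illustration; they aren’t any real vendor’s API:

```python
# Multi-tenant sketch: one shared code path, per-tenant behaviour held as data.
# Tenants and settings are hypothetical examples.
TENANT_CONFIG = {
    "acme":   {"retention_days": 365,  "spam_threshold": 0.8},
    "globex": {"retention_days": 2555, "spam_threshold": 0.6},
}

def filter_message(tenant: str, spam_score: float) -> str:
    """One shared implementation; behaviour varies only via tenant config."""
    threshold = TENANT_CONFIG[tenant]["spam_threshold"]
    return "quarantine" if spam_score >= threshold else "deliver"

# The same message is treated differently per tenant, driven purely by data:
print(filter_message("acme", 0.7))    # deliver
print(filter_message("globex", 0.7))  # quarantine
```

Because behaviour varies only through data, the vendor can roll out a new version of the shared code to every tenant at once, and no customer’s configuration has to be migrated or retested individually.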
The impact of this seemingly small innovation should not be underplayed. Not only does it underpin the economics of SaaS, it also underpins the agility of the vendor. With total control of hardware and software, the vendor is free to continuously deploy and test new code without interrupting client operations, in a much more lightweight way than ever before.
This is why Google has been disrupting the on premise email market for the past few years with Google Apps. They use Agile development practices and iterate early and often, all on infrastructure and code they operate and maintain.
But as we know, some customers aren’t happy losing control of all their data to the Cloud, and like it or not, Private Cloud is here to stay. That’s why we’ve developed a Just Enough On Site philosophy: mixing the right amount of Cloud and on-premise IT, not forcing anyone down a prescriptive route, but at its core still retaining the benefits of the Cloud.
With on-premise appliances, has Microsoft figured out a way to beat Google at its own game, deploying both in the Cloud and on-premise while still retaining enough control of the hardware and code to enable agility?
I think they might just have a shot.
And I wonder whether, in the future, we’ll see lots of Microsoft’s products deployed like this.
p.s. You can see Bob Muglia launch the Azure Appliance at WPC here.