by Orlando Scott-Cowley
As the torrent of malicious content and spam moved away from our enterprise inboxes to more consumer and social platforms, we were perhaps lulled into a false sense that we’d finally beaten the spam problem.
But this simply isn’t the case. The risks to our enterprise inboxes and data have morphed into more harmful and effective security threats.
Forrester and Mimecast Webinar ‘Protecting Against Targeted Attacks’ – join us next Tuesday, September 30th, at 10am Eastern (1500 UK, 1600 RSA). Register free here: http://mim.ec/Zdm7qY
Spear-phishing, or targeted attacks by email, is the next generation of threat our IT teams are scrambling to deal with. And as more high-profile security breaches hit the headlines, with spear-phishing often the initial point of entry, it’s a threat that has captured the attention of the C-suite.
So Mimecast is hosting another webinar in our ‘Expert Webinars’ series to share essential advice on how to protect your business against spear-phishing and targeted attacks. The webinar is next Tuesday, September 30th, at 10am Eastern (1500 UK, 1600 RSA), and you can register for free here.
I’ll be joined by two industry experts: Rick Holland, the well-known Forrester Research analyst and IT security commentator, and Steven Malone, Mimecast’s own Security Product Manager.
Spear-phishers are specifically targeting you and your business in an effort to steal your intellectual property, customer lists, credit card databases and corporate secrets.
Whereas old-style phishing was a scattergun attack, spear-phishing is aimed at a handful of specific individuals within a business. The attackers research their targets over many months, often using social media platforms to gather useful information about you. As with phishing, email is the main attack vector for spear-phishing, with well-crafted, socially engineered emails the tool of choice.
During the webinar we’ll be discussing the biggest threats and most dangerous attack tactics, recent high-profile case studies, the real-life cost of attacks, and the practical steps you can take to protect and educate your users.
Do leave a comment under this post or @reply me at @orlando_sc if there are any particular areas you’d like us to cover next week.
by Nathaniel Borenstein
Earlier this month, as you’ve no doubt heard, a batch of private pictures of celebrities were circulated widely on the Internet, having been either leaked or stolen from a storage medium the celebrities considered private and trustworthy.
One security breach doesn’t prove that the cloud is unsafe. It’s still safer than the alternatives.
On the theory that one person’s misfortune is another’s teachable moment, the Internet has been flooded, not by the pictures, but by well-meaning explanations of how users can protect themselves from such privacy violations. Most of it is good advice; it’s certainly true that most people take far too few precautions with their most sensitive information. But some of it is misleading, perhaps even betraying a hidden agenda.
While experts can agree on the vast majority of things you should do to be safe — which I won’t reiterate here — sometimes their advice reflects unspoken assumptions or agendas. While there’s a great deal of consensus about how to protect data stored in a given manner, there’s much more debate about whether one type of storage is fundamentally more secure than another.
Consider the lowly flash drive. Some would tell you that the safest place to put your data is on such a drive. It’s true that the lack of networking makes a flash drive immune to network-based attacks, but it’s vulnerable to physical ones instead: those tiny drives are easy to steal, or to lose. Is your security better overall with the flash drive? It’s not easy to say.
Similarly, in the recent disclosure of scandalous pictures, some have rushed to say that this shows the insecurity of the cloud. Leaving aside the fact that Apple ultimately concluded that the pictures were not stolen from its cloud service, there’s a legitimate (albeit misplaced) question here: Is cloud storage less secure than other forms of large-scale storage?
Obviously it depends on what you look at. As I’ve said, USB vs cloud strikes me as too close to call on the personal side. But for business users, the right comparison is to on-premises systems. Many executives feel safer knowing that the data doesn’t leave their site, where they believe they have complete control. However, while that control might be complete for a small number of businesses, the typical business is far from expert in matters of security, whereas for cloud providers it’s a live-or-die issue. With very few exceptions, I think business data is more secure with a good cloud provider than with an overextended, undertrained IT team on premises.
So, does that mean the cloud is more secure than on-premises storage? Again, the answer isn’t black and white. How do you know how good your cloud provider is? Do you trade off professional security in the cloud against perceived security in your own organization? There’s room for disagreement and nuance, for sure.
However, we should all beware of self-interested pundits who draw overly broad conclusions. Not only was the recent leak not a cloud leak after all, but even if it had been, we can’t read too much into an isolated event, remembering that nothing is perfect. One security breach doesn’t prove that the cloud is unsafe, any more than one accident with a change machine proves that change machines are a menace.
Life is dangerous. The only way to know how much a particular thing endangers us is to look at some longer-term statistics. An isolated event means nothing, but when someone uses such an event to broadly generalize, it can tell you a good deal about their own agenda.
by Mounil Patel
The 2014 Atlantic Hurricane season is in full swing through November, putting your organization – and mission-critical systems, like email – at sudden risk of exposure to tropical storms, floods and fires.
Ask yourself: When was the last time you tested your business continuity plan? If the answer is one year or longer, you risk significant network downtime, data leakage and financial loss. According to Gartner, depending on your industry, network downtime can typically cost $5,600 per minute or more than $300,000 per hour, on average. Don’t wait for disaster to strike. Treat email like the critical system it is, and avoid making these six mistakes that could jeopardize business continuity – and your job.
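To make Gartner’s figure concrete, here’s a quick back-of-the-envelope calculation. This is an illustrative sketch only: the $5,600-per-minute average comes from the Gartner figure cited above, while the durations (and the function name) are hypothetical.

```python
# Back-of-the-envelope downtime cost, using Gartner's average figure.
COST_PER_MINUTE = 5_600  # USD; an average -- actual cost varies widely by industry


def downtime_cost(minutes: float) -> float:
    """Estimated cost of an outage lasting `minutes` minutes."""
    return minutes * COST_PER_MINUTE


print(downtime_cost(60))      # one hour: 336,000 -- "more than $300,000 per hour"
print(downtime_cost(8 * 60))  # one working day: 2,688,000
```

Even a single working day offline runs into the millions on this average, which is why the mistakes below are worth taking seriously.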
Combat downtime during hurricane season by planning ahead.
- Not testing your continuity solution. You’ve devised and implemented what you believe to be a solid continuity solution, but you’ve never given it a production test. Instead, you cross your fingers and hope that if (and when) the time comes, the solution works as planned. There are two major problems with not testing your plan from the start. First, things get dusty over time. It’s possible the technology no longer works, or worse, that it was never properly configured in the first place. You might also not be regularly backing up critical systems; without testing, you’ll learn the hard way, during the restore, that data was not being fully backed up. Second, when it comes to planning, you need a clear chain of command should disaster strike. If your network goes down, you need to know who to call, immediately. Testing once is simply not enough. You need to test your solution once a year, at a minimum. Depending on the tolerance of your business, you’ll likely have to test more frequently, perhaps quarterly or even monthly.
- Forgetting to test fail back. Testing the failover capabilities of your continuity solution is only half the job. Are you prepared for downtime that could last hours, days or even weeks? The ability to go from the primary data center to the secondary one – then reverting back – is critical, and this needs to be tested. You need to know that data can be restored into normal systems after downtime.
- Assuming you can easily engage the continuity solution. It’s common to plan for “normal” disasters like power outages and hardware failure. But in the event of something more severe, like a flood or fire, you need to know how difficult it is to trigger a failover. You also need to know where you need to be. For example, can you trigger the failover from your office, or only from the data center? It’s critical to know where the necessary tools are located and how long it’ll take you or your team to reach them. Physical access is critical. Distribute tools to multiple data centers, as well as your local environment.
- Excluding policy enforcement. When an outage occurs, you must still account for regulatory and policy-based requirements that impact email communications. This includes archiving, continuity and security policies. Otherwise, you risk non-compliance.
- Trusting agreed RTO and RPO. In reality, you’ve got to balance risk and budget. When an outage happens, will the email downtime agreed upon by the business really stick? In other words, will the CEO really be able to tolerate no access to email for two hours? And will it be acceptable for customers to be out of touch with you for one day? The cost trade-offs behind your agreed RTO (recovery time objective) and RPO (recovery point objective) can leave a gap between expectation and reality. If you budget for a two-day email restore, be prepared that during an outage this realistically means two days without email for the entire organization. As part of your testing methodology, you may discover that you need more or less time to back up and restore data. It’s possible that, as a result, you may need to implement more resilient technology – like moving from risky tape backup to more scalable and accessible cloud storage.
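To illustrate that gap, a few lines of Python can compare the restore time the business signed off on with what stakeholders will actually tolerate. This is a hypothetical sketch, not a real tool: the function name and the figures are my own, and the per-minute cost reuses the Gartner average cited earlier.

```python
# Hypothetical sketch: sanity-check an agreed RTO against what the
# business can actually tolerate, and estimate what the gap would cost.
COST_PER_MINUTE = 5_600  # USD; Gartner average, varies by industry


def rto_gap_cost(agreed_rto_hours: float, tolerated_hours: float,
                 cost_per_minute: float = COST_PER_MINUTE) -> float:
    """Cost of the window between what stakeholders will tolerate
    and what the continuity plan actually delivers (0 if no gap)."""
    gap_hours = max(0.0, agreed_rto_hours - tolerated_hours)
    return gap_hours * 60 * cost_per_minute


# A two-day (48-hour) restore budget against a CEO who tolerates two hours:
print(rto_gap_cost(agreed_rto_hours=48, tolerated_hours=2))
```

If the gap cost dwarfs the price of more resilient technology, that is a strong signal the agreed RTO needs revisiting.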
- Neglecting to include cloud services. Even when you implement cloud technologies to deliver key services, such as email, you still have the responsibility of planning for disruptions. Your cloud vendor will include disaster recovery planning on their end to provide reliable services, but mishaps – and disasters – still happen. Mitigate this risk by layering multi-vendor solutions wherever possible to ensure redundancy, especially for services like high-availability gateways in front of cloud-based email services, or cloud backups of key data.
With the proper testing and upfront business continuity preparation, you can significantly reduce – or even prevent – email downtime, data leakage and financial loss after disaster strikes.
by Orlando Scott-Cowley
It’s been years in the making and has had its fair share of media hype, but according to Gartner’s August ‘Hype Cycle Special Report for 2014’ the concept of Big Data has now entered its aptly named ‘Trough of Disillusionment’.
And it’s not Gartner alone. Talk to industry stalwarts and a clear message comes back – the honeymoon is over. No longer is it a positive buzzword in meeting rooms. It’s becoming tangible…real people with real salaries and real job titles are now associated with the discipline of managing and making the most of a company’s big (or small) data, both locally and in the cloud.
We’ve come to realize there are a number of opportunities for big data and its management, as outlined in IBM’s August report titled ‘The New Hero of Big Data and Analytics‘. In it, a new C-suite role is outlined, along with five areas where a Chief Data Officer (CDO) can optimize and innovate:
- Leverage: finding ways to use existing data.
- Enrichment: joining existing data with previously inaccessible (fragmented) data, whether internal or external.
- Monetization: using data to find new revenue streams.
- Protection: ensuring data privacy and security, usually in collaboration with the Chief Information Security Officer.
- Upkeep: managing the health of the data under governance.
It’s a great list of general outcomes for those who manage data to plan around over the coming years, but what might be even more useful is a framework to help develop those plans now.
Obviously this framework will evolve, and to some extent there will be a degree of trial and error as organizations try to wrangle increasingly large datasets. But I thought it would be useful to suggest some considerations against each of these outcomes, so I’ve come up with some key questions to inform a CDO’s strategic planning. Answering yes to most, if not all, of these questions is a good indication that a CDO would have a beneficial business impact in your organization.
- People: as mentioned in IBM’s report – is the CDO’s office a guiding, enforcing authority? Is the office fully aligned to the business and scalable? Are the skills available appropriate? Is the business giving the CDO authority or permission to operate?
- Compliance: not just with regional and industry regulation but with the company culture.
- Intelligence: how can the right information reach the right people in a digestible form that catches their attention? Does the information remain useful throughout its lifecycle?
- CIA: Confidentiality, Integrity and Availability, the three cornerstones of any information security policy, are no less important here. Can your CDO guarantee data CIA, and does he or she have board-level authority to enforce it?
- Technology: which technology providers can help support these outcomes today, and well into the future? Does the chosen technology scale in line with the parabolic growth of data, or is it merely linear or, worse, unpredictable?
It’s by no means a definitive list, but we hope it helps stimulate the conversation around this emerging discipline of curating data to a commercial end. I look forward to sharing ideas with our customers and partners on this over the next few months. And as always, I’d appreciate any comments under this post.