
Forrester and Mimecast Webinar: Protecting Against Targeted Attacks

As the torrent of malicious content and spam moved away from our enterprise inboxes to consumer and social platforms, we were perhaps lulled into believing we’d finally beaten the spam problem.

But this simply isn’t the case. The risks to our enterprise inboxes and data have morphed into more harmful and effective security threats.

Forrester and Mimecast Webinar ‘Protecting Against Targeted Attacks’ – join us next Tuesday, September 30th, at 10am Eastern (1500 UK, 1600 RSA). Register free here: http://mim.ec/Zdm7qY

Spear-phishing, or targeted attack by email, is the next generation of threat our IT teams are scrambling to deal with. And as more high-profile security breaches hit the headlines, with spear-phishing often the initial point of entry, it’s a threat that now has the attention of the C-suite.

So Mimecast is hosting another webinar in our series of ‘Expert Webinars’ to share essential advice on how to protect your business against spear-phishing and targeted attacks. The webinar is next Tuesday, September 30th, at 10am Eastern (1500 UK, 1600 RSA), and you can register for free here.

I’ll be joined by two industry experts: Rick Holland, the well-known Forrester Research analyst and IT security commentator, and Steven Malone, Mimecast’s own Security Product Manager.

Spear-phishers are specifically targeting you and your business in an effort to steal your intellectual property, customer lists, credit card databases and corporate secrets.

Whereas old-style phishing was a scattergun attack, spear-phishing is aimed at a handful of specific individuals within a business. The attackers research their targets over many months, often using social media platforms to gain useful information about you. As with phishing, email is the main attack vector for spear-phishing, with well-crafted, socially engineered emails being the tool of choice.

During the webinar we’ll be discussing the biggest threats and most dangerous attack tactics, recent high-profile case studies, the real-life cost of attacks, and the practical steps you can take to protect and educate your users.

Do leave a comment under this post or @reply me at @orlando_sc if there are any particular areas you want us to cover next week.


Six Email Continuity Mistakes – and How to Avoid Them

The 2014 Atlantic hurricane season is in full swing through November, putting your organization – and mission-critical systems, like email – at sudden risk from tropical storms, floods and fires.

Ask yourself: when was the last time you tested your business continuity plan? If the answer is a year ago or longer, you risk significant network downtime, data leakage and financial loss. According to Gartner, network downtime typically costs $5,600 per minute – more than $300,000 per hour – on average, depending on your industry. Don’t wait for disaster to strike. Treat email like the critical system it is, and avoid making these six mistakes that could jeopardize business continuity – and your job.
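
To see how quickly those figures add up, here is a quick back-of-the-envelope calculation using the Gartner average quoted above; the 90-minute outage is purely illustrative.

```python
# Back-of-the-envelope downtime cost based on the Gartner average above.
# The 90-minute outage below is purely illustrative.
COST_PER_MINUTE = 5_600            # USD, cross-industry average

outage_minutes = 90                # hypothetical 1.5-hour email outage
cost_per_hour = COST_PER_MINUTE * 60
outage_cost = COST_PER_MINUTE * outage_minutes

print(f"Average cost per hour: ${cost_per_hour:,}")                   # $336,000
print(f"Cost of a {outage_minutes}-minute outage: ${outage_cost:,}")  # $504,000
```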

Combat downtime during hurricane season by planning ahead.

  1. Not testing your continuity solution. You’ve devised and implemented what you believe to be a solid continuity solution, but you’ve never given it a production test. Instead, you cross your fingers and hope that if (and when) the time comes, the solution works as planned. There are two major problems with not testing your plan from the start. First, things get dusty over time. It’s possible the technology no longer works, or worse, that it was never properly configured in the first place. You might also not be regularly backing up critical systems; without testing the solution, you’ll learn the hard way at restore time that data is not being fully backed up. Second, when it comes to planning, you need a clear chain of command should disaster strike. If your network goes down, you need to know who to call, immediately. Testing once simply is not enough. You need to test your solution at least once a year, and depending on the tolerance of your business, you’ll likely have to test more frequently – quarterly or even monthly. A minimal automated check is sketched after this list.
  2. Forgetting to test fail back. Testing the failover capabilities of your continuity solution is only half the job. Are you prepared for downtime that could last hours, days or even weeks? The ability to go from the primary data center to the secondary one – and then to fail back – is critical, and this needs to be tested. You need to know that data can be restored into normal systems after downtime.
  3. Assuming you can easily engage the continuity solution. It’s common to plan for “normal” disasters like power outages and hardware failure. But in the event of something more severe, like a flood or fire, you need to know how difficult it is to trigger a failover. You also need to know where you need to be: can you trigger the failover from your office, or only from the data center? It’s critical to know where the necessary tools are located and how long it’ll take you or your team to get to them. Physical access matters, so distribute tools to multiple data centers as well as your local environment.
  4. Excluding policy enforcement. When an outage occurs, you must still account for regulatory and policy-based requirements that impact email communications. This includes archiving, continuity and security policies. Otherwise, you risk non-compliance.
  5. Trusting agreed RTO and RPO. In reality, you’ve got to balance risk and budget. When an outage happens, will the email downtime agreed upon by the business really stick? In other words, will the CEO really tolerate having no access to email for two hours? And will it be acceptable for customers to be out of touch with you for a whole day? The cost associated with meeting your RTO and RPO can open a gap between what is promised and what can actually be restored. If you budget for a two-day email restore, be prepared for the reality that an outage means two days without email for the entire organization. As part of your testing methodology, you may discover that you need more or less time to back up and restore data. As a result, you may need to implement more resilient technology – like moving from risky tape backup to more scalable and accessible cloud storage.
  6. Neglecting to include cloud services. Even when you implement cloud technologies to deliver key services, such as email, you still have the responsibility of planning for disruptions. Your cloud vendor will do disaster recovery planning on their end to provide reliable services, but mishaps – and disasters – still happen. Mitigate this risk by stacking multi-vendor solutions wherever possible to ensure redundancy, especially for services like high availability gateways in front of cloud-based email services, or cloud backups of key data.
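
As a complement to a full production drill, even a small scripted check can catch the “things get dusty” problem between tests. The sketch below simply confirms that a primary and a secondary mail gateway both still answer SMTP; the hostnames are placeholders for your own endpoints, and this is a starting point for automation, not a substitute for a proper failover test.

```python
# Minimal continuity smoke test: confirm the primary and secondary mail
# gateways both accept SMTP connections. Hostnames are placeholders.
import smtplib

GATEWAYS = {
    "primary":   "mail-primary.example.com",    # placeholder
    "secondary": "mail-secondary.example.com",  # placeholder
}

def gateway_reachable(host: str, port: int = 25, timeout: float = 10.0) -> bool:
    """Return True if the host answers with an SMTP banner and responds to NOOP."""
    try:
        with smtplib.SMTP(host, port, timeout=timeout) as smtp:
            code, _ = smtp.noop()   # harmless command that exercises the session
            return code == 250
    except (OSError, smtplib.SMTPException):
        return False

if __name__ == "__main__":
    for name, host in GATEWAYS.items():
        status = "OK" if gateway_reachable(host) else "UNREACHABLE"
        print(f"{name:10s} {host:35s} {status}")
```

Run on a schedule, a check like this turns the annual drill into a daily signal – though it says nothing about whether data can actually be restored, which still requires the full failover and fail back tests described above.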

With the proper testing and upfront business continuity preparation, you can significantly reduce – or even prevent – email downtime, data leakage and financial loss after disaster strikes.


The Dangers of Convenience

We live in an always-on, digital world. Information is at our fingertips. Mobile devices are pervasive.

Interactive websites that allow users to comment on posts, and social networking, are de rigueur. All these things encourage us to consume—and share—information continuously, often without regard for the consequences. Criminals are increasingly using this information, often detailed accounts of our personal lives, to their advantage in social engineering exploits that target specific individuals and attempt to exploit the trust they place in the technology, applications and websites they use.

Ransomware was distributed through Dropbox, with attackers demanding users pay a ransom to have their files, which have been encrypted and are hence unusable, returned to them.

In recent years, consumers have flocked to file sharing sites that allow them to upload and share very large files such as photos and videos with friends and family. Seeing just how convenient such sites are, many users are increasingly adopting them for business purposes as well, using them to upload information so that it’s available to them from any device they wish to use, wherever they are. It has been recognized for some time that this creates security risks for organizations, with sensitive data being placed on file sharing sites that are outside the control of the IT department—often without its knowledge. Bloor Research has recently published research that discusses the problems surrounding unsanctioned use of file sharing sites in organizations and provides pointers as to what organizations can do to give employees the convenience and flexibility they demand, but in a way that safeguards sensitive information and shields them from the perils of data loss.

But a relatively new problem with the use of file sharing sites is currently in the news. Criminals are turning to the use of such sites for hosting and spreading malware and viruses. In one such campaign, the Dropbox file sharing service has been targeted, with an estimated 500,000 users affected. In this case, ransomware was distributed, with attackers demanding users pay a ransom to have their files, which have been encrypted and are hence unusable, returned to them. It’s believed the attackers have so far netted $62,000 from this campaign alone.

Such attacks have been known about for some five years, but appear to be increasingly common. Just this month, an emerging practice came to light: the use of file sharing sites for high-value, low-volume attacks against high-profile, lucrative industries including banking, oil, television and jewelry businesses. Discovered by Cisco, these attacks are attributed to a group calling itself the “String of Paerls” group, which has been flying under the radar of security researchers since 2007, constantly changing its tactics to avoid detection.

These attacks highlight the problems many organizations are facing with the use of consumer-oriented services. Many organizations are still grappling with the issue of controlling the deluge of personally owned devices that are connecting to their networks—often outside of the purview of the IT department—as well as the use of cloud-based services by individuals or particular business units, many of which are not officially sanctioned by the organization. Now there is further evidence that they must add control of consumer-oriented file sharing services into the mix—not just to guard against the loss of sensitive information, but to prevent them being used as another vector for attacking the organization.

There are options available to IT that allow them to offer the same levels of convenience to users, but in a way that can bring back control over who is sharing what and with whom. Some of these options are discussed in the research published by Bloor Research referenced above. Centralized control and high levels of security are paramount. They must also be as easy to use as the consumer-oriented services employees are already used to if they are to gain widespread acceptance.

Today’s generation of consumers and employees demand convenience and the freedom to work as they wish. But that convenience brings many dangers to organizations if they cannot control where sensitive information is being posted or transferred, and who is accessing it, or guard against the risks employees might be exposing the organization to through the use of unsanctioned services. There is a fine line to tread between keeping employees satisfied and productive, and guarding the organization from malicious exploits and data loss that could dent its revenues, brand and reputation.


Graymail – Mail That You Want, but Just Not in Your Inbox Right Now

The mail you want, but just not right now. That seems like an odd way to talk about email: either you want it or you don’t. For years we’ve been talking about the unwanted types of email, like spam, that have grown to be a pest but have largely been dealt with by effective anti-spam services. Now, though, there’s a less distinct line between good and bad as far as our users are concerned. The email that sits in this middle ground has become known as graymail.

Mimecast’s new Graymail Control automatically categorizes graymail and moves it to a separate folder – allowing end users to review the messages at their leisure and keeping the inbox optimized.

More specifically, graymail is email like newsletters, notifications and marketing messages – the types of marketing email you are bombarded with when you buy something online or use your email address to sign up for something. Normally you are opted in to these marketing emails unless you manage to spot the often well-hidden opt-out tick box. These emails are initially interesting, but grow tiresome quickly.

You’re unlikely to want them all in your inbox right now; you’d rather they went somewhere else that makes them easier to read later. Many consumer-grade email providers offer a way of categorizing graymail, such as Gmail’s Primary and Promotions tabs.

Graymail isn’t new. The idea was first suggested by Microsoft researchers in 2007, at the now-defunct CEAS conference. Graymail, or Gray Mail as it was called then, was defined as messages that could be considered either spam or good. It’s fair to say many end users consider newsletters that they opted in to, mostly unknowingly, as spam, even though they could easily unsubscribe from the sender’s distribution lists.

Graymail is also described by the term “Bacn” (as in bacon). The term is thought to have been coined at PodCamp Pittsburgh 2 as a way to differentiate between spam, ham and bacn in your inbox.

The unwillingness of end users to unsubscribe, or to accept that the problem is somewhat self-inflicted, has led many enterprise IT teams to look for a solution. As a provider of email security services, Mimecast’s Threat Operations and Spam teams know first-hand how inclined users are to report bacn or graymail as spam. A large percentage of the email submitted to Mimecast for analysis as spam is in fact legitimate marketing email with valid unsubscribe links.

It has become increasingly obvious that end users will continue to be frustrated by the graymail problem. The most straightforward solution is to stem the flow in a way that keeps the enterprise inbox free of bacn, so legitimate business-related email takes priority. Mimecast’s new Graymail Control provides this capability by automatically categorizing graymail and moving it off to a separate folder – allowing your end users to review the messages at their leisure and keeping the inbox optimized.
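
To make the general idea concrete, here is a deliberately simple sketch of graymail triage. It is not how Mimecast’s Graymail Control works, and the IMAP server, credentials and folder name are placeholders: it just files messages carrying common bulk-mail headers (such as List-Unsubscribe) into a separate folder.

```python
# Illustrative graymail triage (not Mimecast's implementation): file messages
# that carry common bulk-mail headers into a separate folder over IMAP.
# The server, credentials and folder name are placeholders.
import email
import imaplib

def looks_like_graymail(raw_message: bytes) -> bool:
    """Crude heuristic: bulk-mail headers usually indicate newsletters or notifications."""
    msg = email.message_from_bytes(raw_message)
    if msg.get("Precedence", "").lower() in ("bulk", "list"):
        return True
    return any(msg.get(h) for h in ("List-Unsubscribe", "List-Id"))

def triage_inbox(server: str, user: str, password: str, target: str = "Graymail") -> None:
    """Move unread bulk-looking mail out of the inbox into the target folder (which must exist)."""
    with imaplib.IMAP4_SSL(server) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")
        for num in data[0].split():
            _, fetched = imap.fetch(num, "(RFC822)")
            if looks_like_graymail(fetched[0][1]):
                imap.copy(num, target)                  # file into the graymail folder
                imap.store(num, "+FLAGS", "\\Deleted")  # then flag for removal from the inbox
        imap.expunge()
```

A production service applies far richer analysis than a couple of headers, but the principle is the same: classify graymail, then move it out of the inbox rather than block it outright.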

If you’d like to find out more technical detail about how to configure Mimecast’s Graymail Control please visit our Knowledge Base article here.