by Clint Boessen
Microsoft has changed the way Offline Address Book (OAB) distribution works in Exchange 2013 to remove a single point of failure in the Exchange 2007/2010 OAB generation design. While this new method of generating and distributing the Offline Address Book has its advantages, it also has a disadvantage that can result in a breach of privacy, especially in multi-tenant environments. In this article we will look at how OAB generation worked in the past as opposed to how it works now, highlighting both the good and the bad.
Back in May 2009, I published an article entitled “How OAB Distribution Works”, which has received a large number of visits and can be found on my personal blog. It explains in detail the process behind OAB generation in Exchange 2007 and 2010, and I highly recommend it to anyone who is not familiar with OAB generation in previous releases of the product.
If you have not read the above article, let’s quickly summarise. In Exchange 2007/2010 every OAB has a mailbox server responsible for generating it. That server generates the OAB on a schedule and places it on an SMB share at \\mailboxservername\ExchangeOAB. The Exchange 2007/2010 CAS servers responsible for distributing the OAB then download it from this share to a folder advertised through Internet Information Services (IIS). Outlook clients discover the path of the IIS website through Autodiscover and download the files located under the OAB IIS folder via HTTP or HTTPS. If you need a more in-depth understanding of this process, again I encourage you to read the blog post above.
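For readers who want to poke at this themselves, the generation server, schedule and distribution points are all visible from the Exchange 2007/2010 Management Shell. A minimal sketch (the OAB name below is simply the default one, used as an example):

```powershell
# Exchange 2007/2010 Management Shell: see which mailbox server generates
# an OAB, on what schedule, and how it is distributed.
# "Default Offline Address Book" is the default OAB name, used as an example.
Get-OfflineAddressBook "Default Offline Address Book" |
    Format-List Server, Schedule, WebDistributionEnabled, VirtualDirectories

# Each CAS advertises its copy through an IIS OAB virtual directory,
# which Outlook clients discover via Autodiscover.
Get-OabVirtualDirectory | Format-List Server, InternalUrl, ExternalUrl
```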
Now the problem with the above design is that every OAB has one mailbox server hard coded as the server responsible for OAB generation. The whole point of Exchange Database Availability Groups is to allow mailbox servers to fail and have their databases fail over to other members of the same Database Availability Group, yet the OAB generation role presents a single point of failure. If the server responsible for generating the OAB fails, the generation process does not fail over to another server, because the OAB is hardcoded to use that specific mailbox server as its generation server. This means that until an administrator brings the failed mailbox server back or moves the OAB generation process to another mailbox server, the OAB in question will never be updated.
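To illustrate the manual intervention required (the server name here is a placeholder, not a value from this article), re-homing OAB generation in Exchange 2010 looks like this:

```powershell
# Exchange 2010: manually move OAB generation from a failed server
# to a surviving mailbox server. "MBX02" is a placeholder name.
Move-OfflineAddressBook -Identity "Default Offline Address Book" -Server MBX02

# Optionally force an immediate rebuild on the new generation server.
Update-OfflineAddressBook -Identity "Default Offline Address Book"
```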
To fix this in Exchange 2013, Microsoft needed a way to let any mailbox server fail without disrupting the OAB generation process; after all, the whole idea behind Database Availability Groups is the ability to allow mailbox servers to fail. Instead of spending development time on a separate failover technology for OAB generation, Microsoft incorporated the OAB generation process into Database Availability Groups. This means that instead of one mailbox server generating the OAB and sharing it out via SMB, the Exchange 2013 server hosting the active mailbox database containing the organisation mailbox is now the server responsible for generating the OAB. In fact, in Exchange 2013 the OAB is stored in an organisation mailbox, so if a mailbox server fails or a database failover occurs, the OAB moves along with it. This architecture change has removed the OAB generation single point of failure which caused problems for organisations in previous releases of the product.
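If you want to see this for yourself, the organisation mailbox is an arbitration mailbox carrying the OAB generation capability. A commonly used query (a sketch, not taken from this article) is:

```powershell
# Exchange 2013 Management Shell: find the organization (arbitration)
# mailbox responsible for OAB generation and the database that holds it.
Get-Mailbox -Arbitration |
    Where-Object { $_.PersistedCapabilities -like "*OABGen*" } |
    Format-List Name, Database, ServerName
```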
Whilst Microsoft removed the single point of failure from the OAB generation process, they introduced a problem with the distribution process. In previous releases, a service running on CAS servers known as the Exchange File Distribution Service downloaded a copy of each OAB from the mailbox servers performing the OAB generation task and placed the OABs in a web folder available for clients to download. This allowed companies running multiple OABs to set NTFS permissions on the OAB folders to restrict who was allowed to download each OAB, which is especially useful in Exchange multi-tenant environments to ensure each tenant can download only the address book applicable to their organisation.
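As an illustration only (the GUID folder name and group are hypothetical placeholders), restricting a tenant’s OAB under this older model came down to ordinary NTFS ACLs on the CAS web distribution folder:

```powershell
# Hypothetical example: grant one tenant's security group read access to
# its OAB folder on an Exchange 2010 CAS. "<OAB-GUID>" and
# "CONTOSO\TenantA-Users" are placeholders; in practice you would also
# remove the broader read permissions inherited from the parent folder.
$oabFolder = "C:\Program Files\Microsoft\Exchange Server\V14\ClientAccess\OAB\<OAB-GUID>"
icacls $oabFolder /grant "CONTOSO\TenantA-Users:(OI)(CI)R"
```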
In Exchange 2013 the Exchange File Distribution Service has been removed from Client Access Servers, and the Exchange 2013 CAS now proxies any OAB download request to the Exchange 2013 mailbox server holding the active organisation mailbox containing the requested OAB. The CAS finds which mailbox server this is by sending a query to Active Manager. As the Exchange 2013 CAS no longer stores each OAB in a folder under the IIS OAB directory, companies can no longer set NTFS permissions on those folders to restrict who has permission to download each respective OAB. It is also important to note that no means is provided for organisations to lock down, through access control lists, who can download each OAB held in an organisation mailbox. This presents a privacy problem for companies who offer hosted Exchange services: someone who knows what they are doing and has a mailbox within the Exchange environment could download the OABs of other organisations and, as a result, gather a full list of employee contacts for data mining purposes. Microsoft’s response to this threat, documented in the multi-tenant guidance for Exchange 2013, is for hosting companies to “monitor the OAB download traffic”; in other words, there is no real solution to prevent this from happening.
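For completeness, here is a hedged sketch of how an administrator could answer the same question Active Manager answers for the CAS, namely which server currently hosts the active copy of the database containing the organisation mailbox (building on the arbitration mailbox query shown earlier):

```powershell
# Sketch: determine which Exchange 2013 mailbox server will service OAB
# downloads, i.e. where the database holding the organization mailbox is
# currently mounted. Assumes a single OABGen arbitration mailbox.
$oabMbx = Get-Mailbox -Arbitration |
    Where-Object { $_.PersistedCapabilities -like "*OABGen*" }
Get-MailboxDatabase $oabMbx.Database -Status |
    Format-List Name, MountedOnServer
```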
For more information about the Exchange 2013 OAB distribution process I strongly recommend the following article published by the Exchange Product Team.
Clint Boessen is a Microsoft Exchange MVP located in Perth, Western Australia. Boessen has over 10 years of experience designing, implementing and maintaining Microsoft Exchange Server for a wide range of customers, including small- to medium-sized businesses, government, enterprise and carrier-grade environments. Boessen works for Avantgarde Technologies Pty Ltd, an IT consulting company specializing in Microsoft technologies. He also maintains a personal blog which can be found at clintboessen.blogspot.com.
by Barry Gill
Over the past two days, 55 of our technical customers in the UK joined us to find out more about Exchange 2013. We organized this with friends of ours, Nathan Winters and Nicolas Blank, who wrote a new book called ‘Microsoft Exchange Server 2013: Design, Deploy and Deliver an Enterprise Messaging Solution’. They were joined by expert speakers Brian Reid and Carl Holt.
The presenters have the full attention of the audience at the Mimecast Exchange event
We put this event on for our customers as part of our commitment to help them best exploit their messaging environment. We have also established a private community for Sys Admin customers on LinkedIn, where we will share content, including video content, after the event.
Over the two days we dived deep into the technical detail of Exchange. On day 1 we looked at mailbox and Client Access architectures, load balancing and publishing, and, most importantly, designing Exchange. Day 2 covered hybrid deployments, high availability and site resilience, and finally migration to Exchange 2013.
The event was extremely well received, with many of the attendees using the last two days as an opportunity to rapidly skill up in preparation for pending upgrades!
So check back on our blog over the coming week for highlights from the discussions. Or join our LinkedIn community if you are a technical customer.
by Tim Bond
Last month, Mimecast announced Large File Send for Outlook, and personally I can’t wait for it to be rolled out. This is something that Capsticks, as a security-aware firm, and our clients have been crying out for.
Obviously, it was exciting news for the end users, who will now be able to send files of up to 2GB from Outlook. But equally important is the impact it will have on people like me running an IT department. That’s why, when Mimecast asked me to guest post about the service on its blog, I jumped at the chance.
Capsticks is a specialist healthcare law firm, ranked as the number one healthcare firm in both Legal 500 and Chambers legal directories
The first thing that struck me about the new service was that we can now meet user demand for large file sharing whilst ensuring the files stay within our existing infrastructure governance. Plus, as the user experience is so straightforward, it should be easy to persuade our users to switch away from consumer file sharing services. On the governance side, Mimecast has really delivered without hurting that user experience: users get to set custom expiration dates on the files they send, administrators can control those expirations, and the administrator is provided with audit logs, download counters and reporting.
The other benefit I’m really looking forward to is improved storage. As the Mimecast service intercepts the large file and stores it in the secure cloud, it bypasses the constraints of our Exchange server. Plus, Large File Send makes each inbox go further, as large attachments no longer hog users’ storage allowance on the server. Cloud storage also eliminates duplication of large files: instead of a large file sitting in both the sender’s and the receiver’s mailbox, one copy sits in the cloud. In addition, the service allows large attachments from internal and external mail to be removed from the email server by administrator-defined policy.
As you’d expect from Mimecast, the large files are protected with advanced security. Data Leak Prevention controls can be set centrally for sensitive information and all file uploads are SSL encrypted and stored with AES encryption. Also, you can be notified of policy breaches as well as determine the file size that can be sent and received within the organization. All large file attachments are securely uploaded from Microsoft Outlook to the Mimecast cloud, where they’re scanned according to security policies defined by the IT administrator before being sent on to the recipient.
It was only when I really started using Large File Send for Outlook and exploring these features that I appreciated how useful it was going to be for IT teams. If, like us at Capsticks, you’re always looking for ways to tighten security and improve visibility of data but want to offer users modern features, it’s a big piece of the puzzle. Mimecast really gets the challenges I face as Head of IT in a law firm and I can’t wait to see what Mimecast brings out next.
by Nicolas Blank
In chapter one of “Microsoft Exchange Server 2013: Design, Deploy and Deliver an Enterprise Messaging Solution”, we talk about constraints that may be forced upon us when designing Exchange. One of these constraints may be that we must use either existing hardware or the incumbent virtualization solution. Existing hardware can be a bear of an issue: if the sizing doesn’t fit the hardware, then you don’t really have an Exchange deployment project anymore.
Virtualization, however, carries with it the promise of overcommitting memory, disk and CPU resources, features used by most customers taking advantage of virtualization technologies. Note that overcommitting anything in your virtualization platform when deploying Exchange is not only a bad idea, it’s an outage waiting to happen.
Virtualization is also not free when it comes to converting physical hardware to emulated virtual hardware. The figures vary between vendors, but you may be looking at a net loss in the range of 5-12 percent across the entire guest’s performance. Coming back to constraints, let us assume your customer, or your company, requires you to virtualize and to use VMware as the chosen hypervisor.
Once you’ve taken into account that you’re virtualizing, you then need to size your guest as if you were sizing its real-world physical equivalent. Let’s assume, for argument’s sake, that you end up requiring four cores per server, but you allocate eight, since more cores never hurt anyone, right?
You read the prevailing guidance carefully, so you decide to use an existing blade with eight cores and allocate eight cores to Exchange, bearing in mind that you’ve already allocated eight cores to two other applications on the same blade. No fuss, you may think: the guidance states that you should allocate no more than two virtual cores per physical core. And since you’re a conscientious SysAdmin, you’ve benchmarked CPU usage on the VMware host and decided that the values are acceptable.
Now it turns out that for some reason Exchange seems to run non-optimally. You decide to move Exchange to another blade with more CPUs and double the core count within the guest from eight to 16 vCPUs, since more CPUs never hurt anyone, right?
Turns out the expected linear increase in performance never materializes… so where do you turn next?
A good place to start would have been the vendor’s specific guidance pertaining to Exchange; in this case VMware supplies the Exchange 2010 best practices guide, which states (emphasis added):
Consequently, VMware recommends the following practices:
- Only allocate multiple vCPUs to a virtual machine if the anticipated Exchange workload can truly take advantage of all the vCPUs.
- If the exact workload is not known, size the virtual machine with a smaller number of vCPUs initially and increase the number later if necessary.
- For performance-critical Exchange virtual machines (production systems), the total number of vCPUs assigned to all the virtual machines should be equal to or less than the total number of cores on the ESXi host machine.
While larger virtual machines are possible in vSphere, VMware recommends reducing the number of virtual CPUs if monitoring of the actual workload shows that the Exchange application is not benefitting from the increased virtual CPUs. For more background information, see the “ESXi CPU Considerations” section in the white paper Performance Best Practices for VMware vSphere 5 (http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf).
Before “consequently”, the guide briefly introduces VMware’s Virtual Symmetric Multi-Processing model and details a wait state known as “ready time”. Ready time is the metric that reveals why your Exchange workloads in VMware are not benefiting from more processors, assuming that ready time is consistently high (more than 5%).
The consequence of throwing more vCPUs at a guest than it requires is that the guest spends more time than necessary in “ready time”, as the hypervisor waits for ALL the underlying cores it believes are available to the guest to become free to execute instructions. In other words, the guest OS is ready to process instructions on the processor, but the hypervisor forces the guest to wait until all the physical cores are available. This state becomes much worse as the ratio of vCPUs to physical CPUs increases.
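Ready time is also easy to measure. As a rough sketch of my own (not from the VMware guide), PowerCLI can pull the cpu.ready.summation counter and convert it into the ready percentage that the 5% rule of thumb refers to; the vCenter address and the “EXCH01” VM name below are placeholders:

```powershell
# VMware PowerCLI sketch: approximate CPU ready % for a guest.
# Real-time samples cover 20-second intervals, and the counter value is
# the milliseconds of ready time accumulated in each interval.
Connect-VIServer vcenter.example.com    # placeholder vCenter server

Get-VM "EXCH01" |
    Get-Stat -Stat cpu.ready.summation -Realtime |
    Where-Object { $_.Instance -eq "" } |    # aggregate across all vCPUs
    ForEach-Object {
        $readyPercent = ($_.Value / (20 * 1000)) * 100
        "{0}  {1:N1}% ready" -f $_.Timestamp, $readyPercent
    }
```

Sustained values above 5% suggest the guest is waiting on the scheduler rather than doing useful work.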
In several chapters we make reference to the Windows Server Virtualization Validation Program (SVVP) and guide you to make sure that your chosen virtualization platform is listed and supported.
VMware is listed as supported for multiple versions of Exchange and Windows operating systems, yet your server performance is still bad. Does that mean it’s a bad hypervisor?
The point is that VMware is not a bad hypervisor, but failing to understand how VMware allocates CPU resources, and failing to follow VMware’s guidance, will result in poor performance for your Exchange servers.
Had you followed the guidance, you would have started with fewer vCPUs (you needed four) rather than more and, heeding VMware’s advice to reduce the core count, you would have allocated one vCPU per physical CPU, leading to gratifyingly low ready times.
This is another stark reminder that we need to read the relevant documentation as part of our planning process: all the relevant documentation, not just that of the new software we are planning to use.
Nicolas Blank has more than 15 years of experience with various versions of Exchange, and is the founder of and Messaging Architect at NBConsult. A recipient of the MVP award for Exchange since 2007, Nicolas is a Microsoft Certified Master in Exchange and presents regularly at conferences in the U.S., Europe, and Africa.
Nicolas will be running a two-day ‘Mimecast Exchange’ training event on the 31st of October and the 1st of November at Microsoft’s Cardinal Place in London. For your opportunity to win a place, please read this blog post about the event.
(Image courtesy of Random Tony)
by Glenn Brown
This week, we released an update to the Mimecast Mobile app for iOS and Android devices.
We’ve already received a number of positive Tweets about the new app and some ideas to take into future updates – special mention to Duncan James, Colin, Peter Annandale, Alex Heer and Rory John for your comments!
The key benefit for users is of course mobile email continuity: being able to send and receive emails when primary mail servers are down! In addition, the app introduces new archive search options that allow you to access old emails going back years, as well as the ability to search across delegate mailboxes. You can now also manage your personal or moderated ‘on hold’ queues, reviewing, blocking or releasing held messages from Mimecast, and you can add senders to your blocked or permitted sender lists. Finally, customers who make use of email Smart Tags can now access these through the app.
Mimecast Mobile App – iPad Screenshot
Even though the slick new user interface grabs the attention, the app still delivers on the areas traditionally valued in business: security and control.
IT Managers are constantly trying to balance the security of corporate data with user productivity, ensuring users have the right tools to get the job done. Providing users with access to corporate data from smartphones and tablets invariably enhances productivity; however, controlling and securing that data without impeding the user is not only a challenge but a major concern. Mobile users can inadvertently put corporate data at risk by opening email attachments in third-party apps (e.g. Dropbox, Evernote, PDF Reader), using unsecured networks such as public Wi-Fi hotspots, and sending corporate mail from personal accounts when primary email servers are unavailable. All of these are very accessible and convenient to the user, yet it is these very characteristics that make them a target for hackers and a concern for the organization.
With this in mind we wanted to provide functionality that supported the challenge of securing corporate data on mobile devices.
You now have complete control over your Mimecast UEM administration, with the ability to specify access to the mobile apps using Active Directory groups and to enable individual features within the apps.
The app runs within its own secure container, with all communications secured between the app and your Mimecast UEM account. This ensures all emails and files sent from the app are subject to your existing DLP and security policies, and prevents users from inadvertently sending corporate data from a personal account. Another key security feature is the protection of email attachments from distribution to, or access by, third-party apps on the device, achieved by disabling the ‘open in’ functionality native to the device’s operating system.
For further security and management, you can enforce trusted network access, which restricts the app to accessing data from known and trusted networks, preventing access from the free public Wi-Fi hotspots that are often insecure and left open to attack. You can also enforce pinlock and password requirements to prevent unauthorized access.
The new app is available now and can be downloaded from Google Play or the iTunes App Store. We hope you enjoy it!