Meeting the bandwidth demands of taking your business into the cloud

Deciding if the cloud is right for an organization is a tricky, mathematical dance.



Moving infrastructure from premises to the cloud can make sense for a lot of businesses. Few are going to look back on their lives and warmly recall the time they backed up their Exchange server or increased someone's mail quota. Microsoft and Google (among others) will run your mail system for you for a reasonable monthly fee.

And mail is just the start. While mail is perhaps the most natural cloud service (since it was a cloud service before there was such a thing as "the cloud"), customer relationship management software (such as Salesforce.com or Microsoft Dynamics), communication and collaboration (such as SharePoint and Lync), office productivity (such as Google Apps, Zoho, and Office 365), and even entire desktops are all available as cloud services.

To use any of these services, you will need a connection to the Internet. The office Internet connection is always important, of course. Even in the pre-cloud days, it was annoying not to have access to e-mail or the Web. But with cloud services, the burdens placed on the Internet connection become a whole lot more serious. An overtaxed Net connection becomes a dire impediment to productivity.

While on-premises systems can benefit from nice, fast LAN connectivity, with even 10 Gigabit Ethernet becoming a fixture in the server room and gigabit common elsewhere, cloud services generally have to make do with much scarcer bandwidth. For small and medium businesses, Internet bandwidth tends to be measured in megabits per second, typically a few dozen and sometimes in the single digits, especially at branch offices, retail premises, and other small sites.

That's unfortunate, because a good case could be made that these are the sites that stand to gain the most from cloud services, thanks to their lack of on-site IT staff. Even some of the less common services, such as virtual desktop infrastructure (VDI), can make sense in such environments: with VDI, the equipment on premises doesn't need to store any data, so a stolen machine doesn't mean lost or exposed data.

Varying needs

Bandwidth requirements will of course depend on the service being used. Common services like e-mail tend to be fairly easy on the bandwidth. Some less common services, such as cloud-hosted virtualized desktops, by contrast, can place heavy per-user demands on an Internet connection, especially in deployments with high resolution desktops or multimedia.

Some tasks can be highly variable. Cloud storage services—whether straightforward file sharing such as Box and Dropbox or more complex document management like SharePoint—can end up using a little bit of bandwidth or a lot. Word documents and Excel spreadsheets are generally relatively tiny; photographs and videos can be huge. The occasional upload of a big file to cloud storage can be enough to clog up the Internet tubes, which on shared connections is a problem for everybody.

On top of bandwidth, there are also important latency considerations. Some applications, such as e-mail, are pretty latency insensitive. That’s not to say that lower latency isn’t better—it makes everything a little snappier—but desktop mail clients, and most Web-based applications, tend to be designed to be tolerant to a certain amount of latency.

Other applications aren't so robust. Latency is an absolute killer for voice over IP. Delays of just a few tens of milliseconds are enough to make calls sound strange, and hundreds of milliseconds can render them almost unlistenable; around 150 milliseconds of one-way delay is generally regarded as the limit for a tolerable call.

VDI is similarly sensitive: snappy desktop experiences demand low latency connections to service providers. In many ways, the issues with VDI can be the most acute. Web apps don’t generally place demands on the network once a page is loaded, so a lot of the time latency is masked. You don’t notice the slow network when you’re not actually using it. They also have clear, well-defined points at which you have to wait—the submission of a form or clicking of a link. VDI, however, places nearly constant demands on the network, and it can make you wait anywhere within an app. On slow links, even navigating with the mouse, clicking buttons and typing can be disrupted.

Moreover, if a single local or Web-based application is a bit slow, users can often switch to some other task temporarily. If all their software is delivered through VDI, however, there’s no such escape route. A choppy virtual desktop will make every application they use unusable.

And of course, these issues can be intertwined: as we saw in the examination of latency, the large buffers on modern routers mean that bandwidth shortages (or rather, mismatches between local bandwidth and upstream Internet bandwidth) can in turn cause massive latency of many seconds. When latency is this high, even latency-tolerant applications can become thoroughly unpleasant to use.

Figuring out just how much bandwidth you need—or conversely, which services you can practically use given the bandwidth you already have—can be non-trivial.

As a general rule of thumb, 100 kilobits per second per user is sufficient for a good range of Web-delivered cloud services such as mail, collaboration, and CRM. That’s only a ballpark estimate, and the exact amount of bandwidth necessary—and tolerance of deviations from that bandwidth—will depend a lot on the services being used.
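
To put that rule of thumb into numbers, here is a minimal sizing sketch in Python; the 60 percent concurrency factor is our own illustrative assumption, not a measured value, and any real deployment should be confirmed by monitoring.

```python
# Back-of-the-envelope sizing based on the 100 kbit/s-per-user ballpark above.
# The concurrency factor is an assumption for illustration, not a measured figure.

PER_USER_KBPS = 100   # rough figure for Web-delivered mail, CRM, collaboration
CONCURRENCY = 0.6     # assumed fraction of staff active at once (hypothetical)

def estimate_site_kbps(users: int) -> float:
    """Estimated Internet bandwidth needed for a site, in kilobits per second."""
    return users * PER_USER_KBPS * CONCURRENCY

for headcount in (10, 25, 100):
    print(f"{headcount:>3} users: ~{estimate_site_kbps(headcount) / 1000:.1f} Mbit/s")
```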

Some services are easy to calculate. VoIP, for example, is straightforward. Each call will use a fixed amount of bandwidth that’s determined by the compression algorithm in use (typically between 5.3 and 64 kilobits per second) plus some amount of fixed overhead imposed by encapsulating the data within IP packets (typically 16 kilobits per second). The total bandwidth required is arrived at simply by adding the two numbers (data plus overheads) and multiplying by the number of lines in concurrent use.
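
As a rough illustration of that arithmetic, the sketch below adds a codec's bit rate to the flat per-call overhead figure quoted above and multiplies by the number of simultaneous calls. The codec rates shown are commonly published figures; real per-call overhead varies with packetization settings.

```python
# VoIP sizing: (codec bit rate + per-call encapsulation overhead) x concurrent calls.
# The flat 16 kbit/s overhead is the rough number quoted above and varies in practice
# with packet size and headers.

CODEC_KBPS = {
    "G.711": 64.0,    # uncompressed voice, highest bandwidth
    "G.729": 8.0,     # common low-bandwidth codec
    "G.723.1": 5.3,   # lowest-rate codec in the range quoted above
}
OVERHEAD_KBPS = 16.0  # rough per-call IP/UDP/RTP overhead

def voip_bandwidth_kbps(codec: str, concurrent_calls: int) -> float:
    """Bandwidth needed in each direction for a number of simultaneous calls."""
    return (CODEC_KBPS[codec] + OVERHEAD_KBPS) * concurrent_calls

print(voip_bandwidth_kbps("G.729", 10), "kbit/s")  # ten G.729 calls: 240.0 kbit/s
```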

Most other services, however, show much greater variation. There are no particular limits on how big an e-mail or a Web page will be, and accordingly, there’s no simple way of determining how much bandwidth a given service will demand. Latency insensitive services will also tend to tolerate bandwidth shortages to some extent; a slow page load in a Web app is acceptable in a way that a laggy phone call isn’t.

Some companies have bandwidth calculators that allow estimates to be made. Microsoft, for example, has one for Exchange, including Office 365. In many ways, this tool demonstrates the difficulty of the problem. As input, it requires numerous estimates of things like the number of e-mails received per day, the size of each e-mail, and the times of day when most e-mails are sent and received (to reflect that there's often a barrage of traffic first thing in the morning and just after lunch). Fill all the data in, and it will provide an estimate of bandwidth use over the course of the day.
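
In very rough terms, the kind of arithmetic such a calculator performs looks like the sketch below. Every input figure, and the assumption that a fifth of the day's mail lands in the busiest hour, is illustrative rather than anything drawn from Microsoft's model.

```python
# A crude, illustrative version of the arithmetic an e-mail bandwidth calculator
# performs. All inputs are assumptions for the sake of example; this is not
# Microsoft's model, just the general shape of the estimate.

USERS = 50
MESSAGES_PER_USER_PER_DAY = 80
AVG_MESSAGE_KB = 75        # assumed average size, attachments included
PEAK_HOUR_SHARE = 0.20     # assume a fifth of the day's mail arrives in the busiest hour

daily_kb = USERS * MESSAGES_PER_USER_PER_DAY * AVG_MESSAGE_KB
peak_hour_kb = daily_kb * PEAK_HOUR_SHARE

# Average bit rate over that busiest hour (kilobytes -> kilobits, spread over 3,600 s).
peak_kbps = peak_hour_kb * 8 / 3600
print(f"Rough peak-hour average: {peak_kbps:.0f} kbit/s ({peak_kbps / 1000:.2f} Mbit/s)")
```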

But this is just one service of many, and it’s arguably one of the more predictable ones. Perhaps as a result, many software-as-a-service providers essentially say that the way to know how much bandwidth you’ll need is to measure it. Usage patterns, from user to user and from company to company, vary so greatly that measurement is the only way to be sure.

In some ways, that's perhaps just as well. Microsoft's bandwidth estimator also crudely estimates the network latency that's needed. It rather liberally asserts that 320 milliseconds is good enough for Outlook Web Access. While this is technically true, and OWA will certainly work, it will nonetheless feel slow and basically unpleasant to use with that much latency. Providing more than this bare minimum will make for a much better experience.

Working smarter, not just faster

Having enough bandwidth isn't the only important consideration; making good use of the bandwidth you have also matters. While one doesn't want the latest YouTube video du jour to impede access to business-critical systems, many companies don't want to place outright blocks on such sites. We've seen before that streaming media can be a significant consumer of bandwidth, making that 100-kilobit-per-second ballpark estimate look very meager indeed. But keeping employees happy with the likes of Pandora radio and the occasional cat video may well be a worthwhile trade-off.

The standard answer, when the demands placed on bandwidth exceed the bandwidth actually available, is Quality of Service (QoS). Properly configured, network hardware can prioritize important traffic over unimportant traffic and guarantee that certain kinds of network activity get enough bandwidth.

VoIP is perhaps the biggest beneficiary of QoS: its combination of strict latency requirements and modest bandwidth needs makes it relatively easy to guarantee that it gets what it needs. However, even potential bandwidth hogs like VDI can be good candidates for high-priority treatment because of their latency demands.
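
For QoS to do its job, the traffic has to be identifiable. One common convention is for VoIP software to mark its packets with the DSCP "Expedited Forwarding" value so that QoS-aware gear can spot and prioritize them; the sketch below shows the idea on a plain UDP socket. The target address is a placeholder, socket.IP_TOS is only exposed on some platforms (Linux among them), and whether the marking is honored depends entirely on how the network is configured.

```python
import socket

# Mark a UDP socket's outgoing packets with DSCP EF (Expedited Forwarding, 46),
# the value conventionally used for voice traffic. DSCP occupies the top six
# bits of the old TOS byte, hence the shift.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# socket.IP_TOS is defined on Linux; other platforms may need different options.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Anything sent from this socket now carries the EF marking. Whether routers and
# switches along the path actually prioritize it is a policy decision.
sock.sendto(b"voice-payload-placeholder", ("192.0.2.10", 5004))  # placeholder address
```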

A simpler and perhaps even more robust alternative is to have separate Internet connections for different traffic. There’s no better way to ensure that your VoIP gets all the Internet it needs than to give VoIP its own connection and avoid competition with Facebook and Twitter.

Careful selection of protocols can also increase the efficiency of one's Internet traffic. In the VDI space, there are many different protocols in wide use. While they all do broadly the same thing (present a desktop remotely), they're not identical. Microsoft's RDP has a reputation for working well on low-bandwidth, high-latency connections, while other protocols, such as the PCoIP used by VMware View, can be better in high-bandwidth situations. PCoIP can deliver high frame rates and good-quality multimedia, but this comes at a price: it demands hundreds of kilobits per second of bandwidth per user.

With the greater dependence on the network that cloud services create, it becomes more important than ever to monitor its performance. Spikes in latency and shortages of bandwidth at best mean that e-mails will trickle in a little more slowly. At worst, they'll sap productivity or bring down the phone lines. Staying on top of the network's condition is essential.
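
A full monitoring system is beyond the scope of this article, but even a crude probe helps. The sketch below periodically times a TCP connection to a service endpoint as a rough stand-in for latency; the hostname and interval are placeholders, and a dedicated monitoring tool will do far better.

```python
import socket
import time

# Crude latency probe: time how long it takes to open a TCP connection to a
# service you depend on. It only approximates round-trip latency, but it needs
# no special privileges and makes spikes and outages easy to spot.
TARGET = ("outlook.office365.com", 443)  # placeholder endpoint; use your own services
INTERVAL_SECONDS = 60

def probe(target, timeout=5.0):
    """Return connection setup time in milliseconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection(target, timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

while True:
    latency_ms = probe(TARGET)
    if latency_ms is None:
        print("probe failed: target unreachable")
    else:
        print(f"connect time: {latency_ms:.1f} ms")
    time.sleep(INTERVAL_SECONDS)
```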

It’s also helpful to have a fallback position for those unfortunate times when a backhoe cuts through a fiber. This needn’t be expensive; we’ve heard of companies using 4G mobile hotspots, for example, to provide emergency connectivity. While this obviously won’t be a viable option for large sites with hundreds or thousands of people, it can certainly work at smaller locations.

Making it work even without the bandwidth

The most flexible solution is, of course, to provide as much bandwidth as possible with as low a latency as possible. With abundant low latency bandwidth, even complex applications and large amounts of data can be pushed into the cloud when it makes sense to do so.

Ultimately, however, not every bandwidth or latency problem can be solved at a reasonable price, and in that situation the decision to move to the cloud becomes harder. The cost and management advantages can still make cloud services worthwhile; there's an immense freedom in not having to handle all the day-to-day IT administrative tasks.

The advantages of systems like VDI can also be sold to users; there's convenience in being able to see the exact same desktop whether you're at head office, at a branch office, or working remotely from home, without having to carry a laptop around with you. VDI itself, especially over low-bandwidth links, might provide an experience that's somewhat inferior to a real PC, but these advantages might be enough to make up for it.

This was originally posted on Ars Technica.