Cloud performance monitoring: How to get the most out of your critical apps

Author: Paul Gillin

The importance of cloud performance monitoring should be evident to anyone who has had their work in progress vanish or their productivity disrupted by unreasonable load times or downtime. And the solution to these challenges isn't always obvious to the end user, because the source of the problem isn't visible: Is it on your computer? In the browser? Somewhere in the network? On the backend server?

In most cases, it's impossible to tell without sophisticated cloud performance monitoring and management tools, which are typically used by network service providers. That's why it's so important that the companies that support your network have cloud computing performance evaluation tools and technology to maximize visibility and optimize performance.

Cloud computing performance evaluation: What determines cloud performance?

There are many network-related factors that affect performance. Here are some elements to watch in cloud computing performance evaluation and monitoring:

Latency

Latency is the time it takes for data to reach its destination across a network. Latency is directly related to distance: the farther data must travel, the longer it takes signals to arrive. But other factors influence latency as well, including the number of hops (the routers and servers packets must pass through along the way), congestion on backbone networks, dropped packets, and the speed of "last mile" connections such as Wi-Fi routers and cellular data signals.
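
To make this concrete, here is a minimal Python sketch that estimates latency by timing how long a TCP connection takes to open. The host "example.com" is only a placeholder; point it at an endpoint you actually operate.

import socket
import time

def tcp_connect_latency(host, port=443, samples=5):
    """Average time in milliseconds to open a TCP connection to host:port."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # we only care about how long the handshake took
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# "example.com" is a placeholder; substitute a server you operate.
print(f"Average connect latency: {tcp_connect_latency('example.com'):.1f} ms")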

Jitter

Jitter is related to latency but is less predictable. It occurs when packets in a continuous stream of data arrive at irregular intervals. For example, if the person you're speaking to on a videoconference suddenly freezes or their voice becomes garbled or fuzzy, it's most likely jitter caused by too much traffic on a shared connection. Reports of jitter from your organization's end users should be factored into your cloud performance monitoring efforts.
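
One common working definition treats jitter as the average variation between consecutive latency samples. The sketch below, using invented sample values, shows the difference between steady and erratic delivery.

def jitter_ms(latency_samples_ms):
    """Mean absolute difference between consecutive latency samples, in ms."""
    diffs = [abs(b - a) for a, b in zip(latency_samples_ms, latency_samples_ms[1:])]
    return sum(diffs) / len(diffs) if diffs else 0.0

print(jitter_ms([20.1, 20.3, 19.9, 20.2]))  # steady delivery: ~0.3 ms
print(jitter_ms([20.0, 55.0, 18.0, 90.0]))  # erratic delivery: 48.0 ms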

Bandwidth and throughput

Bandwidth is the maximum transfer capacity of a network. The more bandwidth there is available, the faster the performance.

Throughput is the volume of data actually delivered across the transport medium, such as the backbone network, in a given period. Throughput and bandwidth are not the same thing: bandwidth describes the potential of a network, whereas throughput measures the data it actually delivers. A high-bandwidth network shared by many users may have lower throughput than a low-bandwidth network with just a few.
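
The distinction is easy to see in a quick calculation. The figures below are hypothetical: a link rated at 1 Gbps (bandwidth) that actually delivered a 250 MB file in 10 seconds (throughput).

def throughput_mbps(bytes_transferred, seconds):
    """Data actually delivered per unit time, in megabits per second."""
    return (bytes_transferred * 8) / seconds / 1_000_000

bandwidth_mbps = 1000                          # what the link could carry (1 Gbps)
measured = throughput_mbps(250_000_000, 10)    # what it actually carried
print(f"Bandwidth: {bandwidth_mbps} Mbps, throughput: {measured:.0f} Mbps")  # 200 Mbps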

Caching

Caching is used by content delivery networks and website operators to store frequently accessed content in memory for quick retrieval. Overloaded caches can cause degraded performance or session timeouts.
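
As a rough illustration, the following sketch shows the idea behind a cache with a time-to-live: content fetched once is served from memory until the entry expires. Real CDN and web-server caches are far more sophisticated; this is only a conceptual model.

import time

class TTLCache:
    """Minimal in-memory cache whose entries expire after ttl seconds."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]            # cache hit: served straight from memory
        self._store.pop(key, None)     # expired or missing
        return None

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

cache = TTLCache(ttl=30)
cache.put("/index.html", b"<html>...</html>")
print(cache.get("/index.html") is not None)  # True: no trip back to the origin server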

Network congestion

Network congestion relates to the amount of traffic traversing the network at any given time. The more data and sources of data that are present, the higher the congestion and the lower the throughput. Your cloud performance monitoring should track the ebbs and flows of end users on your network.

Packet problems

Packet problems include situations in which data packets are lost, duplicated or arrive out of order and need to be resequenced. All three problems reduce performance and can manifest as jitter or screen freezes.
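
All three conditions can be detected from packet sequence numbers. The helper below is a hypothetical illustration of the bookkeeping involved, not a real protocol implementation.

def classify_packets(received_seq, expected_count):
    """Tally lost, duplicated and out-of-order packets from sequence numbers."""
    seen = set()
    duplicated = reordered = 0
    highest = -1
    for seq in received_seq:
        if seq in seen:
            duplicated += 1            # packet arrived more than once
        elif seq < highest:
            reordered += 1             # packet arrived after a later one
        seen.add(seq)
        highest = max(highest, seq)
    lost = expected_count - len(seen)  # packets that never arrived at all
    return {"lost": lost, "duplicated": duplicated, "reordered": reordered}

# Packets 0-4 were sent; packet 3 never arrived, 2 arrived twice, 1 arrived late.
print(classify_packets([0, 2, 2, 1, 4], expected_count=5))
# {'lost': 1, 'duplicated': 1, 'reordered': 1}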

Selecting a network service provider for cloud performance monitoring

Your choice of network provider can significantly influence your vulnerability to these disruptions. Here are some features to look for in a provider's cloud performance monitoring toolbox:

Intelligent load-balancing

Intelligent load-balancing constantly monitors network traffic and routes each request to the fastest available resource.
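
A simplified version of the idea: given rolling latency measurements per backend (the numbers here are invented), send the next request to whichever backend has been responding fastest.

# Invented rolling latency measurements (ms) per backend, fed by a monitor.
recent_latency_ms = {
    "backend-a": [12.0, 14.0, 11.0],
    "backend-b": [45.0, 50.0, 48.0],
    "backend-c": [20.0, 19.0, 22.0],
}

def pick_backend(latencies):
    """Route the next request to the backend with the lowest average latency."""
    return min(latencies, key=lambda name: sum(latencies[name]) / len(latencies[name]))

print(pick_backend(recent_latency_ms))  # backend-a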

IP-based user routing

IP-based user routing is a form of intelligent routing that provides the most direct and unambiguous path between two points. Benefits include stronger security and better performance.

Content delivery networks

Content delivery networks (CDNs) are critical elements of internet infrastructure that act as exchange points between network nodes. CDNs store cached versions of major websites and conduct cloud performance monitoring for sites receiving high traffic volumes at any given time. Serving content from the cache minimizes the need for requests to complete the full round trip to the source server, which reduces latency.

Points of presence

Points of presence (PoPs) bring internet connectivity and CDN functionality to major metropolitan locations where content can be served with minimal latency.

Not all PoPs are the same. Legacy PoPs built in the days of dial-up connections and basic HTML websites lack the capacity to serve today's data-intensive streaming applications and responsive webpages. Modern versions are located near primary internet exchange points and have massive amounts of computing and caching power.

Peering 

Peering connects public networks to each other to enable rapid data interchange and better load-balancing by shifting traffic to other networks. Peering also makes new data sources available to customers and enhances backup and disaster recovery options.

Network performance monitoring

Network performance monitoring ties everything together. It gives the operator full visibility into the network down to the device level, with alerts that flag impending performance issues, often before users even notice a slowdown. Cloud performance monitoring enables operators to rebalance traffic loads or reroute traffic with little or no disruption.
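
A toy example of the alerting idea, assuming the monitor collects recent latency samples: flag a problem when more than a chosen fraction of samples exceeds a threshold. Real monitoring platforms use far richer signals; this only shows the principle.

def latency_alert(samples_ms, warn_ms=100.0, breach_fraction=0.2):
    """Flag a problem when too many recent samples exceed the threshold."""
    breaches = sum(1 for s in samples_ms if s > warn_ms)
    if samples_ms and breaches / len(samples_ms) > breach_fraction:
        return f"ALERT: {breaches}/{len(samples_ms)} samples above {warn_ms} ms"
    return "OK"

print(latency_alert([35, 42, 180, 220, 38, 40, 190, 44, 41, 39]))
# ALERT: 3/10 samples above 100.0 ms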

End user performance monitoring

Effective cloud performance monitoring goes beyond the network and even the application server layer to provide visibility into every aspect of the service delivery infrastructure, end to end. After all, performance measurements are only effective if they ultimately translate into a good digital experience for end users. With effective end user performance monitoring, your service provider can help improve visibility into employee productivity and, ultimately, user satisfaction.

Strong vendor relationships

Relationships with the major vendors of computing and networking equipment enable operators to negotiate lower costs and faster delivery times.

The future of cloud computing performance evaluation

Many network operators are now moving toward fully software-defined wide area networks (SD-WAN). Based on a virtualized foundation, SD-WAN simulates hardware such as servers, storage devices, routers, switches and firewalls in software. This enables those components to be reconfigured instantly and all but eliminates the need for field maintenance or installation of new equipment.

SD-WAN also enhances cloud performance monitoring with better visibility into the network, allowing operators to pinpoint a slowdown at the level of a device such as a Wi-Fi router. And because the network is virtualized in software, the resolution to common issues can be automated.

SD-WAN also permits customers to connect to multiple cloud service providers easily and securely. New services can be created much more quickly, enabling more innovation and greater responsiveness when something unexpected happens.

The choice of a network operator matters. There are many factors that can cause that "page unresponsive" message to flash across your screen. Make sure the network isn't one of them.

Discover how Verizon's managed network services can help ensure consistent cloud computing performance.