- Organizations are becoming increasingly distributed, spanning remote offices, strategic partners, outsourced workforces, and ad hoc teams, in order to tap the best talent at the lowest cost
- Organizations are cutting headcount and trimming operational budgets, making employee productivity more important than ever before
- Networked applications are involved in nearly every aspect of business operations, including product design, production, sales, marketing, accounting, logistics, and customer service
It would seem prudent, then, for organizations to monitor the performance of their applications, since networked applications are 1) the tools most workers rely on, and 2) a threat to worker productivity whenever they slow down or fail.
But many organizations don't systematically monitor application performance at all. This jarring revelation appears in a study about cloud computing just published by InformationWeek.
The study noted that 40% of respondents didn't have a system in place to monitor internal applications, let alone cloud applications. An integrator interviewed in the study remarked that fewer than 30% of his customers had application monitoring systems in place.
Given the rising popularity of video and voice applications, which require high-performance, low-latency network connections, and the growth of cloud computing (in use or about to be in use at 27% of the organizations surveyed), the lack of application monitoring seems like trouble in the offing.
The InformationWeek article offers a number of helpful suggestions, including the use of WAN optimization for accelerating applications serving remote offices. Application performance monitoring solutions from companies such as Blue Coat, Fluke Networks, NetScout, and WildPackets can also be helpful. A new standard called Apdex, which scores the quality of service an application delivers based on response-time measurements, is gaining a following and is also worth a look.
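For the curious, the Apdex math is straightforward: pick a target response time T, count samples at or under T as "satisfied," samples up to 4T as "tolerating," and anything slower as "frustrated"; the score is (satisfied + tolerating/2) divided by the total sample count. Here is a minimal sketch in Python (the sample data and the 0.5-second target are illustrative choices, not part of the standard, which leaves T up to each site):

```python
# Minimal Apdex sketch: samples <= t are "satisfied", samples <= 4t are
# "tolerating", and anything slower is "frustrated" and counts for nothing.
def apdex(response_times, t=0.5):
    """Apdex score for a list of response times in seconds, given target t."""
    if not response_times:
        raise ValueError("need at least one sample")
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Example: mostly fast responses with a few slow outliers.
samples = [0.2, 0.3, 0.4, 0.9, 1.1, 2.5, 0.1, 0.35]
print(f"Apdex(T=0.5s) = {apdex(samples):.2f}")  # prints 0.75
```

A score of 1.0 means every user was satisfied; anything below about 0.7 is generally read as a sign of user frustration.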
Disclosure: Blue Coat is a client.
1 comment:
John, I am really glad that someone picks up the ball on this subject. I have my own paradigm for this new way of NOT flying blind anymore with respect to application performance. I call it "Service-Oriented Network Management". The reason is obvious (I think): traditionally, networks and their assets have been managed from a device perspective. So whenever there was a quality issue on the wire, users would call the helpdesk, and all sorts of people would start looking into their infrastructure devices to figure out where the problem came from. Syslogs, management systems, utilization figures, event logs, etc. would be used to troubleshoot the problem. Very often the result was that nobody (no device) seemed to have a problem, yet the network, or rather the service to the end user across the network, was still slow... Protocol analyzers wouldn't really do the job either, because a) they are always plugged in at the wrong spot, or b) they are not plugged in at all.
So looking at the world from the end user's perspective - monitoring the performance of networked applications - is the way forward. Taking that approach allows IT organizations to be proactively alerted to issues that reflect the end user's quality of experience - the service quality, so to speak. Not only that: if the system is smart enough, it will immediately point toward the root cause of a performance problem, allowing IT to resolve it much more quickly and offer high-quality services to end users. That in turn translates into higher workforce efficiency and therefore lower cost and higher return on investment...