New Webinar: Virtualization and Cloud Performance


Whether you are virtualizing your IT environment or moving applications to the cloud, the dynamics and complexities of these IT transformations can cause significant performance and user experience issues that diminish the benefits of new IT service models and risk interruption of critical business processes.

Join us on Wednesday, March 12 at 2pm ET / 11am PT for the webinar “Virtualized and Cloud Apps – Why a Process Oriented Approach Delivers Maximum Performance” to explore how to create service improvement initiatives based on end-to-end services and achieve true IT Service Management, i.e., managing services instead of just processes or components.

Register now: https://www4.gotomeeting.com/register/800828335

Performance management experts John Worthington (Director of Consulting, Third Sky) and Bala Vaidhinathan (CTO, eG Innovations) will discuss:

  • The importance of performance management in virtualized, private/public cloud environments
  • Integrating Event Management with the Service Lifecycle to accelerate business value
  • The benefits of a process approach to performance management
  • The importance of end-to-end IT service visibility for enabling IT transformation

We look forward to seeing you online!

Automated Application Performance Monitoring: The ‘Easy’ Button for IT


By David Sims, TMCnet Contributing Editor.

Isn’t hitting the “Like” button on Facebook easy? Wouldn’t it be great if there were an “Easy” button when it comes to, oh, application performance monitoring?

Srinivas Ramanathan, founder and CEO of eG Innovations, recently pointed out that while it’s obvious whenever there is a problem with IT operations in a company, it’s far less obvious where and what exactly the problem is.

Bottom line is, business users just want their equipment and networks to work. They don’t want to get bogged down in discussions of “Well, it’s a middleware issue, that’s not really our department.”

Ramanathan outlines the problem, which is now being exacerbated by virtualization. As he explains, IT operations teams are organized and managed along tiers: databases, services and apps each have their own tier, each with its own administrator and its own tools for administration and monitoring. So when a sales rep calls in complaining about how slow his computer is, the first step is to find whose tier has the issue.

This is usually where you want that “Easy” button. As Ramanathan says, “Often, troubleshooting is the most time-consuming and expensive step in the incident management process.”

A problem in one tier will affect performance across all tiers. Virtualization makes it more difficult, since now you have what Ramanathan calls “a new type of dependency,” and “a single malfunctioning VM can impact the performance experienced by all the other VMs.”

“Traditional static approaches to performance monitoring and management are blind to this new reality,” Ramanathan says, adding that “while management of business services that spanned multiple physical tiers was already a challenge, the introduction of virtualization has made the problem much, much harder.”

To address the problem, Ramanathan recommends automated performance management. It can monitor “every layer of every tier,” provide root-cause diagnosis and help determine baselines, so it can tell you when something is out of the ordinary.
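
As a rough illustration of the baselining idea (a minimal sketch, not eG's actual algorithm – the function names and sample values are made up), the snippet below learns a per-metric baseline from historical samples and flags new values that fall outside a configurable band:

    from statistics import mean, stdev

    def build_baseline(history):
        """Summarize historical samples for one metric into a mean/stdev baseline."""
        return {"mean": mean(history), "stdev": stdev(history)}

    def is_out_of_ordinary(value, baseline, n_sigmas=3):
        """Flag a sample that deviates more than n_sigmas from the learned baseline."""
        return abs(value - baseline["mean"]) > n_sigmas * baseline["stdev"]

    # Hypothetical response times (ms) observed for one application tier
    history = [120, 135, 128, 140, 122, 131, 127, 138, 125, 133]
    baseline = build_baseline(history)

    for sample in (129, 132, 410):   # 410 ms is clearly out of the ordinary
        if is_out_of_ordinary(sample, baseline):
            print(f"ALERT: {sample} ms is outside the normal band")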

Done correctly, Ramanathan says, automated management lets you know where to invest in your infrastructure, mainly by showing you where the existing bottlenecks are.

So really, it’s as close to an “Easy” button as you can get in IT today.

[This article was written by David Sims, TMCnet Contributing Editor and originally appeared on TMCnet online.]

Virtual Desktop Success with Performance Assurance (Part 2)


In part 1 of this article, we talked about how the current VDI deployment cycle is broken, often overlooking the new inter-dependencies and performance implications introduced by desktop virtualization. To ensure VDI success, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout.

Understanding the performance requirements of desktops will also help plan the virtual desktop infrastructure more efficiently. For example, desktop users known to be heavy CPU consumers can be load balanced across servers. Likewise, by planning so that a good mix of CPU-intensive and memory-intensive user desktops is assigned to each physical server, it is possible to get optimal usage of the existing hardware resources.
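
The placement idea can be sketched with a simple greedy heuristic (purely illustrative – the desktop profiles and host names below are invented, and real capacity planning weighs many more factors): place each desktop on the host whose most-loaded resource stays lowest, so CPU-heavy and memory-heavy users naturally end up mixed.

    # Illustrative greedy placement: each desktop is characterized by expected CPU
    # and memory demand (as a fraction of one host's capacity); each desktop goes
    # to the host whose most-loaded resource would stay lowest after placement.
    desktops = [
        {"user": "cad_user",    "cpu": 0.30, "mem": 0.10},   # CPU-intensive
        {"user": "analyst",     "cpu": 0.05, "mem": 0.25},   # memory-intensive
        {"user": "developer",   "cpu": 0.20, "mem": 0.20},
        {"user": "office_user", "cpu": 0.05, "mem": 0.05},
    ]
    hosts = {"esx01": {"cpu": 0.0, "mem": 0.0}, "esx02": {"cpu": 0.0, "mem": 0.0}}

    def placement_cost(host_load, desktop):
        """Dominant resource utilization if the desktop were added to this host."""
        return max(host_load["cpu"] + desktop["cpu"], host_load["mem"] + desktop["mem"])

    for d in sorted(desktops, key=lambda d: max(d["cpu"], d["mem"]), reverse=True):
        target = min(hosts, key=lambda h: placement_cost(hosts[h], d))
        hosts[target]["cpu"] += d["cpu"]
        hosts[target]["mem"] += d["mem"]
        print(f"{d['user']:12s} -> {target}")

    print(hosts)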

Taking this discussion one step further, it is interesting to draw a parallel with how server virtualization evolved and to see what lessons we can learn as far as VDI is concerned.

A lot of the emphasis in the early days was on determining which applications could be virtualized and which ones could not. Today, server virtualization technology has evolved to a point where more virtual machines are being deployed in a year than physical machines, and almost every application server (except very old legacy ones) virtualizes fairly well. You no longer hear anyone asking whether a particular application server can be virtualized. Having moved beyond a focus on the hypervisor alone, virtualization vendors have realized that performance and manageability are key to the success of server virtualization deployments.

VDI deployments could be done more rapidly and more successfully if we learn our lessons from how server virtualization evolved. VDI assessment needs to expand its focus beyond just the desktop and look at the entire infrastructure. Attention during VDI rollouts has to be paid to performance management and assurance. To avoid a lot of rework and problem remediation down the line, performance assurance must be considered early on in the process and at every stage. This is key to getting VDI deployed on a bigger scale and faster, with a great return on investment (ROI).

To learn more about VDI performance, join the on-demand webinar “Top-5 Best Practices for Virtual Desktop Success”.

Virtual Desktop Success with Performance Assurance (Part 1)


Very often, when an enterprise starts on the virtual desktop journey, the focus is on the user desktop. This is only natural – after all, it is the desktop that is moving – from being on a physical system to a virtual machine.

Therefore, once a decision to try out VDI is made, the primary focus is to benchmark the performance of physical desktops, model their usage, predict the virtualized user experience and based on the results, determine which desktops can be virtualized and which can’t. This is what many people refer to as “VDI assessment”.

One of the fundamental changes with VDI is that the desktops no longer have dedicated resources. They share the resources of the physical machine on which they are hosted and they may even be using a common storage subsystem.

While resource sharing provides several benefits, it also introduces new complications. A single malfunctioning desktop can consume so many resources that it impacts the performance of all the other desktops. Whereas in the physical world the impact of a failure or a slowdown was minimal (if a physical desktop failed, it would impact only one user), the impact of a failure or slowdown in the virtual world is much more severe (one failure can impact hundreds of desktops). Therefore, even in the VDI assessment phase, it is important to take performance considerations into account.

In fact, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout. The new types of inter-desktop dependencies that exist in VDI have to be accounted for at every stage.

For example, in many of the early VDI deployments, administrators found that when they just migrated the physical desktops to VDI, backups or antivirus software became a problem. These software components were scheduled to run at the same time on all the desktops. When the desktops were physical, it didn’t matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely impacted the performance of the virtual desktops. This was not something that was anticipated because the focus of most designs and plans was on the individual desktops.
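
One simple mitigation is to stagger those jobs across a maintenance window instead of letting every desktop start them at the same minute. Below is a minimal sketch of the idea (the desktop names, window and batch size are hypothetical):

    from datetime import datetime, timedelta

    def staggered_schedule(desktops, window_start, window_minutes, batch_size):
        """Spread job start times across a maintenance window so that only
        batch_size virtual desktops hit the shared host and storage at once."""
        batches = [desktops[i:i + batch_size] for i in range(0, len(desktops), batch_size)]
        step = timedelta(minutes=window_minutes / max(len(batches), 1))
        return {vm: window_start + i * step
                for i, batch in enumerate(batches)
                for vm in batch}

    desktops = [f"vdi-{n:03d}" for n in range(1, 13)]
    schedule = staggered_schedule(desktops, datetime(2011, 1, 10, 1, 0),
                                  window_minutes=120, batch_size=3)
    for vm, start in sorted(schedule.items()):
        print(vm, start.strftime("%H:%M"))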

In part 2 of this article, we will take a closer look at the parallels to server virtualization and what lessons we can learn for VDI.

Providing 360 Degree Visibility of Virtual Machines – Inside and Outside


If you are responsible for a virtual infrastructure, you’re probably already aware that metrics like CPU ready time, balloon memory, IOPS, etc. are important indicators of how the virtual infrastructure is performing. These metrics together provide what we refer to as the “outside view” of a virtual machine (VM). The virtualization platforms – whether VMware vSphere, Citrix XenServer, or Microsoft Hyper-V – provide metrics that can be used to construct this outside view of a VM.

The outside view of a VM reveals how the resources of the physical machine are distributed among the VMs.

The outside view of a VM focuses on the resources of the physical machine and how these resources are used by the different VMs. Using the outside view, you can answer questions such as “Which VM is responsible for the resource utilization of a machine?” and “Is the physical machine adequately sized – does it have sufficient CPU or memory resources to handle its workload?”
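
As a concrete example of an outside-view metric, CPU ready time is reported by the hypervisor as milliseconds of ready time accumulated over a sampling interval; converting it into a per-vCPU percentage makes it easier to judge. A minimal sketch (the sample values are made up, and any threshold you apply to the result is a rule of thumb, not a hard limit):

    def cpu_ready_percent(ready_ms, interval_s, vcpus=1):
        """Convert a CPU-ready summation (milliseconds accumulated over one
        sampling interval) into a per-vCPU percentage of that interval."""
        return (ready_ms / (interval_s * 1000.0)) / vcpus * 100.0

    # Example: 4,000 ms of ready time over a 20-second sample for a 2-vCPU VM
    pct = cpu_ready_percent(ready_ms=4000, interval_s=20, vcpus=2)
    print(f"CPU ready: {pct:.1f}% per vCPU")   # 10.0% - high enough to investigate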

Time-based metrics within a VM can be inaccurate because of clock skews.

For a long time, many have suggested that the outside view of a VM is the only way of monitoring a virtual infrastructure. One of the main reasons for this is that time-based measurements made inside the VMs are likely to be inaccurate, because a VM may not be running at the moment a timer interrupt is supposed to fire. As a result, metrics such as response time, requests per second, disk I/O per second, etc. taken from within the VM’s operating system are not absolute performance indicators.

The inside view of a VM is critical for additional diagnosis. The inside view reveals how applications running within the VM are using resources allocated to the VM.

From our practical experience with virtual infrastructures, we have observed that the outside view of a VM alone is not sufficient for effectively managing a virtual infrastructure. With the outside view of a VM, an administrator can determine which VM is taking up excessive resources. However, the immediate next question is always “WHY is the VM taking up resources?” – is it because of excessive load on the VM, or because of a runaway process running inside the VM? To answer such questions, it is important to understand what is happening inside the VM. This is the “inside view of a VM”. Together, the inside and outside views provide 360 degree visibility into the VM.


eG Enterprise provides the inside and outside view of VMs using the same agent. The licensing is very simple too – a single monitoring license is required per physical machine and is sufficient to monitor all the VMs – both from the outside and from the inside. To understand more about eG Enterprise’s unique In-N-Out monitoring capabilities for VMs, check this new presentation. Click here >>>

eG’s In-N-Out monitoring of VMs addresses the limitations of time-based monitoring inside the VMs in the following ways:

  • Clock skews only impact metrics that are time-based. Many metrics collected within a VM are not time-based – for example, memory usage of processes, handle usage of processes, processes queued waiting for disk requests, etc. – yet, these metrics can reveal very useful information about activity within the VM.
  • Clock skews make metrics obtained from within a VM unreliable as “absolute” indicators of performance. Metrics from within a VM are still very good “relative” indicators. For example, by comparing the percentage of CPU usage or the disk I/O rates for each of the applications, one can determine which applications are responsible for a VM’s high resource usage (see the sketch below).
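
A minimal sketch of the “relative indicator” idea: rank the processes inside a VM by their share of whatever the VM as a whole consumed during the interval. The process names and numbers below are made up; the point is that the ranking remains meaningful even when the absolute values are skewed by timer drift.

    # Hypothetical per-process CPU samples collected inside a VM over one interval
    process_cpu_seconds = {
        "java (app server)": 42.0,
        "mysqld":            18.0,
        "backup_agent":      95.0,   # runaway job
        "sshd":               0.4,
    }

    total = sum(process_cpu_seconds.values())
    for name, cpu in sorted(process_cpu_seconds.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:20s} {cpu / total * 100:5.1f}% of the VM's CPU consumption")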

For more information on eG’s In-N-Out monitoring for virtual infrastructures, click here >>>

Management Technologies will Play a Central Role in Fulfilling the Promise of Cloud Computing and Virtualization Technologies


2011 is almost here and it promises to be an exciting and challenging year!  Here are my top 10 predictions in the monitoring and management space for 2011.

Virtualization and cloud computing have garnered a lot of attention recently. While virtualization has been successfully used for server applications, its usage for desktops is still in its early stages. Cloud computing is being tested for different enterprise applications, but has yet to gain complete acceptance in the enterprise. 2011 will be the year that these technologies become mainstream.

A key factor determining the success of these technologies will be the total cost of ownership (TCO). The lower the TCO, the greater the chance of adoption. By proactively alerting administrators to problems, pointing to bottleneck areas and suggesting means of optimizing the infrastructure, management technologies will play a central role in ensuring that these technologies are successful. With this in mind, I make the following predictions for 2011:

1. Virtualization will go mainstream in production environments. Very few organizations will not have at least one virtualized server hosting VMs. Enterprises will focus on getting the maximum out of their existing investments and will look to increase the VM density – i.e., the number of VMs for each physical server. In order to do so, administrators will need to understand the workload on each VM and which workloads are complementary (e.g., memory intensive vs. CPU intensive), so IT can use a mix and match of VMs with different workloads to maximize usage of the physical servers. Management tools will provide the metrics that will form the basis for such optimizations.

2. Multiple virtualization platforms in an organization will become a reality. Over the last year, different vendors have come up with virtualization platforms that offer lower cost alternatives to the market leader, VMware. Expect enterprises to use a mix of virtualization technologies; the most critical applications being hosted on virtualization platforms with the best reliability and scalability, while less critical applications may be hosted on lower-cost platforms. Enterprises will look for management tools that can support all of these virtualization platforms from a single console.

3. Enterprises will realize that they cannot effectively manage virtual environments as silos. As key applications move to virtual infrastructures, enterprises will realize that mis-configuration or problems in the virtual infrastructure can also affect the performance of business services running throughout the infrastructure. Further, because virtual machines share the common resources of the physical server, a single malfunctioning virtual machine (or application) can impact the performance seen by all the other virtual machines (and the applications running on them). If virtualization is managed as an independent silo, enterprise service desks will have no visibility into issues in the virtual infrastructure and, as a result, could end up spending endless hours troubleshooting a problem that was caused at the virtualization tier. Enterprise service desks will need management systems that can correlate the performance of business services with that of the virtual infrastructure and help them quickly translate a service performance problem into an actionable event at the operational layer.

4. Virtual desktop deployments will finally happen. VDI deployments in 2010 have mostly been proof of concepts; relatively few large-scale production deployments of VDI have occurred. Many VDI deployments run into performance problems, so IT ends up throwing more hardware at the problem, which in turn makes the entire project prohibitively expensive. Lack of visibility into VDI is also a result of organizations trying to manage VDI with the same tools they have used for server virtualization. In 2011, enterprises will realize that desktop virtualization is very different from server virtualization, and that management tools for VDI need to be tailored to the unique challenges that a virtual desktop infrastructure poses. Having the right management solution in place will also give VDI administrators visibility into every tier of the infrastructure, thereby allowing them to determine why a performance slowdown is happening and how they can re-engineer the infrastructure for optimal performance.

5. Traditional server-based computing will get more attention as organizations realize that VDI has specific use cases and will not be a fit for others. For some time now, enterprise architects have been advocating the use of virtual desktops for almost every remote access requirement. As they focus on cost implications of VDI, enterprise architects will begin to evaluate which requirements really need the flexibility and security advantages that VDI offers over traditional server-based computing. As a result, we expect server-based computing deployments to have a resurgence. For managing these diverse remote access technologies, enterprises will look for solutions that can handle both VDI and server-based computing environments equally well and offer consistent metrics and reporting across these different environments.

6. Cloud computing will gain momentum. Agility will be a key reason why enterprises will look at cloud technologies. With cloud computing, enterprise users will have access to systems on-demand, rather than having to wait for weeks or months for enterprise IT teams to procure, install and deliver the systems. Initially, as with virtualization, less critical applications including testing, training and other scratch-and-build environments will move to the public cloud. Internal IT teams will respond by building out private clouds, and ultimately a hybrid cloud model will evolve in the enterprise. Monitoring and management technologies will need to evolve to manage business services that span one or more cloud providers, where the service owner will not have complete visibility into the cloud infrastructure that their service is using.

7. Enterprises will move towards greater automation. For all the talk about automation, very few production environments make extensive use of this powerful functionality. For cloud providers, automation will be a must as they seek to make their environments agile. Dynamic provisioning, automated load balancing and on-demand power on/power off of VMs based on user workloads will all start to happen in the data center.

8. “Do more with less” will continue to be the paradigm driving IT operations. Administrators will look for tools that can save them at least a few hours of toil each day through proactive monitoring, accurate root-cause diagnosis and pinpointing of bottleneck areas. Cost will be an important criterion for tool selection and, as hardware becomes cheaper, management tool vendors will be forced away from pricing per CPU, core, socket or per application managed.

9. Enterprises will continue to look to consolidate monitoring tools. Enterprises have already begun to realize that having specialized tools for each and every need is wasteful spending and actually disruptive. Every new tool introduced carries a cost and adds requirements for operator training, tool certification, validation, etc. In 2011, we expect enterprises to look for multi-faceted tools that can cover needs in multiple areas. Tools that can span the physical and virtual worlds, offer both active and passive monitoring capabilities, and support both performance and configuration management will be in high demand. Consolidation of monitoring tools will result in tangible operational savings and actually work better than a larger number of dedicated element managers.

10. ROI will be the driver for any IT initiative. In the monitoring space, tools will be measured not by the number of metrics they collect but by how well they help solve real-world problems. IT staff will look for solutions that excel at proactive monitoring and at issuing alerts before a problem happens, and that help customers be more productive and efficient (e.g., by reducing the time an expert has to spend on a trouble call).

Reposted from VMBlog – http://vmblog.com/archive/2010/12/09/eg-innovations-management-technologies-will-play-a-central-role-in-fulfilling-the-promise-of-cloud-computing-and-virtualization-technologies.aspx

Why Does Your Monitoring System Need to be Virtualization-Aware?


You have management software that you’ve used for your Linux or Windows servers. Can’t you just deploy a Linux agent and monitor a VMware vSphere/ESX server, or a Windows agent to monitor a Microsoft Hyper-V server?

This is a very common question that comes up in any discussion of virtualization management. After all, when a VMware ESX server boots, the administrator gets a Linux login prompt and can log in to a Linux operating system. Likewise, for Hyper-V, the administrator accesses a Windows 2008 server console.

The answer to the above question, though, is a resounding NO! A monitoring agent designed to monitor Linux cannot monitor a VMware ESX server, and an agent for the Windows OS cannot monitor Hyper-V. The Linux OS that you get to when VMware ESX boots is the so-called Service Console. The Service Console is a vestigial general-purpose operating system most significantly used as the bootstrap for the VMware kernel, vmkernel. Like the other virtual machines on the ESX server, the service console is itself just another VM.

[Figure: VMware vSphere/ESX architecture. The service console (console OS) is used to bootstrap the virtualization platform.]

Any Linux operating system commands you execute on the service console only monitor activity in the service console VM. For example, when you run the “top” command on the service console, you are seeing the top processes running in the service console. To monitor the VMware ESX hypervisor or the VMs running on the server, you need to use the “esxtop” utility, which is the hypervisor-level monitoring interface for the ESX server.
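
The difference is easy to see from esxtop’s batch mode (the -b, -n and -d switches are documented esxtop options). The sketch below is illustrative only – it assumes a Python interpreter is available on the service console (or that you run resxtop remotely), and the exact counter names in the CSV header vary by ESX version:

    import csv
    import subprocess

    # Run esxtop in batch (CSV) mode for a single sample:
    #   -b : batch output, -n 1 : one iteration, -d 2 : 2-second sampling delay
    raw = subprocess.run(
        ["esxtop", "-b", "-n", "1", "-d", "2"],
        capture_output=True, text=True, check=True,
    ).stdout

    header, values = list(csv.reader(raw.splitlines()))[:2]

    # Unlike "top", the counters cover the whole hypervisor and every VM
    # (physical CPU, memory, per-VM "Group" CPU ready time, and so on).
    for name, value in zip(header, values):
        if "Cpu" in name:               # crude filter, just for illustration
            print(f"{name} = {value}")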

From the above discussion, it should be apparent that when you install a Linux monitoring agent on the service console, you are only monitoring the service console’s Linux OS and the applications running inside the service console. The virtualization platform is a much more complex system and if your monitoring system is not virtualization-aware, you will not get visibility into different aspects of the virtualization platform’s performance.

The case of Hyper-V is very similar. The root-partition of Hyper-V (the equivalent of the VMware ESX service console) runs a Windows 2008 operating system. A Windows monitoring agent installed on the root-partition can monitor this VM but will not be able to collect metrics about the hypervisor and the other VMs.

A virtualization-aware monitoring system should be able to monitor:

  • The server hardware (fan, power, temperature, voltage, etc.)
  • The hypervisor and its CPU, memory usage
  • The datastores that provide storage for the VMs
  • The underlying storage devices (LUNs) that support the datastores
  • The network interfaces on the server and their bandwidth usage
  • The virtual switches and networks that allow communication between VMs on the server
  • The VMs registered and powered on, and the relative resource usage levels of the VMs
  • Server clusters and live migration of VMs between servers in the cluster

eG Enterprise offers a comprehensive virtualization-aware monitoring solution, supporting 7 different virtualization platforms including VMware vSphere, Citrix XenServer, Microsoft Hyper-V, Microsoft Virtual Server 2005, Solaris LDoms, Solaris Containers, and AIX LPARs.

No matter what virtualization platform(s) you choose to adopt in your infrastructure, we have the monitoring solution for it (them)!