Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI

We would like to invite you to join the upcoming eG Innovations live demonstration, “Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI,” on August 21, 2014 at 2pm ET | 1pm CT | 11am PT | 7pm UK | 8pm CET.

See first-hand how to address performance-related challenges in your business-critical VDI environment. Register here:

Desktop virtualization expert Bala Vaidhinathan (CTO, eG Innovations) will demonstrate how eG Enterprise helps you manage your virtual desktop environment to:

  • Proactively identify and diagnose the root cause of issues and resolve them before users notice
  • Rapidly resolve user complaints such as “my desktop is slow” with automated identification of bottlenecks and their root cause
  • Achieve real-time oversight and performance management to meet SLA and performance objectives
  • Gain a holistic end-to-end view of the virtual desktop infrastructure, understand usage trends and bottlenecks and plan effectively for growth
  • Customize real-time dashboards and create targeted reports to deliver timely, concise information that is relevant and valuable to each technical and management team member

Register Now:

Title:  Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI

Date:  August 21, 2014 at 2pm ET | 1pm CT | 11am PT | 7pm UK | 8pm CET

Presenters: Bala Vaidhinathan (CTO, eG Innovations), Holger Schulze (VP Marketing, eG Innovations)


We look forward to seeing you online!

New Webinar: Performance Assurance for Virtualized Citrix XenApp Environments

As companies migrate to Citrix XenApp 6.5 for more efficient application delivery, they are increasingly taking advantage of virtualization platforms (such as Citrix XenServer, VMware vSphere and Microsoft Hyper-V) to increase efficiency, enhance flexibility, and reduce the hardware costs of XenApp server farms.

Virtualization Management Challenges

However, XenApp virtualization introduces new and dynamic inter-dependencies because multiple applications are running on virtual machines that share the same hardware. This increased complexity makes managing performance and user experience of virtualized XenApp infrastructures more challenging, costly, and time-consuming.

Many companies fly blind, without complete performance visibility into the components of their XenApp environment and their dynamic inter-dependencies. Yesterday’s reactive, manual and fragmented approach to performance management severely limits visibility and the diagnosis of issues; it is no longer sufficient for today’s dynamic IT environments.

eG Innovations Performance Assurance

eG Innovations solves this big challenge by radically simplifying XenApp performance management. Only eG delivers pre-emptive, automated and integrated performance assurance for today’s dynamic, mission-critical Citrix XenApp environments. This unique approach enables companies to ensure XenApp virtualization success by delivering on the promise of exceptional performance, flexibility, and ROI.

Having won numerous awards for our Citrix and virtualization performance management and monitoring solutions, eG Innovations is the clear choice for organizations wanting a best-of-breed solution to manage their combined virtualized infrastructures:

  • Get complete performance visibility and automated performance correlation across all virtual and physical components – network, storage, virtualization, application and database
  • Automate and accelerate discovery, diagnosis and resolution of XenApp service performance issues
  • Pre-emptively detect and resolve performance issues before users notice
  • Identify bottlenecks and right-size your XenApp infrastructure with powerful reporting and analytics for maximum ROI
  • Automatically correlate all performance events from both the physical and virtual tiers of your XenApp Service and auto-diagnose the cause of any performance problem
  • Discover trends and details of user sessions and user/application resource consumption for effective workload planning and infrastructure management to reduce cost

Join our live solution tour to learn more:

Live Demo –
Performance Management in Virtualized Citrix XenApp Environments

Date & Time: May 3, 2012 @ 2:00 pm ET | 11:00 am PT | 7:00 pm UK | 8:00 pm CET

Register Now:

For additional demo presentations, visit

We look forward to seeing you online!

VDI Performance Assurance: How to Deliver Virtual Desktop Success

Desktop virtualization is a hot topic. In fact, a recent IDC study showed that 45 percent of CIOs polled indicated that virtualization of the desktop is their number one concern and interest in 2012. But despite the interest and many attempts at deployment, many VDI rollouts fail due to performance and user experience issues. Why?

As organizations move from VDI test and pilot stages to production, they are realizing that the “traditional” approach of treating performance as an afterthought and addressing it in a reactive fashion does not scale. Too often, performance issues surprise VDI project owners during and after rollout, when everything worked just fine during the (often over-provisioned and less complex) pilot.

Focus on the Desktop Often Neglects Backend Infrastructure
Very often, when an enterprise starts on the virtual desktop journey, the focus is on the user desktop. This is only natural; after all, it is the desktop that is moving ‒ from being on a physical system to a virtual machine. Therefore, once a decision to try out VDI is made, the primary focus is to benchmark the performance of physical desktops, model their usage, predict the virtualized user experience and, based on the results, determine which desktops can be virtualized and which can’t. This is what many people refer to as VDI assessment.

One of the fundamental changes with VDI is that the desktops no longer have dedicated resources. They share the resources of the physical machine on which they are hosted and they may even be using a common storage subsystem. While resource sharing provides several benefits, it also introduces new complications. A single malfunctioning desktop can drain resources to the point that it impacts the performance of all the other desktops.

Whereas in the physical world, the impact of a failure or a slowdown was minimal (if a physical desktop failed, it would impact only one user), the impact of failure or slowdown in the virtual world is much more severe (one failure can impact hundreds of desktops). Therefore, even in the VDI assessment phase, it is important to take performance considerations into account and to assess and optimize the entire backend infrastructure supporting virtual desktops.

Consider Performance Assurance Early
In fact, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout. The new types of inter-desktop dependencies that exist in VDI have to be accounted for at every stage. For example, in many of the early VDI deployments, administrators found that when they just migrated the physical desktops to VDI, backups or antivirus software became a problem. These software components were scheduled to run at the same time on all the desktops. When the desktops were physical, it didn’t matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely impacted the performance of the virtual desktops. This was not something that was anticipated because the focus of most designs and plans was on the individual desktops.
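A common mitigation for this kind of synchronized demand is to stagger the scheduled jobs instead of letting every desktop start them at the same moment. Below is a minimal, illustrative Python sketch of the idea; the VM names and the two-hour window are assumptions for the example, not part of any particular product:

```python
import hashlib
from datetime import datetime, timedelta

def staggered_start(vm_name: str, base: datetime, window_minutes: int = 120) -> datetime:
    """Spread a scheduled job (e.g., AV scan or backup) over a time window.

    Hashing the VM name yields a stable, roughly uniform offset per
    desktop, so hundreds of VMs do not hit the shared CPU, disk, and
    storage subsystem at the same instant.
    """
    digest = hashlib.sha256(vm_name.encode("utf-8")).digest()
    offset_minutes = int.from_bytes(digest[:4], "big") % window_minutes
    return base + timedelta(minutes=offset_minutes)

# Example: jobs that would all have fired at 02:00 are spread over 02:00-04:00.
base = datetime(2012, 1, 1, 2, 0)
for vm in ("vdi-desktop-001", "vdi-desktop-002", "vdi-desktop-003"):
    print(vm, staggered_start(vm, base).strftime("%H:%M"))
```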

Understanding the performance requirements of desktops can also help plan the virtual desktop infrastructure more efficiently. For example, users known to run CPU-heavy desktops can be load-balanced across servers. Likewise, by assigning a good mix of CPU-intensive and memory-intensive user desktops to each physical server, it is possible to make optimal use of the existing hardware resources.
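To illustrate the pairing idea, the sketch below places desktops with known CPU and memory profiles so that each host receives a mix of CPU-heavy and memory-heavy users. The profile figures, host names, and greedy heuristic are all illustrative assumptions, not a prescribed method:

```python
# Illustrative greedy placement: pair the most CPU-hungry desktop with
# the least CPU-hungry (typically memory-heavy) one, alternating hosts,
# so no single host accumulates all of the CPU-intensive users.
desktops = [
    {"user": "cad-01",   "cpu_ghz": 3.5, "mem_gb": 2.0},  # CPU-intensive
    {"user": "dev-07",   "cpu_ghz": 3.0, "mem_gb": 4.0},
    {"user": "office-3", "cpu_ghz": 0.8, "mem_gb": 6.0},  # memory-intensive
    {"user": "office-9", "cpu_ghz": 0.5, "mem_gb": 8.0},
]
hosts = {"host-a": [], "host-b": []}

ranked = sorted(desktops, key=lambda d: d["cpu_ghz"], reverse=True)
host_names = list(hosts)
pair_index = 0
while ranked:
    heavy = ranked.pop(0)            # heaviest remaining CPU consumer
    target = hosts[host_names[pair_index % len(host_names)]]
    target.append(heavy)
    if ranked:
        target.append(ranked.pop())  # lightest remaining, to balance the host
    pair_index += 1

for name, placed in hosts.items():
    users = [d["user"] for d in placed]
    cpu = sum(d["cpu_ghz"] for d in placed)
    mem = sum(d["mem_gb"] for d in placed)
    print(f"{name}: {users} cpu={cpu:.1f} GHz mem={mem:.1f} GB")
```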

Lessons from Server Virtualization
Taking this discussion one step further, it is interesting to draw a parallel with how server virtualization evolved and to see what lessons we can learn for VDI. In the early days, a lot of the emphasis was on determining which applications could be virtualized and which ones could not. Today, server virtualization technology has evolved to a point where more virtual machines are deployed each year than physical machines, and almost every application server (except very old legacy ones) virtualizes well. You no longer hear anyone asking whether an application server can be virtualized. Virtualization vendors have shifted their focus from the hypervisor alone, realizing that performance and manageability are key to the success of server virtualization deployments.

[Table omitted: lessons that enterprises deploying VDI can learn from the server virtualization experience of the past.]

VDI deployments could proceed more rapidly and more successfully if we learn from how server virtualization evolved. VDI assessment needs to expand its focus from the desktop alone to the entire infrastructure. During rollout, attention has to be paid to performance management and assurance. To avoid rework and problem remediation down the line, performance assurance must be considered early in the process and at every stage. This is the key to deploying VDI faster, at greater scale, and with a strong return on investment (ROI).

Managing VDI Performance Issues – Best Practices
When VDI performance issues show up, how do you solve them without just throwing more hardware at the problem, killing budgets as well as return on investment (ROI)? When a user calls IT about slow applications, how do you pinpoint the true service performance bottleneck? Is it the network? The profile server? The web? The desktop virtualization platform? Storage?

Some of these issues can be addressed if we look at the lessons learned from server virtualization. Below are some best practices, insight and predictions for VDI deployment success:

Avoid costly issues and remediation downstream – Performance assurance processes for the VDI infrastructure need to be built in early, both to avoid costly issues and remediation downstream and to mitigate the risk of VDI failure during deployment. When deploying VDI on a large scale, it is essential to avoid slow, manual, ad-hoc processes that impact performance. It is imperative that IT consider inter-desktop dependencies from the very beginning.

Move beyond the silo – Today, service delivery is more demanding than ever. Companies require 360-degree VDI service visibility with virtualization-aware performance correlation across every layer and every tier ‒ from desktops to applications and from network to storage. Administrators need deep insights into the causes of VDI service performance issues in order to detect and fix root-cause problems. It is no longer useful to monitor individual silos because of the complexity of today’s infrastructures. There are just too many opportunities for problems.

Engage in Best Practices – Monitor VDI performance, not silos; right-size for ROI; engage in preemptive detection and alerting; monitor users, not only VMs; and have deep visibility into sessions. It is best to approach VDI from this perspective in order to get more return out of VDI investments.

The key to a successful VDI deployment is the ability to automate monitoring and management of the entire VDI service across every tier of the infrastructure stack – from the underlying hardware, network and storage, to the virtualization platform and self-service front-end applications. With that end-to-end automated approach, user performance issues can be diagnosed and fixed more rapidly with fewer resources – and even proactively, before users notice.

For more information on how to ensure VDI performance, visit

Management Technologies will Play a Central Role in Fulfilling the Promise of Cloud Computing and Virtualization Technologies

2011 is almost here, and it promises to be an exciting and challenging year! Here are my top 10 predictions in the monitoring and management space for 2011.

Virtualization and cloud computing have garnered a lot of attention recently. While virtualization has been successfully used for server applications, its usage for desktops is still in its early stages. Cloud computing is being tested for different enterprise applications, but has yet to gain complete acceptance in the enterprise. 2011 will be the year that these technologies become mainstream.

A key factor determining the success of these technologies will be the total cost of ownership (TCO). The lower the TCO, the greater the chance of adoption. By proactively alerting administrators to problems, pointing to bottleneck areas and suggesting means of optimizing the infrastructure, management technologies will play a central role in ensuring that these technologies are successful. With this in mind, I make the following predictions for 2011:

1. Virtualization will go mainstream in production environments. Very few organizations will not have at least one virtualized server hosting VMs. Enterprises will focus on getting the maximum out of their existing investments and will look to increase VM density – i.e., the number of VMs per physical server. To do so, administrators will need to understand the workload on each VM and which workloads are complementary (e.g., memory intensive vs. CPU intensive), so IT can mix and match VMs with different workloads to maximize usage of the physical servers. Management tools will provide the metrics that form the basis for such optimizations. A back-of-the-envelope sketch of this kind of density calculation appears after this list.

2. Multiple virtualization platforms in an organization will become a reality. Over the last year, different vendors have come up with virtualization platforms that offer lower-cost alternatives to the market leader, VMware. Expect enterprises to use a mix of virtualization technologies, with the most critical applications hosted on the platforms offering the best reliability and scalability, while less critical applications may be hosted on lower-cost platforms. Enterprises will look for management tools that can support all of these virtualization platforms from a single console.

3. Enterprises will realize that they cannot effectively manage virtual environments as silos. As key applications move to virtual infrastructures, enterprises will realize that misconfiguration or problems in the virtual infrastructure can also affect the performance of business services running throughout the infrastructure. Further, because virtual machines share the common resources of the physical server, a single malfunctioning virtual machine (or application) can impact the performance seen by all the other virtual machines (and the applications running on them). If virtualization is managed as an independent silo, enterprise service desks will have no visibility into issues in the virtual infrastructure and, as a result, could end up spending endless hours troubleshooting a problem that was caused at the virtualization tier. Enterprise service desks will need management systems that can correlate the performance of business services with that of the virtual infrastructure and help them quickly translate a service performance problem into an actionable event at the operational layer.

4. Virtual desktop deployments will finally happen. VDI deployments in 2010 were mostly proofs of concept; relatively few large-scale production deployments occurred. Many VDI deployments run into performance problems, so IT ends up throwing more hardware at the problem, which in turn makes the entire project prohibitively expensive. Lack of visibility into VDI also results from organizations trying to reuse the tools they have used for server virtualization management. In 2011, enterprises will realize that desktop virtualization is very different from server virtualization, and that management tools for VDI need to be tailored to the unique challenges that a virtual desktop infrastructure poses. Having the right management solution in place will also give VDI administrators visibility into every tier of the infrastructure, allowing them to determine why a performance slowdown is happening and how they can re-engineer the infrastructure for optimal performance.

5. Traditional server-based computing will get more attention as organizations realize that VDI has specific use cases and will not be a fit for others. For some time now, enterprise architects have been advocating the use of virtual desktops for almost every remote access requirement. As they focus on the cost implications of VDI, enterprise architects will begin to evaluate which requirements really need the flexibility and security advantages that VDI offers over traditional server-based computing. As a result, we expect server-based computing deployments to see a resurgence. To manage these diverse remote access technologies, enterprises will look for solutions that can handle both VDI and server-based computing environments equally well and offer consistent metrics and reporting across these different environments.

6. Cloud computing will gain momentum. Agility will be a key reason why enterprises look at cloud technologies. With cloud computing, enterprise users will have access to systems on demand, rather than having to wait weeks or months for enterprise IT teams to procure, install and deliver them. Initially, as with virtualization, less critical applications including testing, training and other scratch-and-build environments will move to the public cloud. Internal IT teams will continue to work with public clouds, and ultimately a hybrid cloud model will evolve in the enterprise. Monitoring and management technologies will need to evolve to manage business services that span one or more cloud providers, where the service owner will not have complete visibility into the cloud infrastructure that their service is using.

7. Enterprises will move towards greater automation. For all the talk about automation, very few production environments make extensive use of this powerful functionality. For cloud providers, automation will be a must as they seek to make their environments agile. Dynamic provisioning, automated load balancing and on-demand power on/power off of VMs based on user workloads will all start to happen in the data center.

8. “Do more with less” will continue to be the paradigm driving IT operations. Administrators will look for tools that can save them at least a few hours of toil each day through proactive monitoring, accurate root-cause diagnosis and pinpointing of bottleneck areas. Cost will be an important criterion for tool selection and, as hardware becomes cheaper, management tool vendors will be forced to move away from pricing per CPU, core, socket or managed application.

9. Enterprises will continue to consolidate monitoring tools. Enterprises have already begun to realize that having a specialized tool for each and every need is wasteful and actually disruptive. Every new tool introduced carries a cost and adds requirements for operator training, tool certification, validation, etc. In 2011, we expect enterprises to look for multi-faceted tools that cover needs in multiple areas. Tools that span the physical and virtual worlds, offer both active and passive monitoring capabilities, and support both performance and configuration management will be in high demand. Consolidating monitoring tools will yield tangible operational savings and actually work better than a larger number of dedicated element managers.

10. ROI will be the driver for any IT initiative. In the monitoring space, tools will be measured not by the number of metrics they collect but by how well they help solve real-world problems. IT staff will look for solutions that excel at proactive monitoring and issuing alerts before a problem happens, and that help customers be more productive and efficient (e.g., by reducing the time an expert has to spend on a trouble call).
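Returning to prediction 1, here is a back-of-the-envelope sketch of the kind of density calculation that workload metrics make possible. All figures (host capacity, per-VM averages, safety margin) are illustrative assumptions; real sizing should use measured peak or percentile demand:

```python
# Rough VM-density headroom check for a single host. All figures are
# illustrative; real capacity planning should use measured percentiles
# (e.g., 95th) rather than simple averages.
HOST_CPU_GHZ = 16 * 2.4   # 16 cores at 2.4 GHz
HOST_MEM_GB = 192.0
SAFETY_MARGIN = 0.80      # plan to at most 80% of raw capacity

current_vms = 40
avg_cpu_per_vm_ghz = 0.6
avg_mem_per_vm_gb = 3.0

cpu_spare = HOST_CPU_GHZ * SAFETY_MARGIN - current_vms * avg_cpu_per_vm_ghz
mem_spare = HOST_MEM_GB * SAFETY_MARGIN - current_vms * avg_mem_per_vm_gb

# The scarcer resource limits how many more VMs of this profile fit.
extra_vms = int(min(cpu_spare / avg_cpu_per_vm_ghz,
                    mem_spare / avg_mem_per_vm_gb))
print(f"CPU allows {cpu_spare / avg_cpu_per_vm_ghz:.0f} more VMs")
print(f"Memory allows {mem_spare / avg_mem_per_vm_gb:.0f} more VMs")
print(f"Safe additional VMs: {max(extra_vms, 0)}")
```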

Reposted from VMBlog –

Five Myths of Virtualization Management

The virtualization market has seen dramatic growth in the last few years. Many recent industry surveys indicate that a majority of enterprises are more than 30% virtualized. Monitoring and management have become crucial as virtualization penetrates further into the enterprise. In this article, we consider five of the most common misconceptions about virtualization monitoring and management.

  • Myth 1: Virtualization makes monitoring easier because there are fewer physical servers involved.
    An earlier article discussed why this is not really the case.
  • Myth 2: Virtualization offers ways to reserve resources. Therefore, you can just reserve resources for your VMs and they will not interfere with each other.
    There are two reasons why I say this is a myth. First, not all resources can be completely reserved (yet) on most virtualization platforms; this is especially true for network and storage resources. Second, if you simply reserve resources statically, you are not really benefiting from resource sharing. For this reason, most environments do not use static resource reservations.
  • Myth 3: Virtualization technology allows VMs to be provisioned rapidly on demand. The virtualization team’s job is to provide VMs, and they can operate independently of the enterprise IT operations teams.
    This is a myth because, as critical applications are deployed on virtual machines, it becomes important to determine, when a problem happens, where the root cause lies: is it the network? The database? The web server? The middleware? The virtual machine, the physical machine, or the storage? If the virtualization team works in isolation, as a silo, it will not be possible to get end-to-end, top-to-bottom visibility into all the components involved in supporting a critical business service. This in turn results in ineffective and inefficient operations management. Click here to view a presentation that explains why you should not be monitoring virtualization as just another infrastructure silo.
  • Myth 4: Virtualization platforms offer a lot of metrics. These metrics reveal all I need to know to monitor my VM infrastructure.
    Yes, you get a lot of metrics from the virtualization platform – from these metrics you can, for example, find out which VM is taking up a lot of CPU. However, these metrics cannot tell you why the VM is taking up CPU: is it a specific runaway process? Is it user load? Hence, you need monitoring tools that go beyond merely presenting the virtualization platform’s metrics in another graphical interface. (A minimal sketch of this process-level view appears after this list.)
  • Myth 5: Virtual desktops are just VMs. So the same tools used for monitoring virtual servers can be used for monitoring virtual desktops.
    This is a topic I have covered earlier. In a VDI environment, it is critical to monitor users, not just VMs. This is because the workload of a VM depends on which user is logged in to the desktop and what applications he or she is using.
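To make Myth 4 concrete, here is a minimal sketch of the in-guest, process-level view that hypervisor counters alone cannot provide. It assumes the third-party psutil package is installed inside the guest; it illustrates the idea only and is not eG Enterprise’s agent:

```python
# Hypervisor metrics can show THAT a VM is busy; an in-guest view is
# needed to show WHY. Sample per-process CPU inside the guest with
# psutil (pip install psutil).
import time
import psutil

# The first cpu_percent() call primes each process's CPU counters...
procs = list(psutil.process_iter(["pid", "name"]))
for p in procs:
    try:
        p.cpu_percent(interval=None)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

time.sleep(1.0)  # ...then usage is measured over a one-second window.

samples = []
for p in procs:
    try:
        samples.append((p.cpu_percent(interval=None), p.info["pid"], p.info["name"]))
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass

# A single runaway process at the top looks very different from broad,
# evenly spread user load, which is exactly the distinction Myth 4 misses.
for cpu, pid, name in sorted(samples, reverse=True)[:5]:
    print(f"{cpu:6.1f}%  pid={pid:<6}  {name}")
```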

If you are interested, click here to view an online presentation that goes into more detail on these five common myths of virtualization management.

Performance Management Challenges in Virtual Desktop Infrastructures – Webinar

Join eG Innovations, The 451 Group, and Computer Sciences Corporation for this webinar next week – Nov 4th 2010, 12 noon to 1pm ET. Details of the webinar are available here. If you have deployed or are deploying VDI, be sure to sign up!

Citrix Blog on the eG Innovations-Citrix Partnership

Ed Hubbard from Citrix writes in his recent blog post:

Steak & Sizzle, that’s what makes a great program announcement for Citrix and it’s what our best partners concentrate on as they are planning future product integrations and the corresponding announcements. Let’s drill down a little here on what makes for great steak and great sizzle:

■ Steak – With a great product integration, extension or complementary solution built around Citrix products, we will naturally want to support these with more vigor in our programs, shows and in the marketplace. Making your solutions specific to their Citrix counterparts is paramount in both delivering a combined solution that is polished and one that provides outstanding joint value for customers. Bringing a standard, undifferentiated product to the table is fine, but realize you’re bringing low-grade hamburgers to a cookout where other partners have brought fillets.
■ Sizzle – No matter how good the steak is you bring to the cookout, we know the sizzle is what’s going to bring customers to the table. Once you’ve done the homework, and built an excellent integration with Citrix products, you want to grab the attention of Citrix, our channel partners, our marketing and sales organizations, and our joint customers. To do this you need to think through a program like eG’s free offer for XenDesktop customers, special pricing, a bundle promotion or other programs that would make sense for your product or solution that really draws customers into considering our joint solutions.

Click here to read Ed’s entire post –