Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI


We would like to invite you to join the upcoming eG Innovations live demonstration, “Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI,” on August 21, 2014 at 2pm ET | 1pm CT | 11am PT | 7pm UK | 8pm CET.

See first-hand how to address performance-related challenges in your business-critical VDI environment. Register here: https://www4.gotomeeting.com/register/916239343

Desktop virtualization expert Bala Vaidhinathan (CTO, eG Innovations) will demonstrate how eG Enterprise helps you manage your virtual desktop environment to:

  • Proactively identify and diagnose the root cause of issues and resolve them before users notice
  • Rapidly resolve user complaints such as “my desktop is slow” with automated identification of bottlenecks and their root cause
  • Achieve real-time oversight and performance management to meet SLA and performance objectives
  • Gain a holistic end-to-end view of the virtual desktop infrastructure, understand usage trends and bottlenecks and plan effectively for growth
  • Customize real-time dashboards and create targeted reports to deliver timely, concise information that is relevant and valuable to each technical and management team member

Register Now: https://www4.gotomeeting.com/register/916239343

Title:  Citrix XenDesktop Performance Management – Optimizing VDI User Experience While Enhancing IT Productivity and ROI

Date:  August 21, 2014 at 2pm ET | 1pm CT | 11am PT | 7pm UK | 8pm CET

Presenters: Bala Vaidhinathan (CTO, eG Innovations), Holger Schulze (VP Marketing, eG Innovations)

 

We look forward to seeing you online!

eG Introduces VDI Performance Assessment Service – Move VDI Deployments from Test to Best


There is a growing trend where VDI pilots go well until virtual desktops are rolled out in production environments with thousands of users, at which point unexpected performance problems and cost overruns start to occur. Users start to complain about sluggish applications and ask for their laptops back.

Attempts at fixing these performance problems by using traditional virtual desktop planning tools and by throwing more hardware at the problem quickly cause cost overruns, kill ROI and typically don’t resolve the performance issue.

We are proud to announce a groundbreaking new VDI performance assessment service, eG Perform™, to help companies pre-empt and overcome virtual desktop performance issues so they can deliver on the desktop virtualization promise of flexibility, scalability and end-user satisfaction. eG Perform fills a significant gap in the market by identifying VDI bottlenecks and helping companies restore performance.

In the early stages of VDI deployment, the focus was on desktop virtualization viability: pre-deployment assessments were used to determine the virtualization readiness of physical desktops. As the technology has matured, the focus has shifted to performance assurance, right-sizing and optimization of large-scale deployments. Consequently, performance assessment has had to evolve from focusing on physical desktops alone to a complete analysis of the virtual desktop infrastructure.

VDI Performance Assessment Overview

eG’s new VDI performance assessment service leverages eG Innovations’ patented and award-winning cloud-based platform that delivers pre-emptive, automated, and integrated performance assurance for dynamic IT environments. eG Perform employs a unique methodology and patented technology to provide VDI project managers with:

  • Actionable insight and guidance to quickly diagnose and resolve performance bottlenecks, and deliver on the promise of exceptional performance, user productivity, and ROI;
  • Complete end-to-end performance visibility and diagnosis across every tier, every layer of the virtual desktop service – Citrix, VMware, Network, Active Directory, Storage, Applications – so you know what’s working and what’s not; and,
  • Detailed reports and powerful analytics to right-size and optimize the virtual desktop infrastructure and increase ROI, complete with actionable insight into hardware bottlenecks, top users, top apps, critical dependencies, etc.

Integrated with eG’s on-demand cloud-based performance assurance platform eG-on-Tap, the eG Perform assessment service allows enterprises to baseline their current VDI performance, understand critical bottlenecks, and identify how they can optimize their virtual desktop infrastructure for peak performance and cost efficiency.
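
To make the baselining idea concrete, here is a minimal sketch in Python of the kind of calculation involved: compute a per-metric baseline band from historical samples and flag current observations that fall outside it. The metric names, sample values and the two-standard-deviation rule are illustrative assumptions, not details of how eG Perform itself works.

```python
# A minimal sketch (assumed names and thresholds, not eG Perform's actual API):
# build a per-metric baseline band from historical samples, then flag current
# observations that fall outside the band as bottleneck candidates.
from statistics import mean, stdev

def baseline(samples, k=2.0):
    """Return a (low, high) band of mean +/- k standard deviations."""
    mu, sigma = mean(samples), stdev(samples)
    return mu - k * sigma, mu + k * sigma

history = {
    "logon_time_sec":   [12, 14, 11, 13, 15, 12, 13],
    "host_cpu_percent": [55, 60, 58, 62, 57, 59, 61],
}
bands = {metric: baseline(values) for metric, values in history.items()}

current = {"logon_time_sec": 27, "host_cpu_percent": 60}
for metric, value in current.items():
    low, high = bands[metric]
    status = "OK" if low <= value <= high else "bottleneck candidate"
    print(f"{metric}: {value} (baseline {low:.1f}-{high:.1f}) -> {status}")
```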

Free Assessment Offer

Enterprises interested in trying out the service risk-free can get a complete eG Perform analysis for up to five servers free of charge for a three-week period. The service will deliver a comprehensive report documenting the current performance of the virtual desktop infrastructure complete with identified performance bottlenecks and areas for optimization.

VDI Performance Assurance: How to Deliver Virtual Desktop Success


Desktop virtualization is a hot topic. In fact, in a recent IDC study, 45 percent of CIOs polled indicated that desktop virtualization was their number one interest for 2012. But despite the interest and many attempts at deployment, many VDI rollouts fail due to performance and user experience issues. Why?

As organizations move from VDI test and pilot stages to production, they are realizing that the “traditional” approach of treating performance as an afterthought and addressing it in a reactive fashion does not scale. Too often, performance issues surprise VDI project owners during and after rollout, when everything worked just fine during the (often over-provisioned and less complex) pilot.

Focus on the Desktop Often Neglects Backend Infrastructure
Very often, when an enterprise starts on the virtual desktop journey, the focus is on the user desktop. This is only natural; after all, it is the desktop that is moving ‒ from being on a physical system to a virtual machine. Therefore, once a decision to try out VDI is made, the primary focus is to benchmark the performance of physical desktops, model their usage, predict the virtualized user experience and, based on the results, determine which desktops can be virtualized and which can’t. This is what many people refer to as VDI assessment.

One of the fundamental changes with VDI is that the desktops no longer have dedicated resources. They share the resources of the physical machine on which they are hosted and they may even be using a common storage subsystem. While resource sharing provides several benefits, it also introduces new complications. A single malfunctioning desktop can drain resources to the point that it impacts the performance of all the other desktops.

Whereas in the physical world, the impact of a failure or a slowdown was minimal (if a physical desktop failed, it would impact only one user), the impact of failure or slowdown in the virtual world is much more severe (one failure can impact hundreds of desktops). Therefore, even in the VDI assessment phase, it is important to take performance considerations into account and to assess and optimize the entire backend infrastructure supporting virtual desktops.
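
As a simple illustration of what a “noisy neighbor” check might look like, the hypothetical Python sketch below flags a desktop whose CPU consumption is far above that of its peers on the same host. The VM names, sample values and threshold rule are made up for illustration; they are not a vendor algorithm.

```python
# Hypothetical noisy-neighbor check: flag any desktop consuming far more CPU
# than the median of its peers on the same host. Names, values and the
# 3x-median / 25% floor rule are illustrative assumptions.
from statistics import median

vm_cpu_percent = {"vd-101": 4, "vd-102": 6, "vd-103": 5, "vd-104": 78, "vd-105": 7}

typical = median(vm_cpu_percent.values())
noisy = {vm: cpu for vm, cpu in vm_cpu_percent.items()
         if cpu > 3 * typical and cpu > 25}
print(f"median per-VM CPU: {typical}%; noisy neighbors: {noisy}")
```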

Consider Performance Assurance Early
In fact, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout. The new types of inter-desktop dependencies that exist in VDI have to be accounted for at every stage. For example, in many of the early VDI deployments, administrators found that when they just migrated the physical desktops to VDI, backups or antivirus software became a problem. These software components were scheduled to run at the same time on all the desktops. When the desktops were physical, it didn’t matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely impacted the performance of the virtual desktops. This was not something that was anticipated because the focus of most designs and plans was on the individual desktops.
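
A common mitigation for these synchronized resource storms is to stagger job start times across desktops. The sketch below (an illustration, not a product feature) derives a stable per-desktop offset from the machine name so that antivirus scans or backups on VMs sharing a host no longer start at the same instant.

```python
# Illustrative mitigation, not a product feature: derive a stable per-desktop
# offset from the machine name so scheduled jobs (antivirus scans, backups)
# are spread across a window instead of all starting at the same instant.
import hashlib

def staggered_start(machine_name: str, base_hour: int = 2, window_minutes: int = 120):
    """Deterministically spread job starts across a window after base_hour."""
    digest = hashlib.sha256(machine_name.encode()).digest()
    offset = int.from_bytes(digest[:4], "big") % window_minutes
    return base_hour + offset // 60, offset % 60  # (hour, minute)

for vm in ["vd-101", "vd-102", "vd-103"]:
    hour, minute = staggered_start(vm)
    print(f"{vm}: antivirus scan starts at {hour:02d}:{minute:02d}")
```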

Understanding the performance requirements of desktops can also help plan the virtual desktop infrastructure more efficiently. For example, desktops of known CPU-heavy users can be load-balanced across servers. Likewise, by assigning a good mix of CPU-intensive and memory-intensive user desktops to each physical server, it is possible to make optimal use of the existing hardware resources.
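
As a rough illustration of this kind of planning, the Python sketch below greedily places the most CPU-hungry desktops first, each onto the currently least-loaded host, so heavy users end up spread across servers and mixed with lighter, memory-heavy ones. The user names, resource figures and the greedy rule are hypothetical.

```python
# Hypothetical placement heuristic: place the most CPU-hungry desktops first,
# each onto the currently least-loaded host, so heavy users are spread out
# and naturally mixed with lighter, memory-heavy desktops.
desktops = [
    {"user": "cad-1", "cpu": 30, "mem": 4},
    {"user": "cad-2", "cpu": 28, "mem": 4},
    {"user": "analyst-1", "cpu": 6, "mem": 16},
    {"user": "analyst-2", "cpu": 5, "mem": 12},
    {"user": "office-1", "cpu": 4, "mem": 3},
    {"user": "office-2", "cpu": 5, "mem": 3},
]
hosts = [{"name": "esx-1", "cpu": 0, "mem": 0}, {"name": "esx-2", "cpu": 0, "mem": 0}]

for d in sorted(desktops, key=lambda d: d["cpu"], reverse=True):
    target = min(hosts, key=lambda h: (h["cpu"], h["mem"]))  # least-loaded host
    target["cpu"] += d["cpu"]
    target["mem"] += d["mem"]
    print(f"{d['user']} -> {target['name']}")
print(hosts)  # both hosts end up with a comparable CPU load
```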

Lessons from Server Virtualization
Taking this discussion one step further, it is interesting to draw a parallel with how server virtualization evolved and to see what lessons we can learn as far as VDI is concerned. A lot of the emphasis in the early days was on determining which applications could be virtualized and which ones could not. Today, server virtualization technology has evolved to a point where more virtual machines are deployed in a year than physical machines, and almost every application server (except very old legacy ones) virtualizes well. You no longer hear anyone asking whether an application server can be virtualized. Virtualization vendors have shifted their focus beyond the hypervisor, realizing that performance and manageability are key to the success of server virtualization deployments.

[Table omitted: Lessons that enterprises deploying VDI can learn from the server virtualization experience of the past]

VDI deployments could be done more rapidly and more successfully if we learn our lessons from how server virtualization evolved. VDI assessment needs to expand its focus from the desktop alone to the entire infrastructure. Attention during VDI rollouts has to be paid to performance management and assurance. To avoid a lot of rework and problem remediation down the line, performance assurance must be considered early on in the process and at every stage. This is key to getting VDI deployed on a bigger scale and faster, with great return on investment (ROI).

Managing VDI Performance Issues – Best Practices
When VDI performance issues show up, how do you solve them without just throwing more hardware at the problem, killing budgets as well as return on investment (ROI)? When a user calls IT about slow applications, how do you pinpoint the true service performance bottleneck? Is it the network? The profile server? The web? The desktop virtualization platform? Storage?

Some of these issues can be addressed if we look at the lessons learned from server virtualization. Below are some best practices, insight and predictions for VDI deployment success:

Avoid costly issues and remediation downstream – Performance assurance processes affecting the VDI infrastructure need to be built in early in order to avoid costly issues and remediation downstream, and to mitigate the risk of VDI failure during deployment. When deploying VDI on a large scale, it is key to avoid slow, manual, ad-hoc processes that impact performance. It is imperative that IT considers inter-desktop dependencies from the very beginning.

Move beyond the silo – Today, service delivery is more demanding than ever. Companies require 360-degree VDI service visibility with virtualization-aware performance correlation across every layer and every tier ‒ from desktops to applications and from network to storage. Administrators need deep insights into the causes of VDI service performance issues in order to detect and fix root-cause problems. It is no longer useful to monitor individual silos because of the complexity of today’s infrastructures. There are just too many opportunities for problems.

Engage in Best Practices – Monitor VDI performance, not silos; right-size for ROI; engage in preemptive detection and alerting; monitor users, not only VMs; and have deep visibility into sessions. It is best to approach VDI from this perspective in order to get more return out of VDI investments.

The key to a successful VDI deployment is the ability to automate monitoring and management of the entire VDI service across every tier of the infrastructure stack – from the underlying hardware, network and storage, to the virtualization platform and self-service front-end applications. If that end-to-end automated approach is taken, user performance issues can be diagnosed and fixed more rapidly with fewer resources – and even proactively, before users notice.
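
To sketch what such tier-by-tier evaluation might look like, the hypothetical example below walks the stack bottom-up and reports the first unhealthy tier as the likely root cause, rather than alerting on every symptom above it. Tier names, metrics and thresholds are assumptions for illustration.

```python
# Assumed tier names, metrics and thresholds, for illustration only. Walk the
# stack bottom-up and report the first unhealthy tier as the likely root
# cause, instead of alerting separately on every symptom above it.
TIERS = [
    ("Network",       {"latency_ms": 18},     lambda m: m["latency_ms"] < 50),
    ("Storage",       {"io_latency_ms": 35},  lambda m: m["io_latency_ms"] < 20),
    ("Hypervisor",    {"cpu_ready_pct": 3},   lambda m: m["cpu_ready_pct"] < 5),
    ("Citrix broker", {"logon_time_sec": 14}, lambda m: m["logon_time_sec"] < 30),
]

for tier, metrics, healthy in TIERS:
    if not healthy(metrics):
        print(f"Likely root cause: {tier} tier, metrics={metrics}")
        break
else:
    print("All tiers within thresholds")
```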

For more information on how to ensure VDI performance, visit http://www.eginnovations.com

eG Innovations and RES Software Join Forces to Transform Enterprise Desktop Environments


eG Innovations and RES Software, the proven leader in dynamic desktop solutions, today announced a strategic partnership. The collaboration takes eG Innovations’ expertise in delivering performance assurance for VDI environments, and combines it with the flexibility of the desktop personalization and context aware capabilities of RES Software.

Together, these two solutions address challenges associated with multi-location performance, improve IT efficiency and reduce the cost of managing today’s complex hybrid desktop environments, which often include a mix of devices and delivery platforms.

By combining technology from RES Software and eG Innovations, enterprises will benefit from:

  • A more predictable, reliable and secure user experience
  • Maximum user productivity due to consistent performance of VDI environments
  • Improved management of dynamic desktop environments
  • Personalized and compliant desktops for users across multiple devices, providing more flexibility for mobile workers
  • A framework to quickly resolve VDI issues that impact both performance and usability

“As enterprises investigate VDI as a viable technology for their organization, the impact on the user experience and overall performance are two key critical elements considered,” said Tony Falsone, virtualization practice director at FusionStorm, a leading IT services provider. “Many times, introducing new technologies creates a hybrid desktop environment that can be even more challenging for IT to manage. Together, RES Software and eG Innovations give enterprises the confidence to know that users won’t experience disruption by the move to virtual desktops, and they can fully take advantage of the flexibility and mobility offered by VDI. We see tremendous value in this joint offering, and our customers stand to benefit greatly from pairing these two technologies as they initiate upcoming desktop transformation projects.”

Virtual Desktop Success with Performance Assurance (Part 2)


In part 1 of this article, we talked about how the current VDI deployment cycle is broken, often overlooking the new inter-dependencies and performance implications introduced by desktop virtualization. To ensure VDI success, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout.

Understanding the performance requirements of desktops will also help plan the virtual desktop infrastructure more efficiently. For example, desktops of known CPU-heavy users can be load-balanced across servers. Likewise, by assigning a good mix of CPU-intensive and memory-intensive user desktops to each physical server, it is possible to make optimal use of the existing hardware resources.

Taking this discussion one step further, it is interesting to draw a parallel with how server virtualization evolved and to see what lessons we can learn as far as VDI is concerned.

A lot of the emphasis in the early days was on determining which applications could be virtualized and which ones could not. Today, server virtualization technology has evolved to a point where more virtual machines are deployed in a year than physical machines, and almost every application server (except very old legacy ones) virtualizes well. You no longer hear anyone asking whether an application server can be virtualized. Virtualization vendors have shifted their focus beyond the hypervisor, realizing that performance and manageability are key to the success of server virtualization deployments.

VDI deployments could be done more rapidly and more successfully if we learn our lessons from how server virtualization evolved. VDI assessment needs to expand its focus from the desktop alone to the entire infrastructure. Attention during VDI rollouts has to be paid to performance management and assurance. To avoid a lot of rework and problem remediation down the line, performance assurance must be considered early on in the process and at every stage. This is key to getting VDI deployed on a bigger scale and faster, with great return on investment (ROI).

To learn more about VDI performance, join the on-demand webinar “Top-5 Best Practices for Virtual Desktop Success”.

Virtual Desktop Success with Performance Assurance (Part 1)


Very often, when an enterprise starts on the virtual desktop journey, the focus is on the user desktop. This is only natural – after all, it is the desktop that is moving – from being on a physical system to a virtual machine.

Therefore, once a decision to try out VDI is made, the primary focus is to benchmark the performance of physical desktops, model their usage, predict the virtualized user experience and based on the results, determine which desktops can be virtualized and which can’t. This is what many people refer to as “VDI assessment”.

One of the fundamental changes with VDI is that the desktops no longer have dedicated resources. They share the resources of the physical machine on which they are hosted and they may even be using a common storage subsystem.

While resource sharing provides several benefits, it also introduces new complications. A single malfunctioning desktop can consume so many resources that it impacts the performance of all the other desktops. Whereas in the physical world, the impact of a failure or a slowdown was minimal (if a physical desktop failed, it would impact only one user), the impact of a failure or slowdown in the virtual world is much more severe (one failure can impact hundreds of desktops). Therefore, even in the VDI assessment phase, it is important to take performance considerations into account.

In fact, performance has to be considered at every stage of the VDI lifecycle because it is fundamental to the success or failure of the VDI rollout. The new types of inter-desktop dependencies that exist in VDI have to be accounted for at every stage.

For example, in many of the early VDI deployments, administrators found that when they just migrated the physical desktops to VDI, backups or antivirus software became a problem. These software components were scheduled to run at the same time on all the desktops. When the desktops were physical, it didn’t matter, because each desktop had dedicated hardware. With VDI, the synchronized demand for resources from all the desktops severely impacted the performance of the virtual desktops. This was not something that was anticipated because the focus of most designs and plans was on the individual desktops.

In part 2 of this article, we will take a closer look at the parallels to server virtualization and what lessons we can learn for VDI.

Management Technologies will Play a Central Role in Fulfilling the Promise of Cloud Computing and Virtualization Technologies


2011 is almost here and it promises to be an exciting and challenging year!  Here are my top 10 predictions in the monitoring and management space for 2011.

Virtualization and cloud computing have garnered a lot of attention recently. While virtualization has been successfully used for server applications, its usage for desktops is still in its early stages. Cloud computing is being tested for different enterprise applications, but has yet to gain complete acceptance in the enterprise. 2011 will be the year that these technologies become mainstream.

A key factor determining the success of these technologies will be the total cost of ownership (TCO). The lower the TCO, the greater the chance of adoption. By proactively alerting administrators to problems, pointing to bottleneck areas and suggesting means of optimizing the infrastructure, management technologies will play a central role in ensuring that these technologies are successful. With this in mind, I make the following predictions for 2011:

1. Virtualization will go mainstream in production environments. Very few organizations will not have at least one virtualized server hosting VMs. Enterprises will focus on getting the maximum out of their existing investments and will look to increase the VM density – i.e., the number of VMs for each physical server. In order to do so, administrators will need to understand the workload on each VM and which workloads are complementary (e.g., memory intensive vs. CPU intensive), so IT can use a mix and match of VMs with different workloads to maximize usage of the physical servers. Management tools will provide the metrics that will form the basis for such optimizations.

2. Multiple virtualization platforms in an organization will become a reality. Over the last year, different vendors have come up with virtualization platforms that offer lower-cost alternatives to the market leader, VMware. Expect enterprises to use a mix of virtualization technologies, with the most critical applications hosted on the virtualization platforms with the best reliability and scalability, while less critical applications may be hosted on lower-cost platforms. Enterprises will look for management tools that can support all of these virtualization platforms from a single console.

3. Enterprises will realize that they cannot effectively manage virtual environments as silos. As key applications move to virtual infrastructures, enterprises will realize that mis-configuration or problems in the virtual infrastructure can also affect the performance of business services running throughout the infrastructure. Further, because virtual machines share the common resources of the physical server, a single malfunctioning virtual machine (or application) can impact the performance seen by all the other virtual machines (and the applications running on them). If virtualization is managed as an independent silo, enterprise service desks will have no visibility into issues in the virtual infrastructure and, as a result, could end up spending endless hours troubleshooting a problem that was caused at the virtualization tier. Enterprise service desks will need management systems that can correlate the performance of business services with that of the virtual infrastructure and help them quickly translate a service performance problem into an actionable event at the operational layer.
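
One simple way to picture such correlation is sketched below (illustrative data, not a vendor implementation): rank virtual-infrastructure metrics by how strongly they correlate with a service’s response time, giving the service desk an actionable starting point. This sketch needs Python 3.10+ for statistics.correlation.

```python
# Illustrative data only. Rank infrastructure metrics by the strength of
# their correlation with the service's response time, so troubleshooting
# starts at the most suspicious layer. Needs Python 3.10+ for correlation().
from statistics import correlation

service_response_ms = [210, 220, 480, 460, 230, 500, 215]
infra_metrics = {
    "host_cpu_ready_pct":   [2, 2, 9, 8, 3, 10, 2],
    "datastore_latency_ms": [5, 6, 5, 7, 5, 6, 5],
    "vm_memory_balloon_mb": [0, 0, 10, 0, 0, 20, 0],
}

ranked = sorted(
    ((name, correlation(series, service_response_ms))
     for name, series in infra_metrics.items()),
    key=lambda pair: abs(pair[1]),
    reverse=True,
)
for name, r in ranked:
    print(f"{name}: r={r:+.2f}")
```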

4. Virtual desktop deployments will finally happen. VDI deployments in 2010 have mostly been proofs of concept; relatively few large-scale production deployments of VDI have occurred. Many VDI deployments run into performance problems, so IT ends up throwing more hardware at the problem, which in turn makes the entire project prohibitively expensive. Lack of visibility into VDI also results from organizations trying to reuse for VDI the same tools they have used for server virtualization management. In 2011, enterprises will realize that desktop virtualization is very different from server virtualization, and that management tools for VDI need to be tailored to the unique challenges that a virtual desktop infrastructure poses. Having the right management solution in place will also give VDI administrators visibility into every tier of the infrastructure, thereby allowing them to determine why a performance slowdown is happening and how they can re-engineer the infrastructure for optimal performance.

5. Traditional server-based computing will get more attention as organizations realize that VDI has specific use cases and will not be a fit for others. For some time now, enterprise architects have been advocating the use of virtual desktops for almost every remote access requirement. As they focus on the cost implications of VDI, enterprise architects will begin to evaluate which requirements really need the flexibility and security advantages that VDI offers over traditional server-based computing. As a result, we expect server-based computing deployments to see a resurgence. For managing these diverse remote access technologies, enterprises will look for solutions that can handle both VDI and server-based computing environments equally well and offer consistent metrics and reporting across these different environments.

6. Cloud computing will gain momentum. Agility will be a key reason why enterprises will look at cloud technologies. With cloud computing, enterprise users will have access to systems on demand, rather than having to wait for weeks or months for enterprise IT teams to procure, install and deliver the systems. Initially, as with virtualization, less critical applications including testing, training and other scratch-and-build environments will move to the public cloud. Internal IT teams in enterprises will continue to build out private clouds, and ultimately a hybrid cloud model will evolve in the enterprise. Monitoring and management technologies will need to evolve to manage business services that span one or more cloud providers, where the service owner will not have complete visibility into the cloud infrastructure that their service is using.

7. Enterprises will move towards greater automation. For all the talk about automation, very few production environments make extensive use of this powerful functionality. For cloud providers, automation will be a must as they seek to make their environments agile. Dynamic provisioning, automated load balancing and on-demand power on/power off of VMs based on user workloads will all start to happen in the data center.
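
A minimal sketch of the on-demand power-on/power-off loop described above might look like the following; the capacity model (sessions per VM plus spare headroom) and the resulting actions are placeholders, and a real implementation would call the virtualization platform’s API.

```python
# Sketch of a reconciliation loop for on-demand power on/off. The capacity
# model (sessions per VM plus spare headroom) is an assumption; a real
# implementation would invoke the virtualization platform's API to act.
def reconcile(active_sessions: int, sessions_per_vm: int, running_vms: int,
              spare_vms: int = 1) -> int:
    """Return how many VMs to power on (positive) or off (negative)."""
    needed = -(-active_sessions // sessions_per_vm) + spare_vms  # ceil + headroom
    return needed - running_vms

for sessions, running in [(95, 10), (40, 10), (0, 3)]:
    delta = reconcile(sessions, sessions_per_vm=10, running_vms=running)
    action = (f"power on {delta}" if delta > 0
              else f"power off {-delta}" if delta < 0 else "no change")
    print(f"sessions={sessions}, running={running} -> {action} VM(s)")
```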

8. “Do more with less” will continue to be the paradigm driving IT operations. Administrators will look for tools that can save them at least a few hours of toil each day through proactive monitoring, accurate root-cause diagnosis and pinpointing of bottleneck areas. Cost will be an important criterion for tool selection and, as hardware becomes cheaper, management tool vendors will be forced away from pricing per CPU, core, socket or application managed.

9. Enterprises will continue to look to consolidate monitoring tools. Enterprises have already begun to realize that having specialized tools for each and every need is wasteful spending and actually disruptive. Every new tool introduced carries a cost and adds requirements for operator training, tool certification, validation, etc. In 2011, we expect enterprises to look for multi-faceted tools that can cover needs in multiple areas. Tools that span the physical and virtual worlds, offer both active and passive monitoring capabilities, and support both performance and configuration management will be in high demand. Consolidation of monitoring tools will result in tangible operational savings and actually work better than a larger number of dedicated element managers.

10. ROI will be the driver for any IT initiative. In the monitoring space, tools will be measured not by the number of metrics they collect but by how well they help solve real-world problems. IT staff will look for solutions that excel at proactively monitoring and issuing alerts before a problem happens, and that help customers be more productive and efficient (e.g., by reducing the time an expert has to spend on a trouble call).

Reposted from VMBlog – http://vmblog.com/archive/2010/12/09/eg-innovations-management-technologies-will-play-a-central-role-in-fulfilling-the-promise-of-cloud-computing-and-virtualization-technologies.aspx