Editor’s note: Virtual data centers are extremely complex, especially in a cloud provider’s environment. VMTurbo’s control system provides a fundamentally different method for operating workloads and data center and cloud infrastructure. CTO and founder Shmuel Kliger discusses trends in cloud infrastructure strategies, how to achieve lower TCO from cloud providers and what’s ahead in achieving greater value from infrastructure investments in internal clouds.
SandHill.com: What emerged over the past 12 months as the top three concerns of potential buyers of cloud services that VMTurbo has heard in the market?
Shmuel Kliger: When we engage with new or potential customers, their main concern is that they are wasting way too much time manually troubleshooting when alerts sound in the data center. There are so many variables to consider that it is nearly impossible to diagnose and fix every issue.
CIOs complain that too much time is devoted to “keeping the lights on” rather than to more strategic activities. IT administrators complain that they find it difficult to keep pace with the environment, and they wish they had a better solution than the legacy products of the “physical” era.
The two other issues we hear most are that organizations feel they are not effectively utilizing their infrastructure, and that they worry about ensuring their business-critical applications have access to the resources they need — so they over-provision, which under-utilizes their infrastructure further. It can be a vicious cycle.
SandHill.com: How does VMTurbo differentiate from competitors in the way it plans to meet these needs?
Shmuel Kliger: VMTurbo’s control system provides a fundamentally different method for operating workloads and data center and cloud infrastructure. Since virtualization provides all of the necessary controls through software, we’ve created a solution that intelligently, automatically and continuously adjusts resource configurations and workload placement to maintain the environment in a perpetually healthy state.
This unlocks huge efficiency gains since it’s preventive, automated and prescriptive. It offloads a significant burden from operational staff and eliminates the reactive firefighting typical in the data center.
Our approach is a departure from legacy management solutions that collect a lot of data, trigger alarms when policies are violated and then wait for a human to investigate and remediate. Our solution is more holistic. Its analytics are applied to the full life cycle of planning, deploying and ongoing management and control of the environment.
Lastly, our differentiation lies in our depth and breadth of coverage. Our solution controls all layers of the IT stack (hypervisors, cloud platforms, storage, fabric, compute, applications, etc.) and supports multiple vendors’ solutions from a single instance.
SandHill.com: Please explain how VMTurbo achieves the capability that is referred to in the following testimonial from Indiana University on your website: “What VMTurbo has over all the other vendors in this space that I’m aware of is their unique economic model, the way they look at things. Instead of just pulling raw data out of vCenter on CPU use or memory use, they actually create an entire model that drives intelligent actions to keep the environment healthy. …”
Shmuel Kliger: VMTurbo’s patented economic model is our secret sauce. In the early 2000s, I came across the concept of applying economic principles to manage IT resources, a concept studied in academia since the 1970s. Recognizing the power and purity of the concept, I took action when virtualization began to take hold.
I realized that virtualization is an inflection point where the traditional approaches to managing IT operations won’t suffice to meet the inherent challenges; at the same time, virtualization provides all the necessary controls through software. It was clear to me that applying economic principles to the challenges of this new world was the perfect approach.
Basically, we abstract the virtualized IT stack into a service supply chain, or marketplace, of resources. These resources are commodities that are bought and sold by the entities that require them. Sellers price their commodities based on utilization rates, while buyers continuously shop for the best price for the resources they require.
Then we have a set of analytics that uses virtual currency to balance the supply and demand of services along this supply chain. It uses the “invisible hand of the market” to find the equilibrium point between supply and demand and drive the necessary actions to bring the environment into balance — and keep it there as demands fluctuate.
With high-priority applications afforded greater budgets to shop for resources than lower-priority ones, and with policies and constraints taken into consideration, we can automate actions that ensure applications get the resources they need while optimizing the utilization of IT assets.
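The market abstraction described above can be sketched as a toy model. This is purely illustrative — the hosts, pricing function, and greedy placement below are assumptions for the sake of the example, not VMTurbo’s actual algorithm: hosts “sell” CPU at a price that rises steeply with utilization, and workloads “shop” for the cheapest host within a budget that reflects their priority.

```python
# Toy market-based placement sketch (hypothetical, not VMTurbo's implementation):
# hosts price CPU as a steep function of utilization; VMs buy within budgets.
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    capacity: float          # total CPU units for sale
    used: float = 0.0        # units currently sold

    def price(self) -> float:
        # Price grows steeply as utilization approaches 100%,
        # steering buyers away from hot hosts.
        utilization = self.used / self.capacity
        return 1.0 / max(1e-6, (1.0 - utilization) ** 2)

@dataclass
class VM:
    name: str
    demand: float            # CPU units this workload needs
    budget: float            # higher priority => bigger budget

def place(vms, hosts):
    """Greedy placement: each VM shops for the cheapest host that can
    serve its demand, and buys only if the cost fits its budget."""
    placement = {}
    # Higher-budget (higher-priority) workloads shop first.
    for vm in sorted(vms, key=lambda v: -v.budget):
        candidates = [h for h in hosts if h.used + vm.demand <= h.capacity]
        best = min(candidates, key=lambda h: h.price(), default=None)
        if best is not None and best.price() * vm.demand <= vm.budget:
            best.used += vm.demand
            placement[vm.name] = best.name
        else:
            placement[vm.name] = None  # priced out: a signal to add capacity
    return placement

hosts = [Host("host-a", capacity=8.0), Host("host-b", capacity=8.0)]
vms = [VM("db", demand=4.0, budget=50.0),
       VM("web", demand=2.0, budget=20.0),
       VM("batch", demand=2.0, budget=10.0)]
placement = place(vms, hosts)
print(placement)
```

Because prices climb as a host fills up, lower-priority buyers naturally spread onto the less-utilized host — a one-shot version of the continuous rebalancing the “invisible hand” analogy describes.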
SandHill.com: What are the top three to five strategies first-time cloud buyers can employ to ensure lower TCO? Are there specific criteria they should look for when considering cloud providers?
Shmuel Kliger: The three most important things to consider are which type of cloud environment is best for the organization (i.e., public, private or hybrid), how to guarantee performance and service levels for mission-critical apps, and how to get a return on the investment. So when organizations are evaluating cloud providers, they need to be clear on their own expectations in order to find the right cloud solution.
With regard to the cloud type, the question to ask here is: Where will my data and applications operate most effectively and at the right cost? For companies looking for a test-and-development environment, a public cloud provides a cost-friendly route and easy deployment. But a public cloud may not be able to deliver the performance service levels (SLAs) that a private cloud can. If your organization is migrating mission-critical applications to the cloud, it is important to weigh performance, cost and risk.
Secondly, when migrating to the cloud, companies need to consider performance as a key benefit. Assuring high performance of critical applications and overall quality of service is a primary reason organizations rely on the cloud. In order to reduce risk in a cloud environment, it is also important to ensure that SLAs are met.
Finally, many organizations underestimate the up-front costs associated with moving to the cloud — specifically for private or hybrid cloud deployments. Although the initial costs may seem like a drain on the IT budget, in the long run the payoff has the potential to be significant.
Companies need to determine the best cloud services available on the market that will assist them in achieving a maximum ROI. This is where intelligent workload management and capacity planning can play a key role, as they ensure the infrastructure is operating in an optimal state so that the customer gets as much return as possible out of its initial investment.
SandHill.com: What do you believe will be the two biggest changes in the cloud market and cloud provider landscape over the next two to three years?
Shmuel Kliger: Virtual data centers are extremely complex. Add in the kind of scale you would see in a cloud provider’s environment and you have an overwhelming number of moving parts to consider and manage. This type of environment will highlight the deficiencies that are found in the management tools that exist today.
In environments at scale, monitor-alert-diagnose approaches won’t work, and collecting metrics about the environment will never be meaningful given their short shelf life relative to the time it takes to analyze and (manually) act on them.
We are already starting to see more of an emphasis being placed on receiving greater value from infrastructure investments in internal clouds. The key is obtaining the best ROI possible from virtualization and cloud adoption, and in order to do so, companies will focus on how to effectively utilize their infrastructure.
To reduce OPEX and drive a positive ROI, organizations will turn to cloud service providers that offer the intelligence and unified approach that will ensure their IT budget isn’t going to waste once they have deployed a cloud or virtualized environment.
Secondly, automation will become even more prevalent in IT operations management. As increasingly intelligent solutions require less manual intervention, IT operators will be freed from the ongoing task of troubleshooting, and IT teams will be able to allocate their time more effectively.
This shift will provide a new and more comprehensive approach to addressing the challenges associated with managing and maintaining complex virtual and cloud environments.
VMTurbo was a collaborator in the 2013 Future of Cloud survey hosted by North Bridge Venture Partners, 451 Research and GigaOM.
Shmuel Kliger took on the role of CTO, focusing on product/technology strategy, after founding VMTurbo in 2009. Prior to VMTurbo, Shmuel joined EMC after the acquisition of SMARTS, where he was CTO and co-founder. At EMC he was VP of architecture and applied research in the CTO office, and CTO of the resource management software group. Earlier in his career, he developed middleware technologies at IBM. Follow Shmuel on Twitter @skojm.
Kathleen Goolsby is managing editor of SandHill.com.