“Private clouds need high levels of automation, scale, and policy control” – Willem van Biljon, cofounder and VP of Products at Nimbula
Nimbula is a Silicon Valley-based cloud computing start-up that raised $5.75 million in Series A funding from Sequoia Capital and VMware. The company was founded in early 2009 by ex-Amazon executives Chris Pinkham and Willem van Biljon. Chris Pinkham originally initiated and headed the Amazon EC2 project, and Willem van Biljon led the EC2 development team that built the world’s first Infrastructure-as-a-Service (IaaS) solution, arguably launching the most successful commercial public cloud environment to date.
Nimbula’s software solution is squarely aimed at bringing Amazon-like scale, automation, security management, federation, and multi-tenancy to enterprise infrastructure behind the firewall. In short, Nimbula offers a private cloud management software product that also allows for easy interoperability and federation with public clouds in a hybrid cloud environment. Nimbula is currently conducting beta trials of version 0.1 of its product with select customers. The company is getting ready to release a public beta in early fall and plans to have the general availability version ready by the end of the year. Feedback from customers has been positive, particularly around the high levels of automation, scale, and policy control.
There is still controversy in the industry about whether private clouds are “real” clouds. In a recent Debate in the Cloud Forum hosted by SandHill.com, I argued there is a business case to be made for private clouds, since we heard firsthand from many CIOs that they are looking for enterprise-scale private cloud solutions.
A recent Gartner poll showed that by 2012 IT organizations will spend more on private clouds than on public clouds. Only 4% of those polled had no private cloud plans.
As we have documented in Leaders in the Cloud, our own research and survey data from 500+ IT executives on cloud deployment revealed that organizations are currently planning on utilizing private clouds the most. Hybrid clouds are expected to have the greatest growth potential over the next three years.
One of the findings of Saugatuck’s recent research report, Lead, Follow, and Get Out of the Way: A Working Model for Cloud IT through 2014, is that by the year 2014, at least 15 percent of organizations will use private clouds for daily business operations.
We believe public and private clouds (hybrids) will not only co-exist for a long time, but also will seamlessly interoperate and integrate with each other using many emerging technologies.
Every major technology vendor worth their salt—including IBM, Microsoft (see my previous blog post on Azure-in-a-box), BMC, CA, Cisco, VMware, even Oracle—and many emerging players—such as Eucalyptus Systems, Cloud.com, RightScale, and enStratus—are getting into the game of turning enterprise data centers into highly scalable, cost-efficient, and automated private cloud environments.
Clearly, there is a lot of competition in this space, so I spoke with Willem van Biljon, cofounder and VP of Products at Nimbula, about their solution, what is unique about it, and what differentiates Nimbula from their competition.
Willem shared valuable insights about why there is a real need for private clouds, what type of workloads make sense to deploy into these clouds, and how Nimbula’s solution uniquely addresses this need in a highly scalable, automated, and secure manner.
Here are some excerpts from the conversation:
What are the driving factors behind the evolution of the IaaS industry and where do you see this market heading, particularly for private clouds?
Amazon EC2 effectively created the public Infrastructure-as-a-Service (IaaS) market and brought about a sea change in how infrastructure can be made available in a scalable, self-service manner at very competitive price points. It also established a high bar for agility and cost efficiency that internal IT groups were forced to match and compete with. This naturally prompted enterprise IT teams to ask: In order to compete effectively, what do we need to do in our private infrastructure behind the firewall to match the agility and the very high levels of utilization (and corresponding cost savings) of public clouds?
A lot of the capabilities (e.g. agility, scaling, self-service, automation) you see in the public cloud are equally valuable behind the firewall. However, even capabilities like multi-tenancy, which one normally associates with a public service, actually make sense behind the firewall as well because large organizations with multiple departments have security policies between those departments. We believe that multi-tenancy of IaaS behind the firewall is absolutely critical because it enables policy-based permissions and security management. For example, you may want to restrict which virtual machines may communicate with which others or you may want to have the flexibility to determine which users may launch which virtual machines and so on.
For private clouds to truly take off, we need to bring enterprise-scale and levels of automation of public clouds to the enterprise infrastructure, but equally important we need to bring fine-grained security and governance policy management. Enterprises need to be able to control how cloud-based services (whether internal or external) are accessed, added, deleted, and altered through governance policies. The importance of being able to enforce and implement these policies in real time through a governance platform that creates, monitors, and controls the policies becomes obvious when you have services that extend beyond department as well as company boundaries to external cloud services. Enterprises also need the ability to monitor to make sure that everything is running as it should and they need the ability to meter usage for billing purposes (charge-back in the case of internal billing).
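The permission and governance checks described above—which users may launch which images, and which machines may talk to each other—can be sketched as a small policy store. This is purely an illustrative sketch; the class and method names are hypothetical and not Nimbula’s actual API.

```python
# Hypothetical sketch of policy-based permission checks for a private
# cloud. All names here are illustrative, not any vendor's real API.

class Policy:
    def __init__(self):
        # user -> set of image names that user may launch
        self.launch_rights = {}
        # unordered pairs of groups allowed to communicate
        self.comm_rules = set()

    def allow_launch(self, user, image):
        self.launch_rights.setdefault(user, set()).add(image)

    def allow_comm(self, group_a, group_b):
        self.comm_rules.add(frozenset((group_a, group_b)))

    def may_launch(self, user, image):
        return image in self.launch_rights.get(user, set())

    def may_communicate(self, group_a, group_b):
        return frozenset((group_a, group_b)) in self.comm_rules


policy = Policy()
policy.allow_launch("alice", "analytics-vm")
policy.allow_comm("web-tier", "app-tier")

print(policy.may_launch("alice", "analytics-vm"))    # True
print(policy.may_launch("bob", "analytics-vm"))      # False
print(policy.may_communicate("web-tier", "db-tier")) # False
```

A real governance platform would evaluate such rules in real time at every launch and network-connection request, but the shape of the check is the same.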
The traditional argument is that the scale of public clouds allows vendors to reduce their service costs significantly. Of course, if your internal installation is large enough, you could achieve the same cost efficiencies. And even for smaller installations, it might not entirely be a TCO issue; the cost might be the same either way, but you have to keep it internal for security, privacy, or regulatory reasons. What you definitely gain is operational agility and responsiveness, which has already been demonstrated—in public clouds—to be much better than traditional infrastructure management. With our technology, we are bringing these same agility aspects, among other things, to the enterprise infrastructure.
What is your target market? What type and size of company benefits most from your solution?
Since we are an infrastructure product, we tend to segment our market around the characterization of customer workloads and not generally around industry verticals. The workload types we typically target tend to be favored by divisions of larger organizations (rather than SMBs). These customers are looking for higher agility and fine-grained controls on permissions and security for their R&D-oriented workloads. Many of these workloads also tend to change over time. For example, large financial services companies run large Monte Carlo simulation workloads during the night, leaving the machines that run these workloads idle during the day. Then there are workloads that have peak utilization during certain days or times of the month or year; these tend to be web-facing applications from both large enterprises and SMBs.
Customers don’t even need to have any virtualization footprint to use our solution. They bring their servers and network, and we install from bare metal. Our approach is as follows: we first plug a machine we call a seed machine (that runs our software) into the cluster that the customer wants to convert into a private cloud environment. Once the seed machine sets things up, the whole cluster bootstraps itself from bare metal. We lay down the base operating systems, the virtualization layer, and the control plane software, which then installs itself in a distributed fashion across the whole cluster.
What is the difference between virtualization and cloud computing?
People normally think of virtualization in terms of running the hypervisor, e.g., VMware ESX, Citrix Xen, etc. The hypervisor only gives you the capability of slicing up a single physical machine into multiple virtual machines (VMs). You can place workloads on those VMs and improve the utilization of the physical resources. However, that allocation of workloads to machines is still largely static and manual. You are still looking at these machines as if they are physical boxes. You still manage the network and system configurations in the same way as hardware, and you still have an administrator sitting at a console managing these configurations as workloads come and go. This is obviously a manually intensive and potentially error-prone process.
The cloud provides a much higher layer of automation and scale than the virtualization layer. For example, in a cloud environment, you could simply make an API call to create a virtual machine and place your workload automatically in a location where resources are available. You don’t need to specify a physical location or resource; instead, you specify the type of environment you want your workload to run in, such as the number of CPU cores, the amount of memory, the type of network, the kind of storage, and so on. The placement algorithms will find the appropriate resources that match the requirements and deploy the virtual machine(s) there, making sure that the VM executes in the right environment and adheres to any specified security and governance policies. And all of this happens automatically. The API provides the capability to request additional resources for scaling up as demand goes up, and for scaling down when demand goes down.
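The placement step described above—matching a requested resource “shape” against available hosts rather than naming a physical box—can be sketched in a few lines. This is a simplified, hypothetical first-fit placer; real cloud schedulers weigh many more factors, and none of these names come from Nimbula’s product.

```python
# Illustrative first-fit placement sketch: the caller specifies a
# resource shape, and the scheduler finds a host that satisfies it.
# All names and fields are hypothetical.

def place(hosts, shape):
    """Return the name of the first host with enough free resources."""
    for host in hosts:
        if (host["free_cores"] >= shape["cores"]
                and host["free_mem_gb"] >= shape["mem_gb"]):
            # Reserve the resources and "deploy" the VM there.
            host["free_cores"] -= shape["cores"]
            host["free_mem_gb"] -= shape["mem_gb"]
            return host["name"]
    raise RuntimeError("no host satisfies the requested shape")


hosts = [
    {"name": "node1", "free_cores": 2, "free_mem_gb": 4},
    {"name": "node2", "free_cores": 8, "free_mem_gb": 32},
]
print(place(hosts, {"cores": 4, "mem_gb": 16}))  # node2
```

The point is that the user never names `node2`; the scheduler derives the placement from the requested shape, which is what makes the process automatable at scale.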
Today, the abstraction we provide to an IaaS end user is a virtual machine (an abstraction of the x86 machines underneath). Of course, there are different levels of sophistication in how those VMs are managed, and we differentiate ourselves in terms of the high degree of automation, scale, security, and governance policy management we provide to the user. We also have plans on our roadmap for other abstractions that applications can consume on top of the IaaS layer, including execution environments for Ruby, Python, or Java. At that point we will have offerings that get into the Platform-as-a-Service (PaaS) layer of the cloud stack.
What are the performance impacts of virtualization and the private cloud management tools?
People understand that virtualization introduces overhead and they also understand that the overhead is plummeting as hypervisors get more efficient and chips begin to support virtualization at the hardware layer. I think it’s no longer a showstopper but customers do look for quantification of what that overhead might look like and whether their workloads will perform adequately in a virtual environment.
We also found that because the cloud environment brings a lot of agility in terms of quickly bringing applications in and out, there are tremendous cost savings across the entire lifecycle of an application in terms of finding the resources, deploying the application, running the application, and then pulling the application out of the environment. The overhead of virtualization itself often pales in significance when compared to the benefits of agility over the entire lifecycle.
Furthermore, we allow attributes to be specified in what we call our “launch plan” for running an instance, where you can specify that you want an instance to run close to Fibre Channel storage for high-bandwidth access, to have a certain number of CPU cores for faster number crunching, or to have access to a GPU for graphics processing, and so on.
Finally, you can specify relationships between the workloads. In a single launch plan you can launch more than one virtual machine. For example, you can specify that you need five VMs of type A, three VMs of type B, and four VMs of type C. You can then add specific additional constraints, such that A and B will run on the same machine for performance reasons, or B and C will not run on the same machine because they are meant for redundancy, and so on.
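A launch plan like the one just described—instance counts plus affinity and anti-affinity constraints—can be pictured as a simple declarative structure. The field names below are purely illustrative; Nimbula’s actual launch plan format may look quite different.

```python
# Hypothetical launch-plan structure mirroring the example in the text:
# five VMs of type A, three of type B, four of type C, with affinity
# and anti-affinity constraints. Field names are illustrative only.

launch_plan = {
    "instances": {"A": 5, "B": 3, "C": 4},
    "constraints": [
        # A and B should share a physical machine, for performance.
        {"type": "same_host", "vms": ["A", "B"]},
        # B and C must land on different machines, for redundancy.
        {"type": "different_host", "vms": ["B", "C"]},
    ],
}

def total_vms(plan):
    """Count how many VMs a single launch plan requests."""
    return sum(plan["instances"].values())

print(total_vms(launch_plan))  # 12
```

Expressing the whole topology in one plan is what lets the scheduler honor cross-VM constraints atomically, instead of placing each VM in isolation.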
How does your solution tackle the issue of federation and interoperability between private and public clouds?
Today, customers don’t have a seamless way to federate between private and public clouds. There are different APIs, different identity management systems, different tools and management systems, and it is difficult to federate governance policies that customers use within their datacenters and extend them easily to the public cloud and so on. We have capabilities in our solution where customers can specify whether the user wants to run the workload in a private or a public cloud. Today, we have a gateway to Amazon EC2. We perform a number of governance checks before a user is allowed to launch a workload in a public cloud. One of the checks we perform is to verify whether the user credentials match the credentials that the organization has with EC2. This way we have the capability of preventing some arbitrary user from running a workload with an arbitrary credit card.
More importantly, all of this can be accomplished with just one API call and our EC2 adapter automatically maps that call to the appropriate EC2 API call so that the experience is completely seamless to the end user. No changes required to the user application. The user simply specifies the desired location (private or public) for the workload in the launch plan.
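The adapter behavior described above—one uniform launch call, routed either to the private cluster or through a public-cloud gateway after a credential check—can be sketched as follows. The gateway, the credential fields, and the return strings are all hypothetical stand-ins, not Nimbula’s or Amazon’s real interfaces.

```python
# Sketch of a hybrid-cloud launch adapter: one call, routed to the
# private cloud or a public-cloud gateway based on the launch plan.
# Credential fields and return values are illustrative only.

def launch(plan, user, org_credentials):
    """Route a launch request, enforcing a governance check for public clouds."""
    if plan["location"] == "public":
        # Governance check: the user's cloud credentials must match the
        # organization's registered account before federating out.
        if user["ec2_key"] != org_credentials["ec2_key"]:
            raise PermissionError("user not authorized for public cloud")
        # Map the uniform call onto the public provider's API.
        return f"ec2:run-instances image={plan['image']}"
    return f"private:launch image={plan['image']}"


org = {"ec2_key": "ORG-KEY"}
alice = {"ec2_key": "ORG-KEY"}
print(launch({"location": "public", "image": "web"}, alice, org))
# ec2:run-instances image=web
```

The application code is identical in both cases; only the `location` field in the plan changes, which is what makes the federation seamless from the user’s point of view.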
Any final thoughts for enterprise CIOs considering moving to private and hybrid clouds?
It is clear to us that private and public clouds will co-exist for the foreseeable future. It is critical that private clouds deliver on their promise by ensuring that they have the required scale and automation to achieve true reductions in cost. In addition, implementation of policy management systems within the private cloud will enhance interoperability with public clouds, creating a true hybrid environment.
Kamesh Pemmaraju heads cloud research at Sand Hill Group. He welcomes your comments, opinions, and questions. Drop in a line to email@example.com.