
Newvem Makes Analytics Simple for Driving Effective Use of the Public Cloud

By Zev Laderman | August 7, 2012

The pace of adoption of cloud computing is evidence not only of its powerful economic advantages but also of how easy it is to get into the cloud. But many companies are finding that using the cloud effectively is not so easy. The classic assumption is that a workload in the cloud is the same workload, just running in the cloud. But that isn’t true. A company’s cloud footprint can bring resource fragmentation, complexity and a lack of visibility behind the cloud’s “veil,” making it hard to spot service-level variations and other issues. Keeping control of the cloud footprint requires analyzing huge amounts of data from a multitude of sources, and essentially becomes a Big Data problem. At Newvem, we’re helping companies take a different approach and become effective in their cloud usage.
It’s not unusual in large companies today to find that hundreds of engineers have opened up their own Amazon Web Services (AWS) accounts and the CIO has no handle on all of this cloud computing happening outside his purview.
As an example of the kind of problems that can occur from such “cloud sprawl,” we showed one of our customers their vulnerability. Their cloud use grew so fast that they didn’t do a fully planned deployment as they moved more servers into the cloud; as a result, they inadvertently left many of their security ports open. They were completely unaware of this vulnerability until they used the Newvem analytic engine to get a pulse of what was going on with their computing resources in the cloud.
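To make that kind of check concrete, here is a minimal sketch of an open-port audit against AWS security groups. This is not Newvem’s engine, just an illustration of the underlying idea using boto3, Amazon’s current Python SDK; the region and the wide-open CIDR test are illustrative assumptions.

```python
# Sketch: scan AWS security groups for rules open to the entire internet.
# Requires only read-only credentials; region is an illustrative choice.
import boto3

def find_open_ports(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    # FromPort/ToPort may be absent for all-traffic rules.
                    findings.append({
                        "group": group["GroupName"],
                        "from_port": rule.get("FromPort"),
                        "to_port": rule.get("ToPort"),
                    })
    return findings

if __name__ == "__main__":
    for f in find_open_ports():
        print(f"{f['group']}: ports {f['from_port']}-{f['to_port']} "
              "open to 0.0.0.0/0")
```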
The cloud’s utility model is a totally different world from the inefficient traditional computing model of having to buy more capacity than needed. But this new world requires different metrics, different correlations and different tools to allow CIOs and companies to be successful and effective in their cloud use.
For instance, companies need to be able to detect utilization highs and trending lows to make sure they don’t suffer elasticity breakage, and to smooth the growth of cloud use as they move from one usage level to another, or from one pricing plan to another, such as moving on AWS from “On Demand” to a “Reserved Plan.”
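As a rough illustration of that On-Demand versus Reserved decision, here is a back-of-the-envelope sketch. All the rates and utilization figures are made-up assumptions, not real AWS prices; the point is only the break-even arithmetic that actual utilization data feeds.

```python
# Sketch: yearly cost of On-Demand vs. Reserved at different utilization
# levels. All dollar figures below are invented for illustration.
HOURS_PER_YEAR = 8760

def yearly_cost_on_demand(hourly_rate, utilization):
    # Pay only for the hours the instance actually runs.
    return hourly_rate * HOURS_PER_YEAR * utilization

def yearly_cost_reserved(upfront_fee, discounted_rate, utilization):
    # Pay an upfront fee plus a lower hourly rate for hours used.
    return upfront_fee + discounted_rate * HOURS_PER_YEAR * utilization

on_demand_rate = 0.12   # $/hour, assumed
upfront = 300.0         # $/year, assumed
reserved_rate = 0.05    # $/hour, assumed

for utilization in (0.2, 0.5, 0.9):
    od = yearly_cost_on_demand(on_demand_rate, utilization)
    rs = yearly_cost_reserved(upfront, reserved_rate, utilization)
    better = "Reserved" if rs < od else "On-Demand"
    print(f"utilization {utilization:.0%}: "
          f"on-demand ${od:.0f}, reserved ${rs:.0f} -> {better}")
```

With these assumed numbers, Reserved wins once the instance is busy enough for the hourly discount to pay back the upfront fee, which is exactly the kind of call that requires real utilization data rather than guesswork.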
A large gaming company had one of its subsidiaries in the AWS cloud and was trying to decide whether to continue with its “peak business volume” strategy on AWS or to move the subsidiary into the company’s private cloud. They used our service to discover the actual utilization levels and make their decision.
How to get a cloud reality check
The Newvem solution gives customers the pulse of what’s happening with their computing resources behind the veil of the cloud in a simple three-step process: analyze, reveal and solve. First, AWS users provide us a read-only key for their AWS environment (we only read what’s going on and have no ability to change what’s happening). We then aggregate the data pulses and analyze them against the different metrics and usage patterns that we track, revealing issues in areas such as cost, availability, security, outage risk and utilization.
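For a sense of what such a read-only “data pulse” can look like, here is a minimal sketch that lists EC2 instances and pulls a utilization metric without any write permissions. It uses boto3, Amazon’s current Python SDK; the region, time window and sampling period are illustrative choices, not Newvem’s actual collection parameters.

```python
# Sketch: a read-only "pulse" -- list instances and fetch 24 hours of
# average CPU utilization per instance from CloudWatch.
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
for reservation in ec2.describe_instances()["Reservations"]:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId",
                         "Value": instance["InstanceId"]}],
            StartTime=now - timedelta(hours=24),
            EndTime=now,
            Period=3600,            # one datapoint per hour
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg = (sum(p["Average"] for p in points) / len(points)
               if points else 0.0)
        print(f"{instance['InstanceId']} ({instance['InstanceType']}): "
              f"avg CPU over 24h = {avg:.1f}%")
```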
Finally, in a matter of hours, we populate the dashboard with insights and actionable recommendations on what to do.
The high-level dashboard provides a 360-degree view of usage issues, aggregated across availability, utilization, scalability, security and spend. Underneath the aggregated views are line items identifying specific issues and actionable insights that we recommend be resolved. Users can also drill down and get a historic view of a computing resource.
Another area of the dashboard helps users find self-help resources or service providers with experts who can resolve the issues.
Customers range from companies with up to 10 servers (or “instances”) on AWS that need to understand how to manage a utility-based computing model, to companies with up to 100 servers that start running into scaling issues with typical monitoring tools, to those with thousands of servers, where another level of chaos occurs. Companies with 100 to thousands of servers have gone up the learning curve of cloud proficiency and use our service more like a watchdog facility to alert them to abnormal patterns and help them understand their behavior in the cloud. Service providers also use our solution to manage dozens or hundreds of their clients.
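At its simplest, the “watchdog” idea reduces to flagging readings that deviate sharply from recent history. Here is a deliberately simple sketch using a z-score test; the threshold and the sample series are invented, and a production engine would use far richer models than this.

```python
# Sketch: flag a metric reading that deviates sharply from its recent
# history. Threshold and sample data are illustrative.
from statistics import mean, stdev

def is_abnormal(history, latest, threshold=3.0):
    """Return True if `latest` is more than `threshold` standard
    deviations away from the mean of `history`."""
    if len(history) < 2:
        return False            # not enough data to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu     # flat history: any change is abnormal
    return abs(latest - mu) / sigma > threshold

# Hourly CPU utilization (%) for one instance -- made-up numbers:
history = [22, 25, 21, 24, 23, 26, 22, 24]
print(is_abnormal(history, 25))   # False: within the normal range
print(is_abnormal(history, 95))   # True: abnormal spike, raise an alert
```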
One area where we see a lot of demand is from CIOs who need bottom-line visibility into risk and cost: getting a handle on “rogue” usage of AWS, building an inventory of all the cloud efforts in their company, and communicating it to the CFO and their peers, often by simply recommending that users adopt Newvem to ease their work.
The future of cloud usage analytics
In the future we hope to be the meta layer that allows people to compare cloud infrastructures. We will see the usage patterns and costs across all of them, and we’ll be able to recommend for a given workload or application what is the best cloud provider for that usage. That’s about four or five steps down the road for us right now, but we definitely see that happening.
Over the next few months, we’ll also begin developing a premium service for CIOs. For the past 10 years, CIOs have leveraged business service management, governance and asset management solutions to enable better ties with their business customers. But those solutions were built for a traditional capacity-computing model and now need to be built from scratch for the cloud’s utility model.
We’ll also provide more tools to help the CIO address compliance and security issues.
And down the road we intend to branch out beyond AWS to other major cloud platforms. When we launched Newvem, we intended to do this sooner. But we’re seeing such adoption of AWS that it’s mind-blowing. It’s the public cloud standard right now. So instead of spreading ourselves thin across others, we decided to focus first on becoming really effective on AWS, since it’s the leader and it’s growing so fast.
Democratizing cloud knowledge
As I said at the outset of this article, we’re tackling cloud operations and computing resource sprawl from a different angle. It involves not only different tools but also a life-cycle approach.
A new world is being created in the cloud, along with a world of new knowledge that needs to be understood, learned and shared. In the old world, managing capacity-based computing originally meant buying IBM equipment, IBM manuals, IBM training and IBM services. It was a very one-to-many dissemination of information. Eventually that dissemination shifted to system engineers, and companies paid an arm and a leg for systems implementation.
We recently created a learning center where we’re disseminating a lot of cloud knowledge that we’re gaining from our analytic engine. We even enable the community to suggest insights to us. In fact, one of the top world-renowned experts on AWS suggested an insight to us, and we incorporated it into our service within a week.
Sharing the information is an important component of our mission. Everyone at Newvem comes from the open source community, so we all know the power of the community and the value of tying a community of experts to the community of users who need to consume the information. The insights that we uncover and track are published in our knowledge center for the public to embrace, comment on and help show us the outliers so we can make our insights even better.
Zev Laderman, co-founder and CEO of Newvem, is an experienced executive in the enterprise and startup space. Newvem raised $4M in Series A funding from Greylock, Index Ventures and Eric Schmidt’s Innovation Endeavors. Prior to Newvem, Zev led and sold two companies, Aduva (to Sun Microsystems) and Tradeum (to VerticalNet), and seeded and contributed to WIX. As an executive at Oracle, he built and managed successful business units.
