
The Top 5 Security Issues to Consider When Moving to Public or Hybrid Clouds

By Andrew Hay | October 9, 2012 | Article

More organizations are looking to move out of their datacenters and private cloud environments and into the dynamic world of public and hybrid cloud architectures. There are many reasons for this, including cost savings from utility billing, shorter and easier provisioning times and the ability to spin up servers, when needed, to handle project-specific workloads and tasks.

Traditional datacenters and private clouds are static or semi-static in nature: little changes day to day with regard to IP addresses and compute resource allocation. Granted, private cloud environments can be somewhat dynamic, but organizations often treat them the way they would an on-premises virtualized server infrastructure.

Public and hybrid cloud environments, on the other hand, are dynamic in nature, and there is no guarantee that the IP address you are allocated one day will follow your server through its entire life cycle.

There are, however, security issues to consider when deploying new, or migrating existing, servers in public and hybrid cloud environments. Here are the top five issues that keep coming up in conversations with customers, partners and peer organizations.

Your cloud server is now connected to the network edge

Remember how your organization had network firewalls, intrusion detection or prevention systems and hardware-based VPN devices? In public and hybrid cloud environments, these network devices simply do not exist. Public cloud providers rarely supply security controls, often leaving the security of the operating systems, applications and data to the users who own the cloud server instance.

Without network-based security controls to hide behind, organizations must once again look to the security of their servers, not as a last resort control point, but rather as the front line of defense. Host-based firewalls, host intrusion detection/prevention software, file integrity monitoring, data/disk encryption, application whitelisting and antimalware software have always been an important part of a complete defense-in-depth security architecture.

With perimeter controls no longer contributing defense (or depth) to the new security architecture, organizations must ensure that the controls used in cloud environments can provide the same (or better) level of security without their network-based counterparts.
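To make one of those host-based controls concrete, here is a minimal file integrity monitoring sketch in Python: it hashes a set of files and compares them against a stored baseline. The file list and baseline path are hypothetical placeholders; a real FIM product would also cover file attributes, scheduling and tamper-resistant storage of the baseline.

```python
import hashlib
import json
import os

# Hypothetical paths -- adjust for your own environment and policy.
MONITORED_FILES = ["/etc/passwd", "/etc/ssh/sshd_config"]
BASELINE_PATH = "/var/lib/fim/baseline.json"

def hash_file(path):
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_baseline():
    """Record the current hashes of all monitored files."""
    baseline = {p: hash_file(p) for p in MONITORED_FILES if os.path.exists(p)}
    with open(BASELINE_PATH, "w") as handle:
        json.dump(baseline, handle, indent=2)

def check_integrity():
    """Compare current hashes against the stored baseline and report drift."""
    with open(BASELINE_PATH) as handle:
        baseline = json.load(handle)
    for path, expected in baseline.items():
        current = hash_file(path) if os.path.exists(path) else None
        if current != expected:
            print("CHANGED or MISSING: %s" % path)

if __name__ == "__main__":
    check_integrity()
```

Run build_baseline() once on a known-good server image, then schedule check_integrity() to run regularly on every cloud instance built from it.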

Public cloud servers initialize unpatched

When you sign up for an Infrastructure-as-a-Service (IaaS) public cloud like Amazon Web Services or Rackspace, the provider makes a catalog of virtual machines (VMs) available for you to use. Unfortunately, these VMs typically start up completely unpatched. For example, choose a Windows Server 2008 Service Pack 2 server from the catalog, and it will spin up missing roughly three years' worth of patches.

This creates two problems right out of the gate. First, you've just spun up a publicly accessible server that is vulnerable to every exploit created since the product was released. Second, you will need to patch the server before using it, which increases deployment time as well as cost (you will need to pay for the CPU and network utilization required to patch the server).

The easiest way to address this is to leverage a cloud management tool to manage patch and configuration settings for your VMs in both public and private clouds. Along with expediting the deployment process, such tools also help to ensure that you maintain a consistent security posture.
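One way to shrink the exposure window is to make patching part of provisioning itself. The sketch below, using boto3 (the AWS SDK for Python), launches an instance with a first-boot user-data script that applies all outstanding packages; the region, AMI ID, instance type and package manager are placeholders for illustration, not a recommendation for any specific image.

```python
import boto3

# Hypothetical values -- substitute your own region, image and size.
REGION = "us-east-1"
IMAGE_ID = "ami-xxxxxxxx"

# User-data script executed by the instance on first boot; this example
# assumes a yum-based Linux image and simply applies every pending update.
USER_DATA = """#!/bin/bash
yum -y update
reboot
"""

def launch_patched_instance():
    """Launch an instance that patches itself before it is put to work."""
    ec2 = boto3.client("ec2", region_name=REGION)
    response = ec2.run_instances(
        ImageId=IMAGE_ID,
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
    )
    return response["Instances"][0]["InstanceId"]

if __name__ == "__main__":
    print(launch_patched_instance())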

Your existing tools may not work

Most security tools were built to operate in static and semi-static environments when the term “cloud” didn’t yet exist. As such, your existing tools may not be able to function in a dynamic cloud environment. For example, traditional firewalls with static rules may cease to function should the IP address of your server change. Similarly, your file integrity monitoring software may balk should you decide to increase or decrease disk space or memory allocation for your server.
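As a small illustration of what "cloud-aware" means here, the sketch below derives firewall rules from hostnames at the moment they are applied, rather than hardcoding addresses that may change when an instance is rebuilt. The peer hostnames and port are hypothetical; the point is that the rule set is regenerated whenever the environment changes.

```python
import socket

# Hypothetical peer hostnames -- in a cloud their addresses can change
# across reboots or re-deployments, so resolve them at apply time.
TRUSTED_PEERS = ["db01.internal.example.com", "app01.internal.example.com"]
ALLOWED_PORT = 5432

def current_rules():
    """Build iptables-style rule strings from freshly resolved addresses."""
    rules = []
    for name in TRUSTED_PEERS:
        try:
            address = socket.gethostbyname(name)
        except socket.gaierror:
            continue  # peer not resolvable right now; skip it
        rules.append(
            "-A INPUT -s %s -p tcp --dport %d -j ACCEPT" % (address, ALLOWED_PORT)
        )
    return rules

if __name__ == "__main__":
    for rule in current_rules():
        print(rule)
```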

The question of portability between different clouds should also raise concerns. Just because a tool works on servers running on one particular hypervisor does not guarantee that it will work on another.

Back in the days when servers were physical pieces of hardware on an internal network, server sprawl was not really a problem. There was a large up-front cost associated with deploying a server, which helped curtail how many would be brought online. Further, they took up physical space and would be assigned an IP address on the local network.

This made both physical searches and network scanning effective tools at identifying rogue systems. One of the benefits/drawbacks of cloud computing is that it is now easier to deploy a server than it is to manage it. With a few mouse clicks, you can quickly and easily deploy dozens, if not hundreds, of servers.

Of course, if you deploy hundreds of servers, keeping track of them all can be an administrative burden. For those who grew up in a time when Nmap could find every rogue server on your network, managing VMs in a cloud can seem difficult at best. Keeping track of each server's role, and whether it is still patched and configured per corporate policy, can be a downright nightmare.
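Automation can take some of the pain out of that bookkeeping. The sketch below, again using boto3, walks the running instances and flags any that lack a "Role" tag, a stand-in here for whatever metadata your policy requires; the tag name and region are assumptions for illustration.

```python
import boto3

REGION = "us-east-1"      # hypothetical region
REQUIRED_TAG = "Role"     # hypothetical policy: every server declares its role

def audit_instances():
    """List running instances and flag any missing the required tag."""
    ec2 = boto3.client("ec2", region_name=REGION)
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    print("Untagged instance: %s" % instance["InstanceId"])

if __name__ == "__main__":
    audit_instances()
```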

When evaluating whether your existing tools are capable of operating in a cloud environment, ask your vendor a series of "what if" scenario questions and push for proof, or for references from customers that have already migrated their servers to cloud architectures.

You are now operating in a shared environment

Perhaps one of the biggest benefits of public or hybrid cloud is the cost savings of operating in a multi-tenant environment where the architecture is shared among all customers. This means that some look at public and hybrid clouds like the Wild West or, in some cases, a 1960s commune.

Even though most organizations may be comfortable sharing compute resources, few are as open to sharing access to their sensitive data. As such, organizations in public and hybrid cloud environments need to ensure that proper technical and operational controls are employed to protect their data.

Ensure that your cloud services provider can prove, either through internal documentation or third-party attestation, how they segment data between customer instances.

Organizations should also look to security tools to help obfuscate and logically segregate their data from other tenants using the same cloud platform. This includes, but is not limited to, data at rest, data in transit, applications and operating systems.
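As one illustration of protecting data at rest in a shared environment, the sketch below encrypts a file before it ever reaches shared storage, using the Python cryptography library's Fernet recipe. Key handling is deliberately simplified; in practice the key should be generated once and held outside the cloud provider's reach, for example in your own key management system.

```python
from cryptography.fernet import Fernet

def encrypt_file(plaintext_path, ciphertext_path, key):
    """Encrypt a local file so only ciphertext is written to shared storage."""
    fernet = Fernet(key)
    with open(plaintext_path, "rb") as handle:
        ciphertext = fernet.encrypt(handle.read())
    with open(ciphertext_path, "wb") as handle:
        handle.write(ciphertext)

def decrypt_file(ciphertext_path, key):
    """Recover the plaintext, given the key held outside the cloud."""
    fernet = Fernet(key)
    with open(ciphertext_path, "rb") as handle:
        return fernet.decrypt(handle.read())

if __name__ == "__main__":
    # In a real deployment the key lives in a key management system,
    # not created ad hoc alongside the data it protects.
    key = Fernet.generate_key()
    encrypt_file("customer_data.csv", "customer_data.csv.enc", key)
    print(decrypt_file("customer_data.csv.enc", key)[:80])
```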

Vendor lock-in

Watch the evolution of any technology, and it typically starts off proprietary, with a gradual shift towards open standards. We're seeing much the same in the public cloud space. Today, very few cloud providers supply tools to help you move VMs between your private cloud and their public cloud.

Even fewer are providing tools that will let you move your VM back from their public cloud to your private cloud. To the best of my knowledge, none of the public cloud providers are supplying tools that will let you move from their cloud to another public provider. This means the road to public cloud can be riddled with one-way streets.

Do your homework prior to choosing a public provider. Make sure you understand what tools are available. Also, third-party cloud management tools can help here to a point, but the provider must have the proper APIs in place for migrations to be successful.

What can we do?

Public and hybrid cloud computing requires a complete rethink of how we apply security. Today's servers are more like laptops: they can burst out to whichever cloud provider can best serve our customers. This means we can no longer count on servers being stationary objects, and the tools we use to protect them must be as scalable as the servers themselves.

Another problem is that the network itself has become virtualized. Without physical cables carrying packets between systems, it becomes problematic to deploy deep packet inspection appliances. While it is not impossible to inspect the application layer of virtualized packets, doing so can increase overhead and complexity, as well as introduce single points of failure.

In private cloud environments, we can leverage hypervisor introspection to minimize overhead. In public cloud, however, introspection can have a negative impact on both regulatory compliance and segregation of duties.

Public and hybrid cloud servers need to be protected in a similar fashion to their laptop brethren. Plan on leveraging the operating system’s built-in firewall and focus on tools that will facilitate its management. Rather than focusing on performing intrusion detection and prevention on the wire, implement host-based solutions.
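A minimal sketch of that approach, assuming a Linux server whose built-in firewall is iptables, is shown below: a bootstrap script applies a default-deny inbound policy and opens only the ports the server actually needs. The port list is a placeholder; on Windows the equivalent would drive the built-in Windows Firewall instead.

```python
import subprocess

# Hypothetical service ports for this server role.
ALLOWED_TCP_PORTS = [22, 443]

def run(cmd):
    """Run a firewall command, raising if it fails."""
    subprocess.check_call(cmd)

def apply_host_firewall():
    """Default-deny inbound, then open only the ports this server needs."""
    run(["iptables", "-F", "INPUT"])                 # start from a clean slate
    run(["iptables", "-P", "INPUT", "DROP"])         # default-deny inbound
    run(["iptables", "-A", "INPUT", "-i", "lo", "-j", "ACCEPT"])
    run(["iptables", "-A", "INPUT", "-m", "state",
         "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"])
    for port in ALLOWED_TCP_PORTS:
        run(["iptables", "-A", "INPUT", "-p", "tcp",
             "--dport", str(port), "-j", "ACCEPT"])

if __name__ == "__main__":
    apply_host_firewall()
```

Managing this kind of script centrally, rather than the rules on each box, is what makes the control workable once servers number in the hundreds.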

Concentrate on solutions that have been written specifically for cloud environments. For example, legacy AV solutions can cause AV storms, inhibiting scale and increasing your operating costs.

A cloud-friendly solution that offloads the heavy lifting helps to ensure that you can continue to scale your architecture without compromising on security. The best solutions will offload the work to another hypervisor, thus ensuring scale and cost are not impacted.

Andrew Hay is the chief evangelist at CloudPassage, Inc., where he serves as the public face of the company and its cloud security product portfolio. He is a veteran strategist with more than a decade of experience related to endpoint, network and security management technologies across various sectors. For more information email andrew@cloudpassage.com
