As more organizations leverage the cloud for critical business applications, they are discovering that one of the greatest challenges is combining existing internal controls with cloud protection efforts. Highly regulated business and government organizations in particular must maintain comprehensive security and compliance postures across these hybrid systems. Network World explores the issue in depth with:
- Shawn Kingsberry, CIO of the Recovery Accountability and Transparency Board
- Craig Sutherland, principal architect and engineer, lead associate, Booz Allen Hamilton
- Mike Rothman, president, Securosis
- Ken Ammon, chief strategy officer, Xceedium
NW: Let's start with a basic question. When companies are building hybrid clouds, who is responsible for what when it comes to security? What are the pain points as companies strive to address this?
AMMON: I think what you end up with is a shared-security model. The cloud service providers are offering many security capabilities that don't cost anything, that come with the service, and it's in your best interest to take advantage of those capabilities. But you define your compliance requirements and if you can't get the necessary coverage you add your own overlay security architecture.
The challenge, of course, is you have to figure out how to instrument that capability and how to manage it. And of course it makes sense to do this on an enterprisewide basis, so that means developing an architecture that will span X + N cloud providers that will meet your policy and incident response requirements, give you access to the audit data you need, and simplify your implementation of policy across what may be an embedded security service within the cloud providers themselves.
ROTHMAN: A lot of folks think having stuff in the cloud is the same as having it on-premises except you don't see the data center. They think, "I've got remote data centers and that's fine. I'm able to manage my stuff and get the data I need." But at some point these folks are in for a rude awakening in terms of what the true impact of not having control over layer four and down is going to mean in terms of lack of visibility.
So I think people just figure -- "Hey, it's cheaper, but it's more of the same." And they don't take the steps to build a program office and really work through the little details of jurisdiction and incident response and the compliance impact, of not having control over what could be pretty sensitive and critical data.
SUTHERLAND: When deciding who is responsible for controls, the decisions need to take into account the service delivery and deployment model. The Cloud Security Alliance provides some great guidance in this area, and the cloud computing security working group is expanding all these models, and ultimately these responsibilities need to be contractually assigned during the procurement process. But the service-level agreements alone are not enough if the cloud provider is left with the option of modifying the agreements without warning, as happens on occasion.
But getting back to the original question about the security pain points in the hybrid cloud, I'd add that sometimes when you're looking at these new reference architectures and developing a new model for the cloud, certain enterprise teams may default to imposing legacy solutions onto a cloud environment, whether it's hybrid or just purely public.
And while everyone wants to use solutions they're familiar with, sometimes these controls do not scale for the cloud. In addition, you need to think about the controls at the appropriate level of abstraction, and consider and frame it in the context of the risk that the control is really addressing or mitigating. So this means understanding what is new and different in the cloud, and implementing a control that is appropriate for the cloud and allows the benefits to be realized.
NW: So my first desire is to extend all my legacy controls to this new environment, but, by the sounds of it, I'll always have to bolster these controls for the cloud.
SUTHERLAND: In this cost-sensitive environment you need to start with reusing existing infrastructure, and there's plenty that can be reused. The point is, just be aware of the nuances and limitations of certain tools.
AMMON: Let's ignore the security controls utilized by the cloud provider below the visibility of the customer, i.e., from the hypervisor to the concrete slab. You are left with three categories of security tools: 1) Tools provided within the cloud, such as firewall and VPN; 2) Existing enterprise security tools, which operate no differently within a cloud environment; and 3) Additional security tools necessary to manage the unique environment presented by the cloud. Here are two examples of new requirements for cloud security tools: First, due to the nature of cloud elasticity, these tools must accommodate an auto-scaling architecture and be able to automatically discover new systems and apply policies. And second is the challenge presented by the advent of the all-powerful cloud API layer. Within the API layer users are implementing the practice of auto-configuration ... basically machines building machines. That requires a new security paradigm to manage the unique challenge of auditing and controlling automated machine-to-machine privileged access. This will certainly require new cloud-specific security technology.
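Ammon's first requirement, automatic discovery of new systems plus automatic policy application, can be sketched as a reconciliation loop. This is a minimal illustration, not any vendor's implementation: the provider API is abstracted behind callables, and all names here are hypothetical.

```python
def reconcile(known_ids, list_instances, apply_policy):
    """Discover instances created since the last pass and apply the baseline
    security policy to each newcomer. Because machines build machines via the
    API layer, this has to run continuously, not once at provisioning time."""
    current = {inst["id"]: inst for inst in list_instances()}
    new_ids = set(current) - set(known_ids)
    for iid in sorted(new_ids):
        apply_policy(current[iid])  # e.g. attach security group, enroll in audit
    return set(current)             # becomes known_ids for the next pass

# Example with a stubbed provider: two instances exist, one is already known.
fleet = [{"id": "i-1"}, {"id": "i-2"}]
secured = []
known = reconcile({"i-1"}, lambda: fleet, lambda inst: secured.append(inst["id"]))
print(secured)        # → ['i-2']
print(sorted(known))  # → ['i-1', 'i-2']
```

In a real auto-scaling group, `list_instances` would be a tagged query against the provider's API and `apply_policy` would push firewall rules and audit enrollment.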
NW: Is it different when you're talking about SaaS in particular? Do you just have to take what they give you?
ROTHMAN: Where we've seen a lot of SaaS players open things up a bit is in the area of identity. So instead of having to manage all of these different authoritative sources and user lists, you can get around it through the magic of federation (some suppliers support standards like SAML to provide specific assertions and integration) so you don't have to use the SaaS players' identity model.
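The federation Rothman describes rests on the service provider consuming SAML assertions instead of maintaining its own user list. A minimal sketch of extracting the federated subject from a SAML 2.0 assertion follows; signature, issuer, audience and validity-window checks, which any real deployment requires, are deliberately omitted.

```python
import xml.etree.ElementTree as ET

SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def subject_from_assertion(xml_text):
    """Pull the federated subject (NameID) out of a SAML 2.0 assertion.
    Production code must first verify the XML signature and the assertion's
    issuer, audience and validity window before trusting this value."""
    root = ET.fromstring(xml_text)
    name_id = root.find(".//saml:Subject/saml:NameID", SAML_NS)
    if name_id is None:
        raise ValueError("assertion has no NameID")
    return name_id.text

assertion = """<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Subject><saml:NameID>alice@agency.example</saml:NameID></saml:Subject>
</saml:Assertion>"""
print(subject_from_assertion(assertion))  # → alice@agency.example
```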
But when you start thinking about specific controls and managing access, most of that stuff happens under the auspices of the SaaS provider. So there isn't a lot of flexibility. Salesforce is one company that allows customers to use adjunct technology to encrypt some of the data they store in its environment down at the field level. They acquired a company called Navajo maybe two or three years ago to provide that capability.
But in terms of the continuum of who's responsible for what, when it comes to infrastructure as a service and even platform as a service, the customer is really responsible for pretty much everything that happens on the security side, whereas with SaaS the service provider or the cloud provider actually assumes all responsibility for control sets and auditing and all of those things.
SUTHERLAND: Even in the case of infrastructure providers, the cloud supplier's controls provide the base for any compliance solution, and the shared responsibility model involves selecting appropriate controls above the cloud service or management layer to combine with appropriate user-level controls, whether it's privileged identity management or just host-based and endpoint controls. So this can involve integrating vendor components to address any of your compliance objectives from security or privacy or operational risk, regulatory and legal requirements.
NW: Does the cloud service provider, whether it's SaaS or an IaaS supplier or whatever, want the buyer to assume as much control of the security environment as possible?
SUTHERLAND: It's ultimately the consumer's responsibility. But if you step down from SaaS to a platform to infrastructure as a service, the consumer is assigned more of the responsibility. As you have more flexibility, you also take on more responsibility for the security that's implemented. However, to develop a fully compliant or low-risk solution, you need to implement the user-entity controls, as some cloud providers call them. If implemented appropriately, along with your own controls above the service layer, you can really develop a secure solution.
KINGSBERRY: We recently interviewed roughly 30 leaders across industry and the federal government about cloud computing security and built our cloud hub addressing every one of the security issues. We migrated our mail and collaboration into Microsoft 365 as their first GovCloud customer, and in parallel migrated other key infrastructure components over to Amazon. All NetFlow flows through our Recovery Accountability and Transparency Board (RATB) Cloud Hub on-premises, even Microsoft 365 Web mail, meaning if you use Microsoft 365 to send an email it comes back through our cloud hub stack from a compliance perspective. And we have capabilities within our stack like Xceedium that help us manage access control between Microsoft 365 and Amazon.
So, in essence, we have the same level of visibility between software as a service and infrastructure as a service. It's a shared responsibility, but I have auditing and compliance. No Social Security numbers, for example, are going to leave our organization because they get stopped by Proofpoint. And everything goes through our NetWitness infrastructure and our McAfee Data Loss Prevention. We have categorized the RATB Cloud Hub into six critical services: 1) Governance 2) Protection 3) Access Control 4) Monitoring 5) System Management 6) Failover. Each category has components that play key roles in the delivery of the RATB Cloud Service. Proofpoint, RSA NetWitness, and McAfee Data Loss Prevention Managers are only a few of the components making up our Cloud Hub stack. Now we can put workloads anywhere and it doesn't matter.
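The SSN-blocking behavior Kingsberry attributes to the DLP layer amounts to a content policy applied at the outbound choke point. A toy illustration of that idea, in no way a model of the actual Proofpoint policy engine:

```python
import re

# US Social Security numbers in the common ddd-dd-dddd form; a hypothetical
# stand-in for the DLP rule described above. Real DLP engines also catch
# unformatted digits, attachments, and encoded content.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def outbound_allowed(message_body):
    """Block any outbound message containing an apparent SSN."""
    return SSN_RE.search(message_body) is None

print(outbound_allowed("Quarterly totals attached."))     # → True
print(outbound_allowed("Applicant SSN is 078-05-1120."))  # → False
```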
NW: Are your federal customers generally asking you to shoulder more responsibility?
KINGSBERRY: If you look at the Federal Data Center Consolidation Initiative, roughly 70% of all federal data centers are already outsourced. So federal CIOs are already having data centers delivered as a service. From a federal standpoint, it's all about the data. The classification of the data is what defines the level of security controls required (e.g., FISMA Low, Moderate and High). I think the federal government is past the point of asking the question, "Can I get the same level of information assurance leveraging cloud services?" Federal understands you can. Securing federal data is a shared responsibility between the federal agency and the provider. Roles and responsibilities will differ between agencies, as FISMA is about managing risk and each agency's view of risk is different.
NW: As Sutherland mentioned earlier, a lot of this has to be baked into the contract terms. Are there best practices that address how?
ROTHMAN: A lot has to do with how much leverage you have with the provider. With the top two or three public cloud providers, there's not going to be a lot of negotiation. Unless you have a whole mess of agencies coming along with you, as in [Kingsberry's] case, you're just a number to these guys. When you deal with smaller, more hungry cloud providers, and this applies to SaaS as well, then you'll have the ability to negotiate some of these contract variables.
So it's a matter of understanding what the agreements specify, understanding who's going to be responsible for what. But I haven't seen a lot of folks be overly successful getting better terms or negotiating special deals or doing any of that kind of stuff because, remember, the cloud and being a cloud provider is all about leverage. So if you've got a different deal for every one of your customers there's no way to really leverage that.
So it's a matter of understanding what you can do, what they're going to do, and looking at it from a threat-modeling standpoint -- we know we're not going to be able to amend the contract to any great deal, so where are our exposures, and what do we have to do to address or mitigate those exposures when making that decision?
KINGSBERRY: When we went to Amazon we were in negotiations for months. We literally had our general counsel talk directly to Amazon, and they had to modify their terms or we were not going to migrate. Microsoft as well. We literally restructured the whole agreement. And right when we were about to agree to all the changes, Microsoft GovCloud was released. They learned from us what the federal government needed, and then the terms and conditions were rolled into the GovCloud we know today. The government was not going to come in if they didn't remove language about the possibility of our data ending up in third-world countries.
NW: So there is still a lot of learning going on and people on both sides have to be adaptable.
ROTHMAN: It's really early days when you think about the fact that we haven't been through a cycle of litigation and precedent, and that could take years. Until that happens, all this stuff is reasonably academic.
NW: How about the maturity of the cloud security tools themselves? Are they where they need to be?
ROTHMAN: You'll walk around the RSA Conference and everybody will say their tools don't need to change, everything works great and life is wonderful. And then after you're done smoking the RSA hookah you get back to reality and see a lot of fundamental differences in how you manage when you don't have visibility. How do you enforce network policies when you're restricted to security groups and you only have the ability to open up certain protocols? And you have access through APIs that may be gamed to terminate or reconfigure instances on the fly, without requiring administrative access to the cloud instance. You've also got different cryptographic hierarchies that are required to provide access to those instances. If the management tools are not built specifically to provide consistent access to cloud resources, wherever they are, things can go downhill pretty quickly.
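When enforcement is "restricted to security groups," as Rothman puts it, network policy reduces to declaring which ports may be reached from which sources. A sketch in the `IpPermissions` shape used by the EC2 API (as consumed by, e.g., boto3's `authorize_security_group_ingress`); the CIDR and the commented-out group ID are placeholders:

```python
def https_only_from(cidr):
    """A least-privilege ingress rule: TCP 443 only, from one source range."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": cidr, "Description": "HTTPS from corp range"}],
    }]

rule = https_only_from("203.0.113.0/24")
# A real call would look like:
# ec2.authorize_security_group_ingress(GroupId="sg-...", IpPermissions=rule)
print(rule[0]["FromPort"], rule[0]["ToPort"])  # → 443 443
```

The point is the inversion Rothman is driving at: you never see the packets below layer four, so the rule declaration itself becomes the only network control you hold.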
So again, the idea of consistency is critical. But it's a management problem before it's a security problem. So now you have the ability to, within minutes, provision all sorts of servers. OK. But that creates an issue in terms of configuration management, in terms of patch management, etc. So on one hand the tools really have to be mature to overcome and instrument your lack of visibility in a cloud type of environment, but there's still a lot of blocking and tackling needed in terms of just the basic operational disciplines.
KINGSBERRY: From our perspective, federal agencies are always going to have something on-prem and then they're going to want to offload workloads. So if you turn it into a network problem, an information assurance problem, and everything is based on NetFlow, you're going to get full visibility. You can control things in a different way. And when it's infrastructure as a service, it's really no different than having a physical server on-prem. In essence, I have full control of all services running on that box, which means I can connect in enterprise management tool sets to ensure I can manage it.
AMMON: Many of the new security options will actually improve your agility and reduce your costs. An example of that would be a typical machine shutdown and forensics if you had an exploit. With the cloud you can copy a suspected server image to your forensic tool kit, fire up a brand-new replacement image and do all this through the click of a mouse as opposed to deploying employees to data centers. With cloud, experience really matters. Customers can greatly benefit by contracting with proven cloud architects who can help them figure out how to take advantage of the power of the cloud while avoiding cloud supplier lock-in or overly complex management of disparate security tool sets. Customers should implement centrally managed security if they want to maximize reduction in expense and complexity. A piecemeal cloud strategy may leave you with a collection of cloud islands operated and controlled through disconnected security tool sets.
That's actually a problem we are just starting to see in the privileged identity management arena, something we call islands of identity, where organizations are using a different tool on each platform -- cloud, virtual, etc. -- to manage privileged identities. We address this with a privileged identity management solution that reduces the risks that privileged users and unprotected credentials pose to systems and data. With Xsuite, customers can implement secure privileged identity management across their entire hybrid cloud. It vaults privileged account credentials, implements role-based access control, and monitors and records privileged user sessions. And our unified policy management enables Xsuite to deliver the seamless administration of security controls across systems, whether they reside in a traditional data center, a private cloud, on public cloud infrastructure, or any combination thereof.
KINGSBERRY: You mentioned using cloud for forensic work ... we had a similar business requirement. If something like that happens, we leverage Amazon to roll those VMs into an enclave that already has all the forensics tools. So we have snapshots of the compromised VM and all the tools ready, and it's locked down so no network traffic can take place. So I'm using the cloud for what it's best for.
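The snapshot-and-replace workflow Ammon and Kingsberry both describe can be sketched as a short orchestration. The provider SDK is stubbed out so the sketch runs offline, and every method name here is hypothetical:

```python
class StubCloud:
    """Stand-in for a provider SDK so the workflow can be exercised offline."""
    def __init__(self):
        self.calls = []
    def snapshot(self, iid):
        self.calls.append(("snapshot", iid)); return "snap-001"
    def move_to_enclave(self, sid):
        self.calls.append(("enclave", sid))
    def launch_from_baseline(self):
        self.calls.append(("launch",)); return "i-new"
    def isolate(self, iid):
        self.calls.append(("isolate", iid))

def quarantine_and_replace(instance_id, cloud):
    """Cloud-native incident response: snapshot the suspect VM into an
    isolated forensics enclave, launch a clean replacement from a known-good
    image, then cut the original off the network - all via API calls rather
    than a trip to a data center."""
    snapshot_id = cloud.snapshot(instance_id)   # preserve evidence first
    cloud.move_to_enclave(snapshot_id)          # no-traffic analysis environment
    replacement = cloud.launch_from_baseline()  # restore service immediately
    cloud.isolate(instance_id)                  # contain, but keep state intact
    return snapshot_id, replacement

cloud = StubCloud()
print(quarantine_and_replace("i-suspect", cloud))  # → ('snap-001', 'i-new')
```

The ordering is the design choice: evidence is preserved before service is restored, and the compromised instance is isolated rather than terminated so its state remains available to investigators.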
SUTHERLAND: I think the tools are making progress. We've deployed for customers decentralized protection architectures that allow the virtual resource instances to protect themselves rather than relying solely on centralized protection architectures. So, for example, utilizing IDPS or intrusion detection/prevention at the instance level, the instance is able to protect itself in depth against attacks that may originate from inside the perimeter. And this, combined with integrity monitoring at the instance level and in the application layer, provides real-time reporting on malicious or unexpected changes to configuration, system files or data access.
NW: Ammon, you once said identity is becoming the new perimeter. Can you expand on that?
AMMON: All security exploits involve two steps, gaining access and elevating rights/privileges. The combination of both mobility and cloud has resulted in the erosion of the traditional security boundary. Managing risk calls for a more granular approach to the process of granting, controlling and containing access. With identity as the new perimeter, system owners should demand a separation between identification/authentication and authorization. Granting unfettered access to an entire network segment or all features within a cloud management console incurs unnecessary risk. System owners should also take advantage of federating privileged identity to reduce management complexity and improve accountability.
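The separation Ammon calls for, authenticating an identity once but authorizing each action against a specific target rather than granting an entire network segment or console, can be illustrated with a deny-by-default check. The roles, targets and commands below are purely illustrative:

```python
# Authentication answers "who is this?"; authorization answers, per command
# and per target, "what may this identity do right now?". Nothing is implied
# by network reachability, and nothing is granted by default.
ROLE_GRANTS = {
    "db-admin":  {("prod-db", "restart"), ("prod-db", "backup")},
    "web-admin": {("web-01", "deploy")},
}

def authorize(user_roles, target, command):
    """Return True only if some role explicitly grants this command on
    this target. No role ever means 'all commands on all systems'."""
    return any((target, command) in ROLE_GRANTS.get(r, set())
               for r in user_roles)

print(authorize(["db-admin"], "prod-db", "restart"))  # → True
print(authorize(["db-admin"], "web-01", "deploy"))    # → False
```

Federating privileged identity, as Ammon suggests, means the `user_roles` input comes from a central authoritative source rather than being duplicated on every system.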
SUTHERLAND: Just to add to that, when you're using shared privileged accounts, being able to separate authentication from authorization is, in our experience, very important, and it is critical to be able to monitor, control and perform forensics. So a privileged access system can enforce this policy for each user, even when shared privileged accounts are in use, and provide full attribution of activities to the individual user or privileged user.
ROTHMAN: Right. This is another thing that I don't think the compliance hierarchies and auditors and assessors have clued in on, in terms of common console access. PCI for the last seven or eight years has had this concept of unique ID and kind of being able to control things down to a specific individual to be able to wrap changes back to. But again, cloud breaks that model for a lot of different reasons. So this gets back to the idea that we just don't know what we don't know quite yet.
If identity is the new perimeter and the perimeter has disappeared, then we'll all be kind of zombies in the future, because again, it's very hard to track privileged access back to a unique user, as required by the compliance mandates. This gets back to why consistency is critical. Whether it's happening within your own data center or out in the cloud data center, whether you've got resources that go back and forth or burst, or a lot of what [Kingsberry] was talking about, you need a set of policies and a control set that can be leveraged consistently, regardless of where your data happens to be. That's really where stuff has to go, and we are in early days, like diaper time. We're not even toddlers yet.
NW: Rothman mentioned visibility as being a problem. How big a problem is it?
ROTHMAN: Oh, it's a terrible problem. There is the option [Kingsberry] described which, from my perspective, is unique, of running all of your traffic through a choke point, but that starts pushing on the balance between the performance you get with cloud computing and the reality of what you need to do in order to control these environments. It creates difficulty. And what that means is you can't do things like capture network traffic with tools like NetWitness in traditional cloud architectures.
But if you're going to route everything to a choke point that kind of breaks the architectural constructs that make cloud computing interesting in the first place. So what you see are folks climbing the stack from the perspective of instrumenting their applications, instrumenting their databases, instrumenting their instances to a much greater degree because they don't have the ability to do that at the network layer.
KINGSBERRY: That's why I say business drives technology. For federal agencies to feel comfortable with the cloud, we had to take that approach. If we were a larger agency it would be architected slightly differently to address performance issues. Right now, however, for our agency, there is no performance hit. We're a small agency and our network pipes are larger than what's really required.
NW: Any closing thoughts?
KINGSBERRY: When you look at where we are today and where we're going, the opportunities are through the roof. There are lots of opportunities today to do all types of things. I can tell you when we migrated to Microsoft 365 we ended up paying roughly 30% less than if we had to do it on-prem. And it enabled us to stand up something that we hadn't stood up ... Microsoft AD FS 2.0, which gave us an added level of security. So cloud gave us some interesting opportunities to do some really cool stuff.