As part of our ongoing effort to showcase industry thought leaders here on the blog, I was able to sit down with Raj Samani, Vice President and Chief Technology Officer for McAfee EMEA, for a series of talks on the topic of critical infrastructure. Raj previously served as the Chief Information Security Officer for a large public sector organization in the U.K., and in this podcast we discuss cloud computing for critical infrastructure.
Raj, thanks so much for dialing in from London today. Let’s just jump right into cloud computing for critical infrastructure. I mean, just saying these words together, they sound like opposing forces. What’s the connection here in what I know is a very controversial topic?
In fact, I gave a presentation just last week where I was talking about the use of cloud computing within the petrochemicals industry. To say that the crowd was slightly hostile to begin with would be somewhat of an understatement. What I was referring to was in fact the potential use of a public cloud from a critical infrastructure perspective.
Now, I’m not saying that an organization should put every piece of data with a third party out in the cloud. But I think it really comes down to doing the work: understanding the data that you have, the value of that data, and the regulatory obligations you have with regard to that data, i.e., can you send that data out of the country or outside a specific geographic location? Once you understand the value of that data, then you can consider potentially using cloud computing.
So, you’re actually talking about public cloud. Knowing what you know about cloud and about critical infrastructure organizations, is it realistic for them today to really embrace a public cloud?
We have a fairly large public sector conference in the United Kingdom, and I remember I had a similar conversation and a similar presentation talking about the potential use of public cloud within public sector organizations. In fact, we published a report when I was working in the public sector, really talking about the potential use of public cloud computing for public sector organizations. About two to three years ago, we didn’t really see that this was viable, but I’ve actually started to see some public sector organizations start to embrace public cloud computing.
Now, let me give you an example. Just this past Christmas, there was a press release where a local authority had basically gone to one of the big three cloud providers. This local authority also had a connection to what we call the GSI, the Government Secure Intranet. So this is working at impact level three, IL3.
Now, typically speaking, you’d say, how can an organization have a connection to a secure, restricted network, but also utilize public cloud computing? That seems like somewhat of an oxymoron. But if you have strong governance in your organization, in terms of knowing and understanding the data that you have, if you have a strong data classification policy, and, more importantly, you can enforce strong protective monitoring to ensure that restricted material or intellectual property doesn’t go into containers which don’t have the right level of security, then I think it’s feasible.
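To make that idea concrete, here is a minimal sketch of such a protective-monitoring gate, assuming a made-up set of classification levels and storage destinations (none of this reflects a real accreditation scheme): data may only be released to a destination whose accredited ceiling covers its classification.

```python
from enum import IntEnum


class Classification(IntEnum):
    PUBLIC = 0
    OFFICIAL = 1
    RESTRICTED = 2
    SECRET = 3


# Hypothetical destinations and the highest classification each is
# accredited to hold.
DESTINATION_CEILING = {
    "public-cloud-bucket": Classification.OFFICIAL,
    "private-datacentre": Classification.RESTRICTED,
    "accredited-govt-store": Classification.SECRET,
}


def release_permitted(level: Classification, destination: str) -> bool:
    """Allow the transfer only if the destination can hold this level."""
    ceiling = DESTINATION_CEILING.get(destination, Classification.PUBLIC)
    return level <= ceiling


# RESTRICTED material must never land in the public cloud container.
assert not release_permitted(Classification.RESTRICTED, "public-cloud-bucket")
assert release_permitted(Classification.OFFICIAL, "public-cloud-bucket")
```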
When I’m hearing you explain this, some of the things that come to my mind are, leveraging DLP, both perhaps network and host-based. So, not just limiting, but potentially tagging or tokenizing information as it enters the cloud, and leveraging encryption capabilities to the same end. Are those some of the technologies that you’re talking about when deploying an architecture to support this type of design?
Actually, it probably goes further back than DLP. Certainly, DLP is one mechanism that you would use to enforce it. But the key thing first and foremost is, you need to have a policy. And by a policy, I don’t just mean a straightforward acceptable use policy, although that is part of it. The first fundamental thing that you need to do is try to understand the level of risk that you’re willing to tolerate. And then, obviously, ensuring that you have a data classification policy is a critical component of DLP.
Then we can start to look at some of the technology. So, in one particular example, I know an organization that has a network DLP system that basically sits on their perimeter and ensures that information which is tagged and has a classification of, let’s say X, is not allowed to go up into the public cloud.
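As a rough illustration of that perimeter rule, the sketch below inspects outbound payloads for a classification tag and blocks tagged material bound for public cloud endpoints. The tag format, domains, and labels are invented for the example; real network DLP products perform much deeper content inspection.

```python
import re

# Hypothetical public-cloud egress points and an invented inline tag format.
CLOUD_DOMAINS = {"storage.example-cloud.com", "files.example-cloud.com"}
TAG_PATTERN = re.compile(r"\[CLASSIFICATION:(\w+)\]")
BLOCKED_LABELS = {"RESTRICTED", "SECRET"}


def allow_outbound(destination_host: str, payload: str) -> bool:
    """Permit the transfer unless tagged material is headed for the cloud."""
    if destination_host not in CLOUD_DOMAINS:
        return True  # this rule only governs the public-cloud path
    match = TAG_PATTERN.search(payload)
    return match is None or match.group(1) not in BLOCKED_LABELS


# The perimeter DLP drops this upload: prints False.
print(allow_outbound("storage.example-cloud.com",
                     "[CLASSIFICATION:RESTRICTED] plant schematics"))
```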
So yes, DLP is certainly an element of it, but I would argue that, first and foremost, you need to have the policy in place. You need to have the appropriate processes in place, and people need to understand the value of the data and what the classification means.
What we should also consider is that DLP can be used as a sort of hard control in terms of enforcing, stopping, or preventing. But I also know of organizations that are using DLP in conjunction with data classification technologies as a means for cultural change. For example, when a user writes a Word document, they are forced to select the classification for that document. Once they assign that classification, the appropriate rules are then applied, so they may not be able to forward it to people outside of the organization, and so on and so forth.
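Here is a small sketch of that classify-at-creation workflow, assuming an invented set of labels and handling rules: the author must supply a classification before the document exists, and the forwarding rule follows mechanically from the label.

```python
from dataclasses import dataclass

# Invented labels and the handling rule attached to each.
HANDLING_RULES = {
    "PUBLIC": {"external_forwarding": True},
    "INTERNAL": {"external_forwarding": False},
    "RESTRICTED": {"external_forwarding": False},
}


@dataclass
class Document:
    title: str
    classification: str  # the author must choose this; there is no default

    def may_forward_externally(self) -> bool:
        return HANDLING_RULES[self.classification]["external_forwarding"]


doc = Document(title="Board minutes", classification="INTERNAL")
print(doc.may_forward_externally())  # False: the mail gateway would block it
```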
I like how you’ve connected those dots, essentially saying technology should augment the policy and not the other way around. Don’t deploy DLP and then write your policy to take advantage of those capabilities. It’s very well stated.
I know you’re aware there are just so many organizations that don’t really have their hands around their data. Where is it? What’s sensitive, what’s not? Who has access? Shifting gears a bit, let’s say a critical infrastructure group wants to move forward with public cloud. They believe they’ve got the right policies in place. They believe they’ve evaluated the right types of technology to support it. What about regulatory obligations, especially as they relate to things like data not being allowed to reside outside of a finite geographical area? In practice, would the regulations get in the way of embracing the cloud? Or do you think they help frame their ability to do this?
So I guess I have to kind of be careful. And probably the first thing I should say is that I’m not a lawyer, so I am allowed to get things wrong. [laughter]
You’re absolutely right. I guess that’s one of the challenges that organizations face today. I think when we refer to whether or not you can actually transfer data outside of a finite geographical area, invariably if you’re talking about personal data, and certainly if you’re in Europe or the U.K., yes, you are restricted in what you can do.
Of course, there are mechanisms in place, so if you want to utilize a U.S. organization, then you may be in a position to start looking at things like Safe Harbor agreements and so forth. But all of the legal and regulatory obligations should be part of your ongoing process.
I’m going to be very boring here and refer to textbooks, but it is a case of identifying your risks. Once you’ve identified what your risks are, then you need to go through a risk management process and consider things like legal and regulatory obligations before you start to apply the methods to manage the risks.
You may say, well, this data, for example personal data, we don’t want that to be hosted outside of a geographical area. Or we may say, actually, we don’t even want it to be hosted by a third party; our crown jewels, as it were, we may want to store in-house. You really can’t get into that discussion until you know the data that you have, which requires a strong degree of governance. Then you can start to consider what you can do with the data.
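A minimal sketch of that placement decision, with hypothetical data categories and regions (not a statement of any actual regulation): once data is classified, hosting locations are checked against where that category is allowed to live.

```python
# Hypothetical categories mapped to the locations allowed to host them.
PLACEMENT_POLICY = {
    "public": {"any-region"},             # no restriction
    "personal": {"uk-south", "eu-west"},  # must stay in-region
    "crown-jewels": {"in-house"},         # never leaves our own estate
}


def placement_allowed(category: str, location: str) -> bool:
    """True if data of this category may be hosted at this location."""
    allowed = PLACEMENT_POLICY.get(category, set())
    return "any-region" in allowed or location in allowed


print(placement_allowed("personal", "us-east"))       # False: out of region
print(placement_allowed("crown-jewels", "in-house"))  # True
```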
As we wrap up here, one last question. What about the issue of assurance?
[laughs] Well, you promised me this would be the last question, and it’s probably going to be the question I’m going to spend about an hour talking about. That really is a can of worms, right?
In fact, I gave a presentation recently, and I was talking about one of the largest cloud service providers. I said, this cloud service provider has 15 million customers, and a hand went up. He says to me, no, I work at that company; I’m their technical director. In fact, we have about 40 million customers. That’s quite mind-boggling.
When we think about it, having 40 million customers also puts the third party at a slight disadvantage. Because realistically, can you give the right to audit to 40 million customers? Well, no, because it’s simply unsustainable.
So typically, what we’re seeing are cloud service providers that utilize standards to provide customers with that degree of assurance. Now, previously we had the SAS 70 Type II sorts of audits that were out there, and of course, that’s now changed. But we’re starting to see a new series of controls and standards coming out that try to address some of the concerns a customer may have.
I do a lot of work with the Cloud Security Alliance, and we’ve launched the Cloud Controls Matrix. Of course, you have the STAR program, which gives customers the ability to go to a centralized place and identify those providers that adhere to the CCM.
What we’ll begin to see is more standards that can provide that degree of transparency to the customer. Really, what I would anticipate is we’ll see the approach of allowing the cloud service provider to go ahead and do some degree of assessment and share their results with all of their customers.
I think that the bespoke arrangements of a one-to-one relationship just aren’t sustainable. In fact, we’re also seeing things such as FedRAMP in the United States being used within the federal space. I anticipate that we’ll see more standards come together, and I’d also anticipate some sort of harmonization within the whole standards space.
Raj, thank you so much for your time. And thanks to our listeners for joining us.