A forensic IT study by a U.K. security consultancy found that some multi-tenant public cloud providers have "dirty disks" that are not fully wiped clean after each use by a customer, leaving potentially sensitive data exposed to other users.
Last year, officials at Context Information Security conducted a study to determine if they could access data from other customers within public cloud environments of four providers. "We were quite surprised," says Michael Jordan, research and development manager at Context. "Using a pretty straightforward test we were able to view data that had been there a pretty long time."
Context officials, who conducted the study with the permission of the cloud providers, performed a series of disk analysis tests on virtual machines running in the public clouds. The theory was that if the hypervisor is not architected to clear storage disks after each use by a customer, the data can remain on the disk and be accessed by subsequent users. Sure enough, when Context researchers prompted the virtual machines to read the raw data on the disk, they found remnants of previous customers' data.
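The kind of raw-read test Context describes can be sketched in a few lines. This is a simplified stand-in, not Context's actual tooling: an in-memory buffer plays the role of a reallocated block device, and the "secret" and its offset are made up for illustration. The point is that if the provider hands out storage without zeroing it, a new tenant scanning the raw bytes will find whatever the previous tenant left behind.

```python
# Hypothetical simulation of an unwiped virtual disk (not Context's tooling).
import re

DISK_SIZE = 1024 * 1024  # 1 MiB stand-in for a provisioned block device

# Tenant A writes data; the disk is then "freed" without being wiped.
disk = bytearray(DISK_SIZE)
secret = b"card=4111111111111111;user=alice"  # made-up remnant data
disk[4096:4096 + len(secret)] = secret        # left at an arbitrary offset

# Tenant B is handed the same raw blocks and scans them for remnants --
# runs of printable ASCII, much like reading the raw disk from a fresh VM.
remnants = re.findall(rb"[ -~]{8,}", bytes(disk))
print(remnants)  # the previous tenant's data is still recoverable
```

On a real system the scan would target the block device itself (for example, `strings` over `/dev/xvdb` on Linux), but the principle is the same: unallocated does not mean erased.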
In one test Context researchers found references to applications that had previously been installed on the disk, while in other cases they found more potentially sensitive data, such as fragments of a website's customer database and logs showing where the data came from. "The remnant data was randomly distributed and would not allow a malicious user to target a specific customer," Context officials wrote in a report describing their testing. "A malicious user who discovered the vulnerability could, however, exploit it to harvest whatever unencrypted data he came across: e.g. personal information, credit cards or credentials."
Context officials tested cloud service providers Amazon Web Services, Rackspace, VSP.net and Gigenet, and found that Rackspace and VSP.net had the vulnerability. Rackspace worked with Context for more than a year to update its system; it says it has "fully resolved" the vulnerability and notes that it knows of no customer data being breached. "We have ensured that all data is wiped effectively whenever disk space moves from one customer to the next. And we have cleaned up all fragments of remnant data," a statement from Rackspace reads. VSP.net notified Context that it had patched its system, but provided no additional details. The company did not respond to a request for comment from Network World.
VSP.net uses technology from OnApp to run its cloud platform, and officials with that company say that after they were alerted to the issue by VSP.net they created a patch that cloud service providers can choose to install, which will automatically zero out all disks after use by a customer. Carlos Rego, chief visionary officer for OnApp, says he has not tracked how many of the company's service provider customers have installed the add-on functionality.
"It looks to me like some providers are trading off security for performance," says Bharath Sridhar, director of cloud infrastructure and technology at Zoho, which has platform-as-a-service and software-as-a-service offerings. Ideally, virtual machines should not be able to access the raw underlying disks, but with that security provision in place the compute system can run more slowly, because each disk has to be overwritten and zeroed out between customers rather than simply written over as new data arrives. Virtualization and cloud technologies need to mature to allow for better performance while maintaining that security, and the lack of that so far points to the "growing pains" of the industry, Sridhar says.
He recommends customers have as much knowledge as possible about the complete life cycle of their data in the public cloud, the specific data retention and data management policies of their public cloud providers and what protections there are to segregate the data once it's in the cloud.
Rego, with OnApp, recommends that customers perform the zeroing-out process themselves, even if their provider also zeros out the disks. Customers can zero out the disks by running a simple command-line utility in Linux or Windows, he says.
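Rego does not specify which utilities he has in mind; on Linux this is commonly done with `dd` from `/dev/zero`, and on Windows with `cipher /w`. The idea can be sketched as follows, using a disk-image file as a stand-in for a real block device. The file name `scratch.img` and the helper `zero_fill` are illustrative, not a named tool:

```python
# Hedged sketch of guest-side zeroing: overwrite every byte with zeros.
# On a real Linux VM the equivalent is roughly: dd if=/dev/zero of=/dev/xvdb bs=1M
import os

def zero_fill(path, chunk_size=1024 * 1024):
    """Overwrite every byte of the file at `path` with zeros, chunk by chunk."""
    size = os.path.getsize(path)
    zeros = b"\x00" * chunk_size
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            step = min(chunk_size, remaining)
            f.write(zeros[:step])
            remaining -= step
        f.flush()
        os.fsync(f.fileno())  # force the zeros out to stable storage

# Example: wipe a small image file that previously held data.
with open("scratch.img", "wb") as f:
    f.write(b"sensitive-record" * 100)
zero_fill("scratch.img")
```

Writing in fixed-size chunks rather than one giant buffer keeps memory use constant regardless of disk size, which matters when the target is a multi-gigabyte volume rather than a test file.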
Others are not surprised by the results. "It's kind of to be expected," says J.B. O'Kane, managing principal at Vigilant, a security and risk consultancy. "It's the same problem we would encounter independent of the cloud related to data governance and what processes and due diligence are in place to protect against vulnerabilities." The situation outlined by Context, he says, is not all that different from data not being zeroed out in a managed hosting environment, a private cloud deployment, or even an old computer being thrown away. In all of these contexts, the hard drives should be wiped after each use. Apparently, he says, that just isn't yet commonplace in the cloud.
Network World staff writer Brandon Butler covers cloud computing and social media. He can be reached at [email protected] and found on Twitter at @BButlerNWW.