Microsoft pushing hard on privacy in the cloud

Whether due to clever marketing objectives or to its stated commitment to making privacy a core consideration for its products and services, there’s no denying Microsoft is emphasizing privacy across multiple dimensions. Taking center stage this week was a recommendation to Congress (articulated in a speech given January 20 at a Brookings Institution policy forum on cloud computing) that new legislation is needed on cloud computing security and privacy. Microsoft went so far as to propose a name — the Cloud Computing Advancement Act — for the new legal framework the company says is needed, as well as to advocate revisions to existing privacy legislation including the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act. The speech offered justification for explicit cloud computing regulation in the form of a survey (commissioned by Microsoft) indicating that a large majority of business leaders and consumers — even those enthusiastic about cloud computing’s potential — are concerned about data security and privacy in the cloud. Microsoft is also recommending a “truth in cloud computing” provision that would mandate more explicit disclosures by cloud service providers about the security and privacy measures they have in place. Cloud computing is currently the primary area of emphasis in Microsoft’s privacy advocacy directed at government officials and policymakers. Microsoft’s efforts illustrate one way in which private sector vendors with a stake in cloud computing are moving ahead on privacy; in contrast, federal government efforts to date have largely focused on clarifying definitions of cloud computing services and examining ways to use those services securely. Whether or not Congress decides to take Microsoft’s recommendations to heart, some additional direction to NIST to address privacy in the cloud might be reasonable.

Reminder: not everything you read on the web is accurate

In a post a few days ago meant to highlight the recent attacks on Google and many other companies as a textbook example of the advanced persistent threat, we cited zero-day exploits in Microsoft and Adobe software programs (in addition to really well-crafted phishing attacks) as evidence of the complexity and sophistication of the attacks. Not 12 hours later, we received a very polite (really) email from Adobe pointing out that security vendor iDefense had withdrawn its initial assertion that the attacks used PDF file payloads to exploit vulnerabilities in Adobe Reader, and asking that we edit the post. We did, primarily because the last thing we want to do in this forum is convey inaccurate information, and in this case the source of the information itself provided the retraction. However, in an article on the Google attacks in the January 11 issue of InformationWeek, writer Kelly Jackson Higgins quotes Mikko Hypponen of F-Secure, who claims that PDF files were sent to phishing attack victims, and when these attachments were opened they used a zero-day exploit in Adobe Reader to install a Trojan horse on the victims’ computers. F-Secure has also posted copies of subsequent phishing emails that use the attack incident itself as a subject to get recipients to open the malicious PDF attachment (the vulnerability in question has been around for about a month and was patched last week by Adobe).

The post that originally referred to sources mentioning Adobe exploits was not meant to criticize the company or its products (many of which we at SecurityArchitecture.com use every day), and the point of this post is not to suggest that Adobe was right or wrong to object to its products being associated with the Chinese attacks (although our post was written several days after Adobe published its security bulletin on the vulnerability). What this situation highlights is how hard it is to make sense of potentially conflicting information on the Web, even when you leave bloggers and Twitterers out of the mix and look to reputable security vendors and media sources.

Trustworthiness (or more specifically, perceived trustworthiness) of information is a constant theme online, whether in the context of social media or electronic equivalents of conventional news media and personal communications. The level of personalization reported with the Chinese attacks is remarkable in this regard. Even with heightened security awareness and sensitivity to phishing, spyware, and malware attack attempts, it’s not hard to imagine how these victims were compromised. These were not the shotgun-approach mass emails purporting to be from eBay or Bank of America; the attackers harvested names, contact information, and email addresses from individuals and organizations with which the victims were already familiar, and crafted fake emails using subjects and content personally relevant to the recipients. How many of us would think twice about opening a PDF attachment (not a .zip or an .exe or a .vbs, mind you) seemingly on a directly relevant topic and apparently coming from a known business or personal associate? Formal models exist to manage information flows between different levels of trust, most notably the Biba integrity model, adaptations of which are used in Microsoft Windows and Google Chrome as well as many other systems. Of course, formal integrity models like Biba basically say you shouldn’t rely on any information where the trustworthiness of those who wrote it can’t be confirmed (think Wikipedia). More practically, the fundamentals of security awareness tell us not to open files received from unknown or untrusted sources, but as these spear-phishing attacks demonstrate, that’s not always as easy as it sounds.
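For readers unfamiliar with Biba, the essence of the model is two rules: a subject may not read information at a lower integrity level (“no read down”) and may not write to objects at a higher integrity level (“no write up”). The following is a minimal sketch of those two rules in Python, with invented integer integrity levels; it illustrates the concept only, and is not a depiction of how Windows Mandatory Integrity Control or Chrome’s sandbox actually implement their adaptations.

```python
# Minimal sketch of the strict Biba integrity rules ("no read down",
# "no write up") using simple integer integrity levels. Illustrative only;
# real systems implement richer variants of the model.
from enum import IntEnum

class IntegrityLevel(IntEnum):
    LOW = 1      # e.g., content downloaded from the Internet
    MEDIUM = 2   # e.g., ordinary user processes and files
    HIGH = 3     # e.g., operating system processes

def can_read(subject: IntegrityLevel, obj: IntegrityLevel) -> bool:
    """Simple integrity property: a subject may only read objects at its
    own integrity level or higher ("no read down")."""
    return obj >= subject

def can_write(subject: IntegrityLevel, obj: IntegrityLevel) -> bool:
    """Integrity *-property: a subject may only write to objects at its
    own integrity level or lower ("no write up")."""
    return obj <= subject

# A process handling an untrusted attachment runs at LOW, so even if it is
# compromised it cannot modify MEDIUM or HIGH objects.
assert can_write(IntegrityLevel.LOW, IntegrityLevel.MEDIUM) is False
assert can_read(IntegrityLevel.MEDIUM, IntegrityLevel.HIGH) is True
```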

Not everything Google does is related to China dispute

Without questioning the severity or significance of the Chinese attacks on Google and other companies, it is hard to miss that the huge attention focused on this incident is influencing all coverage of Google, whether or not the topic in question has anything to do with the attacks.

It still seems a bit of a stretch to attribute Google’s stated intention to stop censoring search results on Google.cn solely to the recent attacks. Perhaps the hacks were just the straw that broke the camel’s back, but as long as three years ago Google’s executives had publicly questioned the wisdom of the company’s decision to support the content censorship requirements demanded by the Chinese government when Google first entered the China market. The conventional wisdom on Google’s decision to end its censorship program is generally positive, apparently even if it means Google will have to cease business operations in China. There is a vocal minority, however, suggesting that Google’s decision is driven more by conventional business factors (market share, growth potential, etc.) than by moral or ethical principles.

The situation has so dominated coverage of Google and other affected companies over the last several days that, in reporting other significant actions or announcements by Google, many in the IT press can’t seem to help drawing associations to the attacks even where none exist. A news summary distributed by IDG News Service yesterday is a good example: in the summary of an article about Google’s plan to propose that the European Union’s Article 29 Working Party create a security and privacy panel, the China attacks were mentioned as a driver for the proposal:

Google says that the recent hack of its Chinese operation shows why it needs to retain user search data and will this week call on the Article 29 Working Party to establish a privacy and security panel to encourage productive dialogue on the proper use and protection of such data, PCWorld reports. “You can’t discuss privacy in a vacuum,” said Google global privacy counsel Peter Fleischer. Google retains search users’ full IP addresses for nine months. “We find it incomprehensible that a company would throw away useful data when holding it poses no privacy threat,” Fleischer said.

The above version of the summary was included in the January 20 Daily Dashboard of the International Association of Privacy Professionals. The following day’s edition included a note that IDG News Service had modified the story because, as IDG explained it, “Due to a misunderstanding with a source, the story posted linked Google’s stance on retaining search data with unrelated attacks on its corporate infrastructure.” It’s hard to fault anyone in the trade press for having the China attacks on their minds whenever they hear “Google,” but perhaps members of the media would do well to remember the maxim from statistics: correlation ≠ causation.

IronClad’s “PC on a stick” could be a benefit or a threat

Defense contracting giant Lockheed Martin announced the general availability of its IronClad™ secure USB drive, a fully self-contained PC containing operating system, applications, and data all within a flash drive form factor that offers the ultimate in portability. This is the latest innovative use of the IronKey secure USB device, which to date has been positioned in the market largely as a highly secure portable storage device. The IronClad “PC on a stick” is designed to let a mobile user plug into any client computer platform to leverage the I/O and connectivity of the host while bypassing the host’s hard drive. Lockheed suggests that this optimizes mobile connectivity by turning any borrowed PC, workstation, or computer kiosk into a secure personal computing platform. Because no access to the host hard drive is needed, the company also claims that no evidence of IronClad’s use will be left behind.

To be clear, Lockheed does specify the minimum requirements necessary for IronClad to use a host computer, notably including a BIOS that supports booting from USB, and presumably organizations that have implemented USB device blocking or port restrictions will not be at risk of IronClad users gaining unauthorized access. However, to the extent that USB drives already present a security risk as a mechanism for data theft, being able to carry a fully functioning PC on a flash drive (instead of just storage capacity) raises the bar substantially in terms of potentially needing to guard against the use of these devices. IronClad appears targeted at enterprise users as an alternative to some routine laptop uses, and includes remote device management and security administration functions such as remote destruction of flash drive contents. There is no reason to assume that an IronClad user would be any more able to gain unauthorized access to a network using a USB device than someone with a laptop — access to a connected host computer is still required, so the only practical difference with IronClad is that you appropriate a USB port instead of borrowing a network cable. It is less readily apparent, however, whether an individual user of the device might be able to configure it to help gain access to “guest” network environments. The product marketing information most directly emphasizes using IronClad in a way that turns a public or shared computer into a secure virtual desktop, but the company’s emphasis on “leaving no trace” should sound attractive to attackers who value stealthiness. Presumably the device’s built-in remote management features and its use of the host’s physical network connectivity would still produce the sort of data stream that an IDS, event log monitor, or SIEM tool would be able to identify. In this context the potential unauthorized use of an IronClad device is no different as a security event than any conventional use of a third-party client computer, and should be monitored and guarded against in the same way.
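To make the monitoring point a bit more concrete, the sketch below shows the kind of simple check a log monitor or SIEM correlation rule might perform, flagging removable-media connection events from devices that are not on an approved list. The log format, field names, and device identifier used here are hypothetical and not drawn from any particular product; the point is only that IronClad use on a monitored host should be as visible as any other USB mass storage activity.

```python
# Hypothetical sketch: flag USB mass storage connections from devices that
# are not on an organization's approved list. The CSV log format and its
# field names (timestamp, host, event_type, device_id) are invented for
# illustration, not taken from any real SIEM or endpoint product.
import csv
from collections import defaultdict

APPROVED_DEVICE_IDS = {"USB\\VID_0781&PID_5567"}  # example approved device ID

def find_unapproved_usb_events(log_path: str) -> dict:
    """Return connection events, grouped by host, whose device ID is not approved."""
    suspects = defaultdict(list)
    with open(log_path, newline="") as f:
        for event in csv.DictReader(f):
            if (event["event_type"] == "usb_mass_storage_connected"
                    and event["device_id"] not in APPROVED_DEVICE_IDS):
                suspects[event["host"]].append(event)
    return suspects

if __name__ == "__main__":
    for host, events in find_unapproved_usb_events("usb_events.csv").items():
        print(f"{host}: {len(events)} unapproved USB storage connection(s)")
```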

Healthcare providers missing the mark on risk assessments

As the comment period continues for the recently published proposed rules and draft certification criteria and standards associated with “meaningful use” of electronic health records, it appears that a large proportion of healthcare providers are not prepared to comply with the one meaningful use measure related to security and privacy that has been proposed as a requirement for 2011. In comments reported last week, members of the Health IT Policy Committee working with the Office of the National Coordinator at HHS cited a survey that found 48 percent of responding health providers do not perform risk assessments. The stage 1 (2011) measure associated with the health outcomes policy priority for privacy and security (“Ensure adequate privacy and security protections for personal health information”) found in the Notice of Proposed Rulemaking says simply that EHR users must conduct or review a security risk analysis and implement security updates as necessary. This and other measures demonstrating meaningful use must be met by providers to receive incentive payments for adopting electronic health records.

At first glance the security and privacy bar appears to have been set quite low (there is a separate list of security functionality that a certified EHR system must be able to perform), especially since risk analysis is something covered entities like providers are already required to do under HIPAA rules. Among the requirements of the HIPAA Security Rule — which went into effect in 2003 and with which covered entities of all sizes have been required to comply for almost four years — is an Administrative Safeguard for risk analysis: “Conduct an accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information held by the covered entity” (45 CFR 164.308(a)(1)(ii)(A)). Without getting to the heart of why so many providers have yet to implement the practices needed to meet this HIPAA requirement, practically speaking this means they now have another 18 months or so to get their security houses in order. There has been some discussion among ONC’s advisory committees as to what specifically a compliant risk analysis must entail, and there is not yet a corresponding standard to provide that specificity. From a government perspective, agencies likely need look no further than NIST and its Special Publication 800-66, “An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule,” which addresses conducting risk assessments using a process adapted from NIST’s own risk assessment process documented in Special Publication 800-30, “Risk Management Guide for Information Technology Systems.” This guidance might be less comprehensive than risk assessment practices found in other security management or IT governance frameworks, particularly since 800-66 constrains the risk assessment to consider only the risks of non-compliance with the security standards’ general rules for electronic protected health information (PHI). Because the financial penalties for HIPAA Security Rule non-compliance are relatively minor, and the criminal penalties are rarely sought, the most relevant risks for a private-sector covered entity might be business consequences like negative publicity or the loss of customers.
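For providers unsure what such a risk analysis looks like in practice, Special Publication 800-30 describes determining risk from qualitative ratings of threat likelihood and impact. The sketch below illustrates that kind of likelihood-by-impact determination; the numeric scale, thresholds, and sample threat entries are invented for illustration and are not prescribed by the NIST guidance.

```python
# Illustrative sketch of a qualitative likelihood-by-impact risk
# determination in the spirit of NIST SP 800-30. The rating scale,
# thresholds, and sample entries below are hypothetical examples,
# not values prescribed by the guidance.
RATINGS = {"low": 1, "moderate": 2, "high": 3}

def risk_level(likelihood: str, impact: str) -> str:
    """Combine qualitative likelihood and impact ratings into a risk level."""
    score = RATINGS[likelihood] * RATINGS[impact]
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Hypothetical entries a provider might record for electronic PHI:
risk_register = [
    ("lost unencrypted laptop containing PHI", "moderate", "high"),
    ("terminated employee account not disabled", "moderate", "moderate"),
    ("flood in the server room hosting the EHR system", "low", "high"),
]

for threat, likelihood, impact in risk_register:
    print(f"{threat}: {risk_level(likelihood, impact)} risk")
```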

The point is, healthcare providers and other covered entities have access to many available approaches, methodologies, and process standards for risk analysis, yet to date many do not appear to be using any of them. To avoid falling short on meaningful use, these organizations need to set in motion the process of changing their security program operations to make routine risk analysis an integral component. If the HIPAA Security Rule requirements are any precedent, the specifics of what a risk analysis must include may not be enumerated in exhaustive detail, so following just about any accepted risk analysis standard (NIST, ISO 27005, COBIT, ITIL, or a comparable approach) has a good chance of being compliant.