As cloud computing gains momentum, so does government attention to privacy and security

While still marked by more hype than tangible success, cloud computing remains an area widely viewed as inevitable in both commercial and public sector markets. Whether you accept the predictions of cloud service vendors or favor a more pragmatic take on this evolving market, the focus of the discussion has become “when” rather than “if” large-scale use of cloud services and technology will become pervasive. One of the factors reining in some of the enthusiasm about the cloud is concern over security and privacy, particularly the protection of data moved to the cloud. Against this backdrop there are calls from government leaders in both the United States and Europe to take proactive action to establish security and privacy requirements for cloud computing, and possibly even to enact new legislation. In the U.S., the government-led Cloud Computing Advisory Council (sort of a re-focused IT Infrastructure Line of Business) has developed a cloud computing framework and this week announced its Federal Risk and Authorization Management Program (FedRAMP), which will develop a common set of security requirements in an effort to speed up the pace of adoption of cloud computing by federal agencies. This is just the most recent announcement in a slew of workgroups, initiatives, and government and public/private collaborations on cloud computing, which neither separately nor collectively yet cover all aspects of what the government thinks it needs.

In Europe, the biggest focus area is data security and privacy, to such a degree that some are now calling for a global data protection law. It remains to be seen whether privacy and security standards and requirements can be harmonized enough to make such an ambitious proposal a reality, but as industry groups such as the Cloud Security Alliance routinely point out, as long as the government approach to the cloud remains unclear (especially what the regulatory environment will look like), neither cloud service providers, technology vendors, nor government organizations (nor even commercial enterprises) will be comfortable moving forward aggressively with cloud computing.

Better access restrictions needed for medical information

A fair amount of attention is appropriately being focused on the need to maintain appropriate access controls on electronic health record systems and other sources containing personal health information. Among the HIPAA privacy provisions strengthened by the Health Information Technology for Economic and Clinical Health (HITECH) Act portion of the Recovery Act is the requirement that covered entities be able to provide an “accounting of disclosures” of personal health information to patients who request one. Prior to HITECH, the rules for recording disclosures included an exception for disclosures associated with routine uses such as treatment and payment, meaning for instance that a provider didn’t have to record the fact that a patient’s health record was looked at in order to make a diagnosis or evaluate a treatment option, or to work out reimbursement details with an insurance provider covering the patient’s care. HITECH removed these exceptions, so an accounting of disclosures must now include disclosures for all purposes. There remains some concern, however, that unless comprehensive record logging is used, instances where a record is merely viewed, rather than used in some type of transaction, may not be recorded. A big driver of concerns about incomplete tracking of accesses to patient data is the fear that personal information will be viewed by individuals other than the practitioners, billing administrators, or others who have a valid reason for accessing the records. Public opinion polls cited by health privacy advocates suggest that a majority of Americans are not confident that their health records will remain confidential if they are stored online.
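To make the logging concern concrete, here is a minimal sketch of an audit trail that records read-only views alongside transactional uses, so that a complete accounting of disclosures can be assembled later. The class, method names, and fields are our own invention for illustration, not part of HITECH or any actual EHR product:

```python
import datetime
import json

# Hypothetical sketch only: names are invented, not from any real EHR system.
class AuditLog:
    def __init__(self, path):
        self.path = path

    def record_access(self, user_id, patient_id, purpose, action):
        """Append one access event; read-only views are logged exactly
        like transactional uses."""
        entry = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user_id,
            "patient": patient_id,
            "purpose": purpose,  # e.g. "treatment", "payment", "operations"
            "action": action,
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

    def disclosures_for(self, patient_id):
        """Collect every logged access to one patient's record: the raw
        material for an accounting of disclosures."""
        entries = []
        with open(self.path) as f:
            for line in f:
                entry = json.loads(line)
                if entry["patient"] == patient_id:
                    entries.append(entry)
        return entries

log = AuditLog("ehr_audit.log")
log.record_access("dr_smith", "patient-42", "treatment", "view")    # merely viewed
log.record_access("billing_01", "patient-42", "payment", "update")  # transactional use
print(log.disclosures_for("patient-42"))
```

The point of the sketch is simply that the “view” event is written to the same trail as the “update” event; systems that log only transactions would miss it.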

What is lost in much of this discussion is that the problem of inappropriate access to personal health information is not limited to electronic forms of record keeping; it is just as relevant to paper-based records. BBC News reported this week the results of an inquiry into the British National Health Service (NHS) made by the privacy and civil liberties advocacy group Big Brother Watch, which suggested that more than 100,000 non-medical staff currently have access to personal medical records stored by NHS trusts in the U.K. The records involved include those in both paper and electronic form, but the British Department of Health implied in its response to Big Brother Watch’s claims that the growing use of EHR systems will enable stricter access controls. It is a plausible argument, depending on the record-keeping environment in question, that by digitizing health records and applying access controls to the electronic systems, data can be better protected than if it were kept in paper form. For records maintained and used only in local provider environments, electronic access controls might be preferable to the physical security mechanisms used to secure paper records. However, once electronic records are put online or made available for health information exchange, the population of individuals potentially gaining access to the data in EHRs will far exceed the number of employees and other individuals who might feasibly gain physical access to paper records.
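The Department of Health’s argument boils down to the kind of policy check an electronic system can enforce and a filing cabinet cannot. A minimal sketch, with invented roles and a hypothetical policy (not the NHS’s actual access model):

```python
# Illustrative only: the roles and policy below are invented stand-ins.
ALLOWED_ROLES = {"physician", "nurse", "billing"}

def can_view_record(user_role: str, has_care_relationship: bool) -> bool:
    """Permit access only to clinical or billing roles with a documented
    relationship to the patient -- a check paper records cannot enforce."""
    return user_role in ALLOWED_ROLES and has_care_relationship

assert can_view_record("physician", True)
assert not can_view_record("it_support", True)   # non-medical staff blocked
assert not can_view_record("physician", False)   # no relationship to the patient
```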

FTC settlement with Dave & Buster’s shows broad range of security failures

In a notice published yesterday, the Federal Trade Commission (FTC) announced the terms of a settlement to which entertainment chain Dave & Buster’s agreed, stemming from FTC charges that the company failed to adequately protect customer credit card information, allowing hackers to compromise the credit card information of over 130,000 customers and resulting in hundreds of thousands of dollars in fraudulent charges. The wording of the settlement statement faults Dave & Buster’s for its alleged failure to make use of “readily available” security measures to protect its network from unauthorized access or to take “reasonable steps” to secure personal information collected from customers. These charges are the latest in a series of more than two dozen cases involving faulty data security practices, in which the administrative complaints lodged by the FTC provide relevant examples of the legal principle of “due care.” We touched earlier this week on the concepts of due care and legal defensibility, and FTC actions such as the one against Dave & Buster’s follow the nearly 80-year-old federal legal precedent established by the decision in the T.J. Hooper case (60 F.2d 737 (1932)), specifically that failure to use available protective measures translates into legal liability for any damages incurred.

Based on the FTC’s allegations and the fact that the compromised data was credit card information, it is entirely likely that Dave & Buster’s was also in violation of the Payment Card Industry Data Security Standard (PCI DSS), which includes specific requirements for cardholder data protection that must be followed by merchants accepting credit card transactions. The PCI Security Standards Council maintains the requirements framework for the DSS and other security standards, while compliance with and enforcement of the standards is typically handled by the payment card brands (Visa, MasterCard, Discover, American Express, etc.). Compliance (or the lack thereof) with PCI DSS or other security standards or regulations is outside the scope of FTC jurisdiction, so it remains to be seen whether Dave & Buster’s will face any further sanctions. Under the terms of the settlement agreement, the company agreed not only to establish and maintain a security program to protect personal information, but also to biennial independent security audits for 10 years to monitor compliance with the settlement.
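For a sense of what “readily available” cardholder data protections look like in practice, here is a simplified sketch of two PCI DSS-style measures: masking the primary account number (PAN) for display, and rendering it unreadable at rest with a keyed one-way hash. The key handling is deliberately simplified; a real deployment would use proper key management rather than a hardcoded constant:

```python
import hashlib
import hmac

# Illustrative only: simplified stand-ins for PCI DSS-style protections.
SECRET_KEY = b"example-key-not-for-production"  # placeholder, not real key management

def mask_pan(pan: str) -> str:
    """Mask the PAN for display, keeping only the last four digits."""
    return "*" * (len(pan) - 4) + pan[-4:]

def hash_pan(pan: str) -> str:
    """Render the PAN unreadable at rest with a keyed one-way hash."""
    return hmac.new(SECRET_KEY, pan.encode(), hashlib.sha256).hexdigest()

pan = "4111111111111111"   # a standard test card number
print(mask_pan(pan))       # ************1111
print(hash_pan(pan))       # store this, never the raw PAN
```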

Federal information security focus shifting to next-generation FISMA, continuous monitoring

While we have seen perennial efforts in Congress to revise or replace the Federal Information Security Management Act (FISMA) and shift government agencies’ security focus away from compliance efforts and reporting mountains of paperwork on their information systems, momentum appears to be building in both the legislative and executive branches to define the next generation of federal information security. The common theme surfacing out of all this activity is the government’s desire to move to a model of “continuous monitoring” as an improvement over the triennial point-in-time security evaluations that characterize federal agency security programs operating under FISMA.

Last month NIST released the final version of its revised Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach, the latest product of a Joint Task Force initiative coordinated by NIST (representing civilian agencies) and involving the collaboration of the Department of Defense, the intelligence community, and the Committee on National Security Systems (CNSS). The change in title alone is noteworthy (as originally published in 2004, 800-37 was called Guide for the Security Certification and Accreditation of Federal Information Systems), as it is the largely documentation-based C&A process that has lost favor, despite the heavy emphasis on systems accreditation in annual FISMA reporting. One of the fundamental changes in the revised 800-37 is the emphasis on continuous monitoring, which has always been an aspect of the C&A process but is now the subject of a dedicated appendix describing monitoring strategy, selection of security controls for monitoring, and integration of continuous monitoring with security status reporting and overall risk management activities. NIST Computer Security Division director Ron Ross provided an overview of this and other current and planned changes to security guidance and recommended practices at a March 22 meeting of the ACT-IAC Information Security and Privacy SIG.
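To illustrate what distinguishes continuous monitoring from a point-in-time assessment, here is a toy sketch in which each selected control gets an automated check that is re-run on a recurring schedule. The control names come from the NIST SP 800-53 catalog, but the checks themselves are invented stand-ins, not anything prescribed by 800-37:

```python
# Toy contrast between continuous monitoring and a triennial assessment.

def patch_level_current() -> bool:
    return True   # stand-in: query the patch management system

def audit_logging_enabled() -> bool:
    return True   # stand-in: query the logging infrastructure

MONITORED_CONTROLS = {
    "SI-2 Flaw Remediation": patch_level_current,
    "AU-2 Auditable Events": audit_logging_enabled,
}

def monitoring_cycle() -> dict:
    """One automated pass over the selected controls, meant to be run on
    a recurring schedule rather than once every three years."""
    return {name: check() for name, check in MONITORED_CONTROLS.items()}

if __name__ == "__main__":
    # Schedule this daily (cron, a task queue, etc.) and feed the results
    # into security status reporting.
    print(monitoring_cycle())
```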

For its part, OMB released the FY2009 FISMA Report to Congress, which provides the customary annual summary of federal agencies’ aggregate progress on cybersecurity, security incidents, security metrics, and privacy performance. The forward-looking section of the report spotlights plans to implement new security metrics for 2010 intended to provide real-time indications of performance and to improve situational awareness among agencies. OMB is also focusing on several key administration initiatives with an eye to their impact on security, including transparency and Open Government, health IT, and cloud computing. Federal CIO Vivek Kundra highlighted the same theme of shifting emphasis under FISMA toward continuous monitoring in a radio interview this week, and reiterated his key points while testifying before the House Committee on Oversight and Government Reform’s Subcommittee on Government Management, Organization and Procurement at a March 24 hearing on “Federal Information Security: Current Challenges and Future Policy Considerations.” Others testifying included State Department CISO John Streufert, whose approach to security management beyond the requirements of FISMA is regularly held up as an example of where government agencies need to go, and several individuals who have been active in the development of the Consensus Audit Guidelines (CAG) and its 20 Critical Security Controls. The consensus at the hearing seemed to be that current government security laws are insufficient, and that FISMA in particular is due for revision.

Separately, both the House and Senate moved forward with draft information security legislation. The revised version of the Senate’s Cybersecurity Act of 2010 (S.773) was unanimously approved by the Senate Commerce, Science and Transportation Committee on Wednesday, while in the House, Rep. Diane Watson of California introduced the Federal Information Security Amendments Act of 2010 (H.R. 4900). The agency responsibilities enumerated in the House bill lead with continuous monitoring, penetration testing, and risk-based vulnerability mitigation, as part of information security programs that would be overseen and approved by the Director of the National Office for Cyberspace, a position created by another provision of the bill and filled by a Presidential appointee subject to Senate confirmation.

How much security is enough, and is the answer the same in a courtroom?

One of the recurring questions in information security management is how much security is “enough”? For organizations that have adopted risk-based approaches to information assurance, the level of security protection they put in place is directly correlated to the value of the assets the measures are intended to protect, and to the anticipated impact (loss) to the organization if those assets are compromised. That’s all well and good from a management perspective, but the right risk-based answer may not be the right legal answer in the sort of highly publicized data breaches, cyber attacks, and other security events that lead to losses not just for the organizations that suffer these incidents, but also for their customers, partners, or other stakeholders. If an organization suffers a breach that puts its customers at risk, what does the organization have to do to demonstrate it had appropriate security measures in place, and thereby minimize its exposure to tort liability?
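One common way to make that correlation between protection and anticipated loss concrete is annualized loss expectancy (ALE). The sketch below uses invented figures purely for illustration:

```python
# Risk-based sizing of security spending via annualized loss expectancy
# (ALE = SLE x ARO). All figures below are invented for illustration.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Expected loss per year from one threat scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

baseline = ale(500_000, 0.10)       # $50,000/year expected loss, no control
with_control = ale(500_000, 0.02)   # $10,000/year with the control in place
control_cost = 25_000               # annual cost of the control

# A control is economically justified when the loss it prevents exceeds its cost.
print(baseline - with_control > control_cost)   # True: $40,000 > $25,000
```

Whether that arithmetic would also satisfy a court is the separate question taken up below.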

One answer to this question lies in the legal principle of due care (sometimes referred to as “reasonable care”), which is the effort a reasonable party would take to prevent harm, and which is a core tenet of tort law. The classic legal precedent for the standard of due care is the U.S. Appellate Court ruling from 1932 in the T.J. Hooper case, which held the Eastern Transportation Company liable for the loss of cargo being transported on a barge towed by the Hooper (a tugboat), because the crew of the Hooper failed to use a radio receiver that would have allowed them to hear locally broadcast weather reports warning of unfavorable conditions. The court ruled that the loss “was a direct consequence” of the failure to use available safety technology, even though at the time the use of such radios was far from pervasive. Bringing this precedent forward to the modern computing age, the standard of due care means that if an organization suffers a loss, and the means to prevent the loss were available, the organization can be held liable for its failure to use those protective measures.

So what’s clear from a legal perspective is that organizations have to make appropriate efforts to secure their assets from harm. But once again, how much is sufficient to meet the standard of due care? We have no conclusive answer to this question, but we were very pleased to see a discussion of the “legal defensibility doctrine” from Ben Tomhave, which nicely integrates the related ideas of legal defensibility, reasonableness of security efforts, and practical acceptance of the inevitability of security incidents. It also picks up on a theme expressed by others that conventional risk management (at least as commonly practiced) may be insufficient to arrive at appropriate levels of security, and may therefore leave organizations more legally vulnerable than they would like to be.