NIST answers to questions on continuous monitoring suggest no drastic change in approach

In the wake of the release of its updated Special Publication 800-37, Guide for Applying the Risk Management Framework to Federal Information Systems, which among other things calls for federal agencies to continuously monitor the security controls associated with their information systems, the Computer Security Division of the National Institute of Standards and Technology (NIST) today published a set of frequently asked questions (and answers) on continuous monitoring. In contrast to some initial interpretations of pending changes to federal certification and accreditation processes, this guidance makes quite clear that NIST envisions continuous monitoring as an additional component of the security program procedures followed to authorize systems, not as a substitute for them. NIST appears to be positioning continuous monitoring simply as an additional, and very valuable, source of information for agencies making risk-based decisions about the security of their information systems. This positioning is consistent with language in a memorandum from OMB distributed to all department and agency heads in April, as is the point, made in both documents, that by performing continuous monitoring agencies satisfy the periodic testing and evaluation requirement of the Federal Information Security Management Act (FISMA).

The consideration of continuous monitoring as an additive element to existing federal information security program practices inevitably raises the question of the agency resources needed to comply with expanded obligations. Arguing for the record that conventional certification and accreditation practices are expensive to follow and provide little value in terms of actually securing agency systems and environments, many federal agency officials have questioned the economic wisdom of continuing to authorize their systems using existing methods and approaches. While the Department of State has, so far, continued to conduct security authorization activities and produce accreditation package documentation in parallel with its relatively new automated risk-scoring approach to security posture assessment, other agencies appear to believe that current compliance approaches mandated under FISMA (and OMB Circular A-130) will be deprecated in favor of some other, yet-to-be-determined mechanism, whether by executive agency action or by act of Congress. These agencies, notably including NASA, have sought to reallocate resources away from authorization tasks in favor of standing up continuous monitoring capabilities.

While it may be hard to see continuous monitoring as a negative, even if it does not represent a real shift away from compliance-driven processes, making the newer requirements simply additive, rather than revisionary, seems a lost opportunity to pursue real enhancements in agency security postures. OMB's current emphasis is on moving the government toward more streamlined and more frequent security reporting, via the CyberScope online reporting solution. For its initial rollout, and perhaps until the information being reported is revised significantly, changing the submission mechanism and frequency of reporting doesn't get to the heart of the problem in federal security practices, which is too great a reliance on compliance exercises rather than real situational awareness. If agency CISOs and Congress all agree that compliance with security guidance does not equal actual improved security, then more frequent compliance checks cannot be the answer. The potential remains, depending on how continuous monitoring is implemented among agencies, for agencies to settle on a more appropriate set of security metrics than those currently required for reporting under FISMA. If, however, there is no change in the level of documentation and procedural requirements associated with system authorization, then many agencies may not have the resources within their security programs to make a genuine and sustained effort on continuous monitoring.

Trust by whom, in whom, and in what context?

It’s fairly common to see trust mentioned as needed, desired, or required to achieve a given outcome. For example, in the case of health information technology adoption, trust is seen as an essential element for widespread adoption of health IT to succeed. Looking at health IT also provides a good illustration of how important it is to be specific when we talk about trust — it’s not sufficient to say “there has to be trust,” without identifying who or what is to be trusted, by whom, and in what context. Many scholarly conceptualizations of trust define it as a three-part entity, comprising a trustor, a trustee, and a relationship between the two that specifies the scope covered by the trust in question. The attributes of a trusting relationship are important not just to provide clarity, but also to help determine the appropriate basis of trust (that is, what is needed to engender the trust being sought) given the parties to the relationship and the relationship itself.
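The three-part conception of trust described above can be made concrete with a small sketch. This is purely illustrative; the class and field names are mine, not drawn from any standard or from the scholarly literature cited:

```python
from dataclasses import dataclass

# Hypothetical model of the three-part conception of trust:
# a trustor, a trustee, and a scope bounding the relationship.
@dataclass(frozen=True)
class TrustRelationship:
    trustor: str   # the party extending trust
    trustee: str   # the party being trusted
    scope: str     # the context the trust covers

# The same parties can appear in distinct relationships, each
# limited to its own scope -- which is why "there has to be trust"
# is underspecified without all three attributes.
patient_provider = TrustRelationship(
    trustor="patient",
    trustee="provider",
    scope="appropriate protection of personal health information",
)
provider_exchange = TrustRelationship(
    trustor="provider",
    trustee="other providers in a health information exchange",
    scope="use of patient data only for legitimate, authorized purposes",
)

# Same trustee role, different scope: trust in data integrity
# for clinical decision making is a separate relationship.
assert patient_provider.scope != provider_exchange.scope
```

Making the trustor, trustee, and scope explicit in this way is what allows the basis of trust (what is needed to engender it) to be evaluated per relationship rather than in the abstract.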

In the health IT environment, there is not a single discussion of trust, but instead the need for trust is reiterated at multiple levels. Distinct yet overlapping trust relationships that exist (or more specifically, need to exist) were highlighted in May at a panel discussion on e-health at a CIO Symposium held at MIT’s Sloan School of Management. That discussion pointed to trust by individuals (patients) in their doctors or other healthcare providers that their personal health information is being protected appropriately, and trust among providers involved in exchanging health information that patient data is being used for legitimate, authorized purposes. There is yet another type of trust involved for providers seeking to use health information in support of clinical care or medical decision making, which is trust that the data is accurate and its integrity is intact, so that it is actually useful as an input to decision making.

Part of the challenge in establishing the trust that everyone seems to agree is needed for health IT adoption is that the interests of the parties involved are not always aligned, and in some cases those interests may be directly in conflict. With health IT, a further complication arises from the fact that the actions and provisions that may do the most to engender trust among individuals in health IT (such as use of electronic health records and data sharing through participation in health information exchange) also happen to constrain the achievement of the intended objectives and policy outcomes sought through health IT initiatives. Perhaps the group most clearly pulled in multiple directions at once is providers, who have a critical business interest (and professional obligation) in maintaining strong relationships with their patients, but who are also directly impacted by some of the intended results of health IT adoption, including improving quality of care, reducing costs, and supporting public health promotion and consumer safety. Any efforts made by the government or industry to encourage adoption of health IT have to consider the health care system as a whole and seek ways to make the system overall more trustworthy; otherwise different stakeholders will be hard pressed to reconcile the often competing interests that define their roles.

Update on Google Street View data collection: Congress asking for answers

While investigations into many aspects of Google's wireless data collection practices are ongoing in the U.S. and in several European countries (most recently including the Czech Republic), the first major legal action against the company has come as part of a potential class action lawsuit in the western United States. While at least four lawsuits have been filed already, early progress in a case in federal district court in Oregon prompted the judge to order Google not to delete any of the data it collected during the operation of the program, and to turn over copies of the data to the court. Following the initiation of a similar suit in Washington, D.C., three members of Congress sent a letter on May 26 to Google CEO Eric Schmidt asking for information about the nature and extent of the data the company collected, and about what Google intended to do with the data. Google has of course said publicly that it never intended to gather the data in the first place, and had no plans for it, but it would seem Reps. Henry Waxman, Joe Barton, and Ed Markey — all of whom serve on the House Committee on Energy and Commerce, which Waxman chairs — remain unconvinced by Google's statements to date on the matter. The letter from the Committee members mentions at least three U.S. federal laws that Google's actions may have violated, and asks for responses to a dozen specific questions about the Street View program and the data Google collected, requesting a response by June 7.

Former acting cybersecurity czar provides legislative summary of bills in 111th Congress

Former acting cybersecurity czar Melissa Hathaway, who in early 2009 led the Obama administration's 60-day review of cybersecurity policy and who is now a senior advisor at the Harvard Kennedy School's Belfer Center for Science and International Affairs, this month made public an overview of more than three dozen separate pieces of legislation pending at various stages in both houses of Congress. The report provides brief highlights of the major cybersecurity implications of each of the bills, and identifies where each of them fits into one or more of seven categories of security functions:

  1. Organizational Responsibility
  2. Compliance and Accountability
  3. Data Accountability, Personal Data Privacy, Data Breach Handling and Identity Theft
  4. Cybersecurity Education, R&D and Grants
  5. Critical Electric-Power Infrastructure Protection and Vulnerability Analysis
  6. International Cooperation and Addressing Cybercrime
  7. Procurement, Acquisition, Supply Chain Integrity

Among the 41 pieces of legislation included in the review, Hathaway calls out nine in particular that bear watching, notably including the U.S. Information and Communications Enhancement Act sponsored by Sen. Tom Carper (S.921) and the Cybersecurity Act sponsored by Sens. Jay Rockefeller and Olympia Snowe (S.773), and bills in both the House and Senate on data breach notifications and data accountability. One of the most active recent House bills, the Federal Information Security Amendments Act (H.R.4900), is not one on the “legislation to watch” list, although it is included within the scope of Hathaway’s review.

Hathaway concludes her report (really structured more as a briefing) with three recommendations that might be appropriately directed to Congress or the administration’s cybersecurity coordinator for further action:

  • Need Congressional leadership to set the legislative priorities for cybersecurity
  • Need to clearly articulate the direction for cybersecurity private-public engagement and responsibilities
  • Need broad-based awareness and education campaign for the U.S. population and other like-minded nations

The third of these is an area being addressed in several of the draft bills under consideration, including the House Cybersecurity Enhancements Act legislation (H.R.4061), which among other provisions would direct additional funding to the National Science Foundation to pay for scholarships for students in exchange for two to three years of public service working in cybersecurity. The second recommendation is a general point of contention between government and industry and reflects an area that may or may not be explicitly resolved in whatever federal cybersecurity legislation actually gets enacted. The relevance of the first recommendation is amplified by the sheer volume of potential legislative actions under consideration; some agreement on priorities might facilitate the consolidation of some of these 40+ bills into a more manageable number that might also have a greater chance of passage. It seems there are enough competing priorities in Congress on numerous other fronts to constrain real progress on cybersecurity enhancements or reform, and this situation is only made worse with so many pieces of proposed legislation, many of which cover similar ground.

Hard to believe Google’s wi-fi data capture was accidental, may or may not be illegal

With momentum building in many countries for investigations into potential privacy violations and other possible transgressions by Google related to its practice of capturing unencrypted wireless network traffic as a part of its Google Street View program, there are two aspects of particular interest here: first, how credible is Google's claim to have gathered and stored the data by mistake, and second, is the interception of such traffic illegal? The company simultaneously made itself look bad and perhaps provided evidence for its claim that the Street View cars weren't intentionally gathering this data, when it posted a public statement on April 27 that said in part, "Google does not collect or store payload data" and then two weeks later corrected itself, saying it had in fact been collecting and storing that data all along. The company claims that the practice resulted from the mistaken inclusion of software code in the Street View program that captured not only wireless access point SSIDs and MAC addresses, but also any data transmitted in the clear by the access points it identified. Data protection authorities in some European countries have questioned Google's explanation, and the Irish government went so far as to demand that Google delete all data it had collected in that country (the company complied with the demand). Whether or not you agree with the general justification Google offers for wanting to gather the location of active wireless access points, it's hard to imagine that any leading online services company wouldn't immediately grasp the privacy sensitivity of capturing unencrypted wireless traffic from private homes and businesses.
If, as Google says, the Street View project leaders did not want payload data, then even if leaving in the software code to capture that data was an oversight, you would think that someone might have noticed all the additional data coming back with the Street View cars, yet it appears this practice went on for a year or more. Google says it "grounded" its fleet as soon as it became aware of the problem — it's the failure to become aware for such a protracted period of time that is hard to fathom.
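The distinction at the heart of Google's explanation can be sketched in a few lines. This is a hypothetical illustration (the frame dictionaries and field names are mine, not Google's actual code): geolocating access points requires only management-frame metadata such as the SSID and MAC/BSSID, while a filter that also accepts unencrypted data frames sweeps in payload content:

```python
# Hypothetical sketch of the two collection behaviors at issue.
# Mapping access-point locations needs only beacon metadata;
# the include_data_frames branch is what captures in-the-clear payloads.

def collect(frames, include_data_frames=False):
    """Return the records a war-driving-style collector would keep."""
    records = []
    for f in frames:
        if f["type"] == "beacon":
            # Metadata sufficient for geolocating the access point.
            records.append({"ssid": f["ssid"], "bssid": f["bssid"]})
        elif (f["type"] == "data"
              and not f["encrypted"]
              and include_data_frames):
            # Unencrypted user traffic: the privacy-sensitive capture.
            records.append({"bssid": f["bssid"], "payload": f["payload"]})
    return records

frames = [
    {"type": "beacon", "ssid": "HomeNet", "bssid": "00:11:22:33:44:55"},
    {"type": "data", "encrypted": False, "bssid": "00:11:22:33:44:55",
     "payload": b"GET /inbox HTTP/1.1"},
    {"type": "data", "encrypted": True, "bssid": "00:11:22:33:44:55",
     "payload": b"\x8a\x1f"},
]

# Metadata-only collection keeps one record; the broader filter
# also keeps the unencrypted payload (never the encrypted one).
assert collect(frames) == [{"ssid": "HomeNet", "bssid": "00:11:22:33:44:55"}]
assert any("payload" in r for r in collect(frames, include_data_frames=True))
```

The point of the sketch is how small the difference is in code, and how large the difference is in the volume and character of the data collected, which is what makes the year-long failure to notice the extra data so surprising.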

In terms of consequences, the legal prospects for Google in any investigations of its actions in this matter are, as might be expected, very different in the European Community and the United States. Government authorities in several European countries have announced their intention to open investigations into the Street View program and its data collection practices. The general consensus, echoed by the U.K. Information Commissioner quoted in a story last week, is that Google appears to be in violation of the European Union's Data Protection Directive (95/46/EC), which restricts the collection or processing of personal data without prior consent, and possibly of the EU's Directive on Privacy and Electronic Communications (2002/58/EC). The excuse generally offered by Google has been that because the data was unencrypted and broadcast outside the confines of the homes or other buildings where the access points are located, the transmissions were "public" and therefore not subject to the data protection rules on personal data. A secondary line of defense might be that the collection of this data was unintentional, so no consent was sought because there was no plan to collect or use the data. It seems likely Google might offer to delete data it already has on hand, much as it agreed to do when asked by Irish authorities. These legal defenses would likely be put forth in the U.S. as well, where the most relevant law would seem to be the Electronic Communications Privacy Act. The statutory language in this law prohibits intentional interception or use of wire, oral, or electronic communications (18 U.S.C. §2511), so it's not hard to imagine an argument in a U.S. court that since Google's actions were unintentional, no violation of the law occurred.
Legally, this line of reasoning would likely come down to semantic interpretations of "intentional" and "accidental"; it is undisputed that the traffic was captured because the Street View program included software code designed to do exactly that, so it's not as if the data capture resulted from some sort of spectrum conflict or other unforeseeable situation. To the extent Google's actions infringe on individual privacy, the data collection might be portrayed as contrary to the company's current privacy policy, which if true might constitute unfair or deceptive trade practices under the authority of the Federal Trade Commission Act (15 U.S.C. §45). The current privacy page for Google Street View emphasizes that all the images Google collects are public, but makes no mention of any wireless network detection. To see how this actually plays out, in courtroom settings or in negotiations that obviate the need for lawsuits, we will need to watch the way the investigations unfold, but a good indication of the legal avenues likely to be pursued in the U.S. is detailed in a letter sent last week from the Electronic Privacy Information Center (EPIC) to the chairman of the Federal Communications Commission, suggesting that Google's actions are clearly in violation of federal wiretapping laws.