Earlier this month, the National Institute of Standards and Technology issued a request for comments on a draft set of proposed security metrics that OMB is considering using for agencies’ annual reporting as required under the Federal Information Security Management Act (FISMA). The comment period runs through January 4, 2010, giving all interested parties, including members of the public, the chance to point out aspects of information security management that OMB and NIST may be overlooking. A quick read through the draft recommended metrics (easy to do, since they are presented in bullet-point form in a 22-page slide presentation) provides a sense of how the government’s thinking on information security is evolving, and also gives an indication of some of the technologies and practices OMB thinks agencies should be adopting, even where formal recommendations (in the form of a memorandum) have not been issued.
In the past, information security reporting by government agencies has focused on historical perspectives produced at relatively infrequent (annual or quarterly) intervals, so one interesting theme in the proposed metrics is the emphasis on real-time reporting capabilities in general, and automated capabilities for achieving situational awareness in particular. OMB proposes asking agencies whether they can provide real-time feeds about system, hardware, and software inventories; external connections, including Internet and remote access channels; the number of employees and contractors with log-in credentials, security awareness training, and significant information security responsibilities; and integrated security status and monitoring. In every category, the questions as currently worded allow for the possibility that a given agency does not have real-time or even automated capabilities for reporting the requested information, but in most cases, if no capability exists, agencies are asked to provide a date by which they will have such capabilities in place. This language implies a recommendation or expectation that certain practices and technologies be implemented, at least to facilitate reporting (an online reporting tool called CyberScope went live in October). Moving in the direction of continuous monitoring and reporting is a consistent trend from NIST, seen most recently in the revisions to its Special Publication 800-37, which among other things announce an intention to move away from triennial certification and accreditation and toward more continuous monitoring of security controls for information systems.
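To make the idea of an automated, machine-readable feed concrete, here is a minimal sketch of how a single host might emit its basic hardware and software inventory as JSON for aggregation into a near-real-time reporting pipeline. The field names and the static software list are hypothetical illustrations, not an actual CyberScope schema; a real agency feed would define its own format and draw on genuine asset management data.

```python
import json
import platform
import socket
from datetime import datetime, timezone

def collect_inventory():
    """Gather a minimal hardware/software snapshot for this host.

    Field names are illustrative only; an actual FISMA reporting feed
    would define its own schema.
    """
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "architecture": platform.machine(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
        # In a real deployment this list would come from the package
        # manager or an endpoint management agent, not a static stub.
        "installed_software": ["example-package 1.0"],
    }

if __name__ == "__main__":
    # Emit the snapshot as JSON; a central collector could poll this
    # output to maintain a near-real-time inventory across the agency.
    print(json.dumps(collect_inventory(), indent=2))
```

Run on a schedule and shipped to a central collector, output like this is the raw material for the kind of real-time inventory reporting the proposed metrics ask about.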
One possible way to interpret some of the questions in OMB’s proposal is that agencies may be expected in the near future to acquire and implement more technical measures to help enforce information security policies, regulations, and obligations that already exist. For example, questions under hardware inventory ask about agency abilities to detect and block the introduction of unauthorized hardware to any device on agency networks, and under software inventory similar questions ask about the ability to prevent unauthorized software from being installed on network-connected devices. These capabilities are most often associated with technical security measures such as network access control, endpoint security, and monitoring of USB ports and other workstation I/O channels. Many agencies have policies in place forbidding, for instance, the use of USB thumb drives or other removable storage media, but not all have implemented the corresponding technical controls to monitor and enforce compliance with such policies. Similar disconnects between policy and enforcement exist at many agencies where third-party or even personal computers can be connected to government networks. In some cases agencies rely on employee and contractor execution of acceptable use or rules of behavior agreements, rather than on technology to monitor network connections, scan clients attempting to connect, and alert when violations occur. The proposed FISMA reporting questions also ask about the use and validation of standard configurations for computing platforms, presumably to determine to what extent agencies are following the Federal Desktop Core Configuration (FDCC) mandated beginning in early 2008, or similar secure configuration guidelines.
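As an illustration of what closing that policy-versus-enforcement gap looks like in practice, here is a minimal sketch of the kind of automated check a technical control might perform, comparing installed software against an approved list. The allowlist contents and the use of dpkg-query are assumptions for the example; a real deployment would rely on an endpoint management or network access control agent rather than a standalone script.

```python
import subprocess

# Hypothetical allowlist of approved packages; in practice this would
# come from the agency's configuration management database.
APPROVED = {"openssh-server", "postgresql", "python3"}

def installed_packages():
    """List installed packages on a Debian-style host (illustrative;
    other platforms would query their own package manager)."""
    out = subprocess.run(
        ["dpkg-query", "-W", "-f=${Package}\\n"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())

def unauthorized():
    """Return packages present on the host but absent from the allowlist."""
    return installed_packages() - APPROVED

if __name__ == "__main__":
    for pkg in sorted(unauthorized()):
        # A real agent would alert or block rather than just print.
        print(f"unauthorized software detected: {pkg}")
```

The point of the sketch is that the check is mechanical and cheap; the hard part agencies face is deploying and maintaining such agents across every network-connected device, not the detection logic itself.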
In proposed questions about incident detection, the wording may indicate a shift, however subtle, in expectations for agency practices and the need to include those in FISMA reports. For example, in the OMB draft metrics, the language presumes that agencies are conducting controlled network penetration testing. This has always been a requirement under FISMA, but FISMA reporting to date has limited questioning to the incident detection tools in use, and has never asked specifically about agency penetration testing. In a format similar to previous FISMA report questions on incident detection and response, the proposed metrics include a category for data leakage protection, asking agencies what technologies (if any) are used to prevent sensitive information from being sent outside agency network environments. Aside from a directive issued in 2006 (OMB Memorandum 06-16) that instructed agencies to encrypt agency data stored on laptops and other mobile devices, no comprehensive guidance or requirement has been issued for federal agencies regarding data leakage protection (or, as it is more commonly termed in the security market, “data loss prevention”), although it has been a popular topic in government security policy discussions by the Information Security and Privacy Advisory Board (ISPAB) and other bodies debating government information security priorities.
On balance, the new metrics proposed by OMB appear to be a small step forward in reporting information more representative of agency security posture than previous FISMA report requirements, although they are likely to be insufficient to support some of the more significant revisions to FISMA that have been proposed by Sen. Tom Carper and others in Congress over the past 18 months. To their credit, NIST and OMB do appear to be positioning themselves to leverage relevant government-wide initiatives, such as HSPD-12 credentials and the consolidation of agency external connections under the Trusted Internet Connections program. Practically speaking, the intended benefits of the proposed metrics will not be realized until a greater proportion of agencies take action to implement the capabilities and security best practices NIST recommends.
While the technical infrastructure required to support information sharing doesn’t really change from context to context, the security and privacy requirements applying to senders and receivers of information do vary quite a bit depending on the domain. In the health information exchange arena, these differing requirements and the inability to reconcile them have served to slow participation in initiatives such as the Nationwide Health Information Network (NHIN). One of the information sharing solutions often held up as a model for other domains is the federal Information Sharing Environment (ISE), developed as a trusted infrastructure for sharing information about terrorist threats among federal, state, and local intelligence, law enforcement, defense, homeland security, and foreign affairs organizations. Concerns over establishing and maintaining appropriate protections for the data shared in the type of exchange envisioned for the ISE have resulted in less actual sharing of information than was intended, a problem the administration is now trying to address.
Noting the proliferation of distinct and often incompatible data classification schemes among the organizations possessing relevant information, the administration in May directed an interagency task force to review procedures for classifying sensitive-but-unclassified data and to make recommendations on ways to standardize classification guidance so as to facilitate the exchange of this information. The recommendations, released last week, place such a priority on information sharing that, in the task force’s view, the lack of consistent or comprehensive security controls should not stand in the way of greater sharing. This finding might seem counter-intuitive at first glance given the sensitivity normally associated with terrorism data, but the recommendation is actually an excellent example of risk-based decision making on security. Simply put, the value of having more of this data available to those who need it to protect the nation from terrorist threats outweighs the risk of the information being disclosed beyond its intended audience. There is certainly an implied expectation that security will continue to be addressed, and that more robust security controls will be applied to information exchanges as agencies come to agreement on the technologies and procedures to be used, but in the meantime, the report determines that the anti-terrorism mission should not be constrained by insufficient sharing of sensitive but unclassified information.
The Supreme Court last week agreed to hear arguments in a case on employee privacy and the extent to which government agencies can monitor the content of personal communications made by their employees using government-owned equipment. The case involves a police sergeant on the Ontario, California SWAT team who routinely used his city-issued pager to send and receive personal messages, many of which were found to be sexual in nature. The case (Quon v. Arch Wireless) is only partly about the appropriateness of the content, or the fact that most of the pager usage by the individual in question was personal rather than business-oriented. The central issue is whether the city violated a constitutional right to privacy (under an interpretation of the Fourth Amendment’s protection against unreasonable search) by inspecting the content of the text messages the sergeant sent. The Ninth Circuit Court of Appeals concluded that the city did in fact violate the sergeant’s privacy, so it is the city that appealed the decision to the Supreme Court. While the issues at the heart of the case are the subject of considerable disagreement among legal theorists and privacy advocates, the particulars of this specific case may present the Court with an opportunity to settle the dispute without establishing a broad or significant precedent about privacy in the workplace.
In the United States, the general rule is that employees have almost no right to privacy in the workplace when using employer-owned equipment such as phones, computers, and other communications devices, as long as employees have been notified by their employer that monitoring of their communications is taking place. (The situation is drastically different in the European Community and other foreign jurisdictions, but of course the Supreme Court’s jurisdiction does not extend beyond the U.S.) U.S. law does distinguish cases where the monitoring constitutes “interception” — such as listening in on calls or inspecting email in transit — in which case the U.S. Wiretap Act (for telephone calls) and the Electronic Communications Privacy Act (for electronic communications such as email) generally prohibit monitoring. Both of these laws contain exceptions for situations where monitoring is for ordinary business use and where prior consent to monitoring has been given by employees. In the Ontario case, the police department had a formal policy in place asserting a right to monitor electronic communications by employees, and employees were told explicitly that they had no expectation of privacy. That might have been the end of the story had a somewhat contradictory informal policy not been adopted by the SWAT commander to whom the sergeant reported, under which officers were told that if they paid for pager usage in excess of a 25,000-character monthly limit, their messages would not be inspected. Legal counsel for the sergeant argued, and the Ninth Circuit panel agreed, that the informal policy overrode the official one, and that the sergeant’s rights had therefore been violated under the Fourth Amendment and the provisions of the Stored Communications Act (18 USC 121 §§2701-2711). While the larger issue at stake is the extent to which government employees can expect their workplace communications to remain private, the Court may choose not to weigh in on this as part of this case. Its consideration is also likely to be limited to workplace privacy for government employees, although most of the relevant privacy laws also apply to private sector organizations.
Leaving aside for the moment the lascivious nature of the sergeant’s personal communications (which would violate the acceptable use policies of many private and public sector organizations), the Supreme Court may choose not to make a new legal interpretation on employee privacy in the workplace because the law in that area is already clear. The situation may have been more likely to come up in a government setting, given that not all private sector organizations have the same rules or practices regarding stored communications and saved electronic messages that made the review of the sergeant’s text messages possible. Employers have been fairly consistent in asserting their rights to monitor employee usage of employer property, but it is also common practice to allow occasional or incidental personal use of that property, which makes it hard to draw a legal line between what constitutes appropriate use and what is too much. The content of the messages in this case makes the conduct seem more egregiously inappropriate, but the Ninth Circuit panel at least thought it unfair for the Ontario Police Department to tell its employees their communications wouldn’t be inspected and then change its mind. In this context the take-away from the case may be less a reinterpretation of employee privacy rights in the workplace than a reinforcement of the need for employers to create, make employees aware of, and follow explicit acceptable use and privacy policies.
Following the disclosure in November that employees at University Medical Center of Southern Nevada (UMC) had been sending patient information outside the hospital to personal injury lawyers and other outsiders, the FBI opened a criminal investigation into the systematic leaks of patient data. According to reports in the Las Vegas Sun, one or more UMC insiders have been selling the hospital’s daily patient registration forms — including names, birth dates, Social Security numbers, and medical condition information — so that personal injury lawyers could solicit clients. Under the high level of scrutiny that followed the leaks becoming public, it appears the hospital has a less-than-stellar record of complying with privacy laws, particularly HIPAA.
In an interesting take on the issue, a more recent article in the Sun suggests UMC shouldn’t be too concerned about the breach, noting the extreme rarity with which HIPAA violations have been punished in the years since the HIPAA Privacy Rule went into effect. HIPAA enforcement history is a matter of public record, and there is no question that the imposition of harsh penalties has been the exception rather than the rule, but among the provisions of the HITECH Act passed in February was a strengthening of penalties for HIPAA violations. These stronger provisions are noted in the Sun article, but the prospect of criminal prosecution isn’t considered very likely. What this analysis overlooks is the specific language on HIPAA enforcement in the HITECH Act, which both requires a formal investigation and mandates the imposition of penalties in cases of “willful neglect” (HITECH Act Subtitle D, §13410). It is not trivial for investigators to show willful neglect, since doing so means proving that non-compliance was both known and either ignored or insufficiently remedied, but the early public information on this investigation suggests a long-term pattern of HIPAA non-compliance despite widespread awareness of HIPAA requirements among UMC staff. It seems cases just like this one are what the law’s improved enforcement provisions were intended to address.
The Wall Street Journal published an article on December 17 reporting that the U.S. military has discovered that wireless video feeds from unmanned Predator drones operating in Iraq are often intercepted by enemy insurgents. The insurgents’ ability to capture the wireless data is apparently facilitated by the fact that the video transmissions are not encrypted, allowing anyone in the geographical vicinity of the drones to intercept the video feeds using inexpensive, commercially available wireless sniffing software. You might think that encryption would be an operational requirement for the wireless transmission of such sensitive intelligence data gathered in the field, but statements from defense and intelligence officials suggest that other functional priorities — such as transmission over large distances with potentially limited bandwidth — may have trumped security considerations. Most surprising is the acknowledgment by the military that the vulnerability exposed by using unencrypted transmissions has been known for nearly 20 years, yet still hasn’t been mitigated, in part because U.S. military officials “assumed local adversaries wouldn’t know how to exploit it.”
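By way of contrast, applying authenticated encryption to a data stream is well within reach of commodity software. Below is a minimal sketch using the third-party Python cryptography package to protect a hypothetical frame payload with AES-GCM; the payload and function names are invented for illustration, and in practice key distribution and the bandwidth constraints the officials cite are the hard problems, not the cipher itself.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# A shared 256-bit key; in a real system this would be provisioned
# through a key management process, never generated ad hoc like this.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

def encrypt_frame(frame: bytes) -> bytes:
    """Encrypt one payload with AES-GCM; the 12-byte nonce must be
    unique per message, so it is prepended to the ciphertext."""
    nonce = os.urandom(12)
    return nonce + aesgcm.encrypt(nonce, frame, None)

def decrypt_frame(blob: bytes) -> bytes:
    """Split off the nonce and decrypt; any tampering with the
    ciphertext raises an exception rather than returning garbage."""
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

# Hypothetical video frame payload for illustration.
frame = b"sensor frame 0001"
assert decrypt_frame(encrypt_frame(frame)) == frame
```

AES-GCM provides confidentiality and integrity in a single pass, which matters for a feed an adversary can both read and potentially tamper with, and the modest per-message overhead (a nonce plus an authentication tag) undercuts the bandwidth argument for leaving such links in the clear.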
This scenario exposes a glaring weakness in the risk assessment underpinning the security posture for unmanned drones: the characterization of the threat environment in Iraq and other operational theaters appears to underestimate the knowledge and technical capabilities of the adversaries that constitute threat sources to U.S. military operations. The military is now moving to upgrade the network infrastructure involved to add encryption to its wireless transmissions, although in a report from the Air Force that has drawn the ire of Congressman Jim Langevin and others, the work to add encryption to video transmissions from drones is not expected to be completed until 2014.
While the U.S. military places great emphasis on information assurance and is often held up as an example of robust security practices, the long-standing vulnerability in its video surveillance operations is reminiscent of widely publicized wireless data breaches in the commercial retail sector. Back in 2002, large retailers began to implement security measures for wireless network communication within their stores. Short-range wireless transmission without encryption was common practice at the time, for purposes such as communicating transactions between computerized cash registers and back-office financial management and inventory control systems. When retailers such as Best Buy discovered that hackers were intercepting customer credit card data by sniffing wireless traffic sent from point-of-sale terminals, they quickly moved either to encrypt their wireless transmissions or, like Best Buy, to stop using wireless cash registers altogether.
More recently, TJX suffered an enormous data breach at its TJ Maxx stores, reported in 2007 but beginning as early as 2005. The severity of the breach was attributed in part to the company’s persistent storage of unencrypted customer data (in violation of the Payment Card Industry (PCI) Data Security Standard), but the attack was also enabled by the company’s use of ineffective wireless security, including the weak Wired Equivalent Privacy (WEP) protocol and, in some cases, no encryption at all. The industry’s response to TJX’s breach has been to revise and strengthen PCI requirements and to adopt stronger wireless encryption where sensitive or personal information and transactions continue to be transmitted over wireless networks.
What all these cases have in common is a failure — made blatantly obvious only after attacks succeeded — to identify and implement security controls commensurate with the risk posed by known threat sources and known vulnerabilities. It also seems likely that in every case the failure in the risk analysis was a mischaracterization or underestimation of threats, rather than an undervaluation of the impact associated with a breach. This type of mistake was acknowledged explicitly in the case of the U.S. military and its Predator video feeds, and is implied by the choices of Best Buy, TJ Maxx, and other retailers not to encrypt their wireless transmissions. The lesson here is simple: don’t overlook any threat sources when assessing risk, and don’t underestimate the capabilities of the threats you do identify.