The National Institute of Standards and Technology (NIST) released an updated guide to its Risk Management Framework (RMF) in December when it published the final public draft of Special Publication 800-39. Among several areas where the document has changed substantially from the previous draft version is its treatment of trust and trustworthiness, with a revised section on trust and trustworthiness of both organizations and information systems, and a newly added appendix describing several trust models or approaches to establishing trusted relationships between organizations. The choice of which model (or models, as they are generally not mutually exclusive) to use depends on a variety of factors about the context in which trust is sought and the nature of the entities that need to trust each other. By separately addressing trust between organizations and trust between systems, SP 800-39 illustrates the different types of factors used to assess the trustworthiness of other entities and, perhaps unintentionally, highlights some of the limitations inherent in trusted computing models that can lead to unintended gaps in security.
No recent incident highlights this problem more effectively than the leak of hundreds of thousands of State Department cables and other documents, apparently downloaded and exfiltrated without detection by an authorized user of the Net-Centric Diplomacy database. The system holding the classified information was deployed on a secure, access-controlled military network, but with no individual user-level authentication and authorization mechanisms, and no capabilities to monitor activity by users accessing the database. Considering the system from the perspective of a computer security assurance model like ISO/IEC 15408 (Common Criteria), even in this vulnerable implementation, the system satisfies one of the two properties used to establish assurance levels for a system: the security functionality provided by the system works as specified. Where the system appears to come up short on assurance is with respect to the second property: the system cannot be used in a way such that the security functionality can be corrupted or bypassed. In hindsight it seems fair to suggest that the security requirements for this system were not well thought out; an argument can be made that since the deficiencies in its security posture result from the failure to implement some types of controls rather than any malfunction or evasion of the controls that were implemented, the system might actually qualify as “trusted” under at least lower evaluation assurance levels. The most relevant weakness in the system as implemented concerns not the trustworthiness of the system, but that of the users authorized to access it.
Authorized use of the Department of Defense’s secure internet protocol router network (SIPRnet) is limited to users with security clearances sufficient to access classified information. Receiving such a security clearance requires a fairly extensive investigation, and the result is that individuals deemed qualified for a secret (or higher) clearance are considered to be trustworthy. Few systems that contain or process sensitive classified information rely only on a user’s security clearance for authentication and authorization, but for the Net-Centric Diplomacy database, it would seem that if a user could log on to SIPRnet, the user could get access to information stored in the database. The trustworthiness of a system envisioned for this kind of use should be directly tied to the trustworthiness (or lack thereof) of the authorized users of the system, but in this case, either no such personnel-level evaluation occurred, or the assumption of trustworthiness associated with clearance holders resulted in a gross underestimation of the threat posed by authorized insiders. The consequences are now readily apparent.
As NIST defines in SP 800-39, trustworthiness “is an attribute of a person or organization that provides confidence to others of the qualifications, capabilities, and reliability of that entity to perform specific tasks and fulfill assigned responsibilities.” When considered for people or organizations, trustworthiness determinations may take into account factors beyond competency, such as reputation, risk tolerance, or the interests, incentives, or motivation of the person or organization to behave as expected. There is no expectation, of course, that every situation requires entities to demonstrate trustworthiness, or to be trusted at the same level across different contexts or purposes. With respect to information systems, among the factors NIST cites that contribute to determinations of trustworthiness are the security functionality delivered by the system and the assurance, or grounds for confidence, that the security functionality has been implemented correctly and operates effectively. Unsurprisingly, this perspective fits nicely with the concepts of minimum security control requirements and standard control baselines that NIST also uses, in Federal Information Processing Standard 200 and Special Publication 800-53, respectively. IT trust frameworks focused on system assurance defined in terms of predictability, reliability, or functionality tend to equate assurance with trustworthiness, characterizing trusted systems in a way that dissociates the system from those that use, operate, or access it. Such an approach ignores the constraints on trustworthiness that might be applied if the system was evaluated in concert with the non-system actors (organizations and people) that have the capability to influence the system’s behavior or the disposition of the information the system receives, stores, or disseminates. A highly trusted system (in the Common Criteria sense) that is designed to be used in a particular way can be, and sometimes is, misused.
Data exchanged with trusted organizations or trusted systems is only as secure or private as the authorized users within the organization, or with access to the system, choose to keep it. This is why the provision of organizational capabilities to monitor user actions with trusted systems should be an essential prerequisite to establishing a trust relationship, particularly when those relationships are negotiated only at the organization-to-organization or system-to-system level.
There is no shortage of post-hoc analysis on how a quarter million State Department cables and other documents were acquired and sent to WikiLeaks, how a recurrence of such an incident might be avoided, or on the security implications for government agencies and commercial enterprises alike. Where these points converge is in the area of granting trusted individuals access to sensitive information — a topic that is already (or should be) on the minds of managers in public and private sector organizations in just about every industry. A closer look at the situation illustrates the risks inherent in many types of information sharing, and of relying on knowing and trusting your users instead of locking down access to the systems that contain the data you want to protect.
As Joby Warrick explained in a New Year’s Eve article in The Washington Post, the Net-Centric Diplomacy database in which the classified documents were stored was built with information sharing as a priority, not access control or monitoring of user-level activity. Instead, the system relied to some extent on security-by-obscurity and, because it was deployed on the DoD’s SIPRnet classified network, the database in theory was only available to trustworthy users — that is, people who had been granted security clearances by the government. Under the system’s security model, anyone possessing an appropriate clearance could get to the information stored in the database; by some estimates, that means close to a million government employees, contractors, and military personnel might have had access. What the system lacked was security controls to monitor potentially unauthorized or inappropriate activities such as downloading large volumes of documents from the database. The lack of finer-grained access controls coupled with insufficient monitoring essentially left the door open to misappropriation of data by an authorized user, which is apparently exactly what happened in this instance.
Corrective actions for these security weaknesses might include implementing better access controls, or perhaps putting better audit logging or even data loss prevention mechanisms in place. The fact that the system was apparently designed and deployed with this security posture highlights the other key assumption that turned out to be false: that background checks and investigations done in order to grant security clearances are sufficient indicators of the trustworthiness of individuals. In a tacit acknowledgment of this problem, a January 3 memo from the Office of Management and Budget to all federal agencies asks, among several questions on deterring, detecting, and defending against unauthorized disclosures by employees, how agencies measure trustworthiness among employees, and specifically whether social or psychological factors are taken into account when assessing employee trustworthiness. A system with an access model that presumes anyone who can get access as an authorized user is trustworthy is clearly vulnerable to abuse of privileges, so organizations deploying systems in this way would do well to think about ways to safeguard information from misuse, and also might want to validate the means they use to grant access to users in the first place. By asking agencies to conduct a self-assessment of potential system vulnerabilities and weaknesses, officials at OMB seem to want not only to head off future leaks, but also to get agencies to take a closer look at their security measures with insider threats (like disgruntled yet authorized users) in mind.
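The kind of activity monitoring suggested above need not be elaborate to catch bulk exfiltration. A minimal sketch of the idea, using hypothetical log data and an arbitrary threshold (nothing here reflects any actual DoD or State Department system), might flag any user whose document retrievals exceed a limit within a sliding one-hour window:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical threshold; a real deployment would tune this
# against baseline usage patterns.
MAX_DOCS_PER_HOUR = 200

def flag_bulk_downloads(events, max_docs=MAX_DOCS_PER_HOUR):
    """events: iterable of (user, timestamp) tuples, one per
    document retrieval. Returns the set of users whose retrieval
    volume exceeds max_docs in any one-hour window."""
    per_user = defaultdict(list)
    for user, ts in events:
        per_user[user].append(ts)
    flagged = set()
    for user, times in per_user.items():
        times.sort()
        window_start = 0
        for i, ts in enumerate(times):
            # Slide the window forward until it spans at most one hour.
            while ts - times[window_start] > timedelta(hours=1):
                window_start += 1
            if i - window_start + 1 > max_docs:
                flagged.add(user)
                break
    return flagged
```

Even a simple detector like this, wired to an alerting pipeline, would have surfaced the kind of high-volume downloading described in the Post's account.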
As the Health IT Policy Committee’s Privacy and Security “Tiger Team” continues its work to provide recommendations and suggested policy guidance on health information exchange, there appears to be some concern among hospitals and other HIPAA-covered entities that the recommendations, if implemented in federal rulemaking, would add to the security and privacy requirements already in effect under the HIPAA Security Rule and Privacy Rule, and would go beyond the strengthened federal regulations included in the Health Information Technology for Economic and Clinical Health (HITECH) Act. According to a report from Modern Healthcare‘s Joseph Conn, a representative of the Federation of American Hospitals raised such concerns during the public comment period of the Health IT Policy Committee’s November 19 meeting. The comments offered at the meeting related specifically to still-pending recommendations on patient privacy and, especially, consent. Previous briefings by Tiger Team leaders have acknowledged the applicability of HIPAA and other relevant laws in specifying situations in which consent is not required, but in the face of potentially expanded health data sharing through the adoption of electronic health record (EHR) and health information exchange (HIE) systems, committee members have suggested that current regulations do not adequately reflect patient expectations about controlling the use and disclosure of their personal health information, especially when that information is shared via HIE. With so much attention focused on the government’s meaningful use incentive program to encourage EHR technology adoption, the lack (so far) of any privacy provisions in meaningful use rules and standards has prompted Tiger Team discussions about ways to do more to protect patient privacy rights.
During the November 19 meeting, Tiger Team co-chairs Deven McGraw and Paul Egerman presented a set of proposed recommendations for provider authentication in health information exchange, intended in part to address a perceived gap in the HIPAA Security Rule, which requires that HIPAA-covered entities implement policies and procedures to authenticate individual users or entities seeking access to protected health information (45 CFR §164.312(d)), but does not stipulate the means by which such authentication should occur. The Tiger Team advocates mandating the use of digital certificates for entity authentication for all entities involved in health information exchange. Adopting such an approach would require not only the technical capability to implement and use digital certificates among entities participating in HIEs, but also the establishment of a formal process for validating would-be participants (to ensure they are legitimate organizations) and for issuing credentials to approved entities. The authentication model for the Nationwide Health Information Network (NHIN) has long been envisioned to use a single, centralized certificate authority to issue credentials and manage certificate validity, revocation, and related maintenance processes. The most recent recommendations from the Tiger Team suggest that instead of relying on a central authority, the Office of the National Coordinator should establish an accreditation program (perhaps similar to the one used for accrediting EHR testing and certifying bodies under the meaningful use program) to authorize multiple certificate issuers. 
While a federated or distributed model for credentialing would almost certainly be more scalable than a single-issuer model, there remain unaddressed aspects of governance and oversight related to how ONC can ensure that organizations seeking approval as certificate issuers conform to all relevant technical and governance criteria, and are sufficiently trustworthy to handle entity identity proofing and issue authentication credentials.
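The governance question comes down to maintaining a registry of accredited issuers and trusting a credential only when its issuer is on that registry, the credential is within its validity period, and it has not been revoked. A toy model of that logic follows; it is illustrative only — real HIE credentials would be X.509 certificates validated cryptographically, and the names here are hypothetical, not part of any ONC specification:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class Credential:
    """Stand-in for a certificate: who it names and who issued it."""
    subject: str
    issuer: str
    not_before: date
    not_after: date

@dataclass
class TrustRegistry:
    """Models an accreditation program with multiple approved issuers."""
    accredited_issuers: set = field(default_factory=set)
    revoked: set = field(default_factory=set)  # (issuer, subject) pairs

    def accredit(self, issuer):
        self.accredited_issuers.add(issuer)

    def revoke(self, cred):
        self.revoked.add((cred.issuer, cred.subject))

    def is_trusted(self, cred, on=None):
        """A credential is trusted only if its issuer is accredited,
        it is within its validity window, and it is not revoked."""
        on = on or date.today()
        return (cred.issuer in self.accredited_issuers
                and cred.not_before <= on <= cred.not_after
                and (cred.issuer, cred.subject) not in self.revoked)
```

The point the sketch makes is that the federated model pushes the hard problem up a level: the trustworthiness of every credential now depends on the rigor of the accreditation decisions that populate the registry.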
Even though some changes to federal health data privacy and security regulations included in HITECH have not yet been implemented, HIPAA remains established law, and additional changes to the provisions in the Security Rule and Privacy Rule cannot be made simply through rulemaking by an executive agency like HHS. While specific changes to the law must be made by the legislature, the Office of the National Coordinator and the HHS Office for Civil Rights generally have the authority to impose additional criteria or specific requirements on HIPAA-covered entities, whether associated with the audit standards used to assess compliance, or as conditions for receiving federal funds under meaningful use or one of the various federal grant programs intended to promote or expand the adoption of health IT. There seems to be some renewed emphasis on HIPAA compliance, based in part on the expectation that OCR will soon begin to proactively audit covered entities and business associates for compliance with the Security and Privacy Rules. In this environment, it is perhaps understandable that covered entities would object in advance to the prospect of any further changes in the regulatory requirements for which these entities will be held accountable. The fact remains, however, that the law as enacted was not primarily intended to encourage health IT adoption, and if widespread use of such technology is going to be achieved, some of the regulatory areas that leave room for interpretation may need to be augmented with more specific guidance or requirements.
Reported findings from a recently released survey of federal government executives on Cybersecurity in the Federal Government suggest that the increased emphasis on information security and corresponding protective measures put in place by government agencies are negatively impacting productivity among those surveyed, particularly with respect to access to information, computing functionality, and mobility. The survey, conducted in May 2010 via email by the Government Business Council in conjunction with Citrix and Intel, included 162 respondents selected from among subscribers to Government Executive. The focus of the survey was specifically to address issues of access, functionality, and mobility and the executives’ perceptions of the role of security as a help or hindrance to performing job-related functions.
It should be noted that the survey did not seek to evaluate the relative effectiveness of information security programs or cybersecurity initiatives in achieving any of their intended objectives, such as making government systems and information more secure, but instead only looked at the impact security measures have on routine business operations. Media reports of the survey results, such as a summary of responses to four questions printed in the October 25 issue of Federal Computer Week, seem to emphasize the negative impact many government executives reported due to security measures, but without any indication of whether such measures are succeeding according to any other metrics, it’s hard to identify a clear set of implications from the survey. The only substantive recommendation proposed in the survey report is to include additional considerations as factors that help determine security policy, with the largest proportion of respondents putting “agency’s mission” at the top of their list of priorities. This implies that many government executives believe that less restrictive security measures should be used where security inhibits productivity-enhancing behaviors, such as accessing data from home or other non-agency locations.
The tension between business objectives and security is not new, and CIOs, CISOs, and other information security program managers are continually challenged with arriving at the right balancing point between protection and productivity — too little security is likely to result in more frequent and more significant security incidents, while too much security pits workers against security officers and threatens to brand the security team as a barrier to business. Even with additional oversight and attention from the administration on security, agencies are still expected to apply risk-based management principles to their decisions about what security measures to implement. If risk-based decision makers only consider the potential impact due to security breaches and the proposed security controls’ ability to reduce those risks, they’re not looking at risk from an enterprise level. Any calculation of the anticipated reduction in the risk of bad things happening due to the implementation of security controls should take into account the loss in business productivity or efficiency as part of the cost of security. The oversimplified first principle of security management is: don’t spend more protecting an asset than the loss or damage to the asset is worth to you. Reductions in productivity due to security-imposed obstacles to standard business practices should be explicitly included in the cost vs. benefit equation. In some cases it seems quite likely that mitigating the risk outweighs the loss of productivity, and where that is true, the business rationale behind the determination should be communicated clearly to the executives it affects. An interesting question not asked on the survey would be how much insecurity (i.e., greater risk of breaches, outages, or other incidents) government executives would be willing to trade for uninhibited access to information.
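The cost vs. benefit equation described above can be made concrete with the classic annualized loss expectancy arithmetic, extended with the productivity term the survey suggests is often omitted. The figures below are invented for illustration:

```python
def control_net_benefit(sle, aro_before, aro_after,
                        control_cost, productivity_loss):
    """Rough annualized net benefit of a security control.

    sle: single loss expectancy (expected dollar loss per incident)
    aro_before, aro_after: annualized rate of occurrence of the
        incident without and with the control in place
    control_cost: annual cost to acquire and operate the control
    productivity_loss: annual dollar value of productivity lost to
        the control's friction (the often-omitted term)
    """
    risk_reduction = (aro_before - aro_after) * sle
    return risk_reduction - control_cost - productivity_loss

# Hypothetical control: $100k loss per incident, incident rate cut
# from 0.5/year to 0.1/year, $10k/year to operate.
favorable = control_net_benefit(100_000, 0.5, 0.1, 10_000, 5_000)
# Same control, but its friction costs $50k/year in lost output:
unfavorable = control_net_benefit(100_000, 0.5, 0.1, 10_000, 50_000)
```

In the first scenario the control nets a positive return; in the second, the identical control becomes a net loss purely because of the productivity term — exactly the distinction an enterprise-level risk view is supposed to capture.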
An article in today’s Washington Post tells the story of a Maryland couple who, after their home was burglarized, activated a GPS tracking service available from Sprint Nextel (their wireless carrier) and gave assistance to local police to find and arrest the man suspected of robbing them. The alleged burglar stole a cellular telephone belonging to the couple, and (somewhat helpfully it turns out) initiated a pattern of heavy usage of that cell phone within a few hours after he took it. The couple had called Sprint Nextel to report the cell phone stolen, but also inquired whether the carrier had some way to determine the phone’s location. Conveniently, Sprint Nextel offers a Family Locator service that, among other features, allows subscribers to see real-time GPS location information on web-based satellite maps. Once the couple activated the service, they were able to identify the stolen cell phone’s location, and sent copies of the satellite images they viewed on their home computer to the police detective assigned to their robbery case.
Among the more interesting elements in this story is the fact that the couple, as private individuals subscribing to wireless cellular service, were well within their legal rights to track the suspect, his use of their cell phone, and his location while he carried it. This is not the case for the detective investigating the robbery or for law enforcement personnel in general, who are required under state and federal laws to obtain subpoenas or warrants before they can gain access to tracking data. For example, Rule 41 of the Federal Rules of Criminal Procedure, which addresses search and seizure, stipulates specific requirements for law enforcement personnel to obtain a warrant for a tracking device (FRCP Rule 41(e)(2)(C)). Under provisions of the U.S. Code enacted under both the Stored Communications Act (SCA) and the Communications Assistance for Law Enforcement Act (CALEA), there are some exceptions to the warrant requirement for obtaining subscriber records from electronic communications service providers like wireless carriers, but where explicit tracking information such as GPS data is concerned, no authoritative precedent has been established as to whether these exceptions apply to cellphone GPS data. The procedures available to law enforcement in 18 U.S.C. §2703(d) relax the standard of evidence required in order to grant a request for disclosure of subscriber record information, where instead of probable cause a government entity need only present “reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.”
While valid legal avenues do exist for law enforcement personnel to obtain records from wireless carriers such as the GPS location data sought in the Maryland robbery investigation, the process involved takes time, so the immediate access to GPS information that the cell phone subscribers were able to get greatly facilitated the police’s ability to find and arrest the suspect. Given how often private citizens are urged not to take the law into their own hands, it is perhaps ironic that the active involvement of the burglary victims in this case was the key to an expedient resolution of the investigation.