According to an article yesterday from Government Health IT, the Office of the National Coordinator (ONC) is getting ready to address a more complete set of rules of behavior and other requirements for participation in the Nationwide Health Information Network (NHIN), and to establish the governance processes and capabilities to manage, monitor, and enforce them. While current participation in the NHIN Exchange is limited to federal agencies and federal contractors or grant recipients, the long-term vision for participation includes a wide range of state, regional, and federal government entities, commercial health care enterprises, and potentially researchers and other relevant organizations. For NHIN Exchange, a NHIN Coordinating Committee serves in an oversight capacity, with representation from ONC as well as each active participant, but as the number of participants grows beyond its current single-digit level and the focus shifts from building the NHIN to managing an operational infrastructure, a different sort of model is likely to be needed. Formal governance procedures, not to mention a governing body (ONC personnel and documentation typically use the term “NHIN governing authority”) with fully specified roles and responsibilities, are needed initially to facilitate the participation of entities that aren’t necessarily bound by the legal requirements that apply to current participants, to evaluate whether applicants should be permitted to participate, and to oversee the monitoring of the NHIN that is implied in the Data Use and Reciprocal Support Agreement (DURSA) and other participant agreements. A key topic area that governance rules must address is the set of security and privacy provisions NHIN participants must be able to support, including obvious security needs like secure communication, entity authentication and authorization, and audits, but also likely including practices like consent management.
The text of the DURSA provides several examples of expectations or obligations of its signatories that could be translated into an evaluation framework or standard set of criteria by which the security and privacy capabilities of prospective participants could be assessed. Under the DURSA, entities participating in the NHIN are responsible for maintaining the security of their own environments and for applying appropriate safeguards to protect the information the entity sends or receives using the NHIN. The point of reference for “appropriate safeguards” is the HIPAA Security Rule (45 CFR Parts 160 and 164), which seems a sensible source for requirements, at least for participants who are HIPAA-covered entities or business associates. The challenge may lie in specifying a default, or minimally acceptable, set of safeguards, and in trying to apply such a standard uniformly to all organizations. The only category of security controls called out explicitly in the DURSA is protection against malware, a requirement which, much like the general security provisions, is focused on protecting the message contents that will be transmitted using the NHIN and on protecting entities against the possible introduction of security threats into their environments via the ostensibly trusted channel that the NHIN provides. A thornier problem may be the NHIN’s approach (viewed through the DURSA) to handling access controls for end users: NHIN Exchange uses digital certificates for entity authentication and authorization (and other purposes), but the certificates are bound to the organizational identity, not to any individual users that might access NHIN-connected systems and initiate queries, requests, or data exchanges. Specifically, section 7 of the DURSA on System Access Policies (in its entirety) stipulates that:
Each Participant shall have policies and procedures in place that govern its Participant Users’ ability to access information on or through the Participant’s System and through the NHIN (“Participant Access Policies”). Each Participant acknowledges that Participant Access Policies will differ among them as a result of differing applicable law and business practices. Each Participant shall be responsible for determining whether and how to respond to a Message based on the application of its Participant Access Policies to the information contained in the assertions that accompany the Message as required by the NHIN Performance and Service Specifications. The Participants agree that each Participant shall comply with the applicable law, this agreement, and the NHIN Performance and Service Specifications in responding to messages.
One way to interpret this language is that participating entities are required to have access control policies and controls for their own systems, but that a given entity shouldn’t expect the policies and controls of another participant to be comparable to its own. Nevertheless, when receiving a message via the NHIN, participants are expected to use their own access control policies to determine whether and how to respond, despite the fact that it seems highly unlikely that any identifying information about a requester (other than his or her entity affiliation) can be of much use in making authorization decisions, since there is no reason to expect users of other participants’ systems to be known to the entity that receives the request or other message. The assumption seems to be that authorization of participating entity users is implied by their employment or other relationship to the entity whose certificate validates the assertions in the messages sent via the NHIN. However, for a given participant to have the confidence it needs to accept the validity of the individual initiating a request, it would seem enormously helpful to have some idea both of what access control policies are applied and enforced by the requesting entity, and of how those controls and other security and privacy measures were evaluated by an authority within the NHIN in order to decide that the controls were adequate for the intended uses of the NHIN to be supported.
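To make the two-layer decision concrete, here is a minimal, hypothetical sketch of what a receiving participant appears to face under this model: trust in the sending organization rests on its certificate, while any judgment about the individual user rests entirely on the sender’s own assertions. The organization names, attribute names, and policy structure below are invented for illustration; they are not the actual NHIN Authorization Framework schema.

```python
# Hypothetical sketch: entity-level trust comes from the organizational
# certificate; user-level authorization can only be based on the sender's
# asserted claims, since the receiver has never seen the user before.
# All names and attributes are illustrative, not NHIN specifications.

APPROVED_ORGS = {"Example Health System", "Example Federal Agency"}  # set at on-boarding

LOCAL_ACCESS_POLICY = {
    # purpose of use -> asserted user roles this participant will honor
    "TREATMENT": {"physician", "nurse"},
    "PAYMENT": {"billing-clerk"},
}

def authorize_request(cert_org: str, assertions: dict) -> bool:
    """Decide whether to respond to an inbound message, per local policy."""
    # 1. Was the message validated by an approved participant's certificate?
    if cert_org not in APPROVED_ORGS:
        return False
    # 2. Apply local policy to the sender's claims about its own user.
    purpose = assertions.get("purpose_of_use")
    role = assertions.get("user_role")
    return role in LOCAL_ACCESS_POLICY.get(purpose, set())

# Authorized only because we accept the sender's word for the user's role:
print(authorize_request(
    "Example Health System",
    {"user_id": "jdoe", "user_role": "physician", "purpose_of_use": "TREATMENT"},
))  # True
```

The sketch makes the gap visible: nothing in the second step is independently verifiable by the receiver, which is exactly why visibility into the requesting entity’s own access control policies, and into how those policies were evaluated, matters so much.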
ONC will engage the public and all interested stakeholders in the process of developing NHIN governance rules and capabilities, beginning with a request for information to be issued later this summer, and then through a comment period on a draft rule to be published early next year. From a practical standpoint, some of the areas that will need to be addressed in any governance framework are functions and processes already in place for NHIN Exchange, but for which formal criteria or standards have not yet been developed. For example, part of the “on-boarding” process for a new applicant is to apply for a digital certificate (this actually occurs twice, as a temporary cert is issued for validation and testing, followed by a production version), something that is not supposed to happen until the prospective participant’s application has been received and the participant has been approved for membership in NHIN Exchange. The decision to approve a prospective NHIN participant is a core governance function, but to date this process has been handled on a case-by-case basis, so to scale to a production-capable process, formal governance rules and standards are certainly needed, not to mention decision criteria. There are a number of functional areas ONC is working to support, but most of these also presume the existence of some sort of governing authority. ONC went so far as to issue a request for proposals in late January to award a contract for NHIN Operational and Infrastructure Support, with a variety of tasks that either presume or directly depend on the existence of a NHIN governing authority. These tasks included administering and operating technical infrastructure supporting the NHIN (“infrastructure” in this context meaning the certificate authority, directories, and network infrastructure), implementing a support center to provide assistance to participating entities throughout the process of joining the NHIN and of participating once they are on board, and creating and maintaining the on-boarding process itself.
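Purely for illustration, a certificate signing request of the kind a prospective participant’s gateway might generate during on-boarding could look like the sketch below, written with Python’s cryptography library. The organization and host names are made up, and the actual NHIN on-boarding tooling and certificate profile are not specified here; the point is simply that the identity being certified is the organization, not any end user.

```python
# Sketch only: generate a key pair and a CSR whose subject identifies the
# participating organization and its gateway host, not an individual user.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Health System"),
        x509.NameAttribute(NameOID.COMMON_NAME, "nhin-gateway.example.org"),
    ]))
    .sign(key, hashes.SHA256())
)
# The PEM-encoded request would be submitted to the certificate authority,
# once for the validation/testing cert and again for production.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```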
To move forward with a larger-scale NHIN that still leverages some of the core features of NHIN Exchange, it is essential for the governance processes and criteria associated with the NHIN (and with ONC, if ONC will own the governance function in the future) to be robust and transparent enough to give entities the confidence they need to participate. With the central governance model and single multi-party legal agreement used to date with the NHIN, participants theoretically have no need to trust each other, as long as they have confidence in the central authority that approves applicants for participation, and in the criteria used to make those approval decisions. This means that the key relationship for each NHIN participant is with the NHIN governing authority, since the NHIN asks participants to set aside their own judgment about other participants and substitute the NHIN’s judgment instead. Persuading participants to make that substitution is likely to prove very challenging even with a robust governance function in place; without effective governance, it isn’t feasible at all.
A research report released by the Direct Marketing Association (a U.K.-based entity) in cooperation with fast.MAP confirms that privacy concerns related to disclosing personal information continue to weigh on consumers’ minds, but also suggests that consumers are more willing to provide information to marketers if a relationship has been established between them. The extent to which a consumer’s relationship with a company is a trusting one appears more important in getting consumers to share personal information than financial incentives or material offers in exchange for the data.
In an interesting contrast to the information about marketers, the DMA Data Tracking study also found that a large proportion of consumers (29% of the 2,027 people surveyed) do not trust their banks to properly store or use the information they hold about their customers, and an even greater percentage said they do not trust government agencies or political parties to handle personal data appropriately. Chris Combemale, DMA’s executive director, explained that people are understandably more protective of their personal information, given the well-publicized instances of identity theft, data breaches, and other losses and unauthorized disclosures. He said that organizations that want to gather personal information from individuals have to overcome this justifiable resistance, both when collecting the data and once they have it: “There has to be a clear trade-off in benefits to the consumer in doing so. Companies must also respect the privilege of being handed this data, or else they face the prospect of losing customers.” Considered alongside the findings about data collection by marketers, it would seem that the most valuable benefit companies can offer individuals is trust.
The survey findings and high-level analysis of the results reinforce a theory of interpersonal trust holding that, while mechanisms such as assertions of reputation might instill some confidence in one party about the trustworthiness of another party unfamiliar to them, trust between two parties in a given context is relational, and is developed through repeated acts and occurrences over time. Once such trust does develop, the relational basis is more important to maintaining trust than outside information, as suggested by survey findings that only 3 out of 10 people stopped trusting entities or brands due to negative press reports or even reported data breaches. Overall, the key factors promoting consumers’ trust in companies include direct knowledge of the company, overt security and privacy practices such as website security controls and published privacy policies, and the longevity of the company. All of these factors have a clear correlation with increasing consumer confidence and, more relevant to trust, with reducing consumers’ perceived risk in choosing to share personal data with a given entity. This too is well supported by contemporary research on and conceptions of trust as the willingness to take risks based on the expectation that the trusted party will behave in a way desired by the truster, whether or not the truster has any ability to monitor or control the trusted party’s behavior.
The Office of Management and Budget (OMB) today released a new memo to all heads of executive departments and agencies, “Guidance for Agency Use of Third-Party Websites and Applications,” that lays out a set of general principles for the use of such non-agency sites and resources, and specifically sets new requirements for privacy with respect to these external sites. The memo acknowledges the potential value of social media, interactive online tools, and, by implication, Web 2.0 technologies in general, all of which support the spirit of “transparency, public participation, and collaboration” embodied in the administration’s Open Government Directive.
The new memo applies to all federal agencies and to their use, whether directly or through contractors, of third-party websites or applications to engage with the public. The general message is that agencies may use third-party sites and applications, but when they do, they must comply with the new privacy requirements in the memo as well as any existing requirements. The memo offers general guidance in five areas.
From a privacy perspective, the June 25 memo reminds agencies of their continuing obligations under the Privacy Act, and updates previous guidance issued to agencies on federal website privacy policies and on implementing the privacy provisions (largely in Title II, but including some portions of FISMA too) of the E-Government Act of 2002. Among the most significant new requirements are the need for agencies to perform an adapted Privacy Impact Assessment (PIA) for third-party websites; to update their privacy policies to make sure they provide information about the use of third-party sites and applications; and to post privacy notices on the third-party sites noting the agency’s association with the site, but also clearly stating that the third-party site is not owned or controlled by the government.
With new federal agency FISMA reporting requirements taking effect in November, several agencies are taking steps now to get ahead of the requirements and anticipate some additional security metrics likely to be added in the near future. As reported by Federal News Radio, the Department of Veterans Affairs expects to have monitoring capabilities in place for all desktop computers by September 30, in addition to ongoing efforts to augment network, server, and systems monitoring capabilities. In a widely reported shift in policy and practice, NASA announced its intention to abandon conventional system re-authorization processes in favor of focusing on the new reporting requirements. In addition, the Nuclear Regulatory Commission is evaluating its current tools and monitoring functions to try to determine how to meet the new monitoring requirements. As these and other agencies explore alternative methods and mechanisms for meeting new monitoring requirements, many look to the State Department’s risk scorecard model, which draws data from vulnerability scans, configuration checks, and network management sensors to produce and frequently update an overall score for State’s security posture.
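While the details of State’s scoring algorithm aren’t public in this level of detail, a toy sketch conveys the basic pattern described above: findings from multiple monitoring sources are converted to penalty points and rolled up into a single, frequently refreshed score. The sources, weights, and hostnames below are all invented for illustration.

```python
# Toy continuous-monitoring scorecard in the spirit of the State Department
# model: per-host findings become penalty points, aggregated into one score.
# Categories and weights are assumptions, not State's actual algorithm.

FINDINGS = [
    # (host, data source, severity) -- would come from scanners and sensors
    ("ws-0142", "vulnerability-scan", "high"),
    ("ws-0142", "config-check", "medium"),
    ("srv-007", "network-sensor", "low"),
]

PENALTY = {"low": 1, "medium": 3, "high": 10}  # assumed weighting

def site_score(findings) -> float:
    """Lower is better: average penalty points per monitored host."""
    hosts = {host for host, _, _ in findings}
    total = sum(PENALTY[severity] for _, _, severity in findings)
    return total / len(hosts) if hosts else 0.0

print(f"Current risk score: {site_score(FINDINGS):.1f} points/host")
```

The appeal of this pattern for the agencies named above is that the score can be recomputed as often as the underlying feeds refresh, rather than waiting on a periodic paper-based assessment cycle.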
Instructions in a memo sent on April 21 from OMB to all heads of executive departments and agencies gave notice of the new FISMA reporting approach, which, in addition to requiring electronic submission of data feeds from agency FISMA tools to the government-wide Cyberscope online application, will also involve the establishment of government-wide security benchmarks and agency-specific interviews with officials responsible for security management. Should the administration’s Cybersecurity Coordinator be given budgetary approval authority over agency investments (as proposed in several pieces of security legislation introduced in Congress), these benchmarks may take center stage as agencies not only report on systems security, but also try to justify the effectiveness of their information security management programs. Continuous monitoring is among the many new provisions called for in the House of Representatives’ proposed Federal Information Security Amendments (FISA) Act, which was included via amendment in the defense authorization bill the House passed on May 28, and it is a core process in the revised Risk Management Framework and system certification and accreditation process detailed in NIST Special Publication 800-37 Rev. 1.
Among the areas announced as federal cybersecurity research priorities for the Networking and Information Technology Research and Development (NITRD) program is an initiative intended to promote development of “tailored trustworthy spaces” that not only reflect different requirements associated with different computing uses and contexts, but do so in a flexible way so that the functional and technical provisions for a given trust domain can change or adapt as its underlying requirements change. The basic idea is to describe and express (presumably in electronic, machine-readable form) the security requirements applicable to a given situation and the policies and provisions in place to meet those requirements, so that a prospective user of the space can compare what the space provides against what the user needs in order to “trust” the space.
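One way to picture the idea: if both the space’s provisions and the user’s requirements are expressed in a common machine-readable vocabulary, deciding whether to “trust” the space for a given use reduces to a comparison. The property names and matching rules in this sketch are invented for illustration, not drawn from any NITRD specification.

```python
# Sketch: a "space" publishes its security provisions; a prospective user
# states requirements for a given context; a comparison decides acceptance.
# Property names and semantics are assumptions made for illustration.

SPACE_PROVIDES = {
    "encryption_in_transit": True,
    "audit_logging": True,
    "data_retention_days": 30,   # data deleted after 30 days
}

def acceptable(requirements: dict, provisions: dict) -> bool:
    """True if every stated requirement is met by the space's provisions."""
    for prop, required in requirements.items():
        provided = provisions.get(prop)
        if isinstance(required, bool):
            if required and not provided:   # required capability missing
                return False
        else:
            # numeric requirement, treated here as a maximum (e.g. retention)
            if provided is None or provided > required:
                return False
    return True

# A clinical context might demand more than a casual one:
clinical = {"encryption_in_transit": True, "audit_logging": True,
            "data_retention_days": 90}
strict = {"encryption_in_transit": True, "onsite_key_escrow": True}
print(acceptable(clinical, SPACE_PROVIDES))  # True: 30-day retention within 90
print(acceptable(strict, SPACE_PROVIDES))    # False: escrow not offered
```

The flexibility called for in the research agenda would come from letting either side of this comparison change over time as requirements or provisions change.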
As noted previously, the connotation of the word trusted in a computing context is somewhat different and more limited than in personal, organizational, or sociological contexts, since the idea of a trustworthy system tends to focus on the reliability of the system itself (meaning it performs in the ways it is expected to) and perhaps on the security protections applied to the environment in which it operates. Major technology industry and vendor initiatives such as Microsoft’s Trustworthy Computing reflect this emphasis (although, to be fair, Microsoft does acknowledge the importance of business practices) in the sense that what they purportedly strive to deliver is software and systems that offer sufficient security, privacy, and reliability to avoid reducing the trust in a given operational domain through the introduction of vulnerabilities or poorly performing systems. A broader scope might actually be useful when talking about decisions about whether or not to trust a given system, including what the would-be truster knows about the provider, host, manager, or other users of the system. The shortcomings of this emphasis on the trustworthiness of the system, considered completely independently of whatever entity is running the system (or on whose behalf it is being run), are increasingly relevant for current technical paradigms like cloud computing and in industry-specific contexts such as health information technology. Without specifically addressing the limitations of conventional connotations of trust in computing, the recommendations published by NITRD and its Cyber Security Information Assurance (CSIA) Interagency Working Group look to tailored trustworthy spaces to “establish trust between systems based on verifiable information that test the limits of traditional trust policy articulation and negotiation methods, raising the bar for highly dynamic human understandable and machine readable assured policies.” One way to “test the limits of traditional trust policy” would be to add assertions about the trustworthiness of the providers of a system, presumably with a structured way to express both the assertions (claims) related to trustworthiness and the basis of trust required for different users in different contexts.
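Extending the earlier comparison to cover the provider, not just the system, might look something like the following sketch, in which each trustworthiness claim carries the party vouching for it, so that different users can demand different bases of trust for the same claim. The field names, parties, and evidence identifiers are invented for illustration.

```python
# Sketch: structured claims about a system's *provider*, each tied to the
# party asserting it, so a verifier can require an acceptable basis of
# trust and not merely the claim itself. All values are illustrative.
from dataclasses import dataclass

@dataclass
class TrustClaim:
    subject: str      # who the claim is about (the system's provider)
    attribute: str    # e.g. "independent_security_audit"
    asserted_by: str  # who vouches: "self", an auditor, a regulator
    evidence: str     # e.g. an audit report identifier

claims = [
    TrustClaim("Example Cloud Host", "independent_security_audit",
               asserted_by="Example Audit Firm", evidence="report-2010-17"),
    TrustClaim("Example Cloud Host", "breach_notification_policy",
               asserted_by="self", evidence="published policy statement"),
]

def satisfied(claims, needed_attribute: str, accepted_sources: set) -> bool:
    """A claim counts only if a source the verifier accepts vouches for it."""
    return any(c.attribute == needed_attribute and c.asserted_by in accepted_sources
               for c in claims)

# A cautious verifier insists on third-party attestation:
print(satisfied(claims, "independent_security_audit", {"Example Audit Firm"}))  # True
print(satisfied(claims, "breach_notification_policy", {"Example Audit Firm"}))  # False: self-asserted only
```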
It’s fair to pose questions such as whether this and other “game-changing” research priorities are really the best use of billions of federal information security dollars, but the potential impact of extending the technical ability to establish and manage computing across multiple trust domains is much broader than systems security and privacy protections alone. As the government and industry wrestle with challenges like how to get health care providers to adopt, and the public to trust their use of, electronic health records and related technology, whether government agencies and commercial companies can trust public cloud providers, and how to satisfy varying international privacy laws while improving anti-terrorism efforts, the vision for tailored trustworthy spaces seems to offer a lot of potential. For addressing these and other challenges, the most interesting of the areas identified for future research include “trust negotiation tools and data trust models to support negotiation of policy” and “data protection tools, access control management, monitoring and compliance verification mechanisms to allow for informed trust of the entire transaction path.” Concerns about the ability to perform such negotiation, monitoring, and verification activities (at least in efficient ways) are routinely cited in contemporary online communication and information exchange contexts, and as these increasingly involve multi-lateral interactions among different types of organizational entities with different needs, biases, and risk tolerances, the absence of effective mechanisms to evaluate trustworthiness and negotiate acceptable parameters of a trust relationship will remain a barrier to success.
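As a final illustration of what “negotiation of policy” might mean in practice, the sketch below has two parties each publish acceptable ranges for the parameters of an exchange; negotiation succeeds only where those ranges intersect. The parameters and values are invented for illustration and are not drawn from any of the research proposals cited above.

```python
# Sketch: trust negotiation as range intersection over policy parameters.
# Each party states an acceptable (low, high) range; agreement requires
# overlap on every shared parameter. Parameter names are assumptions.

def negotiate(mine: dict, theirs: dict):
    """Return agreed parameter ranges, or None if any parameter cannot overlap."""
    agreed = {}
    for param in sorted(mine.keys() & theirs.keys()):
        low = max(mine[param][0], theirs[param][0])
        high = min(mine[param][1], theirs[param][1])
        if low > high:
            return None  # no mutually acceptable value for this parameter
        agreed[param] = (low, high)
    return agreed

requester = {"audit_retention_days": (90, 365), "session_timeout_min": (5, 30)}
responder = {"audit_retention_days": (180, 730), "session_timeout_min": (10, 60)}
print(negotiate(requester, responder))
# {'audit_retention_days': (180, 365), 'session_timeout_min': (10, 30)}
```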