A lawsuit filed last month by the Rhode Island chapter of the American Civil Liberties Union (ACLU) against the state Department of Health (DOH) charges that the rules the state issued for its planned health information exchange (HIE) provide insufficient patient privacy protection. In particular, the suit alleges that the rules fall short of the requirements of a state law, the Rhode Island Health Information Exchange Act of 2008, and violate the state's Administrative Procedures Act because they fail to fully address the implementation and enforcement of the HIE Act's provisions. This is a relatively unusual case in that it is not the privacy provisions of the law itself that are being challenged, but the rule-making process on which implementation of those provisions depends. In its complaint, the RI-ACLU argues that the DOH's rulemaking process was flawed, specifically because questions and concerns raised in detailed comments submitted to the DOH by the RI-ACLU's executive director were never addressed. The RI-ACLU also apparently believes that the DOH's decision to address some of the HIE Act's provisions with policies, rather than rules or regulations, is insufficient to meet the department's obligations under the Administrative Procedures Act.
The provisions of the HIE Act that the RI-ACLU deems insufficiently covered in the DOH's rules relate specifically to "adoption of regulations on certain specific issues to further promote the confidentiality, security, due process and informed consent due the affected patients." The RI-ACLU has criticized the DOH for suggesting that general policy statements are enough to satisfy the law's requirements, and for attributing the lack of more specific regulations to the difficulty it has encountered in trying to resolve the privacy, security, and consent issues associated with health information exchange. In stressing the importance of patient health data privacy and confidentiality protections as a way to garner public support for the state's HIE initiative, the RI-ACLU echoes a frequent refrain among policy makers: public trust is essential to the success of health IT initiatives such as electronic health records (EHRs) and HIEs, and strong security, privacy, and consent provisions are the best way to engender that trust. Writing specifically about the Rhode Island lawsuit, HealthcareInfoSecurity's Howard Anderson noted: "To succeed, any HIE in any state needs to build public trust that the information it exchanges will remain private. And if states or HIEs fail to spell out detailed privacy rules and regulations, it will be difficult to develop that trust." It is hard to argue with that logic, but states need to recognize that no matter how strong the legal requirements they enact to protect patient privacy, security and privacy regulations and controls alone are an insufficient basis for individuals to establish the trustworthiness of EHRs, HIEs, the entities that provide these services, or the people who use them to access personal health information. As states focus on transparency, they should ensure that consumers receive complete and accurate information about the parties they are asked to entrust with their information, including those parties' intended uses of health information, their business or mission interests, and their current and past record of protecting data under their control.
Despite the increased attention focused on the rollout of the federal government's Trusted Internet Connections (TIC) initiative since a March 2010 GAO report highlighted the slower-than-expected progress most agencies have made in consolidating their Internet connections as TIC requires, it seems that few agencies are well positioned to meet the January 31, 2011 deadline by which they are supposed to have all of their external Internet connections behind an access point provided by a certified Managed Trusted Internet Protocol Services (MTIPS) vendor. Among the factors currently influencing agencies' progress is the limited number of certified MTIPS providers under the Networx contract, which agencies are obligated to use when procuring external Internet connectivity in compliance with TIC.
Not all agencies depend on third-party vendors to establish trusted Internet points of presence; many of the largest agencies have sufficient infrastructure and capabilities to handle MTIPS themselves. Smaller agencies are, generally speaking, more likely to need to look to Networx vendors to satisfy this requirement, and it appears that some are holding off on procurement actions until currently certified MTIPS vendor AT&T gets some company. Federal agency procurements of contracted services such as Internet connectivity are ordinarily required (under Part 6 of the Federal Acquisition Regulation) to use competitive bidding and proposal processes, further constraining agencies that would prefer not to make sole-source procurements, even with proper justification. In the current situation, OMB continues to urge agencies to move forward, apparently without taking into account the dependency created by the General Services Administration's approval of additional Networx MTIPS vendors.
It's not without some irony that some of the most strident early objections to the administration's recently released draft National Strategy for Trusted Identities in Cyberspace (NSTIC) focus on the extent to which the government itself can be trusted to hold a central repository of identity information on citizens. That question is quite distinct from the standards by which individuals and organizational entities determine the trustworthiness of others to whom they disclose personal information: to realize the purported benefits of a system that promises to free individual users from keeping track of many passwords and versions of digital identities online, people have to be willing to cede the maintenance of those digital identities to the government (or to whatever entity might operate the "identity ecosystem" on the government's behalf). Part of the point of any user-centric, claims-based approach to identity management is giving users the ability to disclose only the minimum information necessary for a given purpose, such as completing a specific transaction, which increases privacy protection and places control over disclosure in the hands of users.
To make such a system work, it is important not only that the entities relying on claims provided to them are able to specify exactly what assertions they need, but also that the assertions, when provided, are valid and in some way certified or augmented with information about the issuer of the claim (especially where the issuer providing the claim is not the user who is the subject of the claim). An entity can only rely on claims presented to it if those claims are credible, a problem the NSTIC intends to address through an accreditation process by which claims issuers would be designated as trustworthy (although the details of the basis for such a determination are not part of the draft Strategy document). It seems most likely that some government authority will serve as the root of trust in the identity ecosystem, which, if true, would mean the integrity (and ultimate success) of the whole concept depends on the government being seen as trustworthy. This assumption may prove more problematic than the Strategy implies, given the relatively low and declining levels of trust citizens report having in government in general (cf. Hardin, 2006; Putnam, 2000, among others), although perceptions vary quite a bit with respect to specific agencies or institutions.
References:
Hardin, R. (2006). Trust. Cambridge, England: Polity Press.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. New York, NY: Simon & Schuster.
A closer look at the structure of the administration's newly released National Strategy for Trusted Identities in Cyberspace (NSTIC) makes clear that while the "identity ecosystem" envisioned in the Strategy offers anticipated benefits both for individual citizens and for government and commercial entities, the nature of those benefits is very different, and those differences reflect different implied requirements for trust in online interactions. When the parties to an online transaction are an individual and an organizational entity, "trust" means something different with the organizational entity as truster and the individual as trustee than it does with the individual as truster and the organizational entity as trustee. These differences in the basis on which each party considers the other trustworthy (or at least trustworthy enough to conduct a transaction or other interaction) derive from the distinct interests each party has in the relationship, the information each needs to develop trust in the other, and the risk each party faces by deciding to act on that trust.
This simple analysis follows Mayer, Davis, and Schoorman (1995) and many other scholars (Johnson-George & Swap, 1982; Luhmann, 1988; Gambetta, 1988) in defining trust as a willingness to take risk, where the risk comes from one party making itself vulnerable to the actions of another in the expectation that the other will behave in the way the trusting party desires. Context is also essential: trust is a three-part relation involving a truster, a trustee, and a purpose or scope to which the relationship applies, so that party A trusts party B to do X (Baier, 1986; Hardin, 1993).
The primary benefits for an individual participant in the identity ecosystem are a reduction in the number of separate online credentials (such as usernames and passwords) that must be created, maintained, and recalled when needed, and greater control over the disclosure of personal information, which enhances privacy by limiting the information disclosed in any given interaction to the specific set of attributes the entity providing the product or service requires. In theory, the governance plan for the identity ecosystem could also give individuals more confidence (via the "trustmark" issued to accredited entities) that they are interacting with the entities they actually intend to, but in general this assurance will be limited to the identity of the entity; the extent to which that entity can be trusted by the individual may depend as much or more on prior experience, existing relationship dynamics, or knowledge developed out of band. To the extent that the organizational entities are service providers such as government agencies, financial institutions, health care providers, or e-commerce businesses, individuals may seek little more than identity verification and, perhaps, privacy policies, terms of service, or other assertions about how their personal information will be handled.
In contrast, the sorts of organizational entities most likely to participate in the identity ecosystem will be most concerned with verifying the identity of individuals requesting services or engaging in online transactions, as user authentication (with appropriate identity proofing) is often all that is required to make authorization decisions. Representative government scenarios fitting this description include enrollment in entitlement programs and receipt of the benefits associated with those programs; submission of legally required information such as tax returns; renewal of personal or business licenses; and provision of services or products offered to citizens, anything from campsite reservations in federal parks to Treasury bills. For authorization decisions, such as whether the individual identified and authenticated in a transaction is actually eligible to receive the information, product, or service being offered by the government, the entity may, depending on the nature of the transaction, request a set of information (attributes) as part of the request that can be used to authorize the transaction, or may cross-reference the identity information presented by the requester with additional attributes maintained by the organization itself or by a third party. For example, when a U.S. citizen enrolls in Medicare, the government requests the individual's social security number as part of the application process and then validates the SSN with the Social Security Administration, not only to make sure the number itself is valid, but also to retrieve attributes such as the individual's date of birth and citizenship status, which are needed to determine eligibility for Medicare and to validate the information submitted by the applicant.
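To make the cross-referencing step concrete, here is a minimal sketch in Python of how a relying party might combine attributes asserted by an applicant with attributes retrieved from an authoritative third party before authorizing a transaction. The names and data are hypothetical, the eligibility rule is greatly simplified, and the lookup function merely stands in for whatever real interface an attribute provider such as the SSA would expose.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Applicant:
    name: str
    ssn: str
    claimed_birth_date: date

# Stand-in for a query to an authoritative attribute provider (the SSA in the
# Medicare example); a real integration would look nothing like this dictionary.
def lookup_ssn_attributes(ssn: str) -> dict:
    records = {
        "123-45-6789": {"birth_date": date(1944, 3, 14), "citizen": True},
    }
    record = records.get(ssn)
    if record is None:
        raise ValueError("SSN is not valid")
    return record

def eligible_for_medicare(applicant: Applicant, as_of: date) -> bool:
    """Authorization decision: cross-reference the applicant's asserted attributes
    against the authoritative source, then apply a (simplified) eligibility rule."""
    authoritative = lookup_ssn_attributes(applicant.ssn)
    if authoritative["birth_date"] != applicant.claimed_birth_date:
        return False  # asserted attribute does not match the attribute provider
    if not authoritative["citizen"]:
        return False
    birth = authoritative["birth_date"]
    age = as_of.year - birth.year - ((as_of.month, as_of.day) < (birth.month, birth.day))
    return age >= 65

applicant = Applicant("Jane Doe", "123-45-6789", date(1944, 3, 14))
print(eligible_for_medicare(applicant, date(2011, 7, 1)))  # True
```

The point of the sketch is simply that the authorization decision rests on attributes the relying party obtains or verifies itself, not on anything contained in the authenticated identity alone.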
For commercial service providers, proof of identity is also commonly a sufficient basis to complete a transaction, whether that transaction involves access to information about an individual, such as an insurance explanation of benefits, or the purchase of a product from an online e-commerce site. Note that in the case of e-commerce, the vendor is typically concerned with authenticating customers only insofar as is necessary to make the vendor reasonably confident that the payment commitment from the customer is valid and not fraudulent. An e-commerce vendor rarely makes any independent assessment of the trustworthiness of a customer, instead relying on third parties such as credit card issuers to validate the attributes the customer asserts. The reasoning here is simple: the vendor's primary interest is being paid for the products or services it provides, so the claims it requires from customers in order to complete a transaction are those associated with verifying that payment will be received. Where the direction of the information flow is reversed, that is, when individuals are providing personal information to organizational entities in either public or private sector contexts, there may be a greater need to establish the trustworthiness of the vendor, at least in terms of the safeguards, policies, and commitments the entity has in place for securing and protecting the privacy of the information disclosed to it. To engender the sort of trust needed to support these types of interactions, the standards by which entities are accredited under the NSTIC framework will need to include information that allows individuals to make a determination of the entities' trustworthiness, especially for entities with which the individuals have no prior relationship. This may be somewhat easier to achieve in the public sector, since in many cases there is only one agency or organization able (or authorized) to provide the product or service in question. In contexts where information sharing or disclosure is an anticipated outcome, such as health care, individuals can and should require a higher threshold for the trustworthiness of the organizations to which they provide information.
References:
Baier, A. (1986). Trust and antitrust. Ethics, 96(2), 231-260.
Gambetta, D. (1988). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213-237). Oxford, England: Basil Blackwell.
Hardin, R. (1993). The street-level epistemology of trust. Politics & Society, 21(4), 505-529.
Johnson-George, C., & Swap, W. C. (1982). Measurement of specific interpersonal trust: Construction and validation of a scale to assess trust in a specific other. Journal of Personality and Social Psychology, 43(6), 1306-1317.
Luhmann, N. (1988). Familiarity, confidence, trust: Problems and alternatives. In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 94-107). Oxford, England: Basil Blackwell.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. The Academy of Management Review, 20(3), 709-734.
Last week, the White House released a draft of its new National Strategy for Trusted Identities in Cyberspace (NSTIC), which is intended to create a so-called "identity ecosystem" in which individuals, organizations, and other entities rely on authoritative sources of their digital identities to enable trust in online interactions. The document was published through the Department of Homeland Security, one of the many agencies and industry participants that collaborated on the Strategy, and it includes many key elements that were called out as future action items in the administration's Cyberspace Policy Review, subtitled "Assuring a Trusted and Resilient Information and Communications Infrastructure."
Trust in the NSTIC context extends only to the identity of the parties to an online interaction. While confidence in the validity of an asserted identity may help the parties make decisions about whether to engage in the interaction in question, the identity ecosystem envisioned in the Strategy provides an insufficient basis for establishing the trustworthiness of the entities themselves, although it does allow different participants to specify the different sets of attributes about individuals they will require in order to make authentication and authorization decisions.
With this in mind, it’s important not to confuse trust in an entity’s identity with trust in the entity itself. To engender trust in the entities, identity verification is necessary, but what is also needed is a clear explanation of the criteria that underlie the issuance of any credential presented to validate the identity, understanding that such criteria can and likely should vary depending on the context of the interaction. In the same vein, one of the things the Strategy makes clear is how important it is to separate the concepts (often spoken about in the same breath) of identification, authentication, and authorization. In general, an identity credential provider performs identity proofing (such as checking ID or other documentation if the identity proofing happens in person) and binds an individual identity to a digital representation, such as a certificate or other form of token, but often does not provide any information about what permissions the individual should have. These authorization decisions are entirely separate from identification and authentication, although identification and authentication are often prerequisites for granting authorization. This means that when considering authorization, an individual or entity evaluating the credentials presented should understand whether the issuance of those credentials took into account anything that informs the authorization decision. In the identity ecosystem as described, such consideration involves both the identity provider that establishes the digital identity, and the attribute provider that maintains and asserts characteristics or information associated with the identity.
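The separation the Strategy draws among these concepts can be illustrated with a short sketch. The class and function names below are purely illustrative and are not defined anywhere in the Strategy; the point is only that the credential issued after identity proofing carries no permissions, and that the authorization decision is made separately by the relying party against its own policy.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Credential:
    # Binds a proofed identity to a digital token; note that it carries no
    # permissions at all. Authorization happens elsewhere.
    subject: str
    token: str

@dataclass
class IdentityProvider:
    issued: dict = field(default_factory=dict)

    def issue_credential(self, subject: str, documents_checked: bool) -> Credential:
        """Identity proofing plus credential issuance; says nothing about what
        the subject is allowed to do."""
        if not documents_checked:
            raise ValueError("identity proofing failed")
        credential = Credential(subject=subject, token=secrets.token_hex(16))
        self.issued[credential.token] = subject
        return credential

    def authenticate(self, credential: Credential) -> bool:
        """Authentication only confirms the token maps back to the proofed identity."""
        return self.issued.get(credential.token) == credential.subject

def authorize(subject: str, action: str, policy: dict) -> bool:
    """Authorization is a separate decision the relying party makes against its
    own policy or attribute sources, not something embedded in the credential."""
    return action in policy.get(subject, set())

idp = IdentityProvider()
cred = idp.issue_credential("alice", documents_checked=True)
policy = {"alice": {"view_benefits"}}
print(idp.authenticate(cred))                       # True: identity confirmed
print(authorize("alice", "view_benefits", policy))  # True: permitted by policy
print(authorize("alice", "file_claim", policy))     # False: authenticated but not authorized
```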
This idea of entities requiring differing amounts of information (attributes) about each other depending on the context is one of several fundamental characteristics of claims-based identity management, a topic we've weighed in on before. The draft Strategy document embodies many of the principles of claims-based identity management, most importantly the user-centric focus of the approach: "The Identity Ecosystem protects anonymous parties by keeping their identity a secret and sharing only the information necessary to complete the transaction. For example, the Identity Ecosystem allows an individual to provide age without releasing birth date, name, address, or other identifying data. At the other end of the spectrum, the Identity Ecosystem supports transactions that require high assurance of a participant's identity."
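As a rough illustration of the minimal-disclosure idea in the passage above, the sketch below uses hypothetical names and data (it is not an implementation of any mechanism in the Strategy): an attribute provider releases only the derived claims a relying party asks for, so a site that needs an age assertion never sees the underlying birth date.

```python
from datetime import date

# The attribute provider holds the full record but releases only derived,
# minimal claims; the relying party never sees the birth date itself.
FULL_RECORD = {
    "name": "Jane Doe",
    "birth_date": date(1990, 6, 1),
    "address": "1 Main St, Providence, RI",
}

def requested_claims(record: dict, wanted: list, as_of: date) -> dict:
    """Return only the claims the relying party asked for, derived from the
    underlying data rather than releasing the identifying data itself."""
    derivations = {
        # Crude age check, good enough for illustration.
        "age_over_18": lambda r: (as_of - r["birth_date"]).days >= 18 * 365.25,
    }
    claims = {}
    for name in wanted:
        if name in derivations:
            claims[name] = derivations[name](record)
        # Anything not derivable (or not consented to) simply is not released.
    return claims

# A site that only needs an age assertion asks for exactly that and nothing more.
print(requested_claims(FULL_RECORD, ["age_over_18"], date(2011, 7, 1)))
# {'age_over_18': True}
```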
As a simple real-world example, when an individual presents a driver's license to a TSA agent at an airport security checkpoint, assuming the license itself is authentic, the agent can assume very little beyond the name on the license and the fact that, at some time between the date it was issued and now, the person resided in the state that issued it and was at that time a U.S. citizen or legally resident alien. This information is insufficient to determine with any real confidence whether the bearer of the license is a good person or a bad one, whether his or her intentions are benign or malicious, or generally whether the person is trustworthy. Context is important here too: validating an individual's identity in this manner is sufficient for the TSA's purposes, but it would be wholly insufficient for, say, a bank deciding whether to give the person a car loan. Performing a credit check in addition to verifying identity gives the loan officer more information about the individual's financial standing, which is what the bank is most concerned about, but even with this additional information it would be a mistake to say the individual has been shown to be trustworthy in any context other than the immediate one. The bank officer might now understand that the individual should have the resources to repay the loan, and have some confidence about his or her likelihood of honoring commitments to repay debts, but the information presented cannot be used to assert the trustworthiness of the individual in the sense of saying he or she won't take the new car and use it as the getaway vehicle in a robbery later that day (even of the same bank!).
It is good to see that the Strategy acknowledges the importance of accrediting identity providers, attribute providers, and relying parties, so that those who transact with these entities have some degree of confidence in their identity and authenticity. However, the explanation of the functions of the governing authority and accrediting authority in the Governance Layer section provides too little detail about the criteria that will be used to accredit entities for particular types of transactions or interactions. Given the long history of data breaches resulting from access incorrectly granted to entities or from unauthorized actions by entity employees (ChoicePoint, LexisNexis, etc.), it is essential that the accreditation process be robust enough to guard against entities misrepresenting themselves in order to receive accreditation, and that accreditation criteria include validation (not self-assertion) of appropriate security and privacy practices. Only with sufficient rigor behind the accreditation of identity and attribute providers will individuals and relying parties be able to make some determination of the trustworthiness of the entities with which they interact online.
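To show why accreditation matters mechanically, here is a minimal sketch assuming a simple shared-key signature scheme and an accredited-issuer registry, both of which are placeholders since the Strategy does not specify mechanisms: a relying party accepts a claim only if the issuer appears in the registry and the signature verifies, so claims from an unaccredited or misrepresenting issuer are rejected outright.

```python
import hashlib
import hmac
import json

# Hypothetical registry of accredited claim issuers and their signing keys; in the
# identity ecosystem this role would be filled by the accrediting authority.
ACCREDITED_ISSUERS = {
    "dmv.example.gov": b"issuer-signing-key",
}

def sign_claim(issuer: str, subject: str, attributes: dict, key: bytes) -> dict:
    """Bind the claim payload to its issuer with an HMAC signature."""
    payload = json.dumps(
        {"issuer": issuer, "subject": subject, "attributes": attributes},
        sort_keys=True,
    )
    signature = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_claim(claim: dict) -> dict:
    """Rely on a claim only if its issuer is accredited and the signature verifies."""
    payload = json.loads(claim["payload"])
    key = ACCREDITED_ISSUERS.get(payload["issuer"])
    if key is None:
        raise ValueError("issuer is not accredited; claim rejected")
    expected = hmac.new(key, claim["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, claim["signature"]):
        raise ValueError("signature mismatch; claim rejected")
    return payload["attributes"]

# A claim from an accredited issuer is accepted; anything else is refused.
claim = sign_claim("dmv.example.gov", "user-123", {"licensed_driver": True},
                   ACCREDITED_ISSUERS["dmv.example.gov"])
print(verify_claim(claim))  # {'licensed_driver': True}
```

Of course the hard part is not the signature check but the rigor of the process that decides which issuers get into the registry in the first place, which is exactly the detail the draft Strategy leaves open.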