Clearer definitions and roles for intermediaries would facilitate policy decisions on directed exchange of health data

During today’s meeting of the Health IT Privacy and Security tiger team, the bulk of the discussion centered on draft recommendations for message handling in what the group calls “directed exchange,” in which health care providers send patient data to other providers in a point-to-point communication. This type of exchange comes into play in many of the use cases intended to be satisfied by the NHIN Direct pilot project, so part of the discussion focused on the appropriate policy declarations that should be made about directed exchanges of health data, including among NHIN Direct participants. One of the primary concerns of the group is establishing the right policy and technical mechanisms to minimize the exposure of protected health information (PHI) sent as part of these directed exchanges.

To guide this discussion, the draft recommendations identify four categories of exchange, differentiated by the presence or absence of an “intermediary” handling the messages in the exchange and by the role of any such intermediary, with particular scrutiny on how much access the intermediary would have to PHI within the message contents. The four categories the tiger team is using are:

  1. No intermediary involved (exchange is direct from point A to point B).
  2. Intermediary only performs routing and has no access to unencrypted PHI (the message body remains encrypted and the routing information does not identify the patient).
  3. Intermediary has access to unencrypted PHI (i.e., patient is identified) but does not change the message body, either the format or the data.
  4. Intermediary opens message and changes the message body (format and/or data).

Not far into this point on the agenda, the discussion devolved a bit into a debate over just what an “intermediary” is, and whether it would be good policy to state that no intermediary should have access to PHI at all under directed exchanges of health information. Clearly no consensus exists among the group on the definition of intermediary, but from the perspective of looking at solution elements proposed for NHIN Direct, a key question (not yet definitively answered) is whether a service used by a doctor, such as a hosted EHR or message packaging (including encryption) and delivery by a health information service provider (HISP), introduces an intermediary into the equation. It might simplify the discussion on the technology, policy, and legal compliance fronts to just assume that all exchanges involve intermediaries, and to focus on whether existing legal requirements such as those in the HIPAA Security and Privacy Rules would apply to those intermediaries, and therefore mitigate at least some of the concerns about the ability to see or process PHI.

For the sake of argument, it might make sense to say that when your data passes from one organizational entity to another, the involvement of any entity other than the sender and receiver means there is an intermediary. With respect to the basic categories of message handling for directed exchange, it seems to make little sense to include the first category (“no intermediary”) at all if we are talking about data exchange over the Internet or other public or common carrier-furnished communications infrastructure. Perhaps the no-intermediary case would apply within a secure domain (in the IHE sense of the term) such as an organizational LAN or WAN or a company-owned private network. We would argue that neither server-to-server nor desktop-to-desktop secure communication channels (such as TLS) really remove intermediaries (ISPs, backbone infrastructure providers) from the communication, but with such a secure channel in use there should be no concerns about the intermediaries getting any access to the data — including PHI — that flows across the connection. If we can agree that, for all intents and purposes, there is always some sort of intermediary involved, then we can shift the discussion to where it ought to be — the extent to which intermediaries can access data in the messages, especially if they contain PHI.
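
To make this concrete, here is a minimal sketch in Python of the kind of mutually authenticated secure channel described above; the host name, port, and certificate file names are hypothetical placeholders, not references to any actual NHIN endpoint.

```python
import socket
import ssl

# Hypothetical receiving endpoint and credential files (placeholders).
PEER_HOST = "ehr.receiving-practice.example"
PEER_PORT = 8443

# Verify the receiver's certificate against a CA we trust.
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     cafile="trusted_ca.pem")
# Present our own certificate so the channel is mutually authenticated.
context.load_cert_chain(certfile="sender_cert.pem", keyfile="sender_key.pem")

with socket.create_connection((PEER_HOST, PEER_PORT)) as sock:
    with context.wrap_socket(sock, server_hostname=PEER_HOST) as tls:
        # Anything written here crosses the ISPs and backbone providers
        # (the ever-present intermediaries) only as ciphertext.
        tls.sendall(b"<clinical summary omitted>")
```

The protection here is at the channel level: the intermediaries still carry the traffic, they just cannot read it.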

The second option in the list above is a pretty standard use case for mutually authenticated point-to-point communication channels such as TLS, but it could also apply to unsecured channels where message (payload) encryption is involved. This distinction is important insofar as directed exchange using SMTP is intended to be an option. The third option is similar to the second, but instead of encrypting or otherwise packaging the message at the point of origin, the intermediary (such as a HISP) performs encryption and decryption processing on behalf of the parties to the exchange. This option is favored by some working on NHIN Direct to save providers from having to install and use encryption technology locally, and also to help simplify digital certificate management by using the HISPs as the boundary for the public key infrastructure established to enable secure, authenticated exchange. In this third category it seems logical that the intermediary would fit the definition of a business associate under HIPAA, as the intermediary would be an “entity that performs certain functions or activities that involve the use or disclosure of protected health information on behalf of, or provides services to, a covered entity.” To be fair, nothing in the legal definition under HIPAA explicitly includes functions like encryption or message routing, but it does include functions such as “data analysis, processing, or administration” and other generic services such as data aggregation, management, administrative, and accreditation services (45 CFR §160.103). Covered entities were already responsible for the compliance of their business associates with HIPAA safeguards, and under HITECH the HIPAA security and privacy rules apply directly to business associates, so even the temporary exposure of such intermediaries to the contents of messages (before the contents are encrypted for transmission) should not raise any special privacy concerns unless the parties believe that the constraints applicable to business associates are insufficiently robust.
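
To make the contrast between the second and third options concrete, here is a hedged sketch in Python (using the third-party cryptography package) of payload encryption performed at the point of origin, as in the second option; under the third option the same logic would run at the HISP rather than in the provider's own environment. The key file and message contents are invented for illustration.

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Load the recipient's public key (how it is discovered -- certificate
# directory, DNS, or manual exchange -- is out of scope for this sketch).
with open("recipient_pub.pem", "rb") as f:
    recipient_key = serialization.load_pem_public_key(f.read())

def encrypt_payload(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    """Hybrid encryption: AES-GCM for the body, RSA-OAEP to wrap the key."""
    aes_key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    wrapped_key = recipient_key.encrypt(
        aes_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    return wrapped_key, nonce, ciphertext

# Only the recipient's private key can unwrap aes_key, so an intermediary
# that merely routes the message (the second option) handles ciphertext only.
wrapped_key, nonce, body = encrypt_payload(b"clinical document with PHI ...")
```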

A similar logic applies to the fourth option, although in this case, because the intermediary is explicitly processing the contents of the message, the intermediary would be considered a health care clearinghouse and therefore a HIPAA-covered entity (potentially in addition to being a business associate of the parties to the exchange). This means it would be in the intermediary’s own interest to guard against unauthorized disclosure, as the full set of HIPAA requirements applies when it accesses, changes, or otherwise discloses PHI. In recent weeks, the Health IT Policy Committee’s Privacy and Security Policy Workgroup has recommended policies in other contexts (notably including consent) that would require no additional protective measures beyond what current law requires. If the definition and role of “intermediary” in the various directed exchange patterns were more clearly specified, it would be easier to identify areas of security or privacy concern that are already addressed by current legal requirements, and also to highlight any gaps that might demand new policy statements.

Data encryption for HIE sounds obvious; not so simple to implement

One of the early themes that has emerged from the initial discussions of the Office of the National Coordinator’s privacy and security tiger team is the need for stronger protection of the confidentiality and privacy of health data exchanged between entities — whether in a point-to-point exchange model such as NHIN Direct’s or a multiparty exchange environment such as NHIN Exchange — and the call for the use of content encryption to afford that protection. This near-consensus recommendation follows from the recent work of the Health IT Policy Committee and its Privacy and Security Policy Workgroup, which resulted in recommendations for encryption of patient data whenever it is exchanged. (Side note: The tiger team was organized as a workgroup under the Policy Committee, although its membership includes people from the Health IT Standards Committee and the National Committee on Vital and Health Statistics (NCVHS); it is co-chaired by Deven McGraw and Paul Egerman, both of whom are Policy Committee members who lead standing workgroups.) The emphasis in this recommendation, and something of a departure from past precedent, is on encryption of the contents or payload of messages exchanged using health information exchange (HIE), alone or in addition to the transport-level encryption (such as SSL or TLS) already specified for secure exchange among NHIN participants. During the May 19 Policy Committee meeting, McGraw presented a set of recommendations from the Privacy and Security Workgroup that were considered from the perspective of what a reasonable patient would expect. One area of emphasis on which the Privacy and Security Policy Workgroup is continuing discussion is patient consent, but the tiger team latched on to content encryption when it began discussing ways to maintain privacy when electronic health records or other protected health information is exchanged electronically.

The tiger team recognizes that even when health information exchange is limited to a transmission between two entities, there may be several ways of technically enabling the communication, many of which involve the use of intermediaries such as health information service providers (HISPs) that, depending on the nature of the exchange, may or may not have a need to examine the messages flowing through them on their way from sender to receiver. To the extent that such intermediaries may be performing functions outside the realm of what would put them under the HIPAA definition of business associate, the Health IT Policy Committee and the tiger team members are concerned that current legal requirements for the protection of health data may not apply to the intermediaries. One way to mitigate such concerns is to render the data unreadable to intermediaries, which in general means encrypting it. The discussion this month has been informed by the work on the NHIN Direct project (and participation in the tiger team meetings by some NHIN Direct team members), which has raised the issues of end-to-end content encryption and separating the potentially PHI-containing message content from the header data or other metadata needed to successfully route the message to its destination. There remains some debate as to whether such content encryption should be a mandatory requirement, or should remain “addressable” as it is under the HIPAA Security Rule.
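
A rough sketch of that separation, using Python’s standard email module: the routing metadata (addresses, subject) stays readable so intermediaries can deliver the message, while the body carries only ciphertext, such as the output of a payload-encryption step like the one sketched earlier. The Direct-style addresses are invented for illustration.

```python
from email.message import EmailMessage

# Hypothetical provider-level addresses; note that the headers identify
# endpoints, not the patient.
msg = EmailMessage()
msg["From"] = "dr.sender@direct.clinic-a.example"
msg["To"] = "dr.receiver@direct.hospital-b.example"
msg["Subject"] = "Clinical document"  # deliberately content-free

# The body is opaque ciphertext (e.g., wrapped key + nonce + encrypted
# document), so any SMTP relay or HISP in the path can route the message
# without ever seeing PHI.
encrypted_payload = b"..."  # produced by the earlier encryption sketch
msg.set_content(encrypted_payload, maintype="application",
                subtype="octet-stream")

wire_bytes = bytes(msg)  # handed to the (possibly unsecured) transport
```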

One argument in favor of mandating the use of encryption is the technical feasibility of such an approach. By applying Web Services Security standards, particularly including SOAP Message Security, solution developers and implementers have a lot of flexibility to separate message contents from message envelope information and to protect each separately. The real challenge lies not in separating routing data from payload, or in enabling content (or full-message) encryption, but in choosing an encryption model that makes encryption possible without imposing barriers to interoperability. Perhaps obviously, there is no value in encrypting data in transit if the recipient cannot decrypt the message, but the sort of public key infrastructure used for NHIN Exchange is not necessarily a viable approach for a solution like NHIN Direct. The use of digital certificates for encryption in health information exchange has been recommended for NHIN Direct, but because NHIN Direct will not rely on a central certificate authority, there will need to be provisions for managing and evaluating certificates from multiple issuers, potentially representing different “trust domains” to which a given exchange participant might belong.
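
As an illustration of what evaluating certificates from multiple issuers might involve, here is a hedged sketch in Python (again using the cryptography package; verify_directly_issued_by requires version 40 or later). The participant keeps one trusted CA certificate per trust domain and accepts a sender’s certificate if any of those anchors issued it; the file names and trust domains are invented, and a real implementation would also check validity periods, key usage, and revocation status.

```python
from cryptography import x509
from cryptography.exceptions import InvalidSignature

def load_cert(path: str) -> x509.Certificate:
    with open(path, "rb") as f:
        return x509.load_pem_x509_certificate(f.read())

# Hypothetical anchors: one CA certificate per trust domain the
# participant belongs to (e.g., a state HIE and a HISP federation).
TRUST_ANCHORS = [load_cert("state_hie_ca.pem"), load_cert("hisp_fed_ca.pem")]

def issuer_in_trust_domains(leaf: x509.Certificate) -> bool:
    """Accept the sender's certificate if any trusted CA issued it."""
    for ca in TRUST_ANCHORS:
        if leaf.issuer == ca.subject:
            try:
                # Verifies the CA's signature over the leaf certificate.
                leaf.verify_directly_issued_by(ca)
                return True
            except (InvalidSignature, ValueError):
                continue
    return False
```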

As the NHIN Direct members have discussed in the past, there are ways to do this without full-scale PKI and all of the key distribution and management overhead that comes with such an infrastructure. That potential aside, no one should underestimate the significance of the tasks of establishing, managing, and overseeing the certificates and supporting services necessary to facilitate end-user encryption and decryption among health information exchange participants (to say nothing of integrating such capabilities into end-user electronic health record systems, transaction gateways, web services, SMTP clients, or other messaging tools). It certainly helps that multiple technical alternatives are incorporated within available open standards and that many health IT product vendors support these standards, but there is a great deal of additional processing and management required to accommodate pervasive use of content encryption. The complexity of such a solution may explain why, to date, only transport-level encryption is used for the NHIN, and the only cryptographic protection applied within the payload is the digital signing of SAML assertions included within the SOAP messages exchanged via the NHIN. The use cases envisioned for NHIN Direct are different from those for the NHIN in general, particularly with respect to transport encryption, which is required for the NHIN but may not be in place for all possible transport mechanisms that might be supported by NHIN Direct.

Trusted computers are reliable, but that’s not the same thing as trustworthy

Trust in a security context normally means reliability or, in the identification and authentication context, authenticity. When the term trusted is applied to a system or capability, the same connotation carries over — that is, a trusted system is one that can be relied upon to perform as expected or intended. While these are valuable attributes for a system, and are characteristics that security controls can go a long way toward providing, a reliable system is only trustworthy to the extent that it is used properly, and trusted computing standards do not provide any direct assurances of this nature. For example, the WS-Trust web services security specification defines trust as “the characteristic that one entity is willing to rely upon a second entity to execute a set of actions and/or to make a set of assertions about a set of subjects and/or scopes.” WS-Trust defines a context and security model in which web services can exchange security tokens (i.e., credentials) and communicate claims about service requesters to give service providers the information they need to determine whether to respond to a request. The need for brokering trust among requesters and providers reflects the practical reality that not all service requesters can be known to all providers, so using WS-Trust enables a service provider to require that a requester present certain information, such as assertions about authentication and authorization to request the service being provided. If the requester cannot provide the information with a security token acceptable to the provider, the requester can ask an authorized third party (one trusted by and perhaps specified by the provider) to evaluate the requester and, assuming that evaluation is successful, issue a security token with the appropriate claims for the requester to present to the provider. Because different providers are free to determine their own requirements as to what claims need to be presented for authentication and authorization, using authorized third-party “security token services” provides the mechanism to negotiate and validate claims among different parties and in different contexts.

The “trust” in the WS-Trust security model is really two trust relationships — one established between the service provider and whatever third parties it authorizes to evaluate requesters and issue tokens to them, and the other between the requester and the third-party security token service (STS). From the service provider’s perspective, the strength of the trust in the service requester brokered by the STS is a function of the evaluation criteria or other basis by which the STS determines that a given requester should be issued a token and, in turn, allowed to invoke the provider’s service. This is precisely the locus of trust found in any centralized trust model in which all parties to an exchange establish trust relationships with a central authority, rather than with each other. In such models, if the claims associated with tokens issued by the STS offer assertions about things like identity, reason for requesting the service, and permission to do so, then a provider must understand and accept the basis on which the STS grants tokens to requesters before we can say the provider truly trusts the requester.
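
A toy model of these two trust relationships may help. The sketch below is emphatically not WS-Trust itself (real implementations exchange XML security tokens such as signed SAML assertions), but it captures the brokering pattern: the STS evaluates the requester and issues signed claims, and the provider, which trusts only the STS’s key, decides whether the presented claims satisfy its policy. The key, identities, and claim names are all hypothetical.

```python
import hashlib
import hmac
import json

# Shared secret established out of band between the provider and the STS;
# this stands in for the first trust relationship. (WS-Trust would use
# XML signatures over security tokens, not a bare HMAC.)
STS_KEY = b"hypothetical-sts-signing-key"

def sts_issue_token(requester_id: str, claims: dict) -> bytes:
    """The STS evaluates the requester (elided here) and signs its claims."""
    payload = json.dumps({"sub": requester_id, "claims": claims},
                         sort_keys=True).encode()
    sig = hmac.new(STS_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"." + sig

def provider_accepts(token: bytes) -> bool:
    """The provider trusts the STS, never the requester directly."""
    payload, _, sig = token.rpartition(b".")
    expected = hmac.new(STS_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return False  # token was not issued by the trusted STS
    claims = json.loads(payload)["claims"]
    # Provider-specific policy: which claims must be present?
    return claims.get("role") == "clinician"

token = sts_issue_token("dr.alice@clinic-a.example", {"role": "clinician"})
assert provider_accepts(token)
```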

Willful or inadvertent misuse of systems by authenticated and authorized users can and does occur, and it is important to address the potential for such misuse and mitigate it to the extent possible in order to maximize the reliability of a given system used for a given purpose. In business contexts or industry domains where highly sensitive data such as financial information or health records is concerned, it may not be possible to broker trust among parties to information exchange unless there is a way to evaluate the trustworthiness of the parties, not just validate their identity, role, or organizational affiliation.

Another limitation of a conception of trust based on reliability is the fact that a perfectly reliable system can deliver erroneous or inaccurate information — whether through the actions of a user or because the information stored, processed, or transmitted by the system has poor integrity — so accessing information from a “trusted system” in the technical sense does not in and of itself make the information trustworthy. This general concept of valid (in the sense of well-formed or conforming) information flows that nonetheless deliver untrustworthy information is addressed in greatest detail in the context of the Byzantine Generals problem, which describes the situation often colloquially referred to as Byzantine failure; systems that resist failures of this type are said to be Byzantine fault tolerant. The key trust issues raised by the Byzantine Generals problem highlight the importance of knowing the trustworthiness of the users of a system, not just the system itself, and the criticality of ensuring data integrity, particularly when the data being transmitted or accessed is intended to support decision making.
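
As a tiny, purely illustrative example of the fault-masking idea (not a full Byzantine agreement protocol), a client that reads the same value from several independent replicas can vote on the result, so that one “reliable” component delivering bad data is simply outvoted; the lab-value strings below are invented.

```python
from collections import Counter

def majority_read(replies: list[bytes]) -> bytes | None:
    """Accept a value only if a strict majority of replicas agree.

    Simple voting masks up to f faulty (even maliciously lying) replicas
    when n >= 2f + 1 replies are compared -- a toy illustration of
    Byzantine fault tolerance, not a complete protocol.
    """
    value, count = Counter(replies).most_common(1)[0]
    return value if count > len(replies) // 2 else None

# Two honest replicas outvote one that reliably delivers bad data.
assert majority_read([b"INR 2.5", b"INR 2.5", b"INR 9.9"]) == b"INR 2.5"
```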

Supreme Court rules search of police officer’s text messages legal, opts not to try to resolve reasonable expectation of privacy issue

The U.S. Supreme Court handed down a unanimous ruling in Ontario v. Quon, reversing the 9th Circuit Court of Appeals and finding that the City of Ontario (Calif.) Police Department (OPD) did not violate the 4th Amendment rights of one of its officers when it reviewed the contents of personal text messages he had sent using his city-issued pager. Despite anticipation before the case was argued that the Court would try to resolve the disputed issue of whether Quon had a reasonable expectation of privacy with respect to his text messages, the justices determined that they didn’t need to resolve that issue to reach a conclusion in the case, and based their decision on a determination that, irrespective of the employee’s expectation of privacy, the review of his text messages constituted a legal search under the 4th Amendment, relying in particular on the precedents from the plurality and concurring opinions in the 1987 case O’Connor v. Ortega. While prevailing 4th Amendment doctrine maintains that warrantless searches are unreasonable, under O’Connor the Court recognized that the “special needs” of the workplace justify an exception for non-investigatory, work-related purposes or for investigations of work-related misconduct. Interestingly, while Quon was allegedly disciplined as a consequence of the OPD’s review of his text message transcripts, the city never suggested Quon’s actions rose to the level of misconduct, and justified its search on the grounds that it sought to determine whether the volume limits on the text messaging pager subscriptions were too low and might be causing overage fees for work-related communications.

The Court tried to put its own constraints on the scope of its ruling in this case, apparently believing that the rapid pace of technological change makes it unwise to establish precedents based on a single type of device or communications medium. Instead, Justice Kennedy writes, “It is preferable to dispose of this case on narrower grounds.” To limit its legal analysis to the reasonableness of the search that occurred when the OPD reviewed Quon’s text message transcripts, the Court accepted three propositions for the sake of argument: 1) Quon had a reasonable expectation of privacy with respect to the text messages he sent; 2) his supervisors’ review of the message contents constituted a search under the Fourth Amendment; and 3) the principles ordinarily applied (from O’Connor) to a government agency’s search of its employees’ physical offices also apply when the employer searches in an electronic environment.

Because the reasonableness principle stems from the O’Connor precedent, which says the reasonable expectation of privacy must be addressed on a case-by-case basis, the finding by the lower courts that Quon did have such an expectation (taken as an assumption by the Supreme Court for this case) cannot practically be considered to establish a general principle about text messages and privacy in government agency environments, much less workplace environments generally. In O’Connor, Justice Scalia offered a somewhat simpler standard for determining reasonableness in government-as-employer contexts, under which government workplaces would be covered by the 4th Amendment as a rule, but searches involving work-related materials or to investigate violations of workplace rules would be considered reasonable (as they are in private employer settings) and would therefore not violate the 4th Amendment. Either perspective would yield the same net conclusion in Ontario v. Quon, but in a separate concurring opinion, Scalia took the majority to task for including what is essentially a side discussion on the reasonable expectation of privacy question, since the Court notes repeatedly that resolving that issue was not necessary to decide the case before the Court. Scalia maintains his disagreement with the reasonableness approach the plurality proposed in O’Connor, saying “the proper threshold inquiry should be not whether the Fourth Amendment applies to messages on public employees’ employer-issued pagers, but whether it applies in general to such messages on employer-issued pagers.”

Although it wasn’t unexpected, the narrow ruling by the Court, limited to the particular facts of the situation and the reasonableness of the specific search involved, means we can only speculate about what conclusions might have been drawn had the Court followed through with some of the reasoning it described. The OPD had a computer usage, internet, and email policy in place, which explicitly stated that users should have no expectation of privacy or confidentiality when using the city’s computers, and OPD personnel had repeatedly expressed their position that text messages were to be treated the same as emails. When the original case was argued, there was some debate as to whether statements by Quon’s supervisor that he did not intend to audit the text messages somehow overruled the official policy; the Court notes this disagreement without making any determinations regarding the matter. Justice Kennedy’s majority opinion also makes some important distinctions between text messages and emails, but doesn’t say whether these differences would prevent the city from applying its formal written computer policy to text messages, which are not explicitly mentioned in the policy. The key difference is that while OPD emails are communicated using government servers, the text messages are not, passing instead through the communications infrastructure of the service provider (Arch Wireless). It might have been interesting to see how the Court would have applied this line of reasoning had the city owned and operated the text messaging infrastructure, or if the communications at issue had involved outsourced email services hosted by a third party.

Before hearing the case in April, the Court denied cert to Arch Wireless’s appeal of the 9th Circuit’s ruling that it had violated the Stored Communications Act by turning over the contents of the text messages to the city when asked to do so. Given that the city was the subscriber of record for all the wireless pager accounts, it might have been interesting to see how the Court viewed that argument, but the issue was not taken up, and was noted only to the point that legal precedent does not make the city’s search unreasonable even if the transcripts should not have been provided to it.

Without diminishing the importance or potential future significance of any of the above issues, the big unanswered question remains what reasonable expectation of privacy public or private sector employees should have in their personal communications, particularly when using employer-provided means of communication. The majority opinion made mention of the disagreement over privacy expectations and then devoted nearly as much space to justifying why the Court opted not to address the issue in its ruling: “The Court must proceed with care when considering the whole concept of privacy expectations in communications made on electronic equipment owned by a government employer. The judiciary risks error by elaborating too fully on the Fourth Amendment implications of emerging technology before its role in society has become clear.” Justice Scalia voiced concerns in his concurring opinion that future litigants would try to use Quon’s case to justify claims of reasonable expectations of privacy, despite the explicit warning in the majority opinion: “Prudence counsels caution before the facts in the instant case are used to establish far-reaching premises that define the existence, and extent, of privacy expectations enjoyed by employees when using employer-provided communication devices.”

Even assuming a reasonable expectation of privacy existed, which the Court did for the sake of argument, the Court noted that, given what Quon and his fellow officers had been told about the city’s position that text messages were to be treated the same as email, Quon couldn’t claim immunity from auditing in all circumstances. This suggests that even where a legal expectation of privacy is established, such an expectation is not without limits. Justice Stevens, writing in a concurring opinion, said that Quon “should have understood that all of his work-related actions — including all of his communications on his official pager — were likely to be subject to public and legal scrutiny. He therefore had only a limited expectation of privacy in relation to this particular audit of his pager messages.”

Building patient trust in EHRs can’t be about security controls

The emphasis on security and privacy in electronic health record (EHR) systems as a prerequisite for building consumer trust in these systems both overstates the extent to which security controls can in fact provide trust and understates the importance of the provider-to-patient relationship. Recent consumer polls certainly seem to indicate that individuals have a lot of concerns about their medical records being stored and used in digital form, with loss, theft, and misuse of information chief among them. Practically speaking, however, it is unrealistic to think that typical consumers will be able to learn or understand enough technical information about the EHR systems their providers, hospitals, or insurance companies use to make an independent determination as to whether the security and privacy measures afforded by such a system are sufficient to make them confident that their personal health data is protected. Instead, most people will rely on their doctors or other providers (the actual users of the EHR systems), and their relative comfort level with digitizing their health records will likely be strongly correlated with the level of trust they place in their providers.

The point is not to diminish the importance of having strong security and privacy protections for health data stored in EHR systems, but instead to reiterate that patient trust (or lack thereof) in health information technology cannot be provided through technical means alone. I suspect that few patients today have much of a feel for how their medical records are stored (paper files, computer, or some combination) or for the physical, technical, or administrative measures in place to secure them. With the prospect of easier and more frequent sharing of health data enabled by EHR systems, patients might be expected to take more interest in how their records are handled, but consumer acceptance of health IT should be influenced by the benefits (purported and, over time, actually realized) to themselves, their health care providers, and the health care system (not necessarily in that order). Health information is of course usually considered to be far more sensitive than other personal data, but as supermarket and other retailer loyalty programs have illustrated for years, lots of people are willing to disclose some personal information in return for perceived tangible benefits, and this pattern should apply to health data as well. To help individuals make informed decisions about EHRs and other health IT, there needs to be more education and outreach to consumers about EHRs, their intended and permitted uses and benefits, and the ways in which personal health data is protected against loss, theft, misuse, and unauthorized disclosure. The best way to deliver these messages is to leverage the (hopefully) trusting relationship that already exists between patients and providers, since from the patient perspective, their doctors are much more likely to take on patient interests as their own than are EHR software vendors, insurance companies, or even government health agencies.