The U.S. Court of Appeals for the District of Columbia Circuit offered some new judicial insight into reasonable expectations of privacy when it issued a ruling this month overturning the conviction of an alleged drug trafficker because the prosecution used evidence gathered via a global positioning system (GPS) device placed on the man’s car and monitored over a month-long period. The police placed the GPS tracking device without first obtaining a warrant, following a legal precedent (from the 1983 Supreme Court decision in United States v. Knotts) that no warrant was required to use such a device to track a suspect on a single journey from an origin to a destination. The lawyers for Antoine Jones, the convicted man in this case, argued successfully that by “tracking his movements 24 hours a day for four weeks with a GPS device they had installed on his Jeep without a valid warrant” the police violated Jones’ 4th Amendment protection against unreasonable search. On its face the ruling seems to contradict precedents from at least three other federal courts regarding warrantless use of a GPS device, at least in terms of putting a temporal constraint on the duration of the monitoring in question.
The underlying logic of the Knotts decision was that since a trip on public thoroughfares is by definition in plain view of the public, no one could reasonably claim an expectation of privacy applied to the trip, including the route taken or the destination. In contrast, where the continuous monitoring of Jones was concerned, the DC Circuit panel noted that “the whole of one’s movements over the course of a month is not actually exposed to the public because the likelihood anyone will observe all those movements is effectively nil.”
The prosecution working to convict Jones relied on the aggregate information gathered over numerous trips (correlated with cell phone calls and other intercepted communications that were obtained with the use of warrants) to develop a pattern of Jones’ movements that sufficed to convince the jury that he was engaged in cocaine trafficking. The prosecution had no direct evidence against Jones (such as possession of drugs), so the appellate court determined that without the GPS data the prosecution could not have secured a conviction, and therefore reversed the conviction because the evidence was obtained in violation of the 4th Amendment.
Aside from the 4th Amendment implications of the ruling, the decision has raised a number of questions about the applicability of existing laws and judicial precedent to uses of new technology, particularly those that involve geo-location information. A judicial opinion that prolonged monitoring of user movements could constitute a search and, if performed by a government entity, therefore fall under the provisions of the 4th Amendment, opens up speculation about implications for social networking applications such as Foursquare,
Twitter, and Facebook that can incorporate user locations based on information associated with the computers or mobile devices used to access them. On a different technological front, new concerns have been raised recently about the use of uniquely identifiable RFID transmitters — such as those used in many late model cars to communicate tire pressure to onboard automotive computers — and the potential for the RFID chips to be used to track user location and movements. The consistent theme in all these instances is the ability of technology to outpace legal, regulatory, and policy provisions about what uses of these technologies are acceptable, and how to guard against the unintended or surreptitious activities these technologies make technically feasible.
Among the gaps a formal governance system for the NHIN is intended to fill is the inability of organizations without a government contract or grant to sue the government; this lack of privity is cited by ONC leaders as a key reason that participation to date in the NHIN Exchange has been limited to government agencies and entities under contract or grant award by the government. While it may seem strange at first glance to emphasize a formal legal principle underlying the right of redress in this context, the right to sue is essential for entering into business relationships. This principle was effectively articulated a half century ago by Thomas Schelling in his Strategy of Conflict (1960):
Among the legal privileges of corporations, two that are mentioned are the right to sue and the “right” to be sued. Who wants to be sued! But the right to be sued is the power to make a promise: to borrow money, to enter a contract, to do business with someone who might be damaged. If suit does arise, the “right” seems a liability in retrospect; beforehand it was a prerequisite to doing business. In brief, the right to be sued is the power to accept a commitment. (p. 43)
As legal doctrine, privity explicitly limits the applicability of the terms of a contract to the parties to the contract. The Office of the National Coordinator clearly believes that entering into voluntary participation agreements with entities interested in using the NHIN Exchange is insufficient or inappropriate, and that more formal legal controls must be part of the arrangement. This is consistent with suggestions by Hardin (1991) and others who invoke Schelling that such legal rights are necessary in order for parties to an exchange like the NHIN to make credible commitments to fulfill their obligations under the agreements they enter into. Since the Trial Implementations phase of the NHIN back in 2008, participating entities have executed a Data Use and Reciprocal Support Agreement (DURSA) that spells out numerous expectations and obligations for participants, and also disclaims liability for a variety of circumstances that might occur when exchanging information using the NHIN. Signing the DURSA is a prerequisite for connecting to the NHIN and exchanging data with other NHIN participants, but it does not address rights or obligations for entities in the process of applying to participate, and it is this gap that the forthcoming rulemaking on NHIN governance is intended to address.
Casual observers of NHIN activity to date may have been under the impression that voluntary commitments to participate were in fact part of the long-term vision for the NHIN, particularly given the public emphasis placed on the need to establish trust in the NHIN and health information exchange in general, including a “HIE Trust Framework” recommended by the NHIN workgroup of the Health IT Policy Committee that incorporates explicit oversight, enforcement, and accountability mechanisms. The current stipulation that additional legal contracting provisions are needed to begin to realize this long-term vision is not inconsistent with this framework, although it provides further evidence that the model of cooperation sought for the NHIN is one not of trust, but of mechanisms to compensate for the lack of trust (or, possibly, distrust) among the parties.
References:
Hardin, R. (1991). Trusting persons, trusting institutions. In R.J. Zeckhauser (Ed.), Strategy and choice (pp. 185-209). Cambridge, MA: MIT Press.
Schelling, T.C. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
With all the talk about the need for effective security measures to protect personal health data stored in electronic health records and shared among organizations participating in health information exchanges, the decision about what actual security and privacy controls an organization puts in place remains highly subjective and therefore likely to vary greatly among health care entities. This is neither a new nor a particularly surprising problem in health information security, given the structure of the laws and regulations that set requirements for security and privacy provisions, but in some ways the lack of more robust security requirements (and the complete absence of privacy requirements) in the administration’s final rules on EHR incentives under “meaningful use” represents a lost opportunity. The security-related meaningful use measures and associated standards and certification criteria for EHR systems provide another instance of federal rules promulgated under the authority of the Health Information Technology for Economic and Clinical Health (HITECH) Act that, as implemented, fall somewhat short of the vision articulated in the law.
Where security and privacy laws are concerned, Congress has always shown reluctance to mandate specific security measures or technologies, in part to avoid favoring any particular technology, market sector, or vendor, and also because the authors of such legislation correctly assume that they may lack the technical expertise necessary to identify the most appropriate solutions, choosing instead to delegate that task to NIST or other authorities. The net result, however, is a set of “recommended” or “addressable” security safeguards or, in the case of explicitly required security controls, an endorsement of a risk-based approach to implementing security that allows organizations to choose not to put some controls in place, with appropriate justification for those decisions. There’s nothing inherently wrong with this approach — it embodies fundamental economic principles about security, particularly the idea that it doesn’t make sense to allocate more resources to securing information and systems than those assets are worth. The problem lies in the reality that different health care organizations will value their information assets in different ways, will face different threats and corresponding risks to those assets, and will have different tolerances for risk that drive what is “acceptable” and what isn’t, and similarly drive decisions about which security measures to implement and which to leave out.
From a practical standpoint, what might help build confidence in the security of health IT such as EHR systems would be a set of minimum security standards that all organizations would need to implement. The HIPAA Security Rule includes a large number of administrative, physical, and technical safeguards (45 CFR §§164.308, 164.310, and 164.312, respectively), but many of the “required” safeguards are described in sufficiently vague terms that compliance is possible with widely varying levels of actual security, and many of the most obviously helpful safeguards, like encryption, are “addressable” and therefore not required at all. There were relatively few security standards and criteria included for meaningful use stage 1, and most of the items that were included already appear somewhere in the HIPAA Security Rule, but what stands out about the standards and criteria is how little specificity they contain. The minor revisions to these security items in the final rules issued late last month should make it fairly easy for organizations to satisfy the measures, but will have little impact in terms of making EHR systems or the health care organizations that use them more secure. The only identifiable “standards” included are government Federal Information Processing Standards (FIPS) for encryption strength (FIPS 140-2) and for secure hashing (FIPS 180-3), while everything else is described in functional terms that leave the details to the vendor providing the EHR system or the entity doing the implementation. Even the risk analysis requirement (the only explicit security measure in meaningful use) was reduced in scope between the interim and final versions of the rules, as under meaningful use the required risk analysis only needs to address the certified EHR technology the organization implements, not the organization overall. This is markedly less than what is already required of HIPAA-covered entities (and, under HITECH, of business associates as well) under the risk analysis provision of the HIPAA Security Rule.
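To make concrete the difference between naming a standard and describing a capability in purely functional terms, consider the minimal sketch below. It is illustrative only and assumes Python with the standard hashlib module and the third-party cryptography package; neither the meaningful use rules nor the certification criteria specify any library, API, record format, or field name, so everything concrete here (function names, the sample payload) is hypothetical. The sketch shows one plausible way a vendor might satisfy criteria of the kind described above: hashing data with SHA-256, a member of the FIPS 180-3 family of secure hash algorithms, and encrypting it with AES-256, an algorithm covered by FIPS 140-2 validation.

```python
# Illustrative sketch only: one plausible way an EHR component could provide
# "secure hashing" (SHA-256, per FIPS 180-3) and "encryption" (AES-256, per
# FIPS 140-2 validated modules). Names and payloads here are hypothetical.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def integrity_hash(record_bytes: bytes) -> str:
    """Return a SHA-256 digest usable as an integrity check on a record."""
    return hashlib.sha256(record_bytes).hexdigest()


def encrypt_record(record_bytes: bytes, key: bytes) -> tuple[bytes, bytes]:
    """Encrypt a record with AES-256-GCM; returns (nonce, ciphertext)."""
    nonce = os.urandom(12)  # unique nonce per encryption operation
    ciphertext = AESGCM(key).encrypt(nonce, record_bytes, None)
    return nonce, ciphertext


def decrypt_record(nonce: bytes, ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt a record previously produced by encrypt_record()."""
    return AESGCM(key).decrypt(nonce, ciphertext, None)


if __name__ == "__main__":
    record = b'{"patient_id": "12345", "dx": "example"}'  # hypothetical payload
    key = AESGCM.generate_key(bit_length=256)              # 256-bit AES key

    digest = integrity_hash(record)
    nonce, ct = encrypt_record(record, key)
    assert decrypt_record(nonce, ct, key) == record
    print("SHA-256 digest:", digest)
```

The point of the example is not the code itself but how much a purely functional criterion leaves unspecified: key management, where and how the digest is stored, and which data must be encrypted at rest or in transit are all decisions the rules leave to the vendor or the implementing organization, which is precisely why compliance can correspond to widely varying levels of actual security.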
In a typically insightful blog post last weekend, Margalit Gur-Arie considers issues of trust in electronic health records and other health information technology through a comparison with the banking system, financial institutions, and the use of paper currency. By using as a frame of reference a system in which public trust is well-established (we’re talking here about banking in general, not any greed-driven actions taken by Wall Street investment bankers), she highlights some of the distinct differences involved when we talk about trust in a system as opposed to trust in specific organizations or individuals. This distinction is one of the fundamental points in Niklas Luhmann’s seminal work on trust (Luhmann, 1979), in which he uses societal trust in money specifically and the financial system in general to emphasize the different factors contributing to trustworthiness in a system compared to the basis of trust involved in interpersonal relationships.
The point of the contrast between the financial system and the health care system pending the widespread adoption of health IT is that the process by which public trust is established is neither trivial nor rapid, and health IT is currently still at a very early stage in that process. Gur-Arie draws important lessons from the evolution of the banking industry in terms of safety and security as well as laws and regulations, noting that all of these elements collectively were needed to reach the level of public trust the financial system currently enjoys — robust enough that it manages to shake off the effects of even major setbacks, although historically government regulation has had a lot to do with those recoveries. She notes that in the earlier days of the system, “as long as banks were easily robbed on a daily basis, and as long as nobody guaranteed that your money was safe in a bank, and as long as you didn’t travel much, the cowshed was the best option” for keeping your money safe. Gur-Arie suggests that health IT is currently at the “daily bank robbery” stage, and that it will take changes in privacy and security practices among health care organizations, in addition to appropriate policies and regulations where necessary, to provide sufficient evidence for the public to have confidence in the system and trust it to handle their personal health information.
There are many valid parallels that can be drawn between financial institutions and health care institutions, but there are some fundamental differences between the nature of a commodity like money (and all the things it enables or facilitates) and the nature of individual health. The core decision involved with money (whether to put it in a bank for safekeeping or whether to put it under your mattress) is not the same as the decision to store your health record electronically or on paper, because in either case the patient is still placing the record under the stewardship of the provider (or insurance plan, or agency, or other entity). No one would suggest that the alternative to putting your medical record online is keeping it at home or with you (perhaps ironically, the whole idea of personal health records is to give consumers a means to play a more central role in managing their own health and health data). A point of greater commonality between finance and health care is the fiduciary role that both banks and health care organizations have to look after the interests of their customers. Bernard Barber (1986), among other theorists, has drawn particular attention to trust in the sense of expectations that trusted entities will fulfill their fiduciary obligations, rather than betraying the trust placed in them by appropriating the objects entrusted to them (money in the case of banks, medical records in the case of health care entities) for use in self-interested purposes, whether or not those purposes are explicitly legal.
One other important difference between trust in the financial system and trust in the health care system is the focus of trust by an individual. Following the familiar characterization (Hardin, 2006; Levi, 1998; etc.) of trust as a three-part relationship — truster, trustee, and the context of the relation — the truster (patient) trusts the trustee (provider, health care organization) within the limits of a specific context, such as delivering care, but that trust need not extend beyond a given purpose for use. This potentially limited scope of trust is seen in banking as well (for instance, you may put your salary in a checking account with your bank, but may choose not to have them manage your investments), and in the health care arena, is a central aspect of the current health IT policy debate about consent and consumer privacy preferences. In the health care system, the key trusting relationship is between the patient and the provider, or perhaps the patient and institution, if the patient receives care in an environment where he or she might see a different doctor at each encounter. In most banking contexts, the relationship is likely to be more impersonal, where the bank teller or loan officer may or may not be well known to the customer, but in either case is explicitly an agent of the financial institution they represent. There are of course many people who travel and move residences quite frequently, and for these people at least, trust in the health care system goes beyond a specific doctor-patient relationship, and it is at this same systemic level that public trust in health IT needs to be established.
It is important to distinguish here that trust in EHRs as an alternative to paper-based medical records is a quite different proposition than trust in health information exchange or the interoperability (and presumed broad availability) of the data stored in electronic health records, a distinction that has no close parallel in the financial services sector. In banking, getting access to your money while away from home seems similar in nature to a doctor in another city accessing your records when you visit during your vacation, but the use of what’s exchanged is quite different, as is the relevant time horizon, since once the remote bank gives you your money, it no longer has any stewardship responsibility. Interoperability and data exchange in the banking industry (which became more or less universal on a technical level some 20 years ago) is in many ways simpler than it is in a health care setting, since the information the bank needs is largely details about your account (and the liquidity of the associated assets), while in health care the focus is more on the contents of the health record, and less on whether you happen to be a member of a given plan or a customer of a given provider organization.
To bring the health IT sector anywhere close to the level of nearly pervasive public trust enjoyed by the banking industry, there are important contributions to be made by many different stakeholders, including providers and other health care entities, the technology vendors and operators whose health IT solutions will be used in the market, and the government, which, in the form of regulations and oversight, can do more to encourage organizations holding health information to behave appropriately. Most sociological and economic theories of trust would stipulate that appropriate organizational behavior that occurs because it is constrained by laws, contracts, or regulations is not actually evidence of trustworthiness, but at this point in the process of maturing the health care system and its use of health information technology, greater public confidence will have to substitute for public trust until the system reaches a point where it can rely on unconstrained demonstrations of trustworthy conduct.
References:
Barber, B. (1986). The logic and limits of trust. New Brunswick, NJ: Rutgers University Press.
Hardin, R. (2006). Trust. Cambridge, England: Polity Press.
Levi, M. (1998). A state of trust. In V. Braithwaite & M. Levi (Eds.), Trust and governance (pp. 77-101). New York, NY: Russell Sage Foundation.
Luhmann, N. (1979). Trust and power. Chichester, England: John Wiley & Sons.
A recent article posted on The Washington Post online about potentially troubling privacy practices by U.S. airlines highlights (unfortunately somewhat erroneously) some of the key differences in rules about personal data collection and use that apply to federal agencies versus those that cover commercial organizations like air carriers. To comply with the information gathering requirements of the Transportation Security Administration’s Secure Flight program, the airlines last fall began collecting the date of birth and gender of passengers, in addition to requiring that passenger name information on tickets exactly match the way names are represented on whatever official means of identification the passengers present for airport security screening. In the Post article, the author, after receiving a birthday card from an airline on which he travels frequently, speculated that the airline was reusing the data it collected for Secure Flight on behalf of the government for marketing purposes. In this case, it turns out that the airline had separately requested date of birth information from travelers through its frequent flier program, but the experience still prompted the question of just how the additional personal information being collected by the airlines could, or could not, be used for other purposes.
From a legal standpoint, the key issue is who collected the data from the passenger and for what purpose (and under whose authority) the data was originally collected. Generally speaking, federal agencies that collect personal information from U.S. citizens or legally resident aliens are required under the terms of the Privacy Act of 1974 to publicize the type of data to be collected and its intended purpose for use, and not to use the data for any other purpose beyond what was stated at the time of collection, unless they first obtain consent from the individuals whose information they hold. Commercial entities are not subject to the terms of the Privacy Act, unless the data collection they perform is done on behalf of the government. This means that in the case of Secure Flight, if the airlines only collect the information the TSA requires in order to give it to the government, the data collection falls under the Privacy Act and the airlines could not re-purpose the data for some other use, arguably even for customer service. However, if (as in the case of Southwest Airlines mentioned in the article) the airline already has the relevant information from passengers, the Privacy Act would not apply and the company would be held accountable only for complying with the terms of its own privacy practices, as regulated by the Federal Trade Commission under the unfair and deceptive practices section of the FTC Act. For instance, several years ago, when it came to light that several airlines had provided actual passenger data to the government in association with a program developing an anti-terrorist passenger screening system, the actions by contractors working for the TSA, NASA, and other participating agencies were investigated as possible violations of the Privacy Act, but legal complaints (ultimately dismissed) lodged against the airlines who provided the data charged only that they had acted contrary to their own published privacy practices.
The Post online article cites a security industry executive who suggests that irrespective of TSA’s information gathering requirements for Secure Flight, the airlines are bound by FISMA, the Privacy Act, and other federal laws. This simply isn’t true, as these laws apply only to federal government agencies, and “agency” in these laws is defined to mean only those that are part of the executive branch (e.g., Congress is not covered by FISMA). The actual accountability here depends very much on whether the airline is collecting data for its own purposes or whether it does so on behalf of the TSA or some other government agency. If the former situation applies, then once the airlines have the data on hand, they are legally permitted to use it in just about any way they wish (including selling it to third parties), although any anticipated uses of personal data on passengers should be included in their privacy policies.