Prosecution of Maryland motorcyclist who recorded his traffic stop hinges on “reasonable expectation of privacy”

As reported in today’s Washington Post, a Maryland motorcyclist who used his helmet-mounted video camera to record the state trooper who stopped him and ticketed him for speeding, and then posted the video on YouTube, now faces criminal charges under the state’s wiretapping laws. Maryland is one of several states whose laws require that both (or all) parties consent before a conversation can legally be recorded, a stipulation that can only be waived when a non-consenting party is deemed to have no reasonable expectation of privacy. In this case, the key question would seem to be whether a state law enforcement officer, making a traffic stop in public, can reasonably expect that whatever he says will be private. According to the article, Maryland’s wiretapping law does not cover video, so only the audio portion of the recording is at issue, but there seems to be a growing trend to make it illegal to film police while on duty. This trend is troublesome on many levels, not the least of which is the power imbalance that exists between law enforcement personnel and members of the public, to which (as Bruce Schneier noted eloquently more than two years ago) an appropriate response would be to increase the transparency of government actions, not to put laws in place to shield them.

With the case of the Maryland motorcyclist, the treatment of the recording as an instance of illegal wiretapping raises the “reasonable expectation” principle in yet another context. In recent months this idea has been debated and argued in court cases involving employee expectations of privacy in the workplace, particularly where employees use employer-provided computers or communications equipment to transmit personal communications. Among the highest profile of these cases was City of Ontario v. Quon, argued before the Supreme Court in April, which involved personal messages sent by a police officer using his city-government-issued pager. The ruling in that case, issued this week, assumed that Quon did in fact have a reasonable expectation of privacy for the contents of his text messages, but declined to decide the issue any more broadly than the facts of that case required. If a government authority can argue that person-to-person communications made by an officer while on duty should not be presumed to be private, it is hard to see how verbal communication (allegedly including shouting) uttered on the side of an interstate highway could be considered any more private.

In a letter to Congress, Google says wireless data collection wasn’t the right thing to do, but didn’t break any laws

In response to a request from Congressmen Henry Waxman, Joe Barton, and Edward Markey to Google CEO Eric Schmidt seeking information about the collection of wireless network traffic by the company during the operation of its Street View program, Google’s Director of Public Policy Pablo Chavez sent the company’s reply in a letter dated June 9. In the letter, Chavez repeats the company’s assertions that it never intended to capture or use payload data in the wireless traffic it gathered from unsecured wireless hotspots, and apologizes for doing so. In response to a specific question posed to Google asking about the company’s view of the applicability of consumer privacy laws to the situation, Chavez said that Google does not believe that collecting payload data from such networks violates any laws, because the wireless access points in question were not configured with any encryption or other privacy features and were therefore publicly accessible. This response seems to be indirectly referencing a provision in the Electronic Communications Privacy Act (ECPA) that offers an exception to the general prohibition on the interception of electronic communications, if the interception is “made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public” (18 U.S.C. §2511(2)(g)(i)). The law defines “readily accessible to the general public” (18 U.S.C. §2510(16)) with respect to radio communication to mean that the transmission is not scrambled, encrypted, or otherwise modulated in a way that preserves privacy, so it would seem to be a valid legal interpretation to assert that private citizens who deploy unsecured wireless access points in their homes are actually establishing public electronic communications services.
The law also only prohibits intentional interception and disclosure of electronic communications, so even if Google were overruled on its characterization of wi-fi hotspots as public services, its repeated claim that it never intended to capture payload data might give it another escape clause from ECPA.
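The statutory test for radio communications, then, effectively turns on whether a transmission advertises itself as protected. In 802.11 wireless networking, the closest on-air analogue is the Privacy bit of the capability information field in an access point’s beacon frames, which is set when encryption (WEP/WPA) is enabled. The sketch below is purely illustrative of that distinction; the constant and helper function are hypothetical, not drawn from any statute or existing tool:

```python
# Illustrative sketch only: under 18 U.S.C. §2510(16), a radio communication
# is "readily accessible to the general public" if it is not scrambled or
# encrypted. For 802.11 networks, the Privacy bit (bit 4) of the capability
# information field in a beacon frame signals whether encryption is in use.
# PRIVACY_BIT and advertises_encryption() are hypothetical names for this sketch.

PRIVACY_BIT = 0x0010  # bit 4 of the 802.11 capability information field

def advertises_encryption(capability_field: int) -> bool:
    """Return True if a beacon's capability field has the Privacy bit set."""
    return bool(capability_field & PRIVACY_BIT)

# An open hotspot would read as unencrypted -- the kind of network Google
# argues is "readily accessible to the general public" -- while a home
# network with WEP/WPA enabled would not (example capability values assumed).
print(advertises_encryption(0x0401))  # open network -> False
print(advertises_encryption(0x0411))  # Privacy bit set -> True
```

Under this reading, the legal line Google draws corresponds to a single bit in each beacon the Street View cars received.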

There are, however, a couple of aspects of these interpretations that don’t sit quite right. Among the most obvious is the fact that the ECPA was enacted before the advent of wireless networking — its passage predates the IEEE’s 1997 release of the first 802.11 protocol by more than a decade. In recent months a wide range of technology firms, consumer advocacy groups, and members of Congress have argued that the ECPA is long overdue for revision to bring it more in line with modern communications technology. Google in its own public statements has emphasized the public accessibility of wireless networks, and if the data collection in question had been limited to packet captures on free municipal wireless networks or free wi-fi provided at cafes and coffee shops, there might be a lot less debate and a smaller number of lawsuits, both here and abroad. When the wireless interception involves traffic transmitted within a private home or business, however, the fact that the technical capability exists to allow someone to receive the radio signal transmissions from outside the home or business may not be sufficient by itself to make the transmissions “public.” A different portion of the U.S. Code (18 U.S.C. §1029(a)(8)) makes it a crime to knowingly use or even possess a scanning receiver capable of intercepting electronic communications if the intent of such interception is to defraud. Various state laws also prohibit either eavesdropping alone, or eavesdropping and subsequent disclosure of cordless or cellular telephone communications, despite the fact that the technology to listen in on such devices is widely available. In Google’s case, it maintains that it neither wanted the data it captured nor had any intended use for it, so there’s little to suggest it intended to disclose anything other than the location of the wireless access points it found, and certainly no evidence that the company intended to defraud anyone.
Still, the law is not as straightforward as some might suggest when it comes to the legality of wireless data interception, especially when considering state-level laws, and it may take a formal court ruling to clarify exactly what, if any, constraints might be placed on the concept of “readily accessible to the general public.”

Contrasting trust models under development for NHIN and NHIN Direct

The Nationwide Health Information Network (NHIN), a government-sponsored initiative started in 2004 and re-emphasized in the Health Information Technology for Economic and Clinical Health (HITECH) Act, is no longer envisioned as a “network” at all (in the infrastructure sense), but instead as a collection of standards, services, and policies that collectively support the secure exchange of health information between participating entities. The original idea for the NHIN was that public and private sector organizations would benefit from adopting a common set of parameters governing their health data exchanges, and that once a few early adopters went into production using the NHIN, participation would grow rapidly. Instead, due in part to disagreements among different types of potential participants about how NHIN standards should be implemented, and also to concerns about policy incompatibilities between federal and commercial sector entities, there are currently very few organizations in production. The group of state and federal government agencies and a small number of commercial health care entities currently operating health information exchanges using the NHIN are collectively referred to as NHIN Exchange; this exchange is focused on the data exchange needs of federal agencies, to the degree that non-federal participants must join through a federally-sponsored contract. The NHIN has in general been focused on enabling health information exchanges between large organizations, but addressing the data exchange needs of small providers has received greater attention due to the recent focus on meaningful use measures that eligible health care providers must satisfy in order to qualify for financial incentives to acquire and implement electronic health record technology.
A core requirement for showing meaningful use is that providers’ EHR technology must be implemented in a way that enables “electronic exchange of health information to improve the quality of health care” (Meaningful Use Notice of Proposed Rulemaking, 75 Fed. Reg. 1850 (January 13, 2010)). In order to enable secure health information exchange among smaller providers, the NHIN Direct project began earlier this year, specifically intended to use or expand upon NHIN standards and services to “allow organizations to deliver simple, direct, secure and scalable transport of health information over the Internet between known participants in support of Stage 1 meaningful use.”

Without delving into the details of all the standards and services and use cases that the NHIN and NHIN Direct are seeking to support, one very noticeable difference between the two initiatives is in the area of trust. Participants working on both initiatives agree that trust is an essential aspect of any solution, because health care entities — large or small — are not expected to participate in any health information exchange unless they feel they can trust the other participants and any third parties involved in operating, managing, or overseeing the exchange. While everyone seems to agree that such trust is important, the approach each initiative is taking with respect to trust is quite different. In particular, the basic trust model proposed for NHIN Direct is much more explicit than the trust framework being developed for the NHIN in terms of what “trust” actually means in a health information exchange context, and in terms of the extent to which participants involved in a multi-party exchange can agree on policies, standards, and controls intended to support trust. Both programs tend to use the word “trust” loosely: the results sought from their trust models and frameworks include confidence, reliability, assurance, or even surety, but they don’t really begin to address establishing the trustworthiness of a given entity — the sort of evidence that would help another entity decide to accept the risk of engaging in an exchange, based on expectations about how the trusted entity will behave. This may be due to implicit assumptions about the interests of different would-be participants in health information exchanges, or because insufficient weight is given to the manner in which participants can establish their trustworthiness, or perhaps too little attention is focused on the very real distrust that exists between potential HIE participants.

To its credit, the NHIN Direct project candidly acknowledges that different policies and assumptions will apply to different participants in different contexts, so the NHIN Direct basic trust model limits the scope of what any assertion of trust actually covers, and allows for the possibility (even the expectation) that a given organization may participate in multiple exchanges governed by different sets of policies or rules. The NHIN Direct approach has no central authority to assert trustworthiness of participants, and no trust-by-default among participants. NHIN Direct participants are expected (if not quite obligated) to make their own determinations about the relative trustworthiness of others. The NHIN Direct Security and Trust Workgroup’s keys for consensus summary addresses “only the level of trust necessary to establish confidence that the transmitted message will faithfully be delivered to the recipient, not that the two parties trust or should trust each other; this definition of trust is to be defined by source and endpoint out of band, and may be facilitated by entities external to the NHIN Direct specifications.”

By contrast, the NHIN Exchange in particular and the NHIN trust framework in general rely on a central (or root) authority that makes determinations of trustworthiness for all potential participants, and presumably only allows participation by trustworthy entities. There is not currently a standard set of criteria to serve as the basis for determining trustworthiness, but when and if such criteria exist, they are expected to address at least the minimum technical requirements a participant must satisfy, along with providing identity assurance, and articulating the business, policy, legal, and regulatory requirements that apply to participants. The health information exchange trust framework recommended in April by the Health IT Policy Committee’s NHIN Workgroup comprised five key components:

  1. Agreed Upon Business, Policy and Legal Requirements / Expectations
  2. Transparent Oversight
  3. Enforcement and Accountability
  4. Identity Assurance
  5. Minimum Technical Requirements

NHIN participants sign a legal document called the Data Use and Reciprocal Support Agreement (DURSA) which is intended to serve as a master trust agreement applying the same permissions, obligations, expectations, and constraints to all exchange participants in all of the information exchange contexts it covers (treatment, payment, health care operations, public health activities, reporting on clinical quality measures, and other uses authorized by individuals to whom the data pertains). By executing the DURSA, participants don’t actually agree to trust each other, but they do agree to acknowledge and accept that different participants may have different policies, practices, and security controls such as system access policies. This means that a participant must rely on the determination of the NHIN governing authority (who approved applicants for participation) that the policies and controls used by an approved participant are sufficiently robust, and gives participants no real ability to question the approach that another participant takes to things like security. The reliance on a legal contract (the DURSA) and a planned monitoring, oversight, and enforcement function strongly suggests that what the NHIN has produced is a distrust framework, rather than one based on trust. While that might not sound as nice, if the scope of participation for the NHIN continues to include many different types of participating entities, many of which may have conflicting organizational interests, a common level of trust may never be established, so an approach designed to achieve cooperation despite distrust may be precisely what’s needed.
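The structural difference between the two approaches can be reduced to a simple sketch: under the NHIN Exchange model, trust flows transitively through one root authority that has vetted every participant (via the DURSA), while under the NHIN Direct model each trust relationship is an explicit edge established out of band between two parties. The toy classes below are hypothetical illustrations of that contrast, not part of either specification:

```python
# A toy sketch (hypothetical class and method names) contrasting the two
# trust approaches: NHIN Exchange trusts any two participants approved by a
# central governing authority, while NHIN Direct leaves each participant to
# record its own out-of-band trust decisions about specific counterparties.

class CentralAuthorityModel:
    """NHIN Exchange style: one root authority vets everyone (via the DURSA)."""
    def __init__(self):
        self.approved = set()

    def approve(self, participant):
        self.approved.add(participant)

    def trusts(self, sender, recipient):
        # Trust is transitive through the authority: both need only be approved.
        return sender in self.approved and recipient in self.approved


class PairwiseModel:
    """NHIN Direct style: no default trust; each edge is established directly."""
    def __init__(self):
        self.edges = set()

    def establish(self, a, b):
        self.edges.add(frozenset((a, b)))

    def trusts(self, sender, recipient):
        return frozenset((sender, recipient)) in self.edges


central = CentralAuthorityModel()
for p in ("VA", "SSA", "clinic"):
    central.approve(p)
# Approval by the root implies trust between any two approved participants...
print(central.trusts("VA", "clinic"))   # True

direct = PairwiseModel()
direct.establish("clinic", "lab")
# ...whereas NHIN Direct participants trust only counterparties they vetted.
print(direct.trusts("clinic", "lab"))   # True
print(direct.trusts("clinic", "SSA"))   # False
```

The scalability argument for the central model is visible here: n participants need only n approvals, while the pairwise model can require up to n(n−1)/2 separately negotiated edges.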

The intent to use a single overarching trust model for the NHIN is based on assumptions of feasibility: if NHIN participants someday number in the hundreds or even thousands, negotiating trust between pairs or among small sub-sets of all those participants just isn’t practical. By positioning a common, trusted authority in the center, all that should be required to achieve trust throughout the NHIN is for each participating entity to establish a trust relationship with the NHIN governing authority (which at present means with the NHIN Coordinating Committee within the Office of the National Coordinator, although its governance role is considered interim pending the formalization of a permanent NHIN governing authority). It’s not entirely clear how such bilateral trust agreements can be made with the many different organizational interests represented by the different types of organizations (providers, insurers, researchers, agencies) that might seek to participate in the NHIN, to say nothing of the interests of the patients whose data would be exchanged by those entities. It does seem logical that working through a central agent — either a vested organization like ONC or a neutral network facilitator — would have better success in negotiating trust than if all the participants tried to reach consensus on a multilateral agreement.
However, given the significant time and energy that many people have put into thinking about and trying to resolve issues like harmonizing the security and privacy requirements that apply to federal and private sector entities, both categories of which may or may not be covered by HIPAA, it is also understandable why the NHIN Direct Security and Trust Workgroup declared that “real world evidence suggests that achieving global trust is not practical.” While NHIN Direct is not primarily intended to effect changes in the approach or structure of the broader NHIN, it would be nice to see the development of the trust framework currently under consideration within the Health IT Policy Committee take some practical guidance on trust from NHIN Direct.

Privacy settings do matter: subpoenas quashed for disclosure of social networking data

In a recent federal district court ruling noted and summarized by the always-astute privacy team at law firm Hunton & Williams, an individual user of Facebook, MySpace, and other less well known online communities, who is also a plaintiff in a copyright infringement lawsuit, successfully quashed a subpoena by the defendants in his case that sought to obtain private messages he had sent through the social networking sites. Lawyers for the plaintiff argued that the subpoenas were overbroad, that the information they sought was irrelevant to the case, and that the disclosure sought in the subpoenas from the social networking companies is prohibited under the Stored Communications Act (18 U.S.C. chapter 121), which among other provisions says that “a person or entity providing an electronic communication service to the public shall not knowingly divulge to any person or entity the contents of a communication while in electronic storage by that service” (§2702(a)(1)). The magistrate judge who first considered the motion rejected the argument under the SCA (and accepted only the claim that the subpoenas were overbroad, since they sought all of plaintiff’s communications on the sites). Not satisfied, the plaintiff moved for reconsideration of the magistrate judge’s ruling on the motion to quash the subpoenas, and the district court accepted plaintiff’s argument that the private messaging capabilities provided by sites like Facebook and MySpace are in fact electronic communication services under the definition in the law, and quashed the portions of the subpoenas concerning disclosure of the messages the plaintiff sent through the sites.

Still unresolved is whether the plaintiff’s comments and wall posts can similarly be considered as private communications, since they are more or less intended to be public content, at least “public” within the context of the online sites in question. The Stored Communications Act prohibitions on disclosure do not apply to “electronic communication made through an electronic communication system that is configured so that such electronic communication is readily accessible to the general public” according to a clause in a different part of the Electronic Communications Privacy Act of 1986 (18 U.S.C. §2511(2)(g)(i)). The district court directed the parties to the suit to produce detailed information about the plaintiff’s privacy settings, and in so doing provide some indication of whether he intended his posts and comments to be publicly viewable. The implication is clear, at least from a personal privacy perspective: if you want any of your activity on social networking sites to be considered private in a legal context, you should configure the privacy settings made available by the site in such a way that conveys your intent to limit the disclosure of the information. If you make your personal information public, even within the confines of a social networking community, then the courts may consider that decision as contrary to any later assertion that you wanted the information to be private.

Senators propose law banning pre-paid cell phones

In a move ostensibly intended to aid anti-terrorism efforts, Senators Charles Schumer and John Cornyn issued a joint press release two weeks ago announcing proposed legislation that would essentially end the anonymity of pre-paid cell phones by requiring buyers to present identification when purchasing one, and phone companies to maintain a record of buyers’ information. This is merely the latest strong reaction to the case of the Times Square bomber, who used a “disposable” cell phone, among other things, to call Pakistan prior to the bombing attempt and to arrange to buy the vehicle that he used to plant the explosives in his failed attempt to set off a car bomb in New York City. The proposed Senate legislation would be the first federal attempt to require registration of pre-paid cell phone purchasers, although several states are already considering such rules. While Schumer and Cornyn acknowledge that the vast majority of pre-paid cell phones are used for law-abiding purposes, the fact that they are popular among criminals is, in their opinion, sufficient reason to prohibit anonymous use. This is an interesting line of thinking, as it’s not at all clear how even a criminal’s use of a cell phone would itself be an illegal act, and it seems a stretch to try to put a cell phone in the category of a weapon like a handgun, explosives, or other products already subject to buyer identification and purchase record-keeping requirements. Public reaction to the proposal, from all political perspectives, pretty unanimously points out the obvious infringement on civil liberties and individual privacy (which commentators such as Bob Barr attribute as a defining characteristic of the 111th Congress).

This proposed action is consistent with a long history of precedents where the government seeks information on a large body of individuals and their transactions or communications in the name of law enforcement (and, in this case, national security). Efforts by the U.S. government to restrict the strength of encryption used in exported products were generally ruled unconstitutional in 1997, but restrictions remain in place for exports of some product types to some countries, under a program administered by the Bureau of Industry and Security (BIS), part of the Department of Commerce. Encryption — and more specifically its use to protect the privacy of data and communications — is perhaps the most prevalent contemporary example of a technology that can be used just as effectively to hide criminal behavior as it can to protect legitimate users. Governments in many countries, not just the U.S., have struggled to find the right balance point between individual and national interests, but in the post-9/11 era, both the former and current U.S. administrations seem quite willing to restrict the civil liberties of the many to try to avoid missing the threatening actions or intentions of a few. We touched on this sort of bias in the aftermath of another terrorist near-miss last Christmas; the desire to avoid a successful terrorist attack is certainly strong enough to motivate proposals like the one from Schumer and Cornyn, and may just be strong enough to override personal privacy considerations in the name of homeland security.