An article published last week in the New York Times describes the apparent realization by both the administration and the health insurance industry (long-time adversaries on health care reform) that they both have significant interests in the success of health care reform now that the legislation has passed, and HHS Secretary Kathleen Sebelius is reportedly considering some sort of public-private partnership to implement the provisions in the law. The need to collaborate does not erase the real and present distrust between the parties, even if collaboration helps both the government and the insurance industry pursue their separate interests. In this context, the situation illustrates that successful inter-organizational cooperation can occur even in the absence of trust.

Such an idea runs counter to some established schools of thought on trust, in which scholars such as Robert Putnam and Francis Fukuyama have suggested that trust is a prerequisite for cooperation on a significant scale. In contrast, somewhat more recent literature, such as that presented in Cook, Hardin, and Levi's Cooperation Without Trust, embodies a different perspective on trust and on trustworthiness, in which "a trust relation emerges out of mutual interdependence and the knowledge developed over time of reciprocal trustworthiness" (2005, p. 2). The interdependence of the relationship is crucial to any evaluation of trust based on what Hardin has termed "encapsulated interest": the fact that two or more organizations, working independently, may pursue the same outcome is not sufficient to engender trust. If, however, their collective action is needed, even when the benefits sought are different (profitability for the insurance companies, cost reduction and universal access for the government), then the commonality of interests can still provide the motivation needed for the parties to cooperate.

What is missing from the health care example, and what would make this less a case of cooperation despite distrust, is the consideration of trustworthiness. Those in the administration are well aware that the insurance industry needs to stay solvent (the alternative, of course, could have serious economic implications far beyond the health sector), but understanding the industry's interests does not make it trustworthy. Instead, trustworthiness (or more specifically, the judgment by one party that another is trustworthy) depends on the actions and demonstrated intentions of the parties over multiple interactions, with the key assumption for mutual trustworthiness being that all parties have an interest in maintaining the relationship going forward. It is not at all clear to what extent this envisioned public-private collaboration will occur, or, if it does, how long it may go on, particularly once the key features of the health care reform law are implemented. The theoretical good news is that neither the government nor the insurance industry needs to wait for trust to develop: mutual distrust should not be a barrier to cooperation if both sides realize that their different interests can be furthered by making health care reform a reality.
Reports released within the past month by both government health authorities and health insurers highlight recent successes in combating health care fraud and saving or recovering substantial amounts of money. A Blue Cross and Blue Shield Association study reporting on anti-fraud efforts in the past year found that investigations by the Association's member companies yielded over $500 million, a nearly 50 percent increase compared with 2008. For its part, the Department of Health and Human Services announced over $2.5 billion recovered through last year's health care fraud efforts, in addition to $441 million recovered from Medicaid through similar anti-fraud programs. Both of these were significant increases over prior-year results, and future prospects appear even brighter, due to new provisions and additional funding related to fraud prevention included in the Patient Protection and Affordable Care Act (the recently enacted health reform legislation).

Both government and industry efforts to combat fraud are taking a multi-pronged approach, including more education and training for health care staff and individual citizens to make them more aware of scams and other potentially fraudulent activities. There also seems to be a significant emphasis on applying analytical tools and anti-fraud technologies, and on using those tools earlier in the health care claims process, to catch fraud before payment is made (prevention works better than after-the-fact recovery). Overall, the attention to detecting and preventing fraud reflects a widespread industry shift in focus, away from a single-minded prioritization of efficient claims handling and toward a blended approach that incorporates anti-fraud activities in the core process. This change has been a long time in coming, as many of the core process deficiencies that facilitate health care fraud have been publicized for years, perhaps best articulated by Harvard's Malcolm Sparrow in his authoritative work on the subject, License to Steal.
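To make the pre-payment idea a bit more concrete, here is a minimal, hypothetical sketch (in Python) of the kind of rule-based screen an analytics tool might run against a claim before it is paid. The Claim fields, the duplicate and outlier rules, and the thresholds are illustrative assumptions, not a description of any payer's or agency's actual system.

```python
# Hypothetical pre-payment fraud screen; fields, rules, and thresholds are
# illustrative assumptions only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Claim:
    provider_id: str
    patient_id: str
    procedure_code: str
    billed_amount: float
    service_date: date

def screen_claim(claim: Claim, history: list[Claim]) -> list[str]:
    """Return reasons to hold the claim for review before payment."""
    flags = []

    # Rule 1: possible duplicate billing (same provider, patient, procedure, date).
    if any(c.provider_id == claim.provider_id
           and c.patient_id == claim.patient_id
           and c.procedure_code == claim.procedure_code
           and c.service_date == claim.service_date
           for c in history):
        flags.append("possible duplicate claim")

    # Rule 2: billed amount far above the historical average for this procedure.
    amounts = [c.billed_amount for c in history
               if c.procedure_code == claim.procedure_code]
    if amounts:
        avg = sum(amounts) / len(amounts)
        if claim.billed_amount > 3 * avg:  # threshold chosen arbitrarily
            flags.append(f"billed {claim.billed_amount:.2f}, over 3x the average {avg:.2f}")

    return flags

# Example: a claim that duplicates one already on file is flagged before
# payment rather than chased for recovery afterward.
history = [Claim("P100", "M1", "99213", 120.0, date(2010, 5, 1))]
new_claim = Claim("P100", "M1", "99213", 120.0, date(2010, 5, 1))
print(screen_claim(new_claim, history))  # ['possible duplicate claim']
```

Real systems layer statistical and predictive models on top of rules like these, but the point of the sketch is simply where the check sits: in the claims pipeline, ahead of payment.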
A recent ruling by the 9th Circuit Court of Appeals is the latest in a series of cases in which individuals whose personal information was involved in a data breach were unable to successfully pursue causes of action due to the lack of actual harm suffered by the data breach victims. In this case, Ruiz v. Gap, Inc., the plaintiff had submitted personal information as part of an online employment application to Gap. Two laptops belonging to Vangent (a contractor providing job application processing services to Gap) were stolen from the contractor's offices. The laptops contained data on some 750,000 Gap job applicants, including Ruiz, and he filed a lawsuit in California against Gap and Vangent alleging negligence, breach of contract, and violations of various other California laws. The Northern District court granted defendants' motion for summary judgment and rejected Ruiz's claims, noting that while the potential future harm he faced, such as increased risk of identity theft, was sufficient to give him standing to sue, the lack of proof of any actual injury due to the theft of his personal information meant the case failed to meet the standard of appreciable harm necessary to bring a cause of action for negligence under California law. The 9th Circuit affirmed the district court's ruling.
The ruling in Ruiz v. Gap follows a recent trend in personal privacy lawsuits in which the parties responsible for breaches of personal information are not subject to private rights of action unless the plaintiffs can prove harm resulted from the breach. The fact that organizations escape potential civil liability in such cases does not mean that they cannot be fined or even criminally prosecuted under state or federal privacy statutes, where such laws exist. A similar dichotomy exists in federal health data breach rules, where liability and even the requirement for organizations suffering breaches to disclose them hinge on a determination of harm due to the breach. Even where organizations assert that no risk of harm to individuals exists, they can still be held liable for violating provisions of the HIPAA Privacy Rule, and even be subject to criminal prosecution if the breaches were the result of willful neglect. As the Ruiz ruling shows, the problem in these cases for individual plaintiffs is not the privacy laws per se, but the tort law requirements for negligence or other causes of action. The legal precedents in these cases (described in detailed case law citations in the Northern District court's order granting summary judgment) suggest that privacy regulations and data disclosure laws may not be the best legal avenue for plaintiffs suing Facebook over its privacy practices or in the ever-rising number of lawsuits being filed against Google over the wireless data collection it conducted in its Street View program. In the case of Google and Street View, plaintiffs seem to be focusing on the company's alleged violation of federal wiretapping laws, rather than asserting privacy violations or breaches of personal information.
Reports of potential breaches of patient privacy at Tri-City Medical Center in Oceanside, California have garnered the HIPAA-related attention you would expect, but are also raising questions about the availability and use of social networking sites within hospitals and other health care facilities. It seems some Tri-City employees posted personal details about patients on Facebook, calling into question the extent to which medical facilities have policies in place about accessing social media and, if access is allowed, about appropriate use to avoid privacy violations under HIPAA. The California Department of Public Health confirmed this week that it has opened its own investigation into the alleged disclosures; the focus of the state-level inquiry appears to be compliance with or violation of the HIPAA Privacy Rule. This incident is far from an isolated occurrence, and as more hospitals move to enact social media policies, the examples set by policies published by major health industry companies like Kaiser Permanente suggest that health care organizations would be wise to err on the side of caution when it comes to patient information. Specifically, while many policy definitions of the protected health information that is the focus of HIPAA regulations enumerate specific attributes like name and date of birth, the Privacy Rule applies to all individually identifiable health information (45 CFR §160.103), and specific details about a patient communicated orally or in writing, even without referencing a name, may fall into this category. Simply put, this means even a casual conversation between two hospital employees about a patient, if held within earshot of others not involved in the patient's care, likely constitutes a HIPAA violation, and the same logic certainly applies to holding such a discussion online.
As Google continues to accede to demands from several European countries and U.S. courts to turn over copies of data it collected over unsecured wireless networks during its Street View program operations, plaintiffs in a class action lawsuit filed in Oregon are pointing to a 2008 patent application the company filed to challenge Google's assertions that the data collection was unintentional. The application, for "Wireless network-based location approximation," appears to emphasize the intent to determine fairly precisely the location of wireless access points, but the method proposed to make that determination clearly includes capturing packets transmitted from the access points being analyzed. Of course, "packet capture" and even "packet analysis" do not necessarily equate to payload inspection, which is where the invasion-of-privacy claims lodged against Google seem to focus, but the patent application makes no distinction about the use of different parts of captured packets (e.g., header vs. payload contents), so there does not seem to be anything in it to back up Google's publicly stated claims that it was never interested in the payload data. The company's response to the latest allegations involving the patent application was to flatly deny any connection between the method for which the patent was sought and the Street View program.
It may be hard to prove intent on the part of Google merely by pointing to the absence of any explicit statement of what Google planned to do with data collected through the method it wanted to patent (or, more specifically, of exactly how wireless packets would be analyzed). Reading the patent application text, the prevailing purpose of the claims is to identify the location of wireless transmission points so that the location information can be used to provide (i.e., sell) location-based services. It certainly seems possible that location identification and traffic analysis for the purposes stated in the patent application could be performed using information in the packet headers alone (an approach that might also prove viable when analyzing encrypted traffic, depending on the encryption method in use). In hindsight, it might be convenient for Google now if its application had said it intended to strip out packet contents and keep only the header data, but patent applications are rarely constrained to describing only those uses of an invention that comply with laws or regulations that might be relevant should the technology or method be put into use. It remains to be seen how legally viable the plaintiffs' arguments about the patent application will be and what they mean for the case, but Google's explanations in the matter so far have not been very credible (save perhaps CEO Eric Schmidt's simple statement, "We screwed up."). If true, the current explanation offered by the company, which attributes the whole Street View data capture to an inadvertent oversight based on the work of one programmer working part-time on the project, raises its own set of concerns, such as wondering how many "rogue" developers there might be among the employees of a technology giant with a stake in just about every major online business.
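For readers curious what header-only analysis might look like in practice, the sketch below uses the Python scapy library to record access point identifiers from 802.11 beacon frames while never touching data-frame payloads. It is only an illustration of the general technique discussed above; the monitor-mode interface name ("wlan0mon") is an assumption, and the code is not a description of how Google's Street View cars or the patented method actually worked.

```python
# Minimal sketch of header-only Wi-Fi analysis using scapy (assumed installed,
# with a wireless card already in monitor mode). It records access point
# identifiers that are broadcast in beacon frames and never inspects or stores
# any data-frame payloads, illustrating the capture-vs-payload distinction.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Beacon, Dot11Elt

seen_access_points = {}  # BSSID (access point MAC address) -> SSID

def record_beacon(pkt):
    # Beacon frames are management frames that access points broadcast in the
    # clear; their identifying fields are not user payload data.
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2  # transmitter address = access point MAC
        ssid = ""
        if pkt.haslayer(Dot11Elt):
            # The first information element in a beacon carries the SSID.
            ssid = pkt[Dot11Elt].info.decode(errors="ignore")
        seen_access_points[bssid] = ssid

# "wlan0mon" is a placeholder interface name (an assumption for this sketch).
sniff(iface="wlan0mon", prn=record_beacon, timeout=60)

for bssid, ssid in seen_access_points.items():
    print(bssid, ssid)
```

Paired with GPS coordinates recorded at capture time, identifiers like these are enough to build the kind of access point location database the patent application describes, which is why header-only collection is at least a plausible reading of the application, even if it does nothing to explain the payload data Google has acknowledged capturing.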