The HHS Office of the National Coordinator (ONC) announced plans to conduct a large-scale survey of consumers on individual attitudes toward health information exchange and toward controlling disclosure of their personal health information, providing further evidence that ONC is increasingly focused not only on addressing personal privacy concerns related to the use of health IT and health information exchange but also on balancing privacy with functionality. In a notice published in the Federal Register, ONC explained that “little is known about individuals’ attitudes toward electronic health information exchange and the extent to which they are interested in determining by whom and how their health information is exchanged.” In conducting the survey, which once begun is intended to reach more than 25,000 U.S. households, ONC hopes “to better understand individuals’ attitudes toward electronic health information exchange and its associated privacy and security aspects as well as inform policy and programmatic objectives.”
The Department of Homeland Security is apparently ready to move forward with a pilot of capabilities to test its Einstein 3 intrusion detection and prevention system. The plan is to work with a commercial service provider that is a designated Access Provider under the Trusted Internet Connection program (the DHS acronym is “TICAP”) to route live network traffic through the Einstein system and validate its technical capabilities, as well as the ability to route traffic flows and provide alerts and other appropriate notifications. Given the sensitivity of the program and the well-established privacy concerns over the prospect of the NSA and other government analysts poring over the full content of Internet traffic flowing to or from government networks, DHS conducted a special Privacy Impact Assessment (PIA) just for the pilot program, in which it lays out the objectives for the pilot “exercise.”
One notable aspect of the network configuration planned for the pilot is that DHS will identify traffic associated with a particular federal agency (using IP addresses allocated to that agency) and re-direct that traffic to a secure monitoring environment where the Einstein system will be installed. Such a configuration effectively pulls the relevant traffic out of the service provider’s network, performs whatever analysis the system can do, and then puts the traffic back on the network, where it presumably proceeds along whatever route it was heading down in the first place. This is a subtle yet significant deviation from a truly in-line deployment (which might be envisioned for the Einstein system in some future production implementation): it allows DHS to focus on traffic for one agency at a time, and it would seem to minimize the overall volume of traffic passing through the Einstein system. Looking ahead to some possible future scenarios, such a configuration might let DHS and the NSA optimize their detection and prevention operations based on whichever agency is the source or target of the network traffic being analyzed.
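To make the redirect logic concrete, here is a minimal sketch of the kind of IP-prefix matching such a configuration implies: flows whose source or destination falls within an agency’s allocated address blocks get diverted through the monitoring environment, and everything else is forwarded normally. The prefixes, function name, and labels below are invented for illustration; actual TICAP allocations and the Einstein routing mechanics are not public.

```python
import ipaddress

# Hypothetical CIDR blocks standing in for one agency's allocated
# address space (real allocations are not published).
AGENCY_PREFIXES = [
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def route_flow(src_ip: str, dst_ip: str) -> str:
    """Classify a flow: divert it through the monitoring environment
    if either endpoint is in the agency's address space, otherwise
    forward it along its normal path."""
    for addr in (ipaddress.ip_address(src_ip), ipaddress.ip_address(dst_ip)):
        if any(addr in net for net in AGENCY_PREFIXES):
            return "monitor"   # pull out of the provider network for analysis
    return "direct"            # untouched traffic stays on its normal route

print(route_flow("192.0.2.5", "8.8.8.8"))    # agency-sourced flow -> monitor
print(route_flow("9.9.9.9", "203.0.113.7"))  # unrelated flow -> direct
```

The point of the sketch is simply that per-agency prefix matching is what lets the pilot scope monitoring to one agency at a time rather than inspecting all traffic crossing the provider’s network.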
With the tremendous rise in security events observed by security administrators for both the Senate and House of Representatives, Congressional leaders are facing the reality that they need to do more to secure systems, data, and computing devices. Senate Sergeant-at-Arms Terrance Gainer provided some insight into the magnitude of the security problem while requesting an increase in his operating budget, including an additional $1 million to strengthen security. The problems range from the high visibility that Senate and House systems offer to attackers to a general lack of security awareness among Congressional members and staffers alike. The trend of rising security incidents is pervasive across the federal government, with the number of incidents reported to the U.S. Computer Emergency Readiness Team (US-CERT) more than tripling between 2006 and 2008, providing some counter-evidence to any suggestion that federal information security is improving under FISMA. Still, the rise in security events reported for the Senate was 20,000 percent between 2008 and 2009. Of course, a security “event” is different from an incident, and the vast majority of the activity seen by the Senate is handled by the security measures put in place to provide just that sort of protection. However, even if you accept the premise that legislative data is especially interesting or valuable as an attack target, it is hard to avoid concluding that there are some fundamental (if unknown) aspects of the security of Congressional networks that make them such attractive targets.
Whatever you believe about the effectiveness of federal agency security regulations, guidelines, and standards promulgated by NIST under the authority delegated to it by FISMA, it is at least interesting to note that legislative offices, systems, and data are not subject to any of the obligations imposed on executive agencies by the very laws that Congress has enacted. Even with common standards and guidelines, the specifics of federal information security management practices vary significantly among agencies, not coincidentally because each agency is responsible for making its own risk-based determinations of which of the threats, vulnerabilities, and risks it faces would result in an impact significant enough to demand mitigation. Congressional systems and security administrators support a group of users (members of Congress and their staffs) as demanding as any in the government, and who have shown reluctance to adopt even basic security measures if they interfere with convenience. It’s also hard to imagine another part of the federal government in which active distrust among co-workers is so pervasive, whether among members of different parties, across different committees, or even between the two houses of Congress. Given this active threat environment, perhaps those in the legislative branch should follow some of the same advice they’ve put into words in the legislation they’ve written, and take a more proactive approach to risk assessment, incident response, and evaluation of the effectiveness of security controls.
The HHS Office of the National Coordinator (ONC) seems to be putting privacy protections (along with security) high on its list of priorities as it works to make widespread adoption of health information technology a reality. In a publicly released draft of ONC’s updated “Health IT Strategic Framework,” privacy and security is one of four major “themes” (the others are meaningful use of health IT, policy and technical infrastructure, and learning health system) characterizing ONC’s federal strategy for health IT. ONC puts particular emphasis on adhering to the privacy principles enumerated in the “Nationwide Privacy and Security Framework for Electronic Exchange of Individually Identifiable Health Information,” which it released in December 2008 with the endorsement of then-HHS Secretary Michael Leavitt. In general, this Framework brought forward and augmented the Fair Information Practices contained in a 1973 report from the Department of Health, Education, and Welfare that formed the basis of the Privacy Act of 1974 and the OECD Privacy Principles. The 2008 Framework has eight core principles, which are essentially the same as what OECD specifies, with the addition of principles of individual access and correction.
From a personal privacy standpoint, it’s hard not to see the implied priority from ONC as a positive development, but given the ambitious goals for health information exchange the government has had since 2004 and re-emphasized in the HITECH Act, some serious balancing among priorities is likely to be needed. The Strategic Planning workgroup of the Health IT Policy Committee has taken up this debate with specific attention to realizing the goal of using health IT to “transform the current health care delivery system into a high performance learning system” in which greater access to information may improve the delivery and quality of health care. While protecting individual rights like patient privacy and honoring consumer preferences is seen as a prerequisite for gaining acceptance of electronic medical records and data sharing through health information exchange, the workgroup seems to understand that some benefits of greater information sharing may be too compelling to be prevented in the name of guaranteeing privacy. As workgroup member Don Detmer said at the group’s March meeting, “We should not force privacy to be more important than health.”
Another point of reference on the relative importance of privacy is the absence of any specific measures, criteria, or standards for privacy in the rules on meaningful use. The healthcare providers, professionals, and organizations eligible to seek the incentive funding to which the meaningful use determination applies are all HIPAA-covered entities, so there is an assumption that these entities’ obligations under the HIPAA Privacy Rule make a separate meaningful use privacy requirement redundant. The language used in the Federal Register publication of the meaningful use Notice of Proposed Rulemaking included a recommendation that providers follow the principles in the Nationwide Privacy and Security Framework, but that direction is advisory, rather than binding. The American Hospital Association, in detailed comments on the proposed rules, objected to references to the Nationwide Privacy and Security Framework principles, primarily because in some instances they exceed what is required of healthcare providers under HIPAA. For others such as the Coalition for Patient Privacy, the lack of explicit privacy requirements for meaningful use is more problematic, particularly the lack of criteria to ensure that individuals (patients) can control the use or disclosure of the information in their electronic health records. The comment period on the meaningful use rules and criteria ended last Monday, so we should know in the next several weeks whether any changes are planned with respect to privacy requirements. Given the strong emphasis so far on encouraging electronic medical record adoption and enabling exchange of information, however, to the extent meaningful use incentives are seen as a facilitator of health IT, adding privacy requirements that might constrain that progress seems fairly unlikely.
In honor of the 10-year anniversary this week of the International Association of Privacy Professionals (IAPP), it seems like a good time to take stock of the state of privacy — in general but especially online — and the active debate over whether privacy matters to people the way it once did. Depending on who you listen to, either no one cares about privacy anymore, or privacy has never been a more important concern and the fight to preserve and extend privacy protections is a significant undertaking. Regardless of where you land on the continuum bounded by those two opinions, there is a constant struggle going on in many fields and industries right now to find the right balance between protecting privacy and letting organizations conduct their business. And whether or not you have a strong interest in protecting the privacy of your own information and that of others, it seems to be getting harder and harder to do.
On the social networking front, now you’ve got CNET columnist Declan McCullagh adding to the positions espoused by Facebook CEO Mark Zuckerberg and Google CEO Eric Schmidt that the growth and enormous popularity of social networking sites are clear evidence that people just aren’t concerned about the privacy of their personal information. McCullagh cites Google’s new Buzz service and its quick rise in use as the latest evidence that no one cares about their privacy online, coming as it does in the face of well-publicized default configuration settings that many considered a critical privacy flaw — enough that the company quickly revised the questionable program behavior. The claims from the big tech execs are pretty remarkable given the indications that each of them has personally given that they do at least care about their own privacy, even if they think none of their users have the same feelings. Regardless of the practical realities about how many people read and understand (or just ignore) privacy policies posted by online service providers, at the end of the day, individual users who share lots of personal details online are doing so by choice, so it’s not a big logical leap to say the erosion of privacy online is exactly what users want.
In an effort to bring more credible opinions (less vested in getting people to share personal information) to the table, McCullagh and others have pointed to the words of federal appeals court judge Richard Posner, who said in a 2008 interview that he thought privacy as a social good is “overrated” and that privacy is not “deeply ingrained in human nature.” Upon fuller examination, what Posner believes actually appears to be more relevant to the contemporary discussions about the right trade-offs or balancing points between privacy and utility, efficiency, or convenience. This issue comes up almost daily in privacy discussions in healthcare and the move toward electronic health records, in press and other media access to government information, and, of course, in social networking.
Of course, not everyone is drinking the privacy-doesn’t-matter Kool-Aid. In a sideways response to his CNET colleague, Chris Matyszczyk first spelled out many of the claims from the no-privacy camp, but drew an important distinction between openly publishing trivial information and revealing really personal details. He ultimately concluded that privacy does matter to people, because people value having some things that they keep to themselves, and even if that set of things varies from person to person, everyone ascribes value to being able to exert some control over what gets shared and with whom. Privacy advocates have long argued that consumers really are empowered to make their preferences known — essentially to choose not to do business with companies that don’t do a good job of protecting privacy — and it may be that companies in more traditional markets than social networking hold different perspectives about what customers expect. Honoring customer wishes to keep their information private does not seem to be something an online enterprise can afford to ignore if it hopes to have a successful future. Online movie rental powerhouse Netflix learned its lesson in this regard, after customers sued the company for violating its own privacy policies by allowing the movie rental preferences of some customers to be disclosed. For its part, the Federal Trade Commission seems to be giving fair notice to companies that consumer-focused changes in the way privacy protections are regulated may well be on the way, particularly given the general dissatisfaction with the notice-and-choice framework most companies currently rely on. We’ll address in a forthcoming post the outcomes of the FTC’s third (and last in the series) roundtable on exploring privacy, held on March 17.