Just a day after Steve Jobs presented the highly anticipated iPad at an event in San Francisco, security analysts are raising concerns that the threats already seen against the iPhone, including malware, attacks on the browser and other applications, and phishing, could extend to the new device. Full technical details on the tablet are not yet known, so some of the issues raised rest on speculation and assumptions, but the popularity of the iPhone has already made it something of an exception to the usual rule of thumb that there are far fewer security threats for Apple computers than there are for PCs.

The iPad shares many technical characteristics with the iPhone (although it has a new processor designed specifically for the iPad), and will presumably run many of the same sorts of applications currently sold through Apple’s App Store. The kinds of attacks often mentioned are certainly not limited to Apple’s products, but devices like the iPhone and the iPad appeal to Mac computing enthusiasts, non-Mac techies, and far less technical segments of the general population alike. Historically, the prevalence of security threats correlates with the popularity of the devices or platforms targeted for exploitation, so the types of attacks launched against the iPhone are driven in part by the popularity of the product, as well as by exploitable vulnerabilities. To the extent that the iPad adopts the security features seen in the iPhone (or shares the iPhone’s gaps, such as the lack of remote disablement), the same sorts of attacks are likely to hit the iPad. All of this remains speculation pending the release of more complete technical details of the product, particularly in the security arena, but regardless of the validity of the recommendations security analysts or vendors might make, so far it seems unlikely that the iPad will be targeted at enterprise use any more than the iPhone has been.
In the latest setback for the Electronic Frontier Foundation and its efforts to hold the National Security Agency accountable for its mass surveillance of phone calls and emails, a federal district court dismissed with prejudice two actions filed by the EFF on behalf of American citizens. The U.S. District Court for the Northern District of California ruled that the plaintiffs’ claims were not “sufficiently particular to those plaintiffs or to a distinct group to which those plaintiffs belong,” but instead constitute a “generalized grievance shared in substantially equal measure by all or a large class of citizens.” Writing for the court, Judge Vaughn Walker cited as precedent a finding in Seegers v. Gonzales: “injuries that are shared and generalized — such as the right to have the government act in accordance with the law — are not sufficient to support standing.” This essentially means the courts have found that because the government is monitoring everyone, the surveillance cannot be stopped by the courts, even if it is illegal. (The case uses the participation of AT&T to extrapolate to all major telecommunications providers; among the telcos, the EFF has focused its legal action on AT&T because of documentation leaked by a former AT&T employee that ostensibly shows AT&T participated in the illegal wiretapping program.)
This is the second legal defeat for the EFF in the past year. Last June, the same judge in the same federal district court ruled in favor of AT&T as a defendant in Hepting v. AT&T, in which the EFF sued the telco giant for cooperating in the NSA’s warrantless wiretapping program. Initially filed in 2006, the case had reached the 9th Circuit on appeal in 2007 before Congress, in enacting the FISA Amendments Act of 2008, granted retroactive immunity to telecommunications companies that had violated the Foreign Intelligence Surveillance Act (FISA). It is hard enough to make sense of a legal construct that at once forbids warrantless wiretapping and forgives it as long as it is conducted broadly enough; it is particularly hard to fathom in the context of the concerns about government-sponsored monitoring of personal communications sparked by the recent China-based hacking incidents.
In honor of Data Protection Day (tomorrow, January 28) and its “Think Privacy” theme, let’s turn our attention to a few current efforts to bring legislated privacy requirements into the 21st century. In Europe, privacy watchers are looking to Viviane Reding, the European Commission’s commissioner for information society and media, who has stated publicly that protection rights for personal data are among her top priorities. Now entering her third term in office, Reding has been appointed commissioner for Justice, Fundamental Rights, and Citizenship for the EC’s 2010 session, and unnamed officials purportedly close to Reding (the European press likes unnamed sources too) have suggested that one area of focus will be a review of the EU’s Data Protection Directive, which among other provisions constrains the collection and use of personal data by EU member countries (the broad general term in the EU law is “processing,” which encompasses more than two dozen operations in the official definition). The Data Protection Directive was enacted 15 years ago, so it would seem that at least some European commissioners think it might be due for revision, or at least a close look to see whether it covers modern information usage.
In the United States, one of the central privacy laws is the Privacy Act of 1974, which constrains U.S. federal government activities related to data collection, use, and disclosure. The Privacy Act has been amended since its enactment over 35 years ago, typically in cases where the advance of technology creates gaps in the law that Congress needs to fill, as was the case with the Computer Matching and Privacy Protection Act, which in 1988 amended (and became part of) the Privacy Act of 1974 to constrain the use of personal data in automated matching programs. In recent years both government and private sector bodies have called for revisions to the Privacy Act, due to the significant changes both in the information technology used to collect and process personal information and in the threats to privacy enabled by technology (identity theft, for example, has existed for many years, but did not offer thieves the opportunity for substantial financial gain before the advent of automated banking technology). Last May, the Information Security and Privacy Advisory Board released a report including a recommended framework for federal privacy policy in the 21st century. Also in process in both houses of Congress are bills that, among other provisions, would strengthen data protection standards in areas such as breach disclosure requirements and consumer empowerment. There are of course many important issues competing for government attention, but as the continued pace of technical change outstrips technical, policy, and regulatory governance mechanisms, it becomes ever more critical that the legal framework be adapted accordingly.
In all the discussion about health information exchange, electronic health records, and establishing trust among public and private sector organizations, what’s often lost is the voice of the consumer. The goal of widespread EHR adoption is usually expressed not in terms of the number or percentage of health providers, insurance plans, or government agencies that will be using the systems, but instead in terms of what proportion of health records are stored in electronic form, with a vision articulated in January 2009 that all U.S. residents would have electronic health records by 2014 (the same deadline President Bush set with his 2004 executive order seeking widespread adoption of EHRs). Significant federal funding has been allocated through the American Recovery and Reinvestment Act to provide financial incentives for health care providers to implement and use EHR technology, although adoption rates in the United States, while improving, still fall well short of a majority, and full penetration within the next four years seems a very ambitious objective. One factor contributing to the lack of progress on EHRs may be patients themselves: the results of a study by the Ponemon Institute released this week suggest that few Americans have sufficient trust in either the federal government or industry to store and access their personal health data. The Office of the National Coordinator within HHS has for a couple of years been focusing on ways to capture, manage, and honor consumer preferences about disclosing personal health information, but to the extent this survey reflects public sentiment, the unwillingness of individual consumers to allow their health information to be shared may present just as significant a barrier to realizing the health information exchange vision as any of the organizational-level issues.
Overcoming this resistance will require significant consumer education and outreach, to be sure, but the effort could be facilitated by doing more to demonstrate that all appropriate measures are being taken to ensure the privacy and security of personal health information.
In a coincidental reinforcement of a point we raised recently, in a different context, about the difficulty of establishing the credibility of information found on the Internet, a reliance on unsubstantiated claims and poorly verified (or unverified) information seems to be at the heart of some of the recent criticisms of the intelligence community’s failure to “connect the dots” and prevent the would-be Christmas Day airline bomber from boarding the flight from Amsterdam to the U.S. In response to a detailed listing of “articulable facts” about the bombing attempt proposed by Bruce McQuain to refute testimony before Congress by FBI Terrorist Screening Center Director Timothy Healy that there was insufficient factual information to provide “reasonable suspicion” about the underwear bomber, Kevin Drum of Mother Jones offers a point-by-point response backed up by news reports and other evidence. (The point-counterpoint came to our attention via security expert Bruce Schneier.) Perhaps the most interesting of the points incorrectly asserted by McQuain (and many, many others) is the claim that Abdulmutallab was traveling on a one-way ticket, which therefore should have served as a red flag. This claim, first asserted on the day of the attack and widely repeated by just about every reputable news source covering the story, turns out not to be true; despite corrections made by the New York Times, MSNBC, and others, the false claim continues to appear in published reports.
So the message here is simple: when you read a claim, look for the evidence, and if there isn’t any, it’s a mistake to rely on the information as factual, no matter how logical it sounds or how reputable the source is considered to be. In theory such errors should be easier to avoid for those posting information online, because adding hyperlinks to reference sources is a simple matter. The more information gets passed around, however, the more likely it is to lose the traceability to sources that helps determine its validity. For a recent example we need look no further than Bruce Schneier once again. In a recent essay on the Google-China hacking incident, Schneier refers to reports that China exploited long-existing “back doors installed to facilitate government eavesdropping” (the “government” in this statement being the American one, not the Chinese), and the essay embeds links to more than one published story, as well as some of his own previous writing, as evidence for the assertion. When CNN.com picked up the piece and ran it, none of the supporting evidence (or, more specifically, the links to it) was included with the story. A reader on CNN.com would therefore see a strong but unsubstantiated assertion that the attacks on Google were actually facilitated by a legally required back channel maintained by Google to allow access by law enforcement authorities. The existence and exploitation of the “internal intercept” back channel is attributed by Macworld only to an anonymous “source familiar with the situation”: a familiar phrase in the press, but one not particularly useful in assessing the credibility of a claim.