A new report released yesterday by the Government Accountability Office (GAO) reiterates existing security issues and weaknesses across the federal government, and includes a dozen recommended actions to improve federal cybersecurity, reflecting the results of panel discussions with public and private sector experts. It’s an ambitious list, but given the persistence of some of the problems, if the GAO can provide a roadmap that senior policy officials like the still-to-be-named cybersecurity czar can use to focus attention and direct resources to a set of priorities, there may be an opportunity to make some progress in these areas.
On a timely parallel note this week, NSA Information Assurance Director Richard Schaeffer Jr. testified before the Senate Judiciary Committee’s Subcommittee on Terrorism and Homeland Security that if agencies focused security efforts on instituting best practices, standard secure configuration settings, and good network monitoring, those actions alone would guard against the majority of threats and cyberattacks agencies face. This sort of 80/20 rule is not intended to obviate the need for risk assessments or comprehensive implementation of effective security controls in accordance with FISMA and other federal requirements, but the message from NSA seems to be a clear call to agencies to get the basics right.
There are lots of new and improved privacy and security requirements scheduled to come into effect over the next few months, including enhancements of existing HIPAA security and privacy provisions that were added in the HITECH Act, which passed in February as part of the American Recovery and Reinvestment Act. As the time draws near when both HIPAA covered entities and some non-covered entities will need to comply with the new regulations, many indications point to a general lack of readiness among these organizations to meet HITECH’s requirements. The results of a survey conducted by the Ponemon Institute and published this week by the survey’s sponsor Crowe Horwath found that the vast majority of healthcare organizations surveyed do not think they are ready to comply with the new security and privacy requirements in the HITECH Act. While it should be noted that Crowe Horwath has a business interest in this research as a provider of risk management and compliance consulting services, the near consensus of survey respondents and the troubling lack of resources available to achieve compliance raise significant questions about realistic expectations for compliance and enforcement of the new requirements. On a similar theme, the effective date for Massachusetts’ sweeping personal information security regulations in 201 CMR 17 has been pushed back twice — first from January 1, 2009 to May 1, and then to January 1, 2010 — in order to give affected organizations more time to understand what was needed to comply and to put appropriate measures in place.
What is less often cited when focusing on efforts to comply with new rules is the extent to which organizations are (or are not) already complying with existing regulations and requirements such as those in the HIPAA privacy rule and security rule. The ability of organizations to reach and maintain compliance has varied greatly with organization size — small organizations tend to have less ability to dedicate staff or financial resources to compliance efforts, or to have personnel with explicit responsibility for information security and privacy. The recent survey indicated that a large majority of organizations do not currently comply with all mandated practices, such as the 79 percent of respondents that do not conduct regular audits or independent assessments of their compliance or of the adequacy of their security and privacy programs.
One way to approach this situation is of course to delay implementation dates. However, it may make more sense to stick to the schedule prescribed in the HITECH Act for when requirements take effect, and adopt an approach to organizational monitoring and compliance enforcement that takes into account the time, resources, and level of effort required to meet the regulations. Current health IT initiatives almost always include phased or incremental rollout strategies, so a similar approach could be followed for security and privacy compliance. One potential benefit of keeping to the original implementation schedule is that as the subset of covered organizations that are ready for HITECH formalize their programs, there should be an opportunity to leverage their example to help less prepared organizations reach compliance with the law.
In another example of stronger individual-level privacy protections in the European Community compared to those in the United States, the EU Council this week approved a law that requires online users to be asked for explicit permission before a cookie is stored on a user’s system. This is a shift from existing EU telecommunications law, which requires only that users be given a way to opt out. While the intent of the law is clearly to protect users from a variety of potentially invasive practices such as online behavioral tracking, critics of the provision have rightly pointed to the larger ramifications of the law for economic drivers of Internet operations such as online advertising, as well as its impact on the end-user experience if web site visitors are constantly interrupted by prompts seeking their consent to place cookies. Even when ostensibly done to protect users, the response to similar security-driven approaches, such as the user account control prompts in Windows Vista, suggests that this sort of interruption of usability in the name of security is over the line of what is acceptable.
As noted in this space last week, based on recent activity in the Senate and similar if less immediate legislative proposals in the House of Representatives, it seems possible that Congress will move ahead with enactment of a federal-level data breach disclosure law. Given the patchwork of state-level and domain-specific laws that already exist, there is clearly potential to standardize and perhaps simplify the data breach picture, at least by setting a minimum threshold, which might in turn translate into the use of more proactive data protection practices and technologies by organizations subject to such regulations. In a countering view, CSO magazine conducted its own informal survey as to whether a federal cybersecurity law was the right approach, and saw responses heavily leaning to an answer of “no.” Before extending this sentiment to the current efforts on national data breach disclosure standards, however, it would be a good idea to distinguish just how much “cybersecurity” is really in the proposed legislation.
The responses highlighted by CSO show a lot of skepticism about the government’s ability to legislate anything that results in better security for those subject to the laws. Without debating the merits of these arguments (there surely are some merits), it might be useful for the survey respondents to remember that the proposed laws aren’t primarily intended to increase the level of security measures organizations apply to data and systems to reduce breaches, but instead to require that when breaches occur, those affected by the breaches must be told. Hopefully such a law will provide an incentive to organizations to take steps to avoid breaches, but aside from granting exceptions in cases where lost data has been rendered unusable through encryption or comparable mechanisms, the Congressional bills don’t even attempt to mandate any particular security practice or use of technology. The provisions in Leahy’s S. 1490 that increase the penalties for identity theft logically can only be seen as an additional disincentive to behavior already prohibited by current law. The absence of technical specificity is a standard feature of security laws, as Congress (quite rightly) doesn’t believe it has the expertise to specify technical mechanisms and certainly doesn’t want to be in the business of promoting one technology over another.
We read the proposed legislation in the context of the greater transparency sought by the current administration on many fronts. Requiring data breach disclosures is a way to make data-holding organizations accountable for their security lapses, and according to the sponsors of the bill it is driven largely by concerns over consumer protection, rather than a desire to augment data stewardship requirements or strengthen data protection practices. Those who argue that the security realm doesn’t need more enforcement mechanisms are presumably working under an assumption that commercial and public sector entities can be trusted to do the right thing — the very sort of trust model that has defined approaches to complying with FISMA, HIPAA, FERPA, and other major security and privacy laws. These assumptions have more serious implications for organizational security postures than does the prospect of federal-level laws addressing data security and privacy.
This week brought another opportunity for federal health IT executives working on information exchange to continue to focus attention on the challenge of reconciling different security and privacy laws applicable to federal and non-federal entities. As seen and heard previously, and again this week at an Input event, there remains an implicit bias on the part of the feds to assert that being subject to FISMA somehow translates to more rigorous security. The continuity of this theme, plus what seem to be over-simplifications in the content of the article by the usually outstanding and insightful Mary Mosquera, prompted the following reply, submitted online to Government Health IT:
The statement in the article, “SSA does not provide healthcare, so HIPAA regulations do not apply” only addresses one end of the information exchange being described. MedVirginia is absolutely a HIPAA-covered entity, even if SSA is not. This puts different obligations in play for each participant in the exchange, which is the crux of the problem. Those quoted in this article (once again) imply that because FISMA is required for the federal government, government agency security is a stronger constraint (more specific and more detailed, if not more robust or actually “better”) than security requirements that apply to non-government entities. This is a false argument. Sankaran’s statement that “we can’t have the government having to check that all these systems are compliant” is particularly nonsensical. The only FISMA “auditing” that occurs now is internal, as agency inspectors general conduct FISMA compliance reviews of their own agencies. There is no independent audit of agencies for FISMA compliance, and there is also no penalty imposed (other than a bad grade on a scorecard) for agency failure to comply with FISMA requirements.
The scenario described by FHA lead Vish Sankaran, where a small medical practitioner would be challenged to comply with all the requirements for security controls that would apply under the law, is a red herring too. Small practitioners are already bound by HIPAA as covered entities, just as large hospitals are, and to the extent that these offices use computerized records (the standard industry term is ePHI, or electronic protected health information), they must already adhere to the requirements of the HIPAA security rule. Sankaran implies that by exchanging data with government agencies, these practitioners would be subject to FISMA, but this is not the way the law works. Non-government entities like contractors are only bound by FISMA if they hold or process data “on behalf of” the federal government; merely storing or using copies of government data does not bring a private health provider under the coverage of FISMA, even if that data is owned by the government. The current situation described in this article, where federal agencies would want to hold private providers to FISMA’s requirements, may in fact be what federal health stakeholders want, but it is simply not a requirement under the law.