NASA implements policy to suspend C&A in favor of continuous monitoring

Taking the latest information security guidance from OMB to heart, NASA Deputy CIO for Information Security Jerry Davis issued a directive this week to all NASA ISSOs, system owners, authorizing officials, and IT managers and operators that suspends certification and accreditation activity for existing systems in favor of a streamlined, risk-based approach focused on continuous monitoring. The decision, hailed in the press as a bold move, comes in advance of any federal standards or guidelines as to just how agencies should effect the sort of shift called for by OMB, and is notable at least in part for its candid acknowledgment that C&A processes to date “have proven largely ineffective and do not ensure a system’s security, or a true understanding of the system’s risk posture.” Davis acknowledges in the memo that OMB allows and expects different agencies to make their own decisions about how to apply security guidance and requirements, and clearly believes that the new approach complies with the spirit and real intent of the NIST security guidelines agencies are obligated to follow under FISMA, even if it departs from the FISMA reporting requirements that remain in force. NASA will use current standard C&A processes only for new systems seeking their initial authorization to operate, and even that use is intended to be temporary “until a more effective security authorization process is established.”

NASA is not the first or even highest-profile agency to move forward aggressively on continuous monitoring, but it may be the first to do so with the explicit goal of improving its security posture. The State Department has implemented a well-regarded program under CISO John Streufert that frequently and regularly scans all the systems and devices in the State computing environment for vulnerabilities and correct software configurations, assigning risk scores to the results of those scans. While highlighting the improvements seen in those risk scores since the monitoring and assessment initiative began, Streufert also suggests a real economic benefit comes from State’s risk-based methods, often citing the tens of millions of dollars State has spent in the recent past producing thousands of pages of security accreditation documentation. For its part, NASA’s change in information security focus is likely driven in part by recommendations from a GAO report issued last October that found significant weaknesses in many aspects of the agency’s information security program. State and NASA (and many other agencies and members of Congress) agree that conventional FISMA compliance-focused security processes are not the most effective approach to security, but where State emphasizes the wasted time, money, and operational resources devoted to compliance as a reason to change course, NASA seems primarily interested in finding ways to improve its security posture.
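To make the idea concrete, here is a minimal sketch of the kind of scan-driven risk scoring such a program implies. The categories, weights, and data shapes below are purely illustrative assumptions on our part, not State’s actual scoring model:

```python
# Hypothetical sketch of continuous-monitoring risk scoring; the categories,
# weights, and data shapes are illustrative assumptions, not State's model.
from dataclasses import dataclass

# Illustrative weights per finding category (assumed, not official).
WEIGHTS = {
    "vulnerability": 10.0,   # unpatched flaw found by a scanner
    "configuration": 5.0,    # deviation from an approved baseline
    "stale_scan": 2.0,       # host not scanned within the expected window
}

@dataclass
class Finding:
    host: str
    category: str    # one of the WEIGHTS keys
    severity: float  # normalized severity in [0, 1]

def score_hosts(findings: list[Finding]) -> dict[str, float]:
    """Aggregate weighted findings into a per-host risk score;
    higher scores mean the host needs more attention."""
    scores: dict[str, float] = {}
    for f in findings:
        scores[f.host] = scores.get(f.host, 0.0) + WEIGHTS[f.category] * f.severity
    return scores

if __name__ == "__main__":
    sample = [
        Finding("web01", "vulnerability", 0.9),
        Finding("web01", "configuration", 0.5),
        Finding("db02", "stale_scan", 1.0),
    ]
    for host, score in sorted(score_hosts(sample).items(), key=lambda kv: -kv[1]):
        print(f"{host}: {score:.1f}")
```

The point of a scheme like this is that the score moves as soon as a new scan lands, giving administrators a continuously updated and comparable view of where risk is accumulating.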

What’s most interesting (dare we say exciting?) to us is the extent to which decisions like NASA’s might represent a harbinger of a permanent and substantive shift in federal information security management practices. It’s hard to know where the tipping point is, or even whether the changes at individual agencies can achieve the critical mass necessary to abandon compliance-based security without coordinated action from OMB, or Congress, or both. It is certainly helpful for agencies that have not yet made this shift in their information security programs to have multiple examples or models to consider when seeking successful approaches to continuous monitoring.

Issues raised about no-fly list checks provide a nice lesson in disparate impact of false positives and false negatives

The last-minute apprehension of the would-be Times Square bomber, who had already boarded an international flight despite being placed on the government’s no-fly list, provides one of the rare instances where real-time integration or data propagation is actually needed. Prior to this incident, airlines had a 24-hour window to compare ticketed passengers to the no-fly list after an update; the window is down to two hours now. There are many aspects of the government’s anti-terrorism practices that remain under scrutiny, not just the no-fly list part, but apparent problems with checking the list surfaced in connection with the attempted Christmas Day airliner bombing as well (although in that incident the issue at hand was why the person in question hadn’t been put on the no-fly list despite suspected terrorist affiliations).
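The mechanics here amount to a bound on data staleness. As a toy illustration (our own simplification, not any carrier’s actual system), a cached copy of the list might be usable only while it is fresher than the permitted window:

```python
# Toy illustration (not any carrier's actual system) of how the allowed
# sync window bounds the staleness of a locally cached no-fly list.
from datetime import datetime, timedelta, timezone

MAX_STALENESS = timedelta(hours=2)  # effectively 24 hours before the rule change

def may_rely_on_cached_list(last_sync: datetime, now=None) -> bool:
    """A cached copy of the watchlist is usable only if it was refreshed
    within the permitted window; otherwise it must be re-synced first."""
    now = now or datetime.now(timezone.utc)
    return now - last_sync <= MAX_STALENESS

# A copy last synced three hours ago fails the two-hour freshness test.
last_sync = datetime.now(timezone.utc) - timedelta(hours=3)
print(may_rely_on_cached_list(last_sync))  # False: re-download before screening
```

Shrinking MAX_STALENESS from 24 hours to two is exactly the kind of change the new rule imposes: it narrows the interval in which a newly listed passenger can slip through on an outdated copy.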

The system has pervasive shortcomings, both in terms of performing as intended and in the frequently recurring instances of non-threatening individuals flagged because something about them is similar to someone legitimately on the no-fly list. This has been a problem for years, usually coming to public light when someone famous or important falls victim to false positives from terrorist detection activities. Senator Ted Kennedy ran into this sort of issue five times in the same month back in 2004. More recently we noted the case of European Member of Parliament Sophie In’t Veld, who, frustrated not only at being singled out for screening when traveling but also at being unable to find out what information the government had on her that kept flagging her name, sued the U.S. government (unsuccessfully) under the Freedom of Information Act to try to learn what was on file about her.
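The name-similarity problem is easy to reproduce. The toy matcher below, built on Python’s standard-library SequenceMatcher, is our own simplification (real screening systems presumably use more sophisticated matching such as phonetic encodings, aliases, and dates of birth), but it shows how any similarity threshold inevitably sweeps in lookalike names:

```python
# Toy matcher showing how threshold-based name similarity sweeps in lookalikes.
# The names and threshold here are hypothetical, chosen for illustration.
from difflib import SequenceMatcher

WATCHLIST = ["John Michaelson"]  # hypothetical listed name
THRESHOLD = 0.8  # illustrative; lower thresholds flag even more innocent travelers

def flagged(passenger: str) -> bool:
    """Flag a passenger whose name is 'close enough' to any watchlist entry."""
    return any(
        SequenceMatcher(None, passenger.lower(), entry.lower()).ratio() >= THRESHOLD
        for entry in WATCHLIST
    )

print(flagged("Jon Michelson"))   # True: a different person, caught by similarity
print(flagged("Maria Gonzalez"))  # False: nothing close on the list
```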

As frustrating as the experience may be for anyone incorrectly flagged by the no-fly system or mistaken for someone on a terrorist watchlist, the fact that non-threatening people are misidentified by the systems as threats is partly by design. No detection system is perfect, so with any such system you have to expect some level of false positives (what statisticians call Type I error) and false negatives (Type II error). The relative effectiveness of information security measures such as intrusion detection systems or access control mechanisms like biometrics is sometimes described in terms of the rates of these types of errors, or in terms of the crossover error rate, which generally is the point at which Type I and Type II error rates are equal. In many cases, having equal rates of false positives and false negatives is not the goal, because the potential impact of these errors is not equivalent. In the case of terrorist watchlists, the government is comfortable with a relatively high false positive rate (that is, mistakenly flagging individuals as threats when in fact they are not) because the impact there is (merely) inconvenience to members of the traveling public. What the government wants to avoid is a false negative: the failure to identify a threat before that threat makes it onto an airplane. The fact that Faisal Shahzad was able to board is an example of a false negative; his name was on the no-fly list, but the airline apparently didn’t check it either at the time of purchase or before boarding. Tightening the time constraint within which airlines must check the no-fly list has the effect of reducing Type II errors, which is the primary goal of government anti-terror programs. The government is much less interested in reducing Type I errors, at least if there is any chance that reducing false positives (say, by removing names from watchlists when false positives are associated with them) might increase the chance of false negatives.
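A small synthetic example makes the trade-off explicit. The score distributions below are invented for illustration, but the mechanics apply to any threshold-based detector, from watchlist matching to intrusion detection:

```python
# Worked sketch of Type I / Type II trade-offs for a threshold-based detector;
# the score distributions are synthetic assumptions purely for illustration.
import random

random.seed(0)
# Synthetic "match scores": innocents cluster low, true threats cluster high.
innocents = [random.gauss(0.3, 0.15) for _ in range(10_000)]
threats = [random.gauss(0.7, 0.15) for _ in range(10_000)]

def rates(threshold: float) -> tuple[float, float]:
    fpr = sum(s >= threshold for s in innocents) / len(innocents)  # Type I
    fnr = sum(s < threshold for s in threats) / len(threats)       # Type II
    return fpr, fnr

# The crossover error rate is (roughly) the threshold where the curves meet.
crossover = min((t / 100 for t in range(101)),
                key=lambda t: abs(rates(t)[0] - rates(t)[1]))

# A screening system with asymmetric costs deliberately sits below crossover.
for t in (crossover, 0.35):
    fpr, fnr = rates(t)
    print(f"threshold={t:.2f}  false positives={fpr:.1%}  false negatives={fnr:.1%}")
```

Lowering the threshold below the crossover point drives false negatives toward zero at the cost of many more false positives, which is precisely the trade the government has chosen.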

HITECH restrictions on sale of health record data constrain some EHR plans

As the provisions of the Health Information Technology for Economic and Clinical Health (HITECH) Act continue to be implemented, many health care organizations are beginning to understand that changes to security and privacy requirements originally promulgated under HIPAA and now strengthened under HITECH are not the only considerations. While one significant change under HITECH (Title XIII of the American Recovery and Reinvestment Act of 2009) made business associates directly accountable for most HIPAA requirements, there are changes in the rules, and in the penalties for non-compliance, that will have an impact on HIPAA-covered entities and business associates that may have thought they had HIPAA compliance well under control. Newer entrants to the health IT market such as personal health record vendors have a potentially confusing path to navigate, as some prominent features of HITECH like health data breach notification rules apply to PHR vendors and other non-covered entities, but many of the other provisions do not.

A relevant example of the evolving regulatory landscape is the extent to which organizations that hold electronic medical records or other online health data are allowed to charge for sharing them with someone else, and the circumstances under which any such payments may be constrained by the law. Not only might charging for health records appeal to some third-party providers looking to offer EHR system usage, patient or provider portals, clinical data repositories, or other health record functionality on a software-as-a-service basis, but some sort of data access or per-record fee might give covered entities and business associates financial incentives (or just help cover costs) for operating health IT systems and making their data available for exchange with other entities. We’ve noted before that the absence of such a business model is a significant but typically overlooked obstacle to widespread adoption of health information exchanges.

Section 13405(d) of the HITECH Act specifies new prohibitions on the sale of electronic health records or any protected health information held by covered entities or business associates. (The restriction does not apply to other third-party entities, which could make for some interesting legal loopholes for third-party holders of health data who do not have business associate agreements in place with whatever organizations serve as the source for their data (covered entities, individuals, personal health record systems, etc.) or do not “process” health data in a way that would make them covered entities as clearinghouses under HIPAA.) At first read, the text of the law seems very clear and highly restrictive: “a covered entity or business associate shall not directly or indirectly receive remuneration in exchange for any protected health information of an individual unless the covered entity obtained from the individual a valid authorization that includes a specification of whether the protected health information can be further exchanged for remuneration by the entity receiving protected health information of that individual.” There are, however, quite a few exceptions provided in the law, notably allowing payment for:

  • public health activities
  • research, where the fee reflects the cost of preparing and transmitting the data
  • treatment of the individual
  • health care operations (anything falling within the HIPAA definition of the term)
  • exchanges of protected health information that a business associate undertakes on behalf of a covered entity, where the remuneration comes from that covered entity
  • providing an individual with a copy of the individual’s protected health information

This means that a covered entity is allowed under the law to charge for providing or processing health records for a variety of purposes, including charging individuals for their own records (although an existing requirement under HIPAA specifies that the charge to an individual cannot exceed the actual labor cost of furnishing the record). The exception for health care operations would seem to pave the way for enterprising third-party data management services to provide outsourced health IT to providers, and would also allow covered entities to charge each other for sharing the records they hold. The rules prohibiting sales of health data would, however, appear to make illegal some of the creative business models being proposed in the market, including ones where providers receive free or discounted access to shared EHR services in exchange for the service providers selling some of the data to third parties such as drug manufacturers. The HITECH rules would seem to prohibit this sort of approach even where the data being sold is de-identified.

Federal agencies have a window of opportunity to move on continuous monitoring

The call now seems to be coming from all sides that federal government agencies need to fully embrace risk-based approaches to information security and move towards continuous monitoring and enterprise situational awareness. OMB, in coordination with the Departments of Justice and Homeland Security, is pushing executive agencies to change the way they report security program information under FISMA, first by going to online submission via Cyberscope, and then moving to monthly reporting as a step towards “continuous” monitoring of federal IT systems. There is little question this should be an improvement (from a security standpoint) over the current approach of producing hundreds of pages of documentation to certify IT systems and accredit them so they can run in production, then basically ignoring them for as long as three years unless something significant changes about the systems or the environment they run in. However, unless and until the information reported by agencies actually represents meaningful security metrics, it’s hard to see how upping the frequency of reporting will help much. Monthly compliance verification may be better than annual (or less frequent) verification, but it’s still just compliance, and compliance does not equal security. There appears to be a lot of thinking going on about what sort of metrics or operational activities are most appropriate to deliver continuous monitoring, but to date, there aren’t many concrete recommendations.
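As a thought experiment, consider the difference between reporting a count of completed C&A packages and reporting something like the sketch below, which computes operational coverage figures directly from scan data. The field names and thresholds are our own assumptions, not Cyberscope’s actual schema:

```python
# Illustrative sketch of the kind of operational metric continuous monitoring
# could report instead of document counts; the field names and the three-day
# scan-age threshold are assumptions, not Cyberscope's actual schema.
from datetime import datetime, timedelta, timezone

MAX_SCAN_AGE = timedelta(days=3)

def coverage_metrics(hosts: list[dict]) -> dict[str, float]:
    """Percent of hosts scanned recently and percent meeting their
    configuration baseline, computed from per-host inventory records."""
    now = datetime.now(timezone.utc)
    recent = sum(now - h["last_scan"] <= MAX_SCAN_AGE for h in hosts)
    compliant = sum(h["meets_baseline"] for h in hosts)
    return {
        "scan_coverage": recent / len(hosts),
        "baseline_compliance": compliant / len(hosts),
    }

inventory = [
    {"last_scan": datetime.now(timezone.utc) - timedelta(days=1), "meets_baseline": True},
    {"last_scan": datetime.now(timezone.utc) - timedelta(days=10), "meets_baseline": False},
]
print(coverage_metrics(inventory))  # {'scan_coverage': 0.5, 'baseline_compliance': 0.5}
```

Metrics of this sort can be generated automatically and as often as the underlying scans run, which is what would make monthly (or truly continuous) reporting more than a compliance exercise.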

This is a risky position for federal agencies to be in, because there are also bills working through both houses of Congress that aim to strengthen and improve FISMA and that could end up dictating what agencies must do to improve security. Given the other priorities in Congress and the looming mid-term elections, it’s anyone’s guess whether a new security bill will make it through to enactment during this 111th Congress; we suspect none will. This gives federal agencies a window of opportunity to propose approaches, metrics, or processes that would help realize the objectives sought in the draft House and Senate legislation, without waiting to be on the receiving end of legislative mandates that agency CISOs may not be happy about. For all its good intentions, this really seems to be an area that OMB is ill-prepared to address effectively, but there are enough agencies (State and VA come to mind among larger agencies) making significant inroads into continuous monitoring that it would be feasible to carve out some common ground for potential government-wide security approaches. NIST’s new Risk Management Framework and the corresponding guidance resulting from the Joint Task Force Transformation Initiative would also seem to be a step in the right direction here, but despite the apparent executive agreement among NIST, DoD, the Intelligence Community, and CNSS to adopt a common security control framework and risk management process, the message has yet to reach the program and project teams working to accredit systems, or their authorizing officials. There seems to be a lot of business as usual, with DoD folks following DIACAP and civilian agencies still producing more documentation than evidence of effective security control implementation and usage. Before the sort of common risk-based approach advocated by the Joint Task Force can become pervasive, a stronger business case needs to be made (or different governance criteria need to be put in place) to evolve the process to match the new guidance. One way this sort of governance could happen is by giving the cybersecurity czar budgetary approval authority (as the draft House FISMA bill would do), but presumably most federal CISOs would rather not go to that extreme if they can avoid it. Proactivity is not a strong suit for many agencies or their information security programs, but if they act now, agencies just might be able to obviate the need for such oversight.

HHS says stronger HIPAA enforcement on the way with privacy and security audits

Representatives from the HHS Office for Civil Rights (OCR) said last week that OCR plans to begin conducting HIPAA compliance audits for security and privacy later this year, implementing a proactive audit program required under the provisions of the HITECH Act and marking a shift from the largely reactive approach to compliance and enforcement seen since the HIPAA Privacy and Security Rules went into effect in 2003 and 2005, respectively. Susan McAndrew, OCR’s Deputy Director for Privacy, said in an interview that OCR is still working to determine the best model to use for compliance audits, but noted that when implemented, the audit program will likely be contracted out, rather than performed by OCR staff, and that audits will focus on how covered entities are meeting specific HIPAA requirements such as implementation of appropriate safeguards and seek evidence that risk analysis, contingency planning, and other key activities are in fact being carried out.

The HITECH Act included several provisions intended to strengthen HIPAA enforcement, including increasing civil and criminal penalties for HIPAA violations, giving state attorneys general the right to sue covered entities for violations on behalf of state residents, and obligating OCR to launch formal investigations in cases where willful neglect of HIPAA rules is involved. All of these measures still focus on HIPAA violations after they have been reported, typically through complaints filed with the government alleging violations. OCR has long been responsible for HIPAA Privacy Rule compliance activities, and was given responsibility for Security Rule enforcement last July (it was previously handled by CMS). The standard enforcement process OCR uses allows for compliance audits of covered entities, but practically speaking, investigative and enforcement actions depend overwhelmingly on the complaint process. In contrast, shifting to a more proactive stance and checking for compliance by covered entities absent any complaints is accurately perceived as a significant strengthening of HIPAA enforcement, particularly for security. This comes as welcome news for those in the healthcare privacy and security arena who believe that reactive enforcement alone is tantamount to no enforcement at all — a belief supported by the paucity of civil and, especially, criminal cases brought against violators, and by recent surveys that suggest that large percentages of healthcare organizations have not implemented core Security Rule requirements such as conducting a risk analysis.

While the additional responsibility for HIPAA security enforcement came with some additional OCR resources, two obvious questions arise regardless of the audit approach OCR adopts: what level of compliance auditing is even feasible given the Office’s resources (contractor or in-house), and will that amount of auditing provide any meaningful information about compliance levels more broadly? OCR has indicated that it wants to settle on a model first, and then determine the best approach to implement that model, but the importance of the model itself should not be underestimated. The envisioned audit process purports to examine the extent to which the safeguards put in place by covered entities are appropriate, but there is no measurable standard for the full set of administrative, operational, and technical security controls called for in the Security Rule, so OCR either needs to come up with one or, alternatively, produce some consistent guidance by which subjective determinations of security control effectiveness — and by extension, Security Rule compliance — can be made.