Sunday, March 25, 2007

Self-Help on the Internet: Is the Best Defense Really a Good Offense?

What are you willing to do to defend your network against hackers, zombies and Fourth Generation information warriors? What are you willing to do to defend the Internet from them? Are you going to hunker down and harden your defenses, or are you willing to defend yourself by “hacking back” and shutting the attackers down?

You may not have considered these questions; they are not usually brought up as part of information security strategy or tactical development, or in IT planning in general. But they are fundamental questions, and the answers we give may determine how well we are able to manage and maintain the information systems we have built and become increasingly dependent upon.

The Fundamental Problem of Network Insecurity.

The Internet was not designed for security, and neither were most computers. This was a feature, not a bug; security slows down communications and interferes with convenience. There was no real demand for security until the vulnerabilities of these systems became painfully obvious, which is a recent development for most people; and many still don’t seem to get it. As a result the Internet, including all the networks which connect to and constitute it, is exposed to attacks from vast swaths of unsecured systems.

The Internet is also not something you can effectively police. Most law enforcement agencies don’t have the time, resources or expertise to investigate or prosecute Internet-based crimes. And many attacks cross jurisdictional boundaries, making legal action difficult and often impossible. Even when legal action is possible, it is usually too late: the harm has been done. For the bad guys this too is a feature rather than a bug.

This means that networks connected to the Internet – ultimately the Internet itself – are subject to degradation by hostile or malicious activities. The Internet is a common good – an amazing asset shared by a community whose membership is limited only by access to and ability to use a networked computer – and as such is subject to partisan or minority abuses which undermine or conceivably could even ruin it for everyone.

So how do we defend this amazing resource? If we can’t call in law enforcement, what about self-help? Should we form some kind of Internet militia? Maybe some vigilante action? Before you decide, consider the following cautionary tale.

Shooting from the Hip.

Warbucks Financial is a boutique financial services firm whose founder, “Sonny” Warbucks, is a well-known entrepreneur with controversial views and a choleric personality. Warbucks uses the latest information technologies for trading and employs Francis X. Hackerman as its Chief Information Officer. Hackerman made his reputation as a notorious hacker, and while officially reformed he considers himself a highly skilled “hired gun.”

The University of Hard Knocks has a large, complex network serving thousands of users. Security is hard to maintain, since security measures are resisted and/or ignored by many users. One of the groups of resisters is the Script Kiddiez for Justice, which has taken a very public dislike to Warbucks.

Shortly before closing on a Friday afternoon, Warbucks begins experiencing a distributed denial-of-service (DDoS) attack which threatens to shut down its ability to execute trades. This is a crucial time of the week, and Warbucks’ clients may face serious losses if their trades are delayed.

Hackerman tries to deal with the attack by hardening the Warbucks network, but this slows trading even further. He identifies the Hard Knocks network as a source of the attack and assumes the Script Kiddiez are behind it. Hackerman tries to contact Hard Knocks Information Services to get them to intervene, but all he gets is voice mail.

A red-faced, bellowing Sonny appears in Hackerman’s doorway, demanding that he “fix it and quick.” Hackerman decides to try to eliminate the attack – and Sonny’s disturbing presence – by shutting down some or all of the Hard Knocks network.

Hackerman is a former student at Hard Knocks and knows a number of vulnerabilities in its network. He quickly modifies a publicly available worm and releases it into the Hard Knocks network, and soon hosts on the network begin shutting down.

Meanwhile, Eunice Victim has just been admitted to the Hapless Hospital emergency room to have a boil lanced. Hapless is a teaching hospital which is part of Hard Knocks and runs its information systems on the Hard Knocks network. These systems include a computerized physician order entry (CPOE) application linked to its electronic medical records system (EMR).

Victim’s EMR indicates she has an allergy to amoxicillin. However, as her treating physician, Dr. Ohno, was ordering antibiotics, Hackerman’s worm crashed the CPOE. Ohno then asked Victim about possible antibiotic allergies, but unfortunately Victim misunderstood and indicated she had none. When Victim received the amoxicillin Ohno ordered she went into anaphylactic shock and died.

The attack on Warbucks continued, and subsequent investigations indicated it most likely originated somewhere in the Middle East as part of a broad series of attacks on financial institution networks probably intended to harm U.S. financial markets.

The Hard Knocks incident response team traced the worm back to Warbucks Financial. Victim’s estate sued Ohno and Hapless for negligence in her death, and they in turn cross-claimed against Warbucks Financial, Sonny and Hackerman. Hapless, Hard Knocks and Victim’s family all demanded that criminal charges be brought against Hackerman, Sonny and Warbucks Financial.

Categories of Network Self-Help.

The above scenario would make a great bar exam question, and I challenge readers to identify all the legal issues it presents. The immediate point, however, is that the risks posed by network self-defense actions increase dramatically in proportion to the degree that they affect systems outside the network’s legal and operational perimeter.

This is because within a network perimeter the network operator has (1) sufficient information, at least in principle, to identify and avoid unintended harmful consequences of security measures, and (2) the legal authority to implement any security measures it wants, subject to its own policy limitations. Conversely, in another party’s network it generally has limited information, and has the legal right to act only to the extent the network owner gives permission.

Given these constraints Internet self-help can be categorized roughly as follows:

• Baseline: At the most basic level, within its network perimeter a party can implement whatever security measures it considers appropriate. It may also have a legal duty to do so, if the failure to implement security measures exposes others to avoidable risks (e.g., unsecured hosts used to launch DDoS attacks on third parties).

• Investigative: Moving out from its own network perimeter, a party has the legal right to conduct limited investigative activities to identify potential attack sources, to the extent these activities are not harmful and are consistent with ordinary Internet functions (e.g., pinging a host). At least sometimes this may be useful for identifying a party with the authority and ability to shut down the attack activity (a minimal sketch of such lookups follows this list).

• Cooperative: Two or more parties may take joint defensive actions within their networks on an ad hoc basis in response to an attack, or agree to a “mutual defense pact” which defines the terms of joint responses within their networks. This may be particularly useful where two or more parties are regular business partners.

• Adversarial: One or more parties may take action affecting resources in a network owned by another, without that party’s permission. This action could violate laws such as the federal Computer Fraud and Abuse Act and state computer trespass laws – not to mention issues if the network turns out to be a national security system or located in another country. There are self-defense theories which might work in a legal action, but they have not been tested in court.
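
To make the “investigative” category concrete, the sketch below shows the kind of non-intrusive lookups it contemplates: a single ping to confirm a host is reachable, and a reverse-DNS query to identify an operator who might be asked to intervene. This is a minimal illustration, assuming a Unix-like host with the standard ping utility; the address shown is a hypothetical TEST-NET placeholder.

```python
# Minimal sketch of "investigative" self-help: lookups consistent with
# ordinary Internet functions. Assumes a Unix-like host with ping(8);
# the target address is a hypothetical TEST-NET placeholder.
import socket
import subprocess

def investigate(addr: str) -> None:
    # One ICMP echo request with a short timeout: is the host reachable?
    reachable = subprocess.run(
        ["ping", "-c", "1", "-W", "2", addr],
        capture_output=True,
    ).returncode == 0
    print(f"{addr} reachable: {reachable}")

    # Reverse DNS often names the operator (a university, an ISP) who has
    # the authority and ability to shut the attacking hosts down.
    try:
        hostname, _, _ = socket.gethostbyaddr(addr)
        print(f"reverse DNS: {hostname}")
    except (socket.herror, socket.gaierror):
        print("no reverse DNS record")

if __name__ == "__main__":
    investigate("192.0.2.1")  # hypothetical address for illustration
```

Anything more intrusive than this – port scanning, probing for vulnerabilities, actual exploitation – starts sliding toward the adversarial category, with the legal exposure just described.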

Conclusion.

The Internet isn’t quite the Wild West, but it’s no well-regulated commonwealth, either. In this environment it’s up to the individual organization to defend its own network. This, of course, not only helps the organization, but helps preserve the Internet by preventing network misuse. There is also a valuable role for cooperative efforts, such as information sharing and even joint incident and attack responses. Something like a “well-regulated militia,” then, might be worth exploring, at least in the context of a mutual defense pact.

Vigilante action, on the other hand, is strictly at your own risk. There may be circumstances when adversarial self-help is really needed – certainly there may be circumstances where that seems to be the case. But before undertaking such action you had better be very sure of yourself – you may very well wind up having to explain it in court.

Wednesday, March 14, 2007

Organizational Governance and Risk Acceptance

Managing HIPAA Security Compliance: Organizational Governance and Risk Acceptance

One of the fundamental but sometimes overlooked questions in HIPAA Security Rule compliance is, who decides how much residual security risk the organization will accept? The level at which this decision is made can have important consequences not just for the acceptance of HIPAA security compliance measures within the organization, but for the cost-effectiveness of the safeguards selected for compliance and the organization’s ability to defend itself and its officers against civil penalties or criminal charges if its personnel do violate HIPAA’s information protection requirements.
The level at which security risk acceptance authority is vested depends on how the issue is perceived. Senior executives, auditors, legal counsel and board members may not understand or be comfortable with information security issues, and may perceive them as matters of technical implementation. They may therefore delegate decisions about such issues – explicitly, or perhaps more often implicitly and by default – to security professionals they consider more qualified to deal with such problems. Some security professionals may be quite willing to accept such delegation, without recognizing that it may be inappropriate – or even that it is occurring – and may even see it as a welcome enhancement of their power and authority.
Risk Management: In the Eye of the Beholder?

The inappropriate delegation of risk acceptance authority may be particularly likely to occur under the HIPAA Security Rule because of the way it uses the term “risk management.” The Rule specifies that compliance decisions – the selection of safeguards which are “reasonable and appropriate” for addressing risks under the standards and specifications set forth in the rule – are to be made using a risk assessment-based risk management process. “Risk management,” however, can mean different things to different professions, creating a real possibility of confusion and a dysfunctional approach to compliance.
For organizational governance purposes, “risk management” generally means

. . . a policy process wherein alternative strategies for dealing with risks are weighed and decisions about acceptable risks are made. . . . In the end, an acceptable level of risk is determined and a strategy for achieving that level of risk is adopted. Cost-benefit calculations, assessments of risk tolerance, and quantification of preferences are often involved in this decision-making process.

Confusingly, however, the HIPAA Security Rule and many (by no means all) security professionals give the term “risk management” a much more limited meaning, as the implementation of “security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level.” Security “risk management” under the latter definition is therefore equivalent to risk reduction at the organizational level – a process which depends upon the prior determination of the acceptable risk level to be achieved by the reduction.
Information technology security risks cannot as a practical matter be reduced to zero, nor does the HIPAA Security Rule require “zero risk tolerance.” The rule does require that healthcare organizations “take steps, to the best of [their abilities,] to protect [protected health information in electronic form].” This requirement is based on an interpretation that in the legislation Congress intended to set “an exceptionally high goal for the security of electronic protected health information” but “did not mean providing protection, no matter how expensive.” Covered Entities are therefore permitted to use a “flexible approach” to security measure implementation which permits them to implement “any” measures which are “reasonable and appropriate,” taking organizational circumstances and factors including costs into account.
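
The Rule does not prescribe any particular arithmetic for this “flexible approach,” but the classic annualized loss expectancy (ALE) calculation gives a feel for how costs can be taken into account. The sketch below is purely illustrative; every figure in it is hypothetical.

```python
# Illustrative cost-benefit arithmetic for a "reasonable and appropriate"
# safeguard decision. All figures are hypothetical.

def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = cost of one incident x expected incidents per year."""
    return single_loss * annual_rate

# Before the safeguard: a $500,000 breach expected once every 5 years.
ale_before = annualized_loss_expectancy(500_000, 1 / 5)   # $100,000/yr

# After the safeguard: the same breach expected once every 25 years.
ale_after = annualized_loss_expectancy(500_000, 1 / 25)   # $20,000/yr

safeguard_cost = 30_000  # annualized cost of the safeguard

# A safeguard is cost-justified if it removes more risk than it costs;
# whatever risk remains must still be accepted by someone.
risk_reduction = ale_before - ale_after                   # $80,000/yr
print(f"reduction ${risk_reduction:,.0f}/yr vs. cost ${safeguard_cost:,}/yr")
print("cost-justified" if risk_reduction > safeguard_cost else "not cost-justified")
```

On these numbers the safeguard is clearly worth implementing – and the $20,000 per year of residual risk is exactly the kind of exposure someone in the organization must have the authority to accept.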
At the end of this process residual risks will have to be accepted by some party on behalf of an organization. Compliance with the rule is itself an organizational obligation, and the organization is exposed to civil and potentially even criminal penalties in the event of a compliance failure. Since acceptance of residual risks necessarily means the acceptance of some degree of exposure to potential penalties – even if the organization makes its compliance decisions properly and in good faith there is a possibility that enforcement authorities will disagree with them – this decision should only be made as a matter of organizational policy.

Fiduciary Obligations and Security Risk Acceptance.

It is a truism that the officers and directors of an organization have fiduciary obligations to provide oversight to ensure it complies with regulatory obligations. What is perhaps less well understood is that a failure to exercise such oversight could itself be a factor exposing an organization to avoidable legal penalties.
HIPAA provides not only for a regulatory regime, but for criminal penalties for organizations which obtain or disclose protected health information (“PHI”) in violation of the statute or the regulations. (Individuals can be subject to criminal penalties too, but this article is concerned with organizations.) Healthcare organizations obtain and disclose PHI constantly – it’s a necessary part of most operations – which means that a failure to comply with the HIPAA Security Rule in the course of operations involving PHI is a per se criminal violation. For example, the rule presumes that all personnel will receive appropriate security training, and requires that all information system users be assigned unique user names or other identifiers. Any receipt or disclosure of PHI by an untrained user, or by a user who is allowed to log in under another user’s identifier, could be considered a criminal HIPAA violation by the organization.
This may seem a somewhat extreme reading of the statute, but it is the result of its literal interpretation. Whether charges are ever brought against a healthcare organization which fails to comply with the HIPAA Security Rule will therefore generally be a function of whether the failure has been brought to the attention of the U.S. Department of Justice (which has federal criminal jurisdiction), and if so whether the U.S. Attorney elects to bring charges. While it is to be hoped that prosecutors will exercise their discretion cautiously in such cases, hope is not a prudent strategy for legal compliance.
A better strategy, and one which is recognized in federal criminal sentencing and prosecution decisions against organizations, is to implement a compliance program including organizational policies and board and executive-level oversight. The existence and good faith, reasonable management of such a program is a very material factor relied on by the U.S. Department of Justice in deciding against organizational prosecution when one of its employees or agents breaks the law, and in minimizing penalties if the organization is prosecuted.
More than that, a compliance program would constitute the kind of policy-level security risk management process necessary to determine acceptable levels of risk at the organizational level under the HIPAA Security Rule, which in turn would guide decisions about the reasonable and appropriate safeguards which the organization should implement. By instituting this process the organization would be able to ensure that “reasonable and appropriate” decisions are made, in reliance on the “flexible approach” factors required by the rule. Upon the implementation of safeguards selected under such guidance, the organization will have brought itself into compliance with the HIPAA Security Rule. While risks cannot be eliminated through such a process, if and when an incident does occur which could expose the organization to penalties, it will have a sound defense.
Organizational Risk Acceptance and Security Cost Control.

While the potential consequences are perhaps less dire than criminal penalties, inappropriate delegation of risk acceptance authority may also lead to excessive spending on security safeguards and inappropriately burdensome compliance decisions. This can be demonstrated by analyzing alternative compliance decision-making approaches under one of the more problematic security standards.
The HIPAA Security Rule requires a Covered Entity to "identify[,] respond to[,] mitigate the harmful effects of [and] document security incidents and their outcomes." A "security incident" in turn is defined as an "attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system." A risk reduction perspective might well interpret these provisions to require that any and every event which meets this definition must be dealt with according to the specification. But anybody familiar with systems administration in a large enterprise knows that events which fit this definition happen constantly.
At a very basic level network connections, particularly ports accessible to the Internet, are constantly "pinged" by unknown and presumably unauthorized applications or individuals. Each such attempted connection is "attempted unauthorized access" within the regulatory definition of “security incident.” They are also almost always harmless, assuming basic security safeguards have been implemented. Nonetheless compliance with the letter of the regulation would require each one to be identified, responded to and documented.
This would seem to be a pointless exercise in security log review and documentation, except that the failure to do so could be construed as a meaningful failure if some evildoer were to succeed in gaining access through such connections – and if the regulation is interpreted to require "zero risk tolerance" it could be evidence of negligence.
Since it is also not possible to rule out the risk that someone will succeed in hacking into your network, a zero risk tolerance approach would require the routine review of all relevant system event logs, and the documentation of all apparent attempts at unauthorized access as required by the regulation. Such documentation is presumably subject to the 6-year HIPAA document retention requirement. If the Security Rule is interpreted as requiring zero risk tolerance, this burdensome approach is unavoidable, even though the risks presented by attempted unauthorized access would be much better addressed through good system management practices.
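
To see how burdensome this reading becomes in practice, consider a minimal sketch of the per-event identification and documentation it would require. The log location and line format here are assumptions for illustration only.

```python
# Sketch of the mechanical review a "zero risk tolerance" reading would
# demand: flag every failed access attempt and write a retention record.
# The log path and line format are illustrative assumptions.
import re
from datetime import datetime, timezone

LOG_FILE = "/var/log/auth.log"          # hypothetical source log
INCIDENT_FILE = "incident_record.txt"   # retained for the 6-year period

FAILED = re.compile(r"Failed password for (?P<user>\S+) from (?P<src>\S+)")

def review_log() -> int:
    count = 0
    with open(LOG_FILE) as log, open(INCIDENT_FILE, "a") as record:
        for line in log:
            match = FAILED.search(line)
            if match:
                count += 1
                stamp = datetime.now(timezone.utc).isoformat()
                # "Identify ... and document" each attempted access.
                record.write(
                    f"{stamp} attempted access: "
                    f"user={match['user']} src={match['src']}\n"
                )
    return count

if __name__ == "__main__":
    print(f"documented {review_log()} attempted-access events")
```

On a busy Internet-facing system this script would dutifully generate thousands of records a day, none of which makes anyone more secure.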
This would be the approach under a definition of “risk management” as equivalent only to risk reduction at the system level. But if, instead, HIPAA Security Rule compliance is a function of considered organizational risk management, it is possible to determine a level of risk which can and should be accepted – and the burdens of compliance can be appropriately balanced against their benefits.
Conclusion.

The risk acceptance decision is the key to the risk management process which is the foundation of HIPAA Security Rule compliance, and as seen above such decisions should be vested at the organizational policy level. The vesting of risk acceptance authority at a lower administrative level, expressly or by default, may well lead to dysfunctional security safeguard selections, and expose the organization to avoidable penalties.
This does not mean that the board, or the CEO, COO or CFO for that matter, should micromanage HIPAA compliance or security administration. It does mean that they should fulfill their fiduciary obligations and provide guidance to those who do manage compliance and security. They should receive routine briefings on the status of security and compliance, and establish policies and procedures intended to ensure compliance. Such policies should include guidance on risk acceptance, perhaps requiring CEO or COO approval for acceptance of residual risks above certain thresholds of probability and financial exposure, and vesting risk acceptance discretion in the Chief Information Security Officer (or equivalent title) below those thresholds. Since the security program budget will also generally be determined at the organizational policy level, this will also tend to prevent overspending – and where a bigger budget is necessary to reduce risks to organizationally acceptable levels, the decision will be forced upward to the level best suited to balance budgetary and security issues and needs.
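The threshold idea can be reduced to a simple escalation rule, as the following sketch illustrates. The probability and dollar figures are hypothetical; actual thresholds would be set by organizational policy.

```python
# Illustrative escalation rule for residual-risk acceptance. The
# probability and dollar thresholds are hypothetical placeholders.

def approval_level(annual_probability: float, financial_exposure: float) -> str:
    """Route a residual-risk acceptance decision to the right officer."""
    expected_loss = annual_probability * financial_exposure
    if expected_loss > 250_000 or financial_exposure > 5_000_000:
        return "Board"
    if expected_loss > 50_000:
        return "CEO/COO"
    return "CISO"  # below threshold: security officer's discretion

# A 2% annual chance of a $1M incident is a $20,000 expected loss.
print(approval_level(0.02, 1_000_000))   # -> CISO
print(approval_level(0.10, 2_000_000))   # -> CEO/COO
```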
Such an approach also requires organizational policy-makers to overcome any reluctance to address security issues on an informed basis, and requires security officers to overcome any tendencies they may have to build their own fiefdoms. Given the considerable and increasing importance of information security for information technology-dependent organizations, however, policy oversight is essential and the separation of security from operations is dysfunctional. HIPAA Security Rule compliance is therefore an opportunity for such organizations to “do well by doing good” – to learn to function better while ensuring they comply with the law – for those who will take it as such.


Originally published in New Perspectives in Healthcare Auditing (November 2004)

Sunday, March 11, 2007

A Modest Proposal for Catalyzing Health Information Technology Adoption

USING SAFE HARBORS TO REDUCE LEGAL BARRIERS TO IMPLEMENTATION OF ELECTRONIC HEALTH RECORDS AND HEALTH INFORMATION NETWORKS
This post is based on a white paper I've prepared, which proposes that state governments should take a leadership role in reducing legal barriers to electronic health record (EHR) and health information network (HIN) adoption, by implementing a regulatory “safe harbors” scheme for EHR and HIN privacy and security policies and practices. I've also developed model EHR and HIN safe harbors legislation, and would be happy to provide copies of these (and the original white paper) upon request.
Since this white paper is intended as a “straw man” to advance discussion of solutions to legal barriers – real and perceived – to EHR and HIN implementation, it does not include comprehensive legal analysis or legal citations. It does assume the reader is generally familiar with EHR and HIN issues, the privacy and security requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and comparable state law principles, and to some extent with regulatory processes.
Introduction.
The lack of clear legal standards for EHR and HIN privacy and security is perhaps the fundamental legal obstacle to their widespread adoption. In their absence healthcare organizations don’t know what they have to do to avoid possible regulatory penalties and civil liabilities. Uncertainty always weighs against action, especially when the uncertainty concerns legal risks.
While some of this uncertainty could probably be resolved by minimal research and analysis, some of it is legitimate and inevitable given the current state of the law. The solution is therefore to develop legal certainty to the extent possible, at least for key privacy and security issues.
In principle this might be done by legislative mandate, but that is a blunt and inflexible instrument badly suited to emerging technology issues. Over time it might also be developed by common law, through litigation; but that would take many years at best, and the risk of litigation is itself part of the current problem.
Legal certainty is therefore more readily developed through a regulatory “safe harbors” solution. This kind of solution has been implemented for comparable problems in a number of areas, including the confusing and problematic field of healthcare financial “fraud and abuse” (the so-called “Stark” and “antikickback” laws), which provides a useful model for EHR and HIN safe harbors.
One solution to this might be federal safe harbors, but it would probably take much longer to develop and pass federal implementing legislation and develop the necessary regulations than it would to do so at the state level. Quite apart from the more complex political logistics, it seems likely it would be much more difficult to identify nationally-acceptable policies and practices given the variations among the states. A state-based strategy would instead let states whose healthcare communities felt they were ready to implement safe harbors go forward, and allow the others to follow as they were ready.
A state-based approach does raise the potential problem of non-uniformity. One state’s safe harbors may not match those of its neighbors, or its neighbors may not implement or formally recognize safe harbors at all.
While this is a legitimate concern, the fact is that there is currently no legal mechanism for development of uniformity at all. Policies and business practices tend to be developed by standards bodies and professional organizations which are not legal authorities, and are implemented ad hoc by organizations which may or may not take standards bodies’ and professional organizations’ guidance into account. The implementation of safe harbors state-by-state should therefore tend to increase rather than decrease uniformity compared to the current situation, especially if states adopting safe harbors coordinate their regulations.
By the same token compliance for organizations operating across state lines should also become simpler. Since safe harbors compliance is by definition not mandatory, interstate organizations will be able to opt-out of safe harbors which are not appropriate. Perhaps more likely, organizations operating in both states with safe harbors and those without will opt to comply and get the benefit of the safe harbors where possible. Since states which are not ready for safe harbors are also unlikely to be ready to impose legal mandates which conflict with other states’ safe harbors, interstate organizations should be better able to implement consistent policies and practices across the organization.
Uncertainties in EHR and HIN Privacy and Security Law.
There is no special legal domain for EHRs and HINs. An EHR is nothing more than a computer system used to receive, store, process and produce health information, and a HIN is any set of network technologies used to transmit it from computer system to computer system. The laws which apply to EHRs and HINs are therefore the same ones which apply to health information in general: principally HIPAA and a few other federal laws, plus the laws of the states in which the computer systems and the organizations which use them are located, and of which the individuals whose information is present in the EHR or HIN are residents.
While some requirements of these laws are fairly clear, at least with a little work, others are not. Since EHR and HIN implementation always requires changes to business practices, it is sometimes unclear how a privacy-related policy or practice should be adapted to new arrangements. Security requirements in particular are problematic, since these are almost universally risk-based and not prescriptive. In other words, they do not describe specific policies, practices or technologies which must be adopted, but require healthcare organizations to analyze security risks and make reasonable and appropriate decisions about the security safeguards they will implement.
It is therefore difficult and sometimes impossible to determine a priori whether many privacy policies and practices, and almost all security policies and practices, will be considered compliant with applicable law. This may be made somewhat clearer with a couple of examples.
On the privacy side, for example, a health care provider wishing to share health information using a HIN may be concerned whether this sharing needs to be disclosed to potentially affected patients. Under HIPAA and a number of state laws health care providers are required to give patients a notice of their privacy or information practices – that is, a general description of the ways they use or disclose patient information. However, none of these laws has any provision specifically applicable to HIN usage. The health care provider has no guidance, and must decide for itself whether HIN participation information should be included, and if so what the notice should say.
On the security side, the same provider may wonder what authentication processes it should implement for users of an EHR it is setting up. Its EHR vendor may suggest single-factor password authentication, a relatively inexpensive option. Its consultants, on the other hand, may suggest two-factor authentication combining passwords with tokens or swipe cards, a more expensive option. While HIPAA and some state laws indicate that some form of authentication must be implemented, they provide no guidance for choosing between single- and two-factor authentication; they simply tell the provider to do a risk analysis and choose the “reasonable and appropriate” option.
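For readers curious what the two-factor option amounts to technically, the sketch below combines a password check (something the user knows) with verification of a time-based one-time code of the kind a token generates (something the user has), following RFC 6238. The stored hash, salt and token secret are hypothetical.

```python
# Minimal two-factor verification sketch: password plus a time-based
# one-time code (RFC 6238). Stored credentials here are hypothetical.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Current one-time code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

def verify_login(password: str, token_code: str,
                 stored_hash: bytes, salt: bytes, secret_b32: str) -> bool:
    """Both factors must pass: the password hash AND the one-time code."""
    password_ok = hmac.compare_digest(
        hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
        stored_hash,
    )
    token_ok = hmac.compare_digest(totp(secret_b32), token_code)
    return password_ok and token_ok
```

The point for present purposes is not the code but the decision: nothing in HIPAA tells the provider whether the extra factor, and its extra cost, is “reasonable and appropriate” for its environment.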
The degree to which this kind of uncertainty is acceptable depends on the provider’s tolerance for risk. In principle, if the provider makes informed and reasonable determinations appropriate to its conditions and circumstances – if its privacy and security decision-making processes are adequate – it should not be liable if something goes wrong and health information is negatively affected. In practice, patients may be alarmed to discover that their information is being shared over a HIN and claim that the notice they received was inadequate; or in the event of an EHR security breach, may claim even two-factor authentication was inadequate.
While good decision-making practices should prevent legal liabilities in such cases, there is no assurance that they will. If there is some sort of harm to affected individuals, especially if there is a public outcry or media attention, judges, juries and even regulatory authorities may be inclined to try to find reasons to give the victims some kind of recourse. What seemed reasonable and appropriate at the time may, with the benefit of hindsight and adversarial scrutiny, come to seem unreasonable and negligent.
This kind of legal risk is a material obstacle to EHR and HIN implementation. Some organizations are comfortable with this level of legal risk, or perhaps don’t notice it. Others have a lower tolerance for legal risk, perhaps especially when it is added to the operational and financial risks of new technology implementation in the first place.
To the extent that legal uncertainties about EHR and HIN standards and practices can be reduced, then, a material barrier to their implementation will be lowered for at least some organizations. And some of these uncertainties, at least, can be dealt with by the creation of safe harbors for key policies and practices.
Using Safe Harbors to Reduce Legal Uncertainty.
Safe harbors should be carefully distinguished from legal mandates. A legal mandate is a statute or regulation (or much more rarely caselaw) which prescriptively identifies a specific legal requirement, with penalties for its violation. For example, the HIPAA privacy regulations require publication of a notice of privacy practices, and prescribe its content with some specificity. An organization which is required to publish a privacy practices notice and fails to include content prescribed by the rules is subject to regulatory penalties, and possibly exposed to claims for damages by patients claiming to have been harmed by the failure.
A safe harbor, on the other hand, does not prescribe any requirements, nor is there a penalty for noncompliance. Rather, a safe harbor describes a set of facts and the policies and practices implemented by an organization under those facts, and states an agency’s interpretation that under those facts the described policies and practices do not violate the applicable law. Organizations are not penalized for failing to implement those policies and procedures, but those which choose to do so are assured they will not be penalized. Organizations which choose not to do so have no such assurance, but are not necessarily in violation of applicable law and therefore not necessarily subject to penalties.
A safe harbor therefore reduces legal risk, as opposed to a legal mandate which actually creates it. A safe harbor scheme would therefore reduce legal risks in EHR and HIN implementation, and so reduce this barrier to implementation, as opposed to legal mandates which would only raise it higher.
A safe harbor scheme can also accommodate the problematic issue of different and changing technologies and circumstances better than a legal mandate scheme. This problem is the legitimate reason why the HIPAA security regulations are risk-based rather than prescriptive: It takes much longer to change statutes than it does regulations, and longer to update regulations than to update regulatory guidance. Any specific prescriptive requirements would be at risk of becoming obsolete, and perhaps counter-productive, more quickly than they could be revised.
For this reason HIPAA itself – the legislation rather than the regulations usually identified with it – deliberately established a regulatory structure which authorized and directed agency issuance of appropriate regulations, to accommodate changing and variable needs and circumstances. The HIPAA enforcement regulations in turn establish a dispute resolution structure which includes publication of interpretive decisions to help guide healthcare organizations – though it appears it will be some time before a significant number of cases reaches that level.
This structure is not unique to HIPAA, and in fact is relatively well-developed in the healthcare “fraud and abuse” area. This is a field in which legislation established draconian penalties for violations of broad, confusing and counterintuitive laws. Given the breadth and difficulty of interpretation of these laws, a risk-averse interpretation would tend to rule out many legitimate and even beneficial business arrangements and transactions. In other words, the fraud and abuse laws created legal uncertainties which may be a material barrier to valuable activity.
In order to overcome this barrier, the U.S. Department of Health and Human Services publishes safe harbor regulations interpreting the fraud and abuse laws as applied to specific sets of facts. Less formal guidance documents, as well as opinions on specific factual situations presented in letters requesting guidance, provide additional assurances which help reduce the risks to healthcare organizations seeking to develop business arrangements and transactions which they otherwise might avoid altogether – even when they might provide material benefits to patient care and administration.
A comparable regulatory scheme for health information privacy and security in EHR and HIN environments could issue comparably useful safe harbor regulations and interpretation. For example, in the case of patient notice of HIN participation, an agency could issue regulations (or guidance) describing the form and content of one or more provisions which would provide adequate notice. In the case of EHR authentication, an agency could issue regulations specifying factors which would be considered reasonable and appropriate and therefore in compliance with the law. In neither case would healthcare organizations be required to use the specific provision or authentication factors, but those which chose to do so would be assured their implementation was consistent with the agency’s authoritative interpretation of the law.
Developing Content for EHR and HIN Safe Harbors.
An EHR and HIN safe harbors scheme would be adaptable to – and should in fact be based upon – prevailing industry standards and best practices, and would also be transparent and open to the public. Legislation might require implementation of formally-developed industry standards, as HIPAA does for transactions, but that is probably more appropriate for prescriptive legal mandates rather than safe harbors. A better strategy would be to develop proposed safe harbors based on research into healthcare standards and practices, to be finalized after a public comment period open (as is generally required for regulations) to any interested party.
A public safe harbors development process would present a much greater opportunity for public understanding of and input into EHR and HIN policies and practices than current practice. Currently, EHR and HIN policies and practices are developed ad hoc, to some extent in a few standards groups but principally in negotiations among healthcare organizations and vendors. Not only is this activity mostly unknown to the public, for the most part there is not even an opportunity for public understanding and input.
Ad hoc development also leads to avoidable divergence in EHR and HIN policies and practices among organizations. This in itself is a barrier to widespread implementation, since organizations using different policies and practices often find it difficult or even impossible to share networks and information, or find it difficult to adapt to each other when they try. Publicly-developed safe harbors would present common policies and practices all participants could use, again lowering a barrier to implementation.
As noted above, and reflected in HIPAA, there is a valid point that technologies, economic conditions and operating environments are diverse and changeable, often rapidly. However, this point argues for careful execution of a safe harbors strategy, rather than its avoidance. Safe harbors should be carefully chosen and defined to apply to and solve common problems, at a sufficiently general level that they should not need frequent revision. This is also an argument for the inclusion of additional regulatory guidance opportunities, through reports, publications and perhaps opinion letters, so that new developments and distinctive circumstances can be addressed.
In practical terms, this process might work for the privacy notices and authentication examples discussed above as follows. Given appropriate enabling legislation, the agency authorized to develop EHR and HIN safe harbors would identify a set of key issues for which uncertainty about legal privacy or security standards appeared to be discouraging EHR or HIN implementation. These issues might very well include privacy notice content and authentication. Initial proposals for their resolution would then be solicited from appropriate stakeholders and interest groups, as well as the public.
Based on this initial feedback, the agency would develop proposed regulations and publish them for comment. The proposed regulations would be sufficiently detailed to permit meaningful comments; for example, the proposed privacy notice regulation might provide one or more provisions which could be adopted, while the proposed authentication regulation might specify that use of two-factor authentication would be considered compliant. Following comments on the proposed regulations, the agency would develop and publish final regulations.
Organizations could choose to implement the policies and practices described in the regulations, and have the agency’s assurance they were in compliance; organizations which chose not to do so would not be penalized per se. For example, an organization could still conclude, based on its HIPAA risk analysis, that two-factor authentication was not a reasonable and appropriate safeguard in its environment. This decision might be open to question in the event of a regulatory investigation or litigation, especially one arising from an incident raising the question of the adequacy of authentication, but the mere fact of noncompliance with the safe harbor would not be grounds for a penalty.
Implementation of the Safe Harbor Scheme.
An EHR and HIN safe harbors regulatory scheme would be no silver bullet. Given the complexities of federal and state jurisdiction no agency would be able to cover all the issues. And while ideally, perhaps, EHR and HIN safe harbor regulations should be a federal function, creating significant new federal agency authority can take a long time. Further, achieving a national consensus on appropriate safe harbors is likely to be much harder than achieving it within a state or region. Federal safe harbors are not likely to be available for some time at best.
State-by-state safe harbors, on the other hand, raise the questions of HIPAA applicability and the potential for excessive and unnecessary cross-state variation. While the former question needs more analysis, HIPAA does provide that where both HIPAA and state law apply, the state laws which are more protective of information govern.
State-based regulations which establish safe harbors more protective than HIPAA should therefore provide assurances of compliance with both state and federal law. Where HIPAA does not provide a clear standard, while state agencies may have limited authority to interpret HIPAA, the fact that a state agency has determined that a given policy or practice provides reasonable and appropriate safeguards, following a public comment process, should be very persuasive for HIPAA purposes.
Safe harbors could therefore be implemented using model legislation for state adoption. In order to maximize uniformity, the states implementing such a scheme could establish a coordinating group to keep their safe harbors (and perhaps other health information laws) consistent.
This would not be a complete solution, of course, unless all the states and territories adopted the same scheme and safe harbors, and that is not likely any time soon. Even with a coordinated state-by-state scheme, interstate organizations operating in both states with safe harbors and those without (or those with materially different safe harbors) would face the question whether they could adopt uniform policies and practices across the organization, and comply with both states’ laws.
Upon analysis, this problem becomes something of a red herring. Interstate organizations already face the problem of actually or potentially conflicting state requirements, with much less guidance and uniformity than would be possible under a state-by-state safe harbors scheme. Such a scheme would therefore be a clear improvement over the current situation.
The uniformity problem would only arise in the first place for interstate organizations operating in both safe harbor and non-safe harbor states if there was a conflict between the safe harbor of the one state and some legal requirement of the other. One reason such conflicts seem unlikely to arise is that a safe harbors scheme is probably more likely to be adopted by states whose legislators and regulators feel competent in addressing health information technology issues. If legislators and regulators in non-safe harbor states do not feel sufficiently competent in this area to adopt a safe harbors scheme, it seems unlikely they would feel competent enough to implement legal mandates in this area in sufficient depth to create a conflict with other states’ safe harbors.
Should this problem arise anyway the nature of safe harbors compliance would allow interstate organizations to resolve it, by adopting policies and practices compliant with the mandate; there would be no penalty for failing to comply with the safe harbor. The same principle would allow resolution of a conflict between different safe harbors provided by different states, should that arise, since an interstate organization could choose between available safe harbors without penalty.
A coordinated state-by-state safe harbors approach would therefore allow the incremental development of national uniformity. States which were ready to address EHR and HIN issues could adopt safe harbors reflecting well-accepted, reasonable and appropriate policies and practices; other states could follow their lead when they were ready and if they found such safe harbors acceptable. Healthcare organizations would have an incentive to adopt safe harbor policies and practices, to gain the legal certainty the safe harbors make available, but could move to them as and when it worked for them, without penalty.
Conclusion.
As a general rule there are good reasons for governments to tread carefully on technology-related issues, especially in emerging fields like EHR and HIN implementation. However, we seem to have reached a point at which legal uncertainty is itself a barrier to potentially beneficial progress, and governments – as the principal source of the laws – may be especially well-suited for resolving this kind of uncertainty. A carefully managed safe harbors strategy would allow for the reduction of legal uncertainty without imposing prescriptive requirements which would be hard to change if and when they became obsolete. While it would probably be most valuable in the long run for this to be a federal function, in the short run the states could assume a leading role, and reduce legal barriers to EHR and HIN implementation by reducing its attendant legal uncertainty.