Saturday, April 28, 2007

Data Havens and UFO Hacking

This probably dates me but is too good to pass up. The once-upon-a-time data haven of Sealand has offered political asylum to the guy who hacked into NASA and DoD computers in search of UFO information.

From Personal Computer World (UK)



North Sea 'state' offers McKinnon asylum

Sealand may not be enough to save 'most prolific hacker' from extradition
Emil Larsen, Personal Computer World 26 Apr 2007

Gary McKinnon, who faces extradition to the US for allegedly hacking into military computers, has been offered asylum by the self-styled breakaway state of Sealand, it was claimed at the Infosec security conference today.

The "state", a World War II fort known as Roughs Tower in the North Sea just north of the Thames, was declared an independent principality in 1967 by a former major called Paddy Roy Bates. He dubbed himself Prince Roy.

McKinnon sat on a ‘hackers panel’ at Infosec to debate new changes to the Computer Misuse Act. The claim about Sealand was made by one of his fellow panellists, a "security analyst" identified only as Mark.

McKinnon, described by American prosecutors as the most prolific hacker of all time, spoke only twice: first to introduce himself, and then when asked whether companies often overstate the value of damage done by hackers.

McKinnon said they did. He added that the US could only have extradited him from the UK if it could show the offence was "worth a year in prison in both countries".

He added that to merit that sentence the damage had to amount to $5,000. The damage he was accused of causing came to exactly that, so the US military was "obviously not shopping in PC World".

McKinnon's lawyers have said they plan an appeal to the House of Lords against Home Secretary John Reid's granting of a US request to extradite McKinnon.

Tuesday, April 3, 2007

Newbie Lawyers as Enablers of Bad Security

This post is a bit of a rant, so please bear with me.

I'm currently neck-deep in a variety of projects and a couple of presentations about security for electronic health records. This happens to be a field I've worked in since the early 1990s - though we didn't generally call them "electronic health records" then - so I've seen quite a few initiatives, applications and companies come and go.

The current round of activity is seeing a number of newbie lawyers advising and opining on EHR issues - and there's nothing wrong with that; it's a great field of practice and we all have to start somewhere - but as far as I can tell some of these folks either haven't bothered to research security issues and the history of the field, or don't even know there's something they are missing.

Twice in the last few days I've read a lawyer's assurance that EHR issues are all copacetic because, by golly, the records are going to be password protected. One seemed quite pleased to note this, almost breathless with enthusiasm (if you can be breathless in print).

I've got nothing against passwords - some of my favorite files are password-protected - but what these folks appear not to understand is that telling me this tells me nothing very important. Not as important, for example, as if they had noted that there wasn't password protection - then I would either be interested to know what alternative means of authentication were proposed, or stunned to find out that even minimal authentication was foregone. But passwords are so pervasive and basic (a "due care" safeguard, to use Donn Parker's term) that this is not significant information.

What is meaningful is the suite of security safeguards, including but not at all limited to authentication, used to protect a given EHR. And this is something we've known for quite some time - even lawyers have known it. Or I guess to be more accurate: some lawyers have known it. The ABA's digital signatures/PKI initiatives developed a solid, if smallish cadre of lawyers who are pretty good (some very good) on information security issues, and there have been other groups, projects, and activities which have trained lawyers to deal with information security issues; there are some pretty good law review articles and a few treatises out there. And of course, the HIPAA Security Rule gives a pretty good list of security areas which should be addressed in any EHR. And that's just the resources available without consulting information security professionals and infosec publications, which any lawyer working in this field should do routinely.

One reason this makes me peevish is that this seems to me a tremendous waste of good intellectual capital. We've learned a lot already, some of it very much the hard way - wouldn't it be a good idea to avoid known pitfalls? Especially if you're representing clients who are going to be putting highly sensitive, personal medical information into networked applications with the deliberate goal of enabling remote access?

Which leads to the main reason I'm feeling peevish: I often see this kind of advice used to validate bad security decisions. There's almost always a good argument for bad security: It's cheaper than good security. Look what happened with PCASSO - very good EHR security, patients liked it (they felt secure) but doctors found it burdensome. So what do they want to use? Passwords. Preferably their dog's name. (I got that again a couple of weeks ago - mentioned that as an example of bad security in talking to a doc, whose immediate blush-and-cough I took as an admission of guilt, which he confirmed.) And a lawyer who doesn't know better will validate this choice.

Do I think passwords are never good enough authentication? Certainly not - that's not the point of this post.

The point is that lawyers play a significant role in risk identification and management decisions, such as those affecting EHR implementation, and it behooves us to either get up to speed on the issues before giving advice, or admit we aren't up to it and not fake it. (Sidebar: "Behoove" is a great word which doesn't get nearly enough use.)

If passwords provide sufficiently low-risk authentication, given the client's risk tolerance and in the context of the client's business processes and information systems, then it is the client's decision that it is an acceptable implementation. But this decision should be made in consultation with a lawyer, and if that lawyer doesn't know the issues - and perhaps doesn't even know that s/he doesn't know - that decision is badly grounded. A newbie lawyer may well wind up enabling a bad security decision.

I happen to think that EHRs and health information networking are in general a good idea and will ultimately be very beneficial. But we need to recognize that we are building an untested infrastructure for the storage and management of vast quantities of the most sensitive personal information, with opportunities for privacy, health and safety threats we can't yet forecast accurately. We should make security decisions for this infrastructure with caution, an appreciation for the limits of our knowledge and expertise, and a willingness to learn and figure out new tricks.

Rant over!

Sunday, March 25, 2007

Self-Help on the Internet: Is the Best Defense Really a Good Offense?

What are you willing to do to defend your network against hackers, zombies and Fourth Generation information warriors? What are you willing to do to defend the Internet from them? Are you going to hunker down and harden your defenses, or are you willing to defend yourself by “hacking back” and shutting the attackers down?

You may not have considered these questions; they are not usually brought up as part of information security strategy or tactical development, or in IT planning in general. But they are fundamental questions, and the answers we give may determine how well we are able to manage and maintain the information systems we have built and become increasingly dependent upon.

The Fundamental Problem of Network Insecurity.

The Internet was not designed for security, and neither were most computers. This was a feature, not a bug; security slows down communications and interferes with convenience. There was no real demand for security until the vulnerabilities of these systems became painfully obvious, which is a recent development for most people; and many still don’t seem to get it. As a result the Internet, including all the networks which connect to and constitute it, is exposed to attacks from vast swaths of unsecured systems.

The Internet is also not something you can effectively police. Most law enforcement agencies don’t have the time, resources or expertise to investigate or prosecute Internet-based crimes. And many attacks cross jurisdictional boundaries, making legal action difficult and often impossible. Even when legal action is possible, it is usually too late: the harm has been done. For the bad guys this too is a feature rather than a bug.

This means that networks connected to the Internet – ultimately the Internet itself – are subject to degradation by hostile or malicious activities. The Internet is a common good – an amazing asset shared by a community whose membership is limited only by access to and ability to use a networked computer – and as such is subject to partisan or minority abuses which undermine or conceivably could even ruin it for everyone.

So how do we defend this amazing resource? If we can’t call in law enforcement, what about self-help? Should we form some kind of Internet militia? Maybe some vigilante action? Before you decide, consider the following cautionary tale.

Shooting from the Hip.

Warbucks Financial is a boutique financial services firm whose founder, “Sonny” Warbucks, is a well-known entrepreneur with controversial views and a choleric personality. Warbucks uses the latest information technologies for trading and employs Francis X. Hackerman as its Chief Information Officer. Hackerman made his reputation as a notorious hacker, and while officially reformed he considers himself a highly skilled “hired gun.”

The University of Hard Knocks has a large, complex network serving thousands of users. Security is hard to maintain, since security measures are resisted and/or ignored by many users. One of the groups of resisters is the Script Kiddiez for Justice, which has taken a very public dislike to Warbucks.

Shortly before closing on a Friday afternoon Warbucks begins experiencing a distributed denial of service (DDOS) attack which threatens to shut down its ability to execute trades. This is a crucial time of the week and Warbucks’ clients may face serious losses if their trades are delayed.

Hackerman tries to deal with the attack by hardening the Warbucks network, but this slows trading even further. He identifies the Hard Knocks network as a source of the attack and assumes the Script Kiddiez are behind it. Hackerman tries to contact Hard Knocks Information Services to get them to intervene, but all he gets is voice mail.

A red-faced, bellowing Sonny appears in Hackerman’s doorway, demanding that he “fix it and quick.” Hackerman decides to try to eliminate the attack – and Sonny’s disturbing presence - by shutting down some or all of the Hard Knocks network.
Hackerman is a former student at Hard Knocks and knows a number of vulnerabilities in its network. He quickly modifies a publicly available worm and releases it into the Hard Knocks network, and soon hosts on the network begin shutting down.

Meanwhile, Eunice Victim has just been admitted to the Hapless Hospital emergency room to have a boil lanced. Hapless is a teaching hospital which is part of Hard Knocks and runs its information systems on the Hard Knocks network. These systems include a computerized physician order entry (CPOE) application linked to its electronic medical records system (EMR).

Victim’s EMR indicates she has an allergy to amoxicillin. However, as her treating physician, Dr. Ohno, was ordering antibiotics, Hackerman’s worm crashed the CPOE. Ohno then asked Victim about possible antibiotic allergies, but unfortunately Victim misunderstood and indicated she had none. When Victim received the amoxicillin Ohno ordered she went into anaphylactic shock and died.

The attack on Warbucks continued, and subsequent investigations indicated it most likely originated somewhere in the Middle East as part of a broad series of attacks on financial institution networks probably intended to harm U.S. financial markets.
The Hard Knocks incident response team traced the worm back to Warbucks Financial. Victim’s estate sued Ohno and Hapless for negligence in her death, and they in turn cross-claimed against Warbucks Financial, Sonny and Hackerman. Hapless, Hard Knocks and Victim’s family all demanded that criminal charges be brought against Hackerman, Sonny and Warbucks Financial.

Categories of Network Self-Help.

The above scenario would make a great bar exam question, and I challenge readers to identify all the legal issues it presents. The immediate point, however, is that the risks posed by network self-defense actions increase dramatically in proportion to the degree that they affect systems outside the network’s legal and operational perimeter.

This is because within a network perimeter the network operator has (1) sufficient information, at least in principle, to identify and avoid unintended harmful consequences of security measures, and (2) the legal authority to implement any security measures it wants, subject to its own policy limitations. Conversely, in others’ networks a party generally has limited information, and has the legal right to act only to the extent the networks’ owners give permission.

Given these constraints Internet self-help can be categorized roughly as follows:

• Baseline: At the most basic level, within its network perimeter a party can implement whatever security measures it considers appropriate. It may also have a legal duty to do so, if the failure to implement security measures exposes others to avoidable risks (e.g., unsecured hosts used to launch DDOS attacks on third parties).

• Investigative: Moving out from its own network perimeter, a party has the legal right to conduct limited investigative activities to identify potential attack sources, to the extent these activities are not harmful and are consistent with ordinary Internet functions (e.g., pinging a host). This may be useful, at least sometimes, for identifying a party with the authority and ability to shut down the attack activity (see the sketch following this list).

• Cooperative: Two or more parties may take joint defensive actions within their networks on an ad hoc basis in response to an attack, or agree to a “mutual defense pact” which defines the terms of joint responses within their networks. This may be particularly useful where two or more parties are regular business partners.

• Adversarial: One or more parties may take action affecting resources in a network owned by another, without that party’s permission. This action could violate laws such as the federal Computer Fraud and Abuse Act and state computer trespass laws – not to mention issues if the network turns out to be a national security system or located in another country. There are self-defense theories which might work in a legal action, but they have not been tested in court.
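To make the "investigative" category concrete, here is a minimal sketch of the sort of low-impact lookup I have in mind: reverse DNS plus a standard WHOIS query (RFC 3912) to find out who is responsible for an attacking address. The address and WHOIS server shown are placeholders, not drawn from any real incident, and this is an illustration rather than a recommendation.

```python
# A minimal "investigative" sketch: identify who is responsible for an address using
# only ordinary, non-intrusive Internet functions (reverse DNS and a WHOIS query).
# The address and WHOIS server below are placeholders.
import socket

def reverse_dns(ip_address: str) -> str:
    """Return the hostname registered for an IP address, if any."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)
        return hostname
    except socket.herror:
        return "(no reverse DNS record)"

def whois_lookup(query: str, server: str = "whois.arin.net") -> str:
    """Ask a WHOIS server (RFC 3912, TCP port 43) who is responsible for an address."""
    with socket.create_connection((server, 43), timeout=10) as conn:
        conn.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    suspect = "192.0.2.10"  # placeholder address (TEST-NET-1), not a real attacker
    print("Reverse DNS:", reverse_dns(suspect))
    print(whois_lookup(suspect))
```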

Conclusion.

The Internet isn’t quite the Wild West, but it’s no well-regulated commonwealth, either. In this environment it’s up to the individual organization to defend its own network. This, of course, not only helps the organization, but helps preserve the Internet by preventing network misuse. There is also a valuable role for cooperative efforts, such as information sharing and even joint incident and attack responses. Something like a “well-regulated militia,” then, might be worth exploring, at least in the context of a mutual defense pact.

Vigilante action, on the other hand, is strictly at your own risk. There may be circumstances when adversarial self-help is really needed – certainly there may be circumstances where that seems to be the case. But before undertaking such action you had better be very sure of yourself – you may very well wind up having to explain it in court.

Wednesday, March 14, 2007

Organizational Governance and Risk Acceptance

Managing HIPAA Security Compliance: Organizational Governance and Risk Acceptance

One of the fundamental but sometimes overlooked questions in HIPAA Security Rule compliance is, who decides how much residual security risk the organization will accept? The level at which this decision is made can have important consequences not just for the acceptance of HIPAA security compliance measures within the organization, but for the cost-effectiveness of the safeguards selected for compliance and the organization’s ability to defend itself and its officers against civil penalties or criminal charges if its personnel do violate HIPAA’s information protection requirements.
The level at which security risk acceptance authority is vested depends on how the issue is perceived. Senior executives, auditors and legal counsel and board members may not understand or be comfortable with information security issues, and may perceive them as matters of technical implementation. They may therefore explicitly, or perhaps more often implicitly and by default, delegate decisions about such issues to security professionals they consider more qualified to deal with such problems. Some security professionals may be quite willing to accept such delegation, not recognizing that it may be inappropriate – maybe not really recognizing that it is occurring – perhaps even seeing it as a positive enhancement of their power and authority.
Risk Management: In the Eye of the Beholder?

The inappropriate delegation of risk acceptance authority may be particularly likely to occur under the HIPAA Security Rule because of the way it uses the term “risk management.” The Rule specifies that compliance decisions – the selection of safeguards which are “reasonable and appropriate” for addressing risks under the standards and specifications set forth in the rule – are to be made using a risk assessment-based risk management process. “Risk management,” however, can mean different things to different professions, creating a real possibility of confusion and a dysfunctional approach to compliance.
For organizational governance purposes, “risk management” generally means

. . . a policy process wherein alternative strategies for dealing with risks are weighed and decisions about acceptable risks are made. . . . In the end, an acceptable level of risk is determined and a strategy for achieving that level of risk is adopted. Cost-benefit calculations, assessments of risk tolerance, and quantification of preferences are often involved in this decision-making process.

Confusingly, however, the HIPAA Security Rule and many (by no means all) security professionals give the term “risk management” a much more limited meaning, as the implementation of “security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level.” Security “risk management” under the latter definition is therefore equivalent to risk reduction at the organizational level – a process which depends upon the prior determination of the acceptable risk level to be achieved by the reduction.
Information technology security risks cannot as a practical matter be reduced to zero, nor does the HIPAA Security Rule require “zero risk tolerance.” The rule does require that healthcare organizations “take steps, to the best of [their abilities,] to protect [protected health information in electronic form].” This requirement is based on an interpretation that in the legislation Congress intended to set “an exceptionally high goal for the security of electronic protected health information” but “did not mean providing protection, no matter how expensive.” Covered Entities are therefore permitted to use a “flexible approach” to security measure implementation which permits them to implement “any” measures which are “reasonable and appropriate,” taking organizational circumstances and factors including costs into account.
At the end of this process residual risks will have to be accepted by some party on behalf of an organization. Compliance with the rule is itself an organizational obligation, and the organization is exposed to civil and potentially even criminal penalties in the event of a compliance failure. Since acceptance of residual risks necessarily means the acceptance of some degree of exposure to potential penalties – even if the organization makes its compliance decisions properly and in good faith there is a possibility that enforcement authorities will disagree with them – this decision should only be made as a matter of organizational policy.

Fiduciary Obligations and Security Risk Acceptance.

It is a truism that the officers and directors of an organization have fiduciary obligations to provide oversight to ensure it complies with regulatory obligations. What is perhaps less well understood is that a failure to exercise such oversight could itself be a factor exposing an organization to avoidable legal penalties.
HIPAA provides not only for a regulatory regime, but for criminal penalties for organizations which obtain or disclose protected health information (“PHI”) in violation of the statute or the regulations. (Individuals can be subject to criminal penalties too, but this article is concerned with organizations.) Healthcare organizations obtain and disclose PHI constantly – it’s a necessary part of most operations – which means that a failure to comply with the HIPAA Security Rule in the course of operations involving PHI is a per se criminal violation. For example, the rule presumes that all personnel will receive appropriate security training, and requires that all information system users be assigned unique user names or other identifiers. Any receipt or disclosure of PHI by an untrained user, or by a user who is allowed to log in under another user’s identifier, could be considered a criminal HIPAA violation by the organization.
This may seem a somewhat extreme reading of the statute, but it is the result of its literal interpretation. Whether charges are ever brought against a healthcare organization which fails to comply with the HIPAA Security Rule will therefore generally be a function of whether the failure has been brought to the attention of the U.S. Department of Justice (which has federal criminal jurisdiction), and if so whether the U.S. Attorney elects to bring charges. While it is to be hoped that prosecutors will exercise their discretion cautiously in such cases, hope is not a prudent strategy for legal compliance.
A better strategy, and one which is recognized in federal criminal sentencing and prosecution decisions against organizations, is to implement a compliance program including organizational policies and board and executive-level oversight. The existence and good faith, reasonable management of such a program is a very material factor relied on by the U.S. Department of Justice in deciding against organizational prosecution when one of its employees or agents breaks the law, and in minimizing penalties if the organization is prosecuted.
More than that, a compliance program would constitute the kind of policy-level security risk management process necessary to determine acceptable levels of risk at the organizational level under the HIPAA Security Rule, which in turn would guide decisions about the reasonable and appropriate safeguards which the organization should implement. By instituting this process the organization would be able to ensure that “reasonable and appropriate” decisions are made, in reliance on the “flexible approach” factors required by the rule. Upon the implementation of safeguards selected under such guidance, the organization will have brought itself into compliance with the HIPAA Security Rule. And while risks cannot be eliminated through such a process, if and when an incident does occur which could expose the organization to penalties it will have a sound defense.
Organizational Risk Acceptance and Security Cost Control.

While the potential consequences are perhaps less dire than criminal penalties, inappropriate delegation of risk acceptance authority may also lead to excessive spending on security safeguards and inappropriately burdensome compliance decisions. This can be demonstrated by analyzing alternative compliance decision-making approaches under one of the more problematic security standards.
The HIPAA Security Rule requires a Covered Entity to "identify[,] respond to[,] mitigate the harmful effects of [and] document security incidents and their outcomes." A "security incident" in turn is defined as an "attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system." A risk reduction perspective might well interpret these provisions to require that any and every event which meets this definition must be dealt with according to the specification. But anybody familiar with systems administration in a large enterprise knows that events which fit this definition happen constantly.
At a very basic level network connections, particularly ports accessible to the Internet, are constantly "pinged" by unknown and presumably unauthorized applications or individuals. Each such attempted connection is "attempted unauthorized access" within the regulatory definition of “security incident.” They are also almost always harmless, assuming basic security safeguards have been implemented. Nonetheless compliance with the letter of the regulation would require each one to be identified, responded to and documented.
This would seem to be a pointless exercise in security log review and documentation, except that the failure to do so could be construed as a meaningful failure if some evildoer were to succeed in gaining access through such connections – and if the regulation is interpreted to require "zero risk tolerance" it could be evidence of negligence.
Since it is also not possible to rule out the risk that someone will succeed in hacking into your network, a zero risk tolerance approach would require the routine review of all relevant system event logs, and the documentation of all apparent attempts at unauthorized access as required by the regulation. And such documentation is presumably subject to the 6-year HIPAA document retention requirement. If the Security Rule is interpreted as requiring zero risk tolerance, this burdensome approach is appropriate, even though the risks presented by attempted unauthorized access would be much better addressed through good system management practices.
This would be the approach under a definition of “risk management” as equivalent only to risk reduction at the system level. But if, instead, HIPAA Security Rule compliance is a function of considered organizational risk management, it is possible to determine a level of risk which can and should be accepted – and the burdens of compliance can be appropriately balanced against their benefits.
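To illustrate the difference this makes in practice, here is a hypothetical sketch of how routine denied probes might be summarized for periodic review under a risk-based reading, with only unusually persistent sources escalated for individual documentation. The log format, field names and threshold are my own assumptions, not anything prescribed by the Security Rule.

```python
# Hypothetical sketch: instead of documenting every blocked connection attempt as a
# separate "security incident," summarize routine firewall-denied probes and flag
# only sources whose volume exceeds a policy threshold for individual follow-up.
# The log format (timestamp, source IP, destination port, action) is an assumption.
import csv
from collections import Counter

PROBE_THRESHOLD = 100  # per review period; set by organizational policy, not by HIPAA

def summarize_denied_probes(log_path: str) -> None:
    denied_by_source = Counter()
    with open(log_path, newline="") as log_file:
        for row in csv.DictReader(log_file):
            if row["action"] == "DENY":
                denied_by_source[row["src_ip"]] += 1

    total = sum(denied_by_source.values())
    print(f"Routine denied probes this period: {total} from {len(denied_by_source)} sources")
    for src, count in denied_by_source.most_common():
        if count >= PROBE_THRESHOLD:
            print(f"Escalate for individual review and documentation: {src} ({count} attempts)")

if __name__ == "__main__":
    summarize_denied_probes("firewall_log.csv")  # placeholder filename
```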
Conclusion.

The risk acceptance decision is the key to the risk management process which is the foundation of HIPAA Security Rule compliance, and as seen above such decisions should be vested at the organizational policy level. The vesting of risk acceptance authority at a lower administrative level, expressly or by default, may well lead to dysfunctional security safeguard selections, and expose the organization to avoidable penalties.
This does not mean that the board, or CEO, COO or CFO for that matter, should micromanage HIPAA compliance or security administration. It does mean that they should fulfill their fiduciary obligations and provide guidance to those who do manage compliance and security. They should receive routine briefings on the status of security and compliance, and establish policies and procedures intended to ensure compliance. Such policies should include guidance on risk acceptance, perhaps requiring CEO or COO approval for acceptance of residual risks above certain thresholds of probability and financial exposure, and vesting risk acceptance discretion in the Chief Information Security Officer (or equivalent title) below those thresholds. Since the security program budget will also be generally determined at the organizational policy level this will also tend to prevent overspending – and where a bigger budget is necessary to reduce risks to organizationally acceptable levels, the decision will be forced upward to the level best suited to balance budgetary and security issues and needs.
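For the sake of illustration, the threshold-based escalation described above might look something like the following sketch. The roles track the discussion in this article; the specific probability and dollar thresholds are invented for the example and would in practice be set by organizational policy.

```python
# Hypothetical sketch of the escalation rule described above: residual risks below
# policy-set thresholds of probability and financial exposure may be accepted by the
# CISO; anything above either threshold is routed upward. The threshold values are
# illustrative assumptions, not figures from the rule or this article.
from dataclasses import dataclass

PROBABILITY_THRESHOLD = 0.10      # annual likelihood above which the decision escalates
EXPOSURE_THRESHOLD = 250_000.00   # dollar exposure above which the decision escalates

@dataclass
class ResidualRisk:
    description: str
    annual_probability: float   # estimated likelihood of occurrence per year
    financial_exposure: float   # estimated dollar impact if it occurs

def acceptance_authority(risk: ResidualRisk) -> str:
    """Return which role may accept this residual risk under the policy."""
    if (risk.annual_probability > PROBABILITY_THRESHOLD
            or risk.financial_exposure > EXPOSURE_THRESHOLD):
        return "CEO/COO (organizational policy-level decision)"
    return "Chief Information Security Officer"

if __name__ == "__main__":
    example = ResidualRisk("Remaining risk after single-factor EHR authentication",
                           annual_probability=0.25, financial_exposure=500_000.00)
    print(acceptance_authority(example))
```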
Such an approach also requires organizational policy-makers to overcome any reluctance to address security issues on an informed basis, and requires security officers to overcome any tendencies they may have to build their own fiefdoms. Given the considerable and increasing importance of information security for information technology-dependent organizations, however, policy oversight is essential and the separation of security from operations is dysfunctional. HIPAA Security Rule compliance is therefore an opportunity for such organizations to “do well by doing good” – to learn to function better while ensuring they comply with the law – for those who will take it as such.


Originally published in New Perspectives in Healthcare Auditing (November 2004)

Sunday, March 11, 2007

A Modest Proposal for Catalyzing Health Information Technology Adoption

USING SAFE HARBORS TO REDUCE LEGAL BARRIERS TO IMPLEMENTATION OF ELECTRONIC HEALTH RECORDS AND HEALTH INFORMATION NETWORKS
This post is based on a white paper I've prepared, which proposes that state governments should take a leadership role in reducing legal barriers to electronic health record (EHR) and health information network (HIN) adoption, by implementing a regulatory “safe harbors” scheme for EHR and HIN privacy and security policies and practices. I've also developed model EHR and HIN safe harbors legislation, and would be happy to provide copies of these (and the original white paper) upon request.
Since this white paper is intended as a “straw man” to advance discussion of solutions to legal barriers – real and perceived – to EHR and HIN implementation, it does not include comprehensive legal analysis or legal citations. It does assume the reader is generally familiar with EHR and HIN issues, the privacy and security requirements of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and comparable state law principles, and to some extent with regulatory processes.
Introduction.
The lack of clear legal standards for EHR and HIN privacy and security is perhaps the fundamental legal obstacle to their widespread adoption. In their absence healthcare organizations don’t know what they have to do to avoid possible regulatory penalties and civil liabilities. Uncertainty always weighs against action, especially when the uncertainty concerns legal risks.
While some of this uncertainty could probably be resolved by minimal research and analysis, some of it is legitimate and inevitable given the current state of the law. The solution is therefore to develop legal certainty to the extent possible, at least for key privacy and security issues.
In principle this might be done by legislative mandate, but that is a blunt and inflexible instrument badly suited to emerging technology issues. Over time it might also be developed by common law, through litigation; but that would take many years at best, and the risk of litigation is itself part of the current problem.
Legal certainty is therefore more readily developed through a regulatory “safe harbors” solution. This kind of solution has been implemented for comparable problems in a number of areas, including the confusing and problematic field of healthcare financial “fraud and abuse” (the so-called “Stark” and “antikickback” laws), which provides a useful model for EHR and HIN safe harbors.
One solution to this might be federal safe harbors, but it would probably take much longer to develop and pass federal implementing legislation and develop the necessary regulations than it would to do so at the state level. Quite apart from the more complex political logistics, it seems likely it would be much more difficult to identify nationally-acceptable policies and practices given the variations among the states. A state-based strategy would instead let states whose healthcare communities felt they were ready to implement safe harbors go forward, and allow the others to follow as they were ready.
A state-based approach does raise the potential problem of non-uniformity. One state’s safe harbors may not match those of its neighbors, or its neighbors may not implement or formally recognize safe harbors at all.
While this is a legitimate concern, the fact is that there is currently no legal mechanism for development of uniformity at all. Policies and business practices tend to be developed by standards bodies and professional organizations which are not legal authorities, and are implemented ad hoc by organizations which may or may not take standards bodies’ and professional organizations’ guidance into account. The implementation of safe harbors state-by-state should therefore tend to increase rather than decrease uniformity compared to the current situation, especially if states adopting safe harbors coordinate their regulations.
By the same token compliance for organizations operating across state lines should also become simpler. Since safe harbors compliance is by definition not mandatory, interstate organizations will be able to opt-out of safe harbors which are not appropriate. Perhaps more likely, organizations operating in both states with safe harbors and those without will opt to comply and get the benefit of the safe harbors where possible. Since states which are not ready for safe harbors are also unlikely to be ready to impose legal mandates which conflict with other states’ safe harbors, interstate organizations should be better able to implement consistent policies and practices across the organization.
Uncertainties in EHR and HIN Privacy and Security Law.
There is no special legal domain for EHRs and HINs. An EHR is nothing more than a computer system used to receive, store, process and produce health information, and a HIN is any set of network technologies used to transmit it from computer system to computer system. The laws which apply to EHRs and HINs are therefore the same laws which apply to health information in general: principally HIPAA and a few other federal laws, plus the laws of the states in which the computer systems and the organizations which use them are located, and of which the individuals whose information is present in the EHR or HIN are residents.
While some requirements of these laws are fairly clear, at least with a little work, others are not. Since EHR and HIN implementation always requires changes to business practices, it is sometimes unclear how a privacy-related policy or practice should be adapted to new arrangements. Security requirements in particular are problematic, since these are almost universally risk-based and not prescriptive. In other words, they do not describe specific policies, practices or technologies which must be adopted, but require healthcare organizations to analyze security risks and make reasonable and appropriate decisions about the security safeguards they will implement.
It is therefore difficult and sometimes impossible to determine a priori whether many privacy policies and practices, and almost all security policies and practices, will be considered compliant with applicable law. This may be made somewhat clearer with a couple of examples.
On the privacy side, for example, a health care provider wishing to share health information using a HIN may be concerned whether this sharing needs to be disclosed to potentially affected patients. Under HIPAA and a number of state laws health care providers are required to give patients a notice of their privacy or information practices – that is, a general description of the ways they use or disclose patient information. However, none of these laws has any provision specifically applicable to HIN usage. The health care provider has no guidance, and must decide for itself whether HIN participation information should be included, and if so what the notice should say.
On the security side, the same provider may wonder what authentication processes it should implement for users of an EHR it is setting up. Its EHR vendor may suggest single-factor password authentication, a relatively inexpensive option. Its consultants, on the other hand, may suggest using two-factor authentication using both passwords and tokens or swipe cards, a more expensive option. While HIPAA and some state laws both indicate that some form of authentication must be implemented, they provide no guidance for choosing between single- and two-factor authentication; they simply tell the provider to do a risk analysis, and choose the “reasonable and appropriate” option.
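For readers wondering what the two-factor option actually involves, here is a minimal sketch of password-plus-token verification using the standard time-based one-time password algorithm (RFC 6238): something the user knows, plus something the user has. The secrets, salt and password in the example are illustrative only, and this is not a recommendation of any particular vendor's product.

```python
# Minimal sketch of two-factor verification: something the user knows (a password,
# stored as a salted hash) plus something the user has (a token or phone generating
# time-based one-time passwords per RFC 6238). All secrets below are illustrative.
import base64, hashlib, hmac, struct, time

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def totp_code(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % (10 ** digits)).zfill(digits)

def verify_two_factor(password: str, token_code: str,
                      stored_hash: bytes, salt: bytes, totp_secret: str) -> bool:
    password_ok = hmac.compare_digest(hash_password(password, salt), stored_hash)
    token_ok = hmac.compare_digest(token_code, totp_code(totp_secret))
    return password_ok and token_ok  # both factors must succeed

if __name__ == "__main__":
    salt = b"example-salt"                  # illustrative; use a random per-user salt
    secret = "JBSWY3DPEHPK3PXP"             # illustrative base32 TOTP secret
    stored = hash_password("correct horse battery staple", salt)
    print(verify_two_factor("correct horse battery staple", totp_code(secret),
                            stored, salt, secret))
```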
The degree to which this kind of uncertainty is acceptable depends on the provider’s tolerance for risk. In principle, if the provider makes informed and reasonable determinations appropriate to its conditions and circumstances – if its privacy and security decision-making processes are adequate – it should not be liable if something goes wrong and health information is negatively affected. In practice, patients may be alarmed to discover that their information is being shared over a HIN and claim that the notice they received was inadequate; or in the event of an EHR security breach, may claim even two-factor authentication was inadequate.
While good decision-making practices should prevent legal liabilities in such cases, there is no assurance that they will. If there is some sort of harm to affected individuals, especially if there is a public outcry or media attention, judges, juries and even regulatory authorities may be inclined to try to find reasons to give the victims some kind of recourse. What seemed reasonable and appropriate at the time may, with the benefit of hindsight and adversarial scrutiny, come to seem unreasonable and negligent.
This kind of legal risk is a material obstacle to EHR and HIN implementation. Some organizations are comfortable with this level of legal risk, or perhaps don’t notice it. Others have a lower tolerance for legal risk, perhaps especially when it is added to the operational and financial risks of new technology implementation in the first place.
To the extent that legal uncertainties about EHR and HIN standards and practices can be reduced, then, a material barrier to their implementation will be lowered for at least some organizations. And some of these uncertainties, at least, can be dealt with by the creation of safe harbors for key policies and practices.
Using Safe Harbors to Reduce Legal Uncertainty.
Safe harbors should be carefully distinguished from legal mandates. A legal mandate is a statute or regulation (or much more rarely caselaw) which prescriptively identifies a specific legal requirement, with penalties for its violation. For example, the HIPAA privacy regulations require publication of a notice of privacy practices, and prescribe its content with some specificity. An organization which is required to publish a privacy practices notice and fails to include content prescribed by the rules is subject to regulatory penalties, and possibly exposed to claims for damages by patients claiming to have been harmed by the failure.
A safe harbor, on the other hand, does not prescribe any requirements, nor is there a penalty for noncompliance. Rather, a safe harbor describes a set of facts and the policies and practices implemented by an organization under those facts, and states an agency’s interpretation that under those facts the described policies and practices do not violate the applicable law. Organizations are not penalized for failing to implement those policies and procedures, but those which choose to do so are assured they will not be penalized. Organizations which choose not to do so have no such assurance, but are not necessarily in violation of applicable law and therefore not necessarily subject to penalties.
A safe harbor therefore reduces legal risk, as opposed to a legal mandate which actually creates it. A safe harbor scheme would therefore reduce legal risks in EHR and HIN implementation, and so reduce this barrier to implementation, as opposed to legal mandates which would only raise it higher.
A safe harbor scheme can also accommodate the problematic issue of different and changing technologies and circumstances better than a legal mandate scheme. This problem is the legitimate reason why the HIPAA security regulations are risk-based rather than prescriptive: It takes much longer to change statutes than it does regulations, and longer to update regulations than to update regulatory guidance. Any specific prescriptive requirements would be at risk of becoming obsolete, and perhaps counter-productive, more quickly than they could be revised.
For this reason HIPAA itself – the legislation rather than the regulations usually identified with it – deliberately established a regulatory structure which authorized and directed agency issuance of appropriate regulations, to accommodate changing and variable needs and circumstances. The HIPAA enforcement regulations in turn establish a dispute resolution structure which includes publication of interpretive decisions to help guide healthcare organizations – though it appears it will be some time before a significant number of cases reaches that level.
This structure is not unique to HIPAA, and in fact is relatively well-developed in the healthcare “fraud and abuse” area. This is a field in which legislation established draconian penalties for violations of broad, confusing and counterintuitive laws. Given the breadth and difficulty of interpretation of these laws, a risk-averse interpretation would tend to rule out many legitimate and even beneficial business arrangements and transactions. In other words, the fraud and abuse laws created legal uncertainties which may be a material barrier to valuable activity.
In order to overcome this barrier, the U.S. Department of Health and Human Services publishes safe harbor regulations interpreting the fraud and abuse laws as applied to specific sets of facts. Less formal guidance documents, as well as opinions on specific factual situations presented in letters requesting guidance, provide additional assurances which help reduce the risks to healthcare organizations seeking to develop business arrangements and transactions which they otherwise might avoid altogether – even when they might provide material benefits to patient care and administration.
A comparable regulatory scheme for health information privacy and security in EHR and HIN environments could issue comparably useful safe harbor regulations and interpretation. For example, in the case of patient notice of HIN participation, an agency could issue regulations (or guidance) describing the form and content of one or more provisions which would provide adequate notice. In the case of EHR authentication, an agency could issue regulations specifying factors which would be considered reasonable and appropriate and therefore in compliance with the law. In neither case would healthcare organizations be required to use the specific provision or authentication factors, but those which chose to do so would be assured their implementation was consistent with the agency’s authoritative interpretation of the law.
Developing Content for EHR and HIN Safe Harbors.
An EHR and HIN safe harbors scheme would be adaptable to – and should in fact be based upon – prevailing industry standards and best practices, and would also be transparent and open to the public. Legislation might require implementation of formally-developed industry standards, as HIPAA does for transactions, but that is probably more appropriate for prescriptive legal mandates rather than safe harbors. A better strategy would be to develop proposed safe harbors based on research into healthcare standards and practices, to be finalized after a public comment period open (as is generally required for regulations) to any interested party.
A public safe harbors development process would present a much greater opportunity for public understanding of and input into EHR and HIN policies and practices than current practice. Currently, EHR and HIN policies and practice are developed ad hoc, to some extent in a few standards groups but principally in negotiations among healthcare organizations and vendors. Not only is this activity mostly unknown to the public, for the most part there is not even an opportunity for public understanding and input.
Ad hoc development also leads to avoidable divergence in EHR and HIN policies and practices among organizations. This in itself is a barrier to widespread implementation, since organizations using different policies and practices often find it difficult or even impossible to share networks and information, or find it difficult to adapt to each other when they try. Publicly-developed safe harbors would present common policies and practices all participants could use, again lowering a barrier to implementation.
As noted above, and reflected in HIPAA, there is a valid point that technologies, economic conditions and operating environments are diverse and changeable, often rapidly. However, this point argues for careful execution of a safe harbors strategy, rather than its avoidance. Safe harbors should be carefully chosen and defined to apply to and solve common problems, at a sufficiently general level that they should not need frequent revision. This is also an argument for the inclusion of additional regulatory guidance opportunities, through reports, publications and perhaps opinion letters, so that new developments and distinctive circumstances can be addressed.
In practical terms, this process might work for the privacy notices and authentication examples discussed above as follows. Given appropriate enabling legislation, the agency authorized to develop EHR and HIN safe harbors would identify a set of key issues for which uncertainty about legal privacy or security standards appeared to be discouraging EHR or HIN implementation. These issues might very well include privacy notice content and authentication. Initial proposals for their resolution would then be solicited from appropriate stakeholders and interest groups, as well as the public.
Based on this initial feedback, the agency would develop proposed regulations and publish them for comment. The proposed regulations would be sufficiently detailed to permit meaningful comments; for example, the proposed privacy notice regulation might provide one or more provisions which could be adopted, while the proposed authentication regulation might specify that use of two-factor authentication would be considered compliant. Following comments on the proposed regulations, the agency would develop and publish final regulations.
Organizations could choose to implement the policies and practices described in the regulations, and have the agency’s assurance they were in compliance; organizations which chose not to do so would not be penalized per se. For example, an organization could still conclude, based on its HIPAA risk analysis, that two-factor authentication was not a reasonable and appropriate safeguard in its environment. This decision might be open to question in the event of a regulatory investigation or litigation, especially one arising from an incident raising the question of the adequacy of authentication, but the mere fact of noncompliance with the safe harbor would not be grounds for a penalty.
Implementation of the Safe Harbor Scheme.
An EHR and HIN safe harbors regulatory scheme would be no silver bullet. Given the complexities of federal and state jurisdiction no agency would be able to cover all the issues. And while ideally, perhaps, EHR and HIN safe harbor regulations should be a federal function, creating significant new federal agency authority can take a long time. Further, achieving a national consensus on appropriate safe harbors is likely to be much harder than achieving it within a state or region. Federal safe harbors are not likely to be available for some time at best.
State-by-state safe harbors, on the other hand, raise the questions of HIPAA applicability and the potential for excessive and unnecessary cross-state variation. While the former question needs more analysis, HIPAA does provide that state laws which are more protective of information control where both HIPAA and state laws apply.
State-based regulations which establish safe harbors more protective than HIPAA should therefore provide assurances of compliance with both state and federal law. Where HIPAA does not provide a clear standard, while state agencies may have limited authority to interpret HIPAA, the fact that a state agency has determined that a given policy or practice provides reasonable and appropriate safeguards, following a public comment process, should be very persuasive for HIPAA purposes.
Safe harbors could therefore be implemented using model legislation for state adoption. In order to maximize uniformity, the states implementing such a scheme could establish a coordinating group to keep their safe harbors (and perhaps other health information laws) consistent.
This would not be a complete solution, of course, unless all the states and territories adopted the same scheme and safe harbors, and that is not likely any time soon. Even with a coordinated state-by-state scheme, interstate organizations operating in both states with safe harbors and those without (or those with materially different safe harbors) would face the question whether they could adopt uniform policies and practices across the organization, and comply with both states’ laws.
Upon analysis, this problem becomes something of a red herring. Interstate organizations already face the problem of actually or potentially conflicting state requirements, with much less guidance and uniformity than would be possible under a state-by-state safe harbors scheme. Such a scheme would therefore be a clear improvement over the current situation.
The uniformity problem would only arise in the first place for interstate organizations operating in both safe harbor and non-safe harbor states if there was a conflict between the safe harbor of the one state and some legal requirement of the other. One reason such conflicts seem unlikely to arise is that a safe harbors scheme is probably more likely to be adopted by states whose legislators and regulators feel competent in addressing health information technology issues. If legislators and regulators in non-safe harbor states do not feel sufficiently competent in this area to adopt a safe harbors scheme, it seems unlikely they would feel competent enough to implement legal mandates in this area in sufficient depth to create a conflict with other states’ safe harbors.
Should this problem arise anyway the nature of safe harbors compliance would allow interstate organizations to resolve it, by adopting policies and practices compliant with the mandate; there would be no penalty for failing to comply with the safe harbor. The same principle would allow resolution of a conflict between different safe harbors provided by different states, should that arise, since an interstate organization could choose between available safe harbors without penalty.
A coordinated state-by-state safe harbors approach would therefore allow the incremental development of national uniformity. States which were ready to address EHR and HIN issues could adopt safe harbors reflecting well-accepted, reasonable and appropriate policies and practices; other states could follow their lead when they were ready and if they found such safe harbors acceptable. Healthcare organizations would have an incentive to adopt safe harbor policies and practices to gain some currently available legal certainty, but could move to them as and when it worked for them without penalty.
Conclusion.
As a general rule there are good reasons for governments to tread carefully on technology-related issues, especially in emerging fields like EHR and HIN implementation. However, we seem to have reached a point at which legal uncertainty is itself a barrier to potentially beneficial progress, and governments – as the principal source of the laws – may be especially well-suited for resolving this kind of uncertainty. A carefully managed safe harbors strategy would allow for the reduction of legal uncertainty without imposing prescriptive requirements which would be hard to change if and when they became obsolete. While it would probably be most valuable in the long run for this to be a federal function, in the short run the states could assume a leading role, and reduce legal barriers to EHR and HIN implementation by reducing its attendant legal uncertainty.

Tuesday, February 13, 2007

The Role of Legal Counsel in Information Security Risk Assessment and Strategic Information Security Decisions

Legal counsel can and should play an important role in information security legal compliance and risk management. While the implementation of many security safeguards requires substantial technical knowledge, the development and selection of specific security policies, procedures and technical requirements for purposes of legal compliance and risk management requires the integration of such technical knowledge with legal interpretation and strategic risk management insight.
Specification of Legal Security Issues.
Legal requirements for security compliance, whether under HIPAA, Gramm-Leach-Bliley, emerging common law or almost any other law, are organizational obligations, not technical specifications. (The California Database Protection Act and comparable laws, which create incentives for encryption of personal information stored in databases, may be an exception. Even in this case, however, the law does not specify the type or strength of the encryption, or make encryption mandatory.) Any given organization may be subject to one or more sets of legal security requirements, depending on the kind of activities it engages in and the jurisdictions where it does business.
> Legal Task: Identification of security laws applicable to organization, based on jurisdictions and activities.
As a rule, security legislation and regulations do not have any “safe harbors,” so there is no security control or set of security safeguards whose adoption can be guaranteed to make an organization compliant. Rather, these laws require organizations to assess and manage information security risks, to a degree usually framed as “reasonable and appropriate,” or as applicable to “reasonably foreseeable risks.” Unfortunately, “risk” is a multi-dimensional concept, a factor which always should be but too often is not taken into account in security risk assessment and management.
The usual formulation of information security objectives, which are the objectives against which security risks are determined, is the “CIA triad,” for “confidentiality, integrity and availability” – that is, the extent to which a given asset is protected against unauthorized viewing, use or alteration. In some settings, such as financial system assessment, the additional objective of “accountability,” meaning the ability to strongly identify participants in transactions, may also be a key objective.
These security objectives are frequently in conflict; for example, any process which protects confidentiality by making asset access more difficult will tend to decrease availability to the same degree. When security risk objectives conflict their resolution is a matter for organizational policy.
> Legal Task: Ensure security objectives of organization are consistent with legal obligations of the organization.
Information Security Risk Assessment.
The foundational process for information security is risk assessment. In this process an appropriate professional or team of professionals undertakes a structured review of the security controls and safeguards used in connection with an organization’s processes, physical facilities and technical systems used to receive, store, process and transmit legally-protected data.
The results of a risk assessment may be used (1) to identify gaps or weaknesses which might put the protected data at risk, supporting the recommendation or development of appropriate new or supplemental safeguards and controls to fill the gaps or mitigate the weaknesses; or (2) to confirm an organization’s compliance with security standards. The former type of assessment is frequently called a security “gap analysis,” while the latter is sometimes, but not always, referred to as a security “audit.”
From a lawyer’s point of view both types of assessment are factual investigations, and assessment reports are (or should be) findings of fact. It should be noted, however, that assessments sometimes purport to go beyond findings of fact, to conclusions of law; e.g., that a given organization is or is not “HIPAA compliant.” This is understandable when the objective of the assessment is to determine compliance, but the actual determination whether an organization is in compliance with the law is something only a lawyer is trained and authorized to do. If a security professional makes that call instead, then quite apart from issues of the unauthorized practice of law, the organization might well get an incorrect answer about its compliance status.
> Legal Task: Help develop risk assessment scope of work to ensure focus on appropriate objectives and fact-finding limitations.
A compliance assessment therefore should either be a joint lawyer/security professional project or a two-stage project in which the legal implications of the security professional findings are determined by a lawyer. On the fact-finding level there are a number of possible risk assessment methodologies available, none of which are required as a matter of law for private or state governmental organizations.
Federal agencies are supposed to use the risk assessment methodologies published by the National Institute of Standards and Technology (“NIST”), which has been influential in federal security regulation development and therefore should be taken into account in assessing compliance with federal regulations. A very few industries have developed or are developing their own appropriate methodologies, especially the banking and energy sectors, and the major consulting firms tend to have proprietary methodologies.
Generally, any information security risk assessment will start with an identification of (1) security “assets,” (2) “threats” to those assets, and (3) operational and system “vulnerabilities” to identified threats. While precise definitions vary, generally these terms refer to the following:
• An “asset” may be information as well as operational resources such as software applications, bandwidth and memory, and networked devices and equipment, which the organization is legally obliged to protect, which are materially necessary to operations, or which are otherwise of value to the organization.
• “Threats” are the various agencies which may harm or interfere with assets, including human threats such as hackers and malicious insiders; environmental threats such as facility fires, power outages and burst water pipes; natural threats such as floods, earthquakes, and the like; and technical threats such as computer viruses, worms and spyware (arguably a subset of human threats, since they are of human origin).
• “Vulnerabilities” are those operational and system characteristics which make it possible for specified threats to harm or interfere with specified assets. Some vulnerabilities may be obvious and easily resolved, as with implementation of a firewall to prevent unauthorized external access to a network. Others may be the result of normal or even generally beneficial features of a process or system element, as where remote database access to support telecommuting creates unavoidable (though to some extent reducible) risks that an unauthorized individual will “spoof” an authorized user’s identity to gain network access.
Once assets, threats and vulnerabilities have been identified, the next step in risk assessment is “impact” and “control” analysis, to make a “risk determination.” Risk is typically considered a function of the probability that a given threat will cause harm to a given asset, given the existing vulnerabilities. The finding at this stage is sometimes called the “inherent risk.”
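To make this step concrete, here is a minimal sketch of the arithmetic a risk determination implies, treating risk as likelihood multiplied by impact on simple ordinal scales. The asset names, threat names and ratings below are invented for illustration; actual methodologies, such as NIST's, use more elaborate scales and weightings.

    # Illustrative only: a toy "inherent risk" calculation.
    # Likelihood and impact are hypothetical ratings on a 1-5 scale.
    assets = {
        "patient_database": {"impact": 5},   # high liability and operational value (assumed)
        "public_website":   {"impact": 2},
    }
    threats = {
        "external_hacker":  {"likelihood": 3},
        "burst_water_pipe": {"likelihood": 1},
    }

    def inherent_risk(asset_name, threat_name):
        """Risk as a function of threat likelihood and asset impact."""
        return threats[threat_name]["likelihood"] * assets[asset_name]["impact"]

    for a in assets:
        for t in threats:
            print(f"{a} / {t}: inherent risk = {inherent_risk(a, t)}")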
Risk assessment can be a difficult, burdensome and uncertain process. In large or complex organizations and/or systems it may only be practical to assess a sample of the processes, facilities and/or systems, though choosing reasonably representative samples may be problematic.
Assets are usually readily identifiable, but doing so requires defining the perimeters of the processes and systems under assessment, and inventorying the information, devices, and equipment which constitute its assets. It is also important to try to assign values to assets, and in this connection it should be noted that the term “asset” has something of a specialized meaning in the risk assessment context.
Ordinarily, assets only have a positive value, such as the market price at which they can be sold, or their value to the organization in operational support, which might be measured by the cost of replacement. For risk assessment purposes, however, the liability or penalty “value,” meaning the exposure of the organization to liabilities and/or penalties due to loss, disclosure or misuse of the asset, should also be estimated.
> Legal Task: Help identify assets organization is legally obliged to protect, e.g. legally-protected information, licensed software and trade secrets, etc.
> Legal Task: Estimation of liabilities and/or penalties associated with loss, disclosure or misuse of assets identified for risk assessment purposes.
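Purely by way of illustration, the specialized “value” of an asset for risk assessment purposes might be estimated along the following lines; every figure below is invented.

    # Hypothetical valuation of a single asset for risk assessment purposes.
    replacement_cost   = 50_000     # assumed cost to rebuild or replace the asset
    penalty_exposure   = 250_000    # assumed regulatory penalties if the asset is disclosed or misused
    liability_exposure = 1_000_000  # assumed civil damages exposure for the same events

    # Operational/market value plus liability and penalty "value."
    asset_value = replacement_cost + penalty_exposure + liability_exposure
    print(f"Asset value for risk assessment purposes: ${asset_value:,}")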
Like assets, most types of threats are usually identifiable at a categorical level though some, especially threats caused by malicious software, are constantly evolving. However, the identification of the specific threats applicable to a given asset requires a detailed review of the operational environment in which the asset is kept, used and/or transferred.
Threats can generally be categorized as follows:
• Human threats, from insiders or outsiders (e.g. hackers), who may unintentionally or deliberately access, use, modify, transfer or harm assets.
• Physical facility threats such as power failures, fires, burst water pipes and other events harming the facilities or equipment used in connection with the assets.
• Environmental threats, such as floods, earthquakes and tornados, which may also cause harm to facilities or equipment.
• Technical threats such as computer worms, viruses and spyware (which might be considered a subcategory of human threats since humans create them), as well as system-related issues such as application instability, etc.
Vulnerabilities are also functions of the operational environment, and are identified by the known characteristics of the operating environment, including those of the specific technical systems, buildings and equipment, as well as those of human beings in general. Whether or not a given characteristic is a vulnerability depends entirely upon the assets and threats presented in the given environment.
An assessment also inventories existing security safeguards and controls (two overlapping terms, in this context sharing the meaning of protections against potentially harmful events). These are frequently categorized as administrative, physical and technical, though there is an emerging recognition of governance controls as an important category as well. These categories break out as follows:
• Administrative safeguards are the policies and procedures used to manage operational processes and human activities involving or pertaining to assets and vulnerabilities. These would include policies and procedures pertaining to hiring and employment, authorization for and management of asset access and use, etc.
• Physical safeguards are the policies, procedures and physical requirements applicable to the buildings and equipment relevant to asset management, such as locked-door requirements, key issuance, fire suppression, disaster recovery plans, portable device (e.g. laptop) protection policies, etc.
• Technical safeguards are the policies, procedures and system requirements controlling access to and use of software and information in devices and/or on the network. Technical safeguards include system configuration requirements, user identification and authentication procedures and processes (e.g. password issuance and management), malicious software screening and disposition, encryption of data, etc.
• Governance controls constitute the policies and procedures used to provide security oversight. While it has long been recognized that factors such as demonstrated executive commitment and reporting, accountable security officers and appropriate security training are essential for effective security, governance controls have not tended to be a separate subject of security assessment (though some aspects, such as training, are sometimes assessed as part of administrative safeguards). With the emergence of security as a regulatory compliance issue, governance control assessment is at least prudent if not necessary to avoid penalties and liabilities, including penalties or liabilities applicable to individual officers or directors responsible by law for organizational governance.
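As an illustrative sketch only, an assessment team might keep a simple inventory tagging each documented safeguard with one of these four categories, so that a category with no controls on record stands out as a possible gap; the control names below are hypothetical.

    # Hypothetical control inventory grouped by safeguard category.
    controls = [
        {"name": "workforce security training",     "category": "administrative"},
        {"name": "badge-controlled server room",    "category": "physical"},
        {"name": "password complexity rules",       "category": "technical"},
        {"name": "quarterly board security report", "category": "governance"},
    ]

    categories = {"administrative", "physical", "technical", "governance"}
    covered = {c["category"] for c in controls}
    print("Categories with no documented controls:", categories - covered)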
The effectiveness of policies and procedures is in many cases at least partially a legal question, as where employees are supposed to be subject to discipline for policy violations, or oversight policies are implemented to avoid or minimize liability and penalty exposures.
> Legal Task: Review legal effectiveness of policy and contractual documents used as security safeguards and controls.
The most difficult step in risk assessment may be risk determination, since this depends upon probability information which may not be available, or if available may not be reliable. There is currently no central repository of threat or security incident information, and no mandatory reporting, so to date there is no robust information on the incidence of most threats.
Some security professionals argue that certain vulnerabilities are so well known and so easily corrected (such as the use of weak passwords) that “due care” requires their correction. This suggests that some specific safeguards may be required, in at least some specific settings, as a matter of law. There is little or no specific law on this point, so the identification of such safeguards would seem to be a matter for determination by properly qualified security experts.
> Legal Task: Work with security professionals to identify safeguards which may be required to meet the applicable standard of care, and basis for such identification.
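For a minimal illustration of such an easily corrected vulnerability, even a crude screening rule like the sketch below would catch the weakest passwords; the length threshold and word list are assumptions for illustration, not a recommended standard.

    import re

    # Toy password screening - thresholds are illustrative assumptions, not a standard.
    def is_obviously_weak(password: str) -> bool:
        """Flag passwords that fail even minimal screening."""
        too_short = len(password) < 8
        no_mix = not (re.search(r"[A-Za-z]", password) and re.search(r"\d", password))
        common = password.lower() in {"password", "letmein", "12345678", "qwerty"}
        return too_short or no_mix or common

    print(is_obviously_weak("password"))                         # True
    print(is_obviously_weak("Reasonably-Long-Passphrase-2007"))  # False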
Impact information may be more available but more problematic, since assessment according to different security objectives (as discussed below) may lead to different impact outcomes. For example, electronic health records (“EHR”) systems are used to store and process personal health information, which is required to be protected under HIPAA and is accorded highly confidential status under not only HIPAA but a variety of other laws. At the same time, an EHR may be used to support critical clinical care, so that a failure of availability might cause erroneous treatment decisions leading to a patient’s serious harm or even death.
Note that in both cases the impact determination is based on a projection of legal exposure. In this case, the differential impacts are that a failure to provide confidentiality protections judged adequate in a HIPAA administrative enforcement proceeding might lead to a few thousand dollars in civil penalties, while a treatment error causing a patient’s death could lead to a multimillion dollar negligence judgment.
This risk assessment step therefore requires legal insight and analysis. And this scenario also demonstrates the reason why a security risk assessment should only be undertaken with a clear understanding of the organization’s risk management strategies and tolerance, and the security objectives of processes and systems under assessment.
> Legal Task: Review and analyze legal implications of risk assessment findings, including alternative liability and penalty exposures under different scenarios.
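To make the differential exposure concrete, the sketch below compares expected annual exposure under the two EHR scenarios just described; the probabilities and dollar amounts are invented, and in practice would come from legal analysis and enforcement or claims data.

    # Hypothetical expected-exposure comparison for an EHR system.
    scenarios = {
        "confidentiality breach (HIPAA civil penalty)": {
            "annual_probability": 0.10,   # assumed
            "exposure": 25_000,           # assumed penalty amount
        },
        "availability failure (negligence judgment)": {
            "annual_probability": 0.01,   # assumed
            "exposure": 5_000_000,        # assumed judgment amount
        },
    }

    for name, s in scenarios.items():
        expected = s["annual_probability"] * s["exposure"]
        print(f"{name}: expected annual exposure = ${expected:,.0f}")

On these invented numbers the availability risk dominates even though it is far less likely, which is why assessment findings cannot be read apart from an understanding of the organization's objectives and exposures.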
Strategic Information Security Decision-Making.
Security risk management and compliance decisions will always be subject to second-guessing in hindsight, by regulators or counsel for parties alleging harm caused by a security breach. The only effective response to this is to implement appropriate security risk assessment and management diligently and in good faith.
The information security legal compliance process therefore resembles the processes used by organizational fiduciary officers in compliance with the corporate “business judgment rule,” and to minimize organizational and officer exposures to criminal penalties under the Federal Sentencing Guidelines. Such processes require informed executive oversight and careful documentation. Advice from qualified experts and legal counsel can help demonstrate due diligence, and legal counsel can be helpful in developing the strategy for properly documenting the process for use as defensive evidence, if needed.
> Legal Task: Assist in development of organizational oversight policies and procedures for security compliance oversight and risk management.
> Legal Task: Ensure adequacy of security compliance documentation for evidentiary purposes.
Legal counsel may also be helpful in making hard choices, as where a technical solution is available but expensive and a policy control is under consideration as an alternative. A good security consultant can make appropriate findings identifying security vulnerabilities, and can recommend alternative solutions, but the organization’s accountable executives must make the decision whether or not the risks associated with the policy control alternative are acceptable.
This is fundamentally a governance-level decision, which should be made in accordance with the organization’s strategies for managing its full portfolio of risks – financial, operational, legal, and so on – which includes but is not limited to information security risks. At the organizational level there are four basic risk management strategies, any or all of which may have implications for security management:
• Risk avoidance, a strategy under which an organization determines that its exposure is simply too great in performing some specified activity, and avoids engaging in that activity. For example, a bank might find the lower costs of offshore processing of customer information attractive, but conclude that the lack of adequate oversight of and legal recourse against offshore processors for failing to protect the information makes this option unacceptable.
• Risk assumption is a strategy under which risks are understood and deliberately accepted, as an informed policy decision. Since risks can never be reduced to zero as a practical matter, risk assumption is an inevitable element of risk management. If risks to be assumed can be accurately projected, it may be possible to reserve against them. Any organization which fails to assess its risks is essentially adopting a strategy of assuming all risks by default.
• An uncommon strategy which may be becoming more available is risk transfer, under which the exposed party obtains some coverage for its own risk exposure by having a second party assume some or perhaps all of the risk. Insurance, where available, is one example of security risk transfer; so is an indemnification clause in a contract with a party hosting or otherwise performing services affecting assets.
• The most common strategy, and sometimes the only one recognized as a security strategy by the less sophisticated, is risk reduction. Risk reduction includes the implementation of whatever policies, procedures and technical solutions may be necessary or desirable to reduce identified risks to a level at which they can be assumed.
The precise mix of strategies an organization uses depends in part on what is available, both practically and as a matter of law. Some risks are inherent in an organization’s mission and cannot be avoided; for example, fraud is an inherent risk for financial services, and medical error is an inherent risk for health care providers. And risk transfer, in particular, may or may not be an option, depending on the availability of insurance or the ability to transfer risk to other parties by indemnification.
> Legal Task: Assist in negotiation of insurance coverage and/or contracts transferring risk, where available.
The bottom line on an organization’s security strategies depends upon its security risk tolerance. While there have been arguments that there is or can be a “return on investment” from security activities, security is usually perceived as a zero-sum game: Any resources invested in security are taken away from other possible uses. The organization, therefore, must make a policy decision about how much it is willing to allocate to security, based on the availability of resources and the security-related risks it is willing to assume.
This kind of decision must be informed, but cannot be determined by security risk assessment findings. Information security legal compliance and risk management is just one of the portfolio of risks any organization must manage. An over-allocation of resources to security which harmed the organization’s ability to fulfill its mission, for example, could be more detrimental than many security events.
Deciding whether or not a given level of security risk is tolerable therefore depends less on an understanding of specific security threats and vulnerabilities, than on an understanding of their implications for the organizational mission. Potential financial, operational and reputational harms and legal penalties associated with security risks must be balanced against potential harms associated with their prevention, and there is no a priori formula for striking such a balance. Decisions like this are in the final analysis the fiduciary responsibility of the officers and board of the organization, and the role of both lawyers and security professionals at this level is to provide these officers and directors with the information and professional advice they need to make those decisions.
> Legal Task: Provide legal information and counsel to executive officers and board in the strategic management of the organization.
Conclusion.
Lawyers should play an active role at all levels of the information security risk assessment process, from defining the scope of the assessment and determining the legal effects of policies and procedures under assessment, through interpretation of the legal implications of an assessment to advise the officers who must decide what it means to the organization. Technology-dependent organizations should therefore identify (or develop) and make use of attorneys who understand how to work with information security concepts, documentation and professionals, to help them appropriately manage their information security compliance obligations, and manage their security-related risks. Conversely, lawyers serving such organizations should develop appropriate expertise, or identify and make use of outside counsel when dealing with potentially important security issues. Either way, this means involving legal counsel in information security risk assessment and management processes and procedures.

Tuesday, February 6, 2007

InfoSec Risk-Shifting and Consumers

One of my pet peeves (I have quite a few) is the way that we tend to use the term "risk management" as if it had a generally accepted meaning everybody understands. For infosec and most other IT professional purposes risk generally means a "hazard" associated with IT usage, in more formal terms described as a function of the probability of an event with negative consequences occurring and the potential severity of such harm.

From an IT and infosec professional's POV, "risk management" is what you do to reduce the likelihood of an identified, potential negative event or class of events, its harmful consequences, or both. Safeguards and controls are selected depending on whether their associated cost is reasonably proportionate to the expected benefits in reducing risks.

This concept set is a little fuzzy around the edges, but is generally accepted as a viable algorithm for IT management and infosec. (I actually don't think this algorithm works all that well in these areas either, and I think I've got a solution for that, but that's a topic for a future post.) However, I don't think this particular algorithm is recognized and accepted by one very important IT stakeholder group: Consumers.

Consumer advocates will not find the infosec/IT professional cost-benefit model very attractive for a simple reason: It generally shifts residual risks to them. Any cost-benefit-based risk management strategy will inevitably wind up determining that some risks are not worth the cost of elimination. If this model is the legal standard of care - which it in fact is under HIPAA and GLBA, and other laws and standards - that means that an organization which has decided not to protect against such risks is not liable if a negative event in that risk range occurs. If the individual(s) affected by a negative event have no recourse, they have assumed the risk; in other words, the residual risks have been shifted to the consumer.

For an example, consider a mythical ecommerce company which gathers customer data as part of financial services it provides. The company is subject to the Gramm-Leach-Bliley Act, and so must provide security safeguards for this data. It selects these safeguards based on the standard cost-benefit model, and decides it would not be cost-effective to implement, say, two-factor authentication for access to customer transactions data. It then experiences a security incident involving theft and fraudulent misuse of customer data, through an exploit which could have been prevented by two-factor authentication.

Is the company liable to the customers who have been harmed? I would say probably not, if the standard of care is set by Gramm-Leach-Bliley and the company performed a reasonably competent risk analysis whose data supported going with single- rather than two-factor authentication. (Yes, I know Gramm-Leach-Bliley doesn't provide for a cause of action, but trust me I could write up a complaint using the regulatory standard to set the negligence standard of care.) I'd also say it probably isn't exposed to regulatory penalties from the FTC, for the same reason.
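For illustration only, the mythical company's risk analysis might have looked something like the sketch below; every number is invented, and a defensible analysis would need real probability and loss estimates.

    # Toy cost-benefit comparison for the two-factor authentication decision.
    annual_breach_probability = 0.02       # assumed likelihood without two-factor authentication
    expected_loss_per_breach  = 400_000    # assumed fraud, response and notification costs
    probability_reduction     = 0.75       # assumed effectiveness of two-factor authentication
    annual_cost_of_2fa        = 250_000    # assumed tokens, support and integration costs

    expected_annual_loss  = annual_breach_probability * expected_loss_per_breach   # 8,000
    expected_loss_avoided = expected_annual_loss * probability_reduction           # 6,000

    print("Expected annual loss without 2FA:", expected_annual_loss)
    print("Expected loss avoided by 2FA:", expected_loss_avoided)
    print("Annual cost of 2FA:", annual_cost_of_2fa)
    # On these (invented) numbers the control costs far more than the loss it avoids -
    # which is exactly how the residual risk ends up on the consumer.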

To a consumer harmed by this incident, the fact that the company's cost-benefit analysis justified the decision to leave them exposed, and to let them absorb all the harm themselves, is probably not just cold-hearted, it's insulting. And if you were one of those consumers, you'd probably feel that way too.

The problem is that when we look at the world as individuals (not just consumers!) we don't do it through cost-benefit lenses, and (notwithstanding Milton Friedman, may he rest in peace) that's probably a good thing. We consider that we have our own rights and interests, and don't want to be harmed (materially) just to save someone else some money. And that's what being on the receiving end of standard model risk management looks and feels like, if you're the victim of residual risk-shifting.

I don't know quite what the solution is for this dichotomy of perspectives; I think it is quite common in many areas - I rather suspect it is the rule rather than the exception. I do know that it makes infosec public policy and legal standards inherently unstable, because use of the standard cost-benefit model means that there will unavoidably be consumers aggrieved at being (or at least feeling) victimized, and so there will be public policy pressure by privacy and victims' advocates to shift the risks back to the companies.

At the public policy level, I think this means we need to have robust discussions about what, exactly, we mean by "risk," and what the trade-offs might be. At the company level, I think we need to be very careful to think through how residual risks might be shifted by the risk management strategies we adopt, and whether that in itself is acceptable.

After all, the more infosec residual risk you shift to consumers, the greater the risk that you will create aggrieved plaintiffs and/or advocacy and pressure groups. In the final analysis, a low-cost infosec strategy just might wind up turning the residual risks you tried to shift into negative publicity, lawsuits and regulatory action . . .

Thursday, February 1, 2007

Vista: Secure enough for hospital life support?

I've been wondering for some time about standards for the stability and security of applications and operating systems supporting critical systems, like electronic medical records, and especially those applications providing decision support (e.g. computerized patient order entry). I've tended to punt via disclaimers about not using them for critical systems, which users ignore at their peril (and ignore them they do).

Maybe Vista will set a new standard? Billg seems to think so, with a number of (very valid) qualifiers. And we'll have to see what the EULA says . . .

Excerpt from an interview with Bill Gates, from Digg: http://www.our-picks.com/archives/2007/02/01/bill-gates-vista-is-so-secure-it-could-run-life-support-systems/

Journalist: Let’s imagine a hospital where life support systems are running Vista. Would you trust it with your life?

Bill Gates: . . . The answer to your question is that, absolutely, Vista is the most secure operating system we’ve ever done, and if it’s administered properly, absolutely, it can be used to run a hospital or any kind of mission critical thing. But it’s not as simple as saying “If you use Vista, that happens automatically”. The issues about patient records and who should be able to see them, the issue about setting up a network, so that authorized people can connect up to that hospital network, the issue about having backup power, so that the computer systems can run even if the generators go down. There are a lot of issues to properly set up that system, so that you have the redundancy and the security walls to make sure it fulfils that very critical function. So we are working with partners to raise their skills to make sure that when they get involved in an installation like that they can make it secure. So I feel better about Vista than any other operating system, but there’s a lot of things that need to be done well, and we’re certainly committed to step up and make sure these security issues are easier and better understood.

Monday, January 29, 2007

Security Incident Response Policy

The following policy is intended to set up a structure for security incident response for healthcare organizations. It takes into account HIPAA and other federal requirements, as well as the security incident response and information security laws of most US states. (It might well be consistent with all of them, but I've only had reason to check it against maybe three dozen states.)

Obviously it is designed for a larger organization, but should be readily adapted to smaller ones - the real point is to be sure to identify the tasks which have to be accomplished and designate accountable individuals to handle them. It also takes its place in a broader legal architecture (policy and procedural structure) which includes some defined terms and acronyms whose definitions I haven't bothered to include here - sorry! - but I think they should be easy to figure out by context.

____________________________________________________________________

© 2005 John R. Christiansen
Subject to Creative Commons License
Attribution Share-Alike 2.5 License

ORGANIZATION NAME Security Incident Response Policy
Information Security Policy No. __

1. Objectives of this Policy


The objectives of this Policy are to help assure:

  • The confidentiality, integrity and availability of Protected Information held by ORGANIZATION, including but not limited to protected health information as defined by the Health Insurance Portability and Accountability Act of 1996 and its implementing regulations ("HIPAA"); and

  • The operational integrity of ORGANIZATION's Information Systems.

2. Scope of Policy.


This Policy is intended to help accomplish its objectives by providing guidance to ORGANIZATION Workforce and Contractors, so that they will be able to:

  • Recognize events or circumstances which may indicate that a Security Incident is occurring or has occurred;

  • Know who is responsible for and authorized to respond to possible Security Incidents; and

  • Know the procedures which should be followed in responding to possible Security Incidents.

3. Recognizing Security Incidents


3.1 A Security Incident is any action or event which:

  • Provides an unauthorized person with access to and/or the ability to use, disclose, modify or destroy Protected Information; or

  • Permits an unauthorized person to modify the functioning of ORGANIZATION's Information Systems, including any equipment or device and any software application or operating system which is a component of an Information System; or

  • Permits a software application which is not authorized under the Acceptable Use policy to access or perform actions affecting Protected Information or the functioning of any Information System or component of an Information System.

3.2 ORGANIZATION Workforce and Contractors are only authorized to access, use, disclose, modify or destroy Protected Information, and to access, use and perform activities on ORGANIZATION information systems, in compliance with ORGANIZATION policies. Any action by a member of the Workforce or a Contractor which may provide access to or affect Protected Information and/or an Information System which is not in compliance with ORGANIZATION policy may therefore be considered a Security Incident.

3.3 Individuals and entities which are not members of the Workforce or Contractors are not authorized to have access to Protected Information or Information Systems without specific authorization by the CISO or other Authorized Security Officer. Any action which may provide access to or affect Protected Information and/or an Information System by an individual or entity who is not part of the Workforce or a Contractor and is not specifically otherwise authorized by an Authorized Security Officer, may therefore be considered a Security Incident.


3.4 Both direct and indirect actions which result in access to or affect Protected Information and/or Information Systems may be considered Security Incidents. Some possible types of Security Incident therefore include:


  • An employee or Contractor viewing Protected Information in a database the individual is not authorized to access under ORGANIZATION policy.

  • An employee or Contractor downloading software which is not permitted under the Acceptable Use Policy.

  • An unauthorized third party ("hacker") using a falsified user name and password to gain access to Information Systems.

  • An unauthorized third party seeking Information System access control or other information by pretending to be an individual authorized to obtain such information ("social engineering").

  • An email or other communication purporting to be from an authorized party seeking Protected Information or information potentially useful in obtaining Information System access ("phishing").

  • A software virus or worm ("malware") interfering with the functioning of personal computers which are part of an Information System.

This is not intended to be a comprehensive list of possible types of Security Incident.

4. Security Incident Priorities


Security Incidents shall be ranked as follows:

4.1 Categories

Critical:

  • Risks: Exposure to criminal penalties; exposure to major financial losses; potential threat to life, health or public safety; major damage to reputation or operations

  • Examples: Employee theft of Protected Information; disruption of or denial of service by Critical Systems, including clinical decision-support applications, financial reporting systems, and electronic medical records information; unauthorized access to security administrator applications or information

Moderate:

  • Risks: Exposure to minor financial losses; minor damage to reputation or operations

  • Examples: Employee views medical record of fellow employee without authorization; worm causes fraudulent mass emailing from infected systems; website is defaced

Minor:

  • Risks: Exposure to minimal financial losses; minimal or no damage to reputation or operations

  • Examples: "Phishing" email is received; employee accesses prohibited websites

Suspicious Activities:

  • Observations indicate possibility of past, current or threatened security incident, but may be consistent with authorized or non-harmful activities.

  • Examples: Access logs show a limited number of unsuccessful attempts by an authorized user; employee loiters near restricted work area beyond his authorization; user returns to workstation to find new application started without her authorization

5. Information Security Incident Response Team


The Information Security Incident Response Team ("ISIRT") will be responsible for response to all Critical Security Incidents, and shall develop procedures and delegate responsibilities for response to Moderate and Minor Security Incidents to the Security Team. ISIRT membership shall include Security Team staff, representatives of the principal departments of ORGANIZATION, and representatives of the CIO, the Law Department, Public Affairs and Human Resources. The ISIRT will be chaired by the CISO.

The ISIRT will be responsible for developing and maintaining incident response procedures, and will lead and coordinate responses to Incidents. The ISIRT shall establish contact procedures and responsibilities to ensure that appropriate individuals are contacted for response as needed. Members of the ISIRT shall be responsible for advising and assisting the Incident Leader in response to Critical Security Incidents. At all times, the ISIRT shall have appropriate members on-call to respond to incidents.


The ISIRT shall maintain relationships with and contact information for local, state, and/or federal law enforcement agencies, Internet Service Providers (ISPs), third party contractors, outside legal counsel, managed security providers and technology experts as the ISIRT deems appropriate or helpful.


6. Security Incident Reporting


All members of the Workforce and Contractors are required to report possible or suspected Security Incidents when they observe activities or records which reasonably seem to indicate their occurrence.

6.1 Observed Policy Violations.


Potential or suspected Security Incidents in which Workforce members and/or Contractors are observed acting contrary to policy shall be promptly reported to the Information Asset Supervisor responsible for oversight of the Protected Information and/or Information System element which is implicated, unless the Information Asset Supervisor, an Authorized Security Officer or a member of the Security Team is the individual suspected of acting contrary to policy.

6.2 Records of Incidents.


The Security Team shall be responsible for the review of audit trails, log files and other records of activity involving Protected Information and Information System usage.

6.3 Malicious Software.


All members of the Workforce and Contractors are required to immediately report to the Security Team the possible presence of software viruses and worms, and any spyware which appears to be present or to be affecting the performance of any personal computer or other device or application they are using.

6.4 Social Engineering.


All members of the Workforce and Contractors are required to immediately report to the Security Team any communication requesting Protected Information and/or information potentially useful in obtaining Information System access or use, which is received from any individual whose authority to obtain such information is not known and cannot be confirmed with the applicable Information Asset Supervisor. This requirement applies to all communications, whether face-to-face, by telephone or email, or otherwise.

6.5 Violations by Accountable Security Personnel.


Potential or suspected Security Incidents involving an Information Asset Supervisor, Authorized Security Officer or member of the Security Team shall be promptly reported to the [COMPLIANCE OFFICER/LEGAL OFFICER/COO/OTHER].

7. Responding to Security Incidents


All reports of potential or suspected Security Incidents shall be documented upon receipt. Any actions taken in response to a potential or suspected Security Incident shall be documented in the form provided by the ISIRT. The originals of all Security Incident documentation shall be kept by the ISIRT according to the Policies and Procedures Documentation Policy.


7.1 Malicious Software Incidents.


The Security Team shall respond to all Security Incidents involving malicious software according to the Malicious Software Policy, Policy No. __.


7.2 Information Asset Supervisors.

Upon observing or receiving a report of a potential or suspected Security Incident the Information Asset Supervisor shall:

  • Notify the ISIRT and cooperate with all ISIRT response requests.

  • Document the observation or report.

  • If the observation or report indicates the involvement of a Workforce member or Contractor, suspend the access of the individual(s) involved to the Information System pending investigation.

7.3 Critical Security Incidents.


An Incident Leader will be designated for each Critical Security Incident. The Incident Leader will be responsible for identifying and coordinating responsive actions; identifying and convening the members of the ISIRT necessary or appropriate for response to the incident; coordinating with the Law Department, Public Affairs and other internal parties; and reporting on the incident and responses to the Security Oversight Committee.


The Incident Leader shall consult promptly with legal counsel to determine whether the Security Incident may expose ORGANIZATION to material legal penalties and/or liabilities. If there appears to be a material risk of such penalties or liabilities, the Incident Leader shall promptly ask the ISIRT to consider whether the investigation and reporting should be conducted through or under the oversight of legal counsel. External consultants, technical experts and/or legal counsel may be retained for purposes of incident response upon authorization by the ISIRT.


During a Critical Security Incident response the ISIRT members will meet in a predetermined physical location, by teleconference and/or by electronic communication to ensure that all members are informed of their duties and tasks in connection to the response, and to avoid duplication of effort and loss of evidence.

7.4 Moderate and Minor Incidents

An Incident Investigator will be designated for each Moderate or Minor Security Incident. The Security Team shall provide the Incident Investigator with any additional investigative or analytical help which may be necessary or desirable. External resources may be obtained upon authorization by the ISIRT. Moderate and Minor Security Incidents shall be reported periodically to the ISIRT under procedures adopted by the ISIRT.


7.5 Security Incident Forensic Investigation.


The Incident Leader or Incident Investigator will supervise and work with Security Team analysts and investigators to determine the extent of damage and the effects of the Security Incident on systems, data, and operations, as well as the threats and vulnerabilities which caused or facilitated the occurrence of the incident.


Information gathered in the investigation of Security Incidents shall be developed and preserved to the greatest extent possible as potential evidence admissible in court in case it is needed in legal proceedings. Whenever possible, any individuals or entities which may be liable for harm caused by the incident shall be identified, and the ISIRT may seek to have damages quantified for possible use in administrative or legal proceedings.


7.6 Suspicious Activities.


ORGANIZATION Workforce and Contractors will report Suspicious Activities to the Security Team, which will publish contact information and maintain reporting functions for such reporting.


The Security Team will investigate any such report appropriately, including followup interviews and log and audit trail reviews.


7.7 Audit Logs.


The Security Team will be responsible for reviewing audit trails and logs throughout the Information Systems. Such reviews will be conducted with respect to a given device or application whenever a Security Incident or Suspicious Activities are reported which may involve unauthorized access to the device or application.


The Security Team shall also review all audit trails and logs pertaining to Critical Systems no less frequently than _____________, and shall review samples of audit trails and logs pertaining to non-Critical Systems no less frequently than ___________, for possible evidence of Security Incidents or Suspicious Activities.


Review may be expedited by use of appropriate analysis tools. Scheduling and sampling procedures shall not be disclosed in advance to personnel not directly involved in the review. Information and observations obtained in the course of Security Team investigations and reviews shall be immediately assessed for indications of the reasonable possibility of the actual or threatened occurrence of a Security Incident or Incidents, using the prudent professional judgment of Security Team staff.

In the event Security Team staff determines that there is a reasonable possibility of an actual or threatened Security Incident they will report this determination to the ISIRT, which will respond in accordance with this Policy. A Security Team determination that upon investigation or review there is not a reasonable possibility of an actual or threatened Security Incident shall be logged and included in the Security Team’s report to the CISO.

7.8 Security Incident Response Times.

Upon receipt of information indicating the possible occurrence of a Security Incident, the ISIRT shall assign a preliminary rank to the Security Incident and proceed under the following timetable:

Critical Incidents:


  • Assign Incident Leader within _____

  • Control access to all relevant devices and records within _____

  • Notify ISIRT members within _________

  • Commence investigation within __________

  • First report to ISIRT on probable scope of harm and continuing risk within _____

Moderate:


  • Assign Incident Investigator within _________

  • Commence review of all relevant devices and records within ________

  • Report on probable scope of harm and continuing risk to Information Asset Supervisor within __

Minor:


  • Assign Incident Investigator within _________

  • Commence review of all relevant devices and records within ________

  • Report on probable scope of harm and continuing risk to Information Asset Supervisor within __

Suspicious Activity:


  • Security Team conducts preliminary review within ________

  • Security Team review of applicable logs/audit trails within ___________

  • Security Team interview(s) with relevant personnel within ________

  • Security Team determination whether to refer to ISIRT within ________

8. Cross-References

Acceptable Use Policy, Policy No. __

Authorized Security Officers: The authority of the Chief Information Security Officer ("CISO") and other accountable security personnel is set forth in Policy No. __

Policies and Procedures Documentation, Policy No. __

Security Team: The responsibility, authority and organizational structure of the Security Team are set forth in Policy No. __

"Accountable Security Personnel" is defined in Policy No. __

Contractor" is defined in Policy No. __

"Information System" is defined in Policy No. __

"Information Asset Supervisor" is defined in Policy No. __

"Protected Information" is defined in Policy No. __

"Workforce" is defined in Policy No. __