Tuesday, February 13, 2007

The Role of Legal Counsel in Information Security Risk Assessment and Strategic Information Security Decisions

Legal counsel can and should play an important role in information security legal compliance and risk management. While the implementation of many security safeguards requires substantial technical knowledge, the development and selection of specific security policies, procedures and technical requirements for purposes of legal compliance and risk management require the integration of that technical knowledge with legal interpretation and strategic risk management insight.
Specification of Legal Security Issues.
Legal requirements for security compliance, whether under HIPAA, Gramm-Leach-Bliley, emerging common law or almost any other law, are organizational obligations, not technical specifications. (The California Database Protection Act and comparable laws, which create incentives for encryption of personal information stored in databases, may be an exception. Even in this case, however, the law does not specify the type or strength of the encryption, or make encryption mandatory.) Any given organization may be subject to one or more sets of legal security requirements, depending on the kinds of activities it engages in and the jurisdictions where it does business.
> Legal Task: Identification of security laws applicable to organization, based on jurisdictions and activities.
As a rule, security legislation and regulations do not have any “safe harbors,” so there is no security control or set of security safeguards whose adoption can be guaranteed to make an organization compliant. Rather, these laws require organizations to assess and manage information security risks, to a degree usually framed as “reasonable and appropriate,” or as applicable to “reasonably foreseeable risks.” Unfortunately, “risk” is a multi-dimensional concept, a factor which should always be, but too often is not, taken into account in security risk assessment and management.
The usual formulation of information security objectives, which are the objectives against which security risks are determined, is the “CIA triad,” for “confidentiality, integrity and availability” – that is, the extent to which a given asset is protected against unauthorized viewing, use or alteration, and remains accessible and usable when needed. In some settings, such as financial system assessment, the additional objective of “accountability,” meaning the ability to strongly identify participants in transactions, may also be a key objective.
These security objectives are frequently in conflict; for example, any process which protects confidentiality by making asset access more difficult will tend to reduce availability. When security objectives conflict, their resolution is a matter for organizational policy.
> Legal Task: Ensure security objectives of organization are consistent with legal obligations of the organization.
Information Security Risk Assessment.
The foundational process for information security is risk assessment. In this process an appropriate professional or team of professionals undertakes a structured review of the security controls and safeguards used in connection with an organization’s processes, physical facilities and technical systems used to receive, store, process and transmit legally-protected data.
The results of a risk assessment may be used (1) to identify gaps or weaknesses which might put the protected data at risk, supporting the recommendation or development of appropriate new or supplemental safeguards and controls to fill the gaps or mitigate the weaknesses; or (2) to confirm an organization’s compliance with security standards. The former type of assessment is frequently called a security “gap analysis,” while the latter is sometimes, but not always, referred to as a security “audit.”
From a lawyer’s point of view both types of assessment are factual investigations, and assessment reports are (or should be) findings of fact. It should be noted, however, that assessments sometimes purport to go beyond findings of fact, to conclusions of law; e.g., that a given organization is or is not “HIPAA compliant.” This is understandable when the objective of the assessment is to determine compliance, but the actual determination of whether an organization is in compliance with the law is something only a lawyer is trained and authorized to make. Quite apart from issues of the unauthorized practice of law, the organization might well get an incorrect answer about its compliance status.
> Legal Task: Help develop risk assessment scope of work to ensure focus on appropriate objectives and fact-finding limitations.
A compliance assessment therefore should either be a joint lawyer/security professional project or a two-stage project in which the legal implications of the security professional findings are determined by a lawyer. On the fact-finding level there are a number of possible risk assessment methodologies available, none of which are required as a matter of law for private or state governmental organizations.
Federal agencies are supposed to use the risk assessment methodologies published by the National Institute of Standards and Technology (“NIST”), which has been influential in federal security regulation development and therefore should be taken into account in assessing compliance with federal regulations. A very few industries have developed or are developing their own appropriate methodologies, especially the banking and energy sectors, and the major consulting firms tend to have proprietary methodologies.
Generally, any information security risk assessment will start with an identification of (1) security “assets,” (2) “threats” to those assets, and (3) operational and system “vulnerabilities” to identified threats. While precise definitions vary, generally these terms refer to the following:
• An “asset” may be information, or an operational resource such as a software application, bandwidth, memory, or a networked device or other equipment, which the organization is legally obliged to protect, which is materially necessary to operations, or which is otherwise of value to the organization.
• “Threats” are the various agencies which may harm or interfere with assets, including human threats such as hackers and malicious insiders; environmental threats such as facility fires, power outages and burst water pipes; natural threats such as floods, earthquakes, and the like; and technical threats such as computer viruses, worms and spyware (arguably a subset of human threats, since they are of human origin).
• “Vulnerabilities” are those operational and system characteristics which make it possible for specified threats to harm or interfere with specified assets. Some vulnerabilities may be obvious and easily resolved, as with implementation of a firewall to prevent unauthorized external access to a network. Others may be the result of normal or even generally beneficial features of a process or system element, as where remote database access to support telecommuting creates unavoidable (though to some extent reducible) risks that an unauthorized individual will “spoof” an authorized user’s identity to gain network access.
Once assets, threats and vulnerabilities have been identified, the next step in risk assessment is “impact” and “control” analysis, leading to a “risk determination.” Risk is typically treated as a function of the probability that a given threat will exploit existing vulnerabilities to harm a given asset, and of the severity of the resulting impact. The finding at this stage is sometimes called the “inherent risk.”
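To make the arithmetic of this step concrete, here is a minimal, purely illustrative Python sketch of an inherent-risk calculation, assuming the kind of ordinal likelihood and impact scales used in methodologies such as NIST SP 800-30; the asset, threat, scale values and thresholds are hypothetical choices made for illustration, not requirements of any law or methodology.

```python
from dataclasses import dataclass

# Ordinal scales (1 = low, 3 = high), in the general style of NIST SP 800-30.
LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
IMPACT = {"low": 1, "moderate": 2, "high": 3}

@dataclass
class RiskFinding:
    asset: str          # e.g. a database of legally-protected records
    threat: str         # e.g. external attacker, burst water pipe
    vulnerability: str  # the condition that lets the threat reach the asset
    likelihood: str     # probability the threat exploits the vulnerability
    impact: str         # severity of harm if it does

    def inherent_risk(self) -> int:
        """Inherent risk before additional controls: likelihood x impact."""
        return LIKELIHOOD[self.likelihood] * IMPACT[self.impact]

def risk_level(score: int) -> str:
    """Map a numeric score back to a qualitative rating."""
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

# Hypothetical example: remote database access supporting telecommuting.
finding = RiskFinding(
    asset="customer health records database",
    threat="identity spoofing by an unauthorized outsider",
    vulnerability="remote access with single-factor authentication",
    likelihood="moderate",
    impact="high",
)
score = finding.inherent_risk()
print(f"Inherent risk: {score} ({risk_level(score)})")
```

The only point of the sketch is that risk determination combines likelihood and impact; the actual scales, thresholds and scoring rules are policy choices the organization has to make and document.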
Risk assessment can be a difficult, burdensome and uncertain process. In large or complex organizations and/or systems it may only be practical to assess a sample of the processes, facilities and/or systems, though choosing reasonably representative samples may be problematic.
Assets are usually readily identifiable, but identifying them requires defining the boundaries of the processes and systems under assessment, and inventorying the information, devices and equipment within those boundaries. It is also important to try to assign values to assets, and in this connection it should be noted that the term “asset” has something of a specialized meaning in the risk assessment context.
Ordinarily, assets have only a positive value, such as the market price at which they can be sold, or their value to the organization in operational support, which might be measured by the cost of replacement. For risk assessment purposes, however, the liability or penalty “value” of an asset, meaning the organization’s exposure to liabilities and/or penalties due to its loss, disclosure or misuse, should also be estimated.
> Legal Task: Help identify assets organization is legally obliged to protect, e.g. legally-protected information, licensed software and trade secrets, etc.
> Legal Task: Estimation of liabilities and/or penalties associated with loss, disclosure or misuse of assets identified for risk assessment purposes.
Like assets, most types of threats are usually identifiable at a categorical level, though some, especially threats caused by malicious software, are constantly evolving. However, the identification of the specific threats applicable to a given asset requires a detailed review of the operational environment in which the asset is kept, used and/or transferred.
Threats can generally be categorized as follows:
• Human threats, from insiders or outsiders (e.g. hackers), who may unintentionally or deliberately access, use, modify, transfer or harm assets.
• Physical facility threats, such as power failures, fires, burst water pipes and other events harming the facilities or equipment used in connection with the assets.
• Environmental threats, such as floods, earthquakes and tornados, which may also cause harm to facilities or equipment.
• Technical threats such as computer worms, viruses and spyware (which might be considered a subcategory of human threats since humans create them), as well as system-related issues such as application instability, etc.
Vulnerabilities are also functions of the operational environment, and are identified by the known characteristics of the operating environment, including those of the specific technical systems, buildings and equipment, as well as those of human beings in general. Whether or not a given characteristic is a vulnerability depends entirely upon the assets and threats presented in the given environment.
An assessment also inventories existing security safeguards and controls (two overlapping terms, in this context sharing the meaning of protections against potentially harmful events). These are frequently categorized as administrative, physical and technical, though there is an emerging recognition of governance controls as an important category as well. These categories break out as follows:
• Administrative safeguards are the policies and procedures used to manage operational processes and human activities involving or pertaining to assets and vulnerabilities. These would include policies and procedures pertaining to hiring and employment, authorization for and management of asset access and use, etc.
• Physical safeguards are the policies, procedures and physical requirements applicable to the buildings and equipment relevant to asset management, such as locked-door requirements, key issuance, fire suppression, disaster recovery plans, portable device (e.g. laptop) protection policies, etc.
• Technical safeguards are the policies, procedures and system requirements controlling access to and use of software and information in devices and/or on the network. Technical safeguards include system configuration requirements, user identification and authentication procedures and processes (e.g. password issuance and management), malicious software screening and disposition, encryption of data, etc.
• Governance controls constitute the policies and procedures used to provide security oversight. While it has long been recognized that factors such as demonstrated executive commitment and reporting, accountable security officers and appropriate security training are essential for effective security, governance controls have not tended to be a separate subject of security assessment (though some aspects, such as training, are sometimes assessed as part of administrative safeguards). With the emergence of security as a regulatory compliance issue, governance control assessment is at least prudent if not necessary to avoid penalties and liabilities, including penalties or liabilities applicable to individual officers or directors responsible by law for organizational governance.
The effectiveness of policies and procedures is in many cases at least partially a legal question, as where employees are supposed to be subject to discipline for policy violations, or oversight policies are implemented to avoid or minimize liability and penalty exposures.
> Legal Task: Review legal effectiveness of policy and contractual documents used as security safeguards and controls.
The most difficult step in risk assessment may be risk determination, since this depends upon probability information which may not be available, or if available may not be reliable. There is currently no central repository of threat or security incident information, and no mandatory reporting, so to date there is no robust information on the incidence of most threats.
Some security professionals argue that certain vulnerabilities are so well known and so easily corrected (such as the use of weak passwords) that “due care” requires their correction. This suggests that some specific safeguards may be required, in at least some specific settings, as a matter of law. There is little or no specific law on this point, so the identification of such safeguards would seem to be a matter for determination by properly qualified security experts.
> Legal Task: Work with security professionals to identify safeguards which may be required to meet the applicable standard of care, and basis for such identification.
Impact information may be more available but more problematic, since assessment according to different security objectives (as discussed below) may lead to different impact outcomes. For example, electronic health records (“EHR”) systems are used to store and process personal health information, which is required to be protected under HIPAA and is accorded highly confidential status under not only HIPAA but a variety of other laws. At the same time, an EHR may be used to support critical clinical care, so that a failure of availability might cause erroneous treatment decisions leading to a patient’s serious harm or even death.
Note that in both cases the impact determination is based on a projection of legal exposure. In this case, the differential impacts are that a failure to provide confidentiality protections judged adequate in a HIPAA administrative enforcement proceeding might lead to a few thousand dollars in civil penalties, while a treatment error causing a patient’s death could lead to a multimillion dollar negligence judgment.
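As a purely illustrative back-of-the-envelope comparison, here is how that expected-exposure arithmetic might be sketched in Python; the probabilities and dollar figures are hypothetical and are not drawn from any actual case, assessment or enforcement data.

```python
# Hypothetical, illustrative figures only; real probabilities and exposures
# would have to come from the assessment and from legal analysis.
scenarios = {
    "confidentiality failure (HIPAA civil penalty)": {
        "annual_probability": 0.10,   # assumed chance of an enforceable disclosure incident
        "exposure": 25_000,           # assumed penalty exposure in dollars
    },
    "availability failure (treatment error, negligence judgment)": {
        "annual_probability": 0.005,  # much rarer event...
        "exposure": 5_000_000,        # ...but far larger potential judgment
    },
}

for name, s in scenarios.items():
    expected = s["annual_probability"] * s["exposure"]
    print(f"{name}: expected annual exposure ~ ${expected:,.0f}")
```

Even with these made-up numbers, the much less likely availability failure dominates the expected exposure, which is why impact analysis cannot be separated from legal analysis of the liabilities attached to each security objective.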
This risk assessment step therefore requires legal insight and analysis. And this scenario also demonstrates the reason why a security risk assessment should only be undertaken with a clear understanding of the organization’s risk management strategies and tolerance, and the security objectives of processes and systems under assessment.
> Legal Task: Review and analyze legal implications of risk assessment findings, including alternative liability and penalty exposures under different scenarios.
Strategic Information Security Decision-Making.
Security risk management and compliance decisions will always be subject to second-guessing in hindsight, by regulators or counsel for parties alleging harm caused by a security breach. The only effective response to this is to implement appropriate security risk assessment and management diligently and in good faith.
The information security legal compliance process therefore resembles the processes used by organizational fiduciary officers in compliance with the corporate “business judgment rule,” and to minimize organizational and officer exposures to criminal penalties under the Federal Sentencing Guidelines. Such processes require informed executive oversight and careful documentation. Advice from qualified experts and legal counsel can help demonstrate due diligence, and legal counsel can be helpful in developing the strategy for properly documenting the process for use as defensive evidence, if needed.
> Legal Task: Assist in development of organizational oversight policies and procedures for security compliance oversight and risk management.
> Legal Task: Ensure adequacy of security compliance documentation for evidentiary purposes.
Legal counsel may also be helpful in making hard choices, as where a technical solution is available but expensive and a policy control is under consideration as an alternative. A good security consultant can make appropriate findings identifying security vulnerabilities, and can recommend alternative solutions, but the organization’s accountable executives must make the decision whether or not the risks associated with the policy control alternative are acceptable.
This is fundamentally a governance-level decision, which should be made in accordance with the organization’s strategies for managing its full portfolio of risks – financial, operational, legal, and so on – which includes but is not limited to information security risks. At the organizational level there are four basic risk management strategies, any or all of which may have implications for security management:
• Risk avoidance, a strategy under which an organization determines that its exposure is simply too great in performing some specified activity, and avoids engaging in that activity. For example, a bank might find the lower costs of offshore processing of customer information attractive, but conclude that the lack of adequate oversight of and legal recourse against offshore processors for failing to protect the information makes this option unacceptable.
• Risk assumption is a strategy under which risks are understood and deliberately accepted, as an informed policy decision. Since risks can never be reduced to zero as a practical matter, risk assumption is an inevitable element of risk management. If risks to be assumed can be accurately projected, it may be possible to reserve against them. Any organization which fails to assess its risks is essentially adopting a strategy of assuming all risks by default.
• An uncommon strategy which may be becoming more available is risk transfer, under which the exposed party obtains some coverage for its own risk exposure by having a second party assume some or perhaps all of the risk. Insurance, where available, is one example of security risk transfer; so is an indemnification clause in a contract with a party hosting or otherwise performing services affecting assets.
• The most common strategy, and sometimes the only one recognized as a security strategy by the less sophisticated, is risk reduction. Risk reduction includes the implementation of whatever policies, procedures and technical solutions may be necessary or desirable to reduce identified risks to a level at which they can be assumed.
The precise mix of strategies an organization uses depends in part on what is available, both practically and as a matter of law. Some risks are inherent in an organization’s mission and cannot be avoided; for example, fraud is an inherent risk for financial services, and medical error is an inherent risk for health care providers. And risk transfer, in particular, may or may not be an option, depending on the availability of insurance or the ability to transfer risk to other parties by indemnification.
> Legal Task: Assist in negotiation of insurance coverage and/or contracts transferring risk, where available.
The bottom line on an organization’s security strategies depends upon its security risk tolerance. While there have been arguments that there is or can be a “return on investment” from security activities, security is usually perceived as a zero-sum game: Any resources invested in security are taken away from other possible uses. The organization, therefore, must make a policy decision about how much it is willing to allocate to security, based on the availability of resources and the security-related risks it is willing to assume.
This kind of decision must be informed by, but cannot be determined by, security risk assessment findings. Information security legal compliance and risk management is just one element of the portfolio of risks any organization must manage. An over-allocation of resources to security which harmed the organization’s ability to fulfill its mission, for example, could be more detrimental than many security events.
Deciding whether or not a given level of security risk is tolerable therefore depends less on an understanding of specific security threats and vulnerabilities than on an understanding of their implications for the organizational mission. Potential financial, operational and reputational harms and legal penalties associated with security risks must be balanced against potential harms associated with their prevention, and there is no a priori formula for striking such a balance. Decisions like this are in the final analysis the fiduciary responsibility of the officers and board of the organization, and the role of both lawyers and security professionals at this level is to provide those officers and directors with the information and professional advice they need to make those decisions.
> Legal Task: Provide legal information and counsel to executive officers and board in the strategic management of the organization.
Conclusion.
Lawyers should play an active role at all levels of the information security risk assessment process, from defining the scope of the assessment and determining the legal effects of policies and procedures under assessment, through interpretation of the legal implications of an assessment to advise the officers who must decide what it means to the organization. Technology-dependent organizations should therefore identify (or develop) and make use of attorneys who understand how to work with information security concepts, documentation and professionals, to help them appropriately manage their information security compliance obligations and security-related risks. Conversely, lawyers serving such organizations should develop appropriate expertise, or identify and make use of outside counsel when dealing with potentially important security issues. Either way, this means involving legal counsel in information security risk assessment and management processes and procedures.

Tuesday, February 6, 2007

InfoSec Risk-Shifting and Consumers

One of my pet peeves (I have quite a few) is the way we tend to use the term "risk management" as if it had a generally accepted meaning everybody understands. For infosec and most other IT professional purposes, risk generally means a "hazard" associated with IT usage, described more formally as a function of the probability that an event with negative consequences will occur and the potential severity of the resulting harm.

From an IT and infosec professional's POV, "risk management" is what you do to reduce the likelihood of an identified, potential negative event or class of events, its harmful consequences, or both. Safeguards and controls are selected depending on whether their associated cost is reasonably proportionate to the expected benefits in reducing risks.
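One common way to formalize that proportionality test is an annualized loss expectancy (ALE) comparison: a safeguard is adopted when the expected annual loss it avoids exceeds its annual cost. The following minimal Python sketch uses hypothetical figures and is only one way the calculation is commonly framed, not a statement of any legal or regulatory standard.

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """ALE = single loss expectancy x annualized rate of occurrence."""
    return single_loss * annual_rate

def safeguard_is_worthwhile(ale_before: float, ale_after: float,
                            annual_safeguard_cost: float) -> bool:
    """Adopt the safeguard if the expected loss it avoids exceeds its cost."""
    return (ale_before - ale_after) > annual_safeguard_cost

# Hypothetical figures for a single identified risk.
ale_before = annualized_loss_expectancy(single_loss=200_000, annual_rate=0.05)  # 10,000
ale_after = annualized_loss_expectancy(single_loss=200_000, annual_rate=0.01)   #  2,000
print(safeguard_is_worthwhile(ale_before, ale_after, annual_safeguard_cost=15_000))  # False
```

In this made-up example the safeguard is rejected because it costs more than the expected loss it would avoid, and the residual risk falls on whoever actually suffers the event, which is exactly the risk-shifting discussed below.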

This concept set is a little fuzzy around the edges, but is generally accepted as a viable algorithm for IT management and infosec. (I actually don't think this algorithm works all that well in these areas either, and I think I've got a solution for that, but that's a topic for a future post.) However, I don't think this particular algorithm is recognized and accepted by one very important IT stakeholder group: Consumers.

Consumer advocates will not find the infosec/IT professional cost-benefit model very attractive for a simple reason: It generally shifts residual risks to consumers. Any cost-benefit-based risk management strategy will inevitably wind up determining that some risks are not worth the cost of elimination. If this model is the legal standard of care - which it in fact is under HIPAA, GLBA and other laws and standards - that means that an organization which has decided not to protect against such risks is not liable if a negative event in that risk range occurs. If the individual(s) affected by a negative event have no recourse, they have assumed the risk; in other words, the residual risks have been shifted to the consumer.

For an example, consider a mythical ecommerce company which gathers customer data as part of financial services it provides. The company is subject to the Gramm-Leach-Bliley Act, and so must provide security safeguards for this data. It selects these safeguards based on the standard cost-benefit model, and decides it would not be cost-effective to implement, say, two-factor authentication for access to customer transactions data. It then experiences a security incident involving theft and fraudulent misuse of customer data, through an exploit which could have been prevented by two-factor authentication.

Is the company liable to the customers who have been harmed? I would say probably not, if the standard of care is set by Gramm-Leach-Bliley and the company performed a reasonably competent risk analysis whose data supported going with single- rather than two-factor authentication. (Yes, I know Gramm-Leach-Bliley doesn't provide a private cause of action, but trust me, I could write up a complaint using the regulatory standard to set the negligence standard of care.) I'd also say it probably isn't exposed to regulatory penalties from the FTC, for the same reason.

If you're one of the consumers harmed by this incident, the fact that the company's cost-benefit analysis justified the decision to leave you exposed, and to let you absorb all the harm, probably seems not just cold-hearted but insulting. And most of us, in that position, would feel the same way.

The problem is that when we look at the world as individuals (not just consumers!) we don't do it through cost-benefit lenses, and (notwithstanding Milton Friedman, may he rest in peace) that's probably a good thing. We consider that we have our own rights and interests, and don't want to be harmed (materially) just to save someone else some money. And that's what being on the receiving end of standard model risk management looks and feels like, if you're the victim of residual risk-shifting.

I don't know quite what the solution is for this dichotomy of perspectives; I think it is quite common in many areas - I rather suspect it is the rule rather than the exception. I do know that it makes infosec public policy and legal standards inherently unstable, because use of the standard cost-benefit model means that there will unavoidably be consumers aggrieved at being (or at least feeling) victimized, and so there will be public policy pressure by privacy and victims' advocates to shift the risks back to the companies.

At the public policy level, I think this means we need to have robust discussions about what, exactly, we mean by "risk," and what the trade-offs might be. At the company level, I think we need to be very careful to think through how residual risks might be shifted by the risk management strategies we adopt, and whether that in itself is acceptable.

After all, the more infosec residual risk you shift to consumers, the greater the risk you will create aggrieved plaintiffs and/or advocacy and pressure groups. In the final analysis, a low-cost infosec strategy just might wind up turning the residual risks you tried to shift into negative publicity, lawsuits and regulatory action . . .

Thursday, February 1, 2007

Vista: Secure enough for hospital life support?

I've been wondering for some time about standards for the stability and security of applications and operating systems supporting critical systems, like electronic medical records, and especially those applications providing decision support (e.g. computerized patient order entry). I've tended to punt via disclaimers about not using them for critical systems, which users ignore at their peril (and ignore them they do).

Maybe Vista will set a new standard? Billg seems to think so, with a number of (very valid) qualifiers. And we'll have to see what the EULA says . . .

Excerpt from an interview with Bill Gates, from Digg: http://www.our-picks.com/archives/2007/02/01/bill-gates-vista-is-so-secure-it-could-run-life-support-systems/

Journalist: Let’s imagine a hospital where life support systems are running Vista. Would you trust it with your life?

Bill Gates: . . . The answer to your question is that, absolutely, Vista is the most secure operating system we’ve ever done, and if it’s administered properly, absolutely, it can be used to run a hospital or any kind of mission critical thing. But it’s not as simple as saying “If you use Vista, that happens automatically”. The issues about patient records and who should be able to see them, the issue about setting up a network, so that authorized people can connect up to that hospital network, the issue about having backup power, so that the computer systems can run even if the generators go down. There are a lot of issues to properly set up that system, so that you have the redundancy and the security walls to make sure it fulfills that very critical function. So we are working with partners to raise their skills to make sure that when they get involved in an installation like that they can make it secure. So I feel better about Vista than any other operating system, but there’s a lot of things that need to be done well, and we’re certainly committed to step up and make sure these security issues are easier and better understood.