Monday, January 29, 2007

Security Incident Response Policy

The following policy is intended to set up a structure for security incident response for healthcare organizations. It takes into account HIPAA, state security incident response laws, other federal requirements, and the information security laws of most US states. (It might well be consistent with all of them, but I've only had reason to check it against maybe three dozen states.)

Obviously it is designed for a larger organization, but should be readily adapted to smaller ones - the real point is to be sure to identify the tasks which have to be accomplished and designate accountable individuals to handle them. It also takes its place in a broader legal architecture (policy and procedural structure) which includes some defined terms and acronyms whose definitions I haven't bothered to include here - sorry! - but I think they should be easy to figure out from context.

© 2005 John R. Christiansen
Subject to Creative Commons License
Attribution Share-Alike 2.5 License

ORGANIZATION NAME Security Incident Response Policy
Information Security Policy No. __

1. Objectives of this Policy


The objectives of this Policy are to help assure:

  • The confidentiality, integrity and availability of Protected Information held by ORGANIZATION, including but not limited to protected health information as defined by the Health Insurance Portability and Accountability Act of 1996 and its implementing regulations ("HIPAA"); and

  • The operational integrity of ORGANIZATION's Information Systems.

2. Scope of Policy


This Policy is intended to help accomplish its objectives by providing guidance to ORGANIZATION Workforce and Contractors, so that they will be able to:

  • Recognize events or circumstances which may indicate that a Security Incident is occurring or has occurred;

  • Know who is responsible for and authorized to respond to possible Security Incidents; and

  • Know the procedures which should be followed in responding to possible Security Incidents.

3. Recognizing Security Incidents


3.1 A Security Incident is any action or event which:

  • Provides an unauthorized person with access to and/or the ability to use, disclose, modify or destroy Protected Information; or

  • Permits an unauthorized person to modify the functioning of ORGANIZATION's Information Systems, including any equipment or device and any software application or operating system which is a component of an Information System; or

  • Permits a software application which is not authorized under the Acceptable Use Policy to access or perform actions affecting Protected Information or the functioning of any Information System or component of an Information System.

3.2 ORGANIZATION Workforce and Contractors are only authorized to access, use, disclose, modify or destroy Protected Information, and to access, use and perform activities on ORGANIZATION Information Systems, in compliance with ORGANIZATION policies. Any action by a member of the Workforce or a Contractor which affects or may provide access to Protected Information and/or an Information System, and which is not in compliance with ORGANIZATION policy, may therefore be considered a Security Incident.

3.3 Individuals and entities which are not members of the Workforce or Contractors are not authorized to have access to Protected Information or Information Systems without specific authorization by the CISO or other Authorized Security Officer. Any action which may provide access to or affect Protected Information and/or an Information System by an individual or entity who is not part of the Workforce or a Contractor and is not specifically otherwise authorized by an Authorized Security Officer, may therefore be considered a Security Incident.


3.4 Both direct and indirect actions which result in access to or affect Protected Information and/or Information Systems may be considered Security Incidents. Some possible types of Security Incident therefore include:


  • An employee or Contractor viewing Protected Information in a database the individual is not authorized to access under ORGANIZATION policy.

  • An employee or Contractor downloading software which is not permitted under the Acceptable Use Policy.

  • An unauthorized third party ("hacker") using a falsified user name and password to gain access to Information Systems.

  • An unauthorized third party seeking Information System access control or other information by pretending to be an individual authorized to obtain such information ("social engineering").

  • An email or other communication purporting to be from an authorized party seeking Protected Information or information potentially useful in obtaining Information System access ("phishing").

  • A software virus or worm ("malware") interfering with the functioning of personal computers which are part of an Information System.

This is not intended to be a comprehensive list of possible types of Security Incident.

4. Security Incident Priorities


Security Incidents shall be ranked as follows:

4.1 Categories

Critical:

  • Risks: Exposure to criminal penalties; exposure to major financial losses; potential threat to life, health or public safety; major damage to reputation or operations

  • Examples: Employee theft of Protected Information; disruption of or denial of service by Critical Systems, including clinical decision-support applications, financial reporting systems, and electronic medical records information; unauthorized access to security administrator applications or information

Moderate:

  • Risks: Exposure to minor financial losses; minor damage to reputation or operations

  • Examples: Employee views medical record of fellow employee without authorization; worm causes fraudulent mass emailing from infected systems; website is defaced

Minor:

  • Risks: Exposure to minimal financial losses; minimal or no damage to reputation or operations

  • Examples:"Phishing" email is received; employee accesses prohibited websites

Suspicious Activities:

  • Observations indicate possibility of past, current or threatened security incident, but may be consistent with authorized or non-harmful activities.

  • Examples: Access logs show limited number of unsuccessful attempts by authorized user; employee loiters near restricted work area beyond his authorization; user returns to workstation to find new application started without her authorization
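
For organizations that track incidents electronically, the ranking above can also be reflected in intake tooling. The following is a minimal, purely hypothetical sketch (in Python) of how a reporting form or ticketing script might suggest a preliminary rank from a free-text description. Only the category names mirror this section; the class, keyword table, and matching logic are illustrative assumptions rather than part of the policy, and the ISIRT's judgment always controls the final ranking.

    # Hypothetical sketch: representing the Section 4 categories in a triage tool.
    # Only the category names come from the policy; everything else is illustrative.
    from enum import Enum

    class IncidentSeverity(Enum):
        CRITICAL = 1
        MODERATE = 2
        MINOR = 3
        SUSPICIOUS_ACTIVITY = 4

    # Assumed keyword-to-severity hints an intake form might use; the authoritative
    # ranking is always made by the ISIRT, not by this table.
    SEVERITY_HINTS = {
        "theft of protected information": IncidentSeverity.CRITICAL,
        "denial of service on critical system": IncidentSeverity.CRITICAL,
        "unauthorized record access by employee": IncidentSeverity.MODERATE,
        "website defacement": IncidentSeverity.MODERATE,
        "phishing email received": IncidentSeverity.MINOR,
        "unusual failed logins": IncidentSeverity.SUSPICIOUS_ACTIVITY,
    }

    def preliminary_rank(description: str) -> IncidentSeverity:
        """Suggest a preliminary severity from a free-text report description."""
        text = description.lower()
        for phrase, severity in SEVERITY_HINTS.items():
            if phrase in text:
                return severity
        # Default to Suspicious Activity so a human reviewer still looks at it.
        return IncidentSeverity.SUSPICIOUS_ACTIVITY

A real intake tool would use the organization's own terminology and route anything ambiguous to a human reviewer rather than rely on keyword matching.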

5. Information Security Incident Response Team


The Information Security Incident Response Team ("ISIRT") will be responsible for response to all Critical Security Incidents, and shall develop procedures and delegate responsibilities for response to Moderate and Minor Security Incidents to the Security Team. ISIRT membership shall include Security Team staff, representatives of the principal departments of ORGANIZATION, and representatives of the CIO, the Law Department, Public Affairs and Human Resources. The ISIRT will be chaired by the CISO.

The ISIRT will be responsible for developing and maintaining incident response procedures, and will lead and coordinate responses to Security Incidents. The ISIRT shall establish contact procedures and responsibilities to ensure that appropriate individuals are contacted for response as needed. Members of the ISIRT shall be responsible for advising and assisting the Incident Leader in response to Critical Security Incidents. At all times, the ISIRT shall have appropriate members on-call to respond to incidents.


The ISIRT shall maintain relationships with and contact information for local, state, and/or federal law enforcement agencies, Internet Service Providers (ISPs), third party contractors, outside legal counsel, managed security providers and technology experts as the ISIRT deems appropriate or helpful.


6. Security Incident Reporting


All members of the Workforce and Contractors are required to report possible or suspected Security Incidents when they observe activities or records which reasonably seem to indicate their occurrence.

6.1 Observed Policy Violations.


Potential or suspected Security Incidents in which Workforce members and/or Contractors are observed acting contrary to policy shall be promptly reported to the Information Asset Supervisor responsible for oversight of the Protected Information and/or Information System element which is implicated, unless the Information Asset Supervisor, an Authorized Security Officer or a member of the Security Team is the individual suspected of acting contrary to policy.

6.2 Records of Incidents.


The Security Team shall be responsible for the review of audit trails, log files and other records of activity involving Protected Information and Information System usage.

6.3 Malicious Software.


All members of the Workforce and Contractors are required to immediately report to the Security Team the possible presence of software viruses and worms, and any spyware which appears to be present or to be affecting the performance of any personal computer or other device or application they are using.

6.4 Social Engineering.


All members of the Workforce and Contractors are required to immediately report to the Security Team any communication requesting Protected Information and/or information potentially useful in obtaining Information System access or use, where the requesting individual's authority to obtain such information is not known and cannot be confirmed with the applicable Information Asset Supervisor. This requirement applies to all communications, whether face-to-face, by telephone or email, or otherwise.

6.5 Violations by Accountable Security Personnel.


Potential or suspected Security Incidents involving an Information Asset Supervisor, Authorized Security Officer or member of the Security Team shall be promptly reported to the [COMPLIANCE OFFICER/LEGAL OFFICER/COO/OTHER].

7. Responding to Security Incidents


All reports of potential or suspected Security Incidents shall be documented upon receipt. Any actions taken in response to a potential or suspected Security Incident shall be documented in the form provided by the ISIRT. The originals of all Security Incident documentation shall be kept by the ISIRT according to the Policies and Procedures Documentation Policy.


7.1 Malicious Software Incidents.


The Security Team shall respond to all Security Incidents involving malicious software according to the Malicious Software Policy, Policy No. __.


7.2 Information Asset Supervisors.

Upon observing or receiving a report of a potential or suspected Security Incident the Information Asset Supervisor shall:

  • Notify the ISIRT and cooperate with all ISIRT response requests.

  • Document the observation or report.

  • If the observation or report indicates the involvement of a Workforce member or Contractor, suspend the access of the individual(s) involved to the Information System pending investigation.

7.3 Critical Security Incidents.


An Incident Leader will be designated for each Critical Security Incident. The Incident Leader will be responsible for identifying and coordinating responsive actions; identifying and convening the members of the ISIRT necessary or appropriate for response to the incident; coordinating with the Law Department, Public Affairs and other internal parties; and reporting on the incident and responses to the Security Oversight Committee.


The Incident Leader shall consult promptly with legal counsel to determine whether the Security Incident may expose ORGANIZATION to material legal penalties and/or liabilities. If there appears to be a material risk of such penalties or liabilities, the Incident Leader shall promptly ask the ISIRT to consider whether the investigation and reporting should be conducted through or under the oversight of legal counsel. External consultants, technical experts and/or legal counsel may be retained for purposes of incident response upon authorization by the ISIRT.


During a Critical Security Incident response the ISIRT members will meet in a predetermined physical location, by teleconference and/or by electronic communication to ensure that all members are informed of their duties and tasks in connection with the response, and to avoid duplication of effort and loss of evidence.

7.4 Moderate and Minor Incidents

An Incident Investigator will be designated for each Moderate or Minor Security Incident. The Incident Investigator shall be provided by the Security Team with any additional investigative or analytical help which may be necessary or desirable. External resources may be obtained upon authorization by the ISIRT. Moderate and Minor Security Incidents shall be reported periodically to the ISIRT under procedures adopted by the ISIRT.


7.5 Security Incident Forensic Investigation.


The Incident Leader or Incident Investigator will supervise and work with Security Team analysts and investigators to determine the extent of damage and the effects of the Security Incident on systems, data, and operations, as well as the threats and vulnerabilities which caused or facilitated the occurrence of the incident.


Information gathered in the investigation of Security Incidents shall be developed and preserved to the greatest extent possible as potential evidence admissible in court in case it is needed in legal proceedings. Whenever possible, any individuals or entities which may be liable for harm caused by the incident shall be identified, and the ISIRT may seek to have damages quantified for possible use in administrative or legal proceedings.


7.6 Suspicious Activities.


ORGANIZATION Workforce and Contractors will report Suspicious Activities to the Security Team, which will publish contact information and maintain reporting channels for this purpose.


The Security Team will investigate any such report appropriately, including through follow-up interviews and reviews of logs and audit trails.


7.7 Audit Logs.


The Security Team will be responsible for reviewing audit trails and logs throughout the Information Systems. Such reviews will be conducted with respect to a given device or application whenever a Security Incident or Suspicious Activities are reported which may involve unauthorized access to the device or application.


The Security Team shall also review all audit trails and logs pertaining to Critical Systems no less frequently than _____________, and shall review samples of audit trails and logs pertaining to non-Critical Systems no less frequently than ___________, for possible evidence of Security Incidents or Suspicious Activities.


Review may be expedited by use of appropriate analysis tools. Scheduling and sampling procedures shall not be disclosed in advance to personnel not directly involved in the review. Information and observations obtained in the course of Security Team investigations and reviews shall be immediately assessed for indications of the reasonable possibility of the actual or threatened occurrence of a Security Incident or Incidents, using the prudent professional judgment of Security Team staff.

In the event Security Team staff determines that there is a reasonable possibility of an actual or threatened Security Incident, they will report this determination to the ISIRT, which will respond in accordance with this Policy. A Security Team determination that, upon investigation or review, there is not a reasonable possibility of an actual or threatened Security Incident shall be logged and included in the Security Team's report to the CISO.
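
Section 7.7 contemplates expediting review with "appropriate analysis tools." As one purely illustrative sketch of that idea, the Python fragment below samples non-Critical System log exports and flags repeated failed logins for Security Team follow-up. The directory path, CSV layout ("user" and "event" columns), sampling fraction, and threshold are all assumptions; actual scheduling and sampling procedures should remain undisclosed, as this section requires.

    # Hypothetical log-sampling helper; file layout and field names are assumed.
    import csv
    import random
    from collections import Counter
    from pathlib import Path

    def sample_log_files(log_dir: str, sample_fraction: float = 0.1) -> list[Path]:
        """Pick a random sample of log files for review."""
        files = sorted(Path(log_dir).glob("*.csv"))
        k = max(1, int(len(files) * sample_fraction)) if files else 0
        return random.sample(files, k)

    def flag_failed_logins(log_file: Path, threshold: int = 5) -> list[str]:
        """Return user IDs with more than `threshold` failed logins in one file."""
        failures = Counter()
        with log_file.open(newline="") as handle:
            for row in csv.DictReader(handle):  # assumes columns: user, event
                if row.get("event") == "LOGIN_FAILED":
                    failures[row.get("user", "unknown")] += 1
        return [user for user, count in failures.items() if count > threshold]

    if __name__ == "__main__":
        for path in sample_log_files("/var/log/exports"):  # hypothetical path
            for user in flag_failed_logins(path):
                print(f"Possible Suspicious Activity: {user} in {path.name}")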

7.8 Security Incident Response Times.

Upon receipt of information indicating the possible occurrence of a Security Incident, the ISIRT shall assign a preliminary rank to the Security Incident and proceed under the following timetable:

Critical Incidents:


  • Assign Incident Leader within _____

  • Control access to all relevant devices and records within _____

  • Notify ISIRT members within _________

  • Commence investigation within __________

  • First report to ISIRT on probable scope of harm and continuing risk within _____

Moderate:


  • Assign Incident Investigator within _________

  • Commence review of all relevant devices and records within ________

  • Report on probable scope of harm and continuing risk to Information Asset Supervisor within __

Minor:


  • Assign Incident Investigator within _________

  • Commence review of all relevant devices and records within ________

  • Report on probable scope of harm and continuing risk to Information Asset Supervisor within __

Suspicious Activity:


  • Security Team conducts preliminary review within ________

  • Security Team review of applicable logs/audit trails within ___________

  • Security Team interview(s) with relevant personnel within ________

  • Security Team determination whether to refer to ISIRT within ________
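
The time limits above are deliberately left blank for each organization to fill in. As a hypothetical sketch only, a ticketing system could compute due dates for each required step from whatever limits the organization adopts; every time value below is a placeholder, not a recommendation.

    # Hypothetical deadline tracker for Section 7.8; all timedelta values are
    # placeholders the organization would replace with its own adopted limits.
    from datetime import datetime, timedelta

    RESPONSE_DEADLINES = {
        "Critical": {
            "assign_incident_leader": timedelta(hours=1),
            "control_access_to_devices_and_records": timedelta(hours=2),
            "notify_isirt_members": timedelta(hours=2),
            "commence_investigation": timedelta(hours=4),
            "first_report_to_isirt": timedelta(hours=24),
        },
        "Moderate": {
            "assign_incident_investigator": timedelta(hours=8),
            "commence_review": timedelta(hours=24),
            "report_to_information_asset_supervisor": timedelta(days=3),
        },
        "Minor": {
            "assign_incident_investigator": timedelta(days=1),
            "commence_review": timedelta(days=3),
            "report_to_information_asset_supervisor": timedelta(days=7),
        },
    }

    def due_dates(severity: str, reported_at: datetime) -> dict[str, datetime]:
        """Compute the due date for each required response step."""
        return {step: reported_at + delta
                for step, delta in RESPONSE_DEADLINES.get(severity, {}).items()}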

8. Cross-References

Acceptable Use Policy, Policy No. __

Authorized Security Officers: The authority of the Chief Information Security Officer ("CISO") and other accountable security personnel is set forth in Policy No. __

Policies and Procedures Documentation, Policy No. __

Security Team: The responsibility, authority and organizational structure of the Security Team are set forth in Policy No. __

"Accountable Security Personnel" is defined in Policy No. __

Contractor" is defined in Policy No. __

"Information System" is defined in Policy No. __

"Information Asset Supervisor" is defined in Policy No. __

"Protected Information" is defined in Policy No. __

"Workforce" is defined in Policy No. __

Sunday, January 28, 2007

Why This Blog?

I started this blog to try to help move information security theory and practice forward as both an intellectual discipline and professional practice area. Information security as a discipline is very new, as are the technologies involved and the professional disciplines of computer science, network implementation, and information management upon which information security builds. And computers and networks have evolved rapidly, which has required information security theory and practice to evolve rapidly in an attempt to keep pace.

Because all American statutory, regulatory, and common law is based at least to some extent on experience (and preferably on precedent), information security legal theory is not well-developed. As a result, no settled standards exist for allocating losses and settling disputes when security fails. In many cases nobody really knows how to determine whether someone has been negligent or has failed to meet regulatory requirements; sometimes, it is difficult to determine even what those duties are. Therefore, a legal theory of information security can’t just summarize existing legislation, precedent, and secondary authorities—it must weave together relevant strands of activity and thought.

This is what I want to try to do in this blog, I hope with my readers' help. I've been working on this for some years now, and it's a fascinating and I hope ultimately useful exercise. I plan to post, as and when I can, on not only current events and issues of interest to information security compliance and risk management, but also on some of the history of this field. I think this history has been neglected, to our disadvantage - we've learned a few things already, and it's worth it to try to remember them, even as we invent new tricks to deal with new problems.

Friday, January 26, 2007

The Integrated Information Security Standard of Care?

That's what I call this. It's more fully justified and explained in Christiansen, An Integrated Standard of Care for Healthcare Information Security (2005):

Integrated Information Security Standard of Care


1. An information security duty of care exists when an entity:

a. Uses an information system to create, store, process or transmit information, and

b. The entity is required by law to protect that information against unauthorized disclosure, use, or alteration (i.e., it is protected information).

2. An entity is “required by law” to protect information when such a duty is established by:

a. A statutory, regulatory or contractual provision; or

b. The existence of a legal obligation of the entity to act for the benefit of a party who may be harmed by the unauthorized disclosure, use, or alteration of the protected information.

3. The information security duty of care is satisfied by the implementation of an information security program consisting of:

a. Organizational policies governing the use or disclosure, and/or the protection of the confidentiality, integrity and/or availability of protected information, and/or accountability for transactions or events affecting the confidentiality, integrity and/or availability of protected information, which are consistent with the requirements of applicable law.

b. Information system program policies, procedures, practices, and governance structures (controls) which implement the foregoing organizational policies, and whose objective is to provide reasonable assurance that the information system is operated and functions so that:

(i) No disclosure or use of protected information is made by an individual, application, or device, unless that disclosure or use is authorized by organizational policy,

(ii) Protected information is reasonably available to individuals, applications, and/or devices in order to serve an authorized purpose under organizational policy, and

(iii) Protected information is not altered by an individual, application, and/or device except as authorized under organizational policy.

c. Administrative, physical, and technical information safeguards which are implemented as part of the information system program and are intended to provide reasonable protection against reasonably identifiable threats to protected information.

4. The controls and safeguards implemented for purposes of an information security program meet the standard of care if:

a. They provide a reasonable assurance of compliance with applicable organizational policies.

b. They are reasonably consistent with the controls and safeguards implemented by reasonably comparable entities using reasonably comparable information systems (peer organizations), unless

(i) The controls and/or safeguards implemented by peer organizations fail to take into account known threats that can be readily addressed by reasonably available controls or safeguards, or

(ii) The operating environments and/or operational objectives of peer organizations differ materially from those of the implementing organization; and

c. The costs and burdens of the controls and safeguards are reasonably proportionate to the risks of harm to parties (with legal interests in protected information) that are created by use of the information system for the purposes, or under the conditions, that create such risks.



Of course, this standard doesn't just apply to healthcare.

Thinking About Risk: What Do You Know, and How Do You Know You Know It?

I've been thinking a lot about IT risk issues for the last several years, and one thing that is becoming clear is that IT security issues are only one category of IT risk. It is also pretty clear to me that we need a deeper and more robust discussion of what IT "risk" is, and how we work with it.

There is a lot of useful work going on in this area, and a lot of groundwork has been laid. (Examples: NIST and CERT specific to IT security; CobiT on IT in general; COSO on enterprise risk; Schneier; etc.) But some of the most insightful stuff I've seen has come out of the environmental sector - I've incorporated a lot of that into my own book.

I found one of the best and most illuminating examples of this in my recreational reading of Jared Diamond's *Collapse*, an analysis of how a number of societies succeeded or failed in dealing with environmental challenges. They range from Easter Island (big failure) to Tokugawa Japan (success). One part I found particularly striking and possibly quite valuable in thinking about IT risk issues was Diamond's "road map of factors contributing to failures of group decision-making."

I would characterize these as "cognitive" risk factors, which can work synergistically with technological, social, financial and other types of threats and vulnerabilities ("circumstantial risk factors?") to make a bad situation worse. I think these cognitive factors are at least intuitively pretty well known, but Diamond summarizes them (and demonstrates their effects) particularly well. I'm going to summarize them, and try to identify the kinds of IT risk situation where each type appears to be a contributing factor.

Diamond's "road map of factors" breaks down into the following four principal categories and their subcategories:

Factor 1. Failure to anticipate a problem before it arrives.

1.1 The group has never experienced the problem.
1.2 The group experienced the problem before but it was a long time ago and there is no accurate recollection.
1.3 Reasoning by false analogy: The problem looks like but in fact is not the same as one experienced before.

Factor 1 IT Comparison: I think it's fair to say that computer and electronic communication technologies don't work like anything human beings have dealt with before. (If they had computers in Atlantis obviously we don't remember!) And physical world analogies tend not to work very well in thinking about IT problems - look at some of the discussions about IP, jurisdictional and privacy issues. (I wrote a piece some years back arguing that "property" concepts implicitly based on physical world analogies break down in healthcare IT network environments.)

Factor 2: Failure to perceive a problem which has arrived.

2.1 "The origins of some problems are literally imperceptible" - the group doesn't have any way to see them developing.
2.2 The problem of "distant managers:" those responsible for making decisions are not on the scene where the problem is manifest, and so don't see it or don't perceive it accurately.
2.3 The problem develops in a "slow trend concealed by wide up-and-down fluctuations." This can lead to "creeping normalcy," in which harmful conditions develop so slowly that they are not really seen (the mythical "lobster gradually boiled to death").

Factor 2 IT Comparison: Legion. The "imperceptible origins" of some problems might include the way that some applications interact to create new, unanticipated vulnerabilities. And probably every CSO I know would say s/he suffers or has suffered from "distant managers" - maybe not physically, but "distanced" from the realities of their security problems. (Also part of the problem with DHS and its take on cybersecurity?) And how about the "creeping normalcy" of an Internet environment full of spam, phishing and other vermin?

Factor 3: "The most frequent and most surprising factor," the failure to "even attempt to solve a problem once it has been perceived." The reasons for this factor are "rational" and "irrational."

The "rational" reasons for not solving problems basically amount to variants on selfishness (or at least less-than-enlightened self-interest), i.e. advancing individual or small-group interests at the expense of a larger group or the community. If the small group or individual receives a large benefit at the expense of "small harms" to each of the members of the large group or community, the small group will have a strong motivation to pursue the benefit, while no individual member of the community will have much motivation to try to stop the harmful activity.
3.1 "ISEP" - "it's someone else's problem" - externalizing the harms caused by your activities onto others or the community as a whole.
3.2 "The tragedy of the commons" (a popular one when I was in college): A consequence of the adoption of rational selfishness by all users of a common, limited resource. If one user starts taking more than his share, all users have an incentive to try to get in there and take more than their shares first.
The resource is degraded or lost.
3.3 "Elite conflict of interest." Another variant, where decisions for the community are made by an elite which pursues its own selfish interests at the expense of the community.

Rational Factors 3.1 - 3.3 IT Comparison: Again legion. Vendors selling insecure products ("it's the users' problem"), spammers degrading the Internet commons, elite executive groups underspending on mission functions to enhance their compensation, consultants "hiding the ball" on crucial information to preserve a work stream, etc., etc.

3.4 "Irrational" attachment to inappropriate values - the actions needed for a solution are somehow inconsistent with the group's priorities or mission. Sometimes a difficult matter to judge, since it may be hard to see why values which led to success before won't work in the future, and it may be hard to predict whether new values will in fact work.
3.5 "Irrational" crowd psychology - doing what everybody else does because they are doing it.
3.5 "Irrational" group-think: suppression of doubts and dissent, premature consensus, shared but erroneous assumptions.
3.6 Denial - 'nuff said?

Irrational Factors 3.4 - 3.7 IT Comparison: On the values side think about corporate cultural/brand identities - Big Blue v. Microsoft on the future of PCs, Microsoft v. open source on the future of software, etc. Crowd psychology - anybody still holding dot.com stock options? Group-think: dot.com investment gurus? (Maybe that was an elite conflict of interest problem . . . ) Denial - how high was the Dow supposed to go, again???

Factor 4: The problem may be beyond the group's capacity to solve, may be prohibitively expensive to solve, or the problem may be so far advanced that any solution is too little/too late.

Factor 4 IT Comparison: This is the ideal world of risk management. If we can get past Factors 1 - 3, all we need to do is figure out if we can solve the problem or not, and how much it will cost!

Concluding Note: None of this is altogether new - for example, I think the CobiT "IT Governance Maturity Model" is based on a recognition of these kinds of problems, and gives a pretty good structure for avoiding them. The same is true of the more general risk identification and management approach in the COSO Enterprise Risk Management Framework. But Diamond's scheme was especially striking because it seemed so clearly applicable to issues and problems I have seen over the last several years in many IT settings, even though it is based on a completely different field of study.

I guess the lesson I take from this (one lesson, anyway, so far) is that organizations (like the societies Diamond studied) can be cognitively more or less competent, and that these cognitive factors need to be front and center in any really thorough risk assessment. This is probably not a comfortable thing to suggest in many settings, since it means broadening the focus from technology- and business process-specific threats and vulnerabilities to include management and governance - a bigger project and a harder and more threatening sell to management.

But I think ultimately this is where IT security risk analysis and management has to go, not just for the sake of compliance but for the sake (in some cases) of the sustainability of the organization. An IT-dependent organization had better be good at understanding and managing its IT - and we need to recognize that this kind of cognitive competence is itself a risk factor which should be assessed and managed in its own right.