I've been thinking a lot about IT risk issues for the last several years, and one thing that is becoming clear is that IT security issues are only one category of IT risk. It is also pretty clear to me that we need a deeper and more robust discussion of what IT "risk" is, and how we work with it.
There is a lot of useful work going on in this area, and a lot of groundwork has been laid. (Examples: NIST and CERT specific to IT security; CobiT on IT in general; COSO on enterprise risk; Schneier; etc.) But some of the most insightful stuff I've seen has come out of the environmental sector - I've incorporated a lot of that into my own book.
I found one of the best and most illuminating examples of this in my recreational reading of Jared Diamond's *Collapse*, an analysis of how a number of societies succeeded or failed in dealing with environmental challenges. They range from Easter Island (big failure) to Tokugawa Japan (success). One part I found particularly striking and possibly quite valuable in thinking about IT risk issues was Diamond's "road map of factors contributing to failures of group decision-making."
I would characterize these as "cognitive" risk factors, which can work synergistically with technological, social, financial and other types of threats and vulnerabilities ("circumstantial risk factors?") to make a bad situation worse. I think these cognitive factors are at least intuitively pretty well known, but Diamond summarizes them (and demonstrates their effects) particularly well. I'm going to summarize them, and try to identify the kinds of IT risk situations where each type appears to be a contributing factor.
Diamond's "road map of factors" breaks down into the following four principal categories and their subcategories:
Factor 1: Failure to anticipate a problem before it arrives.
1.1 The group has never experienced the problem.
1.2 The group experienced the problem before but it was a long time ago and there is no accurate recollection.
1.3 Reasoning by false analogy: The problem looks like but in fact is not the same as one experienced before.
Factor 1 IT Comparison: I think it's fair to say that computer and electronic communication technologies don't work like anything human beings have dealt with before. (If they had computers in Atlantis, obviously we don't remember!) And physical-world analogies tend not to work very well in thinking about IT problems - look at some of the discussions about IP, jurisdictional and privacy issues. (I wrote a piece some years back arguing that "property" concepts implicitly based on physical-world analogies break down in healthcare IT network environments.)
Factor 2: Failure to perceive a problem which has arrived.
2.1 "The origins of some problems are literally imperceptible" - the group doesn't have any way to see them developing.
2.2 The problem of "distant managers:" those responsible for making decisions are not on the scene where the problem is manifest, and so don't see it or don't perceive it accurately.
2.3 The problem develops in a "slow trend concealed by wide up-and-down fluctuations." This can lead to "creeping normalcy," in which harmful conditions develop so slowly that they are not really seen (the proverbial frog gradually boiled to death).
Factor 2 IT Comparison: Legion. The "imperceptible origins" of some problems might include the way that some applications interact to create new, unanticipated vulnerabilities. And probably every CSO I know would say s/he suffers or has suffered from "distant managers" - maybe not physically, but "distanced" from the realities of their security problems. (Also part of the problem with DHS and its take on cybersecurity?) And how about the "creeping normalcy" of an Internet environment full of spam, phishing and other vermin?
Factor 3: "The most frequent and most surprising factor," the failure to "even attempt to solve a problem once it has been perceived." The reasons behind this failure can be "rational" or "irrational."
The "rational" reasons for not solving problems basically amount to variants on selfishness (or at least less-than-enlightened self-interest), i.e. advancing individual or small-group interests at the expense of a larger group or the community. If the small group or individual receives a large benefit at the expense of "small harms" to each of the members of the large group or community, the small group will have a strong motivation to pursue the benefit, while no individual member of the community will have much motivation to try to stop the harmful activity.
3.1 "ISEP" - "it's someone else's problem" - externalizing the harms caused by your activities onto others or the community as a whole.
3.2 "The tragedy of the commons" (a popular one when I was in college): A consequence of the adoption of rational selfishness by all users of a common, limited resource. If one user starts taking more than his share, all users have an incentive to try to get in there and take more than their shares first.
The resource is degraded or lost.
3.3 "Elite conflict of interest." Another variant, where decisions for the community are made by an elite which pursues its own selfish interests at the expense of the community.
Rational Factors 3.1 - 3.3 IT Comparison: Again legion. Vendors selling insecure products ("it's the users' problem"), spammers degrading the Internet commons, elite executive groups underspending on mission functions to enhance their compensation, consultants "hiding the ball" on crucial information to preserve a work stream, etc., etc.
3.4 "Irrational" attachment to inappropriate values - the actions needed for a solution are somehow inconsistent with the group's priorities or mission. Sometimes a difficult matter to judge, since it may be hard to see why values which led to success before won't work in the future, and it may be hard to predict whether new values will in fact work.
3.5 "Irrational" crowd psychology - doing what everybody else does because they are doing it.
3.5 "Irrational" group-think: suppression of doubts and dissent, premature consensus, shared but erroneous assumptions.
3.7 Denial - 'nuff said?
Irrational Factors 3.4 - 3.7 IT Comparison: On the values side, think about corporate cultural/brand identities - Big Blue v. Microsoft on the future of PCs, Microsoft v. open source on the future of software, etc. Crowd psychology - anybody still holding dot.com stock options? Group-think: dot.com investment gurus? (Maybe that was an elite conflict of interest problem . . . ) Denial - how high was the Dow supposed to go, again???
Factor 4: The problem may be beyond the group's capacity to solve, may be prohibitively expensive to solve, or may be so far advanced that any solution is too little, too late.
Factor 4 IT Comparison: This is the ideal world of risk management. If we can get past Factors 1 - 3, all we need to do is figure out if we can solve the problem or not, and how much it will cost!
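Before the concluding note, here's a minimal sketch - my own illustration in Python, not anything drawn from Diamond, CobiT or COSO - of how the four factors might be turned into an explicit cognitive-risk checklist to sit alongside the usual threat and vulnerability items in an assessment:

```python
# A purely hypothetical sketch (not taken from Diamond, CobiT, or COSO) of one way
# the four "road map" factors could be carried into an IT risk assessment as a
# cognitive-risk checklist, alongside the usual threat/vulnerability items.

ROAD_MAP = {
    "1. Failure to anticipate": [
        "Has the organization faced anything like this problem before?",
        "Are we reasoning from physical-world analogies that may not hold for IT?",
    ],
    "2. Failure to perceive": [
        "Could the problem be developing somewhere no one is instrumented to see it?",
        "Are the decision-makers 'distant' from the systems where the problem shows up?",
        "Is a slow trend being written off as normal up-and-down fluctuation?",
    ],
    "3. Failure to attempt a solution": [
        "Who benefits from leaving the problem alone (ISEP, commons, elite interests)?",
        "Are cherished values, crowd psychology, group-think, or denial in play?",
    ],
    "4. Failure of capacity": [
        "Is a fix within the organization's reach, and is there still time for it to matter?",
    ],
}

def assessment_agenda(road_map=ROAD_MAP):
    """Flatten the road map into a question list for a risk-assessment workshop."""
    return [f"{factor}: {question}"
            for factor, questions in road_map.items()
            for question in questions]

if __name__ == "__main__":
    for item in assessment_agenda():
        print(item)
```

The particulars don't matter much; the point is that these cognitive factors can be made into explicit, askable questions in an assessment rather than left as background assumptions.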
Concluding Note: None of this is altogether new - for example, the CobiT "IT Governance Maturity Model" is, I think, based on a recognition of these kinds of problems, and gives a pretty good structure for avoiding them. The same is true of the more general risk identification and management in the COSO Enterprise Risk Management Framework. But Diamond's scheme was especially striking because it seemed so clearly applicable to issues and problems I have seen over the last several years in many IT settings, even though it comes from a completely different field of study.
I guess the lesson I take from this (one lesson, anyway, so far) is that organizations (like the societies Diamond studied) can be cognitively more or less competent, and that these cognitive factors need to be front and center in any really thorough risk assessment. This is probably not a comfortable thing to suggest in many settings, since it means broadening the focus from technology- and business process-specific threats and vulnerabilities to include management and governance - a bigger project and a harder and more threatening sell to management.
But I think ultimately this is where IT security risk analysis and management have to go, not just for the sake of compliance but for the sake (in some cases) of the sustainability of the organization. An IT-dependent organization had better be good at understanding and managing its IT - and we need to recognize that this kind of cognitive competence is itself a risk factor which should be assessed and managed in its own right.
1 comment:
It seems to me one other aspect of IT security is "managing expectations". We have all agreed to 'terms and conditions' to use a program, but usually don't really know what they are. No IT system is going to be so perfect that information never leaks. The bar needs to be set at 'good enough', though I agree that the bar needs to be quite high for health information.