Saturday, April 28, 2007

Data Havens and UFO Hacking

This probably dates me but is too good to pass up. The once-upon-a-time data haven of Sealand has offered political asylum to the guy who hacked into NASA and DoD computers in search of UFO information.

From Personal Computer World (UK)



North Sea 'state' offers McKinnon asylum

Sealand may not be enough to save 'most prolific hacker' from extradition
Emil Larsen, Personal Computer World 26 Apr 2007

Gary McKinnon, who faces extradition to the US for allegedly hacking into military computers, has been offered asylum by the self-styled breakaway state of Sealand, it was claimed at the Infosec security conference today.

The "state", a World War II fort known as Roughs Tower in the North Sea just north of the Thames, was declared an independent principality in 1967 by a former major called Paddy Roy Bates. He dubbed himself Prince Roy.

McKinnon sat on a "hackers panel" at Infosec to debate new changes to the Computer Misuse Act. The claim about Sealand was made by one of his fellow panellists, a "security analyst" identified only as Mark.

McKinnon, described by American prosecutors as the most prolific hacker of all time, spoke only twice, first to introduce himself and then when asked if companies often overstate the value of damage done by hackers.

McKinnon said they did. He added that the US could only have extradited him from the UK if it could show the offence was "worth a year in prison in both countries".

He added that to merit such a sentence the damage had to amount to $5,000. The damage he was accused of causing came to exactly that, so the US military were "obviously not shopping in PC World".

McKinnon's lawyers have said they plan an appeal to the House of Lords against Home Secretary John Reid's granting of a US request to extradite McKinnon.

Tuesday, April 3, 2007

Newbie Lawyers as Enablers of Bad Security

This post is a bit of a rant, so please bear with me.

I'm currently neck-deep in a variety of projects and a couple of presentations about security for electronic health records. This happens to be a field I've worked in since the early 1990s - though we didn't generally call them "electronic health records" then - so I've seen quite a few initiatives, applications and companies come and go.

The current round of activity is seeing a number of newbie lawyers advising and opining on EHR issues - and there's nothing wrong with that; it's a great field of practice and we all have to start somewhere - but as far as I can tell some of these folks either haven't bothered to research security issues and the history of the field, or don't even know there's something they are missing.

Twice in the last few days I've read a lawyer's assurance that EHR issues are all copacetic because, by golly, the records are going to be password protected. One seemed quite pleased to note this, almost breathless with enthusiasm (if you can be breathless in print).

I've got nothing against passwords - some of my favorite files are password-protected - but what these folks appear not to understand is that by telling me this, they have told me nothing very important. Not as important, for example, as if they had noted that there wasn't password protection - then I would either be interested to know what alternative means of authentication were proposed, or stunned to find out that even minimal authentication was forgone. But passwords are so pervasive and basic (a "due care" safeguard, to use Donn Parker's term) that this is not significant information.
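For the sake of concreteness, here's a minimal sketch (mine, not anything from the memos I read) of roughly what "password protected" amounts to in practice: a salted, iterated hash stored in place of the password and checked at login. The function names and parameters are hypothetical; the point is that this is the due-care floor, not a security architecture.

import hashlib
import hmac
import os

# Illustrative sketch only: the bare "due care" baseline that "password
# protected" usually implies. Function names and parameters are hypothetical.

def hash_password(password, salt=None):
    """Return (salt, derived_key); store these instead of the password itself."""
    if salt is None:
        salt = os.urandom(16)                      # random per-user salt
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, key

def verify_password(password, salt, stored_key):
    """Recompute the derived key and compare in constant time."""
    _, key = hash_password(password, salt)
    return hmac.compare_digest(key, stored_key)

# Usage
salt, key = hash_password("fido1234")              # the dog's name, of course
print(verify_password("fido1234", salt, key))      # True
print(verify_password("guess", salt, key))         # False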

What is meaningful is the suite of security safeguards, including but not at all limited to authentication, used to protect a given EHR. And this is something we've known for quite some time - even lawyers have known it. Or, to be more accurate, some lawyers have known it. The ABA's digital signature/PKI initiatives developed a solid, if smallish, cadre of lawyers who are pretty good (some very good) on information security issues, and there have been other groups, projects and activities that have trained lawyers to deal with information security; there are some pretty good law review articles and a few treatises out there. And of course the HIPAA Security Rule gives a pretty good list of security areas that should be addressed in any EHR. And that's just the resources available without consulting information security professionals and infosec publications, which any lawyer working in this field should do routinely.
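To make "suite of safeguards" concrete, here's a toy sketch of my own (not a quotation from the Rule) listing the technical-safeguard areas of the HIPAA Security Rule (45 CFR 164.312) and checking a hypothetical vendor profile against them; administrative and physical safeguards (164.308, 164.310) come on top of these. "Password protected" addresses, at best, one line of the list.

# Toy illustration: technical safeguard areas from the HIPAA Security Rule
# (45 CFR 164.312). The ehr_profile dict is a hypothetical vendor claim;
# administrative (164.308) and physical (164.310) safeguards apply as well.

HIPAA_TECHNICAL_SAFEGUARDS = [
    "access control",                    # unique user IDs, emergency access, automatic logoff, encryption
    "audit controls",                    # record and examine activity in systems holding ePHI
    "integrity",                         # protect ePHI from improper alteration or destruction
    "person or entity authentication",   # passwords are one mechanism here
    "transmission security",             # protect ePHI in transit over networks
]

ehr_profile = {
    "person or entity authentication": "passwords",   # the breathless assurance
}

missing = [area for area in HIPAA_TECHNICAL_SAFEGUARDS if area not in ehr_profile]
print("Safeguard areas not yet addressed:", missing)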

One reason this makes me peevish is that this seems to me a tremendous waste of good intellectual capital. We've learned a lot already, some of it very much the hard way - wouldn't it be a good idea to avoid known pitfalls? Especially if you're representing clients who are going to be putting highly sensitive, personal medical information into networked applications with the deliberate goal of enabling remote access?

Which leads to the main reason I'm feeling peevish: I often see this kind of advice used to validate bad security decisions. There's almost always a good argument for bad security: it's cheaper than good security. Look what happened with PCASSO - very good EHR security, patients liked it (they felt secure), but doctors found it burdensome. So what do they want to use? Passwords. Preferably their dog's name. (I got that again a couple of weeks ago - I mentioned it as an example of bad security while talking to a doc, whose immediate blush-and-cough I took as an admission of guilt, which he confirmed.) And a lawyer who doesn't know better will validate this choice.

Do I think passwords are never good enough authentication? Certainly not - that's not the point of this post.

The point is that lawyers play a significant role in risk identification and management decisions, such as those affecting EHR implementation, and it behooves us to either get up to speed on the issues before giving advice, or admit we aren't up to it and not fake it. (Sidebar: "Behoove" is a great word which doesn't get nearly enough use.)

If passwords provide sufficiently low-risk authentication, given the client's risk tolerance and in the context of the client's business processes and information systems, then the decision that they are an acceptable implementation is the client's to make. But that decision should be made in consultation with a lawyer, and if the lawyer doesn't know the issues - and perhaps doesn't even know that s/he doesn't know - the decision is badly grounded. A newbie lawyer may well wind up enabling a bad security decision.

I happen to think that EHRs and health information networking are in general a good idea and will ultimately be very beneficial. But we need to recognize that we are building an untested infrastructure for the storage and management of vast quantities of the most sensitive personal information, with opportunities for privacy, health and safety threats we can't yet forecast accurately. We should make security decisions for this infrastructure with caution, an appreciation for the limits of our knowledge and expertise, and a willingness to learn and figure out new tricks.

Rant over!