Tuesday, May 26, 2009

Stop! Don't Sign that Business Associate Contract!

As readers of this blog (should) know, the HITECH provisions of the stimulus bill include a very significant expansion of regulatory authority over business associates. They also include a very significant increase in penalties for HIPAA violations. The upshot of these changes is that many organizations which were not previously subject to HIPAA penalties will (relatively) soon be exposed to new, important liability risks. This changes the risk calculus for many organizations and individuals which may want to reconsider whether they are, or want to be, business associates.

Previously, for many organizations and individuals in many business relationships with Covered Entities, there has been little or no downside to signing a business associate contract. In many cases - I've dealt with a great many of them - Covered Entity staff handling contracts for IT, accounting, legal and various kinds of consulting services have simply assumed that a Business Associate Contract is necessary. This assumption may or may not have been true, but it was put forward without analysis, based upon a misunderstanding of the law, out of an excess of caution, or "because our lawyer requires it." (Of course, lawyers themselves have been guilty of pushing this kind of assumption.)

For the purported Business Associate in this situation the Business Associate Contract has therefore been an obstacle to the deal, but generally a minor one. The Business Associate would not be exposed to regulatory penalties for violation of the contract, and where the Business Associate Contract's terms don't include significant penalties for violation - often the case, especially when the Covered Entity has gone with a simple form - the Business Associate's risk for violation is pretty much limited to termination of the contract. As legal risks go, this isn't a big one. So, many organizations have been willing to sign off on Business Associate Contracts as a condition to closing a deal or relationship with a Covered Entity, even if they really aren't acting as a Business Associate, or could provide appropriate services without doing so. I know of quite a few IT, legal, accounting and consulting services companies where this is the case.

If you're in that position you probably want to rethink it. As of next February 17, Business Associates will be directly regulated by Health and Human Services, directly subject to audit, and directly exposed to penalties for Security Rule violations, violations of HITECH provisions, and violations of their Business Associate Contracts. And these penalties are potentially much greater than they used to be: think hundreds of thousands or even millions of dollars, if you violate enough provisions and do it with "willful neglect."

Now, just signing a Business Associate Contract shouldn't be enough to make you a Business Associate - though it could certainly be taken as evidence that you and your trading partner intend for you to be one. I would take the position that even if my client signed a document accepting Business Associate status, it really isn't one unless it has actually done something involving Protected Health Information on behalf of a Covered Entity. But that's a legal argument, and if I have to make it my client will already be under investigation and at risk of penalties. I'd prefer not to go there, myself.

So, if you're a Business Associate, or think you might be, or never really thought about it but signed off on a contract to make a deal happen, you probably want to consider the services you provide, whether you can or want to provide them in a different way if they currently involve Protected Health Information, and maybe whether the existing compensation reflects your soon-to-increase risk.

Me? As a lawyer and sometimes Business Associate, I certainly will be looking at this.

Monday, April 13, 2009

Caselaw: When Bad Security Makes for Invalid Electronic Signatures

Signatures are essential - as in, legally required - for many healthcare records, among them medical records, drug orders and prescriptions. Failure to sign violates licensing and frequently other state law provisions, and in some cases federal requirements and accreditation standards. Federal and state laws - E-SIGN and the Uniform Electronic Transactions Act (UETA, adopted in almost all states) - also permit electronic signatures, and these have become a standard part of electronic health record (EHR) and electronic prescribing (e-Rx) systems.

Neither E-SIGN nor UETA specifies the technologies which are acceptable as electronic signatures; instead, both leave it up to the agreement of the parties. As a matter of law, then, an electronic signature is any "electronic sound, symbol, or process attached to or logically associated with a record and executed or adopted by a person with the intent to sign the record." For example, when you download a new application you are usually confronted with several pages of license agreement and a "click to accept" button, or something similar. When you click the button, you are executing an electronic process (the results of the click) logically associated with a record (the license agreement) with intent to sign (implied by the fact that you clicked after being asked if you accept the license agreement). From a legal point of view, you have just electronically signed the license agreement.
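To make the statutory elements concrete, here is a minimal sketch (my own illustration, not drawn from any actual product) of how a click-to-accept system might capture each element of that definition: an electronic process (the click), logical association with the record (a hash of the agreement text), and evidence of intent (the prompt the signer saw and affirmed). All names are hypothetical.

```python
# Hypothetical sketch: capturing the E-SIGN/UETA elements of an
# electronic signature in a click-to-accept flow.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class SignatureEvent:
    signer_id: str      # the authenticated account that clicked
    record_sha256: str  # logically associates the signature with the record
    prompt_text: str    # what the signer was asked, as evidence of intent
    signed_at: str      # UTC timestamp of the click (the "process")

def click_to_accept(signer_id: str, agreement_text: str,
                    prompt_text: str) -> SignatureEvent:
    """Record a click-to-accept event tying signer, record and intent together."""
    digest = hashlib.sha256(agreement_text.encode("utf-8")).hexdigest()
    return SignatureEvent(
        signer_id=signer_id,
        record_sha256=digest,
        prompt_text=prompt_text,
        signed_at=datetime.now(timezone.utc).isoformat(),
    )
```

The point of hashing the agreement text is the "logically associated with a record" element: if the agreement later changes, the stored hash no longer matches, which also matters for the amendment problems discussed elsewhere on this blog.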

As you can imagine, this open standard creates many opportunities for error and fraud. You could click to accept without intending to, just because you're fumble-fingered. (This is why double-click is often a better solution.) Somebody else could log on to your account, or create an account using your information, and "sign" records in your name - for example, bank transfer authorizations. And so on.

The security of the process used to create an electronic signature is therefore essential to its reliability, and both E-SIGN and UETA have provisions allowing an electronic signature's validity to be proven by evidence of the "efficacy" of the security of the process. Conversely, "bad" security may be grounds to contest an electronic signature, and even have it thrown out.

This recently happened in a Kansas federal district court case, Kerr v. Dillard Store Services. The record there was an arbitration agreement potentially applicable to the plaintiff's discrimination claim against her employer. In Kerr, the employer required employees "to memorialize their arbitration agreements by executing electronic arbitration agreements through an intranet computer system." The signature process was as follows:

To access the intranet each associate had a unique, confidential password that was created by and known only to the associate. Executing the agreement to arbitrate required the associate to (1) enter his or her social security number or associate identification number (AIN); (2) enter his or her secure password and; (3) click the “accept” option at the bottom of the arbitration agreement screen.

The execution transaction was confirmed by an email to the employee. All in all, a pretty standard electronic signature process, better than many, in my experience.
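The court's problem with this process, as the excerpt below shows, was not the three steps themselves but the lack of security and audit evidence behind them. As a rough, hypothetical sketch (all names are mine, not drawn from the case), the execution step with the kind of audit trail Dillard apparently could not produce might look like this:

```python
# Hypothetical sketch of the three-step execution described above,
# recording the audit evidence needed to later prove who executed it.
import hmac
from datetime import datetime, timezone

def execute_agreement(audit_log: list, ain: str, password: str,
                      stored_password: str, accepted: bool) -> bool:
    """Treat the agreement as executed only if all three steps succeed."""
    # Steps 1-2: identifier plus the associate's confidential password.
    # compare_digest avoids timing side channels in the password check.
    authenticated = hmac.compare_digest(password, stored_password)
    # Step 3: the explicit "accept" click on the agreement screen.
    executed = authenticated and accepted
    # The audit entry is what would let the employer demonstrate the
    # "efficacy of its security procedures" that the Kerr court found missing.
    audit_log.append({
        "ain": ain,
        "authenticated": authenticated,
        "accepted": accepted,
        "executed": executed,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return executed
```

Even this sketch only helps if passwords really are known only to the associate; as the court noted, a supervisor who can log on to an employee's account defeats the whole design.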

Dillard, the employer, tried to hold employee Kerr to the online arbitration agreement it claimed she had signed. However, the plaintiff claimed she never executed this process, and the burden of proof was on the employer. The court found for Kerr, reasoning that:

The problem with Dillard’s position is that it did not have adequate procedures to maintain the security of intranet passwords, to restrict authorized access to the screen which permitted electronic execution of the arbitration agreement, to determine whether electronic signatures were genuine or to determine who opened individual emails. . . . Therefore, it is not inconceivable Champlin [the store secretary] or a supervisor logged on to plaintiff’s account and executed the agreement. . . . Dillard’s has not demonstrated the efficacy of its security procedures with regard to electronic signature. . . . On this record, the Court cannot find that it is more likely than not true that plaintiff executed the electronic agreement to arbitrate.

While Kerr, as an unpublished district court decision, is not legally binding authority, it does demonstrate the pitfalls of bad security for electronic signature processes. Healthcare organizations, which depend on signed records for essential functions associated with some of their most significant liabilities, might do well to consider how their solutions would look in court.

Wednesday, April 8, 2009

Red Flag Rule Board Consent and Policy

If you've surfed this site, you know that one of my tenets is that we're all generally better off sharing key policies - it improves our overall knowledge base and helps set a standard of care.

In that spirit, here are some materials you might consider if your healthcare organization needs to come into compliance with the Federal Trade Commission's Red Flag Rules - which are, by the way, effective May 1, 2009. Of course, documents like these should only be adopted as part of a good data protection program, which all healthcare organizations should already have for HIPAA compliance purposes. I'd also strongly suggest having a look at the open source Security Incident Response Policy I posted here in 2007 - it goes well with these.

If you don't know what this is all about, a good place to start is with the FTC's own web site. And as ever, this is educational material, not legal advice. If you think you need to adopt something like this, ask your lawyer! And feel free to share these with him or her.
_____________________________________________________________________________________

Copyright 2009 © John R. Christiansen/Christiansen IT Law
Creative Commons Attribution 3.0 License
Distribution Permitted with Attribution Retained

CONSENT RESOLUTION FOR ADOPTION OF IDENTITY THEFT PREVENTION PROGRAM FOR [HEALTHCARE PROVIDER NAME]

The undersigned, being all of the Board of [Directors/Trustees] of [Healthcare Organization/Business Associate], a ___________ [ENTITY TYPE] (the "ENTITY"), hereby adopt and consent to the adoption of the following resolutions:

A. The Board has been advised by ENTITY’s [General Counsel/Legal Department/Law Firm], its legal counsel, with respect to the Federal Trade Commission’s Identity Theft Prevention Red Flag Rules, as codified at 16 CFR 681.2 (“Red Flag Rules”).

B. The Board has been further advised by [General Counsel/Legal Department/Law Firm] that [s/he/it] has determined, upon consultation with ENTITY’s [Chief Financial Officer/Chief Information Officer/Compliance Officer/Billing Department Head/Medical Records Department Head/Consultant/other relevant parties], that ENTITY is a “Creditor” and maintains “Covered Accounts” within the meaning of the Red Flag Rules. [General Counsel/Legal Department/Law Firm] has therefore determined that ENTITY is required to comply with the Red Flag Rules.

C. The Board has been further advised by ENTITY’s [Chief Information Security Officer/Compliance Officer/Security Consultant] that [s/he] has conducted an assessment of identity theft risks associated with ENTITY’s Covered Accounts, and determined that there are vulnerabilities which may present potential financial, operational, compliance, reputational or litigation risks to ENTITY, as well as financial, reputational or patient safety risks to ENTITY’s patients.

D. In order to comply with the Red Flag Rules and address identity theft risks, [General Counsel/Legal Department/Law Firm] and [Chief Information Security Officer/Compliance Officer/Security Consultant] have recommended to the Board that ENTITY adopt an Identity Theft Prevention Program. The [Chief Information Security Officer/Compliance Officer/Security Consultant] has further recommended that the Identity Theft Prevention Program be integrated with ENTITY’s existing [Information Security/Compliance] Program, due to the close relationship between identity theft prevention and the information protection and compliance goals of the latter program, and in order to implement the Identity Theft Prevention Program more efficiently.

Based upon these findings and recommendations, the Board has resolved as follows:

RESOLVED, that [Chief Information Security Officer/Compliance Officer/Security Consultant], in consultation with [General Counsel/Legal Department/Law Firm] and [Chief Financial Officer/Chief Information Officer/Compliance Officer/Billing Department Head/Medical Records Department Head/Consultant/other relevant parties], is authorized and directed to develop and implement an Identity Theft Prevention Program, as part of ENTITY’s [Information Security/Compliance] Program.

RESOLVED, that [Chief Information Security Officer/Compliance Officer/Security Consultant] and [General Counsel/Legal Department/Law Firm] shall be responsible for updating and revision of the Identity Theft Prevention Program to address changes in applicable law, changes in ENTITY’s operations or systems affecting identity theft risks, identity theft or security incidents indicating new or previously unidentified risks, and other factors affecting the effectiveness of the Identity Theft Prevention Program, in consultation with the [Chief Financial Officer/Chief Information Officer/Compliance Officer/Billing Department Head/Medical Records Department Head/Consultant/other relevant parties], as appropriate.

RESOLVED, that the [Chief Information Security Officer/Compliance Officer/Security Consultant] and [General Counsel/Legal Department/Law Firm] shall report to the Board when the Identity Theft Prevention Program has been implemented, at the next regular meeting of the Board after the effective date of such implementation, and in any case at the next regular meeting of the Board after [TIME PERIOD]. Following implementation, the [Chief Information Security Officer/Compliance Officer/Security Consultant] shall include a report on the Identity Theft Prevention Program along with [his/her] regular reports to the Board on the [Information Security/Compliance] Program.

RESOLVED, that the Board hereby authorizes the [Chief Information Security Officer/Compliance Officer/Security Consultant] to spend up to ______________ dollars for development and implementation of the Identity Theft Prevention Program. Following implementation, the Identity Theft Prevention Program shall be included as an element of the annual [Information Security/Compliance] Program budget, according to ENTITY’s usual procedures.
_____________________________________________________________________________________

Copyright 2009 © John R. Christiansen/Christiansen IT Law
Creative Commons Attribution 3.0 License
Distribution Permitted with Attribution Retained

ENTITY NAME Identity Theft Prevention Policy
Information Security Policy No. 5.4

Objective of this Policy: The objective of this Policy is to provide assurance that neither ENTITY’s patients nor ENTITY are harmed by ENTITY’s receipt, creation, use, processing or disclosure of false or inaccurate personal information, including but not limited to protected health information as defined by the Health Insurance Portability and Accountability Act of 1996 and its implementing regulations ("HIPAA").

This Policy is intended to help accomplish these objectives by providing guidance to ENTITY’s Workforce and Contractors, so that they will be able to:

  • Recognize events or circumstances which may indicate that identity theft is occurring or has occurred;
  • Know how to report possible identity theft;
  • Know who is responsible for and authorized to respond to possible identity theft; and
  • Know the procedures which should be followed in responding to possible identity theft.

Recognizing Identity Theft: All members of ENTITY’s Workforce and Contractors are responsible for knowing how to identify possible identity theft affecting an ENTITY patient.

Identity theft is the inappropriate or unauthorized misrepresentation of personal information for the purpose of obtaining access to property or services. Identity theft is often committed in order to obtain credit to purchase consumer goods, but may also be committed to obtain medical care, drugs and supplies, or payment for care, services or supplies. Identity theft may result in false or inaccurate information becoming included in medical and billing records, and other patient records, and provided to third parties who may rely upon it in making diagnostic, treatment, credit and other important decisions.

The following are examples of facts and circumstances which may indicate identity theft. These are only examples, and many other facts or circumstances may be identity theft indicators.
  • Identification documents which appear to have been altered or forged.
  • The patient cannot provide documentation of identifying information, such as a health insurance card.
  • The patient provides inconsistent identifying information, such as a Social Security Number in a range which does not correlate with the reported birth date.
  • The Social Security Number or other identification or account number provided is already identified with another patient.
  • The patient’s medical history, physical appearance or diagnosis is inconsistent with the same information in the medical records.
  • A report by the patient or insurance company that coverage for the provision of legitimate products or services has been denied because insurance benefits have been depleted or a lifetime cap has been reached, which is inconsistent with known coverage.
  • A patient inquires or complains about inappropriate billing or notices, such as:
      • A bill for another individual, for services the patient denies receiving, or from a health care provider the patient denies receiving services from.
      • An explanation of benefits or other insurance notification for products or services the patient denies receiving.
      • A collection notice or credit report for a debt for products or services the patient denies receiving, or from a health care provider the patient denies receiving services from.
  • The repeated return of mail sent to the patient’s address of record as undeliverable, while products or services continue to be provided to the patient.
  • Notification by the patient, an individual claiming to be a victim of identity theft, any law enforcement agency, or any other person that an account or record has been opened or created fraudulently by ENTITY.
  • The receipt of identification information associated with known fraudulent activity.
Reporting and Responding to Potential Identity Theft: All members of the Workforce and Contractors are required to report possible or suspected identity theft when they obtain information or observe activities or records which reasonably seem to indicate its occurrence. The [Chief Information Security Officer/Compliance Officer/Security Consultant] shall provide forms for such reports. Reports may also be made to the [COMPLIANCE HOTLINE].

Each [BUSINESS UNIT] shall establish written procedures for reporting and initial investigation of potential identity theft, including identification of accountable investigative staff, expected investigative activities, and expected initial investigation response times. The results of each initial investigation shall be documented in writing. Reports and investigation results documentation shall be retained by the [BUSINESS UNIT] for one year. The [Legal Department/Chief Information Security Officer/Compliance Officer] shall review such documentation annually for internal reporting purposes.

In the event an initial investigation determines that there is a reasonable possibility of identity theft, the [BUSINESS UNIT HEAD] shall promptly report that finding to the [Legal Department/Chief Information Security Officer/Compliance Officer]. The [Legal Department/Chief Information Security Officer/Compliance Officer] shall document any such report and promptly initiate further investigative action. The results of any such investigation shall be documented in writing and retained by the [Legal Department/Chief Information Security Officer/Compliance Officer] for at least one year, and such reports shall be reviewed annually for internal reporting purposes.

Any identity theft confirmed by the [Legal Department/Chief Information Security Officer/Compliance Officer] shall be treated as a Security Incident, subject to the Security Incident Response Policy.

WHERE ANY INDIVIDUAL HAS REASON TO BELIEVE THAT POSSIBLE IDENTITY THEFT ACTIVITY HAS RESULTED IN THE RECEIPT, CREATION OR DISCLOSURE OF FALSE OR INACCURATE INFORMATION WHICH MAY BE USED IN CARE OR TREATMENT DECISIONS POTENTIALLY AFFECTING PATIENT HEALTH OR SAFETY, THE POTENTIAL IDENTITY THEFT SHALL BE REPORTED IMMEDIATELY TO THE [APPROPRIATE OFFICER].

Monday, March 2, 2009

I'm now blogging at HITRUST Central

You probably noticed I haven't been keeping this blog up very well. Fact is, I was hoping for some more conversation! So, I've decided to join the blogging at HITRUST Central, a community website maintained by the Health Information Trust Alliance - click the link and look for my blog, if you want.

I will likely post every now and again here, on topics not appropriate for HITRUST Central.

Thanks!

Monday, April 28, 2008

How Do You Amend Your Terms of Use? Hint: Don't Hide the Ball.

I've just caught up to a 9th Circuit case on the adequacy of notification of changes to website terms of use, for purposes of binding users. I'm a bit embarrassed as it's a few months old, but in my defense the holding is rather buried and its implications probably weren't clear to the folks doing the indexing.

The case, *Douglas v. U.S. District Court*, 495 F.3d 1062 (9th Cir. 2007), involved a change to terms of use for an AOL subsidiary providing long-distance telephone services. The subsidiary was acquired by Talk America, which changed the existing terms of use to add a number of new clauses, including a new arbitration provision and a waiver of the right to class actions. The plaintiff, a California resident, filed a class action on a number of claims arising from the change in terms. The defendant moved to compel arbitration based on the new arbitration provision. The District Court granted arbitration and the plaintiff appealed.

On appeal the 9th Circuit held the arbitration clause was not enforceable, as a matter of:

(1) General contract law, because Talk America did not provide adequate notice of the changes to the plaintiff: "Nor would a party know when to check the website for possible changes to the contract terms without being notified that the contract has been changed and how. Douglas would have had to check the contract every day for possible changes. Without notice, an examination would be fairly cumbersome, as Douglas would have had to compare every word of the posted contract with his existing contract in order to detect whether it had changed."

(2) California law, under which "a contract can be procedurally unconscionable [i.e., unenforceable] if service provider has overwhelming bargaining power and presents a 'take-it-or-leave-it' contract to a customer — even if the customer has a meaningful choice as to service providers."

I won't say that this is necessarily a dramatic change to existing law, because "existing law" in this area hasn't exactly been clear. The case doesn't analyze existing caselaw on "clickwrap" or "shrinkwrap" agreements, although this is clearly what it deals with. It doesn't discuss the possible distinctions between consumer transactions, which apparently were at stake here, and B2B or other sophisticated party transactions.

What I would say is that this is a caution-and-warning for management of terms of use, privacy notices, website licensing and the like: If you don't push out amendments, they may not be binding in court. Have a look at how you manage these changes. (What, you don't already?)

I would also say that this indicates the importance of jurisdictional issues: Don't assume there are "general principles" which apply to your online agreements (whether you call them terms of use, privacy notices, licenses or whatever); assume rather that the laws of the most draconian jurisdiction where you do business apply. (Again, you don't?)

And finally, beware of court opinions coming out of left field. This was an important decision, which evidently hadn't been adequately briefed on electronic commerce issues, and bubbled up out of arbitration caselaw. Those who like sausages, perhaps, shouldn't watch the laws being made . . .

And my guess is that due to the *apparently* narrow nature of the holding (re arbitration) this isn't a candidate for S.Ct. review.

Monday, April 21, 2008

I Seem to be a Spime: Why Nobody Wants EHRs and PHRs

How's that for an obscure subject line? Please bear with me; I will explain. And if you, like me, have been trying to figure out how to implement electronic health records (EHRs) and personal health records (PHRs) in the face of seemingly unrelenting foot-dragging and friction, you might even find this worth reading.

First off, we need to distinguish EHRs and PHRs from EMRs (electronic medical records). While there is some confusion and overlap in use of these terms, in practical terms an EMR is usually the electronic equivalent of the traditional paper medical record. An EMR, especially in larger organizations, is not a simple electronic "flat file" transformation of the paper record into something like a Word or Excel document, but is a system made up of various applications and databases which store and process patient data.

More to the point, however it is structured an EMR performs well-known, well established and valued functions for a specific entity. Like a paper medical record, an EMR is created, owned, and maintained by a specific healthcare provider (hospital, clinic, physician), and serves the mission-critical business purposes of that provider (patient care support, legal risk management, etc.). The provider does have legal obligations to provide medical record information to patients and other treating providers, but the core concept is that the EMR provides necessary business and compliance support to the entity which pays for it and assumes the burdens of maintaining it.

Terminology in this area is fluid and sometimes the same kind of system, under the same management and serving the same purposes, is called an EHR. This seems to be the core concept in the Stark and antikickback exceptions, for example, though that's not altogether clear. So sometimes an EHR is an EMR. But not always.

EHRs are also (conceptually and some time in the future, if not typically in current practice) something more like a community resource for healthcare organizations and patients, including medical record and other relevant information on patients from all providers (and other healthcare organizations), up to and ideally including a lifelong record of all diagnoses and care from all providers. Sometimes this kind of EHR is said to be owned by the patient, and in this direction the EHR sometimes becomes indistinguishable from the PHR.

The necessary technologies to implement this kind of system already exist (albeit with many, many issues to be resolved on matters such as interoperation, etc., etc.), and the concept is pretty clear: Each of us will some day have a comprehensive, searchable, online electronic record which documents our health history and status. Our physical presence will be complemented by and linked to an online information service, which is used to guide highly customized decisions about our care - the kinds of products and services we might need or benefit from, in a medical sense.

In other words, we'll all become spimes.

Science fiction writer-turned-design critic Bruce Sterling (one of the few people who has a job I want more than the one I have) coined the term "spime" to mean a sort of hybrid manufactured/network object: "[Spimes] are precisely located in space and time. They have histories. They are recorded, tracked, inventoried, and always associated with a story. . . . Spimes have identities, they are protagonists of a documented process. . . . They are searchable, like Google. You can think of spimes as being auto-Googling objects."

I'm not sure what, exactly, auto-Googling might be, but here's Sterling's scenario for how you might encounter a spime, which should give some sense of how the concept applies to EHRs and PHRs:

"You buy a spime with a credit card. Your account info is embedded in the transaction, including a special email address set up for your spimes. After the purchase, a link is sent to you with customer support, relevant product data, history of ownership, geographies, manufacturing origins, ingredients, recipes for customization, and bluebook value. The spime is able to update its data in your database (via radio-frequency ID), to inform you of required service calls, with appropriate links to service centers. . . . "

This same notion can be extended from manufactured objects to biological entities with the greatest of ease. Consider what an ideal EHR/PHR could do and include: Demographic information, details of your physical and mental conditions, medical history, identification and contact information for your healthcare providers and health care payors, details of your available health benefits (and credit history?) - all updated to provide alerts for "service calls" you should make to physicians and pharmacies, accompanied by useful "customization" information about treatments for your specific conditions. Add some family history, maybe some DNA analysis. For some subjects maybe you even do add physical location information, via embedded RFIDs for institutionalized or incompetent patients, prisoners, military personnel, or in hospital patient wristbands, etc. . . .

With the implementation of this kind of system you will be recorded, tracked, inventoried and associated with the story of your own health history - you are the protagonist of the documented processes of your interactions with all the various aspects of the health system you've encountered in your life. So as far as I can tell this would make you, well, a biological spime - a hybrid biological/network object.

Sounds scary and pretty complex, no? Yes, actually, and the complexity may be what keeps it from getting too scary too fast.

Sterling knows that even manufactured spimes take a lot of work, and distinguishes between those who produce spimes (manufacture the objects and create the network space for them) and spime "wranglers," a "class of people willing to hassle with spimes." Spime wranglers have an interesting relationship to spime-producers: They do a lot of the customization work for them, and provide them with feedback about how useful and valuable the spime is.

In the PHR world, then, you might consider Microsoft and Google as spime producers: They enable the network spaces necessary for information to be loaded and maintained, and for interaction with the applications needed for data processing for customized services, etc. But (as Sterling foresees), loading and maintaining this network resource is going to be a *lot* of work. Who's going to wrangle your PHR?

Do I wrangle my own PHR as part of me-as-a-spime? This might be an empowering opportunity for those with the time, training, experience and skills needed to successfully manage their PHR, but it will be an enormous pain in the hind end for those who don't. Most people these days take their health care as something produced by others, over which they have little control, and which they can only choose to consume (or not). (Interestingly, in the perspective Sterling brings, this puts most of us a couple of historical steps back from being able to manage ourselves as spimes.)

Does my physician wrangle my EHR as part of me-as-a-spime? I'm not sure why she would. She's already overburdened by running her practice and providing care to my physical self. As to having staff do it, they're busy too, and she doesn't really have an incentive to pay for and assume the burden of creating a network object which is a community resource: Her practice gets a small part of the benefit in return for assuming all the costs? While it may be true that everyone, including her practice, will be better off once the entire community resource (i.e., EHRs for everyone) is built, until that happens those who do assume the spime-wrangling burden are at a competitive disadvantage compared to those who don't.

The same argument seems to apply to hospitals, health systems, health plans and so on: Each possible candidate is at a competitive disadvantage if it assumes the very real burdens of EHR-wrangling. Unlike the EMR, there's no mission-critical purpose served for any given organization by having this kind of community/network resource available. It may enable the organization to provide better care and maybe even get it some savings, some day, but as long as the EHR/PHR is not the market norm, no market participant is disadvantaged by its absence.

What I take away from all this is that until we reach a point at which some critical mass of EHR/PHR spimes is established (and positive network effects take over), somebody is going to have to compensate the spime wranglers. This, of course, is what physicians say whenever the topic of EHR implementation is brought up - and the point is, they're not wrong. Implementing an EMR for their own purposes is a lot of work; implementing and maintaining an EHR as a common resource for the general good is even more, and for less relative benefit (for them). And while some consumers might really like wrangling their own spimes, or find it genuinely beneficial (e.g. for managing chronic conditions), most probably won't find it as rewarding as doing many of the other things you can do on the Internet.

Until we can find a way to make the incentives for EHR/PHR wrangling outweigh the burdens, I'm not sure how the ideal EHR/PHR system gets built (using only market factors). EMRs, sure; EHRs which are essentially EMRs, sure. EHRs/PHRs which are a community resource through which each one of us is a biological spime, I'm not so sure.

Sterling's talk about spimes is at: http://www.boingboing.net/images/blobjects.htm

Bonus aging hippie/geek points if you recognized the source of the subject line as Buckminster Fuller!

Saturday, April 28, 2007

Data Havens and UFO Hacking

This probably dates me but is too good to pass up. The once-upon-a-time data haven of Sealand has offered political asylum to the guy who hacked into NASA and DoD computers in search of UFO information.

From Personal Computer World (UK)

North Sea 'state' offers McKinnon asylum

Sealand may not be enough to save 'most prolific hacker' from extradition
Emil Larsen, Personal Computer World 26 Apr 2007

Gary McKinnon, who faces extradition to the US for allegedly hacking into military computers, has been offered asylum by the self-styled breakaway state of Sealand, it was claimed at the Infosec security conference today.

The "state", a World War II fort known as Roughs Tower in the North Sea just north of the Thames, was declared an independent principality in 1967 by a former major called Paddy Roy Bates. He dubbed himself Prince Roy.

McKinnon sat on a 'hackers panel' at Infosec to debate new changes to the Computer Misuse Act. The claim about Sealand was made by one of his fellow panellists, a "security analyst" identified only as Mark.

McKinnon, described by American prosecutors as the most prolific hacker of all time, spoke only twice, first to introduce himself and then when asked if companies often overstate the value of damage done by hackers.

McKinnon said they did. He added that the US could only have extradited him from the UK if it could show the offence was "worth a year in prison in both countries".

He added that to merit that sentence the damage had to amount to $5,000. The damage he was accused of causing came to exactly that, so the US military was "obviously not shopping in PC World".

McKinnon's lawyers have said they plan an appeal to the House of Lords against Home Secretary John Reid's granting of a US request to extradite McKinnon.