Tuesday, November 24, 2009

Upcoming HIPAA Business Associate Presentations

I will be presenting as follows:

December 4 audioconference, HIPAA Ethics Update: Lawyers in the Compliance Crosshairs

December 15 audioconference with Alan Goldberg and Alan Freivogel, Attorney Business Associates: February 17, 2010 Is Almost Here – Are You Ready Now?

Requesting Copies of Presentations and Articles

I've had a couple of folks mention an interest in copies of presentations or articles I've done. I'm often happy to make them available, but I don't really have a way to get them to you if I don't have your email address. Understandably, nobody wants (or should want) to leave their email address in blog comments, so if you also don't publish one as part of your profile and would like to request something, here's a suggestion: Go to the Christiansen IT Law website and email me via the "Email John" button.

Thanks!

Friday, November 20, 2009

How to Eliminate the Barriers to Health Information Exchange

I know how to eliminate the principal barriers to health information exchange (HIE): A clear code of safety standards and insurance.

The real barriers aren’t technical anymore. We do have challenges with electronic health record (EHR) interoperability and in some other areas, and they are not trivial. But a lot of work is going into standards and other requirements to achieve interoperability, and in any case this is more a question of data standards than data exchange. These problems of information content are not barriers to the technical exchange – the transmission and receipt – of information.

The real barrier to HIE is risk aversion: Health care providers, in particular, are often reluctant to buy EHRs and participate in HIE because they fear they will be held liable if the information they hold, transmit and receive goes astray or is misused. This risk aversion is usually expressed as a complaint about the lack of clear legal standards. I’ve been through seemingly endless analyses of federal and state laws potentially applicable to HIE, trying to reconcile them and find a way to assure clients that they can do HIE safely, or at least to roughly quantify the legal exposures associated with it. This is a difficult task because the laws are neither written nor organized in ways that tell you the rules for legally compliant HIE – that is, they don’t describe how a provider can conduct HIE with at least a reasonable assurance of avoiding legal liability.

The situation is rather as if we had built our existing road system backwards, starting with superhighways and then asking people used to horses and buggies to start driving on it in Corvettes and 18-wheelers. Worse, it’s as if we had partially built our interstate highway system, but hadn’t bothered to figure out things like stop and yield signs, and what speed limits are safe for curves and hills. Drivers who aren’t particularly risk-averse – they don’t recognize the risks, or don’t care about them much – might happily hop into their Corvettes or big rigs and start cruising. After a few crashes, maybe we begin to learn that we need some kinds of road signs and some speed limits, and start putting them up. We might even decide that driver’s education is a good idea, and that drunk driving is to be seriously discouraged.

Over time we’d evolve safety standards for our superhighway. We’d probably put up some useful signs, and they would get more useful over time. Curves where a lot of crashes occur would probably get straightened out, and drivers would learn how to handle their vehicles better. But during the evolution of these safety standards, a lot of prospective drivers would probably figure, I’ll stick with my horse and the back roads until they work out the bugs.

For the truly risk-averse, even a well-designed superhighway with good signage and licensed drivers might still be too daunting. Driving is an inherently risky business, even if you have good safety standards and are diligent about their enforcement; road conditions can vary, even good drivers are sometimes negligent, and unanticipated conditions can crop up. Accidents and intentional malfeasance happen, and the only way to avoid the risk altogether is by avoiding the highway.

This is why every state requires all drivers to have insurance: To pay the costs of the statistically inevitable harmful incidents associated with driving. This includes the costs of repairs to your vehicle – and to you yourself – as well as coverage for harm to third parties. The system is no-fault in the sense that coverage does not depend on who is or may have been at fault, so drivers and third parties don’t have to worry about payments being delayed while insurers squabble over who has to pay what. Of course, the system isn’t perfect, and insurers still do dispute fault, but at that point it’s really about how the insurers split coverage, not about whether coverage exists. And, usefully for the determination of such disputes, safety standards help decide who, if anyone, was actually at fault.

Safety standards and insurance will not work for the extremely risk-averse, of course. For some, the advantages of swift movement from place to place will not outweigh their fear of a crash – or of the unknown – and they will want to stick with their horse and buggy. But clear safety standards and insurance are likely to be enough to overcome risk aversion in most individuals.

So how would this work for HIE? Well, we’ve already built an Information Superhighway (thank you for the metaphor, Al Gore) – the Internet – which, frankly, does not have a lot of built-in safety features. So we need to come up with standards for its safe usage for HIE (which could and probably should apply to proprietary networks used for HIE too, of course). These standards need to be clear enough to translate into policies and procedures healthcare organizations can understand, implement and explain to users. Users need to be trained in these standards – and maybe we should consider whether users should be qualified in some way as a condition to engaging in HIE. (They already should be by any organization which authorizes them to participate in HIE on its behalf, but perhaps we need broader requirements.)

Standards need to be enforced, and we need mechanisms for learning from accidents, mistakes and deliberate malfeasance. At the same time, organizations need assurance that if they comply with standards they will be safe against penalties and damages – that they will be considered compliant with the law, and with applicable standards of care. Safe harbors and standards maintenance and evolution will be essential.

We should also look into insurance to cover the statistically inevitable. Part of this is coverage for the organizations themselves, for matters such as incident response, breach notification, and remediation. But the really valuable insurance would be against third-party harms – harm to individuals whose personal health information is misused or improperly disclosed in the course of HIE (or EHR usage).

This kind of insurance will take some work to develop. We already have insurance available for misuse of personal financial information. The most commonly known form covers credit monitoring and, in some cases, correction of misinformation, but this kind of insurance is in fact less important than the “hidden” insurance provided by payment card issuers’ guarantees to consumers against credit card fraud. This risk transfer in fact enabled electronic commerce in general, by limiting consumers’ exposure to fraudulently created debts to fifty dollars (and usually not even that). Even risk-averse consumers could, and did, use the Information Superhighway to start buying online, because the issuers assumed their risks of doing so. Of course, over time even massively well-capitalized companies like the payment card issuers want to limit their exposure, and they have in turn started requiring vendors that accept payment cards to implement specific security requirements – in effect, private safety standards for ecommerce, not an uncommon role for insurers to play.

So, I have the solution for the principal barrier to HIE: We need clear safety standards, and we need insurance.

Now all we need is a good standards body with clear legal standing, and a well-capitalized organization to fund coverage . . .

(Thanks to Peter Winn for the conversation and Kirk Bailey for the tilting at the windmills which inspired this piece.)

Saturday, November 14, 2009

More HIPAA/HITECH and Joint IT Environments: Multiple Account Access

I've had some interesting follow-up from my previous posting about HIPAA/HITECH and cloud computing. One question was about my statement that users authorized by one Covered Entity whose Protected Health Information and applications are hosted in a joint IT environment shouldn't have access to the Protected Health Information and applications of other Covered Entities hosted by the same services provider. A question of particular interest was whether that statement also applied if the access was for purposes of treatment, payment or healthcare operations, or if the two Covered Entities were part of an organized health care arrangement (OHCA).

This is an issue I’ve gone around several times with ASPs and RHIOs, and I don’t think it would be either prudent or permitted by HIPAA for different Covered Entities to simply “leave the door open” for external users to access data. “Access” from the user’s side is “disclosure” from the Covered Entity's side (the HIPAA/HITECH definition of “disclosure” specifically includes “access to information outside the entity holding the information”). So the Covered Entity needs to comply with HIPAA’s disclosure requirements in allowing users from different Covered Entities to access its Protected Health Information.

If the external user is a provider who needs access for treatment purposes, patient authorization is not needed and the minimum necessary rule does not apply. If the external user needs access for payment or health care operations, patient authorization is still not needed, but the minimum necessary rule does apply. That’s all good as far as it goes; you can structure policies and controls to make the appropriate information available (though minimum necessary compliance may be more of a challenge than it appears), but it raises the question: how does the Covered Entity know the purpose of the access, and that the external user is authorized to have it?
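To make the purpose-based distinctions above concrete, here is a minimal sketch of how they might be expressed as access-policy logic. The function name and purpose labels are my own shorthand for illustration, not language from the privacy rule itself:

```python
# Illustrative sketch only: maps the purpose of an external user's access
# request onto the two questions discussed above (is patient authorization
# needed, and does the minimum necessary rule apply?).

def disclosure_rules(purpose: str) -> dict:
    """Return which requirements apply to a disclosure for a given purpose."""
    if purpose == "treatment":
        # Treatment: no patient authorization, and minimum necessary does not apply.
        return {"patient_authorization_required": False,
                "minimum_necessary_applies": False}
    if purpose in ("payment", "health_care_operations"):
        # Payment / operations: no patient authorization, but minimum necessary applies.
        return {"patient_authorization_required": False,
                "minimum_necessary_applies": True}
    # Any other purpose: treat conservatively and require authorization.
    return {"patient_authorization_required": True,
            "minimum_necessary_applies": True}
```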

The security rule requires Covered Entities to have policies and procedures for granting access to ePHI, including documentation, review, and modification of a user’s right of access. Users also have to have unique IDs, and the Covered Entity has to have a method of authenticating the user. The security rule also requires security incident identification and response, while HITECH requires notification procedures for security breaches; in both cases the definition of a trigger event includes unauthorized access.
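As a rough illustration of the kind of per-user documentation those requirements contemplate, here is a minimal sketch; all of the field and function names are hypothetical:

```python
# Hypothetical sketch of a documented access grant: a unique user ID, the
# Covered Entity that authorized the user, the purpose, and review dates.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AccessGrant:
    user_id: str           # unique user identifier
    covered_entity: str    # Covered Entity that authorized this user
    purpose: str           # e.g. "treatment", "payment", "health_care_operations"
    granted_on: date
    last_reviewed: date    # supports periodic review and modification of access

def is_authorized(grant: Optional[AccessGrant], authenticated: bool) -> bool:
    """Allow access only for an authenticated user with a documented grant."""
    if grant is None or not authenticated:
        # No documented grant, or failed authentication: this is the kind of
        # event that should feed incident identification and response.
        return False
    return True
```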

Uncontrolled access would mean that the Covered Entity would be unable to document and review an external user’s right of access, and might not be able to identify and authenticate the user. It certainly wouldn’t be able to identify unauthorized access – i.e., security incidents and breaches. So we’ve got some likely HIPAA violations, and also a lack of controls which is really inappropriate for access to sensitive information as a matter of ordinary prudence. (And I do mean that in the sense that I think it would be actionable negligence.)

When you’ve got an IT environment in which one organization is responsible for user access control – as you can with ASP, SaaS or cloud computing environments – that organization can handle user identification and authentication and access control administration. What it probably can’t do is authorize users to have access to ePHI – it has to be told who is authorized by the Covered Entities, who know the roles of the users, etc.
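A minimal sketch of that division of labor might look like the following, with the hosting organization enforcing – but not deciding – the authorizations that the Covered Entities register with it. The class and method names are purely illustrative:

```python
# Illustrative only: the host administers identification, authentication and
# access control, while each Covered Entity tells the host which of its users
# are authorized for which data sets.

class JointEnvironmentHost:
    def __init__(self):
        # (covered_entity, user_id) -> set of data sets the CE has authorized
        self._authorizations = {}

    def register_authorization(self, covered_entity, user_id, dataset):
        """Called by a Covered Entity: the host records, but does not decide,
        who is authorized."""
        self._authorizations.setdefault((covered_entity, user_id), set()).add(dataset)

    def check_access(self, covered_entity, user_id, dataset, authenticated):
        """The host enforces only what the Covered Entities have authorized."""
        if not authenticated:
            return False
        return dataset in self._authorizations.get((covered_entity, user_id), set())
```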

What you need then is a mechanism to make authorization transitive – to allow Covered Entity A to rely on Covered Entity B’s authorization of an external user, so that user can access ePHI held by A on B’s behalf. This is where it often gets sticky, and RHIOs have fallen apart, because generally speaking A won’t (and probably shouldn’t) accept an authorization from B without B assuming liability for errors and indemnifying A. This can be overcome – I’ve set up a number of arrangements which make it happen – but it can cause serious conflicts between A and B.
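In code-sketch terms (hypothetical names again), transitive authorization might be conditioned on an existing agreement between the two entities, so that A relies on B’s assertions only where the liability and indemnification terms have been worked out:

```python
# Illustrative sketch of transitive authorization between Covered Entities.

class TransitiveAuthorization:
    def __init__(self):
        self._agreements = set()   # (data_holder, asserting_entity) pairs

    def add_agreement(self, data_holder, asserting_entity):
        """Record that data_holder has agreed (typically with indemnification)
        to accept asserting_entity's user authorizations."""
        self._agreements.add((data_holder, asserting_entity))

    def may_access(self, data_holder, asserting_entity, user_authorized_by_b):
        """A relies on B's assertion only where an agreement is in place."""
        return ((data_holder, asserting_entity) in self._agreements
                and user_authorized_by_b)
```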

I don’t think an OHCA helps with this issue. The real issues are under the security rule, which does not work in those terms. But that doesn’t mean that entities within an OHCA can’t agree to manage data access appropriately under the security rule – e.g., a health system might manage identification and authentication functions on the systems it supports, allowing users from associated clinics access to those systems, and in turn the clinics could use the same I&A solution to control access for hospital users accessing theirs. (There are also a few third-party I&A services which can be helpful.)

Friday, November 13, 2009

HITECH/HIPAA Obligations of Cloud Services Providers

Background: HITECH sections 13401 and 13404 now apply certain HIPAA and HITECH security and privacy requirements to business associates (BAs).

Scenario: Company A provides healthcare administrative or electronic health record (EHR) systems through the cloud, or SaaS. Company A is therefore by definition a BA.

Question: Is Company A therefore responsible under HITECH for making sure its covered entity (CE) customers follow any specific policies and procedures for access to the hosted systems? What if the CE wants to do it in a way that violates the HITECH/HIPAA privacy or security rules? Does Company A have any obligation to police its customers?

My Answer:

1. I would characterize cloud services/SaaS as a joint IT environment. This places HIPAA/HITECH obligations on both services provider and customer.

2. One complex part of the answer is that the business associate obligations depend crucially on the terms of the business associate contract (BAC) which HIPAA/HITECH requires these parties to have. This gets into thorny questions I don’t want to address here – for now I would only say that I think you need to draft such contracts very carefully lest you set up regulatory obligations which are neither necessary nor appropriate, and might expose either or both parties to avoidable civil penalties and other liabilities.

3. Apart from BAC obligations, HITECH does create security obligations for BAs with responsibility for joint IT environments. These obligations might well include an obligation to establish safeguards intended to ensure that users associated with one CE do not access services/PHI owned by another CE (see the sketch after this list). In my view, CEs ought already to require this – that is my practice, working both with CEs and with vendors which operate joint IT environments for CEs. If Company A provides services in this way, it would have an obligation to stop – and to some extent prevent – CE user activity affecting other CEs.

4. As to policing CE user activity affecting only services/PHI of the same CE, I don’t think there is a per se answer. The BA might take on some safeguard services, such as user registration, which could put it in a position where it needs to enforce CE policies. If CE policies seemed to violate the privacy rule, that might trigger issues for the BA under the new HITECH termination/snitch provision of 13404(b).
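As referenced in item 3, here is a minimal sketch of the kind of tenant-isolation safeguard a joint-environment BA might be expected to implement; the names and structure are assumptions on my part, not a requirement spelled out in HITECH:

```python
# Illustrative only: in a multi-CE (multi-tenant) environment, a user's
# request is honored only against the CE that user is associated with.

def enforce_tenant_isolation(user_ce: str, resource_ce: str) -> bool:
    """Deny any request that crosses Covered Entity boundaries."""
    if user_ce != resource_ce:
        # Cross-tenant access attempt: block it and flag it for incident review.
        return False
    return True

# Example: a user registered by "CE-A" trying to read a record owned by "CE-B"
# would be denied; access within "CE-A" would be allowed.
assert enforce_tenant_isolation("CE-A", "CE-B") is False
assert enforce_tenant_isolation("CE-A", "CE-A") is True
```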

Conclusion: BA obligations in this area have to be analyzed specifically in terms of the services provided, with an eye to the obligations assumed by the BA and the BA’s ability to be on notice of an improper practice. In an “ordinary” cloud/SaaS model, the BA probably won’t have sufficient information to be able to identify CE violations, and probably wouldn’t want to assume responsibility for doing so. But avoiding this obligation will often require specific functional analyses of the operational model, and careful drafting of the contract.

In other words, don't try this at home, kids.