The mantra "data driven decision making" gets every service provider into a frenzy. They think they can improve quality of service if only they had more data on which to base their internal decisions. They might very well hire external consultants who encourage them in this direction. Yet chasing that data through post-service surveys is a bad practice, and it should be stopped.
I've had this issue on my mind for some time, but it was a recent visit to my healthcare provider, and the follow-up survey it precipitated, that drove me to write this piece. In that case I got a robocall where the caller ID said it was the healthcare provider calling. In the past, I had gotten solicitations to do these post-visit surveys, but by email, which I find less intrusive. (My Inbox is full of solicitations, mainly from sales reps of companies wanting to sell IT services to the university, seemingly unaware that I've been retired for 10 years. Over time, I've learned to ignore such messages.) Indeed, we get a lot of robocalls by phone and mostly I don't pick up. I do answer the phone for the healthcare provider.
I also want to note that given my ed tech administrator experience and my background as an economist who is well aware of the social science issues in administering such surveys, I'm not just blowing smoke here. Indeed, the course evaluation questionnaires (CEQs) that we use in higher education to determine course and instructor quality serve as a model for me in considering the issues in this post. The CEQs are a holdover from an earlier time and are held in low regard by students and instructors alike. The reader should keep that in mind throughout what follows.
Let's work through the reasons for why the post-service-delivery survey is a bad idea.
It disrespects the person who received the service.
This is most obvious when the service is one and done. In the case of the CEQs, the students complete the survey at the end of the semester and won't be taking the course again. How do they benefit from taking the survey? Asking somebody to do something from which the person won't benefit is showing a lack of respect for that person. For the CEQs, if they are administered earlier in the semester and only used within class to make modifications on how the course is taught, then that would be a legitimate use by this criterion. The students could then see how their responses to the questionnaire directly impact instruction, at least if done in a small class. But in a large class, individual responses won't count for much at all, even if the CEQs are administered early in the semester.
For the healthcare provider, the patient gets no information about the pool of other patients who will be given the survey for the same healthcare provider, nor about prior response rates to such surveys, nor about how survey responses have been used in the past to adjust the healthcare provided.
I want to note an argument that can cut the other way, drawn from the overlapping generations model that provides the logic behind Social Security. (See Samuelson's An Exact Consumption-Loan Model.) Completing the survey is like paying a tax. You pay the tax in expectation of a future benefit, when you need the benefit, after you've retired. Likewise, you complete the survey from the healthcare provider under the assumption that your healthcare quality will be improved in the future if all patients complete the survey. Perhaps this is true. However, the analogy breaks down when noting that paying FICA is legally required of all working people. Completing the survey, in contrast, is voluntary. There is a free rider problem involved with completing the survey. One should expect low completion rates as a result. It is conceivable that a sense of social obligation can counter the free rider problem. But let's face it, everyone and their brother is doing surveys of this sort nowadays. There are just too many of them to feel a sense of social obligation regarding completing any particular survey. Given that, asking patients to complete the surveys is an act of disrespect.
The quality of the data collected will be poor.
For most surveys, the response rate is nowhere near 100%. As long as the sampling is random and who participates versus who opts out is also random, the survey statistics have validity (provided the sample is large enough). But there can be systematic reasons why some people participate and others opt out, leading to selection bias. Survey results are far less reliable in that case.
There are two obvious factors to focus on in considering possible selection bias. People with a high value of time and limited leisure time are more likely to opt out. So surveys of this sort end up over-sampling the unemployed and the retired, while under-sampling those who are working full time, as well as those who don't have an internet connection where they live and are unwilling to go somewhere with ample bandwidth just to complete the survey.
The other factor concerns reasons to want to complete the survey and, conversely, reasons not to. As a general rule, those with intense preferences, either for or against, are more likely to complete a survey. Those with mild preferences are more apt to sit it out. One should therefore look at the reasons why a person might have an intense preference after the service has been delivered.
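To make the point concrete, here is a toy simulation, with all the numbers invented for the sketch: a hypothetical patient population whose likelihood of responding rises with the intensity of their preference. The surveyed respondents end up with systematically more extreme satisfaction scores than the population as a whole, which is the selection bias at issue.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 patients with "true" satisfaction
# scores on a 1-5 scale, clipped to that range (all parameters invented).
population = [min(5.0, max(1.0, random.gauss(3.5, 1.0))) for _ in range(10_000)]

def responds(score):
    # Assumed response model: intense preferences (scores far from the
    # neutral midpoint of 3) make a patient much more likely to respond.
    intensity = abs(score - 3.0)
    return random.random() < 0.05 + 0.25 * intensity

respondents = [s for s in population if responds(s)]

true_mean = sum(population) / len(population)
survey_mean = sum(respondents) / len(respondents)

print(f"response rate:          {len(respondents) / len(population):.1%}")
print(f"true mean satisfaction: {true_mean:.2f}")
print(f"surveyed mean:          {survey_mean:.2f}")
```

The gap between the true mean and the surveyed mean depends entirely on the assumed response model, but the qualitative result does not: whenever participation correlates with preference intensity, the respondents are not a random draw from the patient pool.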
From this perspective, routine service provision is apt to generate only a mild preference, although I will give some caveats with that below. Emergency service provision or service provision under dire circumstances is more apt to generate a strong preference from the service recipient.
Even for people who have excellent health insurance, and I count myself as one of those, the business side of healthcare is clunky, at best, and painful, at least some of the time, especially once you've become a senior citizen. I will illustrate with a few different examples.
I turned 65 last January and had a regularly scheduled visit with my primary care doctor, in general, a good guy. I was due for a variety of vaccines/immunizations. Alas, I was also at the cusp where my primary insurance until then was to become my secondary insurance thereafter and Medicare Part B would become my primary insurance. I ended up having one immunization during that visit, but was told to get others at the pharmacy I frequent, Walgreens. Why this makes sense, I don't know. But it is definitely harder to manage having different providers for immunizations. Further, there was no leeway about my birthday with respect to Medicare covering the payment. If I were 64 years and 364 days old, I wouldn't have been eligible for coverage of the vaccines that would be covered the next day. When I went to Walgreens I got both the pneumonia vaccine and the first Shingles vaccine. That was on my birthday. I had pneumonia the previous spring, so was a candidate for that. The Shingles vaccine was for anyone near my age. I might have been a year or two behind on that one. In any event, I had to make two visits to get this done because that's how the system works. Might I get irate at my primary care physician as a consequence? I might, though I didn't that time. The bureaucracy with insurance and prescriptions is a pain, especially regarding trying to renew one prescription because it is time to renew another. The insurance company will block the renewal of the first prescription. If it were narcotics, I would understand. But I've recently experienced this with eye drops. Give me a break.
It's actually worse with non-routine healthcare. Two years ago, I had three different issues. One ended up being a stress fracture in my foot/bad arthritis there. Another was that a compressed disk in my neck was causing pain and muscle spasms in my left arm. The third, and the scariest, is that I was diagnosed with prostate cancer. As a result, I saw a variety of specialists and reached the following conclusions. Diagnosis is an art, not a science, in the sense that the evidence from the diagnosis may entail some ambiguity. How that ambiguity resolves is of some consequence to the patient. For cost effective diagnosis, it makes sense to begin with less expensive tests and then move to more expensive/intrusive tests. Blood tests and x-rays are comparatively low cost procedures. They are the first step in a potential chain of other steps. Scans, such as MRI, are steps further down the chain. Scans are good at identifying "hot spots" but there may still be ambiguity as to the cause of the hot spot. Scans are more expensive than the first steps and typically require approval of the insurance company before they are conducted. I will talk about that more in the next paragraph. Biopsy, when it is not of something on the skin, is more intrusive than a scan, also more localized, and in my experience more precise. But you can't biopsy every ambiguous hot spot that shows up in a scan. When a biopsy yields a positive result, treatment is called for. That much is understood. When a scan gives an ambiguous result, the next step is negotiated between doctor and patient, but it won't involve treatment. It will either be simply to wait or it will entail some other diagnostic.
The doctors who had to deal with insurance company approval for diagnostic procedures they wanted to recommend all seemed angry and intimidated by the prospect that their judgment would be questioned and their recommendation might be overturned by the insurance company. This is an issue with healthcare that is not getting enough attention. I also want to note that specifically for a cancer diagnosis, a patient new to it will have the worst-case possibility preying on their mind. In my case, the worry was about whether the cancer had spread outside the prostate. I became distraught and quite angry when this couldn't be resolved in short order.
The patient doesn't rate the insurance company. Those post-service surveys are only about the visit with the doctor. One might imagine that the type of distress I felt would encourage an extremely negative evaluation of the doctor visit, even when the doctor actually did everything right within his sphere of control. So the survey response would be inaccurate in this case.
Perhaps more importantly, the healthcare provider needs to be aware of the underlying issue. The survey doesn't inform on that issue.
I want to close this section with the following about me specifically. I much prefer my healthcare to emerge from ongoing conversations with my providers. It is the relationship that matters. Each visit either bolsters the relationship, maintains the relationship, or tarnishes it some. I try not to have the business side of healthcare matter to me in how I view these relationships. But, if I opt for a different doctor when the prior doctor is not leaving the healthcare provider, that would be a strong signal that I was dissatisfied with the relationship. I've actually never done that. But I want to observe that senior management of the healthcare provider could be monitoring patient turnover. That would be far more informative than the surveys.
The surveys may negatively impact how care is given.
Let's return briefly to consider instructor evaluation via CEQs. George Kuh coined the expression "Disengagement Compact" to describe the following scenario. For instructors whose CEQ results matter for keeping their jobs and getting salary increases, there is an incentive to manipulate those results. For students who care a lot about grades but not so much about what they might learn in the course, there is an incentive to encourage the instructor to give them high grades. The resulting equilibrium has the instructor teaching to the test, the students performing reasonably well on the tests, and the overall grade distribution quite high. On the CEQs the students indicate they were satisfied with the course. But there has been only surface learning. If the instructor, in contrast, were to seriously challenge the students, there might be deeper learning, but the grades would be poor and the instructor's CEQ ratings would be low.
Might something similar happen with healthcare provision and after-visit surveys? My sense is yes, but it might be a bit more nuanced than described in the previous paragraph. The issue is how the doctor delivers "bad news" to a patient who might not have been expecting it. In the old days we talked about "bedside manner" and treated it purely as a function of the doctor's style. But the doctor may also make an assessment of how much the patient can absorb, in which case the doctor will be more forthcoming with a highly educated patient. Such a patient might appreciate getting the information in a straightforward manner, even if the news isn't good. Less sophisticated patients might respond better near term if the message is sugar coated. It is the patient's behavior after the doctor visit that's at issue. To the extent that this behavior will govern how the condition proceeds thereafter, the doctor's sugar coating of the message might be pernicious. Yet even if the doctor is well aware of this, to the extent that the patient's survey response matters there can be incentive to sugar coat the message. In other words, the same underlying social dynamic exists here as in the case of college instruction.
Wrap Up
Data is not always the answer. And sometimes when there is an attempt to survey people, clearly articulating how the information is meant to help and whether it will help them might very well determine whether they are willing to complete the survey. It is conceivable, now, that individual doctors send their patients surveys after a visit, with the aim that the survey informs their ongoing care. This is called formative assessment and is a sensible thing to do. But it doesn't help third parties evaluate the doctor. Why we need that, I'm not sure. That itself is an indicator that it's not necessary.