In the health care debate, there’s often talk that too much money is “wasted” trying to save people in the last year of their lives. (The statistic that usually gets hauled out here is that patients in their last year of life account for about 28 percent of Medicare spending.) In the New York Times last year, Daniel Altman suggested that major health care savings could be had simply by, well, letting people die earlier:
End-of-life care may also be a useful focus because, in some cases, efforts to prolong life may end up only prolonging suffering. In such cases, reducing pain may be a better use of resources than heroic attempts to save lives.
On the other hand, Max Sawicky recently highlighted a passage from a textbook by economist Jonathan Gruber arguing that—regardless of one’s moral views on the question—this is unrealistic:
[A] common fact cited by analysts to declare our health care system “wasteful” is that 30% of medical spending is on people in the last six months of their lives so that we are “wasting” money on a population that will die anyway. The problem with this argument is that doctors don’t know in advance who is in the last six months of their lives and who might live for many more years. Only a small share of this spending during the last six months of life is for those we know are at the end of life. The rest of the spending may not be wasteful, in that it may have some chance of significantly extending life
That seems like the more sensible view. It is true that some other countries, like Britain, use the notion of “quality-adjusted life years” to ration care at the end of life. So, for instance, if a treatment costs X but is expected to add only Y quality-adjusted life years, and the cost per year gained—X/Y—comes out above some defined limit (in Britain’s NHS, it’s roughly £30,000 per QALY), then the government simply won’t pay for it. (Patients can, however, pay for it with their own money, if they can afford it.)
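To make the arithmetic concrete, here’s a minimal sketch of that threshold rule. It’s purely illustrative: the function name, the single £30,000 cutoff, and the strict pass/fail framing are simplifications of what is in practice a more nuanced appraisal process.

```python
# Toy version of the cost-per-QALY threshold rule described above.
# Illustrative only; real NHS/NICE appraisals weigh much more than this ratio.

def nhs_would_fund(treatment_cost_gbp: float,
                   qalys_gained: float,
                   threshold_per_qaly_gbp: float = 30_000) -> bool:
    """Return True if the cost per quality-adjusted life year is under the threshold."""
    if qalys_gained <= 0:
        return False  # no measurable benefit, so the ratio is effectively infinite
    cost_per_qaly = treatment_cost_gbp / qalys_gained
    return cost_per_qaly <= threshold_per_qaly_gbp

# A £45,000 treatment expected to add half a QALY works out to £90,000 per QALY,
# well over the ~£30,000 cutoff, so it would not be covered under this toy rule.
print(nhs_would_fund(45_000, 0.5))  # False
# The same treatment adding two QALYs comes to £22,500 per QALY and would pass.
print(nhs_would_fund(45_000, 2.0))  # True
```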
Now in “U.S. Health Care Spending in an International Context,” Uwe Reinhardt et al. suggested that the United States was spending an extraordinary amount on health care that yielded very few additional “quality-adjusted life years,” especially since neither Medicaid nor Medicare sets explicit limits here, and noted that policymakers should consider such limits if they wanted to cut costs. Any sort of public debate on this question would obviously be politically divisive—when do you decide that it’s too expensive to save a person’s life for another six months? What sort of cutoff do you set?
And anyway, is this something we should even consider? Perhaps Britain has it wrong. As Gruber points out, it’s not always clear that a given treatment will “only” extend life by a small bit of time. And even if Medicare and Medicaid currently don’t ration care in the way the NHS does, both programs certainly do ration care right now (to take an easy example: many doctors and hospitals that do particular procedures won’t take Medicare patients at all), and it’s not obvious that further rationing by either program would reduce spending as much as people think.
It also seems like a bad idea to try to cut health care costs by cutting care itself—we are obviously getting something for all of these new health care technologies, and for a society that keeps getting richer, that seems like something worth paying for. At any rate, there are scores of other inefficiencies in the U.S. health care system—including costly monopolies enjoyed by the AMA and the drug industry, as well as excessive administrative costs—that ought to be addressed long before anyone starts talking about further reductions in care.