Budget Office Needs to Stop Hiding its Methodology on ObamaCare Analysis

Jul 26, 2017
COMMENTARY BY

Former Research Fellow, Center for Data Analysis

Drew focused his research and writing on the nation’s new health care law, including the repercussions for Medicare and Medicaid.

On Wednesday, the Congressional Budget Office released a score for the House-passed version of the American Health Care Act. While there was new discussion about how the CBO would evaluate changes to insurance market regulations, this score differed very little from the original one.

As interest grows in what the CBO is and the kind of projections it produces, it's worth reminding people that this work is difficult and highly uncertain, especially in light of past health care analyses. The CBO has had to update its original ObamaCare scores every year to bring them in line with reality. In a Harvard Business School Q&A, former CBO director Doug Elmendorf even acknowledged that the original scoring wasn't entirely correct. Readers of the CBO's most recent AHCA score should keep this history in mind.

This isn't to say the CBO is dishonest, inept, or completely unreliable. In fact, the CBO does large swaths of valuable work on tight deadlines every year, work that helps inform debate around various pieces of legislation. For health care, however, there is a deeper issue with these scores: what the CBO is willing to share with the public about its modeling practices and assumptions. The most recent AHCA score and the subsequent discussion of its key differences from previous scores highlight this problem almost perfectly.

One of the key differences in the newest version of the AHCA was an amendment allowing states to waive essential health benefits and community rating. Modeling the effect of a state-specific waiver is a difficult task, and it's clear the CBO made an effort to do so.

The CBO evaluates these effects in the score across three groups of states. One group makes no changes to the regulations and pursues no waivers. Another group pursues some waivers. The smallest, final group makes large regulatory changes or pursues aggressive waivers. Using a historical approach, states (and with them shares of the U.S. population) are assigned to these groups through a process the CBO describes. That description, however, is incomplete.

For one, the score never answers the obvious question of which states would pursue which waivers. Unless the estimate is a compilation of many scenarios rather than a point estimate, this crucial assumption could easily have been disclosed. When modeling ObamaCare, the CBO made a similar generalization in its assumptions about which states would expand their Medicaid programs in the future. Those assumptions continue to inform the CBO's baselines.

Second, the nature of the regulatory changes in each category is unclear. For example, for the moderate category, the CBO states, "Although the changes to regulations affecting community rating would be limited, the extent of the changes in the EHBs would vary widely; estimated reductions in average premiums range from 10 percent to 30 percent in different areas of the country." What specifically is meant by limited changes, and where exactly do the wide variations occur? These are questions that could simply have been answered here.

Third, in the cases where states drastically reform regulations, premium reductions are described only as "significant." Once again, if the modeling of this effect was actually conducted, it is curious to characterize the effect merely as "significant" without providing even illustrative premium estimates.

Finally, the CBO went further, saying that premium reductions could lead to increased insurance take-up. But it did not expand on the positive effects of lower costs anywhere else in the document, and the point did not influence the estimates in the score. While it is clearly harder to score the benefits of competition than those of price controls and caps, even a discussion of market forces would provide context for the CBO's statements. The CBO has done valuable work on this question in Medicare, and it would be well served to do more when evaluating all health care proposals.

For anyone evaluating a score of legislation, there should be no room for guessing games about methodology. Yes, the CBO operates on tight deadlines, but readers of its documents still need to be able to understand the underlying assumptions. Ultimately, there is no excuse for major assumptions behind a score to be obscured by general statements.

The CBO could better serve legislators, the media, and researchers if its models and methods were made public. Lifting this veil would allow more discussion of the effects of various proposals without having to wait for an explicit CBO score. It would also allow the CBO to improve, as outside researchers could evaluate its models and offer input to strengthen its modeling capabilities. Finally, and perhaps most important, it would let legislators have real conversations about the effects of their legislation, publicly and with less delay. This is not a radical idea: many taxpayer-funded models are already available to researchers for collaboration, research, and knowledge sharing.

Evaluating large pieces of healthcare legislation is a difficult task, and there is little question that the CBO makes its best effort given the time and other constraints. However, the CBO does few favors for itself when methodologies and models are opaque or simply unavailable.

This piece originally appeared in The Hill on 5/26/17
