Learning from HealthCare.gov’s Botched Launch: History Repeats Itself and Lessons Remain Unlearned



By Matthew Ainsworth and Jeff Dell

The botched HealthCare.gov launch has faded from the daily news cycle, as stories inevitably do when fresher fare emerges. If history is any indication, the analysis and hand-wringing that generated so much coverage will amount to little more than a prelude to the next spectacular website or call center failure. The technology industry has shown no sign of learning from its previous flops, so why would this time be any different?

What they haven’t learned – and this applies to both government and private-sector IT operations – is the value of testing websites and call centers under real-world conditions before they go live. Don’t believe it? Think testing is too basic for anyone to ignore? Then dial back to October 2005, when the federal government launched the Medicare Part D prescription drug program. It was a smaller-scale fiasco than the ACA rollout, but otherwise the two were depressingly similar.

The Medicare site, meant to help seniors pick benefit plans, was supposed to debut October 13, 2005. It didn’t go live until weeks later in November. Even then, “the tool itself appeared to be in need of fixing,” the Washington Post reported at the time. Visitors could not access the benefit enrollment website for the first two hours it was supposed to be available. When it finally went online at 5:00 p.m., it performed poorly. Once enrollment began, seniors had problems finding necessary information.

Things weren’t much better on the call center front. The call centers provided by the Centers for Medicare and Medicaid Services underestimated call volumes and under-trained their service representatives.

So what’s the lesson, besides “history repeats itself”? Or perhaps “the government doesn’t learn from its mistakes”? Both, sure, but in a larger sense, the ACA website and call center failures were just business as usual in the tech industry, whether you’re talking about government or the private sector.

Political dimensions aside, there’s a persistent curiosity about how such a high-profile project could have failed so spectacularly. It was possibly the world’s most important IT project to date, yet it performed as if it were rolled out the door without so much as a cursory kick of the tires. That’s because it probably was – and that’s far from unusual.

A recent survey by LinkedIn and Empirix found that at most companies and public agencies, full pre-deployment testing is rare or nonexistent. When IT organizations do test, they don’t test the system the way customers will actually interact with it. They test components – Web interfaces, fulfillment systems, interactive voice response (IVR) systems, and call routing systems – but not the system as a whole under real-world loads.
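
To make that distinction concrete, here is a minimal sketch – not Empirix’s tooling or any agency’s actual test plan – of the difference between checking a single component and driving the whole customer journey under concurrent load. The enrollment site URL and the journey steps are hypothetical placeholders; a real test would substitute the system’s actual endpoints and expected launch-day traffic.

```python
# Minimal sketch: component check vs. end-to-end load test.
# The base URL and paths below are invented for illustration only.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor
import urllib.request

BASE_URL = "https://enrollment.example.gov"  # hypothetical enrollment site


def component_test():
    """Component-level check: one request to one endpoint, off-peak style."""
    with urllib.request.urlopen(f"{BASE_URL}/plans", timeout=10) as resp:
        return resp.status == 200


def customer_journey():
    """One end-to-end journey a real visitor would follow: browse, search, enroll."""
    start = time.monotonic()
    try:
        for path in ("/", "/plans?zip=20001", "/enroll"):
            with urllib.request.urlopen(BASE_URL + path, timeout=10) as resp:
                resp.read()
        return time.monotonic() - start, None
    except Exception as exc:  # timeouts, 5xx responses, dropped connections
        return time.monotonic() - start, exc


def load_test(concurrent_visitors=500):
    """Run many journeys at once to approximate launch-day traffic."""
    with ThreadPoolExecutor(max_workers=concurrent_visitors) as pool:
        results = list(pool.map(lambda _: customer_journey(),
                                range(concurrent_visitors)))
    latencies = [elapsed for elapsed, err in results if err is None]
    failures = sum(1 for _, err in results if err is not None)
    print(f"failed journeys: {failures}/{len(results)}")
    if latencies:
        print(f"median journey time: {statistics.median(latencies):.2f}s")


if __name__ == "__main__":
    # A single component check can pass while the full journey collapses under load.
    try:
        print("component check passed:", component_test())
    except Exception as exc:
        print("component check failed:", exc)
    load_test(concurrent_visitors=500)
```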

The survey of more than 1,000 executives and managers in a variety of industries uncovered contradictory results. On the surface, pre-deployment testing rates are high: 80 percent or better. The deeper picture, however, isn’t as rosy. More than 80 percent of respondents said their companies do not test contact center technology under real-world conditions before going live. They do some form of testing, but it doesn’t mirror how customers will actually interact with the system.

Sixty-two percent use comparatively inaccurate manual testing methods. While this is better than not testing at all, manual testing does not reflect real-world conditions. Manual tests are so resource-intensive that they often occur only during off-peak times, and usually only once or twice. That makes it harder to pinpoint problems – and ensure they are resolved – even if they are detected pre-deployment.

Voice quality can be a significant drag on short- and long-term call center ROI. Contact center agents who must ask customers to repeat themselves because of poor voice connections – or worse, ask customers to hang up and call in again – are less productive than those who can hear customers clearly. In the long term, repetition and multiple calls erode customer satisfaction levels.

The majority of professionals who responded to the survey (68 percent) reported that their companies never monitor contact center voice quality. Only 14 percent continuously monitor voice quality, while another 17 percent monitor it only periodically, on a daily, weekly, or monthly basis.

With misdirected efforts and testing rates like these, it’s no surprise when a major initiative like online healthcare enrollment goes off the rails or callers to a contact center are funneled down a blind alley in the IVR system.

This kind of neglect isn’t necessary when end-to-end monitoring solutions offer deep visibility into complex customer service technology environments. Businesses and government agencies have the means to quickly and economically pinpoint the source of a problem and fix it before customers ever notice the glitch.

Or they can let history repeat itself again.

[From the April/May 2014 issue of AnswerStat magazine]