While the subject of provider directory accuracy has received a great deal of attention, much less has been written about the user experience (UX) of provider directories. As students of UX know, good design starts from a deep and empathetic understanding of the user’s goals and mindset; in the design process these are captured through user personas and user stories, respectively. Well-designed software, and more broadly well-designed services, make the path for the user to arrive at their goal as simple as possible. Yet provider directories are currently failing at serving one of consumers’ most important use cases.
The AMA’s recent whitepaper on provider directory issues touched on consumer goals, noting that healthcare consumers often start their “customer journey” by looking for a doctor using search engines like Google or Bing, within a site that has information about doctors such as Physician Compare, or offline via a recommendation from a friend. Once the user has the name of a provider of interest, they want a simple “Yes” or “No” answer to the question: “Does this provider accept my insurance plan?” Getting the right answer to this question is pivotal, and while “yes” is desirable, the most consequential answer for a user is in fact a clear and correct “no,” since having this information allows the consumer to avoid hefty charges for out-of-network care.
Unfortunately, most provider directories are built around “find a doctor who is covered” rather than “see if this doctor is covered.” That design in turn goes back to the original form of provider directories: printed lists of the doctors covered in each network. It makes it very difficult for a user to get clarity on whether the provider they’re interested in is truly covered, even when the underlying data is completely accurate (which it most often is not). In the remainder of this article we’ll examine these design problems and how they are compounded by data accuracy problems.
Consistent with their selected use case, provider directories are not designed to produce a response of “no, this provider is not covered”; rather they respond with “no results found” for the specific search the consumer executed. But “no results found” can easily occur for reasons other than non-coverage of the doctor, leaving confusion as to *why* no results were found, and making it very difficult to obtain a clear “no” or find a “yes” that might not fit with the filter approach of the provider directory.
For example, if a consumer limits her query (intentionally or unintentionally) to within 25 miles of her zip code, an in-network provider who is further than 25 miles away will not be listed. The distance filter could easily have been applied without the consumer noticing, as most provider directories by default include some form of distance filtering (more on this below). A savvy consumer may try expanding the distance range, but in addition to requiring extra knowledge this adds work in trying more searches.
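The silent-filter failure mode above can be sketched in a few lines of code. The data and the `search` function below are hypothetical, meant only to show how a default distance radius conflates “not covered” with “filtered out”:

```python
# A minimal sketch (hypothetical data and function names) of how a default
# distance filter can silently hide an in-network provider.

PROVIDERS = [
    # name, network status, distance from the consumer's zip code (miles)
    {"name": "Dr. A. Rivera", "in_network": True, "distance_miles": 12},
    {"name": "Dr. B. Rivera", "in_network": True, "distance_miles": 40},
]

def search(name, max_distance_miles=25):
    """Mimics a typical directory search: a distance filter is applied
    by default, whether or not the consumer noticed it."""
    return [
        p for p in PROVIDERS
        if name.lower() in p["name"].lower()
        and p["in_network"]
        and p["distance_miles"] <= max_distance_miles
    ]

# Dr. B. Rivera is in-network, but the default 25-mile radius hides them,
# and the consumer sees "no results found" rather than a clear answer.
print(len(search("B. Rivera")))                          # 0
print(len(search("B. Rivera", max_distance_miles=50)))   # 1
```

Nothing in the “no results found” response tells the consumer which of the two conditions failed, which is precisely the ambiguity described above.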
A related example is when consumers are asked to choose a specialty of interest. Medical specialties are tricky – a provider often has many specialties (e.g. all gastroenterologists are also internists) and may practice one kind of medicine at one location and a different kind at another. And because there are no uniform conventions for grouping specialties into broader categories, the same gynecologist may be listed under “Primary Care” in one directory and “Specialist” in another.
This means that to achieve high confidence that a doctor is or is not covered, the consumer has to try different searches in case their criteria accidentally filtered out the doctor of interest. If you think the extra work to do an additional search is trivial, try executing it systematically with different filters set or unset over and over again – it is truly grueling.
Why are provider directories built around all this filtering?
One key reason is to avoid “name collisions”, i.e. returning numerous providers who have the same name as the one the consumer is searching for but are not the doctor of interest. Name collisions are a real problem – within the NPPES database there are 420 providers named “Jennifer Smith”, and over 3,900 providers match “Smith, J” (as one might type into a search box). To reduce name collisions, provider directories typically try to limit the search universe through filters such as geographic localization, asking users to identify the type of doctor they are looking for, and other ways to constrain the list of searchable names.
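The collision problem can be illustrated with a toy example (the rows below are invented, not real NPPES records): many distinct NPIs can share the same name string, and a partial-name search matches even more of them.

```python
# A sketch (toy data, not real NPPES records) of why a bare name search
# produces collisions: many distinct NPIs share the same name string.

from collections import Counter

# (npi, last_name, first_name) -- hypothetical rows from an NPPES-style extract
ROWS = [
    ("1111111111", "SMITH", "JENNIFER"),
    ("2222222222", "SMITH", "JENNIFER"),
    ("3333333333", "SMITH", "JOHN"),
    ("4444444444", "SMITH", "JAMES"),
    ("5555555555", "DOE",   "JANE"),
]

# "Smith, J" as typed into a search box matches every SMITH with a J* first name:
matches = [r for r in ROWS if r[1] == "SMITH" and r[2].startswith("J")]
print(len(matches))  # 4 distinct NPIs collide on this one search

# Exact-name collisions: multiple NPIs with an identical full name
counts = Counter((r[1], r[2]) for r in ROWS)
collisions = {name: n for name, n in counts.items() if n > 1}
print(collisions)
```

Filters like distance and specialty shrink this match set, which is exactly why directories lean on them, at the cost of the ambiguity described above.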
Following on the legacy approach of “find a doctor who is covered”, the list of known providers in nearly all provider directories (often called the “search universe”) is usually limited to only those providers who are in one of the networks serviced by that provider directory. In other words, if the provider is not in any of those networks, the provider directory has no knowledge of that provider’s existence. So when a consumer types in the name of a provider he cares about, and that provider is not in any networks serviced by the provider directory, the directory will produce a result of either “unknown/invalid name” or “no results found”. Because of the lack of a clear answer, there is no way for a consumer to differentiate non-coverage from not meeting the criteria of the search. Consumers seeking more clarity must search again until they either find the provider, develop sufficient confidence that the provider is not covered, or give up.
The problems of unclear search criteria leading to misleading results are compounded by ambiguities and inaccuracies in the data. For example, when the insurer’s list of provider addresses is inaccurate or outdated and the insurer applies geographic filtering, the only way a consumer could successfully search for a doctor who is in fact covered but has inaccurate addresses would be to use counterintuitive distance filters to account for the potentially bad address data. Many provider directories do not allow consumers to search a network without any geographic filtering; in this case a consumer would have to try alternate starting points (usually city or zip code) to search for the doctor they are interested in. At most insurers, specialties are captured as free text rather than through systems like the NUCC taxonomy codes, leading to arbitrariness in how the free-text entries are grouped into the specialty categories shown to consumers. Sadly, the specialty information in NPPES is often worse, as many providers never update their specialties after obtaining their NPI.
Another common problem concerns listing of individual providers vs organizational/group providers. In our work with Exchanges we have noted a great deal of variation in how insurers deal with individual providers versus groups. The issue of provider-group affiliations is one of the central reasons for data inaccuracies in provider directories, as insurers generally enter contracts with groups rather than individuals and must then try to keep an updated list of providers within those groups.
Beyond this issue, there is a great deal of variation in whether a given provider directory lists certain types of providers individually or not. For example, some insurers will list most or all provider types individually, while other insurers will list only a subset of providers individually and use group listings to indicate coverage of the remaining provider types (e.g. nurse practitioners, physical therapists, PEAR providers: pathologists, emergency medicine, anesthesiologists, radiologists, etc.). The implication of this group listing is that *every* provider a consumer might see as part of the listed group is covered by the insurer, which is likely to be a somewhat over-inclusive list at times.
Consumers frequently do not know the names of the groups with which their provider is affiliated. As such, a listing of groups is so burdensome as to be almost useless: the consumer has to look up the name of their provider’s group, making sure to correctly capture the specific group pertaining to the location where they see their doctor, then look that group up in the provider directory. Provider group names are also often very similar to one another, and it is easy to mismatch them. All of this work is likely to follow from an unsuccessful name search, and few consumers have the skills or hassle tolerance to do this kind of searching.
A notable exception in terms of knowing group names is for settings like Federally Qualified Health Centers, where the consumer may know the name of the FQHC and perhaps the first name of a provider they see commonly, but not necessarily the full name of the provider. We have not investigated the relationship between NPIs and HRSA identifiers for FQHCs and look-alikes, but we believe that being able to find these groups as part of provider directory searches is very important for consumers, especially in cases like Medicaid to Marketplace coverage transitions.
We conducted a brief data investigation of the difference between providers who are listed directly versus those who are implied via participation in a group. Our method used PECOS Medicare reassignment data (filtered for recent revalidation dates) to identify individual providers affiliated with group NPIs. We found increases of 10-80% over the number of providers that were listed individually, with wide variation both by insurer and specialty. While this method has significant limitations due to the low data quality of the PECOS reassignment data, we still think the results make the case for closer inspection. We intend to refine this methodology over time.
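The shape of the comparison can be sketched as follows. All identifiers below are hypothetical, and the reassignment table stands in for the PECOS data; the point is simply how listed and implied provider sets combine into the percentage increase we reported:

```python
# A sketch (hypothetical identifiers) of comparing directly listed providers
# against those implied by group participation, PECOS-reassignment style.

listed_individually = {"NPI-01", "NPI-02", "NPI-03", "NPI-04", "NPI-05"}

# group NPI -> individual NPIs who reassign Medicare benefits to that group
reassignments = {
    "GROUP-A": {"NPI-03", "NPI-06", "NPI-07"},  # NPI-03 is also listed directly
    "GROUP-B": {"NPI-08"},
}
groups_listed_by_insurer = {"GROUP-A", "GROUP-B"}

# Providers implied to be covered via the groups the insurer lists
implied = set().union(*(reassignments[g] for g in groups_listed_by_insurer))
total = listed_individually | implied

increase_pct = 100 * (len(total) - len(listed_individually)) / len(listed_individually)
print(round(increase_pct))  # 60 (i.e. a 60% increase over the listed count)
```

In real data the de-duplication step matters, since many providers appear both individually and through one or more groups.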
Still another problem is that when using provider directories, consumers must often choose a network, product line, or plan in order to search the correct list of covered providers. Many plan names, even within the same insurer, are highly similar to one another and choosing the correct one is very difficult for consumers. In other cases, consumers are presented with network names they have never seen before, or that have some portion of their plan’s product name. Under the best of circumstances this is a burden for consumers, and can easily lead to wrong answers. We also note that the advice to “call your provider” is often unsound, as provider offices (particularly staff who are not closely involved with reimbursement) may give answers like “we accept Aetna” without specifying that they accept broad-network PPO products but not narrow network products. We have looked at these problems in more depth in our proposal for a Consumer Network ID (here).
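The kind of unambiguous lookup a shared network identifier would enable can be sketched in a few lines. Everything here is hypothetical (the identifiers and the participation table alike); what matters is the shape of the query, (NPI, Network ID) in, yes/no out:

```python
# A sketch of an unambiguous participation lookup keyed on (NPI, Network ID).
# All identifiers and the table itself are hypothetical.

PARTICIPATION = {
    ("1234567893", "NET-CA-0042"),
    ("1234567893", "NET-CA-0099"),
    ("9876543210", "NET-CA-0042"),
}

def participates(npi: str, network_id: str) -> bool:
    """The clear yes/no answer consumers actually want."""
    return (npi, network_id) in PARTICIPATION

print(participates("1234567893", "NET-CA-0042"))  # True
print(participates("1234567893", "NET-TX-0001"))  # False -- a clear "no"
```

Note that a correct “no” requires the lookup table to cover every network, which is exactly what a publicly listed network identifier would make feasible.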
What can be done to improve user experience in provider directories?
- One very important advance would be to create a system to uniformly identify insurers’ provider networks. The idea is to create a publicly listed code that represents each network that members and other parties (e.g. application developers) could use to unambiguously query a provider’s participation in a network (via NPI + Network ID). We have explored the concept of a Consumer Network ID in depth here.
- HHS should compel providers to keep their NPPES records up to date, including accurate specialties based on NUCC taxonomy codes. As a central database that creates authoritative identifiers, NPPES is the obvious and most efficient place to collect this sort of information. If legal/regulatory authority is lacking, we would suggest that this could be addressed with small-scope federal legislation. However, this should not be left to the state level nor to voluntary compliance by providers.
- We applaud the efforts of ONC and FHIR workgroups in developing the ONC FAST National Healthcare Directory. If combined with strong oversight and governance structures, this system could dramatically improve the accuracy of the underlying provider-organization affiliation data and provider directories at large.
- Our intuition is that making this system compatible with the Tax ID Numbers (TINs) that insurers use to contract with organizations will require some restructuring of how non-individual NPIs are assigned, as these do not map cleanly to TINs at the moment. Alternately, a TIN-compatible record type could be integrated into NPPES, with a “pseudo-TIN” identifier for providers whose TIN is their social security number.
- State oversight, including licensure and credentialing processes, should be fully integrated into this system. While some state licensure data is merged into NPPES to populate the “additional identifier fields” of individual provider records, the data on the timing of these updates is still limited. Making licensure data available will improve the ability of app developers to ensure that a doctor is in fact licensed to practice at the addresses listed within the directory. An even broader approach might be to build a common technology platform for managing the health provider licensing process and offer this to the states as part of participating in a FAST National Healthcare Directory.
- The FAST National Healthcare Directory system should come with a requirement of daily updates from providers. Such updates could be accomplished by automatic nightly data loads from systems such as payroll, practice management, EHR, etc. Given that there is now nearly-universal adoption by providers of some or all of these systems, having nightly data loads would move the burden of compliance from providers themselves to their technology vendors.
- Our initial review of Provider Directory APIs mandated under CMS transparency rules indicated that many could not be directly queried using an individual NPI, but rather required a step of looking up an insurer’s internal provider identifier and then using that identifier to look up network affiliation. We believe this will be both burdensome for application developers and may expose them to throttling or utilization limits. We encourage CMS to require insurers to permit searching provider participation by NPI and network ID.
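The two lookup flows in the last point can be contrasted as FHIR search URLs. The base URL below is hypothetical; the NPI identifier system is the standard `us-npi` URI, and we assume the server supports chained search parameters in the Plan-Net style (real directory APIs vary in which parameters they implement):

```python
# A sketch contrasting one-step (query by NPI) and two-step (resolve the
# insurer's internal id first) directory lookups, as FHIR search URLs.

from urllib.parse import urlencode

BASE = "https://directory.example-insurer.com/fhir"  # hypothetical endpoint
NPI_SYSTEM = "http://hl7.org/fhir/sid/us-npi"

def one_step_query(npi: str) -> str:
    """What we'd like: query roles directly via the practitioner's NPI,
    using a chained search parameter."""
    params = urlencode({"practitioner.identifier": f"{NPI_SYSTEM}|{npi}"})
    return f"{BASE}/PractitionerRole?{params}"

def two_step_queries(npi: str) -> list:
    """What many APIs require: resolve the internal id first, then query
    roles by that id (placeholder shown, since the id comes from the
    first response)."""
    step1 = f"{BASE}/Practitioner?" + urlencode({"identifier": f"{NPI_SYSTEM}|{npi}"})
    step2 = f"{BASE}/PractitionerRole?practitioner=Practitioner/<internal-id>"
    return [step1, step2]

print(one_step_query("1234567893"))
```

The two-step flow doubles the request volume for every provider checked, which is where throttling and utilization limits begin to bite for application developers.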