Medical Professional Liability Ratemaking: Extending the Scope of Predictive Analytics

In today’s insurance landscape, predictive modeling has become a familiar term. With insurance companies now using predictive models to improve claims handling, select target markets, optimize retention, and more, the applications of advanced statistical modeling have extended far beyond traditional uses in personal auto and homeowners ratemaking. Yet even though modeling techniques such as generalized linear models (GLMs) have become the industry standard in developing personal lines rating plans, predictive modeling for ratemaking has gained little traction in many commercial lines of business. A notable example is the medical professional liability (MPL) industry, where most insurers rely on relatively unsophisticated rating plans that use only a handful of variables—namely physician specialty and territory—to price risks.

While there are many challenges that may deter MPL writers and other commercial lines carriers from building predictive models for ratemaking purposes, a variety of solutions exist that allow insurers to use modeling techniques to price risks more accurately and improve profitability.

Getting the most out of small data

Perhaps one of the largest issues facing MPL writers when using predictive modeling for ratemaking is the data itself. Unlike personal auto and homeowners, where even small to midsized insurers may have millions of policyholder records, many MPL writers have sparse and volatile data. This can make segmentation—separating the good and bad risks in a portfolio—difficult, especially when trying to incorporate variable interactions in the modeling process.

For instance, say an insurer’s data set contains 50 unique territories and 10 different physician specialties. If the actuary models territory and physician specialty separately, the resulting model will produce 60 rating factor estimates (50 rating factors for territory and 10 rating factors for specialty). If the actuary believes there is an interaction between the variables—that is, that the difference in losses by specialty varies across territories—modeling that interaction increases the number of rating factors estimated by the model from 60 to 500 (50 possible territories x 10 possible specialties)! Slicing the data into this many groups could jeopardize the credibility and stability of the model’s results.

It should be clear, however, that it is not necessary to have Allstate-sized data sets in order to achieve great modeling results. Meaningful results have been achieved using data from small and midsized companies. Further, companies can gain meaningful insights into interactions between variables by grouping the variables in such a way that credibility is not compromised. For instance, in the example above, the modeler could group physician specialty into five groups, which, when interacted with territory, would result in 250 rating factor estimates (50 territories x 5 specialty groups). This would allow the interaction to be tested without cutting the data into as many groups, as sketched below.
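To make the example concrete, the sketch below fits a simple claim-frequency GLM in Python with the statsmodels package, first with the full territory-by-specialty interaction and then with specialty collapsed into five broader groups. The data file, column names, and the specialty grouping are hypothetical assumptions, used only to show how grouping reduces the number of rating factors the model must estimate from 500 to 250.

```python
# Minimal sketch of the example above: a claim-frequency GLM with a
# territory-by-specialty interaction. The data file, column names, and
# the specialty-to-group mapping are hypothetical assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: territory (50 levels), specialty (10 levels),
# claim_count, and exposure (e.g., physician-years).
df = pd.read_csv("mpl_policy_data.csv")

# Full interaction: with treatment coding this model carries
# 1 + 49 + 9 + 441 = 500 coefficients, one per territory/specialty cell,
# which may spread sparse MPL data far too thin.
full_fit = smf.glm(
    "claim_count ~ C(territory) * C(specialty)",
    data=df,
    family=sm.families.Poisson(),
    exposure=df["exposure"],
).fit()

# Grouped alternative: collapse the 10 specialties into 5 broader groups
# (the mapping below is purely illustrative), so the interacted model
# needs only 1 + 49 + 4 + 196 = 250 coefficients (50 x 5 cells).
specialty_group_map = {
    "neurosurgery": "surgical_high",   "obstetrics": "surgical_high",
    "general_surgery": "surgical",     "orthopedics": "surgical",
    "emergency_medicine": "acute",     "anesthesiology": "acute",
    "internal_medicine": "office",     "family_practice": "office",
    "psychiatry": "low_severity",      "pathology": "low_severity",
}
df["specialty_group"] = df["specialty"].map(specialty_group_map)

grouped_fit = smf.glm(
    "claim_count ~ C(territory) * C(specialty_group)",
    data=df,
    family=sm.families.Poisson(),
    exposure=df["exposure"],
).fit()

print(grouped_fit.summary())
```

In practice, the groupings themselves would be informed by clinical judgment and by the indications in the insurer’s own data, rather than the arbitrary mapping shown here.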

Long-tailed lines present a long line of challenges

In addition to the lack of available data, MPL writers face challenges related to immature and undeveloped loss data. Because MPL claim amounts can drastically change over time, it is exceptionally difficult to estimate ultimate claim settlement costs, especially when a claim is newly opened. The modeler should be aware that loss development techniques that may work for personal auto and homeowners may not be appropriate when applied to MPL data.

Additionally, traditional loss development methods used by reserving actuaries rely on the principle that a group of claims in aggregate will develop the way claims have developed historically. But a predictive model is built on data at the individual risk level, so losses must be developed at the individual claim level. What is the most appropriate way to handle this when the claims department is more confident in its estimates of the case reserves for some claims than for others? Should the modeler explicitly account for this? It is important for the modeler to research and understand the company’s reserving practices—how case reserves are established, how the claim settlement process differs by claim type, etc.—in order to apply the most appropriate assumptions when developing claim costs.
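One possible approach, sketched below under purely illustrative assumptions, is to develop each open claim individually using an age-to-ultimate factor tied to the claim’s maturity, tempering that factor where the claims department has expressed high confidence in its case reserve. The factor table, the confidence flag, and the tempering rule are hypothetical; an actual implementation should be grounded in the company’s own reserving practices.

```python
# Minimal sketch: developing losses at the individual claim level.
# Development factors, column names, and the reserve-confidence
# adjustment are all hypothetical assumptions for illustration.
import pandas as pd

# Hypothetical age-to-ultimate factors by claim maturity (in months).
age_to_ultimate = {12: 2.50, 24: 1.80, 36: 1.40, 48: 1.15, 60: 1.05, 72: 1.00}

claims = pd.DataFrame({
    "claim_id":           [101, 102, 103],
    "maturity_months":    [12, 36, 60],
    "case_incurred":      [50_000.0, 150_000.0, 400_000.0],
    "is_closed":          [False, False, True],
    # Hypothetical flag from the claims department: how firm is the reserve?
    "reserve_confidence": ["low", "high", "high"],
})

def develop_claim(row):
    """Apply an age-to-ultimate factor to an open claim's case incurred.

    Closed claims are held at their reported value. For open claims with a
    high-confidence case reserve, only half of the indicated development is
    applied (an illustrative judgment call, not a standard formula).
    """
    if row["is_closed"]:
        return row["case_incurred"]
    ldf = age_to_ultimate[row["maturity_months"]]
    if row["reserve_confidence"] == "high":
        ldf = 1.0 + 0.5 * (ldf - 1.0)
    return row["case_incurred"] * ldf

claims["ultimate_loss"] = claims.apply(develop_claim, axis=1)
print(claims[["claim_id", "case_incurred", "ultimate_loss"]])
```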

A further complicating issue is the treatment of incurred but not reported (IBNR) claims. In contrast to personal auto and homeowners claims, MPL claims may not be reported until well after the policy expires. This is especially true for occurrence policies, where coverage is provided for incidents that occur during the policy period regardless of when the claim is reported, but it can also affect claims-made policies (for instance, some insurers may initially record reports as “incidents,” which may later convert to claims once they meet the company’s claim definition).

Traditional actuarial loss development methods take into account IBNR claims. However, because predictive models rely on individual risk data, the application of traditional actuarial loss development methods to predictive modeling data would overstate the ultimate losses for the claims currently in the data: the provision for IBNR claims would essentially be spread across the claims that have already been reported. The results of the model would be skewed in that risks that have already reported claims would appear worse than they actually are, and risks that will eventually report claims to the insurer would appear better than they actually are.
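A small numeric illustration of this distortion, using entirely made-up figures: if an aggregate development factor is calibrated to reproduce total ultimate losses, including a provision for claims not yet reported, then applying that factor claim by claim pushes the IBNR provision into the ultimates of the claims that happen to be in the data.

```python
# Made-up figures showing how an aggregate development factor that
# includes an IBNR provision overstates individual reported claims.
reported_incurred = [40_000.0, 60_000.0, 100_000.0]  # three reported claims
development_on_reported = 1.25                        # growth of known claims only
pure_ibnr_provision = 50_000.0                        # claims not yet reported

# Aggregate ultimate = developed reported claims + unreported claims = 300,000,
# so an aggregate LDF calibrated to the total is 300,000 / 200,000 = 1.50.
aggregate_ultimate = sum(reported_incurred) * development_on_reported + pure_ibnr_provision
aggregate_ldf = aggregate_ultimate / sum(reported_incurred)

# Applying the aggregate factor claim by claim spreads the IBNR provision
# across the reported claims, overstating each by 20% in this example.
naive_ultimates = [x * aggregate_ldf for x in reported_incurred]
fair_ultimates = [x * development_on_reported for x in reported_incurred]
print(naive_ultimates)  # [60000.0, 90000.0, 150000.0]
print(fair_ultimates)   # [50000.0, 75000.0, 125000.0]
```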

A potential remedy to this problem would be to exclude the most recent immature accident years from the analysis. However, it may take many years before some claims are reported. Therefore, even if the most recent years of data are removed from the data set, it is still important for the modeler to take into account the IBNR claims and adjust accordingly when developing ultimate claim costs.

A related issue for MPL writers is the responsiveness of the data. Because of the long-tailed nature of medical liability, claims may take years to be reported and even more years to close. This long report-to-close lag means that recent changes in an insurer’s book of business, such as a change in the mix of business written, may not be fully reflected in the usable data. Further, as previously stated, it may be necessary to remove the most recent accident years from the data, which may remove any observable effect of the insurer’s recent changes altogether. In such cases, the modeler may want to include the recent years of data in the analysis—making sure to adjust the claim development for IBNR claims—and reduce the weight given to each record to reflect the immaturity of the data. This would allow the insurer to reflect the changing book of business in the modeling process without skewing the results.
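A sketch of that weighting idea is shown below, again using statsmodels with hypothetical column names, weight values, and model form: recent accident years stay in the fit, but each record’s weight is reduced to reflect the immaturity of its developed losses.

```python
# Minimal sketch: down-weighting immature accident years in a GLM fit.
# The weight schedule, column names, and model form are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Assumed columns: territory, specialty_group, accident_year,
# developed_loss_per_exposure (IBNR-adjusted), and exposure.
df = pd.read_csv("mpl_modeling_data.csv")

# Illustrative weights: full weight for mature years, reduced weight
# for the two most recent (least developed) accident years.
maturity_weights = {2021: 1.00, 2022: 1.00, 2023: 1.00, 2024: 0.50, 2025: 0.25}
df["analysis_weight"] = df["accident_year"].map(maturity_weights).fillna(1.0)

# Pure premium (Tweedie) GLM with developed, IBNR-adjusted losses as the
# response and maturity-based weights applied to each record.
model = smf.glm(
    "developed_loss_per_exposure ~ C(territory) + C(specialty_group)",
    data=df,
    family=sm.families.Tweedie(var_power=1.6),
    var_weights=df["exposure"] * df["analysis_weight"],
).fit()

print(model.summary())
```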

MPL and competitive pressure (or lack thereof)

A final inhibitor to using predictive modeling techniques in determining rating plans could be the lack of competitive pressure to do so. Because the use of predictive modeling has gained little traction in the MPL industry, insurers may be unconcerned about future adverse selection that can occur when competitors implement more sophisticated rating plans.

It is imperative that insurers not wait until profitability declines as a result of adverse selection to consider enhancing their rating structures. If an insurer is not currently collecting risk information beyond what is used in its current rating approach, it may take many years before the company has usable data for predictive modeling purposes, in which case it will be years behind its competitors. Therefore, it is important that companies do not become complacent with their market positions. Even if an insurer is not currently considering changes to its rating structure, it should be collecting the data that could be useful in refining that structure in the future.

Medical professional liability insurers—as well as carriers in similar lines—face vast challenges when developing rating plans using predictive modeling techniques. However, for almost every problem there is a workable solution. Companies prepared to meet these challenges head-on to improve the segmentation of their business may have the opportunity to reap enormous benefits and establish themselves as industry leaders while leaving ill-equipped companies behind.

Eric Krafcheck

Eric P. Krafcheck, FCAS, MAAA, is a consulting actuary with the Milwaukee office of Milliman. He joined the firm in 2011. Eric’s area of expertise is property and casualty insurance, particularly predictive modeling, loss reserving, ratemaking, and data management/programming. He has experience in numerous lines, including professional liability, general liability, and workers’ compensation, as well as commercial and personal property. Since joining Milliman, Eric has used generalized linear models (GLMs), segmentation analysis, and other modeling techniques to enhance pricing, marketing, and underwriting strategies for multi-line personal and commercial insurers.
