Shaun Russell - Informatica

6 steps for data quality competency for an underwriter

30 June 2014  |  2704 views
The concept of data quality in an insurance environment is a tricky one. The cost of acquiring good data is often weighed against the speed of underwriting/quoting and model correctness. This can have a huge impact, as incomplete data forces defaults in risk models and pricing, or adds mathematical uncertainty. If not corrected, risk profiles can also be wrong, with a potential impact on pricing and portfolio shape. And correcting data requires substantial personnel resources for cleansing and enhancement.

So, to avoid costly errors, let's talk about the six steps to data quality competency in underwriting. When done correctly, they form a process that is intelligent and adaptive to changing business needs.

 

Profile – Effectively profile and discover data from multiple sources

We’ll start at the beginning. First, you need to understand your data. Where does it come from, and in what shape does it arrive – both from internal and external sources? Profiling will identify problem areas, such as external submission data from brokers and MGAs, which is often incomplete. This is then combined with internal and service bureau data to get a full picture of the risk. Once data is profiled, you’ll get a very good sense of where your troubles are. Then continue to profile as you bring other sources online, using the same standards of measurement. This exercise will also help in remediating brokers that are not meeting the standard.
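To make the idea concrete, here is a minimal profiling sketch in Python. The field names and broker records are invented for illustration, and a real profiling tool would measure far more than completeness, but the principle – score every field from every source the same way – is the same.

```python
def profile_completeness(records):
    """Return the fraction of non-empty values for each field across records."""
    fields = {f for rec in records for f in rec}
    counts = {f: 0 for f in fields}
    for rec in records:
        for f in fields:
            if rec.get(f) not in (None, ""):  # treat None/empty string as missing
                counts[f] += 1
    n = len(records)
    return {f: counts[f] / n for f in sorted(fields)}

# Hypothetical MGA submission batch: insured_name is complete, the rest is not
broker_submissions = [
    {"insured_name": "Acme Ltd", "postcode": "EC1A 1BB", "sum_insured": 500000},
    {"insured_name": "Beta plc", "postcode": "", "sum_insured": 750000},
    {"insured_name": "Gamma Co", "postcode": None, "sum_insured": None},
]

print(profile_completeness(broker_submissions))
```

Running the same function over every incoming source gives you a like-for-like view of which brokers are falling short of the standard.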

Measure – Establish data quality metrics and targets

As an underwriter, you will need to determine the quality bar for the data you use. Usually this means flagging your most critical data fields for meeting underwriting guidelines. See where you are and where you want to be: determine how you will measure the quality of the data, as well as the desired state. And, by the way, actuarial and risk will likely do the same thing on the same or similar data. Over time it all comes together as a team.
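Those targets can be expressed very simply. The sketch below (thresholds and field names are illustrative, not prescriptive) compares measured quality against a target for each critical field, which is all a first gap report really needs to do:

```python
def gap_report(actual, targets):
    """Compare measured quality scores against target thresholds per field."""
    return {
        field: {
            "actual": actual.get(field, 0.0),  # unmeasured fields score zero
            "target": target,
            "meets_target": actual.get(field, 0.0) >= target,
        }
        for field, target in targets.items()
    }

targets = {"postcode": 0.98, "sum_insured": 0.95}   # illustrative quality bar
measured = {"postcode": 0.33, "sum_insured": 0.67}  # e.g. from profiling

for field, row in gap_report(measured, targets).items():
    print(field, row)
```

Sharing the same report with actuarial and risk teams is what lets the quality conversation converge over time.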

Design – Quickly build comprehensive data quality rules

This is the meaty part of the cycle, and fun to boot. First, look to your desired future state and your critical underwriting fields. For each one, determine the rules by which you normally fix errant data. How do you validate, cleanse and remediate discrepancies? This may involve fuzzy logic or supporting data lookups, and can easily be captured. Do this, write it down, and catalogue it to be codified in your data quality tool. As you go along, you will see a growing library of data quality rules compiled for broad use.

Deploy – Native data quality services across the enterprise

Once these rules are compiled and tested, they can be deployed as shared services across the organisation. Your institutional knowledge of your underwriting criteria can then be reused to cleanse existing data, new data and everything going forward.
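One common shape for that shared deployment is a central rule catalogue that every pipeline runs records through. The sketch below is a toy version (rule names and fields are invented): each rule is registered once and reused against existing data, new submissions and acquired portfolios alike.

```python
RULE_CATALOGUE = {}

def rule(name):
    """Decorator that registers a check function in the shared catalogue."""
    def register(fn):
        RULE_CATALOGUE[name] = fn
        return fn
    return register

@rule("sum_insured_positive")
def sum_insured_positive(rec):
    value = rec.get("sum_insured")
    return isinstance(value, (int, float)) and value > 0

@rule("postcode_present")
def postcode_present(rec):
    return bool(rec.get("postcode"))

def apply_rules(record):
    """Run every catalogued rule against a record; return the names that fail."""
    return [name for name, check in RULE_CATALOGUE.items() if not check(record)]

print(apply_rules({"postcode": "", "sum_insured": -1}))
```

Because the catalogue lives in one place, fixing or tightening a rule immediately benefits every feed that uses it.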

 

Review – Assess performance against goals

Remember those goals you set for your quality when you started? Check and see how you’re doing. After a few weeks and months, you should be able to profile the data, run reports and see that the needle has moved. You can now also identify new issues to tackle and adjust the rules that aren’t working. Metrics you’ll want to track over time include higher quote flow, better productivity and more competitive premium pricing.
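The review itself can be as simple as differencing today's scores against the baseline you captured when the targets were set. The numbers below are invented for illustration; positive movement means the programme is working, negative movement points at a rule to adjust.

```python
# Baseline captured when targets were set vs. this month's profiling run
baseline = {"postcode": 0.33, "sum_insured": 0.67}
current = {"postcode": 0.91, "sum_insured": 0.88}

# Per-field movement since the baseline (rounded for readability)
movement = {field: round(current[field] - baseline[field], 2) for field in baseline}
print(movement)
```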

Monitor – Proactively address critical issues

Now monitor constantly. As you bring new MGAs online, receive new underwriting guidelines or launch into new lines of business, you will repeat this cycle. You will also utilise the same rule set as portfolios are acquired. It becomes a good way to sanity-check acquired business against your quality standards.
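In practice, constant monitoring means running each new batch through the same thresholds and raising an alert when a metric slips. A minimal sketch, with invented thresholds and batch figures standing in for a newly onboarded MGA's first submission:

```python
def monitor(batch_metrics, thresholds):
    """Flag any quality metric that falls below its threshold in a new batch."""
    return [
        f"ALERT: {field} at {value:.0%}, below target {thresholds[field]:.0%}"
        for field, value in batch_metrics.items()
        if field in thresholds and value < thresholds[field]
    ]

# First batch from a hypothetical new MGA, checked against the standing targets
alerts = monitor(
    {"postcode": 0.80, "sum_insured": 0.99},
    {"postcode": 0.98, "sum_insured": 0.95},
)
print(alerts)
```

The same check works unchanged whether the batch comes from a new broker, a new line of business or an acquired portfolio.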

 

In case it wasn’t apparent, your data quality plan is now largely automated. Beyond a few manual exceptions, you should no longer be remediating data the way you were in the past. In each of these steps there is obvious business value. In the end, it all adds up to better risk/cat modelling, more accurate risk pricing, cleaner data (for everyone in the organisation) and more time doing the core business of underwriting. Imagine increasing your quote volume simply by not needing to muck around in data. Imagine improving your quote-to-bind ratio through better pricing. This is the real magic that lies within data quality.

Tags: Risk & regulation
