The Crucial Role of Experimentation and Testing in Today's Competitive Banking Landscape

Hi all, 

Nice to see you again.

In this post, we would like to continue exploring the problem of improving loan portfolio analytics and some potential solutions.

Introduction

In today’s high-stakes banking industry, continuous innovation, strategy implementation, product development, and operational efficiency enhancement are more than just buzzwords; they are the lifeline to staying competitive. In certain circumstances, new products or processes may be introduced with little to no testing and, when led by visionaries like Steve Jobs, still hit the mark. For most organizations, however, testing is an essential part of the decision-making process, ensuring that resources are not misallocated.

The Predicament: How to Validate a Change Proposal?

To validate whether a proposed change is advantageous, businesses might rely on extensive analysis and benchmarking. For instance, if a mobile phone company introduces a phone without buttons and achieves success, competitors might follow suit. Business school theories and strategies can often guide these decision-making processes, but unfortunately, there is no magical product or software that does the work for you.

On the other end of the spectrum, some companies adopt an experimental approach, rapidly testing multiple strategies to identify what works best. Google, for example, continually conducts numerous tests to optimize search result outputs, unbeknownst to end users.

Given their nature, banks need to strike a balance between analysis and rapid-fire experimentation. The ideal scenario would be to have a robust framework that allows banks to pilot new processes or business strategies, assess their performance, and keep all stakeholders informed about progress. This would reduce losses from unsuccessful trials, boost profits from successful ones, and enable testing with minimal friction.

Let’s explore this further with two hypothetical examples:

Example 1: Modifying the ‘Apply’ Button on a Form

Suppose the marketing team proposes changing the shape of the ‘apply’ button on a bank’s application form, believing that a slightly rounded design could improve the click-through rate by 0.5%.

Example 2: Testing a New Loan Approval Model

Meanwhile, the risk modeling team has developed a new loan approval/decline scorecard that they believe is 10% more discriminative, potentially enabling the bank to accept 5% more customers. While historical validation looks promising, the team understands the crucial nature of the acquisition model’s performance — if the new model fails, it could significantly impact the business.

A Journey into Testing

A/B testing — a direct comparison between two versions to determine which performs better — is often a viable method for these types of changes. However, A/B testing may not always be applicable, as it assumes that groups can be divided randomly without bias and that differences in outcomes are solely due to the variable being tested.

In our first example, A/B testing could be utilized: some customers would be shown the rounded button while others continue to see the previous design. Similarly, the second scenario could use the current acquisition model for group A and the new model for group B. Other, more advanced testing methodologies, such as Bayesian testing and Thompson sampling, are also available, but their discussion warrants a separate article.
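
As a rough sketch of what the split itself might look like, the snippet below hashes a customer identifier into one of two groups so that each customer always sees the same variant. The function name, experiment label, and 50/50 split are illustrative assumptions rather than a prescription.

```python
import hashlib

def assign_variant(customer_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a customer to group 'A' (control) or 'B' (variant).

    Hashing the customer id together with the experiment name keeps the
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "B" if bucket < split else "A"

# Example: decide which 'apply' button a visitor sees (hypothetical experiment name).
print(assign_variant("customer-1042", "rounded-apply-button"))
```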

Measuring Test Success

Success measurement varies by scenario. For the first, the success of the rounded button can be defined simply by measuring the click-through rate, a metric that’s quickly and easily accessible.
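
To make “quickly and easily accessible” concrete, a standard two-proportion z-test can be used to judge whether an observed difference in click-through rates is likely to be real rather than noise. The sketch below is minimal, and the traffic and click counts are purely illustrative.

```python
from math import sqrt
from statistics import NormalDist

def ctr_p_value(clicks_a: int, views_a: int, clicks_b: int, views_b: int) -> float:
    """Two-sided p-value for the difference in click-through rates (two-proportion z-test)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Illustrative counts only: is the rounded button's lift statistically meaningful?
print(ctr_p_value(clicks_a=480, views_a=10_000, clicks_b=535, views_b=10_000))
```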

For the second scenario involving the new loan model, the process is more complex. Metrics such as conversion rate, approval rate, and default rate performance must be assessed. While approval rates are available quickly, default rates can take months to manifest, slowing the decision-making process. Ideally, the dollar impact of each business decision should be calculated; however, many challenges can arise in doing so.

As the number of metrics needed for analysis increases, so does the complexity of the process. A system capable of tracking all of these metrics for each test group is needed, ideally capable of generating automated reports, notifying decision-makers, and even automatically adjusting tests if necessary.
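
A minimal sketch of such a tracking structure is shown below. The class names, the particular metrics, and the sample figures are assumptions made purely for illustration; a real system would persist these counts and push the report to stakeholders automatically.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class GroupMetrics:
    """Running metrics collected for one arm of a test."""
    applications: int = 0
    approvals: int = 0
    defaults: int = 0

    @property
    def approval_rate(self) -> float:
        return self.approvals / self.applications if self.applications else 0.0

    @property
    def default_rate(self) -> float:
        return self.defaults / self.approvals if self.approvals else 0.0

@dataclass
class Experiment:
    name: str
    groups: Dict[str, GroupMetrics] = field(default_factory=dict)

    def report(self) -> str:
        """Produce a simple text report that could be sent to decision-makers."""
        lines = [f"Experiment: {self.name}"]
        for label, m in self.groups.items():
            lines.append(
                f"  {label}: approval rate {m.approval_rate:.1%}, "
                f"default rate {m.default_rate:.1%}"
            )
        return "\n".join(lines)

# Illustrative figures for the scorecard test.
exp = Experiment("new-scorecard", {
    "A (current model)": GroupMetrics(applications=5_000, approvals=3_000, defaults=90),
    "B (new model)": GroupMetrics(applications=5_000, approvals=3_150, defaults=85),
})
print(exp.report())
```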

Building a Flexible and Robust Testing Framework

In an industry where agility and speed are paramount, an efficient testing framework plays an instrumental role in driving continuous innovation. This framework should be designed to be frictionless and rapid, enabling various departments within a banking organization to perform small-scale tests autonomously. Such an approach boosts organizational agility, enhances decision-making efficiency, and facilitates faster implementation of successful tests.

Take, for instance, the marketing department’s desire to test a new design feature, such as rounded buttons on a digital application form. A small-scale change like this, which carries minimal risk but could meaningfully improve customer engagement, should not require a cumbersome approval process involving the CEO or other high-level executives. Instead, empowering the respective departmental head to greenlight such tests streamlines the process and encourages innovation.

To maintain control over potential risks while promoting operational efficiency, a tiered approval policy could be the key. This policy establishes different approval levels based on the potential impact and cost of the test.

For example, a test that costs under $10,000 and might affect up to 1,000 customers could be approved by the head of the relevant department, such as marketing. This approach encourages experimentation while still mitigating risks, as potential losses are capped and manageable.

On the other hand, a more impactful test, which could lead to significant credit losses — say $10 million — would require more scrutiny. Such a test would need the consensus of the CEO and other C-suite members, ensuring that the potential risks have been thoroughly evaluated by the organization’s top decision-makers.
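
In code, a tiered policy like this can be as simple as a lookup from a test’s estimated cost and customer reach to the approval level it needs. The thresholds below mirror the illustrative figures above and would need to be calibrated to each bank’s own risk appetite.

```python
def required_approver(estimated_cost: float, customers_affected: int) -> str:
    """Map a proposed test's estimated cost and reach to the approval level it needs.

    Thresholds are illustrative, mirroring the examples in the text,
    not a recommendation.
    """
    if estimated_cost < 10_000 and customers_affected <= 1_000:
        return "head of the relevant department"
    return "CEO and C-suite committee"

print(required_approver(estimated_cost=8_000, customers_affected=800))
print(required_approver(estimated_cost=10_000_000, customers_affected=50_000))
```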

Another crucial aspect of this agile testing framework is defining clear success and failure criteria for each test before its commencement. These criteria are vital to prevent drawn-out debates and uncertainty, helping the organization to quickly react to the test outcomes.

For example, if a newly introduced rounded button results in a 10% decrease in the application rate, the system should be programmed to recognize this failure and send alerts to the main stakeholders. An early warning system like this allows the organization to promptly halt unsuccessful tests, minimizing potential financial losses and customer dissatisfaction.

On the flip side, should the test result in a significant increase in application rates, the system should notify the relevant personnel, such as the head of marketing or product management. This automated, real-time feedback enables the organization to fast-track the implementation of successful changes, maximizing their benefits.
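
The sketch below shows one way such pre-agreed criteria and notifications might be encoded. The 10% failure threshold comes from the example above, while the 5% success threshold and the named recipients are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class StoppingRule:
    """A success or failure criterion agreed before the test starts."""
    description: str
    triggered: Callable[[float, float], bool]  # (control_rate, variant_rate) -> bool
    action: str

RULES = [
    StoppingRule(
        description="Application rate falls 10% or more below the control group",
        triggered=lambda control, variant: variant <= 0.90 * control,
        action="halt the test and alert the main stakeholders",
    ),
    StoppingRule(
        description="Application rate exceeds the control group by 5% or more",
        triggered=lambda control, variant: variant >= 1.05 * control,
        action="notify the head of marketing / product management",
    ),
]

def evaluate(control_rate: float, variant_rate: float) -> List[str]:
    """Return the actions fired by the pre-agreed rules for the latest measurements."""
    return [rule.action for rule in RULES if rule.triggered(control_rate, variant_rate)]

# Illustrative check: the rounded button is underperforming the existing design.
print(evaluate(control_rate=0.048, variant_rate=0.041))
```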

Summary

Banking, a highly competitive industry, necessitates continuous innovation and strategic implementation. Ensuring proposed changes or innovations are beneficial requires a robust testing process, typically involving extensive analysis or experimental approaches. The ideal solution would be a framework allowing banks to pilot new strategies, assess their performance, and keep all stakeholders in the loop.

Two hypothetical testing scenarios in a bank were explored. The first involved changing the shape of an ‘apply’ button on an application form, and the second involved implementing a new loan approval model. A/B testing was suggested as a viable method for these situations, but its limitations were also acknowledged. Other advanced testing methodologies exist but were not discussed in detail.

Measuring test success is straightforward in some instances, like checking click-through rates for a redesigned button, but more complex when it involves assessing metrics like approval and default rates. An ideal system would track all metrics, provide automated reports, and automatically adjust tests as necessary.

 
