The transformative effects of big data, machine learning and AI systems on financial services are undisputed. Algorithmic trading, robo-advice and automated underwriting are just a few of the emerging services built on these technologies.
While the rapid pace of change creates new opportunities, regulators, academics and consumer groups have raised concerns that these advanced technologies could give rise to unfair discrimination and limit access for underserved markets.
Fair and unfair discrimination
To some extent, insurers need to discriminate — it’s a core component of the business. In life insurance, for example, gender and age are necessary variables to accurately assess mortality risk. This type of discrimination is considered fair.
However, unfair discrimination can occur when the variables used in risk classification or underwriting produce unjustified price differentials for groups protected under federal anti-discrimination laws. In response, many states have restricted or prohibited the use of variables, such as race, that would cause unfair discrimination.
The increased use of big data and algorithms in classifying risk raises the concern that the system will learn to substitute seemingly neutral variables for race and other protected characteristics. In legal parlance, this is called proxy discrimination.
For example, data from mobile applications that share location, and location check-ins on social media, can serve as a proxy for zip codes, which correlate with race. Nearly 80% of states limit the use of zip codes in life insurance underwriting, but if mobile location data stands in for geography, it tests the limits of those regulations.
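To make the proxy mechanism concrete, the sketch below measures how accurately a seemingly neutral feature predicts membership in a protected group. Everything here is synthetic and hypothetical: `region` stands in for a location-derived feature and `group` for a protected characteristic. It illustrates the statistical concern, not any insurer's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: protected-group membership is unevenly
# distributed across regions, mimicking residential segregation.
n = 10_000
region = rng.integers(0, 5, size=n)            # hypothetical "neutral" feature
p_group = np.array([0.1, 0.2, 0.5, 0.8, 0.9])  # group share by region
group = rng.random(n) < p_group[region]        # hypothetical protected attribute

# Proxy strength: accuracy of predicting the attribute from the feature
# alone, using the majority class within each region.
majority = np.array([group[region == r].mean() >= 0.5 for r in range(5)])
accuracy = (majority[region] == group).mean()

# Baseline: always guessing the overall majority class.
baseline = max(group.mean(), 1 - group.mean())
print(f"baseline accuracy: {baseline:.2f}, proxy accuracy: {accuracy:.2f}")
```

A proxy accuracy well above the majority-class baseline signals that the feature carries information about the protected attribute, even though that attribute never appears in the model's inputs.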
Proxy discrimination can be problematic for consumer outcomes because the complexity of the algorithms and the evolving nature of learning systems may make it harder to identify if and when it is occurring. Moreover, since many insurers rely on third-party algorithm developers, even the insurers themselves may not be aware that the system is discriminating.
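One practical, if partial, way to surface such effects is an outcome audit. The hedged sketch below compares favorable-decision rates across groups using an adverse impact ratio; the 0.8 threshold echoes the EEOC four-fifths rule of thumb and is illustrative here, not an insurance-law standard. All data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 5_000
group = rng.random(n) < 0.3             # hypothetical protected attribute

# Synthetic model decisions that happen to disfavor the protected group.
approve_prob = np.where(group, 0.55, 0.75)
approved = rng.random(n) < approve_prob

rate_protected = approved[group].mean()
rate_reference = approved[~group].mean()
ratio = rate_protected / rate_reference

print(f"approval rates: {rate_protected:.2f} vs {rate_reference:.2f}")
print(f"impact ratio: {ratio:.2f} -> {'flag for review' if ratio < 0.8 else 'ok'}")
```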
The current regulatory infrastructure was not designed with AI-enabled underwriting in mind. While many states have limited or prohibited insurers from using certain individual characteristics in underwriting, it is impossible for regulators to predict and evaluate every possible factor now available through big data.
Financial inclusion
In addition to legal compliance issues, the use of big data may impact access to financial products and services by underserved communities. One concern is that the data itself may be biased.
For example, credit scores are commonly used in underwriting, yet Black and low-income communities have historically had limited access to credit or faced active discrimination when seeking it. The data reflects this bias.
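To illustrate how such bias propagates, consider the synthetic sketch below. The decision rule is nothing more sophisticated than a score threshold, yet approvals diverge by group because the historical score already embeds a penalty. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 20_000
group = rng.random(n) < 0.3      # hypothetical protected attribute
true_risk = rng.normal(0, 1, n)  # actual creditworthiness, identical across groups

# Historical scores penalize the underserved group independent of risk.
score = true_risk - 0.8 * group + rng.normal(0, 0.5, n)

# "Learn" an approval rule from the biased scores alone: approve the top 60%.
threshold = np.quantile(score, 0.4)
approved = score > threshold

print(f"approval rate, reference group: {approved[~group].mean():.2f}")
print(f"approval rate, affected group:  {approved[group].mean():.2f}")
```

The true risk distributions are identical by construction, yet approvals diverge: the rule inherits the penalty embedded in the historical score without ever seeing the protected attribute.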
Moreover, when financial services companies use big data to identify markets where they wish to promote their products, historical bias reflected in that data might lead to exclusion.
Navigating the path ahead
While the financial services industry is not expected to fix the societal issues that cause existing and potential bias, it should be aware of, and accountable for, how those factors affect its own products and processes.
Trust is key. As the National Association of Insurance Commissioners noted in its August 2020 guiding principles on artificial intelligence, AI systems should be fair, ethical, accountable and transparent in order to maintain consumer trust.
AI practices should reflect corporate culture.
Industry action should support financial inclusion. Spurred by the recent societal focus on race relations in America, numerous financial services firms have championed financial inclusion strategies for the coming years. Industry leaders can develop standards and best practices to further the broader mission of expanding access to financial services for underserved and underrepresented groups.