
How big data risks ‘proxy discrimination’ in financial services

"The increased use of big data and algorithms in classifying risk raises the concern that the system will learn to substitute seemingly neutral variables to stand in for race and other protected characteristics," write Sophia Duffy and Azish Filabi.

The transformative effects of big data, machine learning and AI systems on financial services are undisputed. Algorithmic trading, robo advice and automated underwriting are just a few of the emerging services pushing the industry into uncharted territory.

While the rapid pace of change creates new opportunities, regulators, academics and consumer groups have raised concerns about access for underserved markets and about unfair discrimination that could arise from these advanced technologies.

In a paper published in March, we discuss these ethical and regulatory challenges and what industry organizations and regulators are doing about them. We also propose a framework that companies can use to navigate some of these challenges. Our paper focuses on life insurance, but algorithms used for underwriting and risk classification in other products, including consumer credit and home mortgages, raise similar ethical concerns. Importantly, in the wake of George Floyd’s murder last year, advisory clients have shown heightened interest in the social policies of the companies in which they invest, as expressed in the growing popularity of socially conscious and ESG investing.

Fair and unfair discrimination
To some extent, insurers need to discriminate — it’s a core component of the business. In life insurance, for example, gender and age are necessary variables to accurately assess mortality risk. This type of discrimination is considered fair.

However, unfair discrimination can occur when the variables used in risk classification or underwriting produce unjustified price differentials for a group protected under federal anti-discrimination laws. In response, many states have enacted restrictions or prohibitions on the use of variables that would cause unfair discrimination, such as race.

The increased use of big data and algorithms in classifying risk raises the concern that the system will learn to substitute seemingly neutral variables to stand in for race and other protected characteristics. In legal parlance, this is called proxy discrimination.

For example, data from mobile applications that share location, and location check-ins on social media, can serve as a proxy for zip codes, which correlate with race. The use of zip codes in life insurance underwriting is generally limited in nearly 80% of states, but when mobile location data stands in for geography, it tests the limits of those existing regulations.
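To make the mechanism concrete, here is a minimal sketch of how an insurer or auditor might test whether a supposedly neutral feature is acting as a proxy. This example is not drawn from our paper; the data set and column names are hypothetical. The idea is simple: if a basic model can recover the protected attribute from the candidate features well above the majority-class baseline, those features are likely standing in for it.

```python
# Hypothetical proxy-detection sketch: can location-derived features
# recover a protected attribute better than chance? The DataFrame and
# column names below are illustrative assumptions, not real data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_score(df: pd.DataFrame, candidate_cols: list[str], protected_col: str) -> float:
    """Mean cross-validated accuracy of predicting the protected
    attribute from the candidate features alone."""
    X = pd.get_dummies(df[candidate_cols], drop_first=True)  # encode categorical features
    y = df[protected_col]
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

# Example (hypothetical): accuracy well above the majority-class baseline
# suggests the location features are standing in for race.
# score = proxy_score(applicants, ["checkin_zip", "home_location_cluster"], "race")
```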

This can be problematic for consumer outcomes because the complexity of the algorithms and the evolving nature of learning systems may make it harder to identify if and when proxy discrimination is occurring. Moreover, since many insurers use third-party algorithm developers, even the insurers themselves may not be aware that the system is indeed discriminating.
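Because vendors rarely expose model internals, one practical option is to audit a model's outputs rather than its code. The sketch below computes an adverse-impact ratio, comparing favorable-outcome rates across groups; it is an illustrative check with hypothetical field names, not a regulatory standard or a method prescribed in our paper.

```python
# Hypothetical outcome audit for a third-party underwriting model.
# `decisions` holds the model's outcomes plus a group label collected
# for testing purposes; all names are illustrative assumptions.
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame, group_col: str,
                         favorable_col: str, reference_group: str) -> pd.Series:
    """Favorable-outcome rate of each group divided by the rate for the
    reference group. Ratios well below 1.0 flag potential disparate impact."""
    rates = decisions.groupby(group_col)[favorable_col].mean()
    return rates / rates[reference_group]

# Example usage (hypothetical):
# ratios = adverse_impact_ratio(offers, "race", "preferred_rate_offered", "White")
# print(ratios.sort_values())
```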

The current regulatory infrastructure was not designed with AI-enabled underwriting in mind. While many states have limited or prohibited insurers' use of certain individual characteristics in underwriting, it's impossible for regulators to predict and evaluate every factor now available through novel big data sources.

Financial inclusion
In addition to legal compliance issues, the use of big data may affect underserved communities' access to financial products and services. One concern is that the data itself may be biased.

For example, while credit scores are commonly used in underwriting, historically, Black and low-income communities have had limited access to credit or have been actively discriminated against when accessing credit. The data will reflect this bias.

Moreover, when financial services companies use big data to identify markets where they wish to promote their products, historical bias reflected in that data might lead to exclusion. Such exclusion further exacerbates the racial wealth gap, perpetuating existing inequalities. Seemingly innocuous factors, such as social media activity and wearable-device data like heart rate and sleeping habits, will also be limited where broadband access is unavailable in lower-income communities.

Navigating the path ahead
While the financial services industry is not expected to fix the societal issues that cause existing and potential bias, it should be aware of, and responsible for, how those factors affect its own products and processes.

Trust is key. As the National Association of Insurance Commissioners noted in its August 2020 framework for the ethical use of AI, public and stakeholder trust is critical for an organization’s success. Companies should consider how trust is affected when new and potentially less credible sources of data are used.

AI practices should reflect corporate culture. An ethical culture should reinforce ethical decision-making with regard to the use of data and AI. Fairness in internal processes, along with leadership’s transparency and openness to feedback, will create a foundation for ethical decision-making.

Industry action should support financial inclusion. Spurred by the recent societal focus on race relations in America, numerous financial services firms have made financial inclusion a strategic priority for the coming years. Industry leaders can develop standards and best practices to further the broader mission of expanding access to financial services for underserved and underrepresented groups.
