Unfair loans with AI? Don’t just come to us, say fintechs and online lenders

As banking regulators once again question the fairness of artificial intelligence in lending decisions, online lenders and fintechs agree the technology should be used carefully, transparently and with testing for bias and disparate impact.

Rohit Chopra, director of the Consumer Financial Protection Bureau, recently warned that artificial intelligence in lending decisions could lead to unlawful discrimination. Online lenders say that with the right guardrails and incentives in place, lending decisions made using AI are fairer than those made by traditional underwriting systems. They also argue the technology expands access to credit.

CFPB director Rohit Chopra said his agency would crack down on any lender deemed to be unlawfully discriminatory.

New regulatory scrutiny from a Democratic administration could bring new rules and enforcement that affect fintechs and banks that use AI in lending decisions, and the industry is okay with that.

“AI isn’t perfect,” said Jason Altieri, general counsel and chief compliance officer at online lender Happy Money and former general counsel at LendingClub. “If you have humans designing something, biases will creep in. This may be due to the designers or the limitations of the data they use.”

Still, Altieri says AI is less biased than human loan officers.

“The way it was done for years was you would walk in, ask your banker for a loan, and they would look at you and with absolute bias say: Do I know you, do you have an account here and do you look like me?” Altieri said. “That’s not how you make loans anymore. You don’t even know you were just a series of data points. That made it better for everyone.” Artificial intelligence also allows the use of nontraditional data, such as whether a borrower is current on rent and utility payments, he said.

“I absolutely think that’s the way to go,” Altieri said.

Upstart and the CFPB

This debate started five years ago, when Richard Cordray was the director of the CFPB and online lenders that use AI in their underwriting, like LendingClub, Prosper, OnDeck and Upstart, were growing rapidly. The bureau entered into an agreement with Upstart under which it gave the company a no-action letter (essentially a promise not to sue the company for its use of automated underwriting) in exchange for quarterly data on the lender’s loan approvals and denials and its handling of fair-lending rules. The no-action letter was renewed in 2020.

“I would say there has been no significant change in [the no-action letter] process in the last three administrations,” said Nat Hoopes, head of public policy and regulatory affairs at Upstart. “It’s data sharing – what are the model variables, all the information that the CFPB [requests] to push and push on it – and get feedback and be pretty transparent about everything we do. And then taking their input and moving on to the next phase.”

The CFPB is getting a lot of transparency out of this process, “not just from Upstart, but about AI in general, and how AI can benefit borrowers of color, in particular,” Hoopes said.

In 2019, the CFPB compared the results of Upstart’s AI-based loan decision model to a more traditional model and published the results.

“The tested model approves 27% more applicants than the traditional model and yields a 16% lower average [annual percentage rate] for approved loans,” the CFPB wrote. “This reported expansion of access to credit reflected in the results provided occurs across all tested race, ethnicity, and gender segments, with the tested model increasing acceptance rates by 23% to 29% and lowering average APRs by 15% to 17%.”

Under the Chopra-led CFPB, “You have to be prepared for more scrutiny,” Hoopes said. “You really need to check your algorithms and make sure they’re compliant with the Equal Credit Opportunity Act and disparate impact rules and that you’re doing all of this testing the right way. But I also see a huge focus [from Chopra] on any incumbent provider that has a de facto monopoly, where the consumer is not getting a good deal.”

This time around, regulators are more educated, thanks in part to the CFPB’s work with Upstart.

“The machine learning analysts they spoke to know this very well,” said Teddy Flo, general counsel at Zest AI, a former online lender that now provides banks with AI-powered lending software. “They’ve really improved their understanding of the technology and are now asking much more specific questions: Are you testing for disparate impact? Are you testing for disparate treatment? What explanation method do you use?” Regulators are also taking a closer look at the data sources used to train the models, and whether they are consistent, he said.
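Those disparate-impact questions typically boil down to comparing outcomes across demographic groups. As a minimal, hypothetical sketch of one common form of that check, the adverse impact ratio measured against the widely cited four-fifths benchmark, the Python below uses made-up column names, toy data and an assumed 0.8 cutoff; it is an illustration, not any lender’s or regulator’s actual methodology.

```python
# Hypothetical adverse-impact-ratio check: approval rate of each group divided by
# the approval rate of a reference group, flagged against a 0.8 ("four-fifths") benchmark.
import pandas as pd

def adverse_impact_ratio(decisions: pd.DataFrame, reference_group: str) -> pd.Series:
    """Each group's approval rate divided by the reference group's approval rate."""
    approval_rates = decisions.groupby("group")["approved"].mean()
    return approval_rates / approval_rates[reference_group]

# Toy data: group B's approval rate trails group A's.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1],
})
ratios = adverse_impact_ratio(decisions, reference_group="A")
print(ratios[ratios < 0.8])  # groups falling below the four-fifths benchmark
```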

“Algorithms can never be bias-free”

Chopra said that algorithms can never be free from bias and may result in unfair credit determinations for consumers.

AI lending technology providers agree that there are dangers in using AI in lending decisions.

“Left to their own devices, there is no doubt that these algorithms will discriminate when encountering a subpopulation that is not well represented in the data,” said Kareem Saleh, founder and CEO of FairPlay, a developer of software that tests loan decisions for signs of bias, disparate impact and discrimination. People of color, for example, have historically been excluded from the lending system, so less data is available on how their loans perform.

“When you come across these people, having very little data about them, the algorithm will necessarily assume they are riskier,” Saleh said.

Algorithms need to be tested to see if their results are fair, Saleh said. But there’s also an opportunity for an algorithm to pick up on signals that applicants from disadvantaged groups resemble good borrowers who have already been vetted.

FairPlay’s software can take a second look at rejected applicants using information about underrepresented populations. For example, if the model encounters someone with an inconsistent employment history, rather than automatically assuming they are riskier, it can recognize that women sometimes take career breaks, and that doesn’t make them irresponsible borrowers.

When FairPlay reviews loan decisions, “we find that 25% to 33% of the time, the best-performing applicants of color and women who are turned down would have performed as well as the riskiest white men that most lenders approve,” Saleh said.

All of this assumes that lenders using AI have good intentions, which is not always the case.

“There are players who don’t do it well, who don’t explain their decisions with precise methods, and who don’t focus enough on fair lending,” said Zest AI’s Flo. That is why he and others agree with the principles set out by the CFPB.

“It’s not the AI, it’s the intent to reduce racial disparity that’s important,” said Chi Chi Wu, an attorney at the National Consumer Law Center.

Calling AI a black box

Lorelei Salas, assistant director of supervision policy at the CFPB, warned in a recent blog post that the bureau plans “to monitor the use of algorithmic decision tools more closely, given that they are often ‘black boxes’ with little transparency.”

This scrutiny is not new.

Jason Altieri was general counsel at LendingClub in 2014 when the Securities and Exchange Commission asked the company to include its lending algorithms in the S-1 document it was preparing before its IPO.

Altieri had two objections: first, no one would understand the algorithms. Second, “it’s like asking Coke to make the recipe for Coke available to the public. You cannot do that. You have to respect that these companies are there to provide credit on better terms to more people, but they are companies. And so you’re going to have to tolerate not knowing how it works, but then trust that the [fair-lending] tests take care of that.”

Transparency comes from testing individual variables and groups of variables to ensure there is no proxy for a protected class, Hoopes said. And all outputs from the model must be subject to fair lending testing and disparate impact testing.
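Hoopes did not spell out the mechanics of that proxy testing. One simple form such a check can take, sketched below as a hypothetical illustration only, is to measure how well a single candidate variable predicts membership in a protected class: a variable that predicts it strongly may be standing in for that class. The function name, synthetic data and any cutoff are assumptions, not Upstart’s actual process.

```python
# Hypothetical proxy check: how well does one candidate variable predict membership
# in a protected class? AUC near 0.5 means little proxy power; near 1.0 is a red flag.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def proxy_score(feature: np.ndarray, protected_class: np.ndarray) -> float:
    """AUC of a one-variable model predicting the protected class from the feature."""
    x = feature.reshape(-1, 1)
    model = LogisticRegression().fit(x, protected_class)
    return roc_auc_score(protected_class, model.predict_proba(x)[:, 1])

# Synthetic example: a feature that partially tracks group membership.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
feature = 0.8 * group + rng.normal(size=1000)
print(f"proxy AUC: {proxy_score(feature, group):.2f}")
```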

“You need to have acceptable fair lending outcomes, but you should also try to expand access to credit,” Hoopes said. “According to the Urban Institute, only 20% of Black borrowers have a credit score above 700, compared to 50% of white borrowers. So if you rely on a traditional underwriting model with a credit score threshold of 700, you are denying a chance to so many minority borrowers who have never defaulted on their loans, but who have somewhat lower scores.”

Companies using AI for lending today believe that the real black boxes are the traditional models used by banks to calculate credit scores and weigh credit decisions.

When Zest produces a model, Flo said, it comes with a 50 to 100-page report explaining each feature of the model, how it works, and its importance for each decision. It also provides a 20-page Fair Lending Report that outlines each variable’s contribution to a disparate impact.

“It checks that there is no disparate treatment,” Flo said. “This is what machine learning practitioners are putting in the hands of lenders. Whereas FICO gives you a three-digit number and one of 20 codes explaining the decision, with no transparency on how they arrived at it.” FICO did not respond to a request for comment by press time.

One company working to eliminate bias in its models is Happy Money in Tustin, Calif., an online lending partner of several banks and credit unions. The company runs FairPlay’s software alongside its lending algorithms and, like its peers, regularly tests for bias. Most lenders today test their lending decisions for possible fair-lending violations on a quarterly basis, Altieri said.

“I can take a potential model, run it through, and be able to see in real time if I inadvertently stumble upon something where people of color are denied, and if so, not deploy it and redesign it,” Altieri said. “That’s the key thing here: very fast feedback. Historically, it took a long time to see what the real effects of a model were.”
