HUD’s new housing rule has an A.I. loophole that’s bad for America

There is growing support for the idea that artificial intelligence can help remove human bias from situations where institutional discrimination has persisted for decades. California recently passed a fair hiring resolution to encourage the development and use of algorithm-based technologies in job hiring decisions. AI has the potential to help end decades of discrimination in the home lending market too, but a newly proposed rule change by the U.S. Department of Housing and Urban Development may lead to the wrong kind of AI outcomes.

For years, policymakers have sought to reverse centuries of bias in our housing and lending markets to create a more inclusive economy. Despite some progress, housing in America remains more segregated today than it was in the 1920s. The black homeownership rate remains at crisis levels. At 40.6%, it is lower than it was in 1967, when redlining was legal, and far behind the 73.1% rate for non-Hispanic whites. This disparity drives severe inequities: the median white household has a staggering ten times the wealth of the median black household.

More financial institutions are turning to AI and machine learning algorithms to make underwriting decisions because they use more data and sophisticated math to spot good borrowers who might have been overlooked or who are too hard to score under traditional, exclusionary methods.

Lenders have been prohibited from intentionally discriminating against people of color and from creating what is known as disparate impact — lending policies and decisions that unnecessarily harm borrowers protected by our fair housing laws, even if the discrimination is unintentional. Together, these two standards — intentional discrimination and disparate impact — have been the pillars of our nation’s fair lending laws. They have compelled lenders to continually examine their policies and develop new practices to lessen discriminatory outcomes.

In recent years, these standards have led more lenders to AI. AI can provide access for communities that have been locked out of housing, credit, and other opportunities by discriminatory barriers. But AI models can also be unpredictable, and if used incorrectly they threaten to perpetuate the bias and unconscious prejudice that have plagued these markets for decades, the very harms the disparate impact standard was designed to counteract.

The HUD proposal threatens to squash the requirement for fair and transparent AI, and endanger critical legal tools for fighting lending and other forms of discrimination. That would work against the millions of Americans of color who face structural and systemic barriers when they attempt to get mortgages and other loans.

Few are thinking about this AI side effect of the rule change, but if lenders are allowed to adopt artificial intelligence in their underwriting and pricing without strong guardrails to ensure its fair and responsible use, it could end badly. And when it does, the backlash against AI will further set back the promise of this technology to drive financial inclusion.

HUD’s proposed rule would make it far harder for consumers to challenge discriminatory practices, rendering it virtually impossible to show that a lender is unnecessarily denying people credit. The changes would immunize lenders that use unfair credit underwriting models as long as they don’t use clearly discriminatory factors in making or rejecting a loan.

The problem with this change is that a number of underwriting criteria that appear harmless in isolation — such as length of credit history or number of credit inquiries — can turn out to discriminate against borrowers of color, especially when combined with other factors.
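
To see how combined features can become a proxy, consider a toy simulation in Python (all data is synthetic; the feature names, thresholds, and protected-class label are hypothetical, not drawn from any real lender). The model below never sees group membership, yet because it is trained on historically biased approvals, its approval rates diverge by group and fall below the EEOC's "four-fifths" rule of thumb for disparate impact:

```python
# Toy simulation (synthetic data; feature names are hypothetical, not
# any lender's actual model). The model never sees group membership,
# yet two individually modest features combine into a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)  # 1 = protected class (synthetic label)

# Each feature differs only modestly by group on its own...
history_years = rng.normal(12 - 3 * group, 4, n)
inquiries = rng.normal(2 + group, 1.5, n)

# ...but historical approvals reflect both differences at once.
past_approved = history_years - 2 * inquiries + rng.normal(0, 2, n) > 4

# Train on the biased history, without the group variable.
X = np.column_stack([history_years, inquiries])
approve = LogisticRegression().fit(X, past_approved).predict(X)

rate = [approve[group == g].mean() for g in (0, 1)]
print(f"approval rate, non-protected: {rate[0]:.1%}")
print(f"approval rate, protected:     {rate[1]:.1%}")
# An adverse-impact ratio below 0.8 (the EEOC "four-fifths" rule of
# thumb) is a standard disparate-impact red flag.
print(f"adverse-impact ratio: {rate[1] / rate[0]:.2f}")
```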

In response to existing fair lending laws, companies like ZestFinance have developed new credit scoring and underwriting technologies that use machine learning to identify bias in lending models and mitigate its impact without increasing risk. AI can also reduce the cost and difficulty of complying with fair lending requirements by letting lenders generate less discriminatory models automatically.
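
As one illustration of what generating a less discriminatory model automatically can look like in its simplest form, here is a minimal sketch. It is a generic leave-one-out feature search on synthetic data, an assumption on our part rather than ZestFinance's actual method: refit the model with each feature removed and flag any variant whose accuracy holds roughly steady while its adverse-impact ratio improves.

```python
# Simplified sketch of an automated "less discriminatory alternative"
# search (a generic leave-one-out approach, NOT ZestFinance's actual
# method; the data, features, and tolerance are made up).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)
features = {
    "history_years": rng.normal(12 - 3 * group, 4, n),  # group-skewed
    "inquiries": rng.normal(2 + group, 1.5, n),         # group-skewed
    "utilization": rng.normal(0.4, 0.15, n),            # group-neutral
}
names = list(features)
X_full = np.column_stack([features[k] for k in names])
y = (X_full[:, 0] - 2 * X_full[:, 1] - 3 * X_full[:, 2]
     + rng.normal(0, 2, n)) > 2

def evaluate(cols):
    """Fit on the chosen columns; return (accuracy, adverse-impact ratio)."""
    X = X_full[:, cols]
    pred = LogisticRegression().fit(X, y).predict(X)
    acc = (pred == y).mean()
    rates = [pred[group == g].mean() for g in (0, 1)]
    return acc, rates[1] / rates[0]

base_acc, base_air = evaluate(list(range(len(names))))
print(f"baseline           : accuracy {base_acc:.3f}, AIR {base_air:.2f}")

# Refit with each feature dropped; flag variants that keep accuracy
# within a 1-point tolerance while improving the adverse-impact ratio.
for i, name in enumerate(names):
    acc, air = evaluate([j for j in range(len(names)) if j != i])
    flag = "  <- candidate" if acc >= base_acc - 0.01 and air > base_air else ""
    print(f"drop {name:14s}: accuracy {acc:.3f}, AIR {air:.2f}{flag}")
```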

We need strong fair lending guidelines to ensure the development of the good kind of AI, the kind that is fair and transparent. Maintaining strong fair lending standards is also good for financial institutions. At a time when overall loan growth is slowing for the financial industry, finding ways to lend safely and profitably to underserved borrowers can help lenders increase their revenue by responsibly expanding their market. Our laws should support institutions that promote inclusive growth, not ones that relegate people of color to a permanent economic underclass.

Now is not the time to erode requirements around fair lending. Our government and financial institutions should be focused on adhering to and expanding fair lending principles, not destroying them. High fair lending standards encourage the development and use of better tools to analyze risks and improve loan profitability and fairness while expanding market opportunities. That’s smart business, good social policy, and essential to our nation’s economic growth and success.

By Lisa Rice, president and CEO of the National Fair Housing Alliance, and Douglas Merrill, founder and CEO of ZestFinance and former CIO of Google

The U.S. Department of Housing and Urban Development is accepting public comment on the proposal until Oct. 18.
