Zillow Intros Tool to Mitigate Bias in AI-Powered Conversations


Because many AI tools disregard fair housing requirements and can perpetuate bias when deployed, Zillow is releasing a Fair Housing Classifier that establishes guardrails to promote responsible and unbiased behavior in real estate conversations powered by large language model (LLM) technology.

This tool focuses on mitigating the risk of illegal steering — the practice of influencing a buyer’s choice of communities based on the buyer’s legally protected characteristics under federal law, Zillow says in a release.

“Since 2006, Zillow has used AI to bring transparency to home shoppers, powering tools like the Zestimate,” says Josh Weisberg, senior vice president of artificial intelligence for Zillow. “We’ve made it our business to increase transparency in real estate — open sourcing this classifier demonstrates that advancements in technology do not need to come at the expense of equity and fairness for consumers. We’re offering free and easy access so that others in civil rights, tech and real estate sectors can use it, collaborate and help improve it.”

The Fair Housing Classifier acts as a protective measure to encourage more equitable conversations with AI technology. It detects questions that could lead to discriminatory responses about legally protected groups in real estate experiences, such as search or chatbots.

The classifier identifies instances of noncompliance in the input or the output, leaving the decision of how to intervene in the hands of system developers.
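The pattern described above — screening both the user's input and the model's output, while leaving the intervention to the developer — can be sketched as follows. This is a hypothetical illustration only: `is_noncompliant` here is a simple keyword stand-in, not Zillow's actual classifier (which is a trained model), and the function names and refusal messages are invented for this example.

```python
# Hypothetical guardrail sketch. The stand-in classifier below is NOT
# Zillow's Fair Housing Classifier; it only illustrates where such a
# classifier would sit in a chatbot pipeline.

PROTECTED_TERMS = {"race", "religion", "national origin", "disability"}

def is_noncompliant(text: str) -> bool:
    """Stand-in classifier: flags text that mentions a legally protected
    characteristic. The real classifier is a trained model, not keyword rules."""
    lowered = text.lower()
    return any(term in lowered for term in PROTECTED_TERMS)

def guarded_reply(user_message: str, generate) -> str:
    """Check the input and the model's output; how to intervene
    (here, a canned refusal) is the system developer's choice."""
    if is_noncompliant(user_message):
        return "I can't help with requests based on protected characteristics."
    reply = generate(user_message)
    if is_noncompliant(reply):
        return "I can't provide that information."
    return reply

# Usage with a trivial stand-in for an LLM:
echo = lambda msg: f"Here are listings for: {msg}"
print(guarded_reply("Find neighborhoods with parks nearby", echo))
print(guarded_reply("Which neighborhoods match my race?", echo))
```

Note that the classifier itself only flags noncompliance; whether to refuse, rephrase, or log the interaction remains a design decision for the developer, as the article states.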

“In today’s rapidly evolving AI landscape, promoting safe, secure and trustworthy AI practices in housing and lending is becoming increasingly important to protect consumers against algorithmic harms,” says Michael Akinwumi, Ph.D., chief responsible AI officer at the National Fair Housing Alliance. “Zillow’s open-source approach sets an admirable precedent for responsible innovation. We encourage other organizations and coalition groups to actively participate, test, and enhance the model and share their findings with the public.”

Photo: Steve Johnson
