Front Office

In this month’s issue, we have two articles that discuss the importance of accurate valuations for REO properties. In “The REO Landscape Is Shifting, But The Formula For Success Is The Same,” Brent Taggart of Green River Capital says having accurate valuations “calibrated to the properties and changing market conditions is critical to the REO sales process,” while in “Servicers Seeing More Rural Properties On Their Books,” Keith Guenther, CEO and founder of USRES, says the accurate valuation of REO properties “is of the utmost importance” to servicers. (However, this can be a challenge when properties are isolated in rural areas.)

I think the timing of these articles couldn’t be better, as there have been cries as of late that the appraisal process is too subjective and that appraisal management companies and their partners should be making better use of available data when performing traditional appraisals. I realize that the preferred valuation tools for REO properties are the broker price opinion and automated valuation model, but servicers also use traditional appraisals, when needed, because they are the only way to comprehensively assess the current condition of an REO property.

A recent post on CoreLogic’s blog argues that the appraisal process is too subjective, particularly with regard to comparables, and that technology can play a larger role in standardizing the process so that appraisals are more defensible. Blogger Michael G. Bradley suggests the task of comparing properties and making the appropriate adjustments based on those comparisons is simply too complex for an appraiser to carry out accurately. In other words, why make the comparison based only on a sampling of surrounding comparable properties? Why not use all of the data that is available on ALL the properties in the surrounding area?

“How, exactly, does the appraiser determine which property features to adjust and how much those adjustments should be?” Bradley asks rhetorically. “Machines, aided by huge amounts of data and advanced statistical models, do a much better job of performing these tasks.”
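To make Bradley’s point concrete, the kind of model he is describing can be sketched in a few lines. This is a minimal, hypothetical illustration (not CoreLogic’s actual methodology, and the sales data is invented): rather than an appraiser hand-picking a few comps and adjusting them by judgment, a linear (hedonic) regression fit over many area sales derives each feature adjustment from the data itself.

```python
# Hypothetical sketch of data-derived comparable adjustments.
# All figures are invented for illustration only.
import numpy as np

# Each row of a recent-sales dataset: [square_feet, bedrooms, bathrooms].
features = np.array([
    [1500, 3, 2],
    [2100, 4, 3],
    [1200, 2, 1],
    [1800, 3, 2],
    [2500, 4, 3],
    [1650, 3, 2],
])
prices = np.array([210000, 305000, 155000, 250000, 360000, 232000])

# Add an intercept column and solve ordinary least squares.
# The fitted coefficients are the data-derived "adjustments":
# dollars per square foot, per bedroom, per bathroom.
X = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(X, prices, rcond=None)

# Value a subject property (1,700 sq. ft., 3 bed, 2 bath)
# by applying those adjustments instead of subjective judgment.
subject = np.array([1, 1700, 3, 2])
estimate = subject @ coef
print(f"estimated value: ${estimate:,.0f}")
```

In a real system the feature set would be far richer (location, condition, lot size, sale date) and the model more sophisticated, but the principle is the one Bradley describes: the adjustments come from all of the surrounding sales data, not from a handful of comps.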

Bradley suggests that an appraiser cannot take into account all of the data on comparables in any given area because the human brain simply doesn’t have that much computing capacity. It’s akin to Garry Kasparov’s 1997 rematch against IBM’s Deep Blue, which he ultimately lost because the machine could evaluate vastly more positions than any human ever could.

He points to the results of a test conducted during the 2014 CoreLogic RiskSummit that revealed significant inconsistencies when two or more appraisals were performed on the same property. As such, “There seems to be no defensible rationale for the adjustments appraisers make to comparable properties when performing their work,” Bradley writes. He concludes, therefore, that it “might be time to reengineer the process appraisers use to make adjustments to comparable homes.”

Considering that practically every other aspect of the mortgage process has more or less been “standardized” through regulation at this point, I agree that now is a good time to take a closer look at the appraisal process - in fact, all valuation processes - to ensure not only improved accuracy, but also better consistency. There’s no question in my mind that utilizing more data - and getting some of the subjectivity out of the process - will aid all stakeholders in the industry, including servicers tasked with valuing bank-owned homes.
