(This is part two of a two-part Q&A focused on how automated underwriting systems have evolved and how they have changed the mortgage process. To read part one, click here.)
The automated underwriting system (AUS) has evolved to become an essential tool in the mortgage lending software ecosystem. Typically integrated with a lender’s loan origination system (LOS), the AUS and related “decisioning engines” work in conjunction with automated verification systems to quickly render accurate preliminary approvals, or recommendations, for mortgage applicants. Those recommendations are based on data culled from a wide variety of public and private databases, coupled with lender, investor and regulatory guidelines. As such, these systems are crucial for automating the online mortgage process – which has advanced by leaps and bounds in recent years – and for helping lenders gain operational efficiency by significantly reducing the need for manual underwriting.
For part two of our series on recent advances in AUS technology, MortgageOrb interviewed Ben Wu, executive director at LoanScoreCard, a provider of automated underwriting and compliance solutions, and Joey McDuffee, director at Wipro Gallagher Solutions, a developer of mortgage lending systems that also offers a custom AUS.
Q: What is an AUS? Where does this software typically reside, and is it always integrated with a lender’s LOS?
Wu: Basically, an AUS analyzes loan data and borrower credit information to recommend a mortgage approval decision. The term “automated underwriting” can mean many things: It can refer to anything from the agency systems, such as Fannie Mae’s Desktop Underwriter (DU) or Freddie Mac’s Loan Prospector (LP), to more limited engines that are part of an LOS, to an end-to-end AUS for non-agency loans, such as LoanScoreCard’s Custom AUS. Our AUS is cloud-based and typically integrated with a lender’s LOS. But it can also run outside an LOS, as long as you can import 1003 data and credit information into it.
Lenders originating both conventional and jumbo products will often use multiple engines – one to underwrite agency and government loans, such as DU or LP, and one to underwrite non-agency loans. Five years ago, 100% of loans being originated were agency loans, but since then, the non-agency space has grown significantly. Although it’s still a small portion of overall mortgage originations – representing less than 20% of all mortgages – it grew approximately 40% last year. A custom AUS allows lenders originating non-agency loans to customize credit decisioning and provides them with an assessment report for these loans so that they have an audit trail for underwriting and ability-to-repay decisions.
McDuffee: Automated underwriting is simply a systematic, computer-based, algorithmic loan underwriting decision. Historically, it sometimes took lenders 45 to 60 days to manually underwrite loans through very laborious investigative processes. Automated underwriting engines were introduced in the early 1990s by the secondary market – most notably Freddie Mac and Fannie Mae – drawing on their extensive datamarts. These engines aggregate data from multiple sources and combine it with product and investor guidelines to formulate an automated loan decision.
Underwriting engines are available online for submitting loans to investors, which is their most conventional use. However, many LOS platforms also incorporate these engines to help users originate, process and close loans. There are varying levels of LOS sophistication: Some are basic and rely on the investors’ underwriting engines, while other LOS providers build logic into their systems that assists with compliance, loan eligibility and product eligibility.
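In broad strokes, the decision logic both executives describe can be sketched in a few lines of code. The following is a minimal, hypothetical illustration in Python – the field names, thresholds and findings messages are placeholders, not any vendor’s actual rules or data model:

```python
# Minimal, hypothetical sketch of a rules-based underwriting recommendation.
# Field names and thresholds are illustrative only, not any vendor's actual rules.
from dataclasses import dataclass

@dataclass
class LoanApplication:
    loan_amount: float
    property_value: float
    monthly_income: float
    monthly_debts: float
    credit_score: int

def recommend(app: LoanApplication, guidelines: dict) -> dict:
    """Evaluate the application against guideline thresholds and return a
    recommendation plus findings (an audit trail of the rules that fired)."""
    findings = []
    ltv = app.loan_amount / app.property_value
    dti = app.monthly_debts / app.monthly_income

    if app.credit_score < guidelines["min_credit_score"]:
        findings.append(f"Credit score {app.credit_score} below minimum")
    if ltv > guidelines["max_ltv"]:
        findings.append(f"LTV {ltv:.0%} exceeds maximum {guidelines['max_ltv']:.0%}")
    if dti > guidelines["max_dti"]:
        findings.append(f"DTI {dti:.0%} exceeds maximum {guidelines['max_dti']:.0%}")

    status = "Refer to underwriter" if findings else "Approve/Eligible (preliminary)"
    return {"recommendation": status, "findings": findings}

guidelines = {"min_credit_score": 680, "max_ltv": 0.80, "max_dti": 0.43}
print(recommend(LoanApplication(400_000, 550_000, 12_000, 3_800, 710), guidelines))
```

In a real system, the application data would come from the 1003 and credit report feeds mentioned above, and the guideline thresholds would reflect agency, investor or custom program rules.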
Q: Besides Fannie’s DU and Freddie’s LP, how many different underwriting engines are currently being used in the industry?
Wu: A lot of companies claim to offer automated underwriting engines, but they’re really just providing rules engines. We’re currently the only alternative to DU and LP that offers lenders true customization.
Interestingly, before the crisis, Fannie Mae offered a product called Custom DU. According to some of our customers who remember Fannie’s Custom DU, the difference between our Custom AUS and Fannie’s Custom DU is that Custom DU takes the DU guidelines and then customizes them with overlays, making them stricter. The problem with that is that they’re still working off a base that is an agency program. Our Custom AUS has no agency “scaffolding” or content that needs to be dismantled first. All rules and all messaging are built from the ground up.
McDuffee: Customizations of underwriting engines depend on the lender’s propensity to originate marketable loans. If a lender delivers its loans to the secondary market, customizations may not be a priority. However, for lenders that portfolio loans and wish for their systems to provide assistance and control with their originations, a customizable solution should be considered.
There are a number of examples that are available today. Our LOS platform, NetOxygen, was released in 2001, and our first client used it solely as a decision engine to assess risk when purchasing loans based on any number of criteria. As data was updated or acquired, the system automatically updated status, rate, price and all associated parameters with no manual intervention. With NetOxygen’s extensibility, any lender or customer can add any parameter to the system, calculate an overall risk score or risk-based price, and dynamically assign conditions to the loan.
For example, during the subprime years, many institutions either built their own automated underwriting engines or leveraged a third-party LOS like ours that incorporated a rules-based AUS engine that mainly leveraged credit report data to calculate risk. As AUS progressed, lenders used any and all data about the loan, including borrower, credit, property and product eligibility thresholds, to create custom risk scores and risk grades in order to determine customer and loan viability. Using these parameter-based technologies, lenders could quickly come to a decision. Corresponding workflow routing and risk-based pricing would then be systemically applied based on these custom models and scores.
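The custom risk scores and risk-based pricing McDuffee mentions are typically just parameterized functions of loan attributes. A rough, hypothetical sketch of the idea – with invented weights, grades and pricing add-ons – might look like this:

```python
# Hypothetical parameter-based risk scoring and risk-based pricing sketch.
# Weights, grades and pricing adjustments are purely illustrative.
def risk_grade(credit_score: int, ltv: float, dti: float) -> str:
    score = 0
    score += 2 if credit_score >= 740 else (1 if credit_score >= 680 else 0)
    score += 2 if ltv <= 0.70 else (1 if ltv <= 0.80 else 0)
    score += 2 if dti <= 0.36 else (1 if dti <= 0.43 else 0)
    return {6: "A", 5: "A", 4: "B", 3: "B"}.get(score, "C")

def price_adjustment(grade: str) -> float:
    # Illustrative add-on to the base rate, in percentage points.
    return {"A": 0.0, "B": 0.375, "C": 0.875}[grade]

grade = risk_grade(credit_score=720, ltv=0.78, dti=0.40)
print(grade, price_adjustment(grade))  # B 0.375
```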
Q: How are automated underwriting engines becoming faster and “smarter”? Is it processing speed? Ease of integration with other databases? Better algorithms?
Wu: Over time, processing and response times get faster because computers get faster. It’s just a natural evolution. But one technology that has really made AUS faster (which we utilize with all of our engines) is cloud computing. The cloud is elastic, meaning that we can instantaneously add computing power to our cloud servers to ramp up and stay ahead of demand. So, if we start to sense that things are slowing down (i.e., it’s taking four or five seconds to return a response rather than three seconds), we can quickly adjust and immediately return to our prior performance benchmark. We’ve applied this to our QM Findings Engine, which delivers a QM Findings Report that provides immediate assurance to both lenders and investors that loans meet QM guidelines. Since its release in August 2013, we’ve generated 4 million QM Findings Reports. So, we’re used to high-demand processing.
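The elastic-scaling behavior Wu describes is usually delegated to a cloud provider’s autoscaling service, but the underlying logic can be sketched simply. Here is a hypothetical illustration that watches recent response times and adjusts capacity (the thresholds and the scaling action are placeholders, not LoanScoreCard’s actual infrastructure):

```python
# Illustrative sketch of latency-driven scaling, as described above.
# Real deployments would use their cloud provider's autoscaling service
# rather than hand-rolled logic like this.
import statistics

TARGET_SECONDS = 3.0       # desired response time
SCALE_UP_THRESHOLD = 4.0   # add capacity once latency drifts past this

def check_and_scale(recent_latencies: list[float], current_instances: int) -> int:
    p95 = statistics.quantiles(recent_latencies, n=20)[18]  # rough 95th percentile
    if p95 > SCALE_UP_THRESHOLD:
        return current_instances + 1   # placeholder for a real scale-out call
    if p95 < TARGET_SECONDS and current_instances > 1:
        return current_instances - 1   # scale back in when comfortably fast
    return current_instances

print(check_and_scale([2.8, 3.1, 4.6, 5.0, 4.8, 4.4, 3.9, 4.7], current_instances=4))
```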
In terms of becoming smarter, that’s also a natural progression as we build the more complex underwriting analysis logic that our clients demand of us. Because what we offer is customizable, we continue to innovate alongside our customers. As they develop and tweak their credit policies (hopefully based on sound historical data), we develop ways to automate what it is they want to calculate or base approval on.
Rendering decisions quickly is important for borrower satisfaction, as well as improved efficiency and productivity. The only factor that could really slow you down is if the lender’s Internet or cloud goes down, but then that’s a classic concern with all Internet-based computing.
McDuffee: Today, all types of technologies are becoming smarter and quicker as more and more data is made available to such systems. As we add new fields and real-time data to the engine, developers are able to build rules that incorporate this new data and make smarter decisions. With the right rules, the engine has more quality information to work with and can make faster decisions, expediting processing times. As the mortgage industry continues to embrace big data and analytics, more aspects of the loan, borrower and collateral, along with economic factors, will be taken into account. As a result, risk profiles can change as the loan is processed so that each lender knows the “up-to-the-minute” risk associated with providing the loan.
Although the mortgage industry is still in the beginning phases of exploring the “next frontier” of big data, you can see that machine learning, robots and neural networks are already in use in other industries, and as a result, decision-making is becoming much more precise, based on a given set of circumstances.
Q: Today, how easy is it for a lender to change its lending parameters (overlays) around specific products within an automated underwriting engine’s interface “on the fly”? Is this something that most lenders can do on their own – or do they typically need to go back to the vendor?
Wu: Lenders that are originating agency products often have overlays to increase their comfort levels beyond what they can sell to the government-sponsored enterprises or the Federal Housing Administration. Because non-agency products are governed by what investors want, not by what an agency can buy, overlays aren’t usually associated with non-agency underwriting engines.
These programs stand on their own. We can do overlays, but that’s not what a truly customizable AUS is supposed to do. As I mentioned before, we build our AUS from the ground up to align with our clients’ lending criteria. They hand us a 200-page PDF, and we give them back their own AUS, equivalent to DU or LP, branded with their loan program name and their company logo. We are not custom coding on our end by any means. We have a true engine. Everything is parameterized. We make changes on behalf of the lender. We test it on our end. We give it to the lender to test. And then we mutually decide to roll it into the market. On our end, it is analysts flipping switches, not engineers writing code. So, we can make changes quickly.
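The “flipping switches, not writing code” approach generally means the rules live as data rather than as program logic. A minimal, hypothetical sketch of such a parameterized rule set, evaluated by a generic engine, might look like this (field names and limits are invented):

```python
# Sketch of a fully parameterized rule set: the engine code never changes;
# analysts adjust the parameters (here a plain list of dicts, in practice a
# managed configuration store). All field names and limits are hypothetical.
import operator

OPS = {"<=": operator.le, ">=": operator.ge, "<": operator.lt, ">": operator.gt}

# "Flipping switches": each rule is data, not code.
RULES = [
    {"field": "credit_score", "op": ">=", "value": 660,  "message": "Minimum credit score 660"},
    {"field": "ltv",          "op": "<=", "value": 0.80, "message": "Maximum LTV 80%"},
    {"field": "dti",          "op": "<=", "value": 0.43, "message": "Maximum DTI 43%"},
]

def evaluate(loan: dict, rules: list[dict]) -> list[str]:
    """Return the messages for every rule the loan fails."""
    return [r["message"] for r in rules
            if not OPS[r["op"]](loan[r["field"]], r["value"])]

print(evaluate({"credit_score": 655, "ltv": 0.75, "dti": 0.41}, RULES))
# ['Minimum credit score 660']
```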
McDuffee: In many cases, it is dependent on the platform used. When lenders have developed a sophisticated, behemoth AUS in-house, it is almost solely controlled by IT departments and developers and requires an “IT event” in order to deploy new changes.
However, with many third-party, off-the-shelf technologies, these platforms are designed for the business user to change parameters “on-the-fly” via simple point-and-click interfaces. As with all higher-level software systems, more and more businesses want to quickly and easily deploy changes that reflect the rapidly changing business conditions that affect each lender on a daily basis.
Q: How “templatized” are today’s mortgage underwriting engines? Are there basic designs that can be purchased or “borrowed” and then further developed to meet a lender’s specific needs?
Wu: I wouldn’t say there are any “templatized” automated underwriting engines for non-agency loans. But it would be worthwhile for the industry to define a standardized non-agency loan program to create liquidity, and it will take somebody with significant clout to do that. Once that standard is defined, the thing that will make it work end-to-end is an AUS, so that lenders don’t have to publish a 200-page PDF that will never be read and then issue updates every three months with different guidelines that will also never be read.
Originators just can’t keep up with all of those changes. It would be great if all you had to do was give them an AUS engine and then every counterparty within that supply chain could run the exact same AUS engine. That would give Wall Street investors the confidence and transparency they need to confirm they’re buying reliable, consistent assets.
McDuffee: Some systems are more robust than others, but for the most part, each third-party LOS will have an out-of-the-box solution that can be configured by the lender. The more advanced systems will more than likely support a larger diversity of products, data points and rule sets to handle a wide variety of product eligibility rules and different investor guidelines.
Fannie and Freddie handle their own sets of guidelines and eligibility rules, which are specific to them. Extensible LOS platforms such as ours also accommodate other investors’ guidelines and can handle different channels of business, such as correspondent and wholesale lending.
Q: Can today’s automated underwriting engines “self update” based on new rules and regulations? Is that generally handled by the vendors via software updates, or is it mostly manual?
Wu: It depends on what you mean by “self updating.” Because we’re a custom AUS, we listen to whoever’s defining the loan program. So we do not add or delete rules. We do exactly what our clients’ credit policies ask us to do – whether they change every day or once a year. We make the changes, and then they are instantly made available to their third-party originators because they’re in the cloud.
Once those new guidelines are in effect, when that broker/correspondent tries to hit the engine, it will bring back the new guidelines and a different set of prescriptive instructions. So, it’s self-updating from that standpoint, which makes it easier for the end originator to keep up with changes. They don’t have to send memos with updates to a 200-page document. They don’t have to do anything.
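Conceptually, this kind of cloud-hosted self-updating comes down to publishing versioned guideline sets that every originator request resolves against, so the latest version is always served. A simplified, hypothetical sketch (the structure and names are invented, not LoanScoreCard’s actual design):

```python
# Toy version store: the latest published guideline set is what every
# originator request receives. Purely illustrative.
import datetime

class GuidelineStore:
    def __init__(self):
        self._versions = []

    def publish(self, rules: dict) -> None:
        self._versions.append({
            "effective": datetime.datetime.now(datetime.timezone.utc),
            "rules": rules,
        })

    def current(self) -> dict:
        # Every broker/correspondent hit resolves to the newest version.
        return self._versions[-1]

store = GuidelineStore()
store.publish({"max_ltv": 0.80})
store.publish({"max_ltv": 0.75})    # credit policy tightened
print(store.current()["rules"])      # {'max_ltv': 0.75}
```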
McDuffee: As mentioned previously, as more and more sophisticated AUS are developed and modified, these systems will leverage other non-traditional data sources to learn about a given borrower and loan. In some cases, the data accessed will be continuously updated from various sources. Others will still leverage manual sources.
As sophisticated technologies, such as neural nets and self-learning systems, are incorporated into more and more systems, actual scorecard thresholds can also be updated automatically based on historic loan performance.
Q: How much can today’s automated underwriting engines “self learn”? Is this seen as an important functionality?
Wu: Our job as an AUS provider is to enable our clients to roll out the credit policy they want. We need to facilitate a consistent manufacturing process, show evidence of adherence to those program guidelines, and provide transparency to the buyers and sellers of that asset. In that sense, we are agnostic about what is “good” credit policy and what is “bad” credit policy. Our only job is to make the asset that is being created consistent so that in the end, when you have a pool of a billion dollars’ worth of jumbo loans, you’re comparing apples to apples – and there are no oranges in there. Lenders derive their own credit policies through their own experiences – whether they have copied a bigger player they admire, invented something new, or got burned 30 years ago. We simply help them ensure a consistent loan manufacturing process and enable the entire supply chain (originator, aggregator, investor, etc.) to get to the same final answer.
Q: What safeguards are typically put in place to ensure that an automated underwriting engine remains properly programmed? What types of checks and balances are used to ensure system automation does not go “awry”?
Wu: DU and LP are black boxes. Our Custom AUS is just the opposite: It’s a clear box. Let’s say, for example, an underwriter is given an approval on a particular agency loan but has no idea why. There’s nothing in the published guidelines that says this loan would be approved. So the underwriter says, “Nobody touch it,” because if you sneeze, it might come back “not approved.” That’s the mentality with the current DU and LP.
Custom AUS is completely transparent in that our clients know exactly what the engine does, every single data point that the engine will respond to, every calculation it’s going to make, every message that it may or may not fire, and so on. To ensure that the engine is working properly, we give the engine back to our clients so that they can test it within their own environment before they go to market. They can test it against historic loans that they manually approved or denied to make sure it’s behaving as it should be.
Depending on the institution, how many resources it has and how careful it wants to be, some of our clients test it for a few days, while others will take months testing. We’ll go along with whatever they’re comfortable with.
McDuffee: In more modern AUS platforms, segmentation of responsibilities is built in, along with very granular auditing. In this manner, every change can be tracked – who made it and what it was – and changes can be rolled back if necessary. Typically, a number of environments are deployed to move changes through development/configuration, staging, testing and, finally, production.
Additionally, for most rules engines and AUS platforms, a separate set of data is available to run “champion/challenger” models to predict the outcome of any change or of a given portfolio before moving into production. Using scrubbed data that reflects actual historic loan data, lenders can better predict how a small change in a scorecard threshold or the addition of another attribute can affect automated approvals/declinations/referrals on their current pipeline.
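A champion/challenger back-test of the sort McDuffee describes amounts to replaying historic loans through both the current rule set and the proposed change and comparing the outcomes. A simplified, hypothetical sketch (illustrative data and thresholds only):

```python
# Sketch of a champion/challenger back-test: replay scrubbed historic loans
# through the current rule set and a proposed change, then compare outcomes.
# Data and thresholds are illustrative only.
def decide(loan: dict, min_score: int) -> str:
    return "approve" if loan["credit_score"] >= min_score else "refer"

historic_loans = [
    {"id": 1, "credit_score": 705},
    {"id": 2, "credit_score": 668},
    {"id": 3, "credit_score": 690},
]

champion = {"min_score": 660}    # current production threshold
challenger = {"min_score": 680}  # proposed tightening

changed = [
    loan["id"] for loan in historic_loans
    if decide(loan, champion["min_score"]) != decide(loan, challenger["min_score"])
]
print(f"{len(changed)} of {len(historic_loans)} decisions would change: {changed}")
# 1 of 3 decisions would change: [2]
```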
Q: What basic things can be done to prevent AUS from rendering “false approvals” and “false denials”? How often does this happen, and how can these errors be caught quickly and rectified?
Wu: The idea that an AUS renders an approval or denial is a common misconception. An AUS never actually renders an approval or denial decision – that’s the underwriter’s job. It simply gives a recommendation to the underwriter, so the underwriter can always override it. That doesn’t happen all that often, but it does happen occasionally – usually on the more complex loan decisions. The AUS handles the vast majority of loans, which frees up the underwriter to spend more time with the “exceptions.”
McDuffee: It all starts with the concept of “garbage in, garbage out.” The initial data entry must be validated, whether a person is entering the data or it is being acquired from a third-party source – for example, by placing dynamic validation rules on a data entry field, such as the loan amount, when the loan officer or borrower enters the data for the first time. This ensures that the appropriate scorecards are run against that loan, product, borrower and property scenario at a given point in time.
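As a rough illustration of that kind of entry-point validation, here is a hypothetical sketch of a dynamic check on a loan amount field – the acceptable range is invented, not a real guideline:

```python
# Minimal sketch of entry-point validation ("garbage in, garbage out"):
# reject or flag bad values before any scorecard runs. Limits are hypothetical.
def validate_loan_amount(raw: str) -> tuple[float | None, str | None]:
    try:
        amount = float(raw.replace(",", ""))
    except ValueError:
        return None, "Loan amount must be numeric"
    if not (10_000 <= amount <= 3_000_000):
        return None, "Loan amount outside the accepted range"
    return amount, None

print(validate_loan_amount("450,000"))   # (450000.0, None)
print(validate_loan_amount("45x"))       # (None, 'Loan amount must be numeric')
```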
Extensive testing – both manual and automated test scripts – must be leveraged in order to ensure all scorecards perform as expected. With automated test scripts, thousands of permutations can be easily generated to ensure the correct outcomes.
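Generating those permutations is typically a matter of taking the cross product of the attribute values a scorecard responds to. A simplified, hypothetical sketch (a real suite would also assert each scenario against a reviewed set of expected outcomes):

```python
# Sketch of generating test permutations for scorecard regression testing.
# Attribute values are illustrative; here we simply count the scenarios produced.
from itertools import product

credit_scores = [620, 660, 700, 740, 780]
ltvs = [0.60, 0.70, 0.80, 0.90, 0.97]
dtis = [0.28, 0.36, 0.43, 0.50]
doc_types = ["full", "bank_statement", "asset_depletion"]

scenarios = list(product(credit_scores, ltvs, dtis, doc_types))
print(len(scenarios))  # 300 combinations from just four attributes
```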
Well-written business and technical requirements prior to development are another beneficial step in preventing inaccurate responses. An intelligent underwriting engine should also have built-in logic that detects problems and creates “hard stops” to prevent the engine from making a false approval.