Letting Data Drive

Sam Wilkinson
Published in nested.com
5 min read · Aug 6, 2018


When I joined Nested as a Data Scientist in October 2017, the Data Science team’s sole focus was our home valuation model. Affectionately dubbed the Automated Valuation Model (AVM), it sat centre stage on our website. Customers interested in taking their first steps with Nested would complete a brief form, which returned an estimate of their home’s value and an invitation to book a call with our team by filling in another, longer form. Automated home valuations are commonplace in the online real estate industry, so we assumed that the AVM provided value to our customers. We also assumed that improving its accuracy would increase that value, so soon after I joined we kicked off a big push to do exactly that. It was only after this significant investment in the AVM that we started to challenge these assumptions, and the process of challenging them taught us three things:

  1. There’s no substitute for directly measuring customer value.
  2. Even if something adds a ton of value, don’t get tunnel vision.
  3. Don’t overthink. Ship something and get feedback.
Left: The original, AVM-centred onboarding flow. Right: The current onboarding flow.

Over two months, we improved the AVM’s architecture, transparency, and performance. We started with an MVP from the early days of Nested, which used a model trained on a dataset that had since been lost to time. The model’s performance wasn’t documented, it was served by a monolithic endpoint, and the only way to see if it was working was to go to our website and try to get a valuation. To address the architectural issues, we moved the AVM into its own microservice and refactored the model code so that new features and algorithms could be implemented quickly. To provide some transparency, we added methods to automatically record performance metrics during model training, plus a live dashboard showing how the model was performing in the field. To improve accuracy, we implemented a robust research workflow built around Cookiecutter and added new features to the model.
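
To make the metric-recording idea concrete, here’s a minimal Python sketch of what automatically recording performance metrics during training can look like. The helper names, the JSON-lines log, and the choice of median absolute percentage error are assumptions for the example, not our actual implementation:

    import json
    import time
    import numpy as np

    def median_ape(y_true, y_pred):
        # Median of |predicted - actual| / actual; 0.091 corresponds to 9.1%.
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        return float(np.median(np.abs(y_pred - y_true) / y_true))

    def train_and_record(model, X_train, y_train, X_test, y_test,
                         log_path="training_metrics.jsonl"):
        # Fit the model, evaluate it on held-out data, and append the result
        # to a JSON-lines log that a dashboard can read.
        model.fit(X_train, y_train)
        record = {
            "timestamp": time.time(),
            "model": type(model).__name__,
            "median_ape": median_ape(y_test, model.predict(X_test)),
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")
        return record

Any scikit-learn-style model with fit and predict methods slots straight in, and every training run leaves a row in the log rather than an undocumented result.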

By the end of the project, we had a significantly better-performing AVM running on a robust architecture that we could keep an eye on in real time. With a median absolute deviation of 9.1%, our AVM outperformed Zoopla’s (10.7%), so we felt pretty good about ourselves. However, this was also when we first started asking questions of the AVM as a product. Did it provide value to customers? Feedback from user testing suggested not. Did it help with conversion? An A/B test comparing the AVM output page with and without the actual valuation showed no significant difference in conversion.
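
For the curious, the kind of check behind “no significant difference” can be sketched with a two-proportion z-test. The counts below are made up for illustration, and the specific test is an assumption rather than a record of our exact analysis:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: conversions and visitors for each variant
    # (valuation shown vs. valuation hidden).
    conversions = [130, 127]
    visitors = [2000, 2000]

    stat, p_value = proportions_ztest(conversions, visitors)
    print(f"z = {stat:.2f}, p = {p_value:.3f}")
    # A large p-value (e.g. > 0.05) means we can't conclude that showing
    # the valuation changed conversion.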

Based on these findings, we decided that the AVM wasn’t the right choice for our onboarding flow. We had been blinded by a metric we assumed was a proxy for customer value. And even if we had been adding value, we had lost sight of other, simpler ways to add value to the business. Because the Data Science team’s focus had been solely on improving the AVM’s performance, the moment we dropped that focus we found lower-hanging fruit elsewhere in the business: simpler, easier, more direct wins for customers.

We now had the challenge of redesigning our onboarding flow in a post-AVM world, which gave us an opportunity to apply some of the lessons we’d learned. Through user testing and A/B testing, we found that replacing the two AVM forms with a single, simplified form significantly increased conversion. This is not to say that we completely deprecated the AVM. There are situations where a customer can get value from an automated valuation; our onboarding flow just wasn’t one of them. For example, several marketing campaigns still link customers to our new valuation page, with positive results. The AVM’s success there started us thinking about how it might add value as an informational tool for customers, rather than as an onboarding tool. Whereas previously we might have jumped straight into improving the AVM’s performance, we started thinking about how we could supplement the valuation with other information about a customer’s property.

A current mockup of an Area Report page

This led us to the idea of an “Area Report” page, which would show our customers a summary of the property market in their area. Again applying lessons learned from the development of the AVM, we took a more incremental approach to the Area Report. We used multiple rounds of user testing, each with progressively more complete mockups, to understand how to provide the most value to customers. When we’re ready to launch, continuous monitoring and iteration will help us avoid repeating our mistakes with the AVM. It’s important to think about how to maximise customer value at the start of a project, but make sure that thinking doesn’t delay action for too long. The best way to make sure you’re maximising value is to get something in front of people and collect feedback, whether through user testing or by monitoring metrics. Keep the customer in mind, but have a bias to action.

With the AVM, we learned the hard way that there’s no substitute for directly measuring customer value. When we found something that provided more value to customers in the Area Report, we made sure to balance its development against other, simpler opportunities across the business. The best way to maximise value is to constantly measure it as directly as you can, and to use that data to drive your product decision-making; to do that, you have to get something in front of customers. There’s plenty of wisdom out there that seems obvious when you hear it, but often it takes real experience to drive it home.
