TECH: Michelangelo, Leonardo and machine learning

A few months ago, Jeremy Hermann and Mike Del Balso from Uber’s masterful engineering team published an article that caught our eye: Meet Michelangelo: Uber’s Machine Learning Platform. And not just because we’re Sistine Chapel fans, but because machine learning is awesome too.

In a nutshell, Uber found its business demanding more and more machine learning as its domination of the rider world expanded. Rather than build and deploy machine learning services from scratch every time the company wanted to productize a new model, the engineering team decided to invest in an internal machine-learning-as-a-service platform to do it at scale. The result? Accelerated time to model.

At Dexibit, this story rang a bell for us. Maybe it was all those Uber rides to and from museums, the Renaissance nod or the reference to machine learning anti-patterns. Whatever the case, when we start a new machine learning research project, our data scientists usually use a tool like R to create their models locally on their own machines, but once we've got the result we're after, we then need to recreate it in our product environment. Our challenge is to do this at scale, so that the model can work across all our museums. The people of this world might take over 300 million rides each year, but we also make a billion visits to museums! Productizing machine learning at that scale takes time and money which, like Uber, we wanted to save. Plus, as more models are developed, it raises the question of whether to standardize how they are created and whether various models can begin to share common features.

With our new museum visitation forecasting module launching out of beta this month, we set out to create the core infrastructure, tools and systems needed to support our data team. Whether they're using extreme gradient boosted trees, random forests or neural nets, when a project moves out of beta at the conclusion of its research assignment and is built out in the product, we can now create the 'at scale' version quickly, with a common capability.
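To give a flavor of what that common capability can look like, here's a minimal sketch, not our actual code: gradient boosted trees and random forests both follow the scikit-learn estimator API (fit/predict), so different model families can sit behind one small registry. The registry name and parameters below are illustrative assumptions.

```python
# Illustrative sketch: one common interface over several model families.
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor

# Hypothetical registry mapping a model kind to its estimator class.
MODEL_REGISTRY = {
    "gradient_boosted_trees": GradientBoostingRegressor,
    "random_forest": RandomForestRegressor,
}

def build_model(kind, **params):
    """Instantiate a registered model; every entry shares fit/predict."""
    return MODEL_REGISTRY[kind](**params)
```

Because XGBoost's `XGBRegressor` follows the same scikit-learn estimator API, it can be registered the same way without changing any calling code.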

Like Uber, we built our machine learning as a service layer on top of our ingestion engine and data lake, which houses all of a museum's data such as ticketing, presence and revenue, plus global feeds we provide to support our customers, such as tourism and weather. Our technology stack is a little different from theirs, but the architectural principles are essentially the same. We're built on Amazon Web Services (AWS), using their container service and S3, plus a bunch of other tools like Airflow and Docker, and libraries like scikit-learn and XGBoost. This gives us a pool of compute resources on top of which our model dispatcher can deploy Docker containers, keeping things isolated but repeatable. We can pull down models, take a standard image and spin up containers, overseeing which models are being run while managing dependencies, allocating resources, configuring parameters, storing metadata and logging results. Soon, the dispatcher will take on a more mature role, helping with data access too and making more machine learning libraries available.
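As a rough illustration of the 'standard image, isolated containers' idea, here's a sketch of how a dispatcher might assemble a `docker run` invocation for one model job. The image name, resource flags and env-var convention are illustrative assumptions, not our production dispatcher.

```python
def docker_run_command(model_id, image, params, cpus=2, memory="4g"):
    """Assemble a `docker run` invocation for one model job:
    a standard image, with the model's identity and configured
    parameters passed in as environment variables."""
    cmd = [
        "docker", "run", "--rm",
        f"--cpus={cpus}", f"--memory={memory}",  # allocate resources per job
        "-e", f"MODEL_ID={model_id}",
    ]
    for key, value in sorted(params.items()):  # configure model parameters
        cmd += ["-e", f"PARAM_{key.upper()}={value}"]
    cmd.append(image)
    return cmd
```

Keeping each run in its own container from one standard image is what makes jobs isolated but repeatable: the same command with the same parameters yields the same environment every time.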

As our models require, the supported flow then takes each model through data integrity management, where we check the data we have from the museum is forecast suitable; training, to teach or update the model on the unique and often changing circumstances of the museum's performance; testing, to take a look at fit and accuracy; then deployment for predictions. We monitor the model's fit (surfacing this to the museum user, so they have transparency on how much they can rely upon predictions) and monitor accuracy as we see out a prediction (which, for the museum user, also provides a comparison of actual against plan for management purposes). The platform also has to point out where a museum might not have enough data to be model ready, especially for newly built museums, or where a little human intervention might be required to add contextual insight for the model to call upon, such as exhibitions and events happening in and around the museum that can impact visitor behavior.
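To make the model-ready gate and the accuracy monitoring above concrete, here's a minimal sketch; the thresholds, metric choice and function names are illustrative assumptions, not our actual rules.

```python
MIN_HISTORY_DAYS = 365  # illustrative threshold, not our real cutoff

def forecast_ready(daily_visits):
    """Gate check: does this museum have enough clean visitation
    history to be model ready? Returns (ready, list_of_issues)."""
    complete = [v for v in daily_visits if v is not None]
    issues = []
    if len(complete) < MIN_HISTORY_DAYS:
        issues.append("insufficient history")
    if daily_visits and len(complete) < 0.95 * len(daily_visits):
        issues.append("too many gaps in the data")
    return (not issues, issues)

def mape(actual, predicted):
    """Mean absolute percentage error: one simple way to track
    prediction accuracy as we see out a forecast."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)
```

A museum failing `forecast_ready` is exactly the case where the platform flags that more data, or some human-added context such as upcoming exhibitions, is needed before predictions can be trusted.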

Lastly, time for a name. We wanted to pay homage to the original architects at Uber, but we weren't sure if they'd named theirs Michelangelo after the artist or the Ninja Turtle, so to be on the safe side, we hedged our bets and called ours Leonardo. It feels rather fitting, given da Vinci's world straddled art and mathematics. Plus, it has a cool ring to it.

Love museums and machine learning too? Come and work with us!