
The Journey of a Machine Learning (and learning) Project

The promise of AI for a business is irresistible. Increased efficiency, cost savings, and improved customer satisfaction are too attractive to ignore. So of course your board has just set a mandate to "integrate AI into the business."

We partner with our enterprise clients to create training data for their machine learning projects. Having been through this experience many times, we have a good sense of how the enterprise manages ML projects today, and we have developed a nomenclature for describing project phases.

Most companies start by first proving that an AI solution will in fact cut costs, improve customer experience, or in some way be a differentiator for the business through a proof of concept (POC).

POCs are typically carried out on simple algorithms using off-the-shelf training data or internally labeled data. Showing that an algorithm can be trained to address a particular use case with a small amount of training data is usually all that is necessary at this stage.

A successful POC gives the data science team the evidence and momentum it needs to go after funding for a full-blown ML project. And with funding in hand, the project typically moves to a pilot phase. The pilot lives between the POC and the project in production; companies don't turn off any other systems or change their staffing. Instead, the pilot runs alongside existing systems while adjustments are made to the algorithm as it is trained.

The pilot's duration is partially determined by the level of model confidence that is required for production. Some applications, such as autonomous vehicles, obviously require extremely high confidence levels. Other applications can show positive ROI at significantly lower levels of confidence.


Figure: Model Confidence vs. Training Data


We’ve found that early success can lull some teams into a false sense of security that they can do the training data preparation for the whole project themselves. This, not surprisingly, is the stage when we most often get introduced to the company. As the image below suggests, training the algorithm for the many additional use cases that must be part of a production system creates a demand for dramatically, often overwhelmingly, larger amounts of data.

For example, if during the POC the algorithm demonstrated its ability to recognize faces photographed in the same light, at the same distance and angle, during the pilot the algorithm will need to be exposed to variations in lighting, distance, angle, skin tone, gender, and more. Which implies, of course, far more data.
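A quick way to see why the data demand explodes: coverage requirements multiply across variation axes. This is a toy sketch with hypothetical axes and values, not a real data specification.

```python
# Toy sketch: each new axis of variation (lighting, distance, angle, ...)
# multiplies the number of scenario combinations the training data must
# cover. The axes and values below are illustrative, not a real spec.
from itertools import product

axes = {
    "lighting": ["bright", "dim", "backlit"],
    "distance": ["near", "mid", "far"],
    "angle": ["frontal", "profile", "overhead"],
}

# Every combination of one value per axis is a scenario to cover.
scenarios = list(product(*axes.values()))
print(len(scenarios))  # 3 * 3 * 3 = 27 scenario combinations
```

Adding a fourth axis with three values would triple the count again; growth is multiplicative, not additive, which is why pilot-phase data needs dwarf the POC.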


Figure: Model Confidence vs. Training Data


From what we've seen, putting a model into production is really just a symbolic threshold from a training data perspective. Unless the algorithm's problem space is very simple or completely static, training will never end. Problem spaces evolve. New use cases evolve. And pressure from competitors who are also trying to create differentiation from ML means that organizations have to expose their models to ever more obscure use cases. At already-high levels of model confidence, each incremental 1% increase is staggeringly expensive in terms of training data. We call this post-production, post-ROI phase The Tyranny of the Edge Case.
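To give this claim some intuition, here is a toy calculation assuming error falls off as a power law in the amount of training data (a common empirical modeling assumption, not a cost model from the post). Under that assumption, each extra point of confidence demands multiplicatively more data.

```python
# Toy sketch: assume error = scale * n**(-alpha), a power-law learning
# curve (an illustrative assumption, not a measured relationship).
# Solving for n shows data needs exploding as confidence rises.

def samples_needed(confidence, alpha=0.5, scale=1.0):
    """Samples n such that scale * n**(-alpha) equals 1 - confidence."""
    error = 1.0 - confidence
    return scale * error ** (-1.0 / alpha)

for c in (0.90, 0.95, 0.99, 0.999):
    print(f"{c:.1%} confidence -> ~{samples_needed(c):,.0f} samples")
```

With these illustrative parameters, going from 99% to 99.9% confidence requires roughly 100x the data, which is the shape of the curve behind the "tyranny of the edge case."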


Figure: Model Confidence vs. Training Data



Proof of Concept. Demonstrate the potential value of an AI project with a proof of concept involving a simple set of challenges.

Pilot. Add capabilities to the algorithm with the goal of getting it to a level of confidence that yields positive ROI. 

Production. Evolve the algorithm to a point where it can be a cost-effective substitute for human judgement. 

Post-Production/The Tyranny of the Edge Case. Ongoing response to changes in the problem space and competitive pressure. Constant training, typically on increasingly rare use cases.

Want to discuss training data? Drop us an email.


