
Alegion in WSJ & Forbes!

The Wall Street Journal and Forbes have taken notice: manually labeling data has become a major bottleneck in developing AI solutions.

The Wall Street Journal’s AI reporter, John Murawski, talked with our CEO, Nathaniel Gates, to learn more about the challenges posed by data labeling and our solution. DIY data labeling is a complex, onerous process, as anyone who has done it will tell you. Most companies lack the time, staffing, and know-how to properly scale their efforts. Murawski also reached out to our customer, TVision Insights Inc., whose CEO and founder, Yan Liu, told him, “We first tried to do this by ourselves—a bad idea. We’re clearly not an annotation shop.”

The importance of labeling quality cannot be overstated: a model is only as accurate as the data on which it is trained. The overall performance of even the most sophisticated model can be easily compromised if it is trained on data that is poorly labeled or does not accurately reflect the target values. This is the “garbage-in, garbage-out” concept: errors in labeling or misclassifying target values lead to errors in the model, and those errors proliferate as they flow through an ML application. Failing to maintain accuracy as data labeling scales from proof of concept to production can bring a project to a halt. Building the ability to generate high-quality training data at scale over time protects the organization’s ML investments and mitigates the risk of compromising an otherwise successful model. For more on this topic, check out our white paper, “Supervised vs. Unsupervised Learning.”

We deliver ground truth data for enterprise AI and ML projects by combining human and machine intelligence. Our offering operates at massive scale, pairing an ML-augmented data and task management platform with a global network of trained data specialists to handle use cases ranging from the straightforward to the highly complex. Request a demo to learn more about our solution.

Forbes has also taken notice of this growing challenge, citing our global survey of hundreds of data scientists and other AI professionals conducting AI projects at large companies. Key findings include:

- Nearly eight out of 10 enterprise organizations currently engaged in AI and ML report that projects have stalled, and 96% of these companies have run into problems with data quality, data labeling required to train AI, and building model confidence.
- Only half of enterprises have released AI/ML projects into production.
- 78% of their AI/ML projects stall at some stage before deployment.
- 81% admit the process of training AI with data is more difficult than they expected.
- 76% combat this challenge by attempting to label and annotate training data on their own.
- 63% go so far as to try to build their own labeling and annotation automation technology.
- 71% report that they ultimately outsource training data and other ML project activities.

Check out our white paper, “What Data Scientists Tell Us About AI Model Training Today.”

Learn More About Our Annotation Solutions