
TRICK OR TREAT!

Treat - The AI project mandated by the board just went into production without a hitch and is exceeding its forecast bottom-line impact. You’re a freaking superhero. Your CEO is on the cover of your industry’s biggest publication. You and your team have gotten bonuses and promotions. Sweet.

Trick - Your team failed to deliver on the company’s most visible, most strategic project: a board-driven game changer and the business’s first foray into AI. It’s not clear who will stay and who will go, but all of your reputations are mangled. Your PR team is in full-time damage-control mode in response to the brand-killing headlines. Not sweet.


Gartner tells us that over half of all enterprise AI projects fail. From our front-row vantage point, we see three gotchas that put budgets, timelines, and whole projects in jeopardy over and over:

  1. Underestimating the training data challenge. You need much, much, MUCH more training data than you imagine. You can’t buy it off the shelf in the quantities you need, with the use case-specific annotation your algorithm requires. You almost certainly can’t produce it internally, either. And you definitely can’t expect your data science team to do the job. This is actually the most common scenario we step into: the data science team is overwhelmed with training data preparation and the entire project is teetering on the precipice. Goosebumps!
  2. Ignoring the value of Agile. Traditional application developers learned long ago that the waterfall method of development - where even huge applications are planned, architected, coded, and tested as a monolith - is impractical. Today, agile methodologies, in which smaller chunks of functionality are built and tested iteratively, dominate software development. And yet in our experience, most enterprise AI projects still follow the old waterfall method. The result? The AI project is all cost and no benefit until every aspect of the model reaches the required level of confidence. If that ever happens. Chills!
  3. Failing to keep bias out of your model. A lot has been written about bias in machine learning, and for good reason. Models do what they are taught to do, and if they’re trained on biased data, their behavior will reflect that bias. Fortunately, rooting bias out of training data is a well-established discipline, even if data scientists themselves aren’t always data bias experts. Ignore this issue at your own peril: with biased training data, your facial recognition algorithm makes embarrassing mistakes and your autonomous vehicles fail to distinguish white trucks from cloudy backdrops (a simple audit sketch follows this list). Hair standing up!
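
To make that last point concrete, here is a minimal sketch of the kind of audit we mean: counting how annotation metadata is distributed across a training set and flagging under-represented slices before training. The `audit_attribute_balance` helper, the `color` tag, and the 0.5 threshold are illustrative assumptions, not a prescription.

```python
from collections import Counter

def audit_attribute_balance(examples, attribute, threshold=0.5):
    """Flag values of `attribute` that fall well below a uniform share
    of the dataset. `examples` is a list of dicts carrying annotation
    metadata; `attribute` names one metadata field (hypothetical here)."""
    counts = Counter(ex[attribute] for ex in examples)
    expected = len(examples) / len(counts)  # size of a uniform share
    return {value: n for value, n in counts.items()
            if n < threshold * expected}   # well below the uniform share

# Toy data: truck images tagged with a (hypothetical) "color" field.
training_set = (
    [{"label": "truck", "color": "dark"}] * 900
    + [{"label": "truck", "color": "white"}] * 100
)
print(audit_attribute_balance(training_set, "color"))  # {'white': 100}
```

A real audit goes further (sampling strategy, relabeling, per-slice accuracy), but even a simple count like this surfaces the white-truck gap before the model does.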

If you want to talk to someone who can make these three scary gotchas, well, vanish, we’re just an email or phone call (+1 855.410.5500) away.

Learn More About Our Annotation Solutions