Machine learning (ML) enables computers to “discover patterns and relationships in data instead of being manually programmed.” The field is already reshaping the way we live, “driving everything from Netflix recommendations to autonomous cars.” However, the more experiences that are built with ML, the more obvious it becomes that UXers are still learning how to harness the technology.
In the Google UX community, an effort called “human-centered machine learning” (HCML) has been started to focus and guide the conversation, the goal being to “see how ML can stay grounded in human needs while solving them in unique ways only possible through ML.” The Google team works with UXers to “bring them up to speed on core ML concepts, understand how to integrate ML into the UX utility belt, and ensure ML and AI are built in inclusive ways.”
Google has identified specific challenges to help designers navigate the complexities of designing ML-driven products. One example is “don’t expect Machine Learning to figure out what problems to solve.” ML and AI are hot topics right now, but don’t jump “right into product strategies that start with ML as a solution and skip over focusing on a meaningful problem to solve.” Focus on the human need first, then do the hard legwork: “contextual inquiries, interviews, deep hanging out, surveys, reading customer support tickets, logs analysis”…to determine if you’re “solving a problem or addressing an unstated need people have.” ML doesn’t figure out what problem to solve; that still requires human definition.
Once you’ve defined the problem, determine whether ML will provide a unique solution to it.
To read the complete article, go to: https://medium.com/google-design/human-centered-machine-learning-a770d10562cd