Health sensors and trackers are getting more and more sophisticated, yet most of our health data is still locked in a black box. While it’s important to keep patient information secure and out of the wrong hands, we also need to ensure it gets into the right hands. As we enter a new age of algorithms that can effectively guide or coach us with the right data, will we fully trust an AI-backed smart system to ensure our health and wellness, or will we insist on a human to provide it?
We know that patient trust in physicians has been declining for decades as healthcare systems are strained and doctors spend less and less face time with patients. Additionally, frog’s research with hundreds of patients and caregivers has shown that people diagnosed with a chronic condition who “knew something was off” yet didn’t seek help from a physician often cited lack of trust as the main reason.
On the other hand, artificial intelligence-based systems are absorbing massive amounts of data to solve some of the world’s most challenging problems, including personalized health coaching. Furthermore, people’s comfort with sharing data like vitals, fitness and lifestyle with their healthcare provider is growing, as is their comfort with leveraging AI to receive care. AI-based systems are already demonstrating the ability to see patterns in data that their human counterparts miss, so a partnership like this could make a sizable dent in the projected number of chronically ill. Given that we already trust AI to recommend music and movies, the best route home, or even a potential mate, can’t we imagine trusting it to monitor our wellness?
In this new paradigm, technology enables a human-centered care ecosystem in which AI-empowered physicians and patients build co-accountability to achieve personally tailored health goals, fostering proactive wellness over reactive medicine. As designers, we are excited by the technological potential here, but we know the key to success will be designing for trust in these moments. Physician/AI platforms must learn how best to engage with each individual they support. To truly scale and reach mass adoption, these systems will need to find the right blend of human touch and AI coaching for each person. They will also need to ask permission for, and employ, a broader range of data inputs to learn which motivational approaches each person responds to best. It’s a balance of effectiveness and creepiness: knowing just enough to ensure results, but not so much that we feel imposed upon. Designing these systems will be challenging, but the payoffs could be significant.