Sri Raghavan, senior global product marketing manager at Teradata, answered a few questions on how analytics are impacting the field of health care, and approaches to dealing with privacy and false positives in that industry.
It seems like there is a lot of excitement around health care and AI. Where do you see that field evolving, and is there any other area that you think holds a lot of immediate promise?
In general I think health care is one of our biggest frontiers. Part of the reason health care is so big is that the outcomes really make a difference in our lives. If I went to an entertainment company where I could use artificial intelligence, the impact affects the quality of life in that you’ll get better entertainment, which is a terrific outcome. But it’s not quite as appealing and effective as looking into someone’s eyes and telling them that heart disease is something they shouldn’t be concerned about. From that standpoint, while every industry is important, certain industries have much more immediacy to the public good, and I fully believe health care is one of them.
With health care, how do you ensure the data is everywhere it needs to be to inform everyone in the chain, from the patient to the provider to an insurance company?
To me the question is not so much how to make the data available everywhere. It’s not the data that needs to be available — it’s the insights. The data availability part is important in the sense that the people doing the analytics can pick that data up from any source or location. But the ability to provide insights in many different form factors, visualizations, tables and what have you, in a manner that every idiosyncratic audience group understands — that makes a big difference, and that’s a key thing we are ensuring: the availability of ways insights can be communicated. That’s something we try to do a lot of.
What about privacy? That seems like a much bigger factor in this industry.
You know, it seems like one of those clichéd issues: “Oh, we need to do security and privacy.” But in health care it really makes a big difference.
I’ll give you an example. I was talking to a customer a couple days ago, and they said, “Look, I have a lot of information about diseases and afflictions and prescriptions and treatment plans and demographics and other contextual data, which really allows me to, on the one hand, provide enough information to medical care providers to tell them what kinds of treatments work with which kinds of people.” Great, but they said, “I also happen to know about your lifestyle. I also know that you’re eating six gallons of ice cream.”
Yes, it’s bad to be eating six gallons of ice cream, but that is not your business. Because there’s this risk of combining your lifestyle information with your affliction information, you could easily envision a scenario where a health care company comes to you and says, “Hey, I know you are doing some things, like smoking three times a day. Maybe you shouldn’t do that.” The problem is that it impinges upon your privacy, so that’s why we’re building governance processes to make sure those kinds of instances don’t happen: that certain kinds of data aren’t collected, by law, and that you are limited in how you can use that information.
What is the baseline standard for what false-positive rate is good enough to roll out these technologies?
Some thresholding has to happen. And the thresholding happens at two levels. It’s very contextual. Every patient is different and every disease is different. There are certain thresholds which may work in certain cases that don’t work in other cases. It’s very specific to the kinds of treatment areas that are being worked on.
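The contextual thresholding described above can be sketched in code. This is a minimal, hypothetical illustration — the condition names and cost figures are assumptions for the example, not anything from the interview or a Teradata product: the idea is simply that when a missed diagnosis costs far more than a false alarm, the model should flag cases at a much lower probability.

```python
# Hypothetical sketch of per-condition decision thresholds.
# Costs and condition names are illustrative assumptions.

def pick_threshold(cost_false_negative: float, cost_false_positive: float) -> float:
    """Flag a case when the expected cost of missing it exceeds the cost
    of a false alarm: flag if p > c_fp / (c_fp + c_fn)."""
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# A missed heart-disease diagnosis is far costlier than a false alarm,
# so the bar for flagging a case is much lower.
cardiac_threshold = pick_threshold(cost_false_negative=100.0, cost_false_positive=1.0)

# A condition where a miss is less catastrophic tolerates a higher bar.
routine_threshold = pick_threshold(cost_false_negative=5.0, cost_false_positive=1.0)

print(round(cardiac_threshold, 3))  # 0.01
print(round(routine_threshold, 3))  # 0.167
```

The point matches the interview: there is no single “good enough” false-positive rate — the threshold falls out of how costly each kind of error is in that specific treatment area.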
Then, of course, there is this huge legal implication, because now certain decisions are going to be driven by data that is generated by machines. Which means that now the onus seemingly is on an inanimate object, which cannot be sued and nothing can be done to it. So it seems like the patients are left out on a limb, because no one is to blame when certain errors are made. So I think certain legal protections have to come into place to ensure there is accountability.
People should not be worried that artificial intelligence is going to replace doctors. No, it won’t, because it’s practically impossible for us to do that. We need someone to go back to as a recourse for certain things, and that someone has to be responsible within reason.
Are you seeing overlap in your models with medical false positives and, say, the models that are working for fraud detection and banking that focus on champion/challengers?
Yes, except that in certain industries false positives are more palatable than in others. If I’m predicting the demand for certain eyeliner products and I make a mistake: No big deal. Someone bought the wrong eyeliner, who cares? But when you are looking at heart disease and concluding, “There is no heart disease, and the arteries are really not blocked. Don’t worry!” you have a different set of problems.
But, to your point, the models absolutely do get better over time. But an enormous amount of work needs to go into that data to deliver those types of numbers, just to establish a threshold of success. If 90 percent of the time you are able to say a case was fraudulent, that’s damn good in banking. But you still need precautions in place for the other 10 percent.
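The champion/challenger pattern mentioned in the question can be sketched briefly. Everything here is an illustrative assumption — the toy labels, the function names, and the review cutoff are invented for the example, not drawn from any banking or Teradata system: the production model (champion) is only replaced when a candidate (challenger) beats it on held-out cases, and the residual error band is routed to human review rather than auto-decided.

```python
# Hypothetical champion/challenger sketch with toy data.

def accuracy(predictions, labels):
    """Fraction of held-out cases the model got right."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

labels     = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # 1 = fraudulent
champion   = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]  # current production model
challenger = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]  # candidate replacement

# Promote the challenger only if it outperforms the champion.
if accuracy(challenger, labels) > accuracy(champion, labels):
    print("promote challenger")

# Precaution for the residual errors: scores near the decision
# boundary go to human review instead of being auto-decided.
def needs_review(score, lo=0.1, hi=0.9):
    return lo < score < hi
```

The `needs_review` band is the code-level analogue of the point above: even a model that is right 90 percent of the time still needs a safety net for the other 10 percent.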