Pulse – November 2019

November 4, 2019

This month’s Pulse, featuring Dean Paul Rothman, looks at artificial intelligence and machine learning, how they’re being used in initiatives at Johns Hopkins, and how the probability of bias in computer-designed paradigms is being addressed.

Hosts: Elizabeth Tracey and Dr. Paul B. Rothman

Program notes:

0:14 AI and its impact on medicine

1:14 Images are low hanging fruit

2:14 Accuracy improves with physicians and AI

3:14 No one wants to totally rely on a computer for healthcare

4:14 Big data analysis starts with quality

5:15 Inference stronger as we improve data

6:17 Inherent bias in collection of data

7:21 Really important to carefully look at biases

8:42 End

Full Transcript:

00:00 Elizabeth Tracey: Welcome to this month's Pulse. I'm Elizabeth Tracey.

00:03 Paul Rothman: Hi. I'm Paul Rothman. Good to see you, Elizabeth. I'm the dean and CEO of Johns Hopkins Medicine.

00:08 ET: Wonderful to see you. Today we're talking about AI, artificial intelligence, and its impact on the field of medicine. Here at Johns Hopkins, that's a pretty multifactorial kind of an impact, my understanding is.

00:19 PR: I don't think we know the impact yet. I think, as we collect all this data on patients, which has really occurred in medicine since the electronic medical record became ubiquitous after Obamacare, you have a lot of data. The question is, how can we harness that data to better care for our patients?

Clearly, with the advent of artificial intelligence and machine learning, we think that computers can help us analyze that data and better take care of our patients. I would say it's in the early days of AI in medicine. I think it's going to be, in the long term, very important, very disruptive to how we used to do business, but it's just in its beginning right now.

01:02 ET: Is it your sense that right now AI really is most useful when it comes to looking at images and discerning patterns in them?

01:13 PR: Yeah, I think absolutely. I think the low-hanging fruit is imaging; AI and machine learning have been very good in other industries at taking digitized images and analyzing them. Whether it's radiologic imaging or pictures of skin, whenever you can digitize an image, I think folks are going to be pretty good at using AI to better understand those images and compare them to what they know. I think that's the early game right now, but it'll probably go way beyond that in the future.

01:38 ET: I'd ask you to comment on a study that just came out in the Lancet. It took a look at AI-assisted diagnoses versus those made by experienced physicians, largely diagnoses of things like skin cancer, where images really are very relevant to the diagnosis. AI was only able to improve the accuracy of the diagnosis by about 2%. The experienced physicians were really pretty good.

02:05 PR: I think that's true, but other studies have demonstrated, if you take experienced clinicians and AI, you get much better accuracy than either one by itself. It's not like AI is going to supplant physicians today, but I think the combination and using AI as an aide has real power.

02:25 ET: Are there any concerns that you would offer to patients with regard to those kinds of diagnoses that are assisted by AI?

02:31 PR: I don't think anyone clinically uses AI for diagnosis today. I think one of the things we have to work through with AI, as people in other industries have, are some biases in what you teach the machines. Certainly, there are some ethical concerns that people have about the use of AI. I think that's something that's just beginning to be explored in many industries.

I think today, no one uses AI by itself. It reminds me of when I was younger, when computers started to be used in analyzing EKGs. That was 30 years ago. On the EKG machine in the hospital, the computer would give you a read of the electrocardiogram and tell you what it thought was going on. Even in today's world, that reading is overread by a physician, and that's 30 years later.

I think no one wants to totally be reliant on a computer for their healthcare. I'm not sure that's going to change in the near future until the machines get really accurate and very good. I don't think patients today have to worry about a computer giving them a definitive diagnosis without some physician input.

03:30 ET: Good to hear. I feel reassured.

I have a two-part question. Of course, here at Johns Hopkins, we have our inHealth Initiative, and we're all really very excited about that. When I asked Antony Rosen about it, we were talking about populating the data, and, as you've already noted, we capture a huge amount of data in the electronic health record. The question is, what do you integrate into your models so that they end up being really useful? He cited the phrase, "Garbage in, garbage out." I'd ask you first to comment on that, and then to tell me how you feel AI is integrating with inHealth today.

04:06 PR: As we think about the use of the electronic medical record, and about the types of big data analysis, not just AI, that are going to be used to look at health care, it all starts with the quality of the data you're analyzing. We're very proud of the data at Hopkins. We're really focused on precision medicine centers of excellence.

One of the reasons we focus our precision medicine on disease states is that we think, with a select set of clinicians all seeing patients together, there'll be much more consistency in the data that we put into the electronic medical record. As for how you analyze it, I think natural language processing is coming along, and big data analysis is going to get better at looking at what we put in the electronic medical record to help.

AI is just beginning to think through that data. I think the quality of the data is going to be really important. I think a lot of the slowness in using big data to analyze medicine has been that inconsistency of the data. You can overcome that if you have enough data, and people are trying that too: if you have so much data, you can make some inferences from it. But I think the inferences will be stronger and probably more salient as we improve the data that goes in.

05:19 ET: Where are we right now with regard to the utility of this in the inHealth Initiative?

05:25 PR: We're in the infancy of the use of big data in medicine. I will say, many of us have invested a lot in our electronic medical record: not only the investment in the computer systems, but the investment of time that physicians, nurses, and other providers have put into entering the data into the electronic medical record. That's been a huge investment.

Patients, too. We owe it to our patients to leverage that investment, and we owe it to our providers, our physicians, nurses, and others to be able to harness that investment to better take care of our patients. I think that's a challenge we have.

I actually have a very strong belief that we're getting there, but we're really at the infancy of it. I think, in our lifetimes, we'll see it explode.

06:09 ET: You brought up another issue that I think is really important. In fact, I just saw today, in a Nature briefing, the idea of inherent bias in the collection of data, and then biases that can be introduced when it comes to interpreting it. What it cited was a study in Chicago that took a look at patients in different zip codes and tried to predict who would end up being hospitalized longer.

I'm sure, as you and I would both have predicted, it was folks who were in lower income zip codes. But, then, the intervention was, "We want to target the patients who are going to get out sooner because we want to get them out of the hospital sooner." Oops. That ends up introducing a bias against care of people from lower income places. What do you think about that?

06:57 PR: There are a variety of healthcare disparities in our delivery systems, driven by a number of factors including gender, socioeconomic status, and race. Since we already have those biases and disparities in our system, it is not surprising that, as we teach computers the way we care for patients today, those same biases are going to be baked into the paradigms that the computers come up with.

I think it's really important at this point in time, where we're just beginning this journey on how computers can assist in the care of our patients, that we carefully look at the biases that come out of these machine learning paradigms to make sure that they are not reinforcing some of the healthcare disparities that already exist in our delivery systems.

07:43 ET: If you were a betting man, what would you say about full integration of these capabilities in patient care?

07:50 PR: When people look at disruptive innovation, what I've always heard is that it occurs much more slowly than you think, but it has much more profound effects than you ever thought. I think that's going to be true here. I think it could take longer than people have expected, but eventually it'll really have profound effects on how we take care of patients.

I have no timeline here. We are already using AI to try to diagnose sepsis earlier in inpatients; we're doing that in all our hospitals. Eventually these paradigms are going to get stronger, data analysis is going to get much better, and we hope we'll improve care for patients in the decades to come.

08:24 ET: On that optimistic note, then, thank you so very much. That's this month's Pulse.

08:28 PR: Thanks, Elizabeth.
