AI IN HEALTHCARE

AI is reshaping healthcare in ways we couldn’t have imagined a decade ago. This episode explores new use cases of AI in healthcare research — helping doctors predict treatment outcomes and streamline operations by managing information flows more efficiently. 

PROTAGONISTS

Dr. Jack Gilbert


PhD. Co-founder of BiomeSense; Professor, Department of Pediatrics, University of California, San Diego

LinkedIn

Serafin Bäbler


Head of BPS Operations for Continental Europe at SPS 

LinkedIn

TRANSCRIPTION

HOST – INTRODUCTION: We all know about it, but it’s really surprising to see just how much AI is transforming industries, especially healthcare. What if AI could predict cancer recurrence? Or help doctors make personalized treatment plans in real time? Today, two out of three physicians are using AI in their practice, a 78% increase from just last year. In fact, 64% of healthcare executives say AI is already delivering a positive return on investment. 

In this episode we will listen to specific use cases from scientists and specialists working hand in hand with AI to change the future of healthcare. Welcome to The Power of Possibility. 

Today, we begin with Dr. Jack Gilbert, Professor of Microbial Oceanography at Scripps Institution of Oceanography, who also holds a position in the Department of Pediatrics at the University of California, San Diego. He is the co-founder of the Earth Microbiome Project and the American Gut Project, and was featured in the recent Netflix documentary Hack Your Health. 

Additionally, Dr. Gilbert is the co-founder of BiomeSense, a company focused on microbiome research, using AI and other technologies to improve health outcomes. 

Jack, thank you for being here. 

HOST: So, I would like to start with BiomeSense. You are its co-founder. Can you tell me how AI is playing a key role in this project? 

 

“We’re using these dense longitudinal observations over time and AI to identify intersections over time between the data layers and predict the likelihood of recurrence of illnesses.” - Jack Gilbert

 

JACK: Well, AI is playing a key role in the implementation of microbial technologies into biomedical sciences and clinical application quite broadly. AI is a very, very large umbrella term describing a whole suite of technological innovations that allow us to handle large data sets and complex, interacting data layers, enabling us to identify strategies that can optimize outcomes for various modalities. So, we use this in many different layers.  

BiomeSense is a company explicitly designed around creating longitudinal data. That means time series. So, we collect every single poop from a person, every bowel movement they produce, over a period of months. And then we combine that data layer, that very, very dense longitudinal observation of the microbial community in that individual, with everything from their heart rate to fMRI results (so, brain analysis) to their metabolic output, their blood chemistry, diet habits, exercise regimen, sleep, everything you can imagine. 

And then we're using AI to look for trends, intersecting modalities between the different data layers, that can be used to predict particular outcomes. So, for example, imagine we were looking at individuals going through breast cancer treatment. They've been diagnosed with a breast tumor, and they have to go through a series of treatment steps, right? First, a lumpectomy, a surgical procedure to remove the breast tumor. Then, usually, women will also go through an egg-harvesting stage after surgery, where we artificially induce egg production and then harvest the eggs, because they then have to go through chemotherapy to remove any traces of the cancer and prevent recurrence downstream. So that's a very, very dense and quite intense treatment process, and we want to understand over time how the different features of that woman, her metabolism, her neurological processes, her endocrine processes, her hormones and her microbiome, intersect with her lifestyle to either facilitate beneficial outcomes from each of those steps or lead to detrimental outcomes. And obviously the worst outcome is recurrence of the cancer.  

And so, we're using these dense longitudinal observations over time and AI to identify intersections over time between the data layers, to predict the likelihood of recurrence, and to predict the outcome of egg harvesting, you know, whether it's going to work. 

And to ensure that we can better understand the type of chemotherapy a woman will tolerate and the likelihood that she will have a negative outcome associated with it. So we're using AI and microbiome data layers to add to the dense, complicated series of clinical data that are currently available, so that we can treat people better and understand their outcomes better. 
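
As a concrete aside for technical readers: here is a minimal sketch of the kind of model Jack describes, assuming each patient's longitudinal data layers have already been summarized into a handful of per-patient features. The data, the feature meanings, and the choice of logistic regression are purely illustrative, not BiomeSense's actual pipeline.

```python
# Illustrative sketch only: predicting recurrence from multi-layer longitudinal features.
# All feature values are synthetic; a real pipeline would derive features from dense
# time series (microbiome, blood chemistry, heart rate, lifestyle, ...).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 200

# Each row = one patient; columns = summaries of different data layers over time,
# e.g. mean microbiome diversity, slope of a blood marker, average sleep hours.
X = rng.normal(size=(n_patients, 4))
# Synthetic recurrence labels loosely tied to the first two "layers".
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Predicted probability of recurrence for each held-out patient.
risk = model.predict_proba(X_test)[:, 1]
print("Estimated recurrence risk for first 5 patients:", np.round(risk[:5], 2))
```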

HOST: So, I guess, as you mentioned, AI is central to your research process, right? 

JACK: Absolutely. We use AI modeling applications to rapidly accelerate clinical adoption of current technology. So, you can imagine microbial community-scale metabolic modeling. This is where we take a microbiome data set, where we look at all the different species present in your poop, and we try to recreate the metabolic interactions between those species: who's feeding off whose products, and what does that mean for their growth or their decrease in abundance? 

And so, using this, we can predict metabolic phenotypes for the whole microbial community and use that to identify, let's say, ways to increase the production of short-chain fatty acids.  

So, what food should Miguel eat, or Jack eat, in order to facilitate an increase in the production of beneficial compounds like short-chain fatty acids by our microbiomes? That way we can offer individualized dietary recommendations based exclusively on our fundamental understanding of how microbial metabolism exerts itself in the intestine.  

That's another great example. 
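
To make the dietary-recommendation idea tangible, here is a toy sketch that ranks foods by predicted short-chain fatty acid output for one person's microbiome. The species, foods, and yield numbers are invented for illustration; the real community-scale metabolic models Jack describes are far more detailed.

```python
# Toy sketch: rank foods by predicted short-chain fatty acid (SCFA) output for one
# person's microbiome. Species, foods, and yield numbers are made up.
person_microbiome = {"Faecalibacterium": 0.30, "Bacteroides": 0.50, "Roseburia": 0.20}  # relative abundance

# Hypothetical SCFA yield (arbitrary units) per unit abundance when fed each substrate.
scfa_yield = {
    "inulin":           {"Faecalibacterium": 4.0, "Bacteroides": 1.0, "Roseburia": 3.5},
    "resistant_starch": {"Faecalibacterium": 2.0, "Bacteroides": 2.5, "Roseburia": 4.0},
    "pectin":           {"Faecalibacterium": 1.5, "Bacteroides": 3.0, "Roseburia": 1.0},
}

def predicted_scfa(food: str) -> float:
    """Weight each species' hypothetical yield by its abundance in this person."""
    return sum(abundance * scfa_yield[food].get(species, 0.0)
               for species, abundance in person_microbiome.items())

ranking = sorted(scfa_yield, key=predicted_scfa, reverse=True)
print("Foods ranked by predicted SCFA production:", ranking)
```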

HOST: You know, it's funny, because when you remember the beginnings of genome sequencing, decoding took a lot of time, those kinds of processes, but nowadays it's for sure so much faster. In your opinion, where do you see the greatest long-term impact of AI in healthcare or biomedical science? 

 

“We're using large language models to take people who are extremely busy, and often not necessarily well selected for their ability to rapidly communicate with patients, and provide them with a tool which can facilitate better communication with patients.” - Jack Gilbert 

 

JACK: Okay, I've got a couple of examples where I see this, right? One of the first ones is the use of large language models; ChatGPT is a great example of a large language model. How do we use these? Well, number one, we can use large language models to enable better communication. So, we trialed the use of a large language model here at the University of California San Diego, where we randomized doctors to either use a large language model to reply to their patients in the chat function of the health apps that we use to communicate with patients, or to just use their normal communication skills via email and the like. 

The patients that received large language model responses from their doctors (well, it wasn't ChatGPT, but a large language model) felt that they were more listened to and that they were getting better information. They also felt that they trusted their doctor more and that they were being supported more by the healthcare system. So, we're using large language models to take people who are extremely busy, and often not necessarily well selected for their ability to rapidly communicate with patients, and provide them with a tool which can facilitate better communication with patients.  

And I think when people are scared, they want good communication. And this leads to a second piece: we're using large language models as resources to facilitate coding and to improve bioinformatic workflow tools that specialize in microbiome analysis. But we're also using them to create two front ends to our data ecosystem.  

So, the AI can help to facilitate the machine that analyzes all the data from a patient's chart, from a patient's analytical framework, their microbiome, their blood pressure, their blood chemistry, et cetera, et cetera. 

We can use that machine to process all that data with AI and large language models, for example. But we can also place GUIs, graphical user interfaces, on the outside of that, so that a patient can ask simple questions about their own data via an app and place that data in a larger data and knowledge ecosystem, one that enables them to ask the questions they feel too embarrassed to ask their doctor, or feel are inappropriate to ask. 

Imagine this situation where you get a cancer diagnosis. I hope that's never happened to you; it happened to me and my family, and trust me, you're awake at three o'clock in the morning that night asking Google what it is and what it means, and you can go down some very dark holes very quickly, because the information being provided is not very accessible, or is scary, or is overbalanced with very negative outcome stories and not enough positive stories, or is just wrong.  

So, imagine if we were to give you an app. Your doctor gave you an app and said, instead of using Google, I'd really like you to use this app. This app has been trained on all of the available scientific data, and I can guarantee this: this app will give you the best answers. It's the answers I would give if I had the time and I was awake at three o'clock in the morning. So, use this app to answer your questions. 

Doctors need to be able to interpret and interact with data. They rely very heavily on their training to interpret an fMRI image or to understand the different levels for different blood chemistry markers, right? They have a lot of training in that. It's great. But imagine being able to ask a clinically relevant question of a much broader data set: to say, in my experience, these features I see in my patients should be interpreted as XYZ, as a cancer outcome, or as type 2 diabetes. What other disease outcomes could these markers be predicting, based upon the available literature? That's where it becomes really exciting. 

 A patient can interact with their own data via a large language model, but so can the clinicians, because the clinicians are still not experts in everything. 

And one clinician cannot be an expert in every disease. It just doesn't work. But we can use AI to help facilitate that kind of capability. Using classification and regression tasks, attention models, and even transformer models, alongside machine learning models such as random forests or support vector machines, which are the standard workhorses in our system, we can train on these datasets and build out things like digital twins. I could create a digital twin of Miguel and use that digital twin to test out different treatment scenarios, right? Based upon all your available data, we can find out which treatment scenario is most likely to work best for you within a predicted outcome boundary, and predict counterfactual scenarios such as dietary response: what diet should you be eating while you're taking your medicine? These features can be incredibly powerful in helping us drive home the value of these very powerful AI tools to facilitate outcomes such as treatment and response variables. 
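
A minimal sketch of the digital-twin pattern Jack names: train a random forest on patient features, then score counterfactual scenarios, here a hypothetical diet variable, for one individual. All data and variable meanings are synthetic stand-ins, not the actual models used at BiomeSense.

```python
# Sketch of a counterfactual "digital twin" query using a random forest.
# Data, features, and the diet variable are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
diet = rng.integers(0, 3, size=n)             # 0, 1, 2 = three hypothetical diets
other = rng.normal(size=(n, 3))               # microbiome / blood / lifestyle summaries
X = np.column_stack([diet, other])
# Synthetic "good outcome" label that depends on diet and one other feature.
y = ((diet == 2) * 0.5 + other[:, 0] + rng.normal(scale=0.7, size=n) > 0.3).astype(int)

twin = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# One individual's current data; swap in each candidate diet and compare predictions.
patient = np.array([0.0, 0.2, -1.1, 0.4])
for d in (0, 1, 2):
    scenario = patient.copy()
    scenario[0] = d
    p = twin.predict_proba(scenario.reshape(1, -1))[0, 1]
    print(f"diet {d}: predicted probability of good outcome = {p:.2f}")
```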

HOST: I guess that we all know AI has its limitations, like hallucinations. What lessons have you learned about its practical challenges, and how do you see them being addressed in the future? 

JACK: Yeah, hallucinations are a real issue, especially in large language models. We can reduce that by only enabling underlying data frameworks that facilitate statistical outcomes. What I mean by that is: a large language model hallucinates because it's trying to take text, which is itself a hallucination of the person who wrote it, combine it with other pieces of text, and use that to predict more text, right? But when you're using statistics, which AI can facilitate if trained appropriately to interpret data, literally ones and zeros associated with defined, quantifiable metrics, then you get a much stronger, more effective outcome, one which has limits, right? You can't hallucinate beyond math. 

Math is hard to hallucinate in because it has defined boundaries. I'm sure there are mathematicians who might disagree with that, but for bog-standard clinical microbiome and biomedical scientists like myself, we have limits. So, we can use AI to test a hypothesis within those limits. Also, by defining the boundaries of the data layers that the system can use, we can more effectively train an AI to be more likely to provide a result that is correct.

What I mean by that is that a lot of the hallucinations a system produces come from taking a manuscript from the internet, right? And trying to interpret what the text and the information in the figures were meant to say. And remember, it's not using deep knowledge of this environment to do it. It's just trying to pull information and compare it to other data sources; it's almost trying to learn instantaneously what this information might be. So, we can take the data layers and make them more AI-ready, even for large language models, right? We can provide that large language model with a data layer which reduces the likelihood of that hallucination. So, for microbiome data, we're doing this by improving not just the density of observations of the microbiome, so we can see what's happening up and down, you know, in terms of changes over time, but also by making it quantifiable. So, instead of having a percentage of a particular bacterial species in the body, we're now reporting the abundance of a bacterial species or a function as cellular units per gram of feces, right?

So you can say, for Halobacterium pronitii, you may have 12.3 billion cells per gram of feces. That's now a quantifiable metric, which is comparable across very large data sets if they're all quantifiable. So, by revolutionizing how we analyze microbiome data and making it much more similar to blood chemistry results, which are quantifiable, or fMRI and CAT scan results, which are quantifiable, we can place these data layers on a comparable, quantifiable metric, which significantly reduces the likelihood that an AI will misinterpret an outcome based upon some level of hallucination. Obviously, I'm skipping over a vast array of potential implications there. 
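
The quantification step Jack describes amounts to anchoring relative abundances to a measured total microbial load. A small sketch with made-up numbers, assuming the total load has been measured separately (e.g. by flow cytometry or qPCR):

```python
# Sketch: convert relative abundances (fractions of reads) into absolute abundances
# (cells per gram of feces) by anchoring to a measured total microbial load.
# All numbers are illustrative only.
total_cells_per_gram = 9.0e10          # hypothetical measured total microbial load

relative_abundance = {                 # fractions from sequencing
    "Species A": 0.137,
    "Species B": 0.052,
    "Species C": 0.011,
}

absolute_abundance = {
    species: fraction * total_cells_per_gram
    for species, fraction in relative_abundance.items()
}

for species, cells in absolute_abundance.items():
    print(f"{species}: {cells:.3e} cells per gram of feces")
```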

But the other major concern of ours is the ethical consideration. In our study here, where we're using large language models to improve clinician communication with patients, the patients knew there was a possibility that they would be communicated with by a large language model. They were aware of the paradigm of the trial they were in. They had to sign off on it, because that's an ethical need. But as we move into the rapid adoption of these technologies, when you are very vulnerable, when you're a patient and you are putting your life in the hands of a surgeon or potentially now a surgical robot, your human trust level has to be commensurate with the level and ability of that technology. And quite honestly, even if a robot or an AI system might do a better job than a human, we're much more likely, due to our social, genetic, evolutionary programming, to trust the human, even if the human would do it worse, right? So that's a really important piece to understand: humans are ostensibly fallible, but we are genetically and evolutionarily programmed to trust humans over robots. 

We're working in an ethical framework whereby we need to keep humanity in the system, so that in a clinical setting we can have more effective support networks. 

HOST: One last question, what’s the most promising AI application in healthcare research you've seen this year? 

JACK: Well, it's a very big question, because there's AI that is available in clinical settings now, and then there's AI in the research space that isn't quite ready for implementation yet. So, I'll take the first one first. I mentioned the likelihood that we would be able to use AI to increase the financial efficiency of a hospital setting, how the setting is used and how charging is done, to ensure more effective and supportive turnover of our patient populations, to make sure that we reduce the risks to patients while optimizing the financial sustainability of the setting. And that is already being used, to varying degrees. 

People tend to use it effectively in some settings and not in others, and some groups are less likely to adopt it. But there are many consultancy groups out there now working with hospitals to implement these technologies, to test them out and demonstrate their effectiveness. So I'm very confident that within five years, most hospital settings, especially in the US, will be leveraging some kind of optimization software to ensure that financial sustainability and efficiency are at the heart of the operations of the organization.  

The chief financial officers and chief technology officers are working very closely to facilitate that, right? And it's an important piece of creating sustainable healthcare systems. 

The other piece already being used is image detection and analysis: using AI frameworks to improve the likelihood that a finding on a CT scan or an MRI scan can be reliably detected, right? But the results of those AI models are still being very well scrutinized by doctors and technicians to ensure a significant reduction in the likelihood of a false negative.  

Because a false positive is okay, right? Like, we think there's something there, okay, we'll run it through another couple of scans: okay, actually, it's not true, you're fine, sorry for the scare. But a false negative is potentially life-threatening. Imagine the AI did not detect a brain tumor: you actually have a glioblastoma, this is a major problem, I'm very sorry, we waited six months and now it's there. So, you don't want false negatives; you want to make sure that, if anything, the AI system is being a little bit overeager in its prediction. 
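
Jack's preference for an "overeager" detector corresponds to choosing a decision threshold that guarantees very high sensitivity, at the cost of extra false positives. A small sketch with synthetic scores and labels (no real detector is implied):

```python
# Sketch: pick the decision threshold for an imaging detector so that sensitivity
# (recall on true positives) stays very high, accepting more false positives.
import numpy as np

rng = np.random.default_rng(2)
labels = rng.integers(0, 2, size=1000)                                # 1 = abnormality present
scores = np.clip(labels * 0.35 + rng.normal(0.4, 0.2, 1000), 0, 1)    # synthetic model scores

target_sensitivity = 0.99
pos_scores = np.sort(scores[labels == 1])
# Lowest threshold that still flags at least 99% of true positives.
threshold = pos_scores[int((1 - target_sensitivity) * len(pos_scores))]

flagged = scores >= threshold
sensitivity = flagged[labels == 1].mean()
false_positive_rate = flagged[labels == 0].mean()
print(f"threshold={threshold:.2f}  sensitivity={sensitivity:.3f}  FPR={false_positive_rate:.3f}")
```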

And that's an important piece of this. But those technologies are being implemented, and they are being adopted at a level which is quite effective. On the research front, as I said, it's everything from improving our bioinformatic workflows (the AI platforms that we employ at BiomeSense, for example, are a very explicit example of that) all the way through to improving the annotation structure and making things more accurate in the microbiome space.  

But for patient care, a lot of it is in predicting how to optimize healthcare outcomes for an individual in the research setting. So, everything from being able to accurately predict the diet that an individual should follow in order to achieve a particular physiological outcome. Like, for example, should you eat blueberries? When should you eat them? How many should you eat? You know, can we optimize that for your physiology? Can we test that in your digital twin, for example? So these processes are very exciting. 

And one of my close colleagues, who is also one of my ex-graduate students, Sean Gibbons, is at the forefront of developing some of these technologies. Other leaders in this field are Rob Knight here at UCSD, Curtis Huttenhower over at Harvard, and people like Eran Segal at the Weizmann Institute in Israel, who are really at the frontier of using some of these technologies to predict how these systems work, to effectively support health and the optimization of health in our research populations. But again, this is still in the research space. 

HOST: Okay, thanks a lot, Jack, for your collaboration, and congratulations on your work.  

 

“This is where AI can unlock huge potential. AI's strength, in our view, is the ability to capture, classify, and extract data from these formats, turning all these documents into actionable information. With AI, this is not just faster but, if you do it right, more reliable, and in any case much more scalable and time- and resource-independent.” - Serafin Bäbler 

 

HOST: Today we also have Serafin Bäbler, the Head of BPS Operations for Continental Europe at SPS. Thanks, Serafin, for being here today.  

HOST: So, we've seen the use of AI in the research process, but we're hearing more and more about a big revolution coming regarding information flows in healthcare, right? 

SERAFIN: Yes, absolutely. What we see right now in the field is a real, invisible AI revolution in healthcare. Why? Let me start with a picture. We often picture AI in healthcare as robots performing surgery, or as models detecting clinical findings like tumors. But beneath all that, there is a quieter, equally powerful transformation happening right now, and SPS is part of that game. The revolution is in how information flows in healthcare.  

Think of the vast volumes of documents going back and forth in healthcare between providers, insurers, and patients. These can be forms, referrals, cost claims, medical reports, lab results, you name it. And most of that data is unstructured; much of it is still paper-based or buried in various PDF forms.  

This is where AI can unlock huge potential. AI's strength, in our view, is the ability to capture, classify, and extract data from these formats, turning all these documents into actionable information. With AI, this is not just faster but, if you do it right, more reliable, and in any case much more scalable and time- and resource-independent. 

HOST: Okay, and from your perspective, where do you see AI having the most immediate operational impact? 

SERAFIN: Yeah, great question. As a private person, and sometimes a patient of healthcare institutions, I see huge immediate potential in all the inbound and outbound communication that I, and the various healthcare institutions, have with each other, communication based on documents. 

And a healthcare institution, I'm pretty sure about that, is flooded every day by emails, by scanned documents, and by postal letters containing all the documents I mentioned before. They need efficient triaging. That means, you know, that a document reaches the person who needs to do something about it, that it gets interpreted, and that you can follow up with the surgery or the treatment. So, this is where we see an immediate potential to help patients get their treatment faster and with less communication back and forth. These aren't the robots I mentioned earlier doing surgery. No, they are more like background AI applications, but in our view they are as mission-critical as surgical robots. AI can reliably analyze the documents that, for example, I send to a healthcare institution to get treatment, more or less in real time.  

When I send them in, they can be classified automatically and, if you do it right, even prioritized, so that the highest-priority items pop up at the top of the respective to-do lists. By using AI in this field, we are absolutely sure that we are saving healthcare professionals a huge number of hours of manual work every day. 
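
A minimal sketch of the classify-and-prioritize idea Serafin describes, using a tiny invented training set. The document categories, example texts, and priority rules are purely illustrative, not SPS's production system.

```python
# Sketch: classify incoming healthcare documents and assign a triage priority.
# Training texts, categories, and priority rules are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "referral to cardiology for urgent assessment",
    "lab results attached for patient follow-up",
    "invoice for outpatient treatment, please reimburse",
    "referral letter to orthopedics",
    "claim for physiotherapy sessions",
    "blood test lab report",
]
train_labels = ["referral", "lab_result", "claim", "referral", "claim", "lab_result"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
classifier.fit(train_texts, train_labels)

priority = {"referral": 1, "lab_result": 2, "claim": 3}   # 1 = handle first

inbox = ["urgent referral to oncology", "reimbursement claim for dental surgery"]
for doc in inbox:
    category = classifier.predict([doc])[0]
    print(f"{category:10s} (priority {priority[category]}): {doc}")
```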

HOST:  We all know that there are many processes that AI could improve, increasing productivity or changing how we do things. But which processes in the healthcare sector do you think benefit the most from AI-driven transformation? 

SERAFIN: Yeah. So we had a look at the volumes per process type and, of course, also at where the immediate automation potential is. In talks with our customers, we came up with the following list of processes where we believe the AI potential is readily available.  

First, claims handling. A claim means there is, more or less, an invoice that needs to be paid out. This is a high-volume process which follows certain business rules, and by automating those rules, we see that you can arrive sooner and more reliably at a successfully handled claim. 

Then the whole referral management is something that takes up a lot of healthcare professionals' time, and by automating it we can free up those resources to talk to actual people rather than being buried in paperwork.  

The same goes for medical report indexing. There is huge potential to make the indexing more precise and thereby free up time for the medical experts to actually look at the reports rather than look for them. And the healthcare industry is also a spiderweb of contracts, so by having proper contract lifecycle management supported by AI, we absolutely believe, and see in a few cases, that this can increase the efficiency of the whole healthcare ecosystem. 
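
For the claims-handling case Serafin lists first, the business-rule automation he describes might look, in its simplest form, like the sketch below. The field names, the coverage list, and the auto-approval threshold are hypothetical.

```python
# Sketch: encode simple business rules for claims handling. Field names, the
# coverage list, and the auto-approval threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    policy_active: bool
    treatment_code: str
    amount_chf: float

COVERED_TREATMENTS = {"physio", "lab", "gp_visit"}
AUTO_APPROVE_LIMIT = 500.0  # CHF

def route_claim(claim: Claim) -> str:
    """Return 'auto_approve', 'reject', or 'manual_review' based on simple rules."""
    if not claim.policy_active:
        return "reject"
    if claim.treatment_code in COVERED_TREATMENTS and claim.amount_chf <= AUTO_APPROVE_LIMIT:
        return "auto_approve"
    return "manual_review"   # anything unusual goes to a human

print(route_claim(Claim(True, "physio", 180.0)))    # auto_approve
print(route_claim(Claim(True, "surgery", 4200.0)))  # manual_review
print(route_claim(Claim(False, "lab", 90.0)))       # reject
```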

 

“The revolution is in how information flows in healthcare […] By using AI in this field, we are absolutely sure that we are saving healthcare professionals a huge number of hours of manual work every day.” - Serafin Bäbler 

 

HOST: As far as I know, you are also working with health insurers on projects like, for example, Sanitas. What key learnings can you share from these kinds of projects? 

SERAFIN: Yeah, that was a very interesting project for me personally, because just after I joined SPS we had this great opportunity to help the Swiss health insurance company Sanitas with their inbound document management. One of the success factors was clearly that, from the beginning of the integration project, it was a collaboration between Sanitas as our client and ourselves. The aim, and what we achieved, was to automate the entire inbound document flow for this insurance company. As you can imagine, that's paper sent as letters, but also email, uploads through portals, et cetera. 

And the challenge at that point in time was that the extraction, classification, and follow-up processes were not the same for all inbound channels. To give an example: when a piece of information arrived via letter, it took one path and got extracted to one level; when it came through another channel, it went another way. What we achieved is that however any information reaches Sanitas, by letter or via email, it follows the same classification and extraction steps. 

That allows for very lean and efficient back-office processes, because you don't have to care whether it was a letter or an email. 

And that was possible because we didn't just automate what we saw. We first simplified the whole set of process steps before we actually went into the implementation phase with IT and our business analysts. So, we really looked at the situation and re-engineered the processes together with Sanitas: we redefined the categories, streamlined the case types across these inbound channels, and then let AI take over the repetitive work.  

Another really important insight is that the whole human-in-the-loop design was critical to meet the level of quality necessary for Sanitas to process the information in the steps that come after our extraction. We typically guarantee 99%+ correct results, and in this case, even after a lot of improvements, that is still a combination of the right artificial intelligence together with a human in the loop for manual corrections and quality control. 
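
The human-in-the-loop design Serafin mentions often comes down to confidence-based routing: extractions the model is unsure about go to a person, so the combined system can hit a quality target such as 99%+ correct fields. A minimal sketch with invented confidence values and a hypothetical threshold:

```python
# Sketch: route automated extractions to human review when model confidence is low,
# so the combined system can hit a quality target (e.g. 99%+ correct fields).
# The confidence threshold and sample data are illustrative.
CONFIDENCE_THRESHOLD = 0.95

extractions = [
    {"field": "policy_number", "value": "CH-48213", "confidence": 0.99},
    {"field": "invoice_amount", "value": "312.40", "confidence": 0.87},
    {"field": "treatment_date", "value": "2024-03-02", "confidence": 0.97},
]

auto_accepted, needs_review = [], []
for item in extractions:
    (auto_accepted if item["confidence"] >= CONFIDENCE_THRESHOLD else needs_review).append(item)

print("Accepted automatically:", [e["field"] for e in auto_accepted])
print("Sent to human review:  ", [e["field"] for e in needs_review])
```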

HOST: Okay, I guess applying this is always more effective with a list of advice or key strategies. What should healthcare leaders consider before starting an AI journey? 

SERAFIN: First, choose a field or business case which is, I would say, small enough that one can manage it with a reasonable team, but choose it strategically. To give an example: for a health insurer, that would probably be, as I mentioned, the claims process.  

Choose an area which is, so to say, painful enough that it has visibility and importance within the company. Second, choose your outsourcing partner's technical landscape so that it fits well with your own core systems.  

Choose a partner which has proven interfaces to the common core systems, to make sure that data is imported and exported correctly at that interface.  

And third, speak with your outsourcing partner, for example SPS, about success metrics. What is really important for a successful collaboration at the end of the day? Is it a saving that you would like to achieve, and if so, in which area should the saving be visible and measurable? Is it the turnaround time that you would like to improve, for example on specific document flows, and if yes, which ones? Or is it that you would like to improve your customer satisfaction?  

Having defined success metrics helps not only to align an outsourcing partner like SPS with your own goals, but also to communicate the AI project well internally and externally. 

HOST: I also asked Jack this question, and it's great to compare overall thoughts on AI opportunities for the future. Where do you see the biggest current opportunity with Gen AI? 

SERAFIN: Yeah. Generative AI, with ChatGPT but also all these other large language models, which have evolved tremendously within just the last couple of months, is very powerful when it comes to summarization and interpretation of documents. 

I spoke a little bit before about extraction and shaping data, et cetera, but generative AI with large language models can add a whole other level to process improvements with AI.  

Imagine, with generative AI compared to classical AI, a tool that reads multiple multi-page reports and gives you an operational summary, or an answer to the questions it has been prompted with, or that can even create draft responses to customers in natural language, and in every language, because alongside the topics I just mentioned, generative AI has, I think, also improved translation a lot, from English to any language and vice versa. 

Generative AI is really a huge step forward. At SPS we are already testing these capabilities with partners, because we're super interested in them. But especially when you share critical data, it's super important that it happens in a controlled, supervised way, because generative AI with large language models will always give you feedback; they always have an answer to the question. 

And we believe that a human in the loop for quality control, together with the new power of generative AI, is the industrialized way it can add value to healthcare processes. So with specialized AI, not general-purpose chatbots, we will win in regulated fields like healthcare, given their special needs for controlled and supervised approaches. 
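
A sketch of the controlled, supervised generative-AI pattern Serafin describes: summarize a document, draft a reply, and hold everything for human sign-off. The generate function here is a placeholder for whichever large language model service is actually used; no specific vendor API is implied, and the redaction step is deliberately crude.

```python
# Sketch of a controlled generative-AI step: summarize a report and draft a reply,
# but never send anything without human sign-off. `generate` is a placeholder for
# a real large language model call; redaction and prompts are simplified.
import re

def redact(text: str) -> str:
    """Crude placeholder redaction of obvious identifiers before sending text out."""
    return re.sub(r"\b\d{2}\.\d{2}\.\d{4}\b", "[DATE]", text)

def generate(prompt: str) -> str:
    """Placeholder for a large language model call (hosted or on-premise)."""
    return "DRAFT: model output for prompt: " + prompt[:60]

def draft_reply(report_text: str) -> dict:
    safe_text = redact(report_text)
    summary = generate("Summarize this medical document for a case handler:\n" + safe_text)
    reply = generate("Draft a polite reply to the sender based on this summary:\n" + summary)
    # Nothing leaves the system automatically: a human reviews and approves every draft.
    return {"summary": summary, "draft_reply": reply, "status": "pending_human_review"}

print(draft_reply("Report of 02.03.2024: MRI of the knee shows ...")["status"])
```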

HOST: Okay, that was the last question for today. Thanks a lot, Serafin, for being here. And thank you all for listening to this episode. I invite you to subscribe to our podcast if you haven't already, so you’ll be informed about upcoming episodes. Remember, you can listen to us on Spotify, Amazon, and Apple. Thanks again, and see you in the next episode! 

 

RELATED CONTENT


Health

Empowering your talent to prioritize patient care with innovative automation technology


Healthcare

SPS facilitates the shift to digital information and document processing in the healthcare sector.