All right, good morning everyone.
I'm Paul Mesange.
I'm leading the solids department at LabCorp.
And today the purpose of my presentation is to talk about pixels: mainly what these pixels are for and how they can be used in image analysis and clinical trials.
So looking at clinical trials, we have 7 phases.
We first need to find a compound; we need to go into early phases.
We need to go into translational research to bring them into clinical trials.
Phase 1-3, hopefully get it approved and then monitor our patients.
That entire process can take 10 to 15 years, with an average of 12.
Looking at the costs, the more you go into the phases, the more expensive your studies will be.
So what if there was a way to decrease those costs and those timelines?
So here we will be focusing on the clinical trial phase, and I will take the example of a central lab.
But this is also true for what sites are doing with their patients.
If we're looking at a clinical trial with histology samples, we have several steps.
The first one is the sample collection at site from the patient.
The samples will be then shipped to a central lab or to other departments to be analysed.
And I will be talking about histology.
But the work that can be done with the samples could be multiple things.
We could have flow cytometry, genomics, tonnes of things, but in the context of Histology, I might be talking about IHC assays, special stains, FISH or simple routine processing.
And the standard, at least until 2017, was to take those slides to pathologists so they could read them under a microscope.
We will see that this has changed.
Anyway, following the pathology evaluation, the goal is to provide a report to sites or to our clients that helps the clinical trial succeed.
So looking at what's being done right now: I went on the clinicaltrials.gov website and found approximately 9,000 clinical trials involving histology work.
Among those, almost 2,000 studies are recruiting and almost 500 are still to recruit.
All of those numbers are increasing.
Looking at the small table, you can see that scanning demand for histology slides is also increasing, by almost 5 to 9% per year, and this table stops at 2018.
Besides histology, on the right side we can see an analysis of the complexity of clinical trials, which could be due to multiple reasons: multiple tests, conditional testing. This is true across different fields, but we're seeing an increase in the complexity of our trials.
So first, the complexity.
Second, more histology work and scanning.
And finally, we're seeing a lot of trials that are based on histology itself, on histology services.
Now it's not only about the slide staining; as you'll see, it's also about the slide scanning and the transformation of the slide into an image.
Here we have an image of a painting by Rembrandt. An analysis was done on these paintings to try to find new features: by scanning the paintings and looking at the pixels, the pixel analysis revealed a lot more information.
That's what we want to do with our slides as well.
We have an H&E slide here that is being scanned.
And if we're going down to the pixel level, we'll be able to identify a lot of different patterns into it.
So we have the slide that, as the standard, was brought to pathologists for microscopic evaluation.
Now we have the ability to take that slide to transform it into an image through a scanning system.
If we think about image analysis, we have pixels.
And if we have pixels, we can do analysis.
And in that picture, extracted from a recent 2025 paper, we're seeing a lot of applications in that field: thyroid cancer, lung cancer, bladder cancer.
And it's not only about detecting tumour anymore; we have a lot more things we can do, such as patient prognosis, survival studies, or even treatment design.
So pretty exciting applications, but what about the performance of these image analysis and artificial intelligence models?
Well, the meta-analysis on the right side assessed the sensitivity and the specificity.
We can see that we have a pretty high diagnostic accuracy.
And if I had given this talk five years ago, I would have said that the AI performs at least as well as a pathologist.
But as we've seen with ChatGPT, everything is evolving really fast.
And right now, what I can say is that the AI is performing better now, especially when we're combining the model with a pathologist evaluation.
Why are we interested in that?
Because if we're looking at the digital pathology market, we're expecting growth of at least 10% per year from 2025 to 2035, and that estimate keeps increasing.
So that was the old model.
So I was talking about 2017.
Why?
Because 2017 is when we had formal approval of scanning systems.
In 2020, in the COVID era, there was a massive implementation of scanning systems.
And here we're moving away from the microscope to get a scanning system.
And instead of having the slides reviewed locally, we have access to a pool of pathologists.
So I am based in Geneva.
I might produce a slide at 6:00 PM, but I may not have a pathologist available for evaluation at that time, at least using glass slides.
But if I'm able to scan it and have an image, I can provide the image to my US colleagues, and they will be able to provide an evaluation.
So that's a shift from what we've done so far.
How is that possible though?
I'm talking about images, but we need a central repository where all the images live.
We need an ecosystem.
This is what we are using, an image management system.
We have an input, some features and output.
For the input, we have the image of course, and we can also have patient metadata associated with it. Within the image management system, these tools allow us to review slides, collaborate, produce reports and then get some metrics.
But what's even more interesting is that this image management system has its own AI feature.
We at LabCorp, as users of the system, are also developing AI.
But maybe what's more interesting is that this system can integrate your AI.
If you have something, we can integrate that into our image management system and associate it to the LabCorp workflow.
So there are a lot of advantages if we're doing that: better patient care, and access to a lot of pathologists.
We have experts that are present anytime, anywhere in the world.
We have long-term savings.
We do not need to ship slides.
We're making savings on clinical trials.
We have the AI implementation that I just talked about, and a lot of benefits around the availability of experts.
You can get a second opinion on cases using that image management system, with easily accessible consultations.
And of course, finally faster turnaround time.
Remember the example I gave you of that Geneva slide that could be assessed overnight.
So I would like first to talk about case examples, looking at internal efficiencies.
How can AI or image analysis improve the global workflow?
I will start simple with here an example of a pathologist’s task.
So normally that's a video, but if it's not working and you cannot click it, I will just describe it. We have here an H&E slide.
And the goal here for the pathologist is to identify the tissue and the tumour inside that.
That was the previous slide.
First, in blue, we have the entire tissue.
And this can be done by pretty much anyone in this room: identifying tissue on an H&E slide, and even identifying tumour.
This is really the basic task for pathologists in the vast majority of cases.
And well, we're not able to see the video, but doing that task manually can take up to 15 minutes to get a precise annotation.
However, if we're using the image management system, we can bring the AI, and within the AI we can have the results within two minutes.
So we're cutting the time by a factor of more than seven. It may not sound like much to go from 15 minutes to 2, but if you're talking about 100 cases per day across five different sites, that makes a difference.
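To put numbers on that claim, here is a quick back-of-the-envelope sketch using the illustrative figures from the talk (15 minutes manual, 2 minutes with AI, 100 cases per day, five sites):

```python
# Time saved by AI-assisted annotation, using the talk's illustrative numbers.
MANUAL_MIN = 15      # minutes for a precise manual annotation
AI_MIN = 2           # minutes with the AI-assisted workflow
CASES_PER_DAY = 100  # cases per day per site
SITES = 5            # number of sites

saved_per_case = MANUAL_MIN - AI_MIN                     # 13 minutes per case
saved_per_day = saved_per_case * CASES_PER_DAY * SITES   # minutes across all sites
print(saved_per_day, "minutes saved per day")            # 6500 minutes
print(round(saved_per_day / 60, 1), "hours saved per day")
```

That is roughly a hundred pathologist-hours freed up every day across those five sites, which is why a 13-minute saving per case matters at scale.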
The second example was about spatial biology.
I will not talk too much about spatial biology, but we have ways to stain slides with multiple markers.
We have some sponsors here today: Lunaphore, Akoya, that are providing multiplex technologies.
They can do up to 20, or for some of them 50, markers. Here there was an example of a six-plex, and everything is still based in the same image management system.
Our pathologist can review and consult reports anytime.
Now going into the future.
So we have the multiplex image, as you will see, and we can display the results so they can be seen by the pathologist here.
In that case, we have a multiplex assay, and the pathologist will evaluate few things.
The first one is the tissue identification.
The second one will be the cell detection.
And then the phenotype classification. It does not need to be only phenotyping; we could add spatial biology, and everything here can be seen within the algorithm mask.
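The three steps just described, tissue identification, cell detection, and phenotype classification, can be sketched in miniature. The marker names and thresholds below are purely illustrative and not from any validated assay:

```python
# Toy phenotype classification from per-cell marker intensities (0..1),
# the third step of the multiplex pipeline described above.
# Marker names and the 0.5 threshold are illustrative assumptions.
def classify_phenotype(cell_markers, threshold=0.5):
    """Assign a phenotype label from a dict of marker intensities."""
    positives = {m for m, v in cell_markers.items() if v >= threshold}
    if {"CD3", "CD8"} <= positives:
        return "cytotoxic T cell"
    if "CD3" in positives:
        return "T cell"
    if "panCK" in positives:
        return "tumour cell"
    return "other"

# One row per detected cell, as would come out of the cell-detection step.
cells = [
    {"CD3": 0.9, "CD8": 0.8, "panCK": 0.1},
    {"CD3": 0.7, "CD8": 0.2, "panCK": 0.0},
    {"CD3": 0.1, "CD8": 0.0, "panCK": 0.95},
]
print([classify_phenotype(c) for c in cells])
# -> ['cytotoxic T cell', 'T cell', 'tumour cell']
```

Real multiplex pipelines use learned classifiers rather than fixed thresholds, but the structure, per-cell marker vectors in, phenotype labels out, is the same.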
Let's look at this one.
That's another example of pretty exciting things.
It's called virtual staining.
You've probably heard about it. We're talking about sample collection in a hospital.
And in our case, where we're sending the samples to a central lab, we apply a chemical, physical stain that is then reviewed.
But what if we were not staining it?
What if it was artificially stained by an AI, with a deep neural network?
On the left column, you have the unstained slide.
On the third column, we have what we call the ground truth.
This one is actually the physical stain.
But on the far right, we're seeing the virtual stain.
That's the AI trying to match the ground truth.
This is a training data set, but this is working.
And there was an announcement this week showing that within our image management system, we can bring three virtual stains.
The first one is the H&E, the one that you're seeing on the screen.
We also have Masson's trichrome, which could be used for NASH/MASH studies, and also pan-CK for potential tumour identification.
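Conceptually, virtual staining learns a mapping from the unstained image to the stained appearance, supervised by the physically stained ground truth. As a toy stand-in for the deep network, here is a per-pixel linear map fitted by least squares on synthetic paired data; everything here is illustrative, and real systems use deep convolutional models on aligned image pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired data: 2-channel "unstained" pixels and a 3-channel
# "ground truth" stain related by a hidden linear map (toy stand-in
# for real unstained/stained image pairs).
true_map = np.array([[0.8, 0.1, 0.3],
                     [0.2, 0.7, 0.5]])
unstained = rng.random((1000, 2))
ground_truth = unstained @ true_map

# "Train" the virtual stain: least squares in place of a deep network.
learned_map, *_ = np.linalg.lstsq(unstained, ground_truth, rcond=None)
virtual_stain = unstained @ learned_map

# The virtual stain should closely match the ground-truth stain.
print(np.abs(virtual_stain - ground_truth).max() < 1e-8)  # True
```

The point of the sketch is the supervision structure: the model only ever sees the unstained input, and the physical stain serves as the training target, exactly as in the H&E example on the screen.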
So everything within the workflow of a clinical trial. And here again, we have multiple possibilities with what we can do with images.
I just listed some cases here, we have six, but we can do much more with spatial biology or AI.
And in terms of AI, what's really interesting again is that you can bring your own solution into the system, or also commercially available software: HALO, PathAI, whatever; we can bring that into the software.
Let's talk a bit about regulations.
Though we're in the clinical trial space, we do have some regulations to follow.
So first, image analysis can be augmented by artificial intelligence, which will improve the diagnostic accuracy.
What's important here is that it will reduce inter-reader variability.
It will enable our pathologists to focus more on complex cases, and in order to do that, we have some guidelines.
It's still evolving there; the AI Act, for instance, has been disclosed recently.
And if AI is implemented into a clinical trial, we need to think about CAP validation first.
Then, whether to use it will be the sponsor's choice.
It's up to the sponsor to decide to use a specific method.
The way we're seeing it and using it as of now, for internal efficiencies, is to get results with the pathologist's oversight.
So the AI algorithm will drive the biomarkers and give insight about the results, but the final call belongs to the pathologists.
And finally, a clear reporting.
You have an example on the right side with a Ki67 IHC-stained slide: the area identification, cell classification and a clear report.
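That oversight model, where the AI proposes and the pathologist has the final call, can be sketched in a few lines. The function and field names below are purely illustrative, not a real LabCorp API:

```python
# Sketch of pathologist oversight: the AI output is advisory and a result
# is only released once a pathologist signs off (possibly overriding it).
# Names and fields are illustrative assumptions.
def release_result(ai_ki67_percent, pathologist_decision):
    """Final call belongs to the pathologist; AI output is advisory."""
    result = {"ai_ki67_percent": ai_ki67_percent, "status": "pending review"}
    if pathologist_decision is not None:
        result["final_ki67_percent"] = pathologist_decision
        result["status"] = "signed off"
    return result

print(release_result(23.4, None)["status"])   # 'pending review'
print(release_result(23.4, 25.0)["status"])   # 'signed off'
```

The key design point is that no result reaches the report without the human sign-off step, which is exactly the "pathologist's oversight" the guidelines call for.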
So what have we seen so far in 2025 in that space?
I have 3 examples here on the screen.
First one is pretty interesting.
We're still talking about one slide, one image, one marker.
But what's different is that in this case, Ventana has developed and is trying to validate an entire chain.
So it's not only about the AI analysis, it's about the antibody, the stainer, the scanning system.
So the stainer, the Ventana BenchMark Ultra Plus; the scanner, the DP 600; and then the analysis with the navify hub, looking at it from a whole perspective.
The second one is more related to the virtual approach I was showing you before, because in this case we're looking at H&E slides only: those H&E slides will feed an AI, and that AI will state whether there is a presence of an FGFR mutation.
If there is, we will go on to the PCR test; but if the AI thinks there's an absence of FGFR mutation, we will stop there.
So we're providing a much faster approach to FGFR mutation testing, one that is low cost and pretty accurate.
That's what was shown in that example at ASCO 2025.
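The triage logic just described, screen on H&E with the AI, then confirm only likely positives with PCR, reduces to a single decision rule. The probability threshold below is an illustrative assumption, not the validated operating point:

```python
# Sketch of the FGFR triage described above: the AI screens H&E slides and
# only likely-positive cases go on to the confirmatory PCR test.
# The 0.2 threshold is an illustrative assumption.
def fgfr_triage(ai_probability, threshold=0.2):
    """Return the next step for a case given the AI's FGFR probability."""
    if ai_probability >= threshold:
        return "confirm with PCR"
    return "stop (AI-negative)"

print(fgfr_triage(0.65))   # 'confirm with PCR'
print(fgfr_triage(0.05))   # 'stop (AI-negative)'
```

The cost saving comes from the second branch: every case the AI confidently rules out is a PCR test that never needs to be run.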
And then moving even further in terms of complexity, let's bring other parameters.
We're only talking about images, but what if we were talking about patient data?
That's what you're seeing on the right side.
We have the H&E slide, but we also have patient metadata, such as the age, and a pathologist assessment.
And in this case, they were trying to predict the prostate cancer outcomes after a prostatectomy.
That's multimodal AI. And in this example we have two modalities, but we can think about adding more.
We can think about adding CT scans, MRI, blood work, serum work, flow cytometry, genomics: anything we can think of to deliver highly personalised medicine.
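At its simplest, multimodal fusion means turning each modality into a feature vector and combining them before a downstream model. The dimensions, field names, and scaling below are illustrative assumptions, not any particular published system:

```python
import numpy as np

# Toy sketch of multimodal fusion: concatenate image-derived features
# with patient metadata before feeding a downstream outcome model.
image_features = np.array([0.12, 0.87, 0.45])  # e.g. from an H&E slide model
metadata = {"age": 67, "psa": 5.2}             # illustrative clinical variables

# Hand-picked scaling so modalities share comparable ranges
# (real systems learn such normalisation from data).
meta_vec = np.array([metadata["age"] / 100.0, metadata["psa"] / 10.0])

fused = np.concatenate([image_features, meta_vec])
print(fused.shape)   # (5,)
```

Adding a modality, say MRI or genomics, just extends the fused vector; the hard part in practice is aligning and weighting the modalities, which published multimodal models learn end to end.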
On top of that, we can also think about digital twins and virtual health coaches to develop biomarker-driven digital therapeutics.
So at this stage, I wanted to get back to clinical trials and how this could be integrated into it.
We have a kind of framework that was disclosed in 2024; that's the example of FGFR and how we could design a trial that includes an AI model.
Three boxes: the first one is the clinical trial.
That's where the patients are, with samples being sent to the central lab.
The central lab will process the samples, or the image directly, and will send that to a secured location to get the result, which will then be sent back either to the central lab or to the site.
So now, when we're thinking about digital pathology, a small question: we're thinking about advancing precision medicine, but who do we think will provide the answer: doctors, data scientists, or AI? Would AI be the doctor?
I don't know if you've heard, but in April in Saudi Arabia, an AI doctor was approved: you just go there and get treatment straight from an AI.
Well, we know that AI can really boost and improve diagnostic accuracy.
We can deliver adaptive, software-based treatment, and it can drastically accelerate biomarker discovery, enabling really tailored treatment for patients.
So getting back to the question I was asking this audience: I think it's not really about whether AI will become the doctor, but how we can use AI, make it a trusted partner, and personalise it in our clinical trials to finally take care of our patients.
Thank you very much.
