AI, Multi-Omics and Precision Medicine: From Data Complexity to Clinical Impact
 
Bringing together leaders from academia, pharma, biotech, diagnostics and technology development, the session explored a central question facing the field today: how can AI and multi-omics move beyond technical promise to create clearer clinical and therapeutic impact?
 
Across the session, the discussion was structured around three themes - AI and data in multi-omics, the future of the field, and advancing therapeutic development. What emerged was not a simple story of optimism or scepticism, but a more grounded view: AI and multi-omics are already creating value, but their impact depends heavily on how they are applied, how data is generated and interpreted, and whether outputs can be translated into decisions that matter in research, development and clinical care. 
 
Theme 1: AI & Data in Multi-Omics
 
The discussion opened with the question of where AI is truly adding value across the multi-omics workflow - from data integration to drug discovery and clinical development - and where it may be overhyped.
 
A broad consensus emerged that AI is delivering real value in data-heavy, high-complexity environments. Speakers pointed to its usefulness in processing large datasets, supporting visualization, helping standardize fragmented inputs, identifying patterns across complex biological systems, enabling first-pass analyses, and advancing image-based fields such as digital pathology and spatial biology. In these settings, AI is already functioning as a meaningful enabler. 
 
At the same time, several contributors pushed back on inflated claims. A recurring criticism was that AI becomes overhyped when it is presented as independent from workflow, scientific reasoning or clinical implementation. There was scepticism toward claims that AI can fully replace data scientists, bypass direct measurement, or reconstruct complex biological modalities in ways that remove the need for underlying experiments. Instead, speakers repeatedly described AI as most useful when treated as a tool - one that can accelerate exploration and support interpretation, but not substitute for expertise, governance or robust validation.
 
That distinction mattered even more when the conversation turned to integration at scale. The group identified a wide set of barriers limiting progress in precision medicine: inconsistent data structures, non-standard naming conventions, sparsity, missingness, variable sample quality, fragmented clinical data and workflows that remain surprisingly manual. In many development settings, important processes still depend on disconnected systems and spreadsheet-based workarounds, making it difficult to harmonize and operationalize multi-omic information. 
 
This led directly into the question of the biggest bottleneck preventing AI-driven, multi-omics insights from delivering measurable patient impact. The discussion suggested there is no single barrier, but rather a chain of them. Data may be generated successfully, but if it cannot be standardized, linked to clinical context, embedded into workflow, reimbursed, and communicated in a way that clinicians can act on, it will not translate into benefit at the patient level. The bottleneck is therefore not just technical. It is translational, operational and commercial. 
 
Theme 2: The Future of the Field
 
The second theme focused on what it will take for multi-omics to become more actionable - particularly through clinically relevant biomarkers and earlier use in clinical development.
 
A strong theme here was that multi-omics is most powerful when it helps narrow decisions, not simply expand datasets. Participants discussed how layered molecular evidence can support biomarker development, improve patient stratification, strengthen response prediction and help define subpopulations more precisely. Rather than treating biomarkers as static outputs, contributors described an evolving model in which genomic, transcriptomic, proteomic, pathological and real-world data can be combined to refine and strengthen clinical signals. 
 
However, the route from multi-omic discovery to clinical adoption remains difficult. Scientific challenges remain substantial, including the need to determine which molecular layers matter most in a given context and how to prioritize them when sample material is limited. Technical challenges are equally significant: sample quality, pre-analytical variability, insufficient material, lack of standardization across assays, and inconsistent reproducibility all continue to slow progress. Several speakers emphasized that clinicians may want richer molecular insight, but in practice there is often not enough high-quality tissue or blood, and not enough operational simplicity, to support every desired layer of analysis.
 
Regulatory and reimbursement barriers were discussed just as forcefully. In therapeutics, the path to approval is relatively well understood. In diagnostics and multi-omic clinical tools, the path is far less straightforward. Contributors highlighted a lack of clarity around validation expectations, weak reimbursement routes and the challenge of proving value in a way that healthcare systems will recognize and pay for. This was framed as one of the major reasons why technically promising approaches do not always become routine in practice. 
 
Another critical point was usability. If multi-omics is to move into routine clinical care, it cannot remain locked inside research-grade reports or highly technical analyses. Clinicians need outputs that are concise, interpretable and tied to action. The discussion repeatedly returned to the same principle: results must fit clinical workflow. In practice, that may mean highly distilled reporting, clear treatment implications and outputs that can be absorbed quickly in time-constrained settings. Without that simplification, even highly sophisticated biomarker strategies risk remaining peripheral to routine practice. 
 
Theme 3: Advancing Therapeutic Development
 
The final theme examined how multi-omics could reshape target identification and validation, and whether current failure rates in drug development stem more from poor early target selection or late-stage decision-making.
 
Here, the discussion shifted from discovery volume to decision quality. Several participants argued that the true value of multi-omics is not simply in identifying more targets, but in helping researchers sift through them more intelligently. By layering evidence across genetics, transcriptomics, proteomics, imaging, pathology and clinical data, teams can build a more robust case for which targets are likely to matter biologically and clinically. Multi-omics, in this view, is a mechanism for improving confidence and prioritization, rather than just expanding the list of possibilities. 
 
The conversation suggested that failure rates are rarely explained by one stage alone. Poor early target selection remains a problem, but so does weak translational planning later in development. A target may be biologically sound yet still fail if the relevant patient subgroup is not identified, if biomarkers are introduced too late, or if the development strategy is not aligned with the population most likely to respond. Several speakers described this as a disconnect between asset strategy and data strategy - particularly in large organizations where biomarker teams may be asked to produce answers without enough time, data or early coordination. 
 
This led into the final question: where is the biggest disconnect across the translational pipeline, from discovery through to clinical development?
 
One answer was that research often produces findings that are scientifically exciting but not operationally realistic. Biomarkers may be discovered in settings that do not reflect how samples are collected in real clinical trials, how tissues are handled at distributed sites, or what can actually be measured consistently in practice. Another answer was that data remains too siloed. The discussion explored the difficulty of connecting cohorts across hospitals and centres, harmonizing data across populations, and building shared evidence bases large enough to support stronger translation. Participants discussed the promise of collaborative and federated models, but were equally realistic about the obstacles - including consent, infrastructure, governance, underrepresentation of many populations, and the need for trust in how data is used.  
 
A More Mature View of the Opportunity
 
Taken together, the discussion pointed toward a more mature framing of AI and multi-omics in precision medicine.
 
AI is already useful, but mainly when deployed in service of clearly defined scientific and clinical tasks. Multi-omics is already powerful, but only when its outputs can be reduced into evidence that improves decisions. The field does not seem to suffer from lack of innovation as much as from lack of alignment - alignment between biology and workflow, discovery and development, molecular signal and clinical applicability, technical possibility and real-world implementation. 
 
That is perhaps the clearest conclusion from the session. The next phase for the field is not about generating more excitement around AI or more layers of omics for their own sake. It is about building systems that make those capabilities usable: earlier biomarker planning, better data harmonization, stronger validation, clearer regulatory and reimbursement pathways, and outputs that clinicians and developers can act on with confidence. If that happens, AI and multi-omics will not just remain promising technologies - they will become more reliable engines of precision medicine progress.