How AI Is Finding New Treatments for Incurable Diseases

For many families, the search for a treatment starts with hope and ends in a quiet, painful sentence from a specialist: “we have no curative treatment.” At the same time, bringing a single new drug to market can cost more than 1 billion dollars and take 10 to 15 years, with most candidates failing in clinical trials, according to analyses summarized by the Tufts Center for the Study of Drug Development and Deloitte. Today, artificial intelligence is starting to bend that curve by helping scientists sift through oceans of data, propose new drug molecules, and uncover uses for existing medicines that humans might overlook. This article explains how that shift works in practice, where it is already delivering early results, and what limitations and risks still stand in the way. If you work in life sciences, health tech, or patient advocacy, this guide will help you understand what is real, what is not, and where to focus next.
Key Takeaways
- AI drug discovery uses machine learning to analyze complex biological, chemical, and clinical data, helping researchers find promising treatment ideas faster and prioritize what to test in the lab.
- Most AI-generated therapies for incurable diseases are still in early research or clinical trials, so they represent hope but not guaranteed cures, and they still pass through the same regulatory gates.
- Real-world examples, from AI-designed drugs for idiopathic pulmonary fibrosis to AI-discovered antibiotics, show that these methods can deliver testable candidates.
- The biggest challenges involve data quality, clinical validation, regulation, ethics, and ensuring that benefits reach patients with rare and underserved conditions.
Why “Incurable” Diseases Need New Ideas Now
When clinicians describe a disease as incurable, they usually mean there is no approved therapy that can reliably eradicate it or fully reverse its course. Treatments, if they exist, may slow progression or ease symptoms, yet patients still face major disability or early death. This is the daily reality for many people living with conditions like Alzheimer’s disease, amyotrophic lateral sclerosis, metastatic pancreatic cancer, and thousands of rare genetic syndromes. For these groups, incremental improvements are valuable, yet what many truly need are new biological ideas, new targets, and new ways to test potential therapies faster.
Rare diseases highlight the scale of this unmet need. The United States National Institutes of Health estimates that there are more than 7,000 rare diseases worldwide, affecting hundreds of millions of people in total. Patient advocacy groups and NIH sources report that only about 5 to 10 percent of these rare conditions currently have an approved treatment. In other words, for the vast majority, families may search for years without finding a medicine specifically designed for their disorder. That gap is not only a scientific problem; it is also a practical and economic one for traditional drug development models.
The standard pharmaceutical research pipeline was not built for huge numbers of small, genetically distinct patient populations. Developing a new drug candidate often requires screening large libraries of molecules, running many rounds of experiments, and conducting expensive trials in carefully selected groups. Failure rates are especially high in complex indications like neurodegeneration and oncology. From an industry perspective, that means high risk and uncertain returns. From a patient perspective, it means waiting as promising ideas stall or die because resources are limited or predictions about what might work are too weak.
This is where AI becomes compelling for decision makers. By learning patterns from vast collections of molecular structures, omics data, medical images, and electronic health records, AI systems can help scientists prioritize hypotheses before they reach the lab. Instead of trying thousands of compounds in expensive experiments, teams can focus on the dozens that models predict are most likely to act on the right target with acceptable safety. In my experience, what many people underestimate is how much of drug discovery involves deciding what not to test, because time and budgets are always finite. AI gives researchers a more informed filter and, when used well, it turns noisy data into a ranked set of opportunities.
AI approaches are also particularly suited to diseases that are multifactorial or poorly understood. Neurodegenerative diseases often involve complex interactions between genetics, protein misfolding, inflammation, vascular changes, and lifestyle factors. Traditional methods struggle to integrate that many variables. Machine learning models, by contrast, can digest thousands of features and identify subtle patterns that may point to new drug targets or patient subgroups more likely to respond. For rare diseases where human experience is limited, AI models trained on broader biological data can sometimes suggest therapeutic strategies that would be hard to derive from small patient cohorts alone.
None of this means that AI will magically “solve” incurable diseases in the near term. New candidates generated by algorithms must still pass through rigorous preclinical testing and multi-phase clinical trials, guided by regulators like the U.S. Food and Drug Administration and the European Medicines Agency. What is changing is the front end of that pipeline and the way ideas are generated. Instead of waiting for a lucky guess or a single lab’s insight, the field is moving toward data-driven, algorithm-assisted hypothesis generation. That shift is already producing concrete candidate drugs and new uses for older medicines that are now entering human studies. Readers who want a deeper introduction to these methods can explore resources on AI in drug discovery for additional context.
What Is AI Drug Discovery and How Does It Work?
What is AI drug discovery? AI drug discovery refers to the use of computer programs that learn from data to help find new medicines and treatment strategies. These systems analyze large sets of biological, chemical, and clinical information to predict which drug targets, molecules, or existing drugs are most likely to work against a disease, so researchers can test the most promising options more efficiently.
AI in this context usually involves machine learning methods, such as deep neural networks, gradient-boosted trees, and probabilistic models, applied to biomedical research questions. These tools are not physical robots running experiments, although they may guide automated lab systems. Instead, they are software models that look for patterns in data that humans might miss or take much longer to find. For example, a model might learn the relationship between protein structures and small molecule binding, or between gene expression signatures and response to a therapy, then use that knowledge to generate new predictions.
A typical AI drug discovery workflow starts with data collection and cleaning. Teams assemble datasets that may include chemical structures, protein sequences, three-dimensional protein models, gene expression profiles, cell imaging readouts, animal study results, and clinical records from electronic health systems. These raw inputs are often noisy and inconsistent, so significant effort goes into standardizing formats, removing errors, and curating high-quality labels. Many modern efforts also use public resources such as the Protein Data Bank, ChEMBL, and NIH-funded genomics repositories as starting points.
Once data are prepared, scientists train machine learning models to perform specific prediction tasks. For example, one model might take a small molecule structure as input and output a probability that it will bind to a given protein target. Another might classify gene expression patterns into disease subtypes. Deep learning architectures like graph neural networks and transformer models have become popular in this space, since they can represent molecules and sequences in flexible ways. Studies published in journals such as Nature Biotechnology and Cell Reports Medicine have described how these architectures improve hit rates compared with older heuristic approaches.
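As a concrete, deliberately tiny illustration of such a prediction task, the sketch below trains a linear classifier on made-up bit-vector "fingerprints" labeled as binders or non-binders. Every structure, bit, and label here is invented for illustration; real systems learn from assay data and use far richer representations such as graph neural networks.

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a toy linear classifier on (fingerprint, label) pairs.
    Fingerprints are fixed-length 0/1 bit vectors; label 1 = binder."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # perceptron update: move weights toward the label
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Invented toy data: "binders" share bits 0 and 1; "non-binders" share bit 2.
train = [([1, 1, 0, 0], 1), ([1, 1, 0, 1], 1),
         ([0, 0, 1, 0], 0), ([0, 0, 1, 1], 0)]
w, b = train_perceptron(train)
print(predict(w, b, [1, 1, 0, 0]))  # the shared-motif fingerprint scores as a binder
```

The point is not the model (a perceptron is far too weak for chemistry) but the shape of the task: structure in, predicted activity out, with the predictions used to decide what to test next.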
The next step is to use these trained models to search through very large chemical spaces or biological networks. Instead of manually drawing analogs of a known drug, generative AI models can propose entirely new molecular structures that satisfy constraints on potency, solubility, and safety. Reinforcement learning methods can optimize these candidates iteratively, where the reward signal comes from predicted activity or other desired properties. In parallel, natural language processing systems mine millions of scientific papers, clinical trial records, and patents to uncover non-obvious links between pathways, phenotypes, and drugs, which is especially useful for repurposing. Readers who want a practical overview of these techniques can review more detail in this guide on how AI is finding new medicines.
Evaluation and validation remain crucial. AI predictions are always hypotheses, not proof. Researchers test high-scoring compounds in cell-based assays and animal models to verify activity. They also use standard benchmarks to assess model performance, including metrics such as area under the receiver operating characteristic curve for classification tasks and mean squared error for regression tasks. Journals like npj Digital Medicine and Science Translational Medicine have begun to emphasize rigorous validation standards, including prospective tests where AI-generated candidates are evaluated on experiments or patient cohorts that were not part of the training data.
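The ROC AUC metric mentioned above has a simple probabilistic reading: it is the chance that a randomly chosen true positive is scored above a randomly chosen negative. A minimal implementation, on invented labels and scores, makes that concrete:

```python
def roc_auc(labels, scores):
    """ROC AUC computed directly from its probabilistic definition:
    the fraction of (positive, negative) pairs where the positive is
    scored higher (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented predictions: one negative (score 0.7) outranks a positive (0.6).
labels = [1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2]
print(roc_auc(labels, scores))  # 8 of 9 positive/negative pairs are ordered correctly
```

An AUC of 1.0 means the model ranks every active compound above every inactive one; 0.5 means the ranking is no better than chance.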
From “No Options” To New Leads, How AI Changes the Discovery Pipeline
The traditional drug discovery pipeline proceeds in stages, from target identification through lead optimization to clinical development. AI touches nearly every step, yet its impact is most visible where intuition and brute force used to dominate. At the earliest stage, identifying which proteins, genes, or signaling pathways truly drive a disease is a major challenge. AI models that analyze genetic association studies, transcriptomics, and proteomics help rank potential targets based on how central they appear in disease networks, as reported in several Nature and Cell studies on multi-omics integration.
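As a toy illustration of network-based target ranking, the sketch below scores proteins by degree centrality in a small interaction graph. The edges use real fibrosis-associated gene symbols purely as examples; actual pipelines apply much richer centrality and causal measures over genome-scale networks.

```python
from collections import defaultdict

def degree_rank(edges):
    """Rank proteins by degree centrality in an interaction network,
    a crude proxy for how 'central' a candidate target appears."""
    deg = defaultdict(int)
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    # Python's sort is stable, so ties keep their first-seen order.
    return sorted(deg, key=deg.get, reverse=True)

# Illustrative disease-module edges (gene symbols used only as examples).
edges = [("TGFB1", "SMAD3"), ("TGFB1", "SMAD2"), ("TGFB1", "CTGF"),
         ("SMAD3", "CTGF"), ("EGFR", "SMAD3")]
print(degree_rank(edges)[:2])  # the two most connected nodes in this toy module
```

In practice, centrality is only one feature among many; genetic evidence, druggability, and tissue expression all feed into the final target ranking.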
Once a target seems promising, medicinal chemists must find or design molecules that modulate it. In the past, high-throughput screening campaigns might test hundreds of thousands of compounds in robotic assays. That approach is expensive, time-consuming, and still misses vast regions of chemical space. AI-assisted virtual screening instead evaluates millions or even billions of candidate molecules in silico, predicting which ones are likely to bind. A study in Nature reported that an AI-assisted approach could cut the time to identify a lead compound from years to less than twelve months while dramatically reducing the number of physical molecules screened, which illustrates the scale of potential efficiency gains.
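Conceptually, virtual screening reduces to scoring a large library with a trained model and forwarding only the top-ranked molecules to the wet lab. The sketch below uses a stand-in scoring function and hypothetical compound IDs; in a real pipeline the scorer would be a trained activity model and the library could hold billions of structures.

```python
def virtual_screen(library, score_fn, k=3):
    """Rank an in-silico library by predicted activity and return the
    top-k candidates to forward for physical synthesis and assay."""
    ranked = sorted(library, key=score_fn, reverse=True)
    return ranked[:k]

# Stand-in scorer: counts how many illustrative "pharmacophore" bits are set.
score = lambda mol: sum(mol["bits"])

# Hypothetical 8-compound library with 3-bit feature vectors.
library = [{"id": f"cmpd-{i}", "bits": [i % 2, (i // 2) % 2, (i // 4) % 2]}
           for i in range(8)]
top = virtual_screen(library, score)
print([m["id"] for m in top])
```

The economics follow directly: scoring is nearly free per molecule, while synthesis and assays cost real money, so even a modestly accurate ranking saves enormous experimental effort.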
Lead optimization is another place where AI changes the equation for difficult diseases. Even when an initial compound shows activity, it may have poor pharmacokinetics, off-target effects, or limited ability to cross the blood-brain barrier. Machine learning models trained on historical medicinal chemistry data can suggest modifications that improve these properties while preserving potency. Companies like Exscientia and Atomwise have reported cases, summarized in peer-reviewed venues and conference proceedings, where AI-guided optimization produced clinical candidates with fewer design cycles than traditional methods would require.
Drug repurposing, sometimes called drug repositioning, is particularly important for incurable or rare conditions. Instead of designing a compound from scratch, researchers ask whether an existing approved drug could be effective for a new indication. AI can compare disease gene expression signatures, protein interaction networks, and clinical outcomes across millions of patient records to spot such opportunities. A common mistake I often see is people assuming repurposing is easy because the drug is already on the market. In practice, matching the right molecule to the right rare disease still requires detailed biological insight, which AI can help supply.
As candidates progress toward clinical trials, AI can also improve operational decisions. Predictive models can help select initial dosing ranges, anticipate drug-drug interactions, and identify potential side-effect profiles based on similarities to other compounds. Clinical trial design is another emerging area. AI tools can simulate different inclusion criteria and endpoint choices using historical patient data to estimate which designs are most likely to detect a true treatment effect. For conditions where recruiting enough patients is hard, such as ultra-rare genetic diseases, this kind of optimization can make or break a study.
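Trial-design simulation of the kind described above can be illustrated with a small Monte Carlo power calculation: simulate many hypothetical two-arm trials under an assumed effect size, and count how often a simple significance test detects the effect. This is a bare-bones sketch with invented parameters, not a substitute for a proper statistical design tool.

```python
import random

def estimated_power(n_per_arm, effect, sd, sims=2000, seed=1):
    """Monte Carlo power estimate for a two-arm trial comparing means
    with a simple two-sided z-test at alpha = 0.05."""
    rng = random.Random(seed)
    z_crit = 1.96
    hits = 0
    for _ in range(sims):
        # Simulate one hypothetical trial: control vs. treatment arm.
        a = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        b = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        diff = sum(b) / n_per_arm - sum(a) / n_per_arm
        se = (2 * sd * sd / n_per_arm) ** 0.5
        if abs(diff / se) > z_crit:
            hits += 1
    return hits / sims

# Invented design: 50 patients per arm, effect of 0.5 standard deviations.
print(estimated_power(n_per_arm=50, effect=0.5, sd=1.0))
```

Running this across a grid of sample sizes and effect assumptions shows quickly which designs are underpowered, which is exactly the kind of question that matters when a rare-disease trial can only recruit a few dozen patients.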
Importantly, AI is starting to play a role in matching individual patients with experimental options. Hospital systems and research networks have developed algorithms that scan electronic health records to identify patients who meet complex eligibility criteria for specific trials. For patients with progressive, currently incurable conditions, this can open access to investigational therapies that they and their clinicians might not otherwise discover. What becomes clear in practice is that AI often acts as a bridge between existing but scattered opportunities and the people who need them most. For oncology specialists, resources on AI that transforms cancer treatment provide additional examples of this shift in practice.
Real Diseases, Real Models, Early Results
To understand how AI is finding new treatments for incurable diseases, it helps to examine concrete cases where models have produced drug candidates or major biological insights. One widely discussed example involves idiopathic pulmonary fibrosis, a chronic and often fatal lung disease characterized by scarring that progressively limits breathing. Approved treatments can slow the decline for some patients but usually do not stop or reverse the condition. Insilico Medicine, an AI-focused drug discovery company, reported that it used generative AI to identify a novel target and design a small molecule candidate called INS018_055 for this disease. According to company reports and coverage in Nature Biotechnology, the program went from target discovery to a preclinical candidate in around eighteen months, a timeline significantly shorter than typical.
The case of INS018_055 illustrates several pieces of the AI workflow. Insilico integrated omics data, text mining of scientific literature, and pathway analysis to propose a previously unexplored target implicated in fibrotic processes. Then, generative chemistry models suggested molecules predicted to bind this target and satisfy medicinal chemistry constraints. These candidates were synthesized and tested in preclinical models, leading to a compound that advanced into Phase I and then Phase II clinical trials in China and the United States, according to clinical trial registries. Early safety findings were encouraging, though efficacy data are still emerging, and regulators will evaluate the evidence carefully before any approval.
Another influential milestone came from protein structure prediction, a foundational problem in biology that affects drug target understanding. DeepMind’s AlphaFold system, described in the journal Nature, used deep learning to predict three-dimensional protein structures from amino acid sequences with accuracy approaching experimental methods for many proteins. Demis Hassabis, the chief executive of DeepMind, has stated in interviews and commentaries that highly accurate protein structure prediction can “fundamentally change how we understand biology” and accelerate drug discovery by revealing binding pockets and conformational states that were previously unknown. Pharmaceutical and academic groups now use AlphaFold-predicted structures routinely to design and dock potential drug molecules.
Antibiotic discovery is a third area where AI has already yielded new chemical entities, addressing a global health crisis recognized by the World Health Organization. In a study published in Cell, researchers from MIT and collaborators trained a deep learning model to predict growth inhibition of Escherichia coli for a large library of molecules. The model then screened a collection of compounds that were structurally distinct from known antibiotics and identified a molecule later named halicin, which showed activity against a range of antibiotic-resistant pathogens in laboratory tests and animal models. Halicin was originally investigated for a different indication, so this work also demonstrated how AI can find unexpected uses for molecules in existing libraries.
Case studies from major medical centers show similar patterns in complex cancers. At the University of Texas MD Anderson Cancer Center, researchers are exploring AI models that analyze genomic data and treatment histories from thousands of patients to suggest drug combinations that may overcome resistance in diseases like acute myeloid leukemia. While many of these efforts are still published as early-stage studies or retrospective analyses in journals such as JAMA Oncology and The Lancet Oncology, they show how AI can propose hypotheses that clinicians then test in carefully designed trials. This is a far cry from AI practicing medicine on its own. It is more like a discovery engine running in the background of expert-led decision making.
These early successes are encouraging, yet they also reveal the limits of current AI. INS018_055 and halicin are still far from being standard treatments, and many AI-generated predictions will fail in later testing. Incurable diseases are difficult not just because they lack drugs but because their biology is intricate and sometimes poorly modeled in animals. AI cannot overcome flawed biological assumptions or inadequate clinical trial designs on its own. Instead, it amplifies the impact of strong science and careful methodology. Recognizing that distinction is essential for setting realistic expectations about timelines and success rates.
Deep Dive, How AI Uses Data To Suggest New Treatments
Behind every AI-generated drug candidate lies a series of technical choices about data sources, model architectures, and validation strategies. Understanding these details, at least in broad strokes, helps explain why the field is progressing and where important uncertainties remain. One central building block is high-quality training data. For molecular prediction tasks, this may include assay results that measure how strongly compounds bind to targets or inhibit enzymes, along with associated chemical structures. Public databases like ChEMBL and PubChem provide millions of such activity records. However, these datasets can be biased toward particular target classes, such as kinases, and may underrepresent the novel pathways that matter in many incurable diseases.
For target discovery and patient stratification, AI systems often ingest multi-omics data collected through NIH-funded initiatives and academic consortia. That includes genomic variants from genome-wide association studies, transcriptomic profiles from RNA sequencing, and proteomic signatures from mass spectrometry. Researchers integrate these datasets with clinical outcomes recorded in electronic health records or disease registries. For example, a study in Nature Genetics might identify risk alleles for a neurodegenerative disease, while a separate dataset links gene expression patterns to disease progression rates. Graph-based machine learning models can combine these pieces into network representations, then identify nodes and pathways that appear central to the disease process.
Model evaluation is crucial, particularly because overfitting is a constant risk in high-dimensional biological data. Standard practice includes splitting data into training, validation, and test sets, performing cross-validation, and reporting metrics such as precision, recall, and calibration plots. Some groups run retrospective simulations where the model must make predictions using only data available up to a certain time, then compare its suggested hits or targets with those that were later confirmed by experiments published in journals like Science Translational Medicine. This kind of temporal validation is more realistic than random splits, since it mimics the process of making predictions about the future, not about data drawn from the same distribution.
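A temporal split is easy to express in code. The sketch below divides invented assay records by measurement date, so a model would be trained only on compounds assayed before the cutoff and judged on those that came later:

```python
from datetime import date

def temporal_split(records, cutoff):
    """Split assay records by date: train on everything measured before
    the cutoff, evaluate on what came after. Unlike a random split, the
    test set cannot leak information from the 'future'."""
    train = [r for r in records if r["measured"] < cutoff]
    test = [r for r in records if r["measured"] >= cutoff]
    return train, test

# Invented assay records with measurement dates.
records = [
    {"compound": "A", "measured": date(2019, 3, 1), "active": 1},
    {"compound": "B", "measured": date(2020, 7, 15), "active": 0},
    {"compound": "C", "measured": date(2022, 1, 5), "active": 1},
]
train, test = temporal_split(records, cutoff=date(2021, 1, 1))
print([r["compound"] for r in train], [r["compound"] for r in test])
```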
Regulators and funding agencies are also pushing for greater transparency and robustness in AI methods. The U.S. Food and Drug Administration has published discussion papers and frameworks, such as “Artificial Intelligence and Machine Learning in Software as a Medical Device” and a draft “CDER’s Framework for the Use of AI in Drug Development,” which emphasize the need for explainability, documentation, and independent validation. While these documents focus partly on clinical decision support tools, their principles apply to AI used earlier in the R&D pipeline as well. In practice, that means companies and academic teams are adopting version control, model documentation templates, and standardized reporting checklists when they submit findings for regulatory dialogue or high-impact publication.
An often-overlooked technical challenge involves data privacy and governance, especially for models trained on patient-level electronic health records. Regulations such as the Health Insurance Portability and Accountability Act in the United States and the General Data Protection Regulation in the European Union set strict rules for how identifiable health information can be used. Many AI in medicine projects therefore rely on de-identified data, federated learning approaches that keep data on local servers, or synthetic data generation to test workflows. Reports from organizations like the World Health Organization and the OECD stress that trustworthy AI in health must be built on secure, well-governed data pipelines that respect patient rights while still enabling research.
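The core idea of federated learning is that model parameters travel while patient records stay put: each site trains locally, and only weights are combined centrally. A minimal sketch of the server-side averaging step, with invented weight vectors standing in for locally trained models:

```python
def federated_average(site_models, site_sizes):
    """Average model weights across sites, weighted by the number of
    patient records each site holds; raw records never leave the site."""
    total = sum(site_sizes)
    n = len(site_models[0])
    return [sum(w[i] * s for w, s in zip(site_models, site_sizes)) / total
            for i in range(n)]

# Hypothetical 2-parameter models trained locally at three hospitals.
sites = [[0.2, 0.8], [0.4, 0.6], [0.3, 0.9]]
sizes = [100, 300, 100]  # patient counts per site
print(federated_average(sites, sizes))
```

Real federated systems add secure aggregation, differential privacy, and many communication rounds; this shows only the weighted-averaging kernel of the protocol.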
Quality control does not stop once a model has been trained and validated for a specific task. When AI tools are deployed inside pharmaceutical R&D pipelines, teams monitor their performance over time as new data accumulate. For example, a model that predicts toxicity based on historical compounds may become less accurate as chemists explore new scaffolds, a phenomenon known as distribution shift. Companies respond by retraining models, updating feature sets, or adding uncertainty estimates to guide human review. In my experience, what many people underestimate is the ongoing engineering effort required to keep these systems reliable in real-world use, where conditions and priorities evolve.
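A crude but useful drift monitor simply compares incoming feature statistics against the training baseline. The sketch below flags a batch whose mean moves more than a chosen number of training standard deviations, using invented values; production systems use richer tests such as population stability indices or Kolmogorov-Smirnov statistics.

```python
def drift_alert(train_values, live_values, threshold=2.0):
    """Flag distribution shift when the live batch mean moves more than
    `threshold` training standard deviations from the training mean."""
    n = len(train_values)
    mean = sum(train_values) / n
    var = sum((v - mean) ** 2 for v in train_values) / n
    sd = var ** 0.5
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - mean) > threshold * sd

# Invented property values: baseline chemistry vs. a new scaffold series.
baseline = [1.0, 1.2, 0.9, 1.1, 1.0]
new_scaffolds = [3.1, 3.4, 2.9]
print(drift_alert(baseline, new_scaffolds))  # large shift triggers the alert
```

When the alert fires, teams typically queue the model for retraining or route its predictions for extra human review rather than trusting them blindly.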
The Hidden Challenges and Risks of Using AI Against Incurable Diseases
While headlines often focus on breakthrough stories, experts who work on AI in drug discovery are just as concerned with what can go wrong. One major challenge is bias in training data. Historical drug discovery has concentrated on certain targets, mechanisms, and patient populations, especially common diseases in high-income countries. AI models trained on such data may inherit these biases, leading them to perform worse for understudied conditions, populations with different genetic backgrounds, or rare diseases where examples are sparse. In a sense, AI may mirror the blind spots of past research unless designers actively correct for them using techniques like rebalancing, transfer learning, and targeted data collection.
Another risk lies in overconfidence and misinterpretation of model outputs. Complex deep learning systems can produce very confident predictions even when they are wrong, particularly when applied far outside their training domain. For incurable diseases, where patients and families are desperate for new options, this raises ethical concerns. Overstating the strength of AI-generated hypotheses could push limited resources toward weak candidates or encourage off-label use of drugs without adequate evidence. The World Health Organization has warned in its guidance on ethics and governance of AI for health that such systems must be used to enhance, not replace, sound clinical judgment and scientific rigor.
Operational integration is also harder than it may look from the outside. Large pharmaceutical companies often have complex legacy systems, diverse data formats, and siloed research groups spread across different countries. Implementing a new AI platform involves connecting to existing compound databases, lab information systems, and project management tools. It also requires training scientists to interpret model outputs and incorporate them into their decision making. A common mistake I often see is assuming that buying an AI tool immediately translates into better drugs. In practice, success depends on change management, cross functional collaboration, and a culture that values both data science and experimental expertise.
Economic incentives and funding structures can shape which incurable diseases see the most AI attention. Commercial AI drug discovery companies need a path to return on investment, so they may focus on diseases with larger markets or clearer regulatory pathways, like oncology or autoimmune disorders. Ultra-rare diseases, or conditions more prevalent in low-income settings, may receive less focus unless public sector funders and philanthropic organizations step in. Reports from the NIH and rare disease advocacy groups emphasize that targeted grants, orphan drug incentives, and shared data resources are crucial to ensure that AI benefits patients beyond the most commercially attractive areas.
There are also important questions about intellectual property and openness. Some AI drug discovery platforms operate as closed systems where models and training data remain proprietary, while others share tools and datasets openly. Open science approaches, such as the public release of AlphaFold protein structure predictions coordinated with the European Bioinformatics Institute, create common resources that researchers worldwide can use. At the same time, intellectual property protections can encourage companies to invest in high risk programs for incurable diseases. Striking a balance between collaboration and competitive advantage is an ongoing policy and industry debate that will influence who can build on AI derived insights.
Finally, regulatory and ethical frameworks are still catching up with the rapid pace of technical change. Organizations like the OECD and UNESCO have published high-level principles for trustworthy AI that call for transparency, fairness, and accountability. Regulatory bodies such as the FDA and EMA are holding public workshops and issuing concept papers to solicit input on how AI should be validated and documented in the context of drug development. This process takes time, yet it is essential for building durable trust. If rushed or poorly governed AI projects lead to high-profile failures, public confidence in even well-designed efforts could suffer, slowing progress for patients who urgently need new ideas.
Contrarian Insights, What Many People Get Wrong About AI and Cures
Popular narratives about AI often swing between two extremes. On one side are headlines proclaiming that AI will soon cure cancer or render entire fields of medical research obsolete. On the other side are skeptical takes that dismiss AI as hype because definitive cures have not yet emerged. Both views miss the more nuanced reality visible to practitioners inside labs and clinics. AI is not a miracle worker, yet it is already changing the probability landscape of discovery projects, shifting the odds that at least some incurable diseases will gain meaningful new treatments in the coming decades.
One oversimplified belief is that AI can find cures simply by analyzing enough data, as if the right pattern is just waiting to be revealed. In practice, many incurable diseases suffer from a lack of high quality mechanistic data rather than an excess. For example, while Alzheimer’s disease has large clinical datasets, the underlying pathophysiology involves interacting processes that remain only partly understood. Machine learning can help organize clues, but it cannot invent causal knowledge without experimental grounding. That is why leading AI and medicine researchers emphasize tight loops between algorithms and wet lab experiments, not purely in silico discovery.
Another misconception is that AI will quickly reduce drug development costs across the board. While there are clear efficiency gains in tasks such as virtual screening and lead optimization, the most expensive parts of drug development often involve large clinical trials, manufacturing scale-up, and regulatory submissions. These stages are constrained by biology, logistics, and safety requirements rather than pure computational throughput. AI can help design better trials and identify responsive subgroups, which may reduce the number of participants needed in some cases. For many incurable diseases, proof of benefit still requires careful long-term studies that cannot be compressed indefinitely.
There is also a tendency to view AI as a single technology rather than a toolbox of methods with different strengths and weaknesses. Models that perform well on image classification tasks may not be ideal for molecule generation. Techniques suited to large, labeled datasets may struggle in the small-sample regimes common in rare disease research. In my experience, one thing that becomes clear in practice is that successful projects combine several types of models and domain expertise. For instance, a rare disease program might use natural language processing to mine case reports, graph neural networks to analyze protein interaction networks, and Bayesian models to handle uncertainty in small patient datasets.
A contrarian yet important point is that some of the biggest long-term contributions of AI to incurable diseases may come from areas that seem indirect. Tools like AlphaFold and related structural prediction systems do not prescribe treatments on their own. Instead, they provide foundational knowledge about protein conformations and interactions that will inform thousands of future experiments across many diseases. Similarly, AI methods for automated image analysis in pathology or radiology generate high-resolution phenotypes that can sharpen disease definitions and outcome measures. That, in turn, makes it easier to detect meaningful treatment effects in trials.
Finally, many discussions overlook the human capital implications of AI in drug discovery. Far from replacing researchers, these tools are creating demand for new hybrid roles, such as physician data scientists, computational biologists fluent in modern machine learning, and chemists comfortable with algorithm-informed design. For students and early-career professionals thinking about how to contribute to solving incurable diseases, building literacy in both biology and AI can be a powerful career strategy. Organizations like the Broad Institute, Stanford, and MIT are investing heavily in training programs at this intersection, recognizing that future breakthroughs will require people who can bridge disciplines as much as clever algorithms.
Case Studies, How Organizations Are Applying AI To Hard Diseases
Several organizations provide concrete, real-world case studies of AI applied to diseases long viewed as incurable or nearly so. One often-cited example is the antifibrotic program at Insilico Medicine, which we touched on earlier. The company used its AI platforms, including target discovery and generative chemistry tools, to identify a novel target implicated in idiopathic pulmonary fibrosis and design the small molecule INS018_055. According to company reports corroborated by clinical trial registrations and independent coverage in Nature Biotechnology, the project progressed from initial target hypothesis to clinical-stage candidate in under three years. That is not proof of efficacy, yet it demonstrates that AI can compress early discovery timelines and generate assets credible enough to enter human testing under regulatory oversight.
A second case comes from Recursion Pharmaceuticals, a U.S. based company that describes itself as industrializing drug discovery with an AI first platform. Recursion uses automated microscopy to capture high dimensional images of cells treated with thousands of perturbations, including genetic changes and small molecules. Machine learning models then embed these images into a quantitative space where similar cellular responses cluster together. By comparing disease phenotypes to compound induced phenotypes, Recursion identifies repurposing candidates and novel pathways. Some of its programs, including candidates for cerebral cavernous malformation and neurofibromatosis type 2, have advanced into clinical trials, as noted in company filings and reports in Science Translational Medicine discussing high content phenotypic screening.
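The core phenotypic comparison idea can be pictured with a toy sketch. Below, each "cell image" is reduced to a hand made feature vector, and hypothetical compounds are ranked by how strongly their induced phenotype opposes a disease phenotype. All numbers and names (compound_A, compound_B) are invented for illustration; real platforms derive these embeddings from deep networks trained on millions of microscopy images.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: each vector summarizes one cellular phenotype.
# In practice these come from a trained neural network, not hand-made numbers.
disease_phenotype = [0.9, 0.1, 0.4]
compound_phenotypes = {
    "compound_A": [0.1, 0.9, 0.2],        # unrelated cellular response
    "compound_B": [-0.85, -0.12, -0.41],  # roughly opposite response
}

# A compound whose induced phenotype points *away* from the disease
# phenotype is a candidate for reversing it, so rank by lowest similarity.
ranked = sorted(
    compound_phenotypes.items(),
    key=lambda item: cosine_similarity(disease_phenotype, item[1]),
)
print(ranked[0][0])  # compound_B is the most "phenotype reversing" candidate
```

The real systems differ in scale and sophistication, but the design choice is the same: once images live in a shared vector space, drug repurposing becomes a geometry problem.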
A third case study involves BenevolentAI, a company that combines natural language processing, knowledge graphs, and machine learning for target discovery and drug repurposing. During the COVID 19 pandemic, researchers from BenevolentAI used their platform to scan biomedical literature and molecular data, identifying baricitinib, a JAK inhibitor already approved for rheumatoid arthritis, as a potential treatment for hospitalized COVID 19 patients. Subsequent randomized controlled trials, supported by the National Institutes of Health and reported in The New England Journal of Medicine, showed that baricitinib improved outcomes in certain patient groups when added to standard care, leading to emergency use authorization and later approval by regulators. This example, while focused on an infectious disease, illustrates how AI repurposing can move from hypothesis to clinical impact when strong trials are conducted.
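The literature mining step behind such repurposing work can be pictured as a path search over a knowledge graph. The sketch below encodes a handful of illustrative edges and finds a mechanistic chain from drug to disease with breadth first search; apart from the published baricitinib to JAK association, the entities and links are simplified stand ins, not curated data.

```python
from collections import deque

# Toy knowledge graph linking a drug, its targets, and a disease.
# Only the baricitinib-JAK edge reflects published biology; the rest
# are simplified placeholders for illustration.
graph = {
    "baricitinib": ["JAK1", "JAK2"],
    "JAK1": ["cytokine_signalling"],
    "JAK2": ["cytokine_signalling"],
    "cytokine_signalling": ["hyperinflammation"],
    "hyperinflammation": ["severe_COVID-19"],
}

def find_path(start, goal):
    """Breadth-first search for a mechanistic chain from drug to disease."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no mechanistic chain found

print(find_path("baricitinib", "severe_COVID-19"))
```

Production systems add edge confidence scores, literature provenance, and learned ranking on top of this basic traversal, but the output is the same kind of artifact: a human checkable hypothesis chain.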
Large pharmaceutical companies are also integrating AI driven approaches into their pipelines for complex indications. AstraZeneca, for instance, has collaborated with BenevolentAI on chronic kidney disease and idiopathic pulmonary fibrosis programs, with selected targets disclosed in scientific publications and conference presentations. Pfizer, Novartis, Roche, and others have partnered with AI companies like Exscientia and Atomwise for oncology and immunology projects. While many details remain proprietary, public statements and peer reviewed co authored papers show that AI is influencing decisions about which targets to pursue and how to design compounds. This suggests that AI is moving from isolated experiments to a standard component of industrial R&D for hard diseases.
Academic medical centers are not standing still. For example, the Mayo Clinic and other major institutions participate in multi center consortia that apply AI to multi omics datasets in neurodegenerative diseases. Some of these efforts, reported in journals like Lancet Neurology and NPJ Digital Medicine, aim to identify biomarkers that could serve as early surrogate endpoints in trials, helping to shorten development times even when disease progression is slow. Others look at patient stratification, attempting to define subtypes of Alzheimer’s or Parkinson’s that may respond differently to specific mechanisms. While these projects are still mostly upstream from actual therapies, they create a more precise map on which AI guided drug discovery efforts can operate.
What these diverse case studies share is a pattern where AI serves as a catalyst, not a substitute, for scientific and clinical expertise. They also show that success requires more than clever algorithms. Companies and institutions that report progress tend to invest heavily in data generation, laboratory automation, and close collaboration between data scientists, biologists, and clinicians. In each case, regulators, funders, and peer reviewed journals provide external checks on claims, ensuring that AI generated leads are tested with the same rigor as any other candidate. For patients living with incurable diseases, this emerging ecosystem offers cautious optimism that more investigational options will reach trials in the years ahead.
FAQ, Common Questions About AI and Incurable Diseases
How is AI actually used to find new treatments for incurable diseases?
AI systems analyze large datasets of molecular structures, biological measurements, and clinical outcomes to identify patterns linked to disease mechanisms or drug responses. For example, models may predict which proteins are central drivers of disease or which small molecules are likely to bind a target. Generative AI tools can then design new compounds that satisfy potency and safety constraints. Natural language processing systems mine scientific literature and clinical trial registries to uncover non obvious connections between drugs and diseases. Researchers test the most promising AI generated hypotheses in laboratories and clinical studies to see whether they translate into real treatments.
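The simplest version of the pattern matching described above can be sketched in a few lines: score a new molecule against known actives and inactives using fingerprint similarity, then borrow the label of the nearest neighbor. The fingerprints here are invented bit sets, not real chemistry; production pipelines compute them from molecular structures with cheminformatics toolkits such as RDKit and use far richer models.

```python
def tanimoto(fp1, fp2):
    """Tanimoto similarity between two fingerprint bit sets."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

# Hypothetical binary fingerprints (sets of "on" bit positions) for
# compounds with known activity against some target.
training = [
    ({1, 4, 7, 9}, "active"),
    ({1, 4, 8}, "active"),
    ({2, 3, 5, 6}, "inactive"),
]

def predict(query_fp):
    """1-nearest-neighbour label: the simplest similarity-based model."""
    best_label, best_sim = None, -1.0
    for fp, label in training:
        sim = tanimoto(query_fp, fp)
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

print(predict({1, 4, 7}))  # most similar to the first known active
```

Deep learning models replace the hand built similarity with learned representations, but the underlying logic, structurally similar molecules tend to behave similarly, is the same.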
Can AI really cure diseases like Alzheimer’s or ALS?
It is too early to say that AI will cure complex neurodegenerative diseases such as Alzheimer’s or amyotrophic lateral sclerosis. These conditions involve intricate and only partly understood biology, and many past drug candidates have failed in late stage clinical trials. AI can help by integrating genetic, imaging, and clinical data to propose new mechanisms and patient subtypes. It can also accelerate the discovery and optimization of molecules that target these pathways. Any potential cure or strong disease modifying therapy, however, will still require years of careful testing in humans and must meet strict regulatory standards before approval.
What kinds of AI techniques are most important in drug discovery?
Several types of machine learning play key roles in AI drug discovery. Deep learning, including convolutional and transformer based networks, is widely used for tasks such as molecule property prediction, protein structure modeling, and image based phenotyping. Graph neural networks handle data that naturally form networks, like protein interaction maps or molecular graphs. Natural language processing techniques, including large language models, help extract knowledge from unstructured text such as papers and patents. Reinforcement learning methods are applied to optimize molecules or experimental strategies iteratively. The choice of technique depends on the problem, data type, and available computational resources.
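To make the graph neural network idea concrete, the sketch below runs one message passing step over a toy three atom "molecule": each atom updates its feature vector by averaging over itself and its bonded neighbors. Real GNNs use learned weight matrices and nonlinearities; this plain average, with invented feature numbers, shows only the data flow.

```python
# Per-atom feature vectors for a toy 3-atom molecule (hypothetical numbers).
features = {
    "C1": [1.0, 0.0],
    "C2": [0.0, 1.0],
    "O":  [1.0, 1.0],
}
# Bond structure: C1-C2 and C2-O.
bonds = {"C1": ["C2"], "C2": ["C1", "O"], "O": ["C2"]}

def message_pass(features, bonds):
    """One round: each atom averages its own and its neighbours' features."""
    updated = {}
    for atom, feat in features.items():
        neighbourhood = [feat] + [features[n] for n in bonds[atom]]
        updated[atom] = [
            sum(vec[i] for vec in neighbourhood) / len(neighbourhood)
            for i in range(len(feat))
        ]
    return updated

print(message_pass(features, bonds)["C2"])  # C2 now mixes in C1 and O
```

Stacking several such rounds lets information flow across the whole molecular graph, which is why these models capture chemistry that fixed length fingerprints miss.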
How many AI designed drugs are in clinical trials today?
Industry analysts and news reports indicate that by 2023, dozens of drug candidates generated or heavily influenced by AI had entered clinical trials. These include small molecules for idiopathic pulmonary fibrosis, various cancers, inflammatory diseases, and other conditions. Some are entirely novel compounds designed using generative chemistry, while others are repurposing candidates identified through AI analysis of existing drugs. Exact numbers change rapidly as new programs launch and others fail or progress, and there is no single centralized registry for “AI designed” drugs. Peer reviewed articles and conference presentations offer independent confirmation for a subset of these candidates.
Is AI drug discovery safer or riskier than traditional approaches?
AI drug discovery is not inherently safer or riskier than traditional methods, because all candidates must still pass through the same safety and efficacy evaluations. One potential safety benefit is that AI can help identify toxicity risks earlier by learning patterns from large toxicity datasets. It may also suggest compounds with cleaner off target profiles. If models are biased or miscalibrated, they might overlook certain risks or overestimate potential benefits. This is why regulators and experts stress the need for transparent validation, independent replication, and strong preclinical and clinical testing, regardless of whether AI is involved in the early design.
How do regulators like the FDA view AI in drug development?
Regulators such as the U.S. Food and Drug Administration see AI as a promising tool that can improve drug development efficiency, and they also emphasize the need for transparency and rigorous validation. FDA documents and public workshops note that AI models should be well documented, with clear descriptions of data sources, training procedures, and performance metrics. Regulators expect sponsors to show that AI generated insights are supported by empirical evidence and that models behave reliably across relevant populations. AI does not change the basic requirement that new drugs must demonstrate safety and efficacy in controlled trials before approval.
Can AI help patients find clinical trials for incurable diseases?
Yes, AI tools are increasingly used to match patients with suitable clinical trials, especially for complex or rare conditions where eligibility criteria are detailed. Hospital systems and technology companies develop algorithms that scan electronic health records to identify patients whose diagnoses, lab values, and treatment histories match specific trial protocols. These tools can alert clinicians or research coordinators about potential matches. For patients, this can mean more opportunities to enroll in investigational therapy studies they might not have discovered on their own. Data privacy safeguards and clinician oversight remain key components of responsible deployment.
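The structured part of trial matching can be sketched as a rule check over patient records. Every field name, threshold, and the trial itself below are invented for illustration; production systems also parse free text clinical notes with NLP and, as noted, operate under privacy safeguards and clinician review.

```python
# Hypothetical eligibility criteria for an illustrative trial.
trial = {
    "diagnosis": "idiopathic pulmonary fibrosis",
    "min_age": 40,
    "max_age": 80,
    "excluded_drugs": {"nintedanib"},
}

# Mock patient records (no real data).
patients = [
    {"id": "P1", "age": 62, "diagnosis": "idiopathic pulmonary fibrosis",
     "current_drugs": {"pirfenidone"}},
    {"id": "P2", "age": 35, "diagnosis": "idiopathic pulmonary fibrosis",
     "current_drugs": set()},
]

def eligible(patient, trial):
    """Apply each structured eligibility rule in turn."""
    return (
        patient["diagnosis"] == trial["diagnosis"]
        and trial["min_age"] <= patient["age"] <= trial["max_age"]
        and not (patient["current_drugs"] & trial["excluded_drugs"])
    )

matches = [p["id"] for p in patients if eligible(p, trial)]
print(matches)  # P2 fails the minimum-age rule
```

Even this simple filter shows why machine readable criteria matter: the hard engineering work in real deployments is turning messy records and prose protocols into fields a rule or model can evaluate.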
What are the biggest barriers to wider AI adoption in drug discovery?
Major barriers include data quality, organizational culture, and integration challenges. Many companies and academic groups have fragmented, inconsistent datasets that require extensive cleaning before AI can be applied effectively. There can also be skepticism or resistance among scientists who are unfamiliar with machine learning methods or worry about over reliance on models. Implementing AI platforms often involves significant engineering work to connect with existing infrastructure and workflows. Cost, regulatory uncertainty, and competition for skilled data scientists also shape adoption. Overcoming these barriers typically requires leadership support, cross training, and clear examples where AI has added real value.
How does AI help with rare diseases that have very few patients?
Rare diseases pose a challenge because traditional machine learning works best with large datasets. To address this, researchers use strategies such as transfer learning, where models trained on broader biological data are adapted to specific rare conditions. They also leverage multi omics data and knowledge from related diseases to build mechanistic hypotheses. Natural language processing can extract information from scattered case reports and small studies. AI can help identify candidate drug targets or repurposed medicines even when patient numbers are low, though clinical validation still requires careful trial design and often international collaboration to recruit enough participants.
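The transfer learning strategy mentioned above can be sketched with a deliberately tiny model: pretrain a one feature logistic regression on plentiful synthetic "broad" data, then fine tune it on a handful of "rare disease" samples, starting from the pretrained weights rather than from scratch. The datasets are synthetic numbers standing in for real cohorts, and real work uses deep networks, but the two step pattern is the essence of the technique.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, w=0.0, b=0.0, lr=0.5, epochs=200):
    """Logistic regression by gradient descent; (w, b) may be pre-set."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

# Step 1: "pretrain" on plentiful data from a broad, related task
# (synthetic stand-in for, say, a common-disease cohort).
broad = [(x / 10, 1 if x > 5 else 0) for x in range(-10, 11)]
w, b = train(broad)

# Step 2: fine-tune on a tiny rare-disease dataset, reusing the
# pretrained weights as the starting point -- the core of transfer learning.
rare = [(0.9, 1), (0.8, 1), (-0.7, 0)]
w, b = train(rare, w=w, b=b, epochs=50)

print(sigmoid(w * 0.85 + b) > 0.5)  # a new rare-disease sample scores positive
```

The design point is that step 2 needs only a few examples because step 1 already placed the model in a sensible region of parameter space, which is exactly the leverage rare disease research needs.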
What ethical issues arise when using AI to find treatments for incurable diseases?
Ethical issues include data privacy, informed consent, fairness, and realistic communication of benefits and risks. Patients whose electronic health records or genomic data are used for research must have their information protected and, where appropriate, must give informed consent. There is a risk that AI models trained on unrepresentative data could disadvantage certain populations or miss disease patterns specific to under studied groups. Overhyping AI could create false hope or pressure patients into trials with uncertain prospects. Organizations like the World Health Organization and OECD have published guidelines that emphasize transparency, accountability, and inclusion when developing and deploying AI in health.
How quickly might AI change treatment options for currently incurable diseases?
AI is already influencing early stage discovery pipelines, yet the translation into approved treatments takes time. Even with accelerated discovery, a promising candidate may still require five to ten years of clinical development and regulatory review, especially for chronic or slowly progressive diseases. Some faster repurposing wins may appear sooner, particularly when an existing approved drug is identified as helpful for a new indication and trials can be conducted quickly. Structural insights from AI tools like AlphaFold are expected to have long term effects by informing many future programs. Overall, AI is best seen as a force that can steadily increase the flow and quality of candidates rather than a source of instant cures.
What skills do researchers and students need to work at the intersection of AI and incurable diseases?
Researchers and students aiming to contribute in this area benefit from a mix of domain and technical skills. A strong foundation in biology, pharmacology, or medicine helps them formulate meaningful questions and interpret results in context. Knowledge of statistics, machine learning, and programming languages such as Python enables them to build or evaluate models. Experience with tools like TensorFlow or PyTorch, as well as familiarity with bioinformatics databases, is often useful. Communication and collaboration skills matter too, since projects usually involve cross disciplinary teams. Many institutions now offer specialized programs in computational biology, biomedical data science, or AI in medicine to support this training.
Will AI replace human scientists and clinicians in drug discovery and treatment decisions?
Current evidence and expert opinion suggest that AI will augment rather than replace human scientists and clinicians. AI systems excel at pattern recognition and can process vast amounts of data quickly, yet they lack the broader judgment, ethical reasoning, and creativity that humans bring. In drug discovery, models generate hypotheses and prioritize experiments, but researchers still design studies, interpret ambiguous results, and adjust strategies. In clinical care, AI tools can assist with diagnosis and treatment selection, but clinicians remain responsible for integrating patient preferences, comorbidities, and social factors. Regulatory and professional bodies stress that AI should support, not substitute for, human expertise.
Conclusion
Artificial intelligence is not a magic wand that makes incurable diseases vanish, yet it is changing how the scientific and medical communities search for new treatments. By learning from vast arrays of molecular, biological, and clinical data, AI systems help researchers generate and prioritize ideas that might have been overlooked or taken many more years to find. Early case studies, from AI designed molecules like INS018_055 for idiopathic pulmonary fibrosis to AI identified repurposing candidates such as baricitinib for COVID 19, show that these methods can produce testable candidates that reach clinical trials under regulatory oversight.
The practical takeaway for patients, families, and professionals is to blend hope with realism. AI is expanding the toolkit available to those working on some of the hardest problems in medicine, yet every candidate must still run the gauntlet of experimental validation and rigorous trials. Supporting high quality data initiatives, ethical and inclusive research, and cross disciplinary training will help ensure that AI’s growing power is directed where it is most needed. Over the next decade, the success of AI in finding treatments for today’s incurable diseases will depend as much on thoughtful governance and human collaboration as on technical innovation. Readers who want to see how similar approaches are reshaping fields like autoimmune disease research and treatment response prediction can explore work on AI and autoimmune diseases or examples where AI predicts drug response, then consider how these strategies could be adapted inside their own organizations.
References
AlphaFold protein structure prediction work and its implications for biology and drug discovery are described in: Jumper et al., “Highly accurate protein structure prediction with AlphaFold,” Nature, 2021. Available at: https://www.nature.com/articles/s41586-021-03819-2
Global statistics on rare diseases and treatment gaps can be found through the NIH Genetic and Rare Diseases Information Center and in summaries from patient organizations such as Global Genes.
Estimates of drug development costs and timelines are discussed by the Tufts Center for the Study of Drug Development and in Deloitte’s pharma R&D benchmarking reports, for example: Deloitte Centre for Health Solutions, “Ten years on, measuring the return from pharmaceutical innovation,” 2019, available at: https://www2.deloitte.com
The discovery of the antibiotic halicin using deep learning is reported in: Stokes et al., “A Deep Learning Approach to Antibiotic Discovery,” Cell, 2020. Available at: https://www.cell.com/cell/fulltext/S0092-8674(20)30102-1
Information on Insilico Medicine’s INS018_055 program and timelines can be found in company announcements and coverage such as: Mullard, “AI powered drug discovery captures pharma’s imagination,” Nature Reviews Drug Discovery, 2023. Available at: https://www.nature.com/articles/d41573-023-00026-y
The baricitinib COVID 19 repurposing case is described in: Kalil et al., “Baricitinib plus Remdesivir for Hospitalized Adults with Covid-19,” The New England Journal of Medicine, 2021. Available at: https://www.nejm.org/doi/full/10.1056/NEJMoa2031994
Guidance on ethics and governance of AI in health from the World Health Organization is available in: WHO, “Ethics and governance of artificial intelligence for health,” 2021.



