Tom Lawry • January 9, 2024

Generative AI and Precision Medicine – The Future is Not What It Used to Be

“When we look back in (the year) 2041, we will likely see healthcare as the industry most transformed by AI.”

- Kai-Fu Lee, AI 2041


Generative AI is a new and rapidly emerging form of artificial intelligence that has the potential to revolutionize precision medicine by improving diagnosis, treatment, and drug discovery. It comprises Large Language Models and other intelligent systems that replicate a human’s ability to create text, images, music, video, computer code, and more.


So, naturally, when Damian Doherty, Editor-in-Chief of Inside Precision Medicine, approached me last fall about writing an article on Generative AI, the first thing I did was ask the latest version of ChatGPT to provide a 2,800-word manuscript on the opportunities and issues of its application to precision medicine.


The content it generated was relevant, logically organized, and backed up with factual information. Sentence structures were precise and delivered in an easy-to-understand format. There was a formulaic beginning, middle, and end, along with the appropriate provisos about the possibility of it being wrong.

The result was quite good, but in the end, it was a little too GPT-ish. There were many things my human brain wanted to know that it didn’t cover or guide me towards.


In some ways, this exercise mirrors the deeper discussions and explorations that are just getting underway to both understand our new and evolving AI capabilities and define a logical pathway to help clinicians and researchers make the practice of medicine more precise.

I’ve had the benefit of working with the application of AI in health and medicine for over a decade. Here are my very human thoughts on what should be considered as we approach this opportunity.


An AI Taxonomy


Generative AI is a relatively new form of AI that has been released into the wild. As such, there are very few experts. This means that we are all early in the journey of understanding what it is and how we apply it to do good.

The chart below provides a simple taxonomy to help differentiate generative AI from other forms of Predictive Analytics.

While there is a great deal of hype over generative AI, there is a growing body of evidence on the things it can do well with humans in the loop:[i]

•       Write clinical notes in standard formats such as SOAP (subjective, objective, assessment, and plan)

•       Assign medical codes such as CPT and ICD-10

•       Generate plausible and evidence-based hypotheses

•       Interpret complex laboratory results

Going forward, generative AI will provide benefits in many areas, including:


Drug Discovery and Development: Assistance in discovering and developing new drugs by predicting molecular structures, simulating drug interactions, and identifying potential drug candidates more quickly and accurately. AI can also identify existing drugs that could be repurposed for new therapeutic uses, potentially speeding up the drug development process and reducing costs.


Personalized Treatment Plans: Analyze large-scale patient data, including genetic information, medical records, and imaging data, to guide physicians in the creation of personalized treatment plans tailored to an individual's unique genetic makeup and health profile.


Disease Diagnosis: Assistance in the early and accurate diagnosis of diseases by analyzing medical images, genomic data, and clinical records, helping healthcare professionals make more informed decisions.

 

Medicine has Been Here Before – Change is Hard


Since medicine came out of the shadows and into the light as a data-driven, scientific discipline, we’ve always aspired to be better. The reality is that change is hard. It requires us to think and act differently.


When cholera was raging through London in the 1850s, Dr. John Snow was initially rebuffed when he challenged the medical establishment by gathering and presenting data demonstrating that the root cause of cholera was polluted water rather than bad air, as the prevailing view held. From this came the early stages of epidemiology.[ii]

 

In the 1970s, the introduction of endoscopy into surgical practice was met with resistance from the surgical community, which saw little use for “key-hole” surgery; the prevailing view and practice was that large problems required large incisions. Today, the laparoscopic revolution is seen as one of the biggest breakthroughs in contemporary medical history.[iii]

 

Generative AI and Large Language Models are part of medicine’s next frontier. They are already challenging current practices across the spectrum of research, clinical trials, medical and nursing school curricula, and the front-line practice of medicine. It’s not a matter of whether it will affect what you do but rather how and when.

 

With the right dialogue and guidance from a diverse set of stakeholders, we will create a path forward that leverages the benefits of our evolving creations to improve health and medical practices while ensuring that appropriate guardrails are put in place to monitor and guide its use.

 

It’s Not About Going Slow. It’s About Getting Things Right


In some ways, the challenge of generative AI today is less about increased AI capabilities and more about the velocity of change it is driving.

Generative AI came screaming into mainstream consciousness in the fall of 2022. ChatGPT, a generative AI product from OpenAI, racked up 100 million users in two months, the fastest adoption of any consumer application in history at that time. Shortly after ChatGPT reached this milestone, the next version of GPT was released with greatly increased capabilities.


From the practice of medicine to the development of new drugs, generative AI’s “speed of progress” is not following the normal path that economists refer to as linear growth. In linear growth, something new is created that adds incremental value, which creates a small gap between the time of its creation and when it starts being used. As adoption occurs, there is another small gap between uptake and the time it takes for policymakers to develop the necessary guardrails to both guide its use and safeguard users from risks. Linear growth is steady and predictable, and it is what clinical and operational systems are set up to manage.


Generative AI is upending that pattern. It is taking a different trajectory that economists call exponential growth, in which something increases faster as it gets bigger. Most of our systems are not designed to accommodate this dramatic escalation in change. Exponential growth doesn’t last, and eventually the pace of change returns to linear growth. But while it is happening, it feels like the world is inside a tornado.
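For readers who want the distinction in more formal terms, here is a minimal sketch. It is an illustration only, using generic growth formulas rather than anything drawn from the sources cited here: linear growth adds a fixed amount per unit of time, while exponential growth adds an amount proportional to the current size.

% Illustrative sketch only: linear vs. exponential growth of a quantity x over time t
% Linear growth adds a constant increment r each period:
\[ x_{\mathrm{linear}}(t) = x_0 + r\,t \]
% Exponential growth adds an amount proportional to the current size,
% which is why the quantity "increases faster as it gets bigger":
\[ \frac{dx}{dt} = k\,x \quad\Longrightarrow\quad x_{\mathrm{exp}}(t) = x_0\,e^{kt} \]

However large the fixed step r may be, the compounding term eventually overtakes it, which is why planning and oversight processes tuned for steady, linear change struggle to keep pace.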

The European Parliament approved landmark rules for artificial intelligence, known as the EU AI Act, which aims to bring generative AI tools under greater restrictions. This includes requiring generative AI developers to submit these systems for review before releasing them commercially.[iv] Here in the United States, the Biden administration issued an Executive Order last fall to build momentum within federal agencies and the private sector to put better guardrails in place for the use of AI.


The rapid change driven by generative AI has some calling for measures to slow or even suspend AI development in order to evaluate its impact on humans and society. A petition put forward by the Future of Life Institute and signed by leaders including Elon Musk called for a six-month moratorium on AI development.[v]

 

While there is uncertainty in what we are creating and how it should be applied, it is unlikely that any mandates will slow the pace of AI innovation.


Instead of attempting to slow progress, let us expedite the education and dialogue among policymakers, medical and research leaders, and frontline practitioners to chart a course for progress in applying our new intelligent capabilities. These groups are also most relevant to ensuring that a necessary set of laws, regulations, and protocols are in place to safeguard those both providing and receiving health and medical services.


The Creation of Enforceable Responsible AI Principles


Let’s recognize and support the overall good that can come from AI innovation. At the same time, we must be mindful of how our ever-expanding AI capabilities can replicate and even amplify human biases and risks that work against the goal of improving the health and well-being of all citizens.


Prioritizing fairness and inclusion in AI systems is a socio-technical challenge. The speed of progress is spawning a new set of issues for governments and regulators. It’s also challenging us with new ethical considerations in the fields of medical and computer science. Ultimately, the question is not only what AI can do, but what AI should do.


While legislators and regulators work on finding common ground, health and medical organizations using AI today should have a defined set of Responsible AI principles in place to guide the development and use of intelligent solutions. Most often, these principles or guidelines are reviewed and approved at the highest level of leadership and incorporated into an organization’s overall approach to Data Governance.


AI in Medicine is Not About Technology. It’s About Empowerment


AI has a PR problem. The narrative in the popular press and professional journals is often negative. Headlines like “Half of U.S. Jobs Could Be Eliminated With AI” paint a picture of a future work world dominated by what novelist Arthur C. Clarke called robo-sapiens.[vi] [vii]


It’s no wonder that people are worried. According to a study by the American Psychological Association, the potential impacts that AI could have on the workplace and jobs are now among the top issues affecting the mental health of workers.[viii]


Generative AI is already reshaping today’s workplace and will be the single greatest change affecting the Future of Work in the next decade. It will change how all work is done. As you let that statement sink in, recognize that the issues to be addressed go beyond productivity. After all, work brings shape and meaning to our lives and is not just about a job or income.


In this regard, there is growing evidence to suggest that AI can increase not only productivity but also job satisfaction.


In a randomized trial using generative AI, 453 college-educated professionals were given a series of writing tasks to complete. Half were given support with ChatGPT; the control group was not given access to ChatGPT. The results showed that the time taken to complete tasks was reduced by 40% among those using this form of generative AI. Beyond increased productivity, those using ChatGPT reported an increase in job satisfaction and a greater sense of optimism. Most importantly, inequality between workers decreased.[ix]


Done right, AI is not about technology. It’s about empowerment. Properly curated, generative AI will help solve one of the most significant challenges facing healthcare: the shortage of human capital.


The effective introduction and use of generative AI in health and medicine enables both cost-cutting automation of routine work and value-adding augmentation of human capabilities. As it and other forms of AI become pervasive in health and medicine, a new intelligent health system will emerge. It will facilitate systems that improve health while delivering greater value. It will provide a more personalized experience for consumers and patients. It will liberate clinicians and restore them to being the caregivers they want to be, rather than the data entry clerks we’re turning them into by forcing them to use systems and processes conceived decades ago.


And while generative AI is coming at us fast, with much still to understand about how we use it, it could not have come at a better time.


The full article written for Inside Precision Medicine may be found at https://www.insideprecisionmedicine.com/topics/precision-medicine/generative-ai-and-precision-medicine-the-future-is-not-what-it-used-to-be/



References Used in This Blog:


[i]   Peter Lee, Carey Goldberg, Isaac Kohane, The AI Revolution in Medicine: GPT-4 and Beyond, Pearson Education, 2023

[ii]  Theodore H. Tulchinsky, MD MPH, John Snow, Cholera, the Broad Street Pump; Waterborne Diseases Then and Now, Case Studies in Public Health, March 30, 2018

[iii] Litynski GS, Endoscopic surgery: the history, the pioneers, World J Surg, August 1999;23(8):745-53, doi: 10.1007/s002689900576, PMID: 10415199

[iv] Ryan Browne, EU lawmakers pass landmark artificial intelligence regulation, CNBC, June 14, 2023, https://www.cnbc.com/2023/06/14/eu-lawmakers-pass-landmark-artificial-intelligence-regulation.html

[v] Pause Giant AI Experiments: An Open Letter, Future of Life Institute, March 22, 2023, https://futureoflife.org/open-letter/pause-giant-ai-experiments/

[vi] http://business.rchp.com/home-2/half-of-all-jobs-eliminated/

[vii] Arthur C. Clarke, Britannica, https://www.britannica.com/biography/Arthur-C-Clarke

[viii] Worries about artificial intelligence, surveillance at work may be connected to poor mental health, American Psychological Association, September 7, 2023, https://www.apa.org/pubs/reports/work-in-america/2023-work-america-ai-monitoring

[ix] Shakked Noy, Whitney Zhang, Experimental evidence on the productivity effects of generative artificial intelligence, Science, July 13, 2023, https://www.science.org/doi/10.1126/science.adh2586

