News & Views — Using Artificial Intelligence in Vaccine Education: The Good, the Bad and the Unexpected

July 25, 2024

Artificial intelligence (AI) is everywhere: driving our cars, writing replies to our emails, and turning on the air conditioners in our homes. In medicine, understanding AI’s role in enhancing health education and patient counseling is crucial. Over recent months, the Vaccine Education Center team has done some informal exploring of how large language model AI can be used to find vaccine information. Large language models, like ChatGPT and Google’s Gemini, are designed to be general-purpose tools, meaning they can handle inquiries about a wide range of topics. AI improves our ability to get information from online sources in two ways. First, these tools understand a wide array of queries regardless of how they are phrased (often referred to as “natural language understanding”). Second, they can synthesize information from several sources to compose a response that resembles a conversation rather than simply a list of sources with pertinent keywords and phrases. For example, you can have an interaction with the computer that feels like a real-life conversation about general topics, like how to plan a birthday party or how to change a tire.
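
For readers curious what this kind of natural-language exchange looks like under the hood, below is a minimal sketch of sending one of our questions to a large language model through OpenAI’s Python library. The model name and setup details are illustrative assumptions on our part; our informal testing was done through the tools’ standard chat interfaces, not through code.

    # Minimal sketch: ask a vaccine question through OpenAI's Python API.
    # Assumes the openai package is installed and the OPENAI_API_KEY
    # environment variable is set; the model name is illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "user",
             "content": "What are good ways to distract a child receiving a vaccine?"},
        ],
    )

    # Print the model's conversational answer
    print(response.choices[0].message.content)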

Given all of the buzz about AI, we wanted to see what kind of responses the tools would generate when asked about vaccines. While our inquiries were informal and limited, we found some potential applications, areas of promise, and areas of concern. In this issue of Vaccine Update, we thought it might be helpful to share our experience exploring this evolving technology.

Using large language model AI: The good

In our experience, AI handled topical overviews well, such as basic vaccine-related questions, providing coherent answers that often included citations. We could also easily click through to the primary references to get more details.

During our exploration, we asked two questions that demonstrated “the good”:

  1. What are good ways to distract a child receiving a vaccine?  — For this question, ChatGPT produced a list of 10 effective techniques, from using squeeze balls to playing music to bringing a favorite toy to the visit. A follow-up question (and which of these work well for infants?) repeated a few of the techniques that were age appropriate and added a few infant-specific options, including breastfeeding, distracting them with rattles, and making gentle sounds. This response did not come with citations, but ChatGPT offered to refer us to the American Academy of Pediatrics website for additional information.
  2. How can I efficiently run a vaccine clinic to get as many people in as possible? — For this question, Gemini produced a four-section document covering planning and preparation, efficient clinic operations, communication and outreach, and data management and reporting. The final paragraph also included additional tips. The response did not include citations, but Gemini could provide the sources for specific bullet points when requested.

In sum, you might think of results from these types of inquiries like typical media articles — they offer a broad-strokes overview of the topic written at an easy-to-understand reading level. But, in this case, the article was written for you at the moment you needed it. Importantly, even with reasonable results and trustworthy overviews, AI is not at a point where you can skip verifying what you read. Consulting primary sources remains essential.

Using large language model AI: The bad

When we asked the tools questions that were highly technical or involved rapidly changing topics, the responses were fraught with inaccuracies. Posing two technical but nuanced questions to AI demonstrated some current limitations of this technology:

  1. Can you provide a catch-up schedule for a 6-year-old child who has only received one dose of pneumococcal vaccine? — On the heels of our recent article about catch-up schedules and combination vaccines, we decided to see whether AI could help devise a catch-up plan. ChatGPT responded that a dose of PCV13 should be administered as soon as possible. This is not correct: children who receive their first dose at 24 months of age or older require only a single dose, and our prompt did not tell the AI when that first dose was administered. Additionally, the response was out of date, as it did not appear to be aware of the existence of PCV15 or PCV20. As we were hoping for a full catch-up schedule, we followed up with a second question (Can you provide the catch-up schedule for all vaccines for this child?). While much of the guidance was correct, some of it was not. For example, the AI recommended both HPV and influenza vaccines as optional for this 6-year-old and did not recommend COVID-19 vaccination at all. While knowledgeable providers may catch these errors, this exercise demonstrated the limitations of AI for information directly related to patient care. This example did offer one potential application, though: We were able to quickly get to the catch-up schedule pages on the CDC’s website.
  2. I accidentally provided a rubella vaccine to a pregnant person. What is the risk of congenital rubella syndrome for the baby? — This question was posed to Gemini. Part of the response was presented in bold: “You should immediately consult with a healthcare provider.” The additional information included a description of the outcomes caused by congenital rubella syndrome (CRS). Unfortunately, it did not provide important details about other steps typically recommended when a vaccine administration error is made, such as reporting to the Vaccine Adverse Event Reporting System (VAERS) and providing guidance to the patient. It also did not provide specific citations to review. Without prior knowledge or another source to consult, we would have had to research this independently to develop a reassuring and accurate message for the patient.

Using large language model AI: The unexpected

AI tools have a consistent style of writing based on how each tool is trained. While they may lack the creativity of a skilled writer, they can provide empathetic responses to healthcare questions. As described by Ayers and colleagues, in contrast to healthcare providers who may be experiencing stress or fatigue from a busy day, AI’s inability to feel these human stressors allows it to craft efficient and empathetic messaging, even about emotionally charged topics. While we should not rely on AI to provide medically accurate information to answer a patient’s question, it can be relied on to offer a framework for the response, as we found in these two examples (a brief programmatic sketch of this drafting approach follows the examples):

  1. Hi, I'm a doctor and just got this message from a patient - how should I respond: ‘I have been trying to get pregnant and just got my COVID vaccine and again, this month, I'm not pregnant. Why did you recommend this to me?’ — Within 10 seconds, ChatGPT generated a response that acknowledged the patient's concerns, provided reassurance based on current scientific understanding, and offered further support. While you will need to ensure the accuracy of any medical information provided, AI demonstrated that it can inform a response framework.
  2. Hi- I'm bringing my daughter in for her next well visit and the HPV vaccine is due. I'm worried it will increase the chance she takes part in some sort of risky sexual activity. Can you tell me about this before I bring her in? — Gemini crafted this response: “It's great that you're thinking about your daughter's health and future. Many parents have this concern, and it's important to address it directly. Studies have consistently shown that the HPV vaccine does not encourage risky sexual behavior. Its primary purpose is to protect against HPV, a common virus that can lead to serious health problems like cancer. By getting vaccinated, we're giving her the best chance at a healthy future.” Gemini then described ways to offer reassurance and maintain an open dialogue, as well as the value of providing accurate information and addressing misconceptions. Again, providers will have to confirm the accuracy of any scientific points or studies mentioned if they are not already familiar with them, but this answer offered a good framework for a response and provided additional context for the healthcare provider to consider.
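
As promised above, here is a minimal sketch, assuming the same OpenAI Python setup as in the earlier example, of asking a model for an empathetic draft rather than for medical facts. The system prompt wording and model name are our own illustrative assumptions, and any draft produced this way still needs a clinician’s review before use.

    # Minimal sketch: request an empathetic response framework, not medical facts.
    # Assumes the openai package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()

    patient_message = (
        "I'm worried the HPV vaccine will encourage risky sexual activity. "
        "Can you tell me about this before the visit?"
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": "Draft an empathetic, plain-language reply for a "
                        "pediatrician to send to a parent. Note any medical "
                        "claims so the clinician can verify them against "
                        "primary sources."},
            {"role": "user", "content": patient_message},
        ],
    )

    # The output is a draft framework; the clinician must verify its content
    print(response.choices[0].message.content)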

Using large language model AI: The future

  1. Technology is moving fast. Constant innovation will change how these tools work, and the responses we received this month may be quite different later in the year. We will need to keep monitoring this field to determine how these tools can help us and where we should continue to exercise caution.
  2. New technology can bring forward applications we haven’t yet considered. For now, our focus may be on a time when AI can competently answer technically challenging questions, but these tools may also change how we practice in ways we cannot yet anticipate. For example, how will they change vaccine development? How will they be used for disease surveillance?

Only time will tell how these powerful tools will change our professional and private lives, but we know they are not going away, so learning about them and monitoring related developments is a good place to start.
