Petra K. Harms
DVM
Dr. Petra K. Harms is the founder and CEO of VetMaite.com, which helps veterinarians navigate technological change and integrate artificial intelligence into their practices. A practicing emergency and primary care veterinarian since 2009, she holds certificates of specialization from Harvard and Stanford on AI in health care.
Throw a stone in any direction these days and you’re nearly guaranteed to strike an AI-based business venture. LinkedIn is frothing with weekly announcements about ground-breaking applications of machine learning. Doctor Google has been sidelined in favor of the smarter, younger, hipper ChatGPT, and suddenly, our least favorite source of second opinions comes with the ability to write a report on any topic of your client’s choice.
In business circles, one can’t escape the clamor for applications using big data and artificial intelligence. Why, then, is the veterinary industry’s response so muted? Veterinarians already generate vast amounts of electronic data and could benefit from data-based medical insights. They struggle with staffing and beg for ways to help ease veterinary burnout. Why aren’t veterinary clinics first in line to demand and adopt artificial intelligence tools?
A Shock to the System
In February 2024, the American Animal Hospital Association released “AI in Veterinary Medicine: The Next Paradigm Shift,” a study co-published with veterinary software provider Digitail. Based on a survey of 3,968 respondents across the veterinary industry, the study revealed an undecided and cautious profession. As a group, we are intrigued by the potential of artificial intelligence but wary of a technology we understand so poorly, with so few established benchmarks, so many unanswered questions and so little data on real-world efficacy. Perhaps we older folk sustained post-traumatic stress disorder from having struggled to adapt to electronic medical records back when they emerged as the hottest new tech tool.
What doesn’t help is that, for many of us, AI technology is as understandable as black magic. It boils down to this: How are we supposed to trust tools when:
- We don’t understand them?
- We can’t evaluate them critically?
- We have no performance benchmarks for them?
- Regulatory organizations offer no clear messaging about them?
- In the end, they might hurt us more than help us?
Myriad Benefits
AI technology’s progress is outpacing our profession’s ability to keep up. We see the robots coming and aren’t quite sure what to do about them. It’s time we break out of our inertia. We can’t stop AI’s evolution, but with education, self-advocacy and early engagement, we can harness the robots to march for us instead of over the top of us.
As companies recognize the cost and time savings that AI tools can provide, the start-up space is flourishing. The marketing agency Full Slice lists more than 30 companies providing AI-based tools and services in the animal care industry [bit.ly/4ely04e].
Today’s AI technology can increase speed and efficiency in nearly all aspects of the veterinary-client interaction. With tools to independently schedule client appointments, summarize complex medical histories, automatically populate medical records during appointments, create customized discharge instructions and even operate client chat interfaces, we can spend less time staring at a screen and more time interacting meaningfully with clients and patients. We can reduce our stretched support staff’s workload.
Diagnostic providers are incorporating AI technology into radiograph and lab value interpretations and diagnostic suggestions. By the time you read this, you might see tools that offer comprehensive differential diagnosis lists based on laboratory results and aggregated pet medical records.
A year ago, a Google Research report showed that AI-assisted doctors provided significantly more comprehensive lists of differential diagnoses than unassisted doctors or doctors assisted with standard search engines and non-AI resources.
The future, my fellow professionals, is here.
Of course, there is a dark side. The most brilliant technology can ultimately fail its users. Programs that perform well in the computer lab might not replicate their success in the real world. Poor data or bias can generate inaccurate machine learning algorithms and faulty recommendations. A program’s user interface might be so complex and finicky that it takes more effort to operate than having humans perform the tasks. A tool that does not protect a patient’s data privacy can violate existing legislation. Indeed, we might be compliant with legislation one year, only to find ourselves in violation the following year as the legal ecosystem adapts. A tool that makes independent recommendations might expose practitioners to more liability.
Talking About AI
Consider this: For the first time, diagnostic input and recommendations are being written directly into the veterinary medical record by an entity other than the doctor, staff or another veterinarian. How does that affect our liability if a case goes poorly? On a more existential level, in a future where apps might supply all the differentials, should we be worried about atrophy or underdevelopment of our practitioners’ diagnostic skills? Might we risk our ability to think independently if we over-rely on machine learning algorithms?
Luckily, players in the veterinary and AI technology spheres are working to bring order to the melee. In April 2024, the Symposium on Artificial Intelligence in Veterinary Medicine took place at Cornell University. Leaders at the intersection of AI in veterinary medicine, representing research, academia, industry, food production, computer engineering and private practice, attended the three-day event. Through presentations and workshops, they took stock of the state of AI in veterinary medicine, identified the challenges and proposed solutions to help the technology move forward.
According to symposium organizer Dr. Parminder Basran: “The purpose of this symposium was to provide an opportunity for discussion and collaboration amongst researchers who focus exclusively on the intersectionality of veterinary medicine and artificial intelligence, foster cross-disciplinary discussions and collaborations, and provide an opportunity for students, faculty and the private sector to learn from each other in this rapidly developing field.
“Creating a welcoming environment is an essential ingredient in establishing a space where people can exchange ideas, develop partnerships, and explore new research and clinical opportunities. We believe we achieved this first seminal step and hope that the next ones could be more action-based.”
When asked what effect the symposium might have on the average veterinarian, Dr. Basran commented, “The natural step is educational empowerment and providing the tools needed for practitioners to explore how AI could improve practices.”
Responsible Development
Organizations are realizing that AI tools are only as valuable as the performance benchmarks and safeguards put in place to protect end users. The Responsible AI Institute is a global nonprofit group that equips organizations and AI professionals to create, procure, and deploy safe and trustworthy AI systems.
“As artificial intelligence continues advancing, it will be crucial to develop these technologies responsibly in the health care domain,” said Alyssa Lefaivre Škopac, the institute’s head of global partnerships and growth. “AI systems could revolutionize disease diagnosis, treatment planning, drug discovery and much more for human medicine.
“At the same time,” she continued, “we must be thoughtful about mitigating risks like bias and privacy violations. The veterinary field has similar opportunities to leverage AI for improving animal health outcomes while upholding high standards of responsibility. Developing AI that is accurate, transparent and aligns with core values in health care settings will be an important multistakeholder challenge in the coming years.”
The nonprofit Association for Veterinary Informatics has been in existence since 1981. Its chairman, Dr. Nathan Bollig, reports that “some of the biggest challenges are keeping up to date on what AI can and cannot do, being thoughtful about whether it is the right tool for the job, and thinking proactively about how to define successful outputs of AI systems.”
Teaming With Engineers
The pipeline that generates AI technology is uniquely positioned to prioritize the needs of veterinary practitioners. The engineers learning to make the technologies are being taught the critical importance of involving industry professionals from the start of their projects.
Stanford University’s online course “AI in Healthcare Specialization” specifies that a clinical AI development team should follow the expert archetype model. This model outlines the core participant roles on teams developing machine learning algorithms, all of them involved from Day One. Domain experts (the technology’s end users) are placed on equal footing with computer engineers, data scientists, IT informatics experts and biostatisticians. This emphasis on seeking out and integrating end users makes it much more likely that AI technology will meet their needs. We must educate ourselves so we can tell developers exactly what we do and don’t want.
Next Steps
Yes, the machines are coming. They are immature, however, and we can influence their growth. Much like the drugs we prescribe, the machines can help or cause harm. We must position ourselves in a way that will allow AI technology to help us. We can do it by learning about the technology, its applications, risks and benefits. We can learn to evaluate tools on the market critically and ask informed questions of tool developers. We can engage with tool developers and advocate for user-friendly interfaces from the ground up. We can push the industry to prioritize our needs as professional caregivers first and data generators second. We can have conversations with each other. We can learn about our ethical and legal responsibilities and connect with organizations that help guide us on how to meet them.
In addition, we can ask our regulatory bodies for continuing education opportunities in AI. We can challenge veterinary associations to develop responsible AI adoption frameworks that protect us as end users.
Most importantly, we can recognize how exciting it is to be in veterinary medicine today. We have a fantastic opportunity to influence the process of change and define how we want our hospitals to evolve over the next decade. It is in our best interest to seize the opportunity.
WATCHFUL EYES
Under President Biden, the White House Office of Science and Technology Policy published “Blueprint for an AI Bill of Rights.” The white paper is nonbinding and “intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment and governance of automated systems.” Learn more at bit.ly/3Af41LQ.