Petra K. Harms, DVM
Dr. Petra K. Harms is the founder and CEO of VetMaite.com, which helps veterinarians navigate technological change and integrate artificial intelligence into their practices. A practicing emergency and primary care veterinarian since 2009, she holds certificates of specialization from Harvard and Stanford on AI in health care.
OK — drumroll, please — artificial intelligence is here. But how do we adopt it further into veterinary practice? As machine learning technology becomes more pervasive, we’re seeing the global development of guidelines, governance and regulations to ensure AI is implemented safely and responsibly. For better or worse, the veterinary industry has largely escaped such scrutiny. However, the hands-off approach has left us with little direction on using the tools as safely as possible. Uncertain of our responsibilities and how to identify and manage the risks, some veterinary professionals hesitate to engage with new AI tools for fear of causing harm. Or they jump in blindly without addressing the risks. Either outcome could stunt or derail AI’s evolution and integration into hospital toolkits.
Not Quite “Primum Non Nocere”
As clinicians, we navigate gray zones daily. Think of the adage “First, do no harm.” Some people wrongly link it to the Hippocratic oath, thinking doctors follow it as the first commandment of medicine. “Just don’t hurt anything while working to improve it.” Nothing could be simpler, right?
When we start to practice, we realize that veterinary medicine is much messier than that. Treatments have side effects. Interventions carry consequences. Test results have margins of error. Humans are fallible. Pet owner wallets are tight. Nothing comes with a guarantee. First, do no harm — “primum non nocere” in Latin — becomes the more realistic philosophy of:
- Understand your patient, treatment or test.
- Aim for a net benefit when balancing the positives and negatives.
- Provide the very best you can with the resources at your disposal.
- Communicate the hell out of everything.
And all that is OK. It’s medicine. It’s also a healthy way to approach integrating AI technology into your veterinary practice.
Veterinarians can use their ability to navigate gray zones when adopting AI technology responsibly. They can work to understand the tools and the risks, balance the risks against the benefits, and decide when to apply the programs and when to use restraint.
SEER Principles
Guidelines for adopting AI tools into clinical practice must be flexible since technology evolves quickly. Regulations change, and the rules imposed today might be obsolete tomorrow. To keep pace with technology without stifling research and discovery, our approach to artificial intelligence must rest on principles and best practices rather than hard-and-fast rules. We want AI to align with our professional ethics and address our concerns as end users. We can combine all these needs into a framework of principles that goes by the acronym SEER.
It breaks down this way:
- Safe: AI technology will be people- and animal-centric. It will strive to avoid harming either party when we generate, store, distribute and use data for veterinary purposes.
- Effective: AI technology will achieve its assigned tasks. The creators will monitor and maintain it.
- Ethical: The developers of AI technology will work to minimize its environmental impact, data siloing, the atrophy of diagnostic thinking, and unfair distribution or implementation. The technology will be transparent and explainable and provide accessible opt-out mechanisms.
- Responsive: AI technology will prioritize the veterinary user interface by minimizing complex interactions, streamlining processes and performing quickly. Actionable issues will be corrected.
How Practitioners Can Assess AI Tools
Say you want to test an AI-based tool at your veterinary practice. You can evaluate it by asking yourself a series of questions under the SEER principles. Not every AI tool will answer all your questions to your satisfaction, so you must judge whether the program or machine is the right one for your clinic. You will determine the level of risk to you and your staff, clinic, patients and clients. The degree to which the tool is intrinsic to your practice’s operations will dictate your dependence. Those factors can vary based on the user. For example, a new veterinarian might lean on a diagnostic tool more than an experienced practitioner.
As you take on higher risk or greater dependence, your adherence to the SEER principles becomes more critical. In the contrasting scenarios below, risk is the chance that significant harm occurs when an AI program is used, whether it functions as intended or makes an error. Dependence is the degree to which the program influences the performance of the clinic and its staff, and how readily its functions could be replaced if it failed. (A simple way to encode a tool's profile appears in the sketch after the scenarios.)
High Risk and High Dependence
Decisions might cause severe patient harm if an error occurs, and staff performance could suffer if the program malfunctions. For example:
- A program suggests treatments when used by a newly graduated veterinarian.
- A new veterinarian uses an AI-based radiograph interpretation tool.
- A program predicts cardiac arrest based on real-time ECG signals.
Low Risk and High Dependence
Decisions won’t cause severe patient harm if an error occurs, but staff performance could suffer if the program malfunctions. For example:
- A program schedules staff work shifts.
- A program orders medical supplies.
High Risk and Low Dependence
Decisions can cause severe patient harm if an error occurs, but staff performance won’t suffer if the program malfunctions. For example:
- A program conducts telephone triage and prioritizes patients.
- A program summarizes a patient’s medical history.
- A program suggests treatments when used by a seasoned veterinarian.
- A program transcribes orally dictated notes.
Low Risk and Low Dependence
Decisions won’t cause severe patient harm if an error occurs, and staff performance won’t suffer if the program malfunctions. For example:
- A program creates automated discharge instructions.
- A program automatically fills in referral information.
- A program automatically signs up a patient for pet health insurance.
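To make the matrix concrete, here is a minimal sketch, in Python, of how a practice might encode a tool's risk and dependence profile and map it to a level of SEER scrutiny. The tool names, fields and scrutiny labels are illustrative assumptions, not part of any published framework.

```python
# Illustrative sketch only: the tools, fields and scrutiny labels below
# are assumptions, not an established veterinary standard.
from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    high_risk: bool        # Could an error cause significant patient harm?
    high_dependence: bool  # Would clinic workflow suffer if it failed?

def scrutiny_level(tool: AITool) -> str:
    """Map a tool's risk/dependence profile to how rigorously it
    should be vetted against the SEER principles."""
    if tool.high_risk and tool.high_dependence:
        return "maximum scrutiny: full SEER review before adoption"
    if tool.high_risk or tool.high_dependence:
        return "elevated scrutiny: focus on the SEER principles at stake"
    return "baseline scrutiny: standard SEER checklist"

tools = [
    AITool("ECG arrest predictor", high_risk=True, high_dependence=True),
    AITool("Shift scheduler", high_risk=False, high_dependence=True),
    AITool("Discharge-instruction writer", high_risk=False, high_dependence=False),
]

for tool in tools:
    print(f"{tool.name}: {scrutiny_level(tool)}")
```

The point is not the code itself but the habit: name a tool's profile explicitly before you start asking the vendor questions.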
What to Ask
Once you identify a program’s risk and dependence profile, you can start questioning the developer or supplier to evaluate the tool in line with the SEER principles.
1. Is the AI Program Safe?
- Will the data I generate be used for anything besides storing information about my cases and clients? In other words, will the developers mine my data or use it to train their algorithms?
- If the data I generate will be used for something other than maintaining client records, what de-identification measures will be taken to comply with veterinary, state or national data privacy regulations? And if no regulation applies, does the company voluntarily de-identify the data to HIPAA-style standards? (A short illustration of de-identification follows this list.)
- Can I opt out of data sharing? Can the data collector certify its compliance or issue a document that guarantees data privacy or de-identification?
- If de-identification doesn’t happen and my practice wishes to proceed with the AI program, how will I obtain informed, documented consent from the client?
- If the client declines to provide informed consent, how can I store the data and not share it with the AI program?
- Where is the data stored? If it’s off-site, what safeguards protect it?
- Does the AI program insert text directly into the electronic medical record?
- If an AI tool generates diagnostic results, test suggestions or treatment options, does it indicate that artificial intelligence supplied the information, and does it say whether a veterinarian (and which one) verified the material?
If the AI program interfaces with the client, such as with a booking or triage tool:
- Does it clearly tell clients they are interacting with AI technology?
- Does it document the interaction and copy relevant notes to the patient’s file?
- Does it transfer the client to a person at the appropriate time, such as when the patient displays signs of medical distress or trauma or if the pet’s condition changes?
- Does the tool communicate signs of medical distress?
- What dataset does the program use to generate discharge instructions, and how often is the algorithm updated?
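As a toy illustration of the de-identification question above, the following sketch strips direct identifiers from a record before it leaves the practice. The field names are assumptions, and real de-identification to HIPAA-style standards covers far more (dates, free-text notes, rare breed-and-location combinations), so treat this as a conversation starter with a vendor, not a compliance tool.

```python
# Toy example: field names are assumptions, and this is nowhere near a
# complete HIPAA-style de-identification. Verify the vendor's real process.
DIRECT_IDENTIFIERS = {"owner_name", "owner_phone", "owner_email", "owner_address"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items()
            if key not in DIRECT_IDENTIFIERS}

record = {
    "patient_id": "A-1042",
    "species": "canine",
    "presenting_complaint": "vomiting",
    "owner_name": "Jane Doe",
    "owner_phone": "555-0100",
}
print(deidentify(record))
# {'patient_id': 'A-1042', 'species': 'canine', 'presenting_complaint': 'vomiting'}
```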
2. Is the AI Program Effective?
- Did the AI tool's developers base it on a large foundation model, such as GPT-4, DALL-E 2 or BERT, or does it arise from an algorithm trained on a unique dataset?
- If it is built on a foundation model, does the provider identify potential risks in using that model, and if so, how are the weaknesses compensated for?
If the AI program is based on an algorithm trained on a unique dataset, ask these questions:
- Is a separate validation set held out from training rather than used for it, and how well does the model perform against that set? (A short illustration follows this list.)
- Is the accuracy acceptable to me as a user?
- How often does the tool update its dataset and retrain the artificial intelligence?
- Is feedback about tool errors used to retrain the model, and how often?
- Which animal breeds and species was the algorithm trained on?
- Could potential bias within the dataset create output errors?
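For readers unfamiliar with the term, here is a minimal sketch of what "performance against a held-out validation set" means, using scikit-learn and synthetic data in place of real clinical records. A vendor's pipeline will be far more involved; the essential idea is that accuracy is measured on records the model never saw during training.

```python
# Minimal sketch with synthetic data; a real clinical validation is far
# more involved (class balance, breed/species coverage, external test sets).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled clinical dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out 20% of the records; the model never trains on them.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# This is the number to ask the vendor about: accuracy on unseen data.
val_accuracy = accuracy_score(y_val, model.predict(X_val))
print(f"Validation accuracy: {val_accuracy:.2%}")
```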
3. Is the AI Program Ethical?
- Does the AI tool support a veterinarian's diagnostic thinking framework, such as by providing differentials in order of likelihood rather than offering a single recommendation?
- Does it provide suggestions in an explainable way? For instance, does it provide medical reasoning for the suggestion?
- If it recommends treatments, is a variety of choices presented?
- Does the creator have a track record or certifications proving responsible AI development?
- Does the program share de-identified data for research or scientific purposes?
- Does it tag data using a universal or commonly used system to move away from data silos?
- Does it support research into and the development of ethical AI use?
- Does the developer try to reduce the environmental impact of AI technology?
- Does the provider support fair AI use?
4. Is the AI Program Responsive?
- Does the user interface interact seamlessly with records systems, such as electronic medical records, imaging modalities and test results?
- If the interface isn’t seamless, what steps are needed to insert the data into your records system?
- Do the extra steps increase or save staff time? Anything that takes longer, including switching between programs and windows, is not an acceptable user interface.
- What is the contingency plan should the technology fail for reasons such as a power loss, loss of internet connectivity or program failure? For example, how easily could you switch to manual data entry, scheduling or phone calls?
- How easily can you port the data into another system if you stop using the technology? (A small export sketch follows this list.)
- Can team members easily share errors and efficiency problems with the tool’s developer?
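On the portability question, here is a quick sketch of what a clean exit looks like: your records exported to open formats, such as JSON or CSV, that any other system can import. The file and field names are assumptions; the practical test is whether the vendor offers an equivalent export.

```python
# Illustrative only: field and file names are assumptions. The real
# question is whether the vendor provides an equivalent open-format export.
import csv
import json

records = [
    {"patient_id": "A-1042", "species": "canine", "visit_date": "2024-03-01"},
    {"patient_id": "B-2210", "species": "feline", "visit_date": "2024-03-02"},
]

# Open formats keep your data usable if you switch systems.
with open("export.json", "w") as f:
    json.dump(records, f, indent=2)

with open("export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(records[0]))
    writer.writeheader()
    writer.writerows(records)
```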
What to Ask About Your Clinic
Once you decide whether an AI tool is appropriate for your veterinary practice, you must evaluate when to implement it. Assessing your clinic and any needed upgrades might be the first step, but you also must train your team to use the technology safely and appropriately.
Ask these questions:
- Is my clinic capable of running the program given our computing resources, IT support, physical hardware and bandwidth?
- Will installing the AI program require additional equipment or technical upgrades?
- Are my team members comfortable using AI technology? Do they need better AI literacy skills?
- Do they trust AI technology? Have I heard and addressed their concerns?
- Have I told team members how the AI program will affect their jobs?
- Have I communicated the technology’s purpose so that my team members are comfortable adopting it?
- Do they understand the technology’s limitations and benefits?
- Are they aware of their liability when using AI-based tools?
- Are they empowered to provide feedback about AI technology, and are there mechanisms to accelerate their input to the developer?
- Are they encouraged to act according to their professional discretion if they think the AI program is wrong, is excessively time-consuming or doesn’t make sense?
Navigating the Gray Zones
We don’t know where artificial intelligence will ultimately take the veterinary industry, but the uncertainty shouldn’t stop us from engaging with AI. Deciding how we use and don’t use it will influence what the developers offer. We can apply the same critical thinking abilities and moral compass we use daily in medical practice to evaluate whether a particular AI tool is suitable.
The principles of safety, effectiveness, ethics and responsiveness can guide how we interrogate AI tools and help us decide whether the benefits are worth the risks.
The responsible use of AI tools isn’t rocket science. It’s more like medical practice.