Rohan Relan
Rohan Relan is the founder and CEO of ScribbleVet, an AI scribe tool used by thousands of veterinarians. He is a seasoned entrepreneur with an engineering background from the University of California, Berkeley.
Over the past few years in veterinary medicine, we’ve watched artificial intelligence move from novelty to vital infrastructure. AI can document patient exams, generate client summaries, and fill gaps in workflows that were stretched long before today’s staffing shortages. Many veterinarians have found that AI improved their records and created tangible relief for their clinical teams. Unfortunately, that progress does not erase one very real issue: trust.
Given how AI works, and given clinical medicine's rigorous standards, it's reasonable to worry that an AI system could fabricate a detail or miscalculate a dosage. Like a talented assistant or a peer-reviewed article, AI works best as part of a system of checks and balances. Understanding good AI stewardship is key.
The way artificial intelligence sources, interprets, and structures information creates new kinds of business risk that need responsible management. The first step is understanding where those risks come from and how to mitigate them.
Sourcing Errors Are a Business Risk
AI works by making predictions. It fills gaps using the patterns learned during training. When those patterns are strong, the results are excellent. When the data is weak or ambiguous, the model guesses or extrapolates. Sometimes those guesses are nonsensical or counterfactual, or the extrapolations go too far. We label those as hallucinations.
The risk starts when humans forget that artificial intelligence hallucinates and assume that AI tools are infallible. The solution is to use AI within a framework that captures the benefits while catching errors before they cause harm.
Here’s a practical way to think about fitting AI safely into your workflow: Use it when generation is difficult and time-consuming but verification is easy.
AI can make mistakes. The goal isn't to find tools that never make errors; it's to use AI in contexts where your clinical expertise allows you to catch those errors quickly and confidently.
Consider AI scribes. Writing SOAP notes from scratch after every appointment might take 10 or more minutes per patient, adding up to hours a day. But reviewing a set of AI-generated notes as soon as you walk out of the exam room? Only a minute or two of your time. The details are fresh in your mind. You know what you discussed, what you observed, and what you recommended. Spotting an error simply requires reading.
Here is the sweet spot for AI in clinical practice: tasks where the technology does the heavy lifting of input or generation, while the clinician provides evaluation and quality control. The time savings are real, and the verification burden is low enough that it doesn’t eat into those savings.
The same principle applies to client communication. Drafting a detailed, personalized discharge summary or follow-up email can take several minutes. Reviewing one that AI drafted based on the visit? Much faster, especially when you conducted the exam.
Making Verification Possible
Not all AI tools are built with verification in mind. When evaluating any system, ask yourself: Can I easily check this tool’s work?
Take record summarization as an example. If you’re using artificial intelligence to condense a lengthy patient history into a quick overview before an appointment, the summary is only as trustworthy as your ability to verify it.
A well-designed system will provide citations or references to the original records, allowing you to click through to the source note when something looks off or when you need more detail. If the tool cannot trace the information’s source, you’re stuck choosing between blind trust and manually searching the complete record yourself. That defeats the purpose and adds risk.
The best AI tools treat transparency as a feature, not an afterthought. They show their work. Where the information came from is obvious. This doesn’t eliminate the possibility of errors, but it does eliminate the need for unconditional trust. You can verify, and verification is easy.
Clinical Decision Support, Not Clinical Decision-Making
Another area where artificial intelligence has become increasingly present is clinical reasoning. Tools that generate differential lists, flag potential drug interactions, or suggest diagnostic pathways can be genuinely useful. But the framework here matters even more.
AI can surface a list of differentials based on the data provided and sources available. That list might help ensure nothing was missed, or it might trigger a line of thinking that leads to a follow-up question. That’s decision support — a starting point, not an end point. The veterinarian needs to evaluate the differentials against the full clinical picture, rule out what doesn’t fit, and decide the diagnostic and treatment plan. That’s decision-making, and it’s the clinician’s responsibility.
Danger arises when the line between support and decision blurs. If a busy clinician starts treating AI suggestions as conclusions, errors can slip through. The safeguard is the same as before: Use the tools in ways that keep verification quick and keep final judgment in human hands.
When Verification Is Hard, Keep the Stakes Low
Not every task fits neatly into the easy-to-verify category. Sometimes AI is asked to synthesize information in ways that are difficult to check or to operate in areas where the clinician lacks immediate context to evaluate the output.
In those cases, the right approach is to limit AI use to low-risk scenarios. If you can’t easily verify the output, don’t use it in situations where an error could harm a patient or create liability for your practice. Reserve artificial intelligence for tasks where a mistake would be inconvenient, not dangerous.
This isn't a limitation of AI so much as a principle of responsible adoption for the veterinary industry. Every tool has appropriate and inappropriate uses. AI is powerful, but it belongs in contexts where its strengths align with your ability to oversee its weaknesses.
The Path Forward
Veterinary teams are under immense pressure. Burnout, documentation load, client expectations, and staffing shortages create an environment where efficiency gains aren’t optional. Artificial intelligence offers the kind of relief that overwhelmed teams covet, and the clinics that adopt it thoughtfully will be better positioned to weather the current challenges.
Thoughtful adoption for the industry means building workflows that don’t require blind trust. It means:
- Finding the tools that save time on generation and make verification simple.
- Asking vendors whether their systems cite sources.
- Keeping clinical decisions with the veterinarian.
You can’t trust AI tools completely, but you can use them in ways that don’t require you to.
LEARN MORE
For a brief explanation of machine learning, deep learning, and generative AI, read the IBM article “What Is Artificial Intelligence?” at bit.ly/3MnkYK9.
