Privacy and Veterinary Medical Records with AI
Veterinarians do not have HIPAA in the same way human-medicine clinicians do — but every state's veterinary practice act has strict rules about client confidentiality, and most clinics have written policies that pre-date AI tools. Pasting an unedited record into ChatGPT can violate those rules, expose your client's data, and create a discoverable mess if anything ever ends up in front of the state board. This lesson gives you a clear, defensible workflow.
What You'll Learn
- What "veterinary protected information" actually includes
- The 60-second de-identification routine before any AI paste
- Which AI accounts retain training data and which do not
- How to update your clinic's privacy policy and informed consent for AI use
What Counts as Veterinary Protected Information
Even though HIPAA does not apply to a vet clinic, courts and licensing boards treat the following as confidential client information:
- Client name, address, phone number, email, billing details
- Pet's name when it functions as a record identifier (especially when paired with the owner's last name)
- Microchip numbers, license tag numbers, rabies tag numbers
- Photographs of the patient that show identifying tags or collars
- Diagnostic images attributable to a specific patient
- Specific exam dates that, combined with breed and zip code, could re-identify a small clientele
- Controlled substance log entries
- Anything that would identify a specific shelter, breeder, or research facility client
The realistic threshold: if a court could match the data back to a specific client, treat it as confidential.
The 60-Second De-Identification Routine
Before pasting any record into a consumer AI tool, run this five-step pass. It takes about a minute once you build the habit.
Step 1. Replace the client name with "Owner." Replace the patient name with "Patient" or a generic placeholder like "Buddy."
Step 2. Replace the exact date of visit with a relative reference: "today," "yesterday," or "10 days post-op." A specific date combined with a small clientele can re-identify a client.
Step 3. Remove the address, phone number, email, and any account or invoice number.
Step 4. Generalize the patient's age to a life stage if it's a working dog or rare breed in your region. A 4-year-old Belgian Malinois in a small town is identifiable. A "young adult Belgian Malinois" is not.
Step 5. Strip the practice name, your clinic's address, and any DVM signature block from the record before pasting. You can add it back after the AI returns the draft.
A worked example. The original note:
"Apr 22 2026, Smith, John — Buddy (10y MN Golden Retriever, 31 kg) presented to Maple Hills Veterinary at 2:14 pm with acute vomiting starting yesterday evening..."
Becomes:
"10-year-old MN Golden Retriever, 31 kg, presenting today with acute vomiting starting yesterday evening..."
The AI loses zero clinical context. The client loses zero privacy.
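Steps 1 through 3 are mechanical enough to script before the human review pass. The sketch below is illustrative, not a vetted redaction library: the `deidentify` helper, the regex patterns, and the placeholder strings are all assumptions, and Steps 4 and 5 still require a person. Always read the output before pasting it anywhere.

```python
import re

# Illustrative patterns for Steps 2-3. These are NOT exhaustive --
# always review the result by hand before pasting into an AI tool.
REDACTIONS = [
    # Step 3: contact details
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[email removed]"),
    (re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[phone removed]"),
    # Step 2: exact dates become a placeholder you rewrite as
    # "today" / "10 days post-op" by hand
    (re.compile(
        r"\b(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)"
        r"[a-z]*\.?\s+\d{1,2},?\s+\d{4}\b"), "[relative date]"),
    # 15-digit ISO microchip numbers
    (re.compile(r"\b\d{15}\b"), "[microchip removed]"),
]

def deidentify(note: str, client_name: str, patient_name: str) -> str:
    """Apply Steps 1-3 mechanically; Steps 4-5 need human review."""
    # Step 1: swap names for generic placeholders
    note = note.replace(client_name, "Owner")
    note = note.replace(patient_name, "Patient")
    for pattern, placeholder in REDACTIONS:
        note = pattern.sub(placeholder, note)
    return note

original = ("Apr 22 2026, Smith, John -- Buddy (10y MN Golden Retriever, "
            "31 kg) presented with acute vomiting starting yesterday "
            "evening. Call 555-867-5309 with results.")
print(deidentify(original, "Smith, John", "Buddy"))
```

Even with a script like this, treat the output as a first pass: clinic names, rare breeds, and free-text identifiers (Step 4 and Step 5) will slip through any pattern list.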
Which AI Accounts Retain Data
This matters more than most vets realize. Different tiers of the same product behave differently.
ChatGPT free. Conversations are by default used to improve OpenAI's models unless you toggle data controls off in settings. Do this immediately — the toggle is under "Data Controls" → "Improve the model for everyone."
ChatGPT Plus and Pro. Same default as free: turn off training data sharing in settings.
ChatGPT Team, Business, and Enterprise. These tiers do not train on your data by default. Suitable for clinic-wide use.
Claude.ai (free and Pro). Anthropic asks consumer users to choose whether conversations may be used for training; confirm this is turned off in your Privacy Settings. With that confirmed, a strong baseline for clinical drafting work.
Gemini. Free Gemini conversations may be human-reviewed unless you turn off Gemini Apps Activity. Workspace Business and Enterprise tiers have stricter data-handling.
Perplexity. Search-style queries with citations. The Pro tier has a privacy mode that prevents your queries from being used for training.
Bottom line. For paid clinic use, Claude with training opted out is a simple privacy default. For ChatGPT, turn off training, or move to a Team or Enterprise tier if multiple staff use it.
Building a Defensible Workflow
Three steps to a defensible AI-in-the-clinic policy:
1. A one-paragraph internal policy. Add to your clinic's existing privacy policy: "We use approved AI tools to assist with drafting medical records and client communications. No client names, addresses, phone numbers, or other identifying information are entered into these tools. All AI-generated content is reviewed and approved by a licensed veterinarian before being added to the medical record or sent to a client." Keep it on file. State boards reward documented intent.
2. A pre-approved tool list. Write down which AI tools your team is allowed to use, on which accounts, with which settings. For example: "Claude Pro on the clinic account; ChatGPT Plus on the clinic account with training data turned off; Perplexity Pro for drug research only." This prevents your tech from pasting a record into a random app you have never vetted.
3. Disclosure on intake forms. Add a short line to your client intake or consent form: "Our team may use AI tools to assist with drafting your pet's medical records and discharge instructions. All AI-assisted content is reviewed and signed by a veterinarian. We do not enter client names, addresses, or phone numbers into AI tools." Most clients are reassured rather than alarmed by transparency.
A Word on Diagnostic Images
Do not paste cytology slides, radiographs, ultrasound stills, or histopathology images into a general-purpose chatbot expecting a diagnosis. Beyond the privacy risk, these tools are not validated for veterinary imaging — they will confidently make things up. Use validated veterinary imaging AI products (such as those integrated with your reference lab or PACS) for image-based diagnostic support.
Key Takeaways
- Veterinary records are confidential under state practice acts even without HIPAA
- Run the 60-second de-identification pass before every AI paste
- Turn off training data sharing in ChatGPT, Gemini, and Claude before clinic use
- Maintain a one-paragraph AI policy and a pre-approved tool list at your clinic
- Never use a general chatbot to interpret a diagnostic image

