Generative AI
For guidance on introducing AI tools into general practice, see:
Policy
Q123
GPDocs Model Practice allows team members to use AI to assist with clinical note-taking and administrative tasks (e.g. report writing). Everyone who uses AI must comply with the terms and conditions of the tool.
We do not allow clinicians to use generative AI for clinical decision making.
Before introducing generative artificial intelligence (AI) tools, we assess and mitigate the risks to ensure privacy and safety are protected. We comply with New Zealand legislation governing the use of AI, in particular the Privacy Act 2020 and its Information Privacy Principles (IPPs).
Generative AI (GenAI) is a subcategory of artificial intelligence that uses algorithms to generate new content, or "outputs", including text, images, music, code, and voice. This technology analyses large datasets to identify patterns and make decisions, operating on "prompts" that users provide to instruct it.
Unlike traditional AI, which focuses on recognising and processing existing information, GenAI creates new content based on what it has learned. Prominent examples of GenAI tools include ChatGPT, DALL-E, Nabla, Copilot, and Heidi, each offering unique advantages, functionalities, and potential drawbacks.
Source: WellSouth: Guidance for the use of Generative AI within Primary Care
We recognise this is rapidly evolving technology, and we reassess our processes as needed.
See Privacy Commissioner | Mana Tohu Mātauranga o Aotearoa: Artificial Intelligence and the IPPs.
We consider the terms and conditions, privacy settings, data collection, and content ownership agreements of all AI tools, platforms, or apps in use. These are reassessed whenever a tool is updated. We don't allow confidential information owned by our organisation or copyrighted material to be entered into any generative AI tool.
Before using AI, we assess and mitigate the medico-legal risks to ensure that safety and privacy are not compromised:
Q124
- All AI tools must be approved by senior leadership before being used.
- A Privacy Impact Assessment (PIA) is completed for each AI tool in use.
- Explicit verbal consent is obtained from the patient at the start of each consultation where AI is used, and this is documented in the clinical notes.
- Signage in the waiting area advises patients that AI tools are used.
- A statement on our enrolment form advises new patients that AI tools are used.
Information generated by AI must be reviewed and validated before it is saved. Where clinical notes are created using AI, the relevant clinician is responsible for their accuracy.
Safeguarding privacy and security
A Privacy Impact Assessment (PIA) must be completed for all AI tools in use, to evaluate potential risks and ensure compliance with New Zealand legislation.
See Privacy Commissioner | Mana Tohu Mātauranga o Aotearoa: Privacy Impact Assessments.
Before using an AI tool, we confirm:
- private or clinical information generated or processed by AI is encrypted and/or de-identified
- if/where data is stored (even temporary storage) and any privacy implications. Storing data overseas (even temporarily) may breach New Zealand privacy legislation, so consent should be obtained for patient data stored overseas
- the AI tools used in our organisation meet New Zealand legislative requirements under the Privacy Act 2020.
Follow the privacy breach procedure for any breaches that result from AI use.
Obtaining and documenting consent
Consent must be explicitly obtained if AI is used during a consultation:
- Verbal consent must be given at the start of each consultation. This is documented in the clinical record.
- Verbal consent is obtained for each subsequent consultation where AI is used, on a case-by-case basis.
- If consent isn't given, AI tools are not used during consultations.
- Patients may withdraw consent for AI use at any time.
- Where AI is used within our organisation, information is provided openly and transparently using a layered approach. Best practice is to build patient understanding of AI use by providing information in a number of different ways.
For example:
- statement on the enrolment form
- verbal consent at the start of each consultation, documented in writing
- continued verbal consent for subsequent consultations, at the clinician's discretion
- signage in the waiting area
- information on the website
- creation of an FAQ document.
- If an AI tool being used stores or processes data overseas, this is explained as part of the consent process.
See also Informed Consent.
Ensuring data accuracy
Information generated by AI tools must be reviewed and validated by the relevant clinician or team member before being saved or stored. The relevant clinician is responsible for the accuracy of clinical notes created using AI.
Before saving AI-generated information, ensure it is accurate, complete, and adequately reflects the care provided:
- Edit or correct errors, discrepancies, or inaccuracies. Common errors with AI transcription include:
- misheard or misinterpreted information
- "hallucinations" or fabricated information
- difficulty distinguishing between speakers
- background noise or poor acoustics
- accent or dialect recognition
- false positives/negatives.
- Add any missing information. Ensure the AI summary includes:
- information not explicitly discussed, e.g. hospital discharge summaries or imaging reports
- non-verbal cues from the patient
- examination findings, if not spoken out loud
- data from medical devices.
- Check medication names are correct.
- Correct the spelling of Māori or Pacific words if there are errors.
- Be careful that bias and/or inequities haven't been introduced. AI models are likely to have been trained on data with underlying biases, especially if they haven't been trained on New Zealand data, te reo Māori, or local cultural considerations. Take care that AI-generated clinical records don't perpetuate biases or reinforce inequities.
Audits
Use of AI is monitored and audited as part of clinical governance.
Regularly review AI use to ensure:
- any AI use is appropriate and necessary
- Privacy Impact Assessments (PIAs) are current
- consent is obtained and recorded at the start of each consultation where AI is used
- clinical information is accurately recorded
- biases and inequities are not being introduced.
See also Clinical Governance.
Resources