AI (Artificial Intelligence) Policy
- Purpose
This policy sets out the principles, responsibilities and acceptable usage requirements for the use of generative AI tools (e.g. ChatGPT) and any AI-enabled features integrated into our CRM (HubSpot) or other business systems at Gallop Executive. The aim is to ensure that AI tools are used safely, ethically, securely and in compliance with data protection and confidentiality obligations.
- Scope
This policy applies to all employees, contractors, consultants, interns and other third-party users (“Users”) who use generative AI tools and AI-enabled features in the course of their work with Gallop Executive. It covers:
- Use of ChatGPT (or similar generative AI platforms) for tasks such as document formatting, research, converting written notes to text, marketing content, etc.
- Use of AI or automation features within HubSpot (or other CRM/marketing systems) as relevant.
- Any other AI-driven tools adopted by the company.
This policy does not permit the use of AI tools in any way that bypasses our confidentiality, data protection or client/candidate privacy obligations (see the Usage Rules & Data Input Controls and Data Protection, Confidentiality and Privacy sections).
- Definitions
- Generative AI tool: Any tool that creates text, images, or other content algorithmically based on prompts and/or data inputs (e.g., ChatGPT).
- Confidential information: All non-public information relating to candidates, clients, Gallop Executive’s business operations, methodologies, strategic plans or proprietary processes.
- Personal data: Any information relating to an identified or identifiable natural person, such as candidate name, contact details, client name, role, feedback, assessments, etc.
- Prohibited input: Any data input into a generative AI tool that includes personal data or confidential or sensitive information (see the Usage Rules & Data Input Controls section).
- Usage Rules & Data Input Controls
- Permitted Use
- Users may use ChatGPT (or other generative AI tools) for general business-development research (e.g. market/sector overviews and role-landscape summaries), for converting handwritten or typed notes that do not contain candidate or client personal data into text drafts, and for proofreading and formatting documents.
- Users may use HubSpot’s AI/automation features for marketing, lead segmentation and task automation, provided that the data used complies with this policy and with HubSpot’s terms of service.
- Prohibited Use / Prohibited Data Inputs
- Never upload or input specific candidate or client personal data (e.g. name, contact details, CVs, interview feedback, salary details, assessments) into ChatGPT or any other external generative AI tool unless the vendor has been contractually approved, meets our data protection requirements and has been assessed as safe for that purpose.
- Never input confidential information about Gallop Executive’s strategies, methodologies or non-public client mandates into generative AI tools unless the tool has been approved and is secure.
- Never rely on generative AI output for decisions about candidates or clients without human review – particularly where privacy, compliance, or fairness is involved.
- Prompt Design & Data Minimisation
- When using generative AI tools, use minimal necessary information in prompts (e.g., general sector, anonymised role summary) rather than identifiable personal or confidential details.
- Ensure that prompts do not include any embedded personal data or sensitive client/candidate details.
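To illustrate the data-minimisation principle, the sketch below shows one possible pre-submission check that replaces obvious contact details in a draft prompt with placeholders before it goes to an external generative AI tool. This is a minimal, illustrative Python sketch only: the function name, regex patterns and placeholders are assumptions rather than an approved control, the patterns will not catch every form of personal data, and its use would not remove the need for human review of every prompt.

```python
import re

# Illustrative patterns only: they catch obvious identifiers (email addresses
# and UK-style phone numbers) but will not catch every form of personal data,
# so human review of each prompt is still required.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
UK_PHONE_PATTERN = re.compile(r"(?:\+44\s?|0)\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}")


def redact_obvious_identifiers(prompt_text: str) -> str:
    """Replace obvious contact details with placeholders before a prompt
    is sent to an external generative AI tool (hypothetical helper)."""
    redacted = EMAIL_PATTERN.sub("[REDACTED EMAIL]", prompt_text)
    redacted = UK_PHONE_PATTERN.sub("[REDACTED PHONE]", redacted)
    return redacted


if __name__ == "__main__":
    draft_prompt = (
        "Summarise the market for finance directors in the Midlands. "
        "Contact: jane.doe@example.com, 07700 900123."
    )
    # Contact details are replaced with [REDACTED EMAIL] / [REDACTED PHONE].
    print(redact_obvious_identifiers(draft_prompt))
```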
- Verification & Human Oversight
- All AI-generated content must be reviewed by a human User before publication or submission, especially marketing material or client-facing documents.
- Any research output must be cross-checked for accuracy and relevance – generative AI tools are aids, not replacements for professional judgement.
- Approval & Vendor Risk
- Before any new AI tool is adopted (especially one that processes candidate/client data), it must be reviewed for vendor risk, data security and privacy compliance (including UK GDPR), and approved by the leadership team (or equivalent).
- Ensure proper contractual terms, data-processing agreements and data residency/retention controls are in place.
- Data Protection, Confidentiality and Privacy
- All personal data and confidential information processed or stored must comply with the UK General Data Protection Regulation (UK GDPR) and any applicable data protection law.
- When using generative AI tools externally, be aware of each tool’s data usage policies; many tools retain input prompts and may use them for model training. Do not input sensitive or personal data unless you have verified how the tool handles, retains and uses that data.
- When using AI features within HubSpot or other internal systems, confirm that data controllers/processors are appropriately compliant and that roles/responsibilities are clear.
- Confidentiality obligations to clients and candidates remain paramount; any use of AI must not compromise confidentiality or client/candidate trust.
- Security & Retention
- Do not store AI-generated content with embedded personal or confidential data in unsecured locations.
- If AI output becomes part of a candidate file or client file, ensure appropriate access controls and retention schedules apply in line with Gallop Executive’s data retention and deletion policies.
- If any data breach or misuse occurs (e.g. personal or confidential data inadvertently entered into a generative AI tool), it must be reported immediately to a company Director (or the designated data protection person) in line with incident management protocols.
- Bias, Fairness and Ethical Considerations
- Recognise that generative AI tools may reflect biases from training data; they are not inherently objective.
- As Gallop Executive is not a volume recruitment business, we never use AI to make screening decisions on candidates. This keeps us true to the core of our proposition: business consultancy and highly tailored, specific and targeted talent identification and engagement.
- Do not depend on AI-generated assessments or summaries to make final judgments about candidates, clients or mandates without human input.
- Ensure outputs are evaluated for appropriateness, fairness and alignment with Gallop Executive’s values and non-discrimination obligations.
- Training & Awareness
- All Users must complete training on this policy, generative AI safe usage, data protection, confidentiality and vendor risk before being authorised to use AI tools for business purposes.
- Refresher training should be provided annually (or more frequently as needed).
- New AI developments, vendor changes and updated terms of service for generative AI tools must be communicated to Users.
- Monitoring, Audit & Compliance
- Gallop Executive will monitor usage of approved AI tools for compliance with this policy (e.g. access records, usage logs and prompts entered); an illustrative log-record sketch appears at the end of this section.
- Periodic audits will assess whether AI tool use aligns with data protection, confidentiality, vendor risk, and ethical guidelines.
- Any non-compliance may result in disciplinary action, up to and including termination of access and/or employment, depending on severity.
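As a purely illustrative aid to the monitoring point above, the sketch below shows one way a usage record could be captured as a single JSON line for later audit. The function name, fields and file location are assumptions, not an approved or mandated logging mechanism; note that only an anonymised prompt summary is recorded, never the raw prompt.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Assumed location for illustration; in practice this would sit in an
# approved, access-controlled system, not a local file.
LOG_FILE = Path("ai_usage_log.jsonl")


def log_ai_usage(user: str, tool: str, purpose: str, prompt_summary: str) -> None:
    """Append a single AI-usage record as one JSON line (hypothetical helper).

    prompt_summary should be a short, anonymised description of the prompt,
    never the raw prompt text, so the log itself holds no personal data.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "purpose": purpose,
        "prompt_summary": prompt_summary,
    }
    with LOG_FILE.open("a", encoding="utf-8") as handle:
        handle.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    log_ai_usage(
        user="j.smith",
        tool="ChatGPT",
        purpose="market research",
        prompt_summary="Sector overview: fintech leadership hiring trends",
    )
```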
- Roles & Responsibilities
- Leadership / Management: Set tone, approve vendor tools, ensure resourcing for training, audit and policy enforcement.
- Company Director (or designated data protection person): Oversee policy compliance, handle incident response, maintain vendor assessments and documentation.
- Users: Understand and abide by this policy, ensure prompts and data inputs comply, review AI outputs, report issues immediately.
- IT / Procurement: Assist in vendor risk assessments, contract review, ensure secure integrations and access controls for AI tools.
- Exceptions & Deviations
- Any proposed deviation from this policy (for example, a novel use-case requiring candidate data in an AI tool) must be submitted for formal approval, documenting purpose, data protection measures, vendor risk mitigation, duration and review.
- Such deviations must include a risk assessment and a defined sunset or termination point for the use-case.
- Review & Updates
- This policy should be reviewed at least annually, or sooner if required by changes in technology, regulation, business processes or vendor terms.
- Users will be notified of significant updates and required to acknowledge understanding.
Appendix: Summary “Do’s & Don’ts”
DO
- Use generative AI for formatting documents, drafting generic marketing content, summarising handwritten notes (provided no personal data is included).
- Use anonymised or aggregated data when prompting AI.
- Check AI output carefully, edit as needed, validate for accuracy, fairness and compliance.
- Keep confidential and candidate/client personal data only in the approved systems (CRM, ATS, secure files) under access controls.
DON’T
- Input candidate names, CVs, interview feedback, personal contact details, assessments into ChatGPT or other external tools unless explicitly approved and secure.
- Rely solely on AI output for decision-making about candidates, clients, mandates.
- Share outputs from generative AI as final without human review.
- Adopt a new AI tool for processing sensitive or personal data without proper vetting, contracts and risk assessment.
