AI for HR & Recruiters
Module 13: AI Ethics in Hiring
Module Overview
AI in hiring isn't just a technology question—it's an ethics question. As AI becomes more prevalent in recruitment, HR professionals must understand the risks, legal landscape, and ethical considerations. This module is crucial for using AI responsibly and fairly.
Learning Objectives:
By the end of this module, you will be able to:
- Understand how AI bias affects hiring
- Navigate the evolving legal landscape
- Implement ethical AI use practices
- Build fair, compliant hiring processes
- Communicate transparently about AI use
Estimated Time: 45-60 minutes
13.1 Understanding AI Bias
How Bias Enters AI Systems
Training Data Bias: AI learns from historical data. If your past hiring was biased, AI learns that bias.
Example: If an AI is trained on resume data from a company that historically hired mostly men for technical roles, it may learn to favor male candidates—even if gender isn't a direct input.
Proxy Variables: AI can find hidden correlations that serve as proxies for protected characteristics.
Example: Zip codes can correlate with race and income. College names can correlate with socioeconomic status. Hobbies can correlate with gender.
Feedback Loops: Biased decisions become training data for future decisions, amplifying the original bias.
Example: If an AI screens out certain candidates, they never get interviewed, so no data ever accumulates to show they might have succeeded.
Real-World Bias Examples
Amazon's Recruiting AI (2018): Amazon built a resume-screening AI that learned to penalize resumes containing the word "women's" (e.g., "women's chess club") and to downgrade graduates of all-women's colleges. It had been trained on 10 years of hiring data that reflected existing gender imbalances in tech.
Impact: The project was scrapped, but similar systems are in use today.
Facial Recognition Disparities: Studies show facial recognition AI has higher error rates for darker-skinned faces, particularly those of women. Video interview AI that analyzes facial expressions could exhibit similar bias.
Impact: Candidates could be unfairly evaluated based on flawed analysis.
13.2 Legal Landscape
Current Regulations
Title VII and EEOC: Discrimination laws apply whether the discriminator is human or algorithmic. If your AI creates adverse impact, you're liable.
EEOC Guidance (2023): The EEOC has clarified that employers are responsible for AI tool outcomes. "The employer is the one who made the selection decision" even when using vendor tools.
ADA Considerations: AI screening can inadvertently discriminate against people with disabilities (e.g., video interview AI penalizing the speech patterns of candidates with speech impairments).
State Laws:
- Illinois: Requires notice and consent for AI video interview analysis
- Maryland: Bans facial recognition in job interviews without consent
- New York City: Local Law 144 requires annual bias audits of automated employment decision tools
- California, New Jersey, DC: Have passed or are considering similar legislation
Compliance Checklist
Create a compliance checklist for AI in hiring.
Check for compliance with:
- Title VII and adverse impact analysis
- ADA considerations
- State-specific AI hiring laws
- EEOC guidance on AI
- Data privacy requirements (GDPR if applicable)
For each area, provide:
- What to check
- Documentation needed
- Red flags to watch for
- Remediation steps if issues found
- When to involve legal counsel
13.3 Adverse Impact Analysis
Understanding Adverse Impact
Definition: Adverse impact occurs when a selection procedure produces a substantially lower selection rate for a protected group than for the group with the highest rate.
The 4/5ths Rule: If the selection rate for a protected group is less than 80% (4/5ths) of the rate for the group with the highest selection rate, adverse impact may exist.
Example:
- White candidates: 60% move to interview
- Black candidates: 40% move to interview
- Ratio: 40/60 = 67%
- 67% < 80% = Potential adverse impact
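To make the arithmetic concrete, here is a minimal Python sketch of the 4/5ths calculation. The function name and figures are illustrative only, mirroring the example above; this is not a validated compliance tool.

```python
def impact_ratio(protected_rate: float, highest_rate: float) -> float:
    """Selection-rate ratio used in the 4/5ths (80%) rule."""
    return protected_rate / highest_rate

# Figures from the example above: 60% vs. 40% moving to interview.
ratio = impact_ratio(protected_rate=0.40, highest_rate=0.60)
print(f"Impact ratio: {ratio:.0%}")  # Impact ratio: 67%
if ratio < 0.80:
    print("Potential adverse impact: investigate before continuing to use the tool.")
```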
Monitoring for Adverse Impact
Create a framework for monitoring AI hiring tools for adverse impact.
Include:
- What data to collect
- How to calculate selection rates by protected group
- Threshold for concern (4/5ths rule)
- How often to analyze
- What to do if adverse impact is found
- Documentation requirements
Note: Collecting demographic data must be done carefully and separately from selection decisions.
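As a rough illustration of how this monitoring might be automated, the sketch below computes selection rates per group and flags any group that falls below the 4/5ths threshold. The group names and counts are hypothetical.

```python
def adverse_impact_report(counts: dict[str, tuple[int, int]]) -> dict[str, dict]:
    """counts maps group -> (applicants, selected). Flags any group whose
    selection rate is below 4/5ths of the highest group's rate."""
    rates = {g: sel / apps for g, (apps, sel) in counts.items() if apps > 0}
    highest = max(rates.values())
    return {g: {"rate": round(r, 3),
                "ratio_to_highest": round(r / highest, 3),
                "flag": r / highest < 0.80}
            for g, r in rates.items()}

# Hypothetical quarterly data: (applicants, moved to interview)
report = adverse_impact_report({
    "Group A": (200, 120),  # 60% selection rate
    "Group B": (150, 60),   # 40% -> ratio 0.667, flagged
})
for group, row in report.items():
    print(group, row)
```

Run on a regular cadence (the audit schedule in your ethics policy), a report like this also doubles as part of your documentation trail.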
When Adverse Impact Is Found
If analysis shows adverse impact:
- Investigate the cause
- Determine if the criterion is job-related
- Seek less discriminatory alternatives
- Document decisions and rationale
- Consider discontinuing use if you can't remediate
13.4 Vendor Due Diligence
Questions to Ask AI Vendors
Create a due diligence questionnaire for AI hiring tool vendors.
Cover:
1. Bias Testing
- What testing was done?
- By whom (internal vs. independent)?
- What groups were tested?
- What were the results?
- How often is testing repeated?
2. Data Practices
- What data does the tool collect?
- How is it used?
- Who has access?
- How long is it retained?
- Is it used to train the model?
3. Transparency
- Can you explain how decisions are made?
- What factors influence rankings?
- Can we audit the tool's decisions?
4. Compliance
- Which regulations was this tool designed to comply with?
- Do you have bias audit certificates?
- Will you share adverse impact data?
5. Support
- What happens if we find bias?
- What's your remediation process?
- Do you carry liability insurance for discrimination claims?
Red Flags in Vendor Responses
- Won't share bias testing results
- Can't explain how the algorithm works
- No independent audits
- Evasive about data use
- Won't commit to adverse impact monitoring
- No process for addressing discovered bias
13.5 Ethical Use Principles
Building an Ethical AI Framework
1. Human Oversight: AI should assist, not replace, human judgment in hiring decisions.
2. Transparency: Candidates should know when AI is used and how.
3. Fairness Testing: Regular audits for bias across protected groups.
4. Accountability: Clear ownership of AI decisions and outcomes.
5. Candidate Rights: Process for candidates to question or appeal AI decisions.
Creating Your AI Ethics Policy
Create an AI ethics policy for our hiring practices.
Include sections on:
1. Principles
- Human oversight requirements
- Fairness and non-discrimination
- Transparency and explainability
- Accountability
2. Approved Uses
- What AI can be used for
- What AI cannot be used for
- Approval process for new tools
3. Governance
- Who approves AI tools
- Who monitors for bias
- How often tools are audited
- How issues are escalated
4. Candidate Communication
- What we disclose to candidates
- How candidates can opt out (if applicable)
- Appeals process
5. Vendor Requirements
- Due diligence requirements
- Contractual protections
- Ongoing monitoring
13.6 Candidate Communication
Transparency Requirements and Best Practices
Required Disclosure (in some jurisdictions):
- That AI is being used
- What it's used for
- Consent before use (in some cases)
Best Practice Disclosure:
- How AI influences decisions
- What candidates can do if concerned
- That human review is part of the process
Disclosure Templates
Write candidate disclosure language about AI use in our hiring process.
AI used for:
- [Resume screening / Video analysis / Skills assessment / etc.]
Include:
- What AI tools we use
- What they evaluate
- How they influence decisions
- Human review in the process
- How candidates can ask questions
- Opt-out options if available
Tone: Transparent and reassuring, not alarming.
Comply with [State] requirements.
Handling Candidate Questions
Create FAQ responses for candidates asking about AI in hiring.
Common questions:
1. "Do you use AI in hiring?"
2. "How does AI affect my application?"
3. "Can I opt out of AI screening?"
4. "Is AI fair?"
5. "What if I think AI made an error?"
6. "Who reviews AI decisions?"
Provide honest, reassuring answers.
Don't be defensive or evasive.
13.7 Building Fair Processes
Designing for Fairness
1. Start with Job Analysis: Ensure the criteria being measured are job-related.
2. Validate Before Deploying: Test any AI tool for adverse impact before full use.
3. Keep Humans in the Loop: Never let AI make final decisions unreviewed.
4. Monitor Continuously: Ongoing analysis of outcomes by demographic group.
5. Document Everything: If challenged, you need to show your process.
Fairness Audit Template
Create a fairness audit checklist for our hiring AI.
Audit areas:
1. Criteria Validity
- Are measured factors job-related?
- Is there evidence of validity?
- Are there alternatives with less adverse impact?
2. Selection Rate Analysis
- By race/ethnicity
- By gender
- By age
- By disability status (if known)
3. Decision Review
- Are human reviewers checking AI recommendations?
- Are there patterns in overrides?
- Are override rates different by group?
4. Candidate Experience
- Are all candidates treated equally?
- Are accommodations available?
- Is the process accessible?
5. Documentation
- Is decision rationale recorded?
- Are adverse impact analyses current?
- Are vendor audits on file?
13.8 Practical Implementation
Starting or Auditing AI Use
Create a step-by-step guide for implementing or auditing AI in hiring.
Phase 1: Assessment
- Inventory current AI tools
- Identify what each tool does
- Gather vendor documentation
- Review compliance with laws
Phase 2: Due Diligence
- Request bias testing results
- Conduct independent testing if needed
- Establish monitoring metrics
- Set up data collection
Phase 3: Policy Development
- Create AI ethics policy
- Define approval processes
- Establish oversight roles
- Create audit schedule
Phase 4: Communication
- Develop candidate disclosures
- Train hiring team
- Create FAQ resources
- Prepare for questions
Phase 5: Ongoing Monitoring
- Regular adverse impact analysis
- Vendor relationship management
- Policy updates as laws change
- Continuous improvement
When to Stop Using an AI Tool
Consider discontinuing if:
- Bias testing shows adverse impact you can't fix
- Vendor won't address identified issues
- You can't explain decisions to candidates
- Legal landscape changes make it non-compliant
- Simpler alternatives achieve the same goal
Module 13 Summary
Key Takeaways:
- AI bias is real: Historical data, proxy variables, and feedback loops can all introduce bias.
- You're responsible: Using a vendor's tool doesn't transfer liability.
- Laws are evolving: Stay current on federal, state, and local AI regulations.
- Test continuously: Regular adverse impact analysis is essential.
- Transparency builds trust: Tell candidates when and how AI is used.
- Humans must decide: AI assists, but people make hiring decisions.
Preparing for Module 14
In the final module, we'll bring everything together by building HR workflows with AI. You'll learn to:
- Design end-to-end AI-enhanced HR processes
- Integrate AI tools into daily work
- Scale your AI practices
- Plan for the future of AI in HR
Before Module 14:
- Review your current HR workflows
- Identify where you've applied AI so far
- Consider what processes could benefit most from AI
"Using AI in hiring isn't wrong—using it without oversight, testing, and accountability is. The ethics aren't optional."
Ready to continue? Proceed to Module 14: Building HR Workflows with AI.

