Artificial intelligence (AI) has quickly become a fixture in the hiring process across many industries. Employers are increasingly adopting algorithmic tools and automated decision-making systems to screen resumes, assess video interviews, and even predict candidate performance or culture fit. While these innovations offer potential efficiency gains and data-driven insights, they also introduce serious legal risks—particularly in the context of Ontario’s Human Rights Code.
The Ontario Human Rights Code (the “Code”) prohibits discrimination in employment based on specific protected grounds. When AI systems are deployed in hiring, they can inadvertently perpetuate or amplify biases embedded in their training data or algorithms, raising the risk of systemic discrimination. Employers using or considering AI must ensure that these tools comply with legal standards, or they may face human rights complaints and reputational damage.
This blog explores how AI is used in hiring, the potential risks under the Code, and best practices for employers in Ontario to navigate this emerging legal landscape.
The Rise of AI in Recruitment
AI tools in hiring take various forms. Some systems use natural language processing to assess resumes or cover letters, identifying keywords that match job requirements. Others analyze video interviews, using facial recognition or voice pattern analysis to rate a candidate’s emotional expression, confidence, or honesty. Some platforms claim to predict workplace performance or the likelihood of success based on complex data sets.
Large employers and human resources platforms are already integrating these tools into early-stage recruitment workflows, particularly to handle high volumes of applicants. For example, AI might be used to automatically reject resumes that don’t meet certain criteria or rank candidates based on historical data.
While these systems promise efficiency, their algorithms are often opaque and trained on patterns drawn from past “successful” hires, which means they can quietly encode whatever biases shaped those earlier decisions.
Legal Framework: Ontario’s Human Rights Code
Under Ontario’s Human Rights Code, employers have a legal obligation to provide equal employment treatment without discrimination based on protected grounds. In employment, these grounds include race, ancestry, colour, ethnic origin, place of origin, citizenship, creed (which includes religion), sex, sexual orientation, gender identity, gender expression, age, record of offences, marital status, family status, and disability.
This obligation applies at all stages of employment, including recruitment, screening, interviewing, and selection. If a candidate is mistreated because of a protected characteristic—intentionally or not—the employer may be liable for discrimination.
Importantly, intent is not required to establish a breach of the Code. Discriminatory outcomes resulting from neutral policies or practices—including AI—can constitute “adverse effect discrimination” if they disproportionately impact a protected group.
How AI Can Perpetuate Discrimination
AI is not inherently discriminatory. However, it is only as objective as the data it is trained on and the assumptions behind its design. Many AI systems rely on machine learning, where algorithms are trained using historical hiring data. If past hiring decisions reflected biases, such as favouring male candidates over female ones or prioritizing applicants with Anglo-sounding names, the AI system can learn and replicate those biases.
For instance, Amazon famously abandoned an AI recruitment tool after discovering that it systematically downgraded resumes containing the word “women’s,” such as “women’s chess club captain,” because it had been trained on ten years of resumes submitted to the company, most of which came from men.
AI systems that assess facial expressions or vocal tone may also disadvantage people with disabilities, neurodivergent traits, or language accents. Video analysis algorithms have also been shown to struggle with accurately identifying facial expressions in individuals with darker skin tones, which may lead to skewed assessments of Black or Indigenous candidates.
Even if the algorithm does not explicitly consider protected characteristics, its design or data inputs may produce outcomes that correlate with those characteristics. For example, a resume screening tool that favours postal codes associated with high-income neighbourhoods may indirectly disadvantage racialized or immigrant applicants who live in lower-income areas.
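The proxy-variable problem can be made concrete with a small sketch. Everything here is invented for illustration (the postal-code prefixes, the group labels, and the “preferred prefix” screening rule are hypothetical), but it shows how a facially neutral filter can still produce sharply different pass rates across groups when the filtered feature correlates with group membership:

```python
from collections import Counter

# Hypothetical applicant records: (postal_code_prefix, group) pairs.
# The prefixes, group labels, and screening rule are invented for illustration.
applicants = [
    ("M5R", "A"), ("M5R", "A"), ("M5R", "B"),
    ("M9V", "B"), ("M9V", "B"), ("M9V", "A"),
    ("M5R", "A"), ("M9V", "B"), ("M9V", "B"), ("M5R", "A"),
]

# A facially neutral rule: screen in only applicants from "preferred" prefixes.
preferred = {"M5R"}

passed = Counter()
total = Counter()
for prefix, group in applicants:
    total[group] += 1
    if prefix in preferred:
        passed[group] += 1

for group in sorted(total):
    rate = passed[group] / total[group]
    print(f"Group {group}: pass rate {rate:.0%}")
# Group A passes at 80%, Group B at 20%, even though the rule never
# mentions group membership at all.
```

The rule itself never looks at a protected characteristic; the disparity emerges entirely from the correlation between postal code and group, which is exactly the pattern described above.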
Employer Liability Under the Code
In Ontario, employers can be held legally responsible for discriminatory hiring practices resulting from AI use, even if a third-party vendor developed the algorithm. Under the Code, an employer cannot delegate or outsource human rights obligations. If an AI tool results in discrimination, the employer, not just the tech provider, can be named in a human rights complaint.
The Human Rights Tribunal of Ontario (HRTO) considers whether the employer took reasonable steps to prevent discriminatory outcomes. Ignorance of how an algorithm works, or blind reliance on a vendor’s claims of fairness, is unlikely to provide a successful defence.
While the HRTO has not yet issued a significant decision directly addressing AI-driven hiring discrimination, it has been clear in previous rulings that systemic practices leading to disparate outcomes can violate the Code. As AI systems become more prevalent, it is only a matter of time before these issues are tested in formal complaints.
Data Privacy and Transparency Concerns
The risks posed by AI in hiring extend beyond discrimination. There are also significant concerns related to transparency, informed consent, and data privacy—issues that intersect with Ontario’s privacy laws and emerging federal legislation such as the proposed Artificial Intelligence and Data Act (AIDA).
Job applicants are rarely informed about how their personal data is being analyzed or the criteria the algorithm uses to evaluate them. Unlike human interviewers, AI systems do not explain their rationale, making it difficult for candidates to challenge or appeal unfair assessments.
This lack of transparency makes it harder for individuals to know whether they’ve been discriminated against, which may prevent them from asserting their rights under the Code. It also undermines accountability since the decision-making process may be proprietary or protected by trade secrecy claims.
Mitigating Legal Risk: What Employers Should Do
Ontario employers using or considering AI-based hiring tools must take proactive steps to ensure they are not exposing themselves to legal liability. The following measures are essential:
First, employers should thoroughly vet any AI tools before deployment. This includes demanding transparency from vendors about how the algorithm was developed, what data was used to train it, and what steps have been taken to mitigate bias. Independent audits, bias testing, and certifications may be useful, but due diligence should not stop there.
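One common starting point for the bias testing mentioned above is an adverse-impact (selection-rate) comparison. The sketch below applies the “four-fifths” threshold, which is a U.S. EEOC screening guideline rather than an Ontario legal standard, and all of the group names and counts are hypothetical. It is a first-pass audit heuristic, not a substitute for a full review:

```python
def adverse_impact_ratio(selected: dict, applied: dict) -> dict:
    """Selection rate for each group divided by the highest group's rate.

    A ratio below 0.8 is the U.S. EEOC's 'four-fifths' screening flag.
    It is not an Ontario legal standard, only a common audit heuristic.
    """
    rates = {g: selected[g] / applied[g] for g in applied}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical audit numbers for a resume-screening tool.
applied = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 30}

ratios = adverse_impact_ratio(selected, applied)
for group, ratio in sorted(ratios.items()):
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
# group_a: impact ratio 1.00 (ok)
# group_b: impact ratio 0.50 (review)
```

A flagged ratio does not by itself establish discrimination under the Code, but documenting this kind of periodic check is one way an employer can show it took reasonable steps to detect and mitigate bias.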
Second, human oversight must remain a central part of the hiring process. AI should not be used to decide who is hired or rejected. Instead, it should support, not replace, human judgment. Employers must ensure that trained hiring personnel review AI-generated rankings or assessments and have the authority to override them.
Third, employers should document their policies and practices regarding the use of AI, including how they assess and mitigate bias. This includes maintaining records of how AI tools were selected and tested, and how hiring decisions were ultimately made. A paper trail can demonstrate that the employer took reasonable steps to comply with its obligations under the Code.
Fourth, employers should inform applicants when AI is used in hiring. While not currently required by law, this transparency supports fairness and can help prevent legal disputes. Informed applicants are better positioned to raise concerns, request accommodation, or seek clarification if they feel disadvantaged by the process.
Finally, employers should seek legal advice when incorporating AI into hiring practices. Ontario’s human rights laws are complex, and the legal landscape surrounding algorithmic decision-making is still evolving. Legal counsel can help employers develop compliant policies, review contracts with vendors, and respond effectively if a complaint is filed.
Looking Ahead: The Future of AI and Employment Law
As AI continues to reshape the workforce, governments worldwide are beginning to regulate its use in employment. In Canada, the federal government has proposed the Artificial Intelligence and Data Act (AIDA), which—if passed—would impose new obligations on those who develop or deploy “high-impact” AI systems. While AIDA has not yet been enacted, it signals a growing recognition that AI must be subject to meaningful oversight.
Ontario employers should also look for guidance from the Ontario Human Rights Commission, which has previously emphasized the importance of preventing systemic discrimination in technology. Future policy statements or tribunal decisions will likely provide more direction on how AI can be used responsibly in hiring.
Until then, the safest approach is to treat AI as a tool—not a decision-maker—and apply the same human rights principles that govern traditional hiring processes. Equity, transparency, and accountability must remain at the core of recruitment practices, regardless of whether people or machines make decisions.
Navigating Legal Challenges and Compliance in Ontario
The use of artificial intelligence in hiring presents both opportunities and serious legal challenges. In Ontario, employers must ensure that any AI systems used in recruitment comply with the Human Rights Code, particularly regarding discrimination on protected grounds. Failure to do so can lead to legal liability, reputational harm, and lost trust with candidates and employees.
Employers should approach AI adoption cautiously, seek legal advice, and implement safeguards to ensure fairness and compliance. Ultimately, responsible use of AI in hiring requires not only technological sophistication but also a deep commitment to human rights and equality in the workplace.
Experienced Toronto Employment Lawyer Ensuring Your AI Recruitment is Human Rights Compliant
As the legal landscape for AI in recruitment evolves, proactive measures are essential to protect your organization from liability and reputational damage. Our trusted team at Haynes Law Firm is experienced in helping employers navigate these complex regulations and implement robust compliance strategies. Whether you need to audit your current systems, develop new policies, or respond to a complaint, our firm provides trusted legal advice. To schedule a confidential consultation and learn how we can help you mitigate risks, please contact us online or by phone at 416-593-2731.