April 6, 2023 By Jennifer Kirkwood 3 min read

Under New York City Local Law 144, enacted in December 2021, organizations that source, screen, interview, hire, or promote individuals in New York City must conduct yearly bias audits of their automated employment decision-making tools.

This new regulation applies to any “automated employment decision tool”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence, including both homegrown and third-party programs. Organizations must also publish information on their websites about how these tools govern their selection and interview processes. Specifically, organizations must demonstrate how their AI tools support fairness and transparency and mitigate bias. This requirement aims to increase transparency in organizations’ use of AI and automation in hiring and to help candidates understand how they are evaluated.

As a result of these new regulations, global organizations with operations in New York City may be pausing the rollout of new HR tools, as their CIO or CDO must soon audit any tool that affects hiring in New York.

To address compliance concerns, organizations worldwide should implement bias audit processes so they can continue to leverage the benefits of these technologies. Such an audit offers the chance to evaluate the entire candidate-to-employee lifecycle, covering all relevant personas, tools, data, and decision points. Even simple tools that recruiters use to review new candidates can be improved by incorporating bias mitigation into the AI lifecycle.
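A bias audit of this kind typically starts by comparing selection rates across demographic categories. Below is a minimal sketch of an impact-ratio check, the metric commonly used for this comparison; the group names and counts are hypothetical illustrations, not data from any real audit, and the four-fifths (0.8) benchmark is the EEOC’s traditional rule of thumb for flagging adverse impact:

```python
# Minimal sketch of an impact-ratio calculation for a bias audit.
# All group names and counts below are hypothetical examples.

def impact_ratios(selected, applicants):
    """Selection rate of each group divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 150}
selected = {"group_a": 80, "group_b": 45}

ratios = impact_ratios(selected, applicants)
# group_a selection rate: 80/200 = 0.40 (highest)
# group_b selection rate: 45/150 = 0.30
# group_b impact ratio: 0.30 / 0.40 = 0.75, which falls below the
# common four-fifths (0.8) benchmark and would be flagged for review
print(ratios)
```

A real audit by an independent auditor would compute these ratios for every category the regulation covers, but the core arithmetic is no more complicated than this.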

Download the AI governance e-book

AI regulations are here to stay

Other jurisdictions are taking steps to address potential discrimination in AI and employment technology automation. For example, California is working to remove facial analysis technology from the hiring process, and the State of Illinois has recently strengthened its facial recognition laws. Washington, D.C., and several states are also proposing algorithmic HR regulations. In addition, countries like Canada, China, Brazil, and Greece have implemented data privacy laws.

These regulations have arisen in part from guidelines issued by the US Equal Employment Opportunity Commission (EEOC) on AI and automation, as well as data retention laws in California. Organizations should begin auditing their HR and talent systems, processes, vendors, and third-party and homegrown applications to mitigate bias and promote fairness and transparency in hiring. This proactive approach helps reduce the risk of brand damage and demonstrates a commitment to ethical, unbiased hiring practices.

Bias can cost your organization

In today’s world, where human and workers’ rights are critical, mitigating bias and discrimination is paramount.

Executives understand that a brand-disrupting hit resulting from discrimination claims can have severe consequences, including the loss of their positions. HR departments and thought leaders emphasize that people want to feel a sense of diversity and belonging in their daily work; according to the 2022 Gallup poll on engagement, the top attraction and retention factor for employees and candidates is psychological safety and wellness.

Organizations must strive for a working environment that promotes diversity of thought, leading to success and competitive differentiation. Therefore, compliance with regulations is not only about avoiding fines but is also about demonstrating a commitment to fair and equitable hiring practices and creating a workplace that fosters belonging.

The time to audit is now – and AI governance can help

All organizations must ensure they use HR systems responsibly and take proactive steps to mitigate potential discrimination. This includes auditing HR systems and processes to identify and address areas where bias may exist.

While fines can be managed, the damage to a company’s brand reputation can be a challenge to repair and may impact its ability to attract and retain customers and employees.

CIOs, CDOs, Chief Risk Officers, and Chief Compliance Officers should take the lead in these efforts and ensure that their organizations comply with all relevant regulations and ethical standards. By doing so, they can build a culture of trust, diversity, and inclusion that benefits both their employees and the business as a whole.

A holistic approach to AI governance can help. Organizations that stay proactive and infuse governance into their AI initiatives from the onset can help minimize risk while strengthening their ability to address ethical principles and regulations.

Learn more about data strategy