As employment-related artificial intelligence (“AI”) tools proliferate, multinational employers feel increasing pressure to deploy AI across their global offices. These tools can provide great value and efficiency across the employment life cycle. For example, recruiters request AI screening services that can save days of work by quickly reviewing thousands of applications. Business partners want performance-monitoring tools that can save time on supervision. Employees may insist on generative AI tools to help them create reports, presentations, and trainings. The HR Department might request a chatbot that can answer common workforce questions. The list of requests may go on and on.
At the same time, the compliance challenge of rolling out AI tools in global offices may seem overwhelming. AI tools implicate a myriad of legal concerns in the workplace—including labor, discrimination, intellectual property, and data protection. Not only does every country have its own laws in these areas, but legislatures are rapidly developing laws specific to AI. The good news for multinational employers is that the laws with the most comprehensive restrictions on AI—data protection laws—follow a similar framework all over the world. As a result, multinational employers can follow a relatively straightforward checklist to work through the major data protection issues. This checklist may assist employers in creating a global data protection framework for their AI programs, which they can then vary as needed by jurisdiction.
Overview of the Legal Framework Regulating AI Tools
Most countries have data protection laws that comprehensively regulate the use of personal data, and these laws apply to the use of personal data by AI tools and the use of automated decision-making tools. One of the most prominent of these laws is the General Data Protection Regulation in the European Union (“GDPR”),1 but virtually every major trading partner of the United States has a data protection law. These include our neighbors, Canada and Mexico; global powerhouses, such as Brazil, China, India, Japan, the UK, and the members of the European Economic Area; and many smaller economies. Increasingly, data protection laws include provisions that govern AI or automated decision-making. Moreover, new laws specific to AI build on the framework of the existing data protection laws.
While the United States lacks a comprehensive data protection law at the federal level, a complex patchwork of privacy and data security laws throughout the United States collectively adds up to protections similar to a comprehensive data protection law. Additionally, more than one quarter of all states have adopted comprehensive data protection laws that layer on top of the existing laws. For now, at least, the California Consumer Privacy Act, as amended by the California Privacy Rights Act (collectively, the “CCPA”), is the only such law that applies to the data of applicants, employees, independent contractors, and others in a human resources relationship with a company (“HR data”).2 In addition, a growing body of legislation specifically regulates AI.
Despite this complexity, virtually all data protection laws follow the same basic framework. This provides employers with a checklist of issues to consider. Although there are other elements of these laws, key items for employers to evaluate are the following:
- Legal basis for processing
- Notice
- Proportionality and data protection impact assessments
- Cross-border data transfer restrictions
- Data-related rights
- Data minimization and retention
- Accuracy
- Data security
- Contracting
This article will explore each of these items in more depth and discuss some of the variations between countries.
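For employers that want to operationalize this checklist, the items lend themselves to simple per-jurisdiction tracking. The following sketch is purely illustrative: the item keys mirror the list above, and the `Status` values and `JurisdictionAssessment` class are hypothetical conventions of our own, not terms drawn from any statute.

```python
from dataclasses import dataclass, field
from enum import Enum

# The nine checklist items from this article, used as tracking keys.
CHECKLIST = (
    "legal_basis", "notice", "dpia", "cross_border_transfers",
    "data_rights", "minimization_and_retention", "accuracy",
    "security", "contracting",
)

class Status(Enum):
    NOT_ASSESSED = "not assessed"
    COMPLIANT = "compliant"
    GAP = "gap identified"

@dataclass
class JurisdictionAssessment:
    """Tracks the checklist for one AI tool in one jurisdiction."""
    jurisdiction: str
    items: dict = field(
        default_factory=lambda: {k: Status.NOT_ASSESSED for k in CHECKLIST}
    )

    def open_items(self) -> list:
        # Items not yet confirmed compliant in this jurisdiction.
        return [k for k, v in self.items.items() if v is not Status.COMPLIANT]

# Example: start from a global baseline, then vary by jurisdiction.
germany = JurisdictionAssessment("Germany")
germany.items["legal_basis"] = Status.GAP  # e.g., consent disfavored in the EU
print(len(germany.open_items()))  # 9 -- everything still open
```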
Legal Basis for Processing Data
Before considering how to implement a particular AI tool, employers should first consider whether there is a legal basis for collecting and processing the information. Most data protection laws around the world permit the collection and processing of personal data only on limited grounds, such as the individual’s consent or as required by law. In those countries, an employer may only process data if there is a legal basis for doing so. In other words, the default is the reverse of the United States, where an employer can use personal data for any purpose as long as it is not explicitly prohibited; in most countries, an employer cannot use personal data unless an explicitly permitted legal basis applies. Typical legal bases for processing personal data in the employment context are consent, performance of a contract, legitimate interest of the company, and compliance with a legal requirement.3
Many countries rely heavily on an individual’s consent as the legal basis. At one extreme, in South Korea, employers must obtain separate consents to use personal data, to use certain categories of sensitive personal data, to disclose personal data to a third party, and to transfer personal data to another country.4 At the other extreme, a growing number of countries permit data processing based on the looser concept of the employer’s legitimate interests. Some countries, like European Union (EU) Member States, strongly disfavor reliance on consent as a lawful basis for processing in the employment context.
Publicly Available Data
The issue of legal basis poses a particular hurdle when publicly available personal data is scraped from the internet. For example, a popular subset of AI tools gathers information about people on the internet – LinkedIn profiles, professional bio web pages, and the like – to identify potential candidates for recruiters or headhunters to contact for hard-to-fill jobs. In principle, a tool like this might collect information about thousands of individuals.
In countries that generally rely on consent as the legal basis for processing, gathering information about large numbers of candidates from publicly available information online might be impractical due to the difficulty of obtaining consent from each individual. Other countries, such as Brazil, the UK, and members of the EEA, permit data processing based on the company’s "legitimate interests" to the extent that the individual’s rights and freedoms do not outweigh the company’s legitimate interests.5
Whether a company can rely on its “legitimate interests” as a basis may depend heavily on context. For public profiles in professional social networks, the balancing of interests generally should favor the employer, particularly where the profile demonstrates the individual’s interest in finding a job. However, where the individual clearly did not post the profile for potential employers, where the terms of use prohibit scraping, or where other factors indicate that the individual did not intend the profile to be used to evaluate them for employment, the balance of interests most likely weighs against the employer. Consequently, before implementing AI tools that rely heavily on information scraped from the Internet, global employers should conduct sufficient due diligence to assess the risk that the data was not collected in compliance with applicable data protection laws.
Privately Available Data
The question of legal basis also raises challenges when collecting personal data in other contexts. For example, an employer may have a legitimate interest in collecting personal data about employees’ activity on the company’s computer system so that an AI tool can analyze it for productivity. However, as noted above, legitimate interest typically must be weighed against the privacy rights of the individual. This includes considering not only the information that will be collected, but the method of collecting the data. For example, a German court held that an employer’s legitimate interest in monitoring employees through the use of keylogger software was outweighed by the employees’ privacy rights. The court found that the use of keylogger software, which records all keystrokes on a work computer to covertly monitor and control employees, requires a suspicion of a criminal offense or other serious breach of duty based on concrete facts.6
In principle, the employee’s consent could be an alternative to legitimate interest as the legal basis for collecting surveillance data about employees. Most data protection laws permit the processing of personal data based on consent. However, in the EEA, Brazil, and the UK, at least, consent generally will not be a feasible alternative in the employment context. In those countries, authorities take the position that employees, and in some cases even applicants, generally cannot “freely give” consent for data processing due to the imbalance of power between employees and employers.
Data Collection Restrictions in the United States and Elsewhere
As noted above, U.S. data protection law by and large has not adopted the concept of a legal basis for processing personal data. Instead, the common law in many states protects personal information in which individuals have a reasonable expectation of privacy. Collecting information that invades this reasonable expectation, such as surreptitious video recording in private offices, may amount to a tort under the common law. In addition, various U.S. laws require consent to collect sensitive information such as audio recordings, biometric data, and location data. Other U.S. laws prohibit the collection of certain information about applicants and employees except in very limited situations, for example, health and genetic data.
Similarly, other countries have laws separate from their national data protection laws that limit the collection of information. For example, many states in Australia have surveillance laws specific to the workplace that limit the types of information that may be collected.7
Key Factors to Consider in Data Collection
As this brief discussion illustrates, the question of whether an employer has a legal basis to collect personal data and process it with AI generally will depend on four factors:
- The type of personal data and the employee’s privacy interests in that data;
- Where the information is collected and processed, including whether the data is publicly available;
- The context of collection and processing, including the invasiveness of the technology; and
- The purposes of use.
Considering whether the employer has a legal basis for collection and use of personal data should be the first step in assessing how to use an AI tool because the analysis may show that the employer simply cannot collect and/or process the necessary personal data in some jurisdictions.
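To make the four factors concrete, the following sketch shows how they might be captured in a structured intake and triaged against the consent-disfavored jurisdictions discussed above. All names are hypothetical, and the triage logic is a deliberately rough illustration of the analysis, not a substitute for local legal advice.

```python
from dataclasses import dataclass

@dataclass
class ProcessingProposal:
    """Captures the four factors above for one proposed use of an AI tool."""
    data_type: str            # factor 1: type of data and privacy interest in it
    is_sensitive: bool
    publicly_available: bool  # factor 2: where/how the data is collected
    invasive_method: bool     # factor 3: context, e.g., covert keylogging
    purpose: str              # factor 4: purpose of use

def candidate_legal_bases(p: ProcessingProposal, jurisdiction: str) -> list[str]:
    """Rough triage only; real analysis requires local counsel.

    Assumption baked in: in the EEA, UK, and Brazil, consent generally is
    not "freely given" in employment, so legitimate interest is usually the
    candidate basis there; elsewhere, consent may be workable.
    """
    bases = []
    if jurisdiction not in {"EEA", "UK", "Brazil"}:
        bases.append("consent")
    if not p.invasive_method and not p.is_sensitive:
        bases.append("legitimate interest (subject to a balancing test)")
    return bases

sourcing = ProcessingProposal(
    data_type="public professional profile", is_sensitive=False,
    publicly_available=True, invasive_method=False, purpose="candidate sourcing",
)
print(candidate_legal_bases(sourcing, "EEA"))  # legitimate interest only
```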
Notice
After a company has determined that it can legally collect and use personal information, it must consider how it will provide notice to individuals about the practices. Virtually all data protection laws require a company to provide a notice regarding how it processes personal data. Most statutes require that the notices include what data is processed, the purposes of using the personal data, the parties to which the data is disclosed, and information about how to exercise data rights. Other required disclosures may include the legal basis for using the personal data, details about data transfers to other countries, and contact information for the company’s data protection officer.
Notice in Existing Data Protection Laws
Increasingly, data protection laws require additional disclosures in certain circumstances about how artificial intelligence will process personal data. Since it became effective in mid-2018, the GDPR has required notice if the data controller makes decisions solely based on automated processing that “produces legal effects” or “similarly significantly affects” the individual. This likely includes most employment decisions based solely on automated processing.8 In that case, the notice must describe the logic involved, as well as the significance and the envisaged consequences of such processing for the individual.
Additional Notice Requirements in New AI Laws
The EU’s upcoming AI Act supplements the GDPR’s notice requirement for AI tools that operate as biometric categorization systems and emotion recognition systems.9 Although the meaning of this latter term is not yet entirely clear, it potentially could encompass some of the new AI tools that attempt to gauge the emotional state of employees, for example, by analyzing a call center employee’s behavior during calls.
U.S. regulatory agencies and new U.S. laws have progressed in a similar vein. For example, the Equal Employment Opportunity Commission (“EEOC”) has recommended that employers using AI tools provide notice at least about which traits or characteristics the tool is designed to measure, the methods by which those traits or characteristics are to be measured, and the disabilities, if any, that potentially might lower the assessment results or cause the individual to be screened out.10
New York City’s groundbreaking law on using artificial intelligence to assess employees and candidates requires that companies subject to the law provide notices explaining that the company will use automated employment decision tools in connection with the assessment or evaluation of such employee or candidate, as well as the job qualifications and characteristics used in the assessment.11
The EU and the United States are at the forefront in passing laws specific to AI, but many other countries have proposed such legislation. Brazil’s existing LGPD already requires that notice to individuals include the data subject’s right to request the review of fully automated decisions.12 Brazil’s proposed AI Law goes further, requiring that notice to individuals include a statement that AI will be used and an explanation of the individuals’ rights.13 These rights would include the right to an explanation of the AI’s decision, the right to contest the decision, the right to information about the participation of a human in the decision, and the right to request correction of discriminatory biases.
Given these trends, we expect that many jurisdictions will pass laws requiring notice to individuals stating at least that the employer uses an artificial intelligence tool and, most likely, the purposes of use, the traits assessed by the tool, and some explanation of the methods used by the tool. Global employers, therefore, will need to check for new AI-specific notice requirements that supplement the general requirements of applicable data protection laws before implementing AI tools for purposes of human resources administration.
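One way to operationalize these layered requirements is to treat the notice as a baseline set of disclosures plus AI-specific additions per regime. The sketch below uses hypothetical labels of our own for the regimes and disclosure items; the GDPR and NYC entries paraphrase the requirements described above rather than quoting statutory text.

```python
# Baseline disclosures that most data protection laws require (see above).
BASELINE_DISCLOSURES = [
    "categories of personal data processed",
    "purposes of processing",
    "parties to which the data is disclosed",
    "how to exercise data rights",
]

# AI-specific additions layered on by the regimes discussed above.
AI_DISCLOSURES_BY_REGIME = {
    "GDPR (solely automated, significant effect)": [
        "existence of automated decision-making",
        "the logic involved",
        "significance and envisaged consequences for the individual",
    ],
    "NYC automated employment decision tool law": [
        "that an automated employment decision tool will be used",
        "job qualifications and characteristics used in the assessment",
    ],
}

def build_notice(regimes: list[str]) -> list[str]:
    """Combine the baseline with AI-specific items for the given regimes."""
    items = list(BASELINE_DISCLOSURES)
    for regime in regimes:
        items.extend(AI_DISCLOSURES_BY_REGIME.get(regime, []))
    return items

for item in build_notice(["GDPR (solely automated, significant effect)"]):
    print("-", item)
```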
Proportionality and Data Protection Impact Assessments
Data protection laws generally require that data controllers process personal data in a proportionate manner by balancing the controller’s interests against risks to the individual. To reduce these risks, most data protection laws require a formalized data protection impact assessment (“DPIA”) for high-risk forms of data processing.
Due to the potential impact on an individual’s livelihood, using AI to assess applicants or employees most likely requires a DPIA under data protection laws. However, existing data protection laws can be ambiguous on this point because whether a DPIA is legally required depends on a multi-factor analysis. In apparent response to this ambiguity, a growing number of AI laws and proposed bills would explicitly mandate DPIAs for AI used in the HR context.
In particular, the EU’s upcoming AI Act builds on the GDPR by requiring an employer to conduct the GDPR’s DPIA when using an AI system to process personal data to recruit or select employees, to make decisions about hiring, termination, and task allocation, or to monitor and evaluate workers.14 Brazil’s proposed AI Law would take a similar approach. Like the GDPR, Brazil’s general data protection law, the LGPD, requires DPIAs in certain high-risk scenarios. Brazil’s proposed AI Law classifies the use of AI for selection of candidates and employment as high-risk. Accordingly, the proposed law requires a type of DPIA, as well as detailed documentation regarding the functioning of the system and the decisions involved in its construction, implementation, and use.15 In the United States, an early draft of regulations under the California Consumer Privacy Act would require a risk assessment covering almost 50 factors if the employer uses “Automated Decision-Making Technology” to make decisions about applicants, employees, or independent contractors.16
China has taken a particularly protective approach. China’s Personal Information Protection Law (the “PIPL”) requires that a personal information handler conduct a personal information protection impact assessment prior to using personal information to conduct automated decision-making and record the handling activities. The PIPL defines “automated decision-making” as the activity of using computer programs to automatically analyze or assess personal behaviors, habits, interests, or hobbies, or financial, health, credit, or other status, and making decisions thereupon. The requirement for such an assessment also applies under other circumstances specified in the PIPL, such as handling sensitive personal information or transferring personal information abroad. The assessment must address items including: whether the purposes and methods of the handling are lawful, legitimate, and necessary; the impact on data subjects’ rights and interests and the attendant security risks; and whether the protective measures undertaken are lawful, effective, and appropriate based on the degree of risk.17
Accordingly, before implementing a new AI tool in the workplace, multinational employers should consider whether they must conduct a DPIA. Increasingly, the answer will be “yes,” and the employer should ensure that the DPIA considers the factors prescribed in all relevant jurisdictions. Even when a DPIA is not legally required, global employers still should consider some type of evaluation of the risks of implementing the AI tool in the HR context. Identifying risks in advance of implementation will allow the company to consider how to mitigate those risks and whether the mitigation measures reduce the risks to a level that is tolerable for the particular employer.
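As a planning aid, the triggering analysis can be reduced to rough triage code. The sketch below loosely reflects the regimes discussed in this section; the jurisdiction and use-case labels are hypothetical simplifications of our own, and a real determination requires the full multi-factor analysis under each applicable law.

```python
# Use-case labels for the HR deployments flagged as high risk by the regimes
# discussed above (EU AI Act, Brazil's proposed AI Law, China's PIPL rules on
# automated decision-making). Labels are our own shorthand.
HR_HIGH_RISK_USES = {
    "recruitment", "selection", "hiring", "termination",
    "task allocation", "monitoring", "evaluation",
}

def dpia_likely_required(use_case: str, jurisdiction: str) -> bool:
    """Rough triage of whether a formal impact assessment is likely triggered.

    Illustrative only: even where this returns False, a voluntary risk
    evaluation is still prudent, as discussed above.
    """
    if jurisdiction in {"EEA", "UK", "Brazil", "China"}:
        return use_case in HR_HIGH_RISK_USES
    return False  # check local law; increasingly, the answer is "yes"

assert dpia_likely_required("recruitment", "EEA")
assert dpia_likely_required("monitoring", "China")
```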
Cross-Border Data Transfer Restrictions
Most data protection laws prohibit the transfer of personal data to another country except in limited circumstances. This poses a hurdle to multinational employers because, if they centralize their HR processes in one office, then handling HR data involves transferring personal data across national borders. For example, if an employer uses an AI tool that operates on a service provider’s server located in the United States to evaluate applications from applicants in another country, the applicants’ personal data most likely would cross international borders in the process of the AI review.
Consent Required
Many jurisdictions require consent from the individual for cross-border data transfers. Depending on the context, obtaining consent can be impractical, but, increasingly, countries permit cross-border data transfers without consent if the sending and receiving legal entities sign data transfer agreements that apply protections equivalent to the origin countries’ data protection laws.
EU, UK, Brazil, and China: Other Mechanisms
Conversely, in China, the EU, the UK, and likely Brazil, employers effectively cannot rely on consent for data transfers, at least in the employment context. The EU offers several options for data transfers, including model data transfer contracts, binding corporate rules, and, with respect to transfers to the United States, the EU-U.S. Data Privacy Framework. Brazil’s National Authority for Data Protection is developing standard contractual clause modules, which will be required for cross-border transfers because no other country has yet been recognized as providing adequate data protection and consent can be impractical or risky. China’s current data protection framework effectively requires the personal data exporter to seek government approval, through burdensome filings, before transferring personal data outside of China.18
The United States is quite unusual in that it does not impose restrictions on transferring personal data to other countries. There are export controls on certain types of technology, but not on personal data per se.
In sum, multinationals that use centralized AI tools or otherwise transfer personal data across national borders may need a compliance strategy involving consent in many countries, as well as intra-company data transfer agreements compliant with multiple laws and filings with some local governments. If the multinational employer has already implemented a cross-border data transfer strategy for other types of HR-related activity, the cross-border transfer of personal data for use in a centralized AI tool could be “wrapped into” this broader data transfer solution.
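For planning purposes, the transfer analysis can be summarized as a mapping from the origin country to candidate mechanisms. The entries below paraphrase the regimes discussed above (with the UK entry generalized, since this section does not detail the UK’s specific instruments); actual availability turns on current adequacy decisions, filings, and local guidance.

```python
# Candidate transfer mechanisms by origin country, per the discussion above.
# Mechanism names are shorthand, not official instrument titles.
TRANSFER_MECHANISMS = {
    "EEA": ["standard contractual clauses", "binding corporate rules",
            "EU-U.S. Data Privacy Framework (for U.S. recipients)"],
    "UK": ["UK transfer mechanisms analogous to the EU's"],
    "Brazil": ["forthcoming ANPD standard contractual clauses"],
    "China": ["government filing/approval process"],
    "South Korea": ["separate consent to the cross-border transfer"],
    "United States": ["no general restriction on outbound personal data"],
}

def candidate_mechanisms(origin_country: str) -> list[str]:
    """Return possible mechanisms for HR data leaving `origin_country`."""
    return TRANSFER_MECHANISMS.get(
        origin_country,
        ["check local law; consent plus a transfer agreement is common"],
    )

print(candidate_mechanisms("Brazil"))
```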
Data-Related Rights
Data protection laws invariably provide some form of data rights. These include rights applicable to all data processing, as well as rights specific to automated decision-making and other forms of AI. Global employers need to be aware of these rights, so they can be prepared for rights requests in relation to their use of AI tools.
General Data Rights Relevant to AI
The traditional data rights particularly relevant to AI are the right to know how data is processed, the right to object, the right to delete, and the right to revoke consent, where the company processes personal data on the basis of consent. For example, in the EEA, employees generally have a right to demand that the employer stop processing their personal data for purposes of monitoring their performance or for surveillance, unless the employer demonstrates compelling legitimate grounds for the processing.19
Automated Decision-making Rights in Existing Data Protection Laws
In addition, many data protection laws have, for years, included rights specific to automated decision-making. These typically include rights to object to automated decision-making, to request human intervention, and to receive information about how the automated decision-making works. For example, the GDPR provides, in effect, a right not to be subject to an employment decision based solely on automated processing.20 The Philippines data protection law grants a right to request information about automated processes that form the sole basis for any decision “significantly affecting” the individual.21 In close parallel, when Chinese individuals consider that automated decision-making has a significant impact on their interests, they are entitled to request that the data handler explain the decision and to refuse decisions made solely through automated means.22
Rights in New AI-Specific Laws
New AI-specific laws have followed these approaches. In the United States, NYC’s AI Ordinance provides employment candidates and employees the right to obtain information on the data collected, the data source, and the data retention policy. The ordinance also provides candidates with the right to request an alternative selection process or accommodation.23 Outside the United States, South Korea has amended its data protection law to give individuals a right to reject entirely automated decisions that materially affect the rights of the individual.24 This likely includes many entirely automated decisions about employment. Further, South Koreans will have a right to obtain an explanation about a decision made through automated means. Many jurisdictions have proposed similar legislation.
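Operationally, many employers route AI-specific rights requests to a dedicated workflow, separate from ordinary access and deletion requests. The sketch below illustrates that routing; the request labels, team names, and the assumption of two separate workflows are hypothetical conventions of our own.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RightsRequest:
    """One incoming request; `kind` values are illustrative labels only."""
    requester: str
    jurisdiction: str
    kind: str      # e.g., "access", "delete", "object",
                   # "explain_automated_decision", "refuse_automated_decision"
    received: date

def route(request: RightsRequest) -> str:
    """Route AI-specific rights to a dedicated workflow.

    Assumes the employer maintains a separate process for automated
    decision-making rights (human review, explanation of logic) alongside
    its ordinary access/correction/deletion workflow.
    """
    automated_rights = {"explain_automated_decision",
                        "refuse_automated_decision", "request_human_review"}
    if request.kind in automated_rights:
        return "AI governance team: human review and explanation workflow"
    return "standard data-rights workflow (access/correct/delete/object)"

print(route(RightsRequest("applicant-123", "South Korea",
                          "refuse_automated_decision", date.today())))
```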
Workers’ Collective Rights, Including Works Councils
Although not precisely a data right, workers’ organizations may have rights to notice of, or approval over, data processing affecting the employees they represent. In particular, in the EU, employees in some countries have a right to create works councils at their employer.25 These works councils have rights to information and consultation with management, and, depending on the country, the employer may need the works council’s consent to implement new monitoring and technical systems. This may mean that an employer must provide prior notice to, or reach agreement with, a works council before it can start implementing an AI system; otherwise, the works council could try to take legal action to block implementation. Further, in many countries, unions may obtain contractual rights, via collective bargaining agreements, to review or approve AI tools in the workplace. Indeed, issues surrounding the use of AI were integral to the resolution of disputes between U.S. production companies and both screenwriters and actors.
Data Minimization and Retention
Virtually all comprehensive data protection laws adopt data minimization principles on the front end and the back end of the data life cycle.26 These principles require that data controllers collect only the minimum personal data necessary to achieve the purpose for which the personal data was collected. Once that purpose has been accomplished, the personal data generally must be permanently deleted subject to any legal obligation to retain the personal data for a longer period.
While these principles can be stated simply, they generally are difficult to apply in practice, especially in the context of AI tools used for HR administration purposes. In many circumstances, AI tools function more effectively as the volume of information, including personal data, in the underlying database or model (e.g., a large language model) increases. While the data minimization principle does allow the data controller to exercise discretion in evaluating the business necessity for collecting personal data, that principle likewise imposes an obligation on data controllers not to over-collect personal data. Consequently, when evaluating the use of AI tools for HR purposes, employers must carefully consider both the necessity of specific data inputs and the relevance of the AI tool’s outputs to the employer’s business purposes.
After the AI tool has been used to process personal data, employers will have to balance multiple considerations in determining how long to retain both the input personal data and output personal data. On the side favoring data retention, employers may wish to retain both input and output data to assess the consistency, accuracy, and possibly biases of the AI tool over time. In addition, retaining this data may help the employer defend itself against claims of disparate impact against minority groups.
On the side favoring prompt data destruction, retaining the personal data of any one individual longer than needed to assess that individual may violate the proportionality principles in data protection laws. Also, keeping data generally heightens data security risks simply because the possibility of a security incident increases over time. Finally, retaining data may increase the cost and administrative burden of responding to individuals’ requests to exercise their rights to access, correct, or delete personal data maintained about them by the company because the company likely will need to search for data in more storage locations and, with respect to an access request, manage the production of a larger quantity of personal data.
Moreover, the balance of risks in retaining personal data versus destroying it may vary by jurisdiction. Consequently, the employer may decide to set different retention periods by country or region.
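A per-jurisdiction retention schedule can be expressed as a simple lookup keyed by country and data kind, with separate clocks for input and output data. The periods in the sketch below are placeholders for illustration only, not recommended or legally mandated retention periods.

```python
# Placeholder retention periods, in days, keyed by (jurisdiction, data kind).
# Separate clocks for AI input data and AI output data, varying by country.
RETENTION_DAYS = {
    ("Germany", "ai_input"): 180,
    ("Germany", "ai_output"): 365,
    ("United States", "ai_input"): 730,   # e.g., longer to defend claims
    ("United States", "ai_output"): 730,
}

def disposal_due(jurisdiction: str, data_kind: str, age_days: int) -> bool:
    """True if data of this kind, in this jurisdiction, is past its period."""
    limit = RETENTION_DAYS.get((jurisdiction, data_kind))
    if limit is None:
        raise KeyError("no retention period set -- decide before deployment")
    return age_days > limit

print(disposal_due("Germany", "ai_input", age_days=200))  # True
```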
Finally, when evaluating whether to use an AI tool in the first place, employers should not lose sight of the potential practical challenges of permanently purging personal data once the purpose of collection has been accomplished. For example, many large language models appear to have been built on the premise that personal data may be stored in them indefinitely. Consequently, the practical challenges of removing specific data points from such models may be insurmountable, creating risk for the organization. While outputs may be easier to isolate and purge, if the employer allows the outputs of AI tools to proliferate across numerous storage locations (e.g., e-mail, local hard drives, centralized human resources information systems), the practical challenges of purging those outputs could become daunting. Nonetheless, with some inquiry early in the implementation process and advance planning, employers should be able to mitigate the risks of over-collection and the practical challenges of permanent deletion.
Accuracy
Data protection laws generally impose an obligation to ensure that personal data is accurate and kept up to date where necessary. As a practical matter, this means that both the personal data entering an AI tool and the personal data output must be accurate. Regarding input data, ensuring accuracy may not pose challenges particular to AI if the employer provides the input data, but some AI tools may provide their own input data, for example, by scraping it from publicly available sources. Employers usually have an obligation to make sure this data is accurate too, but effectively have limited means for doing so. As a result, employers should evaluate the risk that AI tools that process publicly available data may not comply with the accuracy requirement contained in many data protection laws.
Ensuring the accuracy of the output data can be even more difficult because the algorithms and processes used by the AI tool may be hard to understand. Moreover, to protect their methods from competitors, AI vendors may not provide full information about these algorithms and processes. Inaccuracy and bias can lead to some of the biggest risks of using AI tools, for example, unlawful discrimination.
Data Security
With few exceptions, data protection laws require reasonable safeguards for personal data, but generally do not spell out precisely what those safeguards entail. Typically, data protection laws clarify that safeguards must include: (a) technical measures, e.g., passwords, firewalls, etc.; (b) administrative measures, e.g., policies, procedures, and training; and (c) physical measures, e.g., locked doors, security guards, etc. Beyond this, employers must look to regulatory guidance and industry standards.
A growing international consensus lays out basic standards, for example, the ISO/IEC 27001 information security standard published by the International Organization for Standardization and the recommended technical and organizational measures for AI systems published by the German Data Protection Commission. However, authorities emphasize that companies should conduct risk assessments specific to their systems and threats to develop reasonable safeguards for their circumstances.
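The three categories of safeguards can also serve as a minimal self-audit structure, as in the sketch below. The example measures are common illustrations of our own, not requirements of any particular statute or standard.

```python
# The three safeguard categories described above, with illustrative measures.
SAFEGUARDS = {
    "technical": ["access controls/passwords", "firewalls", "encryption"],
    "administrative": ["written policies", "procedures", "workforce training"],
    "physical": ["locked server rooms", "badge access", "security guards"],
}

def missing_categories(implemented: dict[str, list[str]]) -> list[str]:
    """Flag any safeguard category with no implemented measures."""
    return [cat for cat in SAFEGUARDS if not implemented.get(cat)]

print(missing_categories({"technical": ["firewalls"], "administrative": []}))
# ['administrative', 'physical']
```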
Contracting
Finally, employers cannot outsource their responsibilities for compliance with data protection laws. As the data controller, the employer remains responsible for the personal data of its applicants and workers processed on its behalf by vendors. This is a key issue for employers because employers typically use AI tools provided by third-party service providers who process personal data on their own servers (“Software as a Service” or “SaaS”) rather than purchasing licensed software and running it on the employer’s own information systems.
Virtually all data protection laws require that the data controller obtain some contractual assurances from these vendors. At minimum, the contractual assurances include the requirement that the vendor provide reasonable safeguards for the data. Increasingly, data protection laws specify detailed vendor contract provisions. The GDPR’s Article 28, for example, requires about a dozen points in data processor contracts, including that the data processor only process personal data on the controller’s documented instructions and that it impose on any sub-processor the same data protection obligations that apply to the vendor.27 The California Privacy Rights Act requires a similar number of provisions, many of which are quite distinct from the GDPR’s.28 In addition, as discussed above in the cross-border data transfers section, if the vendor stores personal data in a country other than where the applicant or employee is located, the employer may be required to include country-specific data transfer provisions in the contract.
Finally, employers should consider including supplemental language to protect their interests. Such provisions might include, for example, indemnification provisions, mitigation and reporting provisions for data breaches, and requirements for accurate representations about how the AI works so that the employer can meet its obligations to provide notice.
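A vendor-contract review can likewise be reduced to a checklist of baseline processor terms and the supplemental protections suggested above. The clause labels in the sketch below are shorthand of our own, not contract language or statutory text.

```python
# Baseline processor terms reflecting the requirements discussed above.
REQUIRED_CLAUSES = [
    "process only on controller's documented instructions",
    "reasonable security safeguards",
    "flow-down of obligations to sub-processors",
    "cross-border transfer provisions (where applicable)",
]

# Supplemental protections suggested above, beyond the legal minimum.
SUPPLEMENTAL_CLAUSES = [
    "indemnification",
    "data breach mitigation and reporting",
    "accurate representations about how the AI works (to support notices)",
]

def review_contract(present_clauses: set[str]) -> dict[str, list[str]]:
    """Return missing required and supplemental clauses, separately."""
    return {
        "missing_required":
            [c for c in REQUIRED_CLAUSES if c not in present_clauses],
        "missing_supplemental":
            [c for c in SUPPLEMENTAL_CLAUSES if c not in present_clauses],
    }

print(review_contract({"reasonable security safeguards"}))
```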
Practical Implications and Takeaways
This article provides a checklist of key data protection points to consider in adopting AI tools across multiple countries. Of course, the real challenge is developing a compliance program for AI that works in practice across multiple jurisdictions. This requires thinking through the data protection implications at each step of the process. In rough chronological order, this likely will include:
- Vendor vetting;
- Conducting DPIAs and determining feasible jurisdictions for deployment;
- Vendor contracting;
- Implementing policies to address risks, such as accuracy, and compliance steps, such as responding to data rights requests;
- Distributing notices, obtaining consent, and communicating with the workforce;
- Handling requests to exercise data rights; and
- Retention and disposal of personal data.
In this process, employers can use the basic data protection framework as a guide and then adapt as needed to local variations.