Published on 30 November 2018 by Soulier Avocats

Labor law and the challenges of Artificial Intelligence: 3rd part of a trilogy

Digital technology has already changed working methods. With the advent of Artificial Intelligence (“AI”), we are just at the beginning of an unparalleled transformation that will affect not only the labor and employment market but also working relationships. What exactly does AI’s impact on working relationships mean? And when we speak of working relationships, labor law necessarily comes into play.

Labor and employment law should be used as a legal tool to steer the obvious changes brought by AI in the workplace. The challenge is thus to identify avenues for adapting our labor and employment legislation in order to anticipate and smooth the transition to the new world.

This article is the third part of a trilogy built around the lifetime of employment contracts: hiring / performance / termination. It is devoted to the issue of the employability of humans in tomorrow’s working world. In order to combat the inevitable fear of machines taking control over humans, humans must already reflect on what their best assets are to remain “employable”. This does not, however, prevent us from building, without delay, an ethical framework to protect the most vulnerable.

In the first part of this trilogy, I addressed issues related to the termination of the employment contract and I specifically wondered whether our labor and employment legislation, as it currently stands, provides some safeguards against the unavoidable (according to some people) risk of job elimination as a result of the development of AI.

It appears that the major challenge to limit job elimination in tomorrow’s working world is to anticipate changes in jobs and skills in each industry by ensuring the adaptation of employees. Continuing training is a key challenge of the digital revolution, and our labor and employment legislation already provides effective legal tools, such as the Gestion prévisionnelle de l’Emploi et des Compétences (forward-looking job and skill management policy), as explained in the second part of this trilogy.

In the second part, I also addressed the anticipated impact that AI will have on the ways of organizing work and I observed that the shift is already under way with the emergence of new work organizations (telework, working more and more in a collaborative manner, etc.). In this respect, our labor and employment legislation, which still primarily relies on the unity of time and place of work, must adapt accordingly. 

In addition, AI necessarily has an impact on employees’ working conditions, with the emergence of a series of new risks and work situations that are not yet sufficiently taken into account in our current labor and employment legislation that still remains geared for working methods developed during the industrial era. New tools will need to be created and implemented, in particular to assess arduousness and the workload perceived by employees and, more generally, to prevent tomorrow’s psychosocial risks.

While we believe that human work will necessarily survive the age of artificial intelligence – with however new jobs and different working conditions and work situations – a more general question arises as to what will be the place of humans in tomorrow’s working world: What will be the required expertise, abilities and qualities to be “employable”? Who will be the ideal employee in the age of artificial intelligence?

Indeed, for employees to be hired, there must be vacant jobs and “employable” employees!

As indicated by Yuval Noah Harari in his book “Homo Deus: A Brief History of Tomorrow”: “The crucial problem isn’t creating new jobs. The crucial problem is creating new jobs that humans perform better than algorithms”.

This brings me to the third part of the trilogy on the lifetime of employment contracts: the formation of the contract, i.e. the hiring of the employee or, more precisely, his/her “employability”.




In tomorrow’s AI-dominated working world, humans will have to find their way, prove their added value and demonstrate their relevance if they want to remain “employable”. In order to combat the unavoidable fear of machines gaining control over humans, humans must reaffirm their free will and make full use of the human qualities that make them, and will always make them, better than machines, however sophisticated those machines may be. In the face of such a challenge, two projects have already been launched:

  • Setting up think tanks to assist in the emergence of tomorrow’s “employable” humans (A.);
  • Developing an ethical framework to protect the most vulnerable ones in tomorrow’s working world (B.).


A. Reflecting on the employability of tomorrow’s employees

This reflection is intended to anticipate the establishment of new career paths and to prepare in advance for the adaptation of humans to the new working world.

In his report entitled “For a meaningful Artificial Intelligence” released on March 28, 2018, Cédric Villani puts forward a number of proposals:

  • Setting up a “public lab for labor transformations”, i.e. “a permanent structure […] to “spearhead” these subjects within labor and professional training public policy” in relation to branches of activity. According to Cédric Villani, this lab should also serve as a laboratory on tomorrow’s jobs where stakeholders could come to train, experiment, share and reflect on the evolution of their jobs.
  • Improving national skills and training more talents in AI. Cédric Villani sets one clear target: “To triple the number of people trained in artificial intelligence in France in the next three years”. While existing training programs should focus more on AI, the report also supports the creation of new programs.
  • More generally, transforming teaching and, more precisely, teaching methods: better consideration of cross-cutting skills, learning creative skills, implementation of new teaching practices. The underlying idea is to teach new generations to think differently. This learning could start as early as primary school with the introduction of an “AI, data processing and digital sciences” training module.


B. Creating ethical AI

But while humans must reinvent themselves and adapt to find their place in the new working world, collective responsibility requires that we start working now towards ethical AI in order to protect the most vulnerable.

  1. Risks of infringement of the rights and freedoms of people, including discrimination in hiring, in the face of the development of AI

“Artificial intelligence cannot become another driving force for exclusion”, wrote Cédric Villani in his report[1].

It is established that AI can be a vector for discrimination.

As recalled by Cédric Villani in his report, “the algorithms used by Google in its targeted advertising are more likely to offer less well-paying jobs to women, that YouTube’s moderating algorithms are sometimes slow to react when a harmful content is reported and thus allow its viral spread, or alternatively that algorithms that predict criminal behavior recommend a higher level of surveillance in poorer Afro-American quarters. Indeed, all these algorithms only reproduce the prejudice that already exists in the data they are supplied with”.

AI thus tends to reproduce, or even amplify, human discriminatory biases.

As such, with respect to recruitment, the use of so-called “smart” recruitment tools is increasingly frequent: even though AI can identify insightful connections that humans would never have thought of, it can also entrench discrimination or even discourage out-of-the-ordinary hires.
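As a minimal illustration of the mechanism described above, consider the following sketch, assuming entirely invented data and a deliberately naive model: a scoring tool “trained” by simple frequency counting on historically biased hiring records does nothing more than mirror the disparity already present in those records.

```python
# Hypothetical illustration: a naive hiring-score "model" trained on
# biased historical records reproduces the bias. Groups and figures
# are invented for the example.
from collections import defaultdict

# Historical outcomes: (group, hired?) -- group "A" was favored in the past.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

def train(records):
    """Learn P(hired | group) by simple frequency counting."""
    hires, totals = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

model = train(history)

# The "model" simply mirrors the historical disparity:
print(model["A"])  # 0.8
print(model["B"])  # 0.4
```

Nothing in the data tells the model whether the historical disparity was legitimate or discriminatory; it is optimized to reproduce it, which is precisely the risk identified above.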

Article L. 1131‐2 of the French Labor Code imposes an obligation to provide training on non-discrimination: “Within companies with at least 300 employees and within all companies specialized in recruitment, employees in charge of recruitment must receive training on non-discrimination in the hiring process at least once every five years”. This provision seems quite ill-suited to AI-based recruitment software!

On September 19, 2018, the European Economic and Social Committee (“EESC”), a consultative body that assists the European Parliament, the Council of the European Union and the European Commission, issued an opinion entitled “Artificial intelligence: Anticipating its impact on work to ensure a fair transition” in which it calls for an ethical use of AI within an economic framework.

The EESC examines AI systems that assess the productivity of employees and those that facilitate the recruitment process. It specifies that the use of such systems must “safeguard rights and freedoms with regard to the processing of workers’ data, in accordance with the principles of non-discrimination”.

The EESC also addresses the issue of “algorithm bias”: the code indeed reflects the values of the person who created and developed it. Similarly, the dataset itself can carry bias.

Faced with these risks that are real and known to all stakeholders, it seems essential to integrate ethical practices into the development of artificial intelligence.


  2. What means are being implemented to prevent these risks?

Public authorities have already taken up this topic and started setting milestones for the development of an ethical AI. At this stage, however, we are talking about declarations of intent rather than true safeguard measures.

At the national level, Cédric Villani recommends in his report[1] the creation of a “digital technology and AI ethics committee in charge of leading public discussion in a transparent way, and organized and governed by law”. Recommendations from this committee would be elaborated entirely independently and could, according to Cédric Villani, help inform researchers’, economic players’, industry’s and the State’s technological decisions.

In its report on ethical matters raised by algorithms and artificial intelligence dated December 15, 2017 and entitled “How can humans keep the upper hand?”, the French Data Protection Authority (CNIL) laid down two founding principles for the development of algorithms and AI: fairness, and continued attention and vigilance.

These principles take shape through six policy recommendations intended for both public authorities and the various components of civil society (companies, citizens, etc.):

  • Fostering education of all players involved in the “algorithmic chain” (designers, professionals, citizens) in the subject of ethics: Digital literacy must enable each human being to understand the inner workings of the machine;
  • Making algorithmic systems understandable by strengthening existing rights and organizing mediation with users;
  • Improving the design of algorithmic systems in the interests of human freedom to counter the “black-box-like” effect;
  • Setting up a national platform for auditing algorithms;
  • Increasing incentives for research on ethical AI and launching a participatory national “great cause” built around a general-interest research project;
  • Strengthening ethics within businesses (e.g. setting up ethics committees, dissemination of sector-specific good practices, revising pre-existing codes of professional conduct).

At the European level, the EESC made several recommendations in its aforementioned September 2018 opinion[2], including the following:

  • That the ethical guidelines on AI to be prepared by the European Commission should draw a line in the sand for interaction between workers and intelligent machines, and include principles of transparency when using AI systems for recruitment, assessment, supervision and management of employees;
  • That engineers and intelligent machine designers be trained in ethics; this could be achieved by raising awareness of AI implications but also by incorporating ethics and the humanities into engineer training courses;
  • To ensure the effectiveness of these rules, the EESC calls for the creation of a supervisory authority: a “European observatory focusing on ethics in AI systems”.

Initiatives are also being launched at the level of private businesses. In 2016, Facebook teamed up with Microsoft, Amazon, IBM, Apple and Google to create a “Partnership on Artificial Intelligence to Benefit People and Society”. These US companies wish to define and develop best practices in terms of ethics.

At the end of 2017, Google created an ethics committee and, in June 2018, published a list of seven ethical principles for the development of AI, among them the principle of avoiding the creation or reinforcement of unfair bias, so as to prevent any “unjust”/discriminatory impact on people.

It is true that the inclusion in companies’ codes of conduct/codes of ethics of a chapter dedicated to the challenges raised by algorithms and AI systems – setting forth, for example, the red lines that should not be crossed when devising system parameters, the obligation to ensure the quality and updating of the datasets fed to algorithms, etc. – will soon become the bare minimum.

Companies and, more generally, social partners, including at the level of business sectors, will play a key role, if only to maintain a permanent watch and to identify issues that are emerging, that were initially imperceptible or that went unnoticed at the outset.

However, the emergence of codes of ethics of all types at all levels, e.g. from national or transnational administrative authorities or even private businesses, will likely not be sufficient to guarantee an ethical AI if the legislator does not at some point endorse the principles and declarations set out and create a binding regulatory framework.



To conclude this trilogy on labor law and the challenges of Artificial Intelligence, I would like to ask an open-ended question that we are all asking ourselves: do we think that AI will be more of a tool at the service of employees, supporting work relationships, or, on the contrary, do we think that AI will direct and subjugate employees?

Undoubtedly, the answer to this question will depend on the choice that will be made by the human community. As a matter of fact, two options are possible. It is up to humans to make a choice. In the end, it comes back to the following philosophical question: do humans control their own destiny, do they have free will?

Between these two options, assisting or directing, Cédric Villani, in his aforementioned report[3], proposes another route: complementarity between humans and machines.

I believe this complementarity approach is a wise one, but it requires that humans take adaptation measures and implement the safeguards necessary to promote and preserve it over time.

Humans are different from animals and from tomorrow’s robots: They have the ability to make their lives a quest for meaning, to create and to constantly renew themselves as they deeply feel the vital need to be useful.

Regarding tomorrow’s working world, let us bet on the adaptation of employees to future evolutions, not on subjection!

It is now time for our Labor Code to definitively leave behind its industrial-era features and respond once and for all to the challenges of the AI age.


[1] For a meaningful Artificial Intelligence, report authored by Cédric Villani and released on March 28, 2018

[2] Opinion from the EESC dated September 19, 2018 entitled Artificial intelligence: Anticipating its impact on work to ensure a fair transition

[3] For a meaningful Artificial Intelligence, report authored by Cédric Villani and released on March 28, 2018