Renan Rodrigues had been working as a food delivery driver at Swiss company Smood for about a year and a half when "the robot" took over. This is how the 33-year-old describes the algorithm-powered program that allocated deliveries and shifts for him and his colleagues.
Smood had used such software since he started working there in 2020, Rodrigues told DW. But at a certain point, "the robot" became entirely responsible for planning his working day, according to him, and appealing to human managers was no longer possible.
The goal of "the robot" was to organize deliveries in the most efficient way possible. From his employer's perspective, Rodrigues is sure that it worked. When he started at Smood in the small Swiss town of Yverdon-Les-Bains, it was typical to do around two deliveries an hour, he said. By the time he left, it was more like four or five.
"I quickly understood that it would be a disaster on a human level," Rodrigues told DW. He saw "the robot" pitting employees against each other. The quickest, best-performing drivers got more deliveries, he claimed. Ultimately, he said, he found himself getting less work. His zero-hour contract didn't guarantee him a minimum number of working hours or a fixed monthly wage, and he found it difficult to anticipate his income.
"For me, the worst part was knowing that a stopwatch was running at all times and in all places, on top of already being tracked by GPS for my speed and so on," he said. It created what he called "social stress." Instead of greeting a restaurateur when picking up a meal, he would bark at them to hurry up, running in and out. "It's sad on a human level."
Reckoning with robots
What Rodrigues and his colleagues dubbed "the robot" is also known as algorithmic management, in which workplace decisions are made by computer-powered calculations known as algorithms. It is closely linked to artificial intelligence (AI), which according to the European Commission "refers to systems that display intelligent behavior by analyzing their environment and taking actions — with some degree of autonomy — to achieve specific goals."
The use of algorithmic management is particularly associated with the gig economy, companies like Uber and Deliveroo, whose workers are typically freelance or on zero-hour contracts.
But AI tools are also quickly making inroads into other sectors of the economy. In white-collar office jobs, they can be deployed in recruitment or to track performance. A 2022 survey of 1,000 companies by professional services consultancy PwC found that between a sixth and a quarter had used AI in recruitment or employee retention in the past 12 months. Among the companies most advanced in their use of AI, around 40% had used it to improve employee experience and skills acquisition, or to increase productivity.
Companies can use data about employees or candidates in a variety of ways, as a report published last year by OpenMind, a nonprofit initiative of Spanish bank BBVA, highlighted. "Human resources professionals make decisions about recruitment, that is, who to hire; in worker appraisals and promotion considerations; to identify when people are likely to leave their jobs; and to select future leaders. People analytics are also used to manage workers' performance."
Take the example of HireVue, a US company that, according to its website, has more than 800 clients, including major multinationals like Amazon, G4S and Unilever. Using video job interviews, the company claims it can massively speed up recruitment, offer candidates greater flexibility and actually make hiring fairer. Algorithms can be trained to eliminate the unconscious race and gender biases common in human hirers, so the argument goes. Citing the example of a British customer, the Co-Operative Bank, HireVue said its tools helped shift a gender imbalance in hiring from a 70/30 split favoring men to 50/50 gender parity.
However, a number of experts and journalists have in recent years flagged the risk of reproducing racist, ableist or sexist bias in AI-enabled recruitment. A US study last year found that AI-trained robots repeatedly discriminated against women and non-white people.
The US Equal Employment Opportunity Commission has even issued guidance on the use of workplace AI, warning that "the use of these tools may disadvantage job applicants and employees with disabilities." What if you scored poorly on a test that required high keyboard dexterity, for example?
Legal changes in the pipeline
In the European Union, two key pieces of bloc-wide legislation are on the way that should affect how AI is deployed at work. The European Commission has stressed that, in general, AI can be beneficial for citizens and businesses, but that it also poses a risk to fundamental rights.
Under the proposed AI act, employment, management of workers and access to self-employment are specifically listed as high-risk uses. For makers and buyers of such AI tools, the law would impose specific obligations before products hit the market, chiefly a conformity assessment.
This test would scrutinize, among other things, the quality of data sets used to train AI systems (poorly trained systems can produce biased results), transparency provisions for buyers and levels of human oversight. AI developers would also have monitoring obligations once a product hits the market.
From a workers' perspective, what the AI legislation doesn't do is specifically regulate how employers can use the technology, according to Aida Ponce Del Castillo of the European Trade Union Institute. "It's a missed opportunity," the researcher told DW. The obligations fall on the sellers of the technology. Certain technologies are banned outright under the AI act, such as the "social scoring" systems associated with the Chinese government, but this has few implications for the workplace.
The second upcoming piece of relevant legislation is the Platform Work Directive, Ponce said. It has a dedicated chapter on algorithmic management, but as the name suggests, it only covers the estimated 28 million workers in the EU platform sector. The proposed law, according to the European Commission, "increases transparency in the use of algorithms by digital labor platforms, ensures human monitoring on their respect of working conditions and gives the right to contest automated decisions."
These draft laws, both still working their way through the EU legislative process, should give workers the tools to challenge potentially problematic uses of AI by their bosses, Ponce said, though she cautioned that they won't prohibit such uses outright. Two things she believes should be banned are emotion-reading technology (one of the most contested forms of AI, as many experts doubt that emotions are simple or universal enough to measure) and the suspension of accounts of gig workers like Uber drivers.
"I don't want to say that AI is bad. I've dedicated 20 years of my life to studying technologies," Ponce said. It's always about managing the risks for people, she said.
Former delivery driver Rodrigues' feelings about "the robot" are clear enough: he believes there ought to be much more regulation of what companies can and cannot do. He was ultimately fired by Smood, he admitted. But Rodrigues said he doesn't mind: he has landed a training contract for his dream job and is now set to become a train driver.
Edited by: Ashutosh Pandey