Algorithms Need Management Training, Too

The European Union is expected to finalize the Platform Work Directive, its new legislation to regulate digital labor platforms, this month. It is the first law proposed at the EU level to explicitly regulate “algorithmic management”: the use of automated monitoring, evaluation, and decision-making systems to make or inform decisions about recruitment, hiring, task assignment, and termination.

However, the scope of the Platform Work Directive is limited to digital labor platforms—that is, to “platform work.” And while algorithmic management first became widespread in the labor platforms of the gig economy, the past few years—amid the pandemic—have also seen a rapid uptake of algorithmic management technologies and practices within traditional employment relationships.

Some of the most minutely controlling, harmful, and well-publicized uses have been in warehouse work and call centers. Warehouse workers, for example, have reported quotas so stringent that they don’t have time to use the bathroom and say they’ve been fired—by algorithm—for not meeting them. Algorithmic management has also been documented in retail and manufacturing; in software engineering, marketing, and consulting; and in public-sector work, including health care and policing.

Human resource professionals often refer to these algorithmic management practices as “people analytics.” But some observers and researchers have developed a more pointed name for the monitoring software—installed on employees’ computers and phones—that it often relies on: “bossware.” It has added a new level of surveillance to work life: location tracking; keystroke logging; screenshots of workers’ screens; and even, in some cases, video and photos taken through the webcams on workers’ computers.

As a result, there is an emerging position among researchers and policy makers that the Platform Work Directive is not enough, and that the European Union should also develop a directive specifically regulating algorithmic management in the context of traditional employment.

It’s not hard to see why traditional organizations are using algorithmic management. The most obvious benefits have to do with improving the speed and scale of information processing. In recruiting and hiring, for example, companies can receive thousands of applications for a single open position. Résumé screening software and other automated tools can help sort through this huge quantity of information. In some cases, algorithmic management might also improve organizational performance, for example by matching workers to tasks more effectively. And there are further potential, if so far mostly unrealized, benefits: designed carefully, algorithmic management could reduce bias in hiring, evaluation, and promotion, or improve employee well-being by detecting needs for training or support.
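
To make the sorting step concrete, here is a minimal sketch of a keyword-scoring filter of the kind a résumé screener might apply. The keywords, weights, and cutoff are hypothetical, not drawn from any real product; commercial systems are more sophisticated, but the basic shape, including the way exact-match rules silently drop qualified applicants, is similar.

```python
import re

# Hypothetical keyword-scoring resume screener: scores each application by
# weighted keyword matches and shortlists only those above a cutoff. The
# keywords, weights, and cutoff are illustrative assumptions.
KEYWORD_WEIGHTS = {"python": 3, "sql": 2, "logistics": 2, "forklift": 1}
CUTOFF = 3  # applications scoring below this are never seen by a human

def score(resume_text: str) -> int:
    """Sum the weights of every listed keyword that appears in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    return sum(weight for kw, weight in KEYWORD_WEIGHTS.items() if kw in words)

def shortlist(applications: dict[str, str]) -> list[str]:
    """Return applicant IDs that clear the cutoff, highest score first."""
    scores = {aid: score(text) for aid, text in applications.items()}
    passing = [aid for aid, s in scores.items() if s >= CUTOFF]
    return sorted(passing, key=lambda aid: scores[aid], reverse=True)

apps = {
    "A-101": "Warehouse associate, forklift certified, logistics degree",
    "A-102": "Data analyst: Python, SQL, logistics dashboards",
    "A-103": "Ten years of equivalent experience, worded differently",
}
print(shortlist(apps))  # ['A-102', 'A-101']; A-103 is dropped unseen
```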

But there are clear harms and risks as well, both to workers and to organizations. The systems are often unreliable and sometimes make decisions that are plainly erroneous or discriminatory. They require large amounts of data, which means they often bring newly pervasive and intimate surveillance of workers, and they are frequently designed and deployed with little worker input. The result is that they sometimes make biased or otherwise bad management decisions; they cause privacy harms; they expose organizations to regulatory and public-relations risks; and they can erode trust between workers and leadership.

The current regulatory situation regarding algorithmic management in the EU is complex. Many bodies of law already apply. Data protection law, for example, provides some rights to workers and job candidates, as do national systems of labor and employment law, discrimination law, and occupational health and safety law. But there are still some missing pieces. For example, while data protection law creates an obligation for employers to ensure that data they store about employees and applicants is “accurate,” it’s not clear that there is an obligation for decision-making systems to make reasonable inferences or decisions based on that data. If a service worker is fired because of a bad customer review but that review was motivated by factors beyond the worker’s control, the data may be “accurate” in the sense of reflecting the customer’s unsatisfactory experience. The decision based on it may therefore be lawful—but still unreasonable and inappropriate.
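
That gap between accurate data and reasonable inference is easy to make concrete. The sketch below is a hypothetical deactivation rule of the kind a platform or scheduling system might apply; the threshold and data are invented for illustration. Every stored rating is “accurate” in the data-protection sense, yet the rule never asks whether a low rating reflects anything the worker controlled.

```python
from dataclasses import dataclass

@dataclass
class Review:
    rating: int   # 1-5 stars, recorded exactly as the customer gave it
    comment: str

# Hypothetical rule: deactivate any worker whose average rating falls below 4.0.
DEACTIVATION_THRESHOLD = 4.0

def should_deactivate(reviews: list[Review]) -> bool:
    average = sum(r.rating for r in reviews) / len(reviews)
    # Every rating stored here is "accurate," but the inference is never
    # examined: nothing distinguishes poor work from factors beyond the
    # worker's control (a slow kitchen, bad traffic, a rude customer).
    return average < DEACTIVATION_THRESHOLD

reviews = [
    Review(5, "Great service"),
    Review(5, "Fast and friendly"),
    Review(1, "Food arrived cold; the restaurant took an hour to prepare it"),
]
print(should_deactivate(reviews))  # True: average 3.67, worker deactivated
```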

This leads to a curious paradox. On the one hand, more protection is needed. On the other hand, the welter of already existing law creates unnecessary complexity for organizations trying to use algorithmic management responsibly. Confusing matters further, the algorithmic management provisions of the new Platform Work Directive mean that platform workers, long underprotected by law, are likely to have more protections against intrusive monitoring and error-prone algorithmic management than traditional employees. 
