3 Jan, 2020 14:36

#MeTooBots that will scan your personal emails for ‘harassment’ are an Orwellian misuse of AI


The rise of so-called #MeTooBots, which can identify digital bullying and sexual harassment in the workplace, is a sinister threat to privacy and an attempt to harness science to further a political and cultural offensive.

In what must be one of the most sinister developments of the new decade, #MeTooBots, tools developed by Chicago-based AI firm NexLP to monitor and flag communications between employees, have been adopted by more than 50 corporations around the world, including law firms in London.

Capitalising on the high-profile movement that arose after allegations against Hollywood mogul Harvey Weinstein, #MeTooBots might make good opportunist business sense for an AI company. But this is not a development that should be welcomed or sanctioned by AI enthusiasts or society as a whole.

This is not a new and exciting scientific application of the capabilities of AI or algorithmic intelligence.

Instead, it is an attempt to harness science in support of the Culture War, turning the workplace into an all-encompassing arena in constant need of monitoring and scrutiny. This threatens not just privacy, but the legitimacy of AI.

#MeTooBots are based on the assumption that digital bullying and sexual harassment are the default states of workplace environments. What could be wrong with employers protecting their employees in this way?


A good start might be to assume that the people they employ are decent, hard-working, morally sound adults who know right from wrong. That aside, the idea that machine learning represents a form of oversight superior to human judgment and behavior turns the world on its head. It simply adds to the misanthropy underpinning the Culture War, which assumes human beings (and men in particular) to be inherently flawed, animalistic and suspect.

But applying science in this way is not a very intelligent use of artificial intelligence. This is a technology looking for problems to solve rather than the other way around.

Machine-learning bots today can only be taught pattern recognition, whereas spotting sexual harassment can be a very subtle and difficult thing to do. Algorithms have little capacity to interpret broader cultural or interpersonal dynamics. The only outcome one can safely bet on is that some things will be missed while others, more predictably, will be over-interpreted, leading to more lawsuits, discrimination and the harassment of employees by their employers.
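To see why pattern recognition falls short here, consider a minimal, purely illustrative Python sketch of the kind of keyword matching such a bot might rely on. This is not NexLP's actual system, whose internals are not public; the patterns, function name and example messages are all hypothetical.

import re

# Hypothetical keyword patterns a naive 'harassment bot' might scan for.
# Commercial systems are proprietary; this only illustrates how surface
# pattern matching works without any grasp of intent or context.
FLAGGED_PATTERNS = [
    r"\bdrinks?\b",
    r"\bnice (dress|hair)\b",
    r"\bhot\b",
]

def flag_message(message):
    """Return every pattern that matches, blind to who is speaking, to whom, and why."""
    return [p for p in FLAGGED_PATTERNS if re.search(p, message, re.IGNORECASE)]

# A harmless invitation and a remark about hardware trip the same wire as a
# genuinely abusive message would, because only surface patterns are seen.
print(flag_message("Team drinks on Friday to celebrate the release?"))
print(flag_message("The server room is running hot again."))

Both benign messages get flagged; the bot cannot tell a celebration or a cooling problem from harassment, which is precisely the over-sensitivity described above.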


Any risqué joke, comment on appearance, proposal to go out for drinks, or even the stray mention of a body part will probably be meticulously logged to be used against you at a future date.

#MeTooBots in the workplace will also institutionalize snooping and distrust. The use of AI in this way will transform workplaces into high-tech authoritarian social engineering environments.

For the culture warriors, this will be welcome – as long as they have the upper hand. But for workers it will be an Orwellian nightmare in which the interpretation of their thoughts becomes part of ‘normal’ workplace interaction. Behavior will necessarily change. Self-censorship will abound. Instrumental interactions will replace genuine ones. Mistrust will be the default.

The final danger is that employee suspicion of their employers will only hamper the further use of AI in the workplace, an innovation with enormous potential to change 21st-century work for the better. Just imagine what an office would be like if all the dull, boring and repetitive drudgery of so many jobs were performed by dumb machines rather than dumbed-down human beings. Perhaps we need #BadManagerialDecisionBots instead?


The statements, views and opinions expressed in this column are solely those of the author and do not necessarily represent those of RT.
