‘Know your customer’ indeed! Banks deploy AI systems to monitor customers & employees alike

24 Apr, 2021 00:31

Helen Buyniski is an American journalist and political commentator at RT. Follow her on Twitter @velocirapture23 and on Telegram

A number of US banks have unleashed AI-powered cameras capable of both facial recognition and general behavioral pattern analysis, hinting at a wider rollout in retail stores and elsewhere - and a big drop in customer trust.

Speaking to Reuters on Thursday, City National’s chief information security officer Bobby Dominguez tried to put a positive spin on the dystopian step forward, noting, “We’re already leveraging facial recognition on mobile. Why not leverage it in the real world?”

This cheery approach clashes a bit with the secrecy of the banks already using such technology - Reuters reported that City National Bank of Florida, JPMorgan Chase, and Wells Fargo were conducting trials of AI surveillance systems, but the banks declined to say when, where, or on what basis the recording takes place. Are the recordings held for months? Deleted after a day unless something really juicy happens? And while City National specifically mentioned it would be trialing facial recognition software at 31 sites that could potentially “spot people on government watch lists,” that sounds like a lawsuit waiting to happen.


To be sure, there are some positive uses for the mini-cameras in banks. As the economy burrows into a black hole, it’s not unusual to spot someone snuggled up with a sleeping bag inside a bank’s ATM vestibule, where an AI camera can at least distinguish them from an inanimate object while they try to get some sleep. Have to remove those undesirables! However, this is one of those distinctly American problems where the authorities instinctively go for the wrong solution - surely, with nearly 60 empty homes for each homeless person, it makes more sense to simply match a person with a surplus home than to deploy an Orwellian network of semi-sentient cameras bent on stopping people from sleeping inside banks.

But the bogus solution has already generated multiple job opportunities - the guy who installs the camera and provides its tech support, and the guy who sits in his car or in the bank waiting for an alert of suspicious activity inside the vestibule. What are you, some kind of job-stealing *monster*?


It’s certainly one explanation - Chase has admitted running a behavioral testing pilot in Harlem, long a mainstay of black life in New York City, and while the bank hemmed and hawed about the risk of being seen as racially insensitive, it ultimately went with the location anyway for convenience’s sake. And regarding the homeless, a security executive at ‘a mid-sized Southern bank’ interviewed by Reuters actually gushed about the innovative new measures for combating the homeless seeking shelter in bank vestibules: loitering-detection systems, sirens and strobe lights, even outdoor-facing cameras designed to detect and deter “suspicious activity” immediately outside the bank after closing hours. The banks insisted they didn’t want to stop people from seeking shelter, but ultimately, convenience won the day once again.

Not all facial recognition algorithms are created equal, and some have become infamous for exposing - as in an American Civil Liberties Union study from 2018 - the all-too-human biases behind the code. The ACLU last year took up the case of a black man whom a facial recognition algorithm had misidentified as a criminal.

Another system, Clearview AI, is loathed by some as *too* accurate: its refusal to take down old photos scraped from ancient or deleted social media profiles arguably violates US law.

Overall, facial recognition continues to inhabit a legal gray area. “Smart” doorbells like Amazon’s Ring, for instance, form their own ad hoc surveillance camera network - often unbeknownst to their users - feeding data to law enforcement without the user’s knowledge or consent. They operate much like the myriad cameras dotting city streets, only less obviously.

But why have the banks slobbered, a la Pavlov, to get these cameras on their premises?

“Eventually, the software could spot people on government watch lists,” Dominguez, of City National, told Reuters, indicating the retail snooping was essentially a decoy. As such watch lists metastasize - roping in people not even suspected of crimes, but merely associated with people who are - access to them has become a must-have. There are more ways to get on a watch list than ever, and fewer ways to get off. And as the number of lists grows, so does the information they contain, along the lines of the disturbingly comprehensive online advertising categories Google and Amazon employ to track their customers.

While Chase, for instance, has insisted it has no plans to use “facial, race, and gender recognition” during its latest test of software that aims to identify behavioral patterns of both customers and workers at some of its Ohio locations, the bank does not exactly have a strong inclination toward telling the truth. Just ask the algorithms themselves - would they lie to you?
