Coming to a checkpoint near EU: Trials of AI lie-detector border guard underway in three EU states
Trials are underway of an EU-funded scheme where AI lie-detector systems will be used to scan potentially dodgy travelers coming from outside the bloc. Too Orwellian? Or just the latest step towards smoother travel?
Commencing November 1, the iBorderCtrl system will be in place at four crossing points on Hungary's, Latvia's and Greece's borders with countries outside the EU. It aims to speed up border crossings for travelers while weeding out potential criminals and illegal crossings.
Developed by partners from across Europe with €5 million in EU funding, the pilot project will be operated by border agents in each of the trial countries and led by the Hungarian National Police.
Those using the system will first have to upload certain documents like passports, along with an online application form, before being assessed by the virtual, retina-scanning border agent.
The traveler will simply stare into a camera and answer the questions one would expect a diligent human border agent to ask, according to New Scientist.
“What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”
But unlike a human border guard, the AI system analyzes minute micro-gestures in the traveler’s facial expression, searching for any sign that they might be telling a lie.
If satisfied with the traveler’s honest intentions, iBorderCtrl will reward them with a QR code that allows them safe passage into the EU.
If it is not satisfied, the traveler will have to go through additional biometric screening, such as fingerprinting, facial matching or palm-vein reading. A final assessment is then made by a human agent.
Like all AI technologies in their infancy, the system is still highly experimental: with a current success rate of 76 percent, it won’t actually be preventing anyone from crossing the border during its six-month trial. But the system’s developers are “quite confident” that accuracy can be boosted to 85 percent with fresh data.
The greater concern, however, comes from civil liberties groups, which have previously warned about gross inaccuracies in systems based on machine learning, especially those that use facial recognition software.
In July, the head of London’s Metropolitan Police stood by trials of automated facial recognition (AFR) technology in parts of the city, despite reports that the AFR system had a 98 percent false positive rate, resulting in only two accurate matches.
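For illustration only, a 98 percent false positive rate means that nearly every match the system flags turns out to be wrong. The alert counts in the sketch below are hypothetical, chosen simply to be consistent with the reported rate; they are not taken from the Metropolitan Police trial data:

```python
# Hypothetical alert counts, consistent with a reported 98 percent
# false positive rate; NOT the actual Metropolitan Police figures.
total_alerts = 104   # assumed: people flagged as matches by the AFR system
true_matches = 2     # flagged people who really were on the watch list

false_positives = total_alerts - true_matches
false_positive_rate = false_positives / total_alerts

print(f"False positives: {false_positives}")              # 102
print(f"False positive rate: {false_positive_rate:.0%}")  # 98%
```

The point is that even a system that looks accurate in lab testing can produce overwhelmingly wrong alerts when the people it is looking for are rare in the crowd it scans.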
The system had been labelled an “Orwellian surveillance tool” by the civil liberties group Big Brother Watch.