7 Jun, 2023 21:59

DHS wanted to track internet users with ‘risk score’ – think tank

The ‘Night Fury’ program sought to use automated methods to detect supposedly “pro-terrorist” social media accounts

The US Department of Homeland Security operated a program that sought to automate the designation of social media users as “pro-terrorist” based on their use of certain keywords and interaction with other targeted users, according to documents obtained by the Brennan Center for Justice and published on Tuesday.

Project Night Fury, a partnership between the DHS and the University of Alabama at Birmingham, sought to assign potentially “pro-terrorist” accounts a “risk score” that would then affect other accounts they interacted with. The university had agreed to develop automated methods to determine whether an account linked to one already under surveillance was itself “pro-terrorist,” using criteria like “keyword set comparisons.” The algorithm, not a human, would determine whether a user could be labeled “pro-terrorist” for simply clicking “like” on a Facebook post or retweeting an account that had already received that label. 
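The documents describe a guilt-by-association scheme: keyword matching plus interaction with already-flagged accounts. A minimal sketch of that logic, in Python, is below. Everything here is an assumption for illustration only — the keyword list, the function name, and especially the weights are invented, since the contracts never defined what the “risk score” measures or how it was computed.

```python
# Illustrative sketch only -- NOT the actual DHS/UAB system.
# Models the naive scheme the documents describe: keyword hits plus
# interactions with already-flagged accounts raise a "risk score".

KEYWORD_SET = {"attack", "propaganda"}  # hypothetical watchlist terms


def risk_score(posts, interactions, flagged):
    """posts: the user's post texts; interactions: account ids the user
    liked/retweeted; flagged: ids already labeled 'pro-terrorist'."""
    # Count posts containing any watchlist keyword ("keyword set comparisons").
    keyword_hits = sum(any(k in p.lower() for k in KEYWORD_SET) for p in posts)
    # Count interactions with already-labeled accounts.
    contact_hits = len(interactions & flagged)
    # Arbitrary weights -- the contracts never said what the score measures.
    return 0.5 * keyword_hits + 1.0 * contact_hits


# A user with innocuous posts who retweeted one flagged account still
# gets a nonzero score -- the "simply clicking like" problem.
score = risk_score(posts=["nice weather today"],
                   interactions={"acctA"},
                   flagged={"acctA", "acctB"})
```

The sketch makes the article’s criticism concrete: any thresholding of such a score labels users purely on association, with no human review and no definition of what is being measured.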

Despite the known limits of artificial intelligence at the time the system was being formulated in 2018, the entire process would have been automated, according to the documents. The university was assigned to build models to “identify key influencers of pro-terrorist thought” and create an automated system to uncover bots “programmatically generated to exert influence” to spread both “terrorist propaganda” and “foreign influence campaigns.” The researchers would then compile a list of suspect accounts and turn it over to the DHS, along with the account holders’ names, emails, phone numbers, pictures, and posts.

Missing from the contracts were definitions for key concepts: what “pro-terrorist” actually meant, what a “risk score” would measure (the risk of becoming a terrorist, or merely of sympathizing with one), and how an automated process could draw such distinctions absent clear-cut definitions.

It’s also not clear how the program would have distinguished between terrorism-adjacent content and material related to drug trafficking and “disinformation,” which it also sought to track. As of the date of the contracts seen by the Brennan Center, the DHS hadn’t figured out how to quantify terrorist sympathies and was seeking assistance from university researchers to “identify relevant attributes.”

The DHS Inspector General was tipped off to potential privacy violations on Project Night Fury in 2018. Until the Brennan Center’s FOIA request was fulfilled, all that was known about the project came from the OIG’s 2019 report, which found that the agents involved did not comply with department policies governing privacy and the protection of sensitive information.

While the DHS supposedly stopped work on Night Fury in 2019, it’s not known how far along the project was, or whether its assets were transferred elsewhere within the law enforcement apparatus. Customs and Border Protection was recently found to be using a similar AI-powered tool called Babel X to snoop through the social media accounts of travelers at the US border.