US Homeland Security chief launches AI task force
The US Department of Homeland Security plans to launch an Artificial Intelligence Task Force to explore uses of the rapidly advancing technology, which could “drastically alter the threat landscape,” DHS Secretary Alejandro Mayorkas announced during his State of Homeland Security address at the Council on Foreign Relations on Friday.
“Our department will lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology,” Mayorkas declared, insisting that any AI used by DHS will be “rigorously tested to avoid bias and disparate impact.”
Specifically, he said, the department will integrate AI into supply chain and cargo screening, predicting the technology will be able to accurately detect products made with slave labor, zero in on shipments of fentanyl and its precursor chemicals, and even “target for disruption key nodes in the criminal networks” powering both trades. It can also protect electric grids, water supplies, and other critical infrastructure, he added.
Mayorkas acknowledged that his task force would also investigate the nefarious purposes AI could serve - the better to defend against them, of course.
Any regulations targeting AI would have to find a “sweet spot” between innovation and safety, he said, and any decisions would have to be made quickly. “The rapid pace of technological change - the pivotal moment we are now in - requires that we also act today,” he added.
While the DHS secretary stressed that AI was still in a “nascent stage,” he marveled at its perceived potential, declaring, “The power is extraordinary.”
Thousands of AI experts recently signed an open letter urging a six-month moratorium on “giant AI experiments” so that governments, corporations, and other stakeholders can hammer out a regulatory framework with safeguards against potentially civilization-destroying developments. Signatories include Elon Musk, a co-founder of ChatGPT creator OpenAI, who has called the technology “dangerous.”
Even OpenAI CEO Sam Altman has admitted humans will eventually have to “slow down this technology,” as the company’s rapidly advancing AI chatbots will “eliminate a lot of current jobs.” While he has sought to reassure doubters that the company is working on adding “safety limits,” the Center for Artificial Intelligence and Digital Policy last month asked the Federal Trade Commission to bar OpenAI from issuing new commercial releases of GPT-4, the engine powering ChatGPT, calling the software “biased, deceptive, and a risk to privacy and public safety.”