West News Wire: The use of the image raised serious concerns about rapidly evolving tools such as facial recognition software, according to digital rights and Muslim civil society groups, and was indicative of a pattern of Islamophobia within intelligence and law enforcement agencies that portrays Muslims as a threat.
A senior official from the CIA’s Digital Innovation Directorate used the image in a presentation about how the agency’s adoption of cloud-based technologies was changing the way it gathered intelligence.
Speaking at a 2018 public sector conference hosted by Amazon Web Services (AWS), Sean Roche said: “The age of expeditionary intelligence means going to very hostile places very quickly to solve very tough problems.”
He said that small teams of programmers, data scientists and analysts “coding in the field” had delivered “amazing capabilities in the case of finding people we care about”.
Roche, who was the CIA’s associate deputy director for digital innovation at the time, added: “Knowing who they are, what they’re doing, their intentions, and where they are.”
The next image in the presentation depicted pilgrims assembled in the Masjid al-Haram in Mecca, the holiest mosque in Islam and the site of the Kaaba.
The image appears to be a stock photograph from a photography website, taken in January 2017 and showing pilgrims at the mosque. A yellow circle, however, has been added in editing to highlight the face of a man in the crowd.
The man has not been named by reporters. There is no indication that the CIA is interested in him.
Reporters asked the CIA about its capacity to use surveillance tools to monitor pilgrims performing the Hajj, and whether it would do so during this year’s pilgrimage, which begins on Monday. The agency did not respond to the inquiries.
The use of the photograph has, however, raised questions about surveillance technologies among Muslim advocacy groups and legal experts.
The portrayal of Muslims as a threat in official training materials and presentations has a long history, according to Edward Mitchell, national deputy director of the Council on American-Islamic Relations (CAIR).
Muslims should not be the default example of how government technology might be exploited, Mitchell said, particularly during the Hajj pilgrimage, a time when Muslims are visibly practising their faith.
Ashley Gorski, a senior staff attorney with the American Civil Liberties Union’s National Security Project, told reporters that facial recognition technology “poses substantial threats to privacy and civil liberties. People have the right to freely practise their religious beliefs without worrying that the government is watching them.”
It is, she said, yet another instance of US intelligence services supporting the use of surveillance tools to monitor and control religious congregations, including those located abroad.
The presentation raised concerns about how the technology was being used outside the US, where constitutional protections do not apply, according to Jumana Musa, director of the Fourth Amendment Centre at the National Association of Criminal Defence Lawyers (NACDL), which provides legal guidance in cases involving new surveillance tools.
The US government clearly operates under a different set of criteria when it believes it is outside the US gathering intelligence rather than evidence for a trial, and the laws governing that work are far more lax, Musa told reporters.
Being able to scan a crowd of 100,000 or more people and claim to identify individuals within it is not merely business as usual, according to Clare Garvie, a privacy lawyer at the NACDL who specialises in facial recognition technology. It is not surprising that the CIA would be drawn to such a powerful surveillance tool.
On the possible risks posed by AI, Roche said: “Some people are concerned about AI. Never be.”
Citing the German futurist Gerd Leonhard, he said: “Human flourishing must continue to be the central goal of all technology innovation. Futuristic humanism. There won’t be a takeover by the machines.”
Last month, hundreds of AI specialists signed a public statement warning that humanity faces an existential threat from rapidly evolving technology.
The Centre for AI Safety, which released the statement, noted risks such as weaponization and the use of technology to “enforce narrow values through pervasive surveillance and oppressive censorship”.
According to some AI scientists, worries about the technology are overstated.