The Borneo Post

The never-ending quest to predict crime using AI


IN the world of the 2002 movie ‘Minority Report’, crime is nearly nonexistent. Clairvoyants predict when murders are about to happen, allowing police to swoop in and arrest the soon-to-be felons.

Though Tom Cruise’s all-powerful police force is evidence of a dystopian society, researchers have long chased the tantalising prospect of being able to predict crime before it happens.

And as the United States faces rising rates of violent crime, another research project emerged: A group of University of Chicago scientists unveiled an algorithm last month, boasting in a news release of its ability to predict crime with ‘90 per cent accuracy’.

The algorithm identifies locations in major cities where it calculates there is a high likelihood of crimes, such as homicides and burglaries, occurring in the following week.

The software can also evaluate how policing varies across neighborhoods in eight major cities in the United States, including Chicago, Los Angeles and Philadelphia.

But using artificial intelligence to direct law enforcement rings alarm bells for many social justice scholars and criminologists, who cite a long history of such technology unfairly suggesting increased policing of Black and Latino people.

Even one of the study’s authors acknowledges that an algorithm’s ability to predict crime is limited.

“The past does not tell you anything about the future,” said Ishanu Chattopadhyay, a professor at the University of Chicago and the algorithm’s lead researcher.

“The question is: To what degree does the past actually influence the future? And to what degree are the events spontaneous or truly random? ... Our ability to predict is limited by that.”

Police have long used any tool available to predict crime.

Before technological advances, cops would huddle in conference rooms and stick pins marking crime incidents on a map, hoping the clusters would help them figure out where to look next.

Over the past 15 years, the country’s largest police departments – such as those in New York, Los Angeles and Chicago – began exploring ways to use artificial intelligence not just to analyze crime but to predict it.

They often turned to data analytics companies such as PredPol and Palantir, which create software that law enforcemen­t can use to forecast crime.

Predictive policing tools are built by feeding data – such as crime reports, arrest records and license plate images – to an algorithm, which is trained to look for patterns to predict where and when a certain type of crime will occur in the future.
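To make that pipeline concrete, here is a minimal sketch in Python – entirely synthetic data and a deliberately simplified model of our own, not any vendor’s actual software:

```python
# Minimal sketch of a predictive-policing pipeline, with synthetic data.
# This is NOT PredPol's or Palantir's actual software; it only shows the
# generic shape: historical per-cell crime counts in, next-week risk out.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

n_cells, n_weeks = 400, 104                 # 20x20 grid of city cells, 2 years
base_rate = rng.gamma(1.0, 0.5, n_cells)    # hypothetical per-cell intensity
counts = rng.poisson(base_rate, (n_weeks, n_cells))  # weekly reported incidents

# Features: each cell's last 4 weekly counts. Label: any crime next week?
X, y = [], []
for t in range(4, n_weeks - 1):
    X.append(counts[t - 4:t].T)             # shape (n_cells, 4)
    y.append(counts[t + 1] > 0)             # shape (n_cells,)
X, y = np.vstack(X), np.concatenate(y)

model = LogisticRegression(max_iter=1000).fit(X, y)

# "Forecast": a risk score per cell for the coming week.
risk = model.predict_proba(counts[-4:].T)[:, 1]
print("Five highest-risk cells:", np.argsort(risk)[-5:])
```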

But algorithms are only as good as the data they are fed – a problem particularly acute in the United States, said Vincent Southerland, the co-faculty director of New York University’s Center on Race, Inequality and the Law.

Historically, police data in the United States has been biased, according to Southerland.

Cops are more likely to arrest or charge someone with a crime in low-income neighborhoods dominated by people of color, a reality that doesn’t necessarily reflect where crime is happening, but where cops are spending their time.

That means most data sets of criminal activity overrepresent people of color and low-income neighborhoods.

Feeding that data into an algorithm leads it to suggest that more criminal activity occurs in those areas, creating a feedback loop that is racially and socioeconomically biased, Southerland added.

“You have data that is infected by, or tainted by, some bias – and that bias is going to appear on the other side of the analysis,” he said.

“You get out of it what you put into it.”
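A toy simulation makes the loop Southerland describes concrete. In this simplified model – our construction, with made-up numbers – two areas have the same true crime rate, yet the area that starts out over-policed keeps generating most of the recorded crime:

```python
# Toy model of the feedback loop (our construction, not Southerland's):
# two areas with IDENTICAL true crime rates. Police record a crime only
# where they patrol, and next week's patrols follow this week's records.
import numpy as np

rng = np.random.default_rng(1)
true_rate = np.array([10.0, 10.0])    # same underlying crime in both areas
patrol_share = np.array([0.7, 0.3])   # historical skew: area 0 over-policed
recorded_total = np.zeros(2)

for week in range(52):
    actual = rng.poisson(true_rate)                 # crimes that really happen
    recorded = rng.binomial(actual, patrol_share)   # you record what you see
    recorded_total += recorded
    # "Data-driven" reallocation (with smoothing to avoid division by zero):
    patrol_share = (recorded_total + 1) / (recorded_total.sum() + 2)

print("Recorded crimes after a year:", recorded_total)
print("Final patrol shares:", patrol_share.round(2))
# The true rates are equal, yet the initially over-policed area keeps the
# bulk of recorded crime - and the data alone appears to justify it.
```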

In the real world, predictive policing software has caused significant problems.

In 2019, the Los Angeles Police Department suspended its crime prediction program, LASER – which used historical crime data to predict crime hotspots and Palantir software to assign people criminal risk scores – after an internal audit showed it led police to unfairly subject Black and Latino people to more surveillance.

In Chicago, the police used predictive policing software from the Illinois Institute of Technology to create a list of people most likely to be involved in a violent crime.

A study from RAND and a subsequent investigation by the Chicago Sun-Times showed that the list included every single person arrested or fingerprinted in Chicago since 2013. The programme was scrapped in 2020.

Predictive policing algorithms are ‘not a crystal ball’, said John S. Hollywood, a senior operations researcher at RAND, who helped audit the Chicago police department’s use of predictive algorithms.

“It is better to look more holistically . . . [at] what is happening in terms of specific things in my community that are leading to crimes right now.”

Chattopadhyay said his team’s software was built with the troubled history of such algorithms in mind.

In making the algorithm, Chattopadhyay’s team segmented major cities into blocks roughly 1,000 feet across and used city crime data from the previous three to five years to train it.

The algorithm spits out whether there is a high or low risk of crime happening in a segment at a certain time, up to one week into the future.
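A rough reconstruction of that set-up – our own sketch, not the team’s released code, with hypothetical coordinates and a deliberately naive forecasting rule – shows how incidents might be bucketed into tiles and turned into weekly risk flags:

```python
# Rough reconstruction of the set-up described above (ours, not the team's
# released code): bucket incidents into tiles roughly 1,000 feet on a side,
# then flag each tile high or low risk for the following week.
TILE_FT = 1000.0

def tile_index(x_ft, y_ft):
    """Map an incident's city coordinates (in feet) to a grid tile."""
    return int(x_ft // TILE_FT), int(y_ft // TILE_FT)

# Hypothetical incidents: (x_ft, y_ft, week_number) from years of city data.
incidents = [(1500.0, 2300.0, 0), (1600.0, 2400.0, 0), (9800.0, 400.0, 1)]

weekly_counts = {}  # (tile, week) -> number of reported serious crimes
for x, y, week in incidents:
    key = (tile_index(x, y), week)
    weekly_counts[key] = weekly_counts.get(key, 0) + 1

# Naive stand-in for the real model: call a tile high-risk next week if it
# saw any serious crime this week. (The actual system learns event patterns.)
def high_risk(tile, next_week):
    return weekly_counts.get((tile, next_week - 1), 0) > 0

print(high_risk(tile_index(1550.0, 2350.0), 1))  # True: active in week 0
```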

To limit bias, the team omitted crime data such as marijuana arrests, traffic stops or low-level petty crimes, because research shows Black and Latino people are more often targeted for those types of offenses.

Instead, they gave the algorithm data on homicides, assaults and batteries, along with property crimes like burglaries and motor vehicle thefts.

But the main point of the study, he said, was to use the algorithm to interrogate how police are biased. His team compared arrest data from neighborhoods of varying socioeconomic levels.

They found crime that happened in wealthier areas led to more arrests, whereas in poorer neighborhoods, crime didn’t always have the same effect, showing a discrepancy in enforcement.

Chattopadhyay said these results help provide evidence to people who complain that law enforcement ignores poorer neighborhoods when there’s a spike in violent or property crime.

“This allows you to quantify that,” he said.

“To show the evidence.”

Arvind Narayanan, a computer science professor at Princeton University, said the study’s news release and the news articles about it did not focus enough on the study’s attempt to investigate biases in police crime enforcement and overemphasized the algorithm’s accuracy claims.

“For predictive policing, a single accuracy . . . figure is totally insufficient to assess whether a tool is useful or just,” he said.

“Crime is rare, so it’s probable that most predictions of crime are false positives.”
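The arithmetic behind that point is easy to check. Assuming a hypothetical base rate of one per cent – our figure, not one from the study – a tool that is right 90 per cent of the time on both busy and quiet weeks still produces alarms that are overwhelmingly false:

```python
# Base-rate arithmetic behind Narayanan's point (hypothetical numbers,
# not figures from the study): with rare events, a tool that is right
# 90 per cent of the time still raises mostly false alarms.
base_rate = 0.01     # assume 1% of tile-weeks actually see a serious crime
sensitivity = 0.90   # the tool flags 90% of tile-weeks that do
specificity = 0.90   # ...and correctly clears 90% of tile-weeks that don't

true_alarms = base_rate * sensitivity                # 0.009 of all tile-weeks
false_alarms = (1 - base_rate) * (1 - specificity)   # 0.099 of all tile-weeks
precision = true_alarms / (true_alarms + false_alarms)

print(f"Share of alarms that are real: {precision:.0%}")  # about 8%
```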

Criminal justice scholars, policing experts and technologists note that even if an algorithm is accurate, it can still be used by law enforcement to target people of color and those living in poorer neighborhoods for unjustified surveillance and monitoring.
