Hypothetical situation: It’s 1:30 p.m. on a Tuesday, and you are standing on the sidewalk in front of a house in West Ashley with an empty duffle bag in hand. Is that enough evidence for police to confront you or even arrest you on an attempted burglary charge? Almost certainly not. But what if the officer knows that on this particular block, from 1 to 2 p.m. on Tuesday, there is a 15-percent chance that someone will try to commit a burglary? Would that be a good enough reason for the officer to question your presence on the sidewalk?
Welcome to the new world of predictive policing, where computer models can give police hour-by-hour, block-by-block crime forecasts: Watch out for car break-ins around this neighborhood at lunchtime. Keep your guard up for violence over the weekend at this gang turf border. Police departments from Los Angeles to Memphis, Tenn., have already been using predictive policing software for years. This month, the technology arrived in Charleston.
On June 11 in the downtown DoubleTree Hotel ballroom, Charleston Police Chief Gregory Mullen delivered the news during a posh fundraising luncheon for the Charleston Police Fund. “Today, I’m very excited to announce that we are partnering with IBM to start a predictive policing pilot project here in Charleston,” Mullen said. “All across the country, if you’ve heard about predictive policing, that’s the new model of policing. They’re actually starting to look at data and trying to not just react to crime and where it’s happening, but actually take a lot of different types of information and predict in the future where that crime might occur … to be preventive instead of reactive.”
For now, the police department is only testing out a part of IBM’s Crime Prediction and Prevention software, and it is focusing on just one type of crime: armed robberies. To make its predictions, the program has analyzed three years’ worth of Charleston-area data from myriad sources, including crime reports, police dispatch records, geographic information systems, and even weather databases. If police are satisfied with the results from the pilot program, they’ll negotiate a contract with IBM and decide how they want to set up the full crime-prediction suite. For a sense of the price tag, the City of Memphis has spent about $400,000 a year since 2006 on its Blue CRUSH predictive policing initiative and has reported a 30-percent decrease in serious crime, as well as a 15-percent decrease in violent crime, from 2006 to 2010.
Once a city enters contract negotiations with IBM, the next step is customization. Police could opt for a program that automatically identifies suspicious activity on the city’s network of 30-plus surveillance cameras, alerting police to events such as movement in off-limits areas, removal of important objects, and obstruction of camera views. The system could incorporate data from 911 call transcripts, and it could even monitor suspicious activity on Facebook and Twitter. Police can opt to look at the data once a day or once a week, or they can make it available to cops in the field, updated in real time on the laptop computers they keep in their cruisers.
In addition to the predictive policing suite, Charleston police have already secured a $600,000 grant from the South Carolina Law Enforcement Division to participate in CopLink, another program from IBM that will allow Charleston police to automatically share crime data with authorities in North Charleston, Mt. Pleasant, Charleston County, and Horry County. That system should be up and running in less than a year.
One person who has given predictive policing a lot of thought is Andrew Guthrie Ferguson, an assistant law professor at the University of the District of Columbia. He wrote a paper titled “Predictive Policing: The Future of Reasonable Suspicion,” which is currently being peer-reviewed for the Emory Law Journal (the paper is also where the hypothetical example of the man with the duffle bag comes from). “I’m sort of agnostic about this technology,” Ferguson says in a phone interview. He doesn’t doubt that predictive policing is useful, both for arresting criminals and for deterring them with a police presence. But he has his concerns about how it will interact with the Fourth Amendment, the part of the U.S. Constitution that guards against unreasonable searches and seizures. It’s only a matter of time, he says, before a lawyer challenges a court’s admission of evidence based on a predictive algorithm and the case makes its way to the Supreme Court.
“I think what you would say is the worst case — and I don’t even think this is that far-fetched — is that there will be a case where someone gets stopped on a street corner for suspicion of burglary,” Ferguson says. “It’ll go before a court, and they’ll say, ‘OK, officer, what was your reasonable suspicion for stopping this person?’ And he’ll say, ‘The computer told me,’ essentially, right? ‘The computer said look out for burglaries, I saw this guy in the location, so I stopped him because he looked like a burglar.’ And race, class, all of those things obviously are a part of it. And the judge will then just defer … How are you going to cross-examine the computer?”
The future is here
Ferguson’s concerns about predictive policing are twofold. One is that the new crime-fighting tool will be used dishonestly, with officers misapplying data to run minorities out of neighborhoods or to question people without a good reason. His other concern, the one he focuses on in his paper, is that computer-generated crime forecasts will lower the threshold for reasonable suspicion, leading to increased racial and class profiling as well as a rash of pat-downs and arrests that otherwise would have been perceived as unwarranted.
As with a tip from an informant, a computer-generated crime forecast “can color what a reasonable officer observes, even if it cannot be reasonable suspicion in itself without direct observation,” Ferguson writes in his paper. Later, he notes, “Merely looking into car windows is not sufficient to warrant the reasonable belief that criminal activity is afoot. However, with a predicted ‘tip’ of a car theft, it might be.”
He’s not speaking hypothetically. On a Friday afternoon in July 2011, a predictive policing program in Santa Cruz, Calif., flagged a high probability of car burglaries in a downtown parking garage. Cops camped out in the garage, and sure enough, they spotted two women peering into car windows. They confronted the women; one turned out to have outstanding warrants, and the other was carrying illegal drugs. The future had arrived in Santa Cruz: two arrests that likely would not have happened without predictive policing.
Analytics is IBM’s bread and butter. In 2008, the company that invented the floppy disk and the UPC barcode landed a high-profile public security gig, signing on to help the FBI improve its biometrics system for recognizing fingerprints, palm prints, irises, and faces. The company has also created software to predict insurance fraud, forecast furniture sales, and even provide commentary on tennis strategy during Wimbledon. Tech companies refer to certain feats of heavy-duty analysis as “big data,” and at IBM, it’s part of the broader campaign to build a “smarter planet,” as the company’s television commercials say. IBM paints a decidedly utopian picture of the digital future.
IBM spokesmen are quick to point out that the sheer amount of cold, hard data being crunched ensures that predictive policing software is actually far more accurate than traditional police investigation methods. Whereas veteran cops have a general idea of “high-crime areas” often encompassing entire neighborhoods, a computer can identify crime hot spots as small as 500 by 500 feet. Besides, predictive policing doesn’t require cops to record any more information than they already do. It’s just that, whereas a human analyst could take hours or even days to discover a crime pattern, a computer can connect the dots in just seconds.
Precogs not included
Let’s clear up one point about predictive policing: It’s not Minority Report. For one thing, unlike in the Philip K. Dick short story and 2002 Tom Cruise vehicle, there are no precogs to tell police exactly when and where a crime will occur. What IBM offers is only a probability model — albeit an extremely sophisticated one — that is capable of factoring in a wide variety of already-available data.
Predictive policing, which Time magazine included on its list of the best inventions of 2011, is really a series of inventions designed to predict different types of crimes. One of the inventions can be credited to George Mohler, a math and computer science professor at Santa Clara University who discovered that the same mathematical model used for predicting earthquake aftershocks could also be used to predict certain types of property crimes. Different crimes, however, require different predictive algorithms.
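Mohler’s insight was that crime, like an earthquake, can trigger nearby “aftershocks”: a burglary today raises the odds of another one close by in the days that follow. A minimal sketch of that idea follows — a self-exciting intensity model in the spirit of aftershock forecasting, with illustrative parameter values that are assumptions for this example, not figures from any deployed system:

```python
import math

def crime_intensity(t, past_crimes, mu=0.2, k=0.5, omega=1.0):
    """Self-exciting ('aftershock') model: the expected crime rate at time t
    is a baseline rate (mu) plus a contribution from each past crime, and
    each contribution decays exponentially as time passes."""
    rate = mu  # background rate (e.g., burglaries per day in this area)
    for t_i in past_crimes:
        if t_i < t:
            rate += k * omega * math.exp(-omega * (t - t_i))
    return rate

# A burglary yesterday (t=0) elevates today's (t=1) predicted rate
# above the baseline; with no recent crimes, only the baseline remains.
print(crime_intensity(1.0, []))     # baseline only
print(crime_intensity(1.0, [0.0]))  # elevated by the recent burglary
```

Ranking small map cells by this kind of intensity score is, roughly, how such a system decides which blocks deserve extra patrols on a given day.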
Prediction is a time-honored law enforcement practice. A search warrant is nothing but an educated guess that police will find something, and cops have always tried to stay a step ahead of the bad guys. But the tools for prediction started getting more sophisticated in 1994 when the New York Police Department, under the leadership of Commissioner William J. Bratton, rolled out a new process called CompStat. The NYPD created a database for all of its precincts and started holding weekly meetings with personnel to discuss emerging crime patterns. An entire CompStat Unit was formed to map out the crimes and analyze the quantitative and qualitative data that came pouring in. Bratton moved to the Los Angeles Police Department in 2002, and soon police departments nationwide were adopting CompStat as their operating procedure. Ferguson attributes part of CompStat’s popularity to the fact that the National Institute of Justice offered grants for police departments to implement it, an attractive offer during tight budget years.
In 2007, both the Charleston and North Charleston police departments adopted CompStat. In Charleston, members of the command staff attend a CompStat meeting every Tuesday, mapping out hot spots for crime and planning their patrols accordingly. Violent crime in the city went down 50 percent from 2007 to 2011, while robberies dropped from 269 to 159, auto thefts fell from 444 to 267, and burglaries went down from 746 to 448. But it’s hard to say how much of an impact CompStat had on those numbers. Nationwide, the crime rates for serious offenses, including murder, rape, and assault, have dropped since the early 1990s, a fact that criminologists have attributed to everything from longer prison sentences to an aging population to Roe v. Wade.
There is anecdotal evidence to suggest CompStat has been working. North Charleston Mayor Keith Summey, in a 2011 interview during his re-election campaign, credited hot-spot policing with reducing the city’s once-infamous homicide rate. David Cheatle, a deputy chief at the North Charleston Police Department, is an advocate for the proactive approach. He says that when crooks started robbing Charleston-area Chinese restaurants at gunpoint in the fall of 2011, his department caught on to the trend early, thanks to CompStat, and realized that the robbers were catching employees off guard by entering through unlocked back doors. North Charleston police responded by sending officers to every Chinese restaurant in the city and warning the managers to lock up their back doors. The robbers hit 30 restaurants in all, but thanks to the preparation, Cheatle says, only one of the incidents took place in the City of North Charleston.
The ultimate informant
For better or for worse, IBM’s predictive policing program decreases law enforcement’s reliance on human brainpower. A computer can consider more variables and deduce patterns more quickly than a police captain scratching his chin and staring at pinpoints on a map. But Scott Cook, an IBM spokesman, says some of the conclusions drawn by the program will still sound like common sense to longtime police officers.
“Talk to an officer that’s got 20 years on the force and whatnot, and he knows where the hot spots are,” Cook says. “He knows that in the middle of August, if it’s 110 degrees and 80-percent humidity, tempers are short and oftentimes there are issues that happen on this street or that street … Now, in the case of Charleston, you have the potential of over 400 officers who have that type of knowledge. So what it’s doing, then, is to augment that knowledge and the instinct of a police officer with some hard data to back it up.”
One way to think about predictive policing, as mentioned in Andrew Ferguson’s report, is to compare it to informant tips. Police might hear that a certain house is a crack den or that a certain street corner is a hotbed of prostitution, but they need some sort of corroboration before they go kicking down doors and interrogating loiterers.
So, is a computer-generated crime forecast a more reliable tip than a phone call from an informant? Ferguson says there haven’t been any conclusive studies yet on the accuracy of crime predictions, although the LAPD is currently working on one.
In any case, just as a tip is only as good as its source, a forecast can only be as reliable as its data inputs. And in Memphis, the city IBM holds up as a poster child for predictive policing, that data might be compromised. In January, The Commercial Appeal reported that Memphis Police Department officials had discovered 79,000 memos written by officers from January 2006 through July 2011. The department inspected a sample of 20,000 memos and found that one in 15 should have been upgraded to full-fledged reports, meaning that — if the sample is representative of the whole — about 5,250 crimes were left out of official counts during the years Blue CRUSH has been in effect. That also calls into question the department’s claim of a 30-percent reduction in serious crimes.
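The newspaper’s arithmetic holds up: extrapolating the sample’s one-in-15 upgrade rate to all 79,000 memos gives roughly the figure cited. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the Commercial Appeal figures.
total_memos = 79_000   # memos found, January 2006 through July 2011
upgrade_rate = 1 / 15  # share of the 20,000-memo sample meriting a full report

# If the sample is representative, this many crimes never made it
# into the official reports that feed Blue CRUSH's forecasts.
missing_reports = total_memos * upgrade_rate
print(round(missing_reports))  # roughly 5,267 — in line with "about 5,250"
```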
But if all goes well with a predictive policing program, Cook says it can be a useful decision-making tool.
Let’s say police receive an anonymous tip that someone is doing drugs at a certain house. “How reliable that tip is, you know, previously officers would heavily rely on that,” Cook says. “Now they have the opportunity to have quick access to information that will help them validate that tip or disprove that tip. So instead of privacy being encroached upon, quite the opposite is true.”
What could possibly go wrong?
If you live on certain streets in Charleston, you’ve seen police officers perform plenty of Terry frisks. The pat-downs are named for the landmark 1968 case Terry v. Ohio, which gave police the authority to search a person’s body for weapons without a warrant or probable cause, provided that the officer believes the person might be armed and dangerous.
On June 16, 2012, at about 2:30 a.m., an officer spotted a black male riding his bicycle down the middle of Kennedy Street just north of the Crosstown. The man had no safety lights on his bike, so the cop flagged him down for violating the city’s nighttime bicycle safety laws. The officer noticed the man was avoiding eye contact and exhibiting “very nervous behavior,” according to an incident report. Because the bicyclist was “in a neighborhood with a high crime volume, including narcotics and weapons law violations,” the cop reasoned in the report, he needed to perform a Terry frisk.
While patting down the man’s pant legs, the officer felt several soft bags in the man’s cargo pockets. The officer asked permission to search the man’s pockets, and the man consented (he didn’t legally have to). The search yielded no weapons, but the eight bags of marijuana and $412 in cash that the officer found were enough to justify an arrest on a charge of possession with intent to distribute.
Most weeks, there are numerous incidents like this one in the Charleston police reports. Sometimes the officer pulls a driver over for a broken taillight, and other times he stops a person on the sidewalk for loitering, but variants on one phrase — “in a neighborhood with a high crime volume” — are echoed throughout many of the reports where police conduct a search on a person’s clothes or vehicle.
If Charleston police adopt IBM’s full predictive policing suite, “high crime volume” will take on a more precise meaning. The term will no longer be applied to neighborhoods, but to individual blocks. Victoria Middleton, executive director of the American Civil Liberties Union of South Carolina, says stop-and-frisk policing in computer-identified hot spots could hamper the good work done by officers who patrol on foot and get to know the people they protect, people who sometimes end up being key informants because of the trust relationship they have with an officer.
“It can create problems in the targeted communities when trust is eroded,” Middleton says. “That would definitely be counterproductive if the goal is to reduce and prevent and solve crimes.”
Stop-and-frisk policing also presents an opportunity for racial profiling if, as has been alleged, Terry frisks are more common in low-income and minority neighborhoods. In New York City, the home of CompStat, a class-action lawsuit has been filed against the police department alleging that officers routinely subject people to searches based on racial bias. And the accusers have some data to back up their claims: In 2010, police made some 600,000 stops in New York, and an analysis by The New York Times found that blacks and Hispanics — who together make up little more than half of the city’s population — accounted for 85 percent of those stopped. In one predominantly black Brooklyn neighborhood, the Times found that officers gave vague reasons for half of the stops, including “furtive movement.”
Toward the end of his paper, Andrew Ferguson lists a few potential problems that courts and police departments should be wary of if they adopt predictive policing:
• Constitutional quandaries. A court case involving predictive policing might one day cause a reassessment of the Fourth Amendment. Ferguson writes that such a court case could cause judges to “rethink the currently overly flexible approach to reasonable suspicion, based on a concern that this technology could be manipulated or used in a discriminatory manner.”
• Transparency. If a judge asks an officer to give his reason for stopping or arresting a person, and the officer mentions the computer-generated crime forecast, that officer will have to be able to explain how the computer arrived at that prediction, as well as how accurate and timely it is. “Metrics for evaluation will need to be created, and then it will be up to courts to address the line-drawing on a case-by-case basis,” Ferguson writes.
• Reliability. If data collection is flawed, the predictive policing system will give flawed forecasts.
One final note on the future of public safety: If under-reporting is a problem in Charleston, as it is in Memphis, then a predictive policing model could skew the police department’s priorities toward certain types of crime and away from others. As Ferguson notes in his paper, studies have shown that domestic violence, petty thefts, and retaliatory acts among violent criminals are not always as well-reported as other crimes. “If financial fraud or high-end drug dealing is under-reported compared to car thefts,” Ferguson writes, “then a system based on predictive policing and data will focus on the latter at the expense of the former.”