So I listened to The Daily yesterday, which I nearly always enjoy, but it seemed a bit… off to me. But interesting.
It was a detailed telling of the story of the innocent man in the Detroit area who was arrested because facial recognition software misidentified him as the guy who had stolen some watches from a store.
I’ve heard about this guy several times over the last few months, and each time the story has been brought up, it has been in the context of demonstrating that the use of facial recognition by the cops is highly problematic — and unfair to minorities, since the algorithms used in this country are way better at recognizing white men.
But is that really the case? After having listened to the story, I’m thinking the problem here is not that it’s bad or unfair technology, but that the cops used it improperly. I mean, they really screwed up here. But that would seem to call for better procedure, not abandoning the tool.
No doubt about it, what happened to Robert Williams was a nightmare. And inexcusable.
Cops came out of nowhere to arrest this guy on the basis of nothing but an erroneous digital identification. The software was SO bad that when they had him in interrogation and showed him the photo that was supposedly of him, he held it up to his face and said, “What, you think all black men look alike?” And the cops, to their credit, saw that it wasn’t him, and let him go.
But this was after he had spent the night in jail. It was after he had been disrespected, and cuffed, in front of his children at his home. It was after a cop told his wife that “we assume you’re his baby mama…”
It was gross. It was all kinds of cringe-worthy, and this man will carry around the humiliation of the experience for life. And it all happened because they had zeroed in on this man based on nothing but the facial recognition match.
But here’s the thing. The podcast started out by saying “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.”
So, it’s only happened once? That we know of, of course…
There’s probably a reason it’s a rare occurrence. As the NYT’s Kashmir Hill explains, the police use the technology because they “feel that face recognition is just too valuable a tool in their tool set to solve crimes. And their defense is that they never arrest somebody based on facial recognition alone, that facial recognition is only what they call an investigative lead. It doesn’t supply probable cause for arrest.”
So… it’s like hearing a name from a snitch or something. Or getting an anonymous tip on the phone. It’s a reason to look at somebody, but not a reason to arrest him.
What happened here, it seems, is that the cops grossly violated the rules, and made a lazy, unjustified arrest of an innocent man. Which you don’t have to have facial recognition to do. You just have to be a bad cop. You don’t need special equipment; you can base a false arrest on all sorts of sloppy police work.
In Williams’ case, the cops didn’t even go see the suspect to see if he looked anything like the picture. They called him on the phone, and when he refused to come in (reasonably assuming it was a crank call), they sent a patrol car to arrest him.
Seems to me having the software as a tool could be very helpful, as long as it’s used as a lead, and not as cause for arrest — which is exactly how it was misused in this one case.
This podcast kept referring to the “bias” involved. But isn’t the problem less one of “bias,” and more one of not following rules? Wouldn’t adhering faithfully to those rules eliminate the problem with using this tool?
I suspect some of my libertarian friends will disagree. But I’d like to hear the reasons…
Like with those cop stories the other day, I urge everyone to listen to the podcast, or at least read the transcript, before commenting…
By the way, on the central reason they used the word “bias”…
They were talking about the fact that the software recognizes white guys better than black guys or Asians. But the word seemed wrong, in this sense. Of course, they were probably using it in some esoteric statistical sense, but still… (Seems like if there’s a “bias” there, it’s against the guilty white guys — they’re more likely to get caught and nailed, accurately. Which is a good thing, right? Since they’re guilty and all…)
But it’s definitely a problem that needs to be addressed. We need better software. And that’s doable — as the reporter noted in the podcast, the Chinese have shown the algorithms can be made to work well on non-white faces.
Mind you, I’m not saying it’s great the Chinese do it well. It’s creepy knowing WHY the Chinese are so good at it. On that point, I’m in agreement with my libertarian friends.
But if you’re not using it to control the population politically, and ARE using it to catch crooks, then it’s worth improving the algorithm so it does a good job with everybody.
That makes sense, right?
I have been retired from law enforcement for ten years and technology has changed. But I think you are spot on saying facial recognition tools should be used to develop leads. It’s an investigative technique rather than positive identification. It might be more useful when searching photos for a match with a record subject if the databases are established with positive biometrics. In other words, when someone is arrested, the record of arrest and prosecution (RAP) is supported by fingerprints or DNA. The information can be compared with a suspect and positively rule the suspect in or out.
Thanks for weighing in, Mark! Your professional perspective is of particularly high value here…
As imperfect as biased algorithms are, many activists and researchers believe that the reverse — a system that could perfectly identify any individual — would be far worse, as it could spell the end of privacy as Americans know it. Do we need BIG BROTHER watching us?
And that’s the libertarian view — which I tend to reject, but the horror of what the Chinese are doing with the technology does tend to bolster the libertarian point of view.
And yes, up to now there has been more concern expressed about the tech being too GOOD than too bad, as reflected in that podcast.
It’s not just the libertarian view, Brad; please explain why only libertarians would think that way. And please, avoid explaining with the heuristic that there is a difference between Libertarian and libertarian. As I’m sure Bill will agree, police have a role in our country, but they’ve had too much power, supported by a chorus of “law and order,” for too long.
Of course there is a difference between libertarian and Libertarian. And when I use the lower case, I’m referring to something that is more libertarian than, say, communitarian or whatever. People might refer to it as classical liberalism, but I think “libertarian” communicates more clearly.
If person A is more concerned about civil liberties than person B, then person A is more libertarian than person B. It’s a perfectly valid term of description — and one that I’ve made good use of my entire career. I can’t help that a fringe political party uses the term as its name — but if I’m talking about the party, I’ll capitalize it. Although I’ll say in that party’s defense that it is being more descriptive than the Democrats and Republicans. In fact, both major parties seem at times to be competing to be more libertarian than the Libertarians — although they don’t quite succeed, usually. The Libertarian Party members tend to be pretty much out there…
Often the Democrats are more democratic than Republicans, and sometimes they’re not. And sometimes I doubt whether the Republicans even know what “republican” means — either in a classic “as opposed to a monarchy” sense, or in the more restrictive sense that I think Madison used it, referring to representative democracy. In the sense of the system he favored, as opposed to what justifiably horrified the Founders — direct democracy (shudder)…
You didn’t answer: “It’s not just the libertarian view Brad; please explain why only libertarians would think that way.” You did explain how you see libertarians as being different than Libertarians, which is exactly what I asked you not to do: “And please, avoid explaining with the heuristic explanation that there is a difference between Libertarian and libertarian.”
At bottom, Brad does not believe in a “right to privacy.”
Well, certainly not as a legal, Constitutional right, as set out by Griswold.
Respecting other people’s privacy is a fundamental principle of a civilized society — one that is regrettably violated on a massive scale in our society today. I’m reminded of something Edmund Burke said, which was cited in a Bret Stephens column earlier this week:
Privacy is about manners, not law.
But no, I don’t see a constitutional right to privacy, no matter how many times I’m told it’s there. The “penumbras” and “emanations” just don’t do it for me. Had the Framers meant to protect a right to privacy, they’d have used their words…
Burkean notions of how government and society should operate went down the tubes in the Articles of Confederation period. It demonstrated to the Founders that the hope they had placed in private virtue, a “republic of virtue,” was inadequate to provide for a properly functioning order.
Anyway, it doesn’t matter what you believe. The Supreme Court through numerous decisions has determined that an implicit right to privacy exists in the Constitution.
It doesn’t matter what I believe? Good.
Maybe it will stop bothering you…
Sorry, but as far as the law of the land is concerned, it is what it is.
Sorry; I guess I didn’t understand the question. Although on the second thing, I was arguing with your request. Of COURSE that difference exists, and it’s essential.
I don’t know what it is I need to explain about thinking that way being libertarian.
Some folks see the technology as useful and non-threatening. Other people — the more libertarian ones, the folks who worry more about what they perceive as threats to civil liberties — see it as a Big Brother intrusion.
What’s not clear about that?
Maybe if you offered an argument addressing why it’s NOT a libertarian concern, I could address it more helpfully…
Sure it’s a libertarian concern; my point is that it’s also a concern to others who do believe in a right to privacy. Like Ken, Bill and me…
Deciding between privacy or no privacy is a false choice. We want both, don’t we? Facial recognition is here and will remain. The question is where to draw the lines of acceptable use. In the private enterprise / commercial realm, the line is mostly determined by the market. In government, we tend to freely use intrusive technology until regulation sets limits.

The earlier reference to fingerprints proves the point. When fingerprints first began to be relied upon, there were no standards. Currently, there are generally accepted standards and an infrastructure for using fingerprints taken from arrests (rolled) and comparing them with suspect prints (latent). But it is still the wild west for facial recognition, which has neither standards nor infrastructure.

In the context of using facial recognition for criminal justice purposes, the same infrastructure used for fingerprints might be suitable for the retention and use of digital facial images. That doesn’t mean the technology for one necessarily works for the other. Instead, it means there is already a community that knows how to manage policy restricting access to the repository of data it maintains for approved purposes. What’s scary is the practice of harvesting images from any source. This should be stopped, and that would be a good start at reining in what looks like a free-for-all at present.
An earlier comment of mine was poorly stated and has bothered me since: “Deciding between privacy or no privacy is a false choice. We want both, don’t we?” The comment might be better expressed as, “Deciding between anonymity or recognition is a false choice.” I should think most people want anonymity (privacy) and recognition (no privacy) at different times. Hence my comment that we want both privacy and no privacy.
I got busted, dissed, and jailed on trumped-up/false charges when I was 16. After seeing me at a known weirdo avenue, some narcotics agents brought some hashish over, borrowed my pipe, left, then came back with a warrant signed by J.P. Strom (simple sell for the pipe/resin, 1970). While I was trying to sleep (prescribed Valium) at the hoosegow (taken in the bathroom), they killed the guy next to me; hung him; a staged suicide, and no one was ever the wiser… (true story, I swear).
Since then, all the cops have been great, and have saved my life several times (Karma?)
They were using “bias” in both senses of the term. Algorithms are written by human beings, who have both conscious and unconscious biases that can end up in the software. Even data that looks neutral may be biased. For example, suppose when writing an algorithm for home loans you find that previous home ownership in the family of your borrower is a good indicator that they will repay the loan. However, a particular group, for various historical reasons, has not had a history of ownership until recently. Using the algorithm, their loan requests would get a lower score, even though members of that group may be no more at risk of default than anyone else.
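The loan example can be sketched in a few lines of code. Everything here is invented purely for illustration — the feature names, weights, and numbers are hypothetical, not any real scoring model:

```python
# Hypothetical illustration of proxy bias in a loan-scoring algorithm.
# All features and weights below are made up for this sketch.

def loan_score(income, debt_ratio, family_owned_home):
    """Toy credit score: higher is better."""
    score = 0.5 * (income / 1000)   # reward income
    score -= 40 * debt_ratio        # penalize debt load
    if family_owned_home:           # the proxy feature
        score += 15                 # "family history of ownership" bonus
    return score

# Two applicants identical on everything that actually predicts repayment...
a = loan_score(income=60_000, debt_ratio=0.3, family_owned_home=True)
b = loan_score(income=60_000, debt_ratio=0.3, family_owned_home=False)

print(a - b)  # 15.0
```

The two applicants end up 15 points apart purely because of the proxy feature — which is how a group with no recent history of ownership gets scored down even when its members are no more likely to default.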
Sorry, the above comment should have been in response to the original article and not Bill’s comment.
Thanks. But I don’t think the term works either way, at least not in the way they seemed to be using it.
You cite an example that shows the way an algorithm could, in effect, be biased against minorities. That works.
This facial recognition problem doesn’t make it more likely that black people will be recognized. It makes it more likely that when it DOES point to a black person, it will be wrong. And we know that. Which is why you must not use it as the basis for an arrest — especially if it points to a black guy, but frankly you should need more evidence to arrest a person of ANY ethnicity…
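To put rough numbers on that point, here’s a toy Bayes-rule calculation. Every rate here is invented for illustration — the point is only the mechanism: a higher false-match rate for one group makes a “match” much weaker evidence for that group.

```python
# Purely illustrative numbers showing why an unequal false-match rate
# makes a software "match" less trustworthy for one group than another.

def match_precision(false_match_rate, true_match_rate=0.95, prior=0.001):
    """P(actually the suspect | software says 'match'), via Bayes' rule.

    prior: chance a randomly searched person really is the suspect.
    """
    hits = true_match_rate * prior
    false_alarms = false_match_rate * (1 - prior)
    return hits / (hits + false_alarms)

print(match_precision(false_match_rate=0.001))  # ~0.49 for the low-error group
print(match_precision(false_match_rate=0.01))   # ~0.09 for the high-error group
```

With these made-up numbers, a ten-times-higher false-match rate drops the chance that a flagged person is actually the suspect from roughly a coin flip to under one in ten — exactly why a match can only be a lead, never probable cause.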
In the case of facial recognition it may be the problem of “they all look alike to me.” If a Chinese algorithm is better at identifying Asian faces it could be a case of it being easier to pick up facial details and nuances when observing someone who looks like you.
Except, of course, that they can train an algorithm to do it….