June 19, 2021

Ban the Scan! AI, Facial Recognition, and Human Rights
BY: TARTLE

Scanners!

Facial recognition software is becoming more and more common, and it has plenty of uses. One is unlocking your phone. Another is letting stores recognize incoming customers so they can make personalized offers. However, the biggest and most controversial use of facial recognition software is in law enforcement. From federal agencies down to smaller municipalities, it is becoming common to see cameras mounted on street posts and the sides of buildings. Law enforcement applications range from catching speeders in the act to recording crimes in progress. The main area of concern, though, is how this software is often used to search for suspects.

How can that be bad, you ask? Surely tracking down suspected criminals can't be a bad thing, can it? It depends on how you go about it. If you have a good description of the suspect, or even a photograph, then you are in good shape. The software will find him, and he can be quickly and easily apprehended. But what happens when you don't have a good description of a suspect? What if all you have is a very generic description: skin color, hair color, height, just a few basics that don't do much to narrow down who your scanner is looking for?

In that case, you're unfairly profiling people based on merely superficial characteristics. That leads to a few things. One, police resources get wasted running down false suspects. Two, those false suspects are actually innocent people who are now getting harassed, people who may develop resentment toward the police after such treatment. All because you didn't have a better description to go on than "tall black man, athletic build, wearing blue jeans." True, sometimes that's all there is to work with. However, a real person can spot the little behavioral cues that separate a real suspect from just a face in the crowd. An algorithm sifting through the images collected by hundreds of cameras around a city has a much harder time. Unfortunately, the real people end up tied up chasing the false positives the software generates.
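
To see why those false positives pile up, it helps to run some rough numbers. The sketch below uses purely hypothetical figures (none of them come from any real deployment), but it shows how even a seemingly accurate system, pointed at an entire city, buries the handful of genuine matches under a mountain of innocent faces:

```python
# Purely hypothetical numbers for illustration; not figures from any real system.
faces_scanned_per_day = 1_000_000   # faces captured across a city's cameras
actual_suspects_present = 10        # genuine matches hiding in that crowd
false_match_rate = 0.001            # 0.1% of innocent faces wrongly flagged
true_match_rate = 0.95              # 95% of real suspects correctly flagged

false_alerts = (faces_scanned_per_day - actual_suspects_present) * false_match_rate
true_alerts = actual_suspects_present * true_match_rate

print(f"False alerts per day: {false_alerts:,.0f}")   # roughly 1,000
print(f"True alerts per day:  {true_alerts:,.0f}")    # roughly 10
share_innocent = false_alerts / (false_alerts + true_alerts)
print(f"Share of alerts pointing at innocent people: {share_innocent:.1%}")  # about 99%
```

Under those assumptions, for every real suspect flagged, officers would end up chasing down roughly a hundred innocent people.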

There is also the sad fact that facial recognition software is currently not great at recognizing the differences in faces across ethnic groups. Most famously, Apple's software for unlocking its phones was pretty bad at telling Asian faces apart, at least when it was first released. Other systems have a harder time distinguishing African faces. Why is that? Is the software racist? Of course not; it's code. It only acts on the data that's fed into it, and it can only do so based on how it is designed.

All right, are the coders racist then? Probably not. So how does that happen? A simple explanation is that the coders are coding based on their own experience, and the fact is, Silicon Valley is mostly full of white people. So they code for those facial characteristics. Even when training the software and refining the code to pick out finer differences, the faces being scanned for that purpose are probably white. Why? Because they are the faces most readily available. If the software were being developed in Shanghai, there is a good chance it would do great at picking out Asian faces and not be as good at picking out white ones.
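
You can only fix that kind of skew if you measure it first. Below is a minimal sketch of how a team might check false match rates per demographic group, assuming they have verification results labeled by group; the field names and records are illustrative, not drawn from any real product:

```python
from collections import defaultdict

# Hypothetical verification results labeled by demographic group.
# Field names and records are illustrative only.
results = [
    {"group": "A", "same_person": False, "model_says_match": True},
    {"group": "A", "same_person": False, "model_says_match": False},
    {"group": "B", "same_person": False, "model_says_match": False},
    {"group": "B", "same_person": True,  "model_says_match": True},
    # ... many more records from a real evaluation set would go here
]

impostor_trials = defaultdict(int)  # different-person pairs tested, per group
false_matches = defaultdict(int)    # different-person pairs wrongly accepted

for r in results:
    if not r["same_person"]:        # only impostor (different-person) pairs count
        impostor_trials[r["group"]] += 1
        if r["model_says_match"]:
            false_matches[r["group"]] += 1

for group, trials in impostor_trials.items():
    fmr = false_matches[group] / trials
    print(f"Group {group}: false match rate = {fmr:.2%} over {trials} impostor pairs")
```

If one group's false match rate comes out several times higher than another's, the training data, not the people being scanned, is where the problem lives.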

As an example, back in school I had a friend whose parents were missionaries in Africa. He said that when he first came back to the US, everyone in class looked the same to him. He had grown used to the differences among the black faces he'd spent the last year or so with, so the white people he was now in contact with seemed like bland copies of one another, while to me each one looked completely distinct. Frame of reference matters, and very often people don't realize how much their everyday environment shapes the things they do on a daily basis.

So how do we deal with this? We can't just accept the unfair profiling of people through poorly trained facial recognition software; the opportunity for abuse and rights violations is just too high. The clear answer is that the coders need to do a better job of training their software to recognize different ethnic groups. Get out there and make the effort to feed some unfamiliar faces into the algorithm. Yes, we know there are deadlines. But what if we told you that you could do it without leaving your desk? What if someone – like TARTLE – had a whole marketplace of people who might be willing to share images of their faces to help you with that? That way you get better software, and innocent people won't be harassed by police whose time would be better spent tracking down actual criminals.
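
Once those consented face images are in hand, even a simple rebalancing step keeps any one group from dominating training. The sketch below is a deliberately simple illustration of that idea; the function, field names, and records are hypothetical, and a real pipeline would also lean on re-weighting and augmentation:

```python
import random

def balance_by_group(samples, group_key="group", seed=0):
    """Down-sample every demographic group to the size of the smallest one,
    so no single group dominates the training set."""
    random.seed(seed)
    by_group = {}
    for sample in samples:
        by_group.setdefault(sample[group_key], []).append(sample)
    target = min(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(random.sample(members, target))
    random.shuffle(balanced)
    return balanced

# Hypothetical records: an image path plus a self-reported group label,
# contributed with consent (for example, through a data marketplace).
dataset = [
    {"image": "img_0001.jpg", "group": "A"},
    {"image": "img_0002.jpg", "group": "B"},
    {"image": "img_0003.jpg", "group": "A"},
    {"image": "img_0004.jpg", "group": "B"},
]
training_set = balance_by_group(dataset)
print(f"{len(training_set)} balanced training samples")
```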

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.

Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby. Where humanity steps into the future and source data defines the path.

Alexander McCaig (00:26):

Everybody, welcome back to TARTLE Cast. We're here to talk about banning the scan. Ban the Scan. New York City issue going on here with facial recognition. As you can see, we have their ... From Amnesty International, we have their website up. Ban the Scan. Do you want your face to be used to track you? No, not me.

Jason Rigby (00:46):

No, not me either. Yeah.

Alexander McCaig (00:47):

And that's only one function of it. They're trying to emotionally capture you on it. But, the greater function here is how the scanning of the facial geometry essentially lacks in the algorithm's accuracy.

Jason Rigby (01:03):

Yeah, and I think what we can speak to is the two main issues. Number one is their machine learning-

Alexander McCaig (01:08):

Yep.

Jason Rigby (01:09):

... and how their machine learning is racially profiling. And then number two, and we'll talk about that and they have that up here for Brown and Black people and other minority groups. And then number two, which I think is the most important, more than any of them, is making sure and understanding the intent of the user. So, what is law enforcement using this for? Are they using it to track dissent? Or are they using it to track crime? Big difference.

Alexander McCaig (01:36):

Yeah.

Jason Rigby (01:36):

You know, like they were talking in this article, I thought it was really interesting, Alex. They were talking about ICE, as far as trying to find people that were illegal aliens, so-called illegal aliens, whatever, which is a funny word to me. [crosstalk 00:01:50] I don't like that. Yeah.

Alexander McCaig (01:50):

Aliens are funny. I like aliens.

Jason Rigby (01:53):

I mean, it's like who's illegal and who's not? And like what?

Alexander McCaig (01:56):

Yeah, who's borders [crosstalk 00:01:58] is that?

Jason Rigby (01:57):

I went five miles over on the speed limit. That's illegal.

Alexander McCaig (02:00):

Yeah.

Jason Rigby (02:02):

Somebody's trying to better themselves.

Alexander McCaig (02:04):

You're a couple changes in the genome away from being illegal.

Jason Rigby (02:08):

So they used it to track people and then to see where other groups were at. And then to be able to track them. Terrorism, I get. You know, I mean, it's great. If you can have, you know-

Alexander McCaig (02:20):

If you already have a picture of the guy's face and you know who you're looking for.

Jason Rigby (02:22):

Yes, exactly.

Alexander McCaig (02:23):

But if you don't know who you're looking for, and you take this presupposed idea that's been imprinted into the algorithm to say, "Look for this type of person."

Jason Rigby (02:32):

Yeah, now we have-

Alexander McCaig (02:33):

This type of person.

Jason Rigby (02:34):

Right.

Alexander McCaig (02:34):

Well, how do you define that type? So now it's not the fact that the facial recognition is wrong. It's the fact that it's being used to profile inappropriately.

Jason Rigby (02:41):

Mm-hmm (affirmative).

Alexander McCaig (02:42):

Yes, there is data showing that maybe someone of a specific minority group may be committing a certain amount of crimes in a specific area. So you need to look for that. But what you find is that people that have never committed a crime, ever, the facial recognition is choosing them because their skin color is Black-

Jason Rigby (02:59):

Yes.

Alexander McCaig (02:59):

... because the algorithm, the facial geometry, is not picking up on it properly. The shadows in the context of the face, as opposed to a white male. So the white male, it may be scanning them, but they're less likely to be flagged as a risk so that the cops can go out there and frisk that person, whatever precinct they're in, as opposed to someone that may be Black.

Jason Rigby (03:15):

Yes. And one of the things that they talk about, and I want to get into, is the weaponizing by law enforcement against marginalized communities. So this is really interesting. This was by, I want to make sure I get the ... Matt [Mohamadie 00:03:35]. I'm probably mispronouncing his last name. Sorry, Matt. New Yorkers should be able to go about their daily lives without being tracked by facial recognition. Other major cities across the U.S. have already banned facial recognition. So we know Portland and a couple other big cities have.

Alexander McCaig (03:46):

Yeah.

Jason Rigby (03:46):

They also said this. "For years, the NYPD has used facial recognition to track tens of thousands of New Yorkers, putting New Yorkers of color at risk of false arrest and police violence."

Alexander McCaig (03:56):

Yeah. Put someone at risk. So the algorithm, if it is biased, which it is, is automatically going to put someone in a bucket that says, "You need to worry about these people." When the person's like, "I've never done anything wrong. I paid all my bills. I've never harmed a fly."

Jason Rigby (04:11):

How many times have we talked about putting people in buckets, Alex?

Alexander McCaig (04:14):

We used to say this all the time. The second you define for somebody ... Here's the difference to me. Remember you said it was fine with terrorism? So let's talk about this. The second you define someone in a bucket-

Jason Rigby (04:24):

Right.

Alexander McCaig (04:24):

Okay. You're removing the right that they have, the free will and the choice of them to define who they are, with their own ego characteristics, perspective of what reality is for them. Now, for a terrorist, they're a terrorist because they've created some form of terror. They've done something that is a bad act towards the greater group causing mass fear.

Jason Rigby (04:41):

Right.

Alexander McCaig (04:42):

That person has defined themselves publicly as a terrorist.

Jason Rigby (04:46):

Yes.

Alexander McCaig (04:46):

They put themselves in that bucket. Why is it that you are allowing the free will of a terrorist to choose to be a terrorist, but any other random Joe Schmo on the street, you're going to put them in a bucket that you think is better suited for him?

Jason Rigby (04:56):

Yeah. And that is huge. And that's what I wanted to ... Whether you agree with facial recognition, we should have cameras in cities or not, I'm not so concerned about that as much as I am concerned about that ability to be able to allow them through their own free will and their actions, their actions should dictate surveillance.

Alexander McCaig (05:16):

Yeah. Their actions should dictate surveillance, not surveillance dictating an action upon them.

Jason Rigby (05:21):

Yeah. Like saying, "Oh, let me follow this person because they're Black. And let me see in this nice neighborhood that they're in, let's watch them for 10 minutes and see if they commit a crime."

Alexander McCaig (05:30):

Yeah, you going to do anything?

Jason Rigby (05:30):

Or they're up to something shady.

Alexander McCaig (05:31):

And so then you got these big brother eyes on you, but it's the data, right? It's what data have you put into that algorithm to feed it that says, "Wait a minute, red flag alert. This person, which we say is in this bucket, we need to go find them." And then you're pushing people around. And it becomes more of an aggressive police state because it's not about the cops showing up when a crime is being committed.

Jason Rigby (05:55):

Right.

Alexander McCaig (05:55):

It's saying, "We think crime is going to be committed. Let's show up right now."

Jason Rigby (05:59):

Yeah, Jumaane Williams, the New York City Public Advocate, said this. And listen to this statement. And I want you to talk about this. "Facial recognition is just the latest version of bias-based policing, a digital stop and frisk."

Alexander McCaig (06:12):

Of course it is. You remember after 9/11, anybody that was Muslim or whatever it might be, if you were in a Hijab, you're going to get stopped.

Jason Rigby (06:19):

Right.

Alexander McCaig (06:19):

Regardless.

Jason Rigby (06:20):

At the airport [crosstalk 00:06:21] .

Alexander McCaig (06:20):

It doesn't matter if you've been a good person your entire life.

Jason Rigby (06:22):

Right.

Alexander McCaig (06:23):

You know? It didn't matter if Sunni or Shiite, they're stopping you. So when you look at the context of this, is now you've elevated the efficiency of profiling by using a machine. But the thing is, is your profiling biased? Or is it true profiling depending on how the person has defined themselves through the crime they've committed, or whatever characteristics they choose to live by.

Jason Rigby (06:42):

Yeah, they did a federal study that showed that there were between 10 and 100 times. So think, 10 to 100 times more false matches among Black women than white men.

Alexander McCaig (06:52):

Yeah. And I mean, listen, that's an issue with the software and [inaudible 00:06:56] people. Why is it still used?

Jason Rigby (06:56):

Yes.

Alexander McCaig (06:57):

You're still using the same horrible algorithm, pulling in the same bad data and then applying it to real life.

Jason Rigby (07:04):

But this is the problem with data in general.

Alexander McCaig (07:07):

This happens to everything. You see it in marketing.

Jason Rigby (07:09):

Yes.

Alexander McCaig (07:10):

You see it with how large corporate businesses make their decisions. It's all this bad data coming in on this biased algorithm that has no truth to it, where you're defining what the world looks like when the rest of the world's like, "What are you doing? There's 7 billion of us and a million of you. You're nothing. You're a pimple on a gnat's ass." And you're defining what the gnat looks like.

Jason Rigby (07:31):

Yeah. It's the proverbial ... And we talked about the software. It's a proverbial garbage in, garbage out.

Alexander McCaig (07:35):

Yeah.

Jason Rigby (07:36):

And we'll spend millions and millions of dollars, even billions, on these machine learning algorithms. And then we're just pumping that machine. It's not ... The machine learning is agnostic. It's the data that we're giving it.

Alexander McCaig (07:48):

That's correct. And-

Jason Rigby (07:49):

It's our bias that we're feeding it.

Alexander McCaig (07:51):

And if we don't focus on specific human rights and perspectives that are truthful, that value an individual's free will and how they define themselves, as a human right to do so, this is me, this is who I am, this is how I'm going to define it ... Until they break a law within a society I've chosen to live in, it's not for you to say I belong in a specific bucket. It's not for you to apply your biased machine learning algorithm to me, and then redefine my life through force if need be, to stop and frisk me. Because your bad data is saying, "This is what needs to be focused on." It's just wrong. It doesn't align with human nature and how humans are supposed to be treated. Again, we're elevating the technology, but missing all the aspects of how it should be elevating the human being. It's not. It's separating us. It makes us ... It disunifies us.

Jason Rigby (08:37):

Does this technology elevate humanity?

Alexander McCaig (08:40):

It does if used properly.

Jason Rigby (08:41):

Yes.

Alexander McCaig (08:42):

If you're really using it with an awesome intent, that's non-biased, non-dogmatic, non-racial profiling, and allows people to define who they are, yeah, of course. Are they consenting to it? How about this. For the city of New York, did they send out a memo to everybody living in New York, "Do you consent to this?" Did you poll or ask these people? Was that a part of your survey? Over here in New Mexico, they asked the people at Rio Rancho, "Do you want speeding cameras?" They're all like, "No. Hell no." And they didn't put them in. But it seems like New York City just did whatever the hell they wanted to do.

Jason Rigby (09:17):

As most municipalities that feel like they have this ... And this is a problem that we see in government transparency. And I don't want to get into number six of our big seven, because I don't want to-

Alexander McCaig (09:28):

Well, it's corporate government, corporate transparency. There was no transparency in that choice.

Jason Rigby (09:31):

No.

Alexander McCaig (09:32):

What do you think? Because there's a security threat, it just gives you the right to do whatever the heck you want? It's about the people there that want to feel secure. So if this is something they're asking for, well then that is the decision of the whole group of people there in New York City. Not for a few select people in a specific area that no one's ever had any contact with to decide for the 16 million people that live in New York, how they should live their lives, and how they should be observed.

Jason Rigby (09:54):

Yeah. Put an initiative out there.

Alexander McCaig (09:56):

Yeah.

Jason Rigby (09:56):

And then let the people vote.

Alexander McCaig (09:57):

Just ask. What's so hard to ask?

Jason Rigby (09:58):

Yes.

Alexander McCaig (09:59):

Ask. You know?

Jason Rigby (10:01):

Yeah. But we know better.

Alexander McCaig (10:03):

Yeah, of course.

Jason Rigby (10:03):

That's the mentality.

Alexander McCaig (10:04):

We know ... Yeah, it's the government mentality, we know better.

Jason Rigby (10:06):

And now we have data to back our, "We know better."

Alexander McCaig (10:09):

Well, yeah. Your data is telling you, you clearly don't know anything. Right?

Jason Rigby (10:13):

So real quick, before we go, Alex. Big seven. Number three is human rights.

Alexander McCaig (10:19):

Number three is human rights. And when you're looking at human rights, there are things that we are all born with regardless of color. And we've seen that through the use of data and many other things, it's not respected, in many facets all over the world. So how do we come together collectively with our data, defining it ourselves, to say, "These are the rights. This is what we want to do, what bucket we want to be in, how we see our own future, how I decide I want to be policed, how I get to say what my security looks like." That's that human right. We all have the right to live and be safe, and make our own choices and be responsible. That's on us. You know, we're born human beings. We're all given that. Right? But now it's not for someone else to define that for us. And so when you start to lose that human right, right there, and it becomes a government right-

Jason Rigby (11:06):

Now we have an issue.

Alexander McCaig (11:07):

Now we have an issue.

Jason Rigby (11:08):

Yeah, so big seven. How can someone, if they say, "I agree with you," how is TARTLE helping human rights? And how can I be a part of TARTLE?

Alexander McCaig (11:17):

Glad you brought that up. So, say for instance we have here at Amnesty International, on the back. Maybe Amnesty International comes on TARTLE and they say, "We have a data pack, we want to ask you about surveillance in New York City to New York people." Great, you can fill it out. Amnesty International is going to pay you for that data. And if you want to double down ... So after you've shared that data with them so they can act upon it, you can say, "Great. I have all these earnings I've had from sharing my data on TARTLE, I want to donate them back to Amnesty International.

Alexander McCaig (11:42):

Because I back up ... I'm an activist for what they're doing. That's something I can align with." In a matter of three button clicks on the marketplace, you can do that. And that's how you can act upon those things within the big seven, within human rights. If you don't like what's going on with people tracking your face, and you want to support the human rights of other people who sit outside of your own socioeconomic or demographic area, go ahead. Help them out. Do that by sharing your data and sharing your earnings back to the things you care about. You can do that on tartle.co.

Jason Rigby (12:11):

Yeah, T-A-R-T-L-E dot C-O.

Alexander McCaig (12:21):

Thanks.

Speaker 1 (12:21):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig, and Jason Rigby. Where humanity steps into the future, and source data defines the path. What's your data worth?