Tartle Best Data Marketplace
June 11, 2021

Google Captcha Class Action Controversy

BY: TARTLE

Google Antitrust

Coming out of Massachusetts is a new antitrust lawsuit against Google. This is hardly an unheard-of situation. However, this one is pretty interesting. The plaintiff is alleging that Google is unfairly using unpaid labor for its own profit. How? It’s really quite ingenious. You know those captcha/recaptcha things we all deal with from time to time so we can convince the algorithm that we aren’t bots? As you’ve probably noticed, they tend to be some form of distorted text: letters that are blurry, poorly written, or with extra lines through them for some reason. Sometimes you even need to do that twice. 

Well, the lawsuit alleges that at least that second captcha text is being used by Google to train its text-recognition AI. That matters because Google Books scans in thousands upon thousands of books, digitizes them, and uploads them to the internet for free. Those scans, though, are often from rough copies that might have pen marks on them, suffer from damage due to age, or just have artifacts from the original printing. By making use of the captcha system, Google is teaching its AI to better deal with those problems and thus create more accurate digital versions of the work. 
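The two-word scheme described above can be sketched roughly like this: one word is a control with a known answer, the other is an unknown scan, and the readings typed by users who pass the control check get pooled as transcription votes. This is a minimal illustration of the idea only; the function, names, and voting approach here are assumptions for the sake of the example, not Google's actual implementation.

```python
def check_response(control_answer, user_control, user_unknown, votes):
    """Verify the user against the known control word; if they pass,
    record their reading of the unknown scanned word as a vote."""
    if user_control.strip().lower() != control_answer.strip().lower():
        return False  # failed the real security check; reading discarded
    # The unknown word has no known answer, so each passing user's
    # reading is tallied; agreement across many users becomes the label.
    key = user_unknown.strip().lower()
    votes[key] = votes.get(key, 0) + 1
    return True

votes = {}
check_response("rivers", "rivers", "harbour", votes)
check_response("rivers", "rivers", "harbour", votes)
check_response("rivers", "wrong", "hamster", votes)  # rejected, vote not counted
# votes is now {"harbour": 2}
```

The point the lawsuit turns on is visible in the sketch: only the control word does security work, while the second word is pure transcription labor.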

So what’s the big deal? Sure, they’re sort of tricking people into doing work for them but at least they are doing it for the end of making more knowledge available to more people. Obviously that’s a good thing in itself. It’s a little shady that they didn’t really tell people about it, but if that was all it was, no harm no foul. Yet, as often happens, Google goes right ahead and takes it to the next step, eating up some of that good will that we would otherwise have. How so?

There are newspapers (yes, they still exist) and magazines that are interested in digitizing their archives. Universities and governments are also trying to get their documents, books, and research converted to a digital format. Along comes Google, offering its scanning and conversion software to take care of that. For a substantial price, of course. That is where problems arise, because now Google is profiting off the software that you (and everyone else) helped develop. It’s pretty understandable why that might bother someone. And in all honesty, if Google were upfront about what it was doing with the data gleaned from the captcha system, it would be fine. People would at least have the opportunity to know what they were doing and why. But again, Google doesn’t tell people about that. It seems only fair that since Google is making a profit, it should offer at least a shekel or two for the trouble. 

In fact, if they were willing to both be upfront about how they are using the data and offer something to the people helping them with it, it would be great if Google expanded the program. They could scan documents not just individually but as part of a searchable and cross-referenced database that would be a massive benefit to researchers everywhere, making it easy to find not just the one item you’re looking for but several related documents that could then be compared and contrasted. It would make Wikipedia look like LiveJournal. Hopefully, Google or someone else gets to work providing something like that in the near future. 

In the meantime, this kind of situation is exactly why TARTLE exists. For a long time now, businesses have been benefiting from data generated by others. We’re offering you the ability to take control of your data again by signing up with us and funneling all of your data through TARTLE, which allows you to actually be rewarded when you share it. If you even want to share it. The choice is yours.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.

Feature Image Credit: Envato Elements
FOLLOW @TARTLE_OFFICIAL

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and sourced data defines the path.

Alexander McCaig (00:25):

Welcome back everybody to TARTLE Cast. Not turtle, TARTLE. It was funny, I got a call from-

Jason Rigby (00:31):

Not tardell.

Alexander McCaig (00:32):

Yeah. I got a call from someone the other day and they're like, "Oh, blah, blah, blah. Your company, turtle." And I'm like, "Do you see a U in that?"

Jason Rigby (00:41):

Yeah. Yeah. You are not involved in ...

Alexander McCaig (00:47):

I don't know what you're talking about. What is a turtle? I'm not involved with any turtle thing. It's TARTLE. T-A-R-T-L-E.

Jason Rigby (00:55):

You have an article here. Not an urticle, an article.

Alexander McCaig (00:59):

Yes.

Jason Rigby (01:00):

It's going to be fire.

Alexander McCaig (01:01):

Yeah. I don't know anything about it, but I'd said, why didn't you just tee me up on this because I think this could be-

Jason Rigby (01:08):

No, I love this article. Yeah. It's from bizjournals.com. A Massachusetts woman's lawsuit accuses Google of using free labor to transcribe books and newspapers. Here's a class-action lawsuit.

Alexander McCaig (01:19):

Can we talk about the title though?

Jason Rigby (01:20):

Yeah.

Alexander McCaig (01:21):

A woman's lawsuit.

Jason Rigby (01:22):

Yeah.

Alexander McCaig (01:23):

What the hell? What's the difference between if it's a man or a woman?

Jason Rigby (01:25):

I think it's just a clickbaity type-

Alexander McCaig (01:27):

No, I know that.

Jason Rigby (01:28):

... or a Massachusetts woman's lawsuit.

Alexander McCaig (01:30):

It's absurd how they're trying to coerce us into reading the article. I'm already triggered. Go ahead.

Jason Rigby (01:34):

Yeah. A Massachusetts woman has launched a potential class-action lawsuit against Google, saying the tech giant is unfairly using people around the world to help it transcribe books, addresses, and newspapers as part of its reCAPTCHA program, which is Google's own version of a security program called CAPTCHA.

Alexander McCaig (01:50):

Yeah. CAPTCHA.

Jason Rigby (01:51):

We've all experienced that.

Alexander McCaig (01:51):

We've all done the reCAPTCHA.

Jason Rigby (01:52):

Google requires website users to decipher words displayed as distorted images on their computer screens before users can access websites or certain features.

Alexander McCaig (01:59):

Oh, I get it. I get it, get it, get it. I understand. So, the reCAPTCHA that you're going through and it has a blurry word or something like that is helping them with their scanning algorithms on books to read text of different formats.

Jason Rigby (02:12):

Yes.

Alexander McCaig (02:12):

So, if they get human input for their machine learning models, it's beneficial to the algorithms when it reads a scanned book that may have an obtuse looking text or a bad scan itself.

Jason Rigby (02:23):

Yeah. So, the-

Alexander McCaig (02:26):

Hold on a second. That is hysterical. It's horribly efficient. But I mean, this is interesting. So, go ahead. So, how did she come to file the lawsuit?

Jason Rigby (02:32):

So, the company is unfairly profiting off the labor of others as users enter extra words that aren't part of any security requirement, according to a suit filed Friday in US district court in Massachusetts. She signed up for a Gmail account and was forced to respond to a reCAPTCHA prompt by typing two words that were displayed as two distorted images. And she did not receive any compensation for transcribing the second word. The suit states that Google's reCAPTCHA program is unique in that it often requires you to type two separate words displayed as a distorted image to proceed past the program.

Alexander McCaig (02:59):

I always wondered why it has me do it twice.

Jason Rigby (03:00):

The suit alleges that Google has used the service to transcribe scanned images of books and newspapers-

Alexander McCaig (03:07):

See, this is what I said ahead of time.

Jason Rigby (03:08):

... to decipher addresses found in images captured as part of the Google Street View project. But, because there's two sides to every story: as Google continues to digitize books, newspapers, and other images that cannot be deciphered by scanning technology, the suit alleges the company has turned to humans to do the work for them.

Alexander McCaig (03:24):

That's interesting because the payback ... this is a twofold thing. This could go both ways.

Jason Rigby (03:31):

I'm going to tell you the bad section here in a second.

Alexander McCaig (03:33):

Yeah. It's like we're scanning books so that we can open up the availability and knowledge to everyone across the globe. That sounds pretty cool.

Jason Rigby (03:41):

That sounds cool. Yeah.

Alexander McCaig (03:43):

We're not forthright in telling you why we were having you do two separate images.

Jason Rigby (03:48):

Right.

Alexander McCaig (03:49):

Right. This has been the problem with tech altogether. And this is classic Google fashion. They take a black-box approach and lack any sort of transparency to tell you why you're putting in labor. And if you tell people why you were doing it they'd feel better about it. And if you didn't tell them and they had to do it twice, well give them a couple of shekels for it. So, let's continue. This is really interesting.

Jason Rigby (04:15):

Yeah. The lawsuit states that Google profits by selling transcribed versions of books through Google Books and by selling its services to companies like the New York Times to transcribe archived versions of its newspapers. The damages could be in the millions and could affect hundreds of thousands of people around the world. According to the suit, in 2009, Google acquired reCAPTCHA to create searchable archives of old newspapers and books, and this is according to Forbes. A Google spokesman said the company had no comment. So, I want to get into this part about-

Alexander McCaig (04:46):

It's interesting, the ethics thing.

Jason Rigby (04:47):

Yeah. So, I love that they're taking books ... I've gone on there and I've seen books that are out of print that Google books has on there.

Alexander McCaig (04:56):

Yeah. And I like the availability of where I can read it

Jason Rigby (04:59):

And being a history buff and loving history, I want every book ... because of Fahrenheit 451, I think that's what that movie was. I had told you about that old 1960s movie, where everybody was just watching flat-screen TVs.

Alexander McCaig (05:11):

Yeah. Flat screen TVs and firemen burning books.

Jason Rigby (05:13):

Firemen burning books because they didn't want people to understand. I don't care how absurd the book is, whether it's the Satanic Bible or it's the ... I don't care what book it is. It needs to be archived and you need to have it so that it's safe forever.

Alexander McCaig (05:29):

It's the storage of knowledge. I'm actually trying to wrap my head around this in my brain because I'm torn.

Jason Rigby (05:39):

Right. Well, Google should have run it like Wikipedia. They should have said you're helping us save history.

Alexander McCaig (05:44):

Yeah. That would have been it. And if there was ... Okay, I got it now. So, you are helping us preserve history by going through and we're using the power, or the collective power of all of us to enhance these algorithms so it can better learn and read textual information or information about our world and then put that into a digital format efficiently so that it can then be re-read and reassessed. That's cool. People will be cool with that. But then if you're going to turn around now and make a business, monetary, financial stream out of it, now you've crossed the line. Now that compensation should be going back to those people.

Jason Rigby (06:29):

Well, Google books should be free. Or a version of it should be free. And then there should be a huge search engine like Google So Good app. And how many college students would this help if they could go in there and then search and then next thing you know, it gave you this list of books that were talking about that subject?

Alexander McCaig (06:44):

Okay. So, I think this is cool because I personally have an interesting way of learning through books, how I teach myself. So, you know I read a lot, you read a lot. Look at all the books around us. We're smart. Like a lawyer, you walk in, he's got a bunch of legal books on the wall. He must know what he's doing.

Alexander McCaig (06:59):

What I do with a book is as I'm reading it, if I understand something or don't understand it, or if I found something that was of great value, whatever it might be, intellectual or emotional value, it's underlined. I notate whatever the conscious thought was that came with it. And then I tabulate it so I can go back and reference. And then as I'm finished with a book, what I do is I take that and I transfer all of the notes, all the underlined sentences, whatever it might be into a completely separate journal. And then I have a huge body of books that I do that with. And they all get transferred and translated into this one single journal.

Alexander McCaig (07:44):

Then I can go back and I look for things that corroborate with one another. I look for things that have correlations across these different books. If authors are sharing ideas, whatever it might be. And then I'm like, "Oh, that's something interesting. There must be truth within this." Or, "There must be a falseness because of how contradictory this is across all the boards. Someone doesn't know the truth of it." And so that's how I learn. So, I look for the truth between the interconnectedness of knowledge.

Alexander McCaig (08:13):

And so, when I look at what Google is doing, I would think it would be amazing if what I do physically could be put into an algorithm of scanned material. And as I begin to type things in, like a specific word or subject, it talks about how many books it goes through, the correlations, and the differences between those books. And if something popped that up-

Jason Rigby (08:34):

That is machine learning helping humanity.

Alexander McCaig (08:37):

That's machine learning helping humanity. That's super cool. But if I'm going to write to New York Times and like, "Hey, New York Times, we just had hundreds of millions of people across the globe do reCAPTCHA and they've refined our algorithms. And so we can scan all your old newspapers and we can guarantee you that it's going to read them properly, but you got to pay us a huge chunk of change." Well, who really put in the labor to do that?

Jason Rigby (08:57):

Right.

Alexander McCaig (08:58):

And if we consider TARTLE, you put in the labor of creating data, you need to share in that value of that data creation. You deserve that compensation. It's like when you go to work, you want payment for the labor you put in.

Jason Rigby (09:14):

Yeah. And I think this is a prime example of why TARTLE exists.

Alexander McCaig (09:17):

Yeah. This is a prime example of why we exist. And this article, in this class action lawsuit also shows you how society is changing. Their perception of the amount of time and work we put into online is changing and how people are being much more critical of how the larger tech bodies, the whales in the tech industry, how they continue to take that black-box approach. There's very smart people that work at those companies.

Jason Rigby (09:48):

This is amazing.

Alexander McCaig (09:49):

Yeah.

Jason Rigby (09:49):

That they've come up with this.

Alexander McCaig (09:50):

They're so intelligent. And intelligence is a double-edged sword. You always have to remember, you got to see full circle, like, where is the labor actually coming from? It's cool that they put the collective power of everyone working together on it, but they weren't forthright and honest for why it was happening. And they weren't talking about how they were making a big buck off of it.

Jason Rigby (10:09):

Yeah. I mean, they made no comment, but it'll be interesting too, when the lawsuit hits, we'll have to keep up to date on it and maybe make a part two on this.

Alexander McCaig (10:17):

Yeah. No, I thought that was terribly interesting. It's terribly interesting.

Jason Rigby (10:22):

I mean, just think of the future real quick before we go, and you can comment on this, where AI, I mean, you have machine learning now, but AI gets to the point to where it can know every book that's ever been in the world. And then we could ask it questions and it can make decisions based off of facts that it correlated through these books that it found these truths in.

Alexander McCaig (10:44):

Yeah. I think that'd be phenomenal.

Jason Rigby (10:46):

To access that wisdom.

Alexander McCaig (10:47):

It's like digitizing the library of Alexandria before the damn Romans came in and burnt the thing down.

Jason Rigby (10:54):

Yeah. Sad.

Alexander McCaig (10:55):

Yeah. It was all that knowledge in one place. And you had to go and do it yourself, do your own indexing. But now we have the power of assimilating that with efficiency in a digitized format. There's incredible answers in that collective power, and I'm excited to see what that future looks like.

Jason Rigby (11:11):

Or Google can absorb Wikipedia-

Alexander McCaig (11:14):

And then Wikipedia goes to crap.

Jason Rigby (11:18):

Or maybe they can make it a not-for-profit like it is and then combine Google Books with Wikipedia, and there you go.

Alexander McCaig (11:24):

Or Google could-

Jason Rigby (11:24):

Could you imagine, here's an article ... or make a version of Wikipedia. Here's an article, here's what everybody's saying about it, and then here's what books are saying about it.

Alexander McCaig (11:35):

That's so cool because then it gives you all the different phase states and the different truths, depending on how they are in perspective.

Jason Rigby (11:41):

I mean, yeah. If I type in [crosstalk 00:11:44].

Alexander McCaig (11:43):

Societal perspective. A doctoral perspective and an author perspective, that's cool.

Jason Rigby (11:50):

Yeah. And then combine white papers with it, what a leading scientist is saying.

Alexander McCaig (11:54):

It's awesome. And then, it comes to some sort of relative output where you're like, "Oh, this is interesting." And then the thing that'd determine, this is probably not truthful at the moment or it lacks any truth.

Jason Rigby (12:02):

Yeah. It would come out.

Alexander McCaig (12:04):

It'd be cool.

Jason Rigby (12:05):

Yeah. It'd probably be the number one research site in the world.

Alexander McCaig (12:08):

Yeah, it would. And there's no reason why Google couldn't donate to Wikipedia. What do they ask for like two bucks from people?

Jason Rigby (12:14):

Yeah, exactly.

Alexander McCaig (12:15):

We could have this whole thing done in 30 minutes if all the Wikipedia users ... Google, give them some money.

Jason Rigby (12:21):

They don't have a lot.

Alexander McCaig (12:22):

Yeah. They have very little.

Jason Rigby (12:23):

Razor-thin profit margins.

Alexander McCaig (12:26):

They're almost in the red; there's barely a chance they're in the black.

Jason Rigby (12:34):

ABC. That's fine. Don't they have a big sign up at Google? And doesn't it say, "Do good," or something like that? I'm going to look it up real quick because I want to make sure [crosstalk 00:12:42].

Alexander McCaig (12:42):

Bring this up right now. Do you know how many ethics officers they've gone through at Google? Just that statement alone I just made is worrisome to think about why are you going through so many different ethics officers? What is possibly happening over there?

Jason Rigby (12:56):

No, it's a phrase using Google's corporate code of conduct and it says, "Don't be evil." And they have that really big, "Don't be evil."

Alexander McCaig (13:04):

Evil's a function of perspective. They could be thinking they're doing a lot of good, but good for one person could be evil for another.

Jason Rigby (13:13):

Yeah.

Alexander McCaig (13:13):

It's way too ... Yeah, I get it.

Jason Rigby (13:17):

Yeah. It's way too Luciferic.

Alexander McCaig (13:19):

Yeah. It's some sort of a Luciferian agenda. It's nebulous in perspective because how Google manipulates data, you can manipulate that sentence any way you want.

Jason Rigby (13:32):

This is on Wikipedia.

Alexander McCaig (13:35):

Thank you. Wikipedia.

Jason Rigby (13:38):

"Don't be evil" is a phrase used in Google's corporate code of conduct, which it also formerly preceded as a motto. Following Google's corporate restructuring under the conglomerate Alphabet Inc. in October 2015, Alphabet took "Do the right thing" as its motto, also forming the opening of its corporate code of conduct. The original motto was retained in Google's code of conduct, now a subsidiary of Alphabet. In April 2018, the motto was removed from the code of conduct's preface and retained in its last sentence. So, they changed it from "Don't be evil" to "Do the right thing."

Alexander McCaig (14:03):

Yeah. And it shouldn't even be, do the right thing, is to be, do something truthful.

Jason Rigby (14:09):

Yeah. How is what you're doing helping humanity?

Alexander McCaig (14:12):

Yeah. Just what is the right thing?

Jason Rigby (14:15):

Yes.

Alexander McCaig (14:16):

Who's to say that it was right-

Jason Rigby (14:17):

Globally.

Alexander McCaig (14:18):

When you act truthfully, that's obviously the right thing to do.

Jason Rigby (14:21):

Right.

Alexander McCaig (14:22):

Because that's just logically how that falls into place. But I don't know, it's just too biased in perspective [inaudible 00:14:31]. There are too many dogmas that could be built into saying, I'm doing the right thing.

Jason Rigby (14:34):

Yeah. Or don't be evil.

Alexander McCaig (14:36):

The church thought that the children's crusade was the right thing.

Jason Rigby (14:39):

Yeah. They could say don't be evil.

Alexander McCaig (14:41):

Yeah. Don't be evil but go out here and fight and slaughter.

Jason Rigby (14:46):

Yeah. Because then you get into this dogma that becomes ... I mean, look at right now with masks and not wearing masks and all that. I mean, look at how polarizing it is.

Alexander McCaig (15:01):

Yeah. It's very polarizing. It's almost hypocritical when I think about this statement. Google does so much work with data. Data's agnostic, non-dogmatic, it's not biased. It's not political. But your corporate policy already has biases and dogmas built into it. It's like you're contradicting the thing that supports who you are.

Jason Rigby (15:23):

Yeah. And then YouTube alone, I mean, so many people have gotten taken off of YouTube just based off of-

Alexander McCaig (15:30):

It's just a little hypocritical.

Jason Rigby (15:33):

Yeah. I think so.

Alexander McCaig (15:34):

I'm not saying Google's bad. I'm just saying that it's hypocritical.

Jason Rigby (15:38):

Yeah, we'll get zero views from this video because guess where this video's going on?

Alexander McCaig (15:42):

Yeah, because it's going on YouTube. It's just not very well thought through. That's all. They didn't take the time to think it through.

Jason Rigby (15:46):

Their algorithms, like, they're talking about Google, Google, Google, Google. Don't be evil. What?

Alexander McCaig (15:51):

Yeah. What is that? Corporate policy? Red flag. Do not let that go on YouTube.

Jason Rigby (15:57):

I know, but I do appreciate the technology. I appreciate everything that is happening in this world right now to make our lives better. And I think the end goal, I'm optimistic for humanity. And as the world becomes more flat and we know about that book, The World is Flat, and as the information begins to communicate, especially in third world countries, I'm super excited about the future of how it's going to elevate humanity. Just knowledge alone will elevate.

Alexander McCaig (16:27):

Yeah. Education, transparency, and availability of resources. And the newer iterations of TARTLE that we have coming out, everything in the system explains what it is, how it's working, and why it's there. So, it's not going to be like a reCAPTCHA event where you're like, "Why am I doing this?" It's not black-box like that. It's not a Google approach. You're going to know exactly what's going on and why it's there.

Jason Rigby (16:56):

I love that.

Alexander McCaig (16:57):

Everything has a reason and we're forthright and truthful about that.

Jason Rigby (17:00):

Yeah. And that's the way it should be.

Alexander McCaig (17:01):

Yeah. That's exactly how it should be.

Speaker 1 (17:03):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and sourced data defines the path. What's your data worth?

June 11, 2021

Google Captcha Class Action Controversy

Captcha Class Action Controversy

Google Captcha Class Action Controversy

SHARE: 
BY: TARTLE

Google Antitrust

Coming out of Massachusetts is a new antitrust lawsuit against Google. This is hardly an unheard of situation. However, this one is pretty interesting. The plaintiff is alleging that Google is unfairly using unpaid labor for its own profit. How? It’s really quite ingenious. You know those captcha/recaptcha things we all deal with from time to time so we can convince the algorithm that we aren’t bots? If you’ve noticed, they tend to be some form of distorted text. Letters that are blurry, poorly written or with extra lines through them for some reason. Sometimes you even need to do that twice. 

Well, the lawsuit is alleging that at least that second captcha text is being used by Google to train its text recognition AI. That matters because Google Books scans in thousands upon thousands of books, digitizes them and uploads them to the internet for free. Those scans though, are often from rough copies that might have pen marks on them, suffer from damage due to age or just have artifacts from the original printing. By making use of the captcha system, Google is teaching its AI to better deal with those problems and thus create more accurate digital versions of the work. 

So what’s the big deal? Sure, they’re sort of tricking people into doing work for them but at least they are doing it for the end of making more knowledge available to more people. Obviously that’s a good thing in itself. It’s a little shady that they didn’t really tell people about it, but if that was all it was, no harm no foul. Yet, as often happens, Google goes right ahead and takes it to the next step, eating up some of that good will that we would otherwise have. How so?

There are newspapers (yes, they still exist) and magazines that are interested in digitizing their archives. Universities and governments are also trying to get their documents, books and research converted to a digital format. Along comes Google, offering their scanning and conversion software to take care of that. For a substantial price of course. That is where problems arise. Because now Google is profiting off the software that you (and everyone else) helped develop. It’s pretty understandable why that might bother someone. And in all honesty, if Google were upfront about what they were doing with the data gleaned from the captcha system, then it would be fine. People would have the opportunity at least to know what they were doing and why. But again, Google doesn’t tell people about that. It seems only fair that since Google is making a profit, they should offer at least a shekel or two for the trouble. 

In fact, if they were willing to both be upfront about how they are using the data and offer something to the people helping them with it, it would be great if Google expanded the program. They could scan documents not just individually but as part of a searchable and cross-referenced database that would be a massive benefit to researchers everywhere, making it easy to find not just the one item you’re looking for but several related documents that could then be compared and contrasted. It would make Wikipedia look like LiveJournal. Hopefully, Google or someone else gets to work providing something like that in the near future. 

In the meantime, this kind of situation is exactly why TARTLE exists. For a long time now, businesses have been benefiting from data generated by others. We’re offering people the ability to take control of their data again by signing up with us and funneling all of your data through TARTLE, which allows you to actually be rewarded when you share it. If you even want to share it. The choice is yours.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.

Feature Image Credit: Envato Elements
FOLLOW @TARTLE_OFFICIAL

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanities steps into the future and sourced data defines the path.

Alexander McCaig (00:25):

Welcome back everybody to TARTLE Cast. Not turtle, TARTLE. It was funny, I got a call from-

Jason Rigby (00:31):

Not tardell.

Alexander McCaig (00:32):

Yeah. I got a call from someone the other day and they're like, "Oh, blah, blah, blah. Your company, turtle." And I'm like, "Do you see a U in that?"

Jason Rigby (00:41):

Yeah. Yeah. You are not involved in ...

Alexander McCaig (00:47):

I don't know what you're talking about. What is a turtle? I'm not involved with any turtle thing. It's TARTLE. T-A-R-T-L-E.

Jason Rigby (00:55):

You have an article here. Not an urticle, an article.

Alexander McCaig (00:59):

Yes.

Jason Rigby (01:00):

It's going to be fire.

Alexander McCaig (01:01):

Yeah. I don't know anything about it, but I'd said, why didn't you just tee me up on this because I think this could be-

Jason Rigby (01:08):

No, I love this article. Yeah. It's from bizjournals.com. A Massachusetts woman's lawsuit accuses Google of using free labor to transcribe books and newspapers. Here's a class-action lawsuit.

Alexander McCaig (01:19):

Can we talk about the title though?

Jason Rigby (01:20):

Yeah.

Alexander McCaig (01:21):

A woman's lawsuit.

Jason Rigby (01:22):

Yeah.

Alexander McCaig (01:23):

What the hell? What's the difference between if it's a man or a woman?

Jason Rigby (01:25):

I think it's just a clickbaity type-

Alexander McCaig (01:27):

No, I know that.

Jason Rigby (01:28):

... or a Massachusetts woman's lawsuit.

Alexander McCaig (01:30):

It's absurd how they're trying to coarse us into reading the article. I'm already triggered. Go ahead.

Jason Rigby (01:34):

Yeah. A Massachusetts woman has launched a potential class-action lawsuit against Google saying the tech giant is unfairly using people around the world to help it transcribe books, addresses, and newspaper as part of its reCAPTCHA program which is Google's own version of a security program called CAPTCHA.

Alexander McCaig (01:50):

Yeah. CAPTCHA.

Jason Rigby (01:51):

We've all experienced that.

Alexander McCaig (01:51):

We've all done the reCAPTCHA.

Jason Rigby (01:52):

Google requires website users to decipher words displayed as distorted images on their computer screens before users can access websites or certain features.

Alexander McCaig (01:59):

Oh, I get it. I get it, get it, get it. I understand. So, the reCAPTCHA that you're going through and it has a blurry word or something like that is helping them with their scanning algorithms on books to read text of different formats.

Jason Rigby (02:12):

Yes.

Alexander McCaig (02:12):

So, if they get human input for their machine learning models, it's beneficial to the algorithms when it reads a scanned book that may have an obtuse looking text or a bad scan itself.
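The human-input loop Alexander is describing here can be sketched in a few lines. This is a toy illustration, not Google's actual pipeline: the idea is that each user's answer for a blurry word is a vote, and once enough independent users agree, that answer becomes a training label for the OCR model. The vote thresholds below are made-up values for the sketch.

```python
from collections import Counter

def label_from_responses(responses, min_votes=3, min_agreement=0.7):
    """Aggregate human captcha answers for one unknown word image.

    Returns the majority transcription once enough users agree,
    otherwise None (keep showing the image to more users).
    The thresholds are illustrative assumptions, not real values.
    """
    if len(responses) < min_votes:
        return None
    # Normalize case and whitespace so "Cat " and "cat" count as one vote.
    word, count = Counter(r.strip().lower() for r in responses).most_common(1)[0]
    return word if count / len(responses) >= min_agreement else None

# Each accepted label becomes an (image, text) training pair,
# which is exactly the "human input for machine learning models"
# benefit being described.
```

So a word only graduates into the training set once the crowd converges on it; disagreement just keeps the image in circulation.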

Jason Rigby (02:23):

Yeah. So, the-

Alexander McCaig (02:26):

Hold on a second. That is hysterical. It's horribly efficient. But I mean, this is interesting. So, go ahead. So, how did she come to do the lawsuit?

Jason Rigby (02:32):

So, the company is unfairly profiting off the labor of others as users enter extra words that aren't part of any security requirement, according to a suit filed Friday in US District Court in Massachusetts. When she signed up for a Gmail account, she was forced to respond to a reCAPTCHA prompt by typing two words that were displayed as two distorted images. And she did not receive any compensation for transcribing the second word. The suit states that Google's reCAPTCHA program is unique in that it often requires you to type two separate words displayed as distorted images to proceed past the program.
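The two-word mechanism the suit describes boils down to a simple asymmetry: one word is a control word whose text is already known, and only it actually gates access; the second word is an undeciphered scan, and the user's answer is just recorded as transcription work. A minimal sketch of that flow, with names and structure assumed for illustration rather than taken from the real reCAPTCHA implementation:

```python
def check_recaptcha(control_answer, unknown_answer, known_text, corpus):
    """Sketch of the two-word flow alleged in the suit.

    Only the control word (whose text the system already knows)
    decides pass/fail. The second word's answer is logged as a
    free transcription of a scan nobody has deciphered yet.
    This is an assumed illustration, not Google's actual code.
    """
    passed = control_answer.strip().lower() == known_text.lower()
    if passed:
        # The user proved they're human, so their guess for the
        # unknown word is treated as trustworthy labor and stored.
        corpus.setdefault("pending_transcriptions", []).append(unknown_answer)
    return passed
```

Which is why the plaintiff's point lands: typing the second word buys the user nothing; it only feeds the transcription corpus.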

Alexander McCaig (02:59):

I always wondered why it has me do it twice.

Jason Rigby (03:00):

The suit alleges that Google has used the service to transcribe scanned images of books and newspapers-

Alexander McCaig (03:07):

See, this is what I said ahead of time.

Jason Rigby (03:08):

... to decipher addresses found in images captured as part of Google's Street View project. There's two sides to every story, but as Google continues to digitize books, newspapers, and other images that cannot be deciphered by scanning technology, the suit alleges the company has turned to humans to do the work for them.

Alexander McCaig (03:24):

That's interesting because the payback ... this is a twofold thing. This could go both ways.

Jason Rigby (03:31):

I'm going to tell you the bad section here in a second.

Alexander McCaig (03:33):

Yeah. It's like we're scanning books so that we can open up the availability and knowledge to everyone across the globe. That sounds pretty cool.

Jason Rigby (03:41):

That sounds cool. Yeah.

Alexander McCaig (03:43):

We're not forthright in telling you why we were having you do two separate images.

Jason Rigby (03:48):

Right.

Alexander McCaig (03:49):

Right. This has been the problem with tech altogether. And this is classic Google fashion. They take a black-box approach and lack any sort of transparency to tell you why you're putting in labor. And if you tell people why you were doing it they'd feel better about it. And if you didn't tell them and they had to do it twice, well give them a couple of shekels for it. So, let's continue. This is really interesting.

Jason Rigby (04:15):

Yeah. The lawsuit states that Google profits by selling transcribed versions of books through Google Books and by selling its services to companies like the New York Times to transcribe archived versions of its newspapers. The damage could be in the millions and could affect hundreds of thousands of people around the world. According to the suit, in 2009, Google acquired reCAPTCHA to create searchable archives of old newspapers and books, and this is according to Forbes. A Google spokesman said the company had no comment. So, I want to get into this part about-

Alexander McCaig (04:46):

It's interesting, the ethics thing.

Jason Rigby (04:47):

Yeah. So, I love that they're taking books ... I've gone on there and I've seen books that are out of print that Google Books has on there.

Alexander McCaig (04:56):

Yeah. And I like the availability, where I can read it.

Jason Rigby (04:59):

And being a history buff and loving history, I want every book ... because Fahrenheit 451, I think that's what it is, that movie. And I had told you about that old 1960s movie, where everybody was just watching flat-screen TVs.

Alexander McCaig (05:11):

Yeah. Flat screen TVs and firemen burning books.

Jason Rigby (05:13):

Firemen burning books because they didn't want people to understand. I don't care how absurd the book is, whether it's the satanic Bible or it's the ... I don't care what book it is. It needs to be archived and you need to have it so that it's safe forever.

Alexander McCaig (05:29):

It's the storage of knowledge. I'm actually trying to wrap my head around this in my brain because I'm torn.

Jason Rigby (05:39):

Right. Well, Google should have run it like Wikipedia. They should have said you're helping us save history.

Alexander McCaig (05:44):

Yeah. That would have been it. And if there was ... Okay, I got it now. So, you are helping us preserve history by going through and we're using the power, or the collective power of all of us to enhance these algorithms so it can better learn and read textual information or information about our world and then put that into a digital format efficiently so that it can then be re-read and reassessed. That's cool. People will be cool with that. But then if you're going to turn around now and make a business, monetary, financial stream out of it, now you've crossed the line. Now that compensation should be going back to those people.

Jason Rigby (06:29):

Well, Google Books should be free. Or a version of it should be free. And then there should be a huge search engine like Google So Good app. And how many college students would this help if they could go in there and search, and then next thing you know, it gave you this list of books that were talking about that subject?

Alexander McCaig (06:44):

Okay. So, I think this is cool because I personally have an interesting way of learning through books, how I teach myself. So, you know I read a lot, you read a lot. Look at all the books around us. We're smart. Like a lawyer, you walk in, he's got a bunch of legal books on the wall. He must know what he's doing.

Alexander McCaig (06:59):

What I do with a book is as I'm reading it, if I understand something or don't understand it, or if I found something that was of great value, whatever it might be, intellectual or emotional value, it's underlined. I notate whatever the conscious thought was that came with it. And then I tabulate it so I can go back and reference. And then as I'm finished with a book, what I do is I take that and I transfer all of the notes, all the underlined sentences, whatever it might be into a completely separate journal. And then I have a huge body of books that I do that with. And they all get transferred and translated into this one single journal.

Alexander McCaig (07:44):

Then I can go back and I look for things that corroborate with one another. I look for things that have correlations across these different books. If authors are sharing ideas, whatever it might be. And then I'm like, "Oh, that's something interesting. There must be truth within this." Or, "There must be a falseness because of how contradictory this is across all the boards. Someone doesn't know the truth of it." And so that's how I learn. So, I look for the truth between the interconnectedness of knowledge.
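The cross-referencing Alexander does by hand, looking for passages in different books that corroborate or contradict each other, can be approximated very crudely in code. The sketch below uses simple word-set overlap (Jaccard similarity) between underlined notes; the function names, the notes format, and the threshold are all assumptions for illustration, and a real system would use far better text similarity than this.

```python
import re

def passage_overlap(a, b):
    """Jaccard similarity between the word sets of two passages --
    a toy stand-in for 'these two authors are circling the same idea'."""
    wa = set(re.findall(r"[a-z']+", a.lower()))
    wb = set(re.findall(r"[a-z']+", b.lower()))
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def correlate_notes(notes, threshold=0.2):
    """Given (book, note_text) pairs, return pairs of books whose
    notes overlap past an arbitrary threshold -- candidates for a
    shared truth, or for a contradiction worth inspecting."""
    hits = []
    for i in range(len(notes)):
        for j in range(i + 1, len(notes)):
            (book_a, text_a), (book_b, text_b) = notes[i], notes[j]
            if book_a != book_b and passage_overlap(text_a, text_b) >= threshold:
                hits.append((book_a, book_b))
    return hits
```

The point of the toy is the workflow, not the math: notes from many books go into one pool, and the machine surfaces which books keep touching the same ideas, which is the "interconnectedness of knowledge" search done at scale.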

Alexander McCaig (08:13):

And so, when I look at what Google is doing, I would think it would be amazing if what I do physically could be put into an algorithm of scanned material. And as I begin to type things in, like a specific word or subject, it talks about how many books it goes through, the correlations, and the differences between those books. And if something popped that up-

Jason Rigby (08:34):

That is machine learning helping humanity.

Alexander McCaig (08:37):

That's machine learning helping humanity. That's super cool. But if I'm going to write to the New York Times, like, "Hey, New York Times, we just had hundreds of millions of people across the globe do reCAPTCHA and they've refined our algorithms. And so we can scan all your old newspapers and we can guarantee you that it's going to read them properly, but you've got to pay us a huge chunk of change." Well, who really put in the labor to do that?

Jason Rigby (08:57):

Right.

Alexander McCaig (08:58):

And if we consider TARTLE, you put in the labor of creating data, you need to share in that value of that data creation. You deserve that compensation. It's like when you go to work, you want payment for the labor you put in.

Jason Rigby (09:14):

Yeah. And I think this is a prime example of why TARTLE exists.

Alexander McCaig (09:17):

Yeah. This is a prime example of why we exist. And this article, and this class-action lawsuit, also shows you how society is changing. Their perception of the amount of time and work we put in online is changing, and people are being much more critical of how the larger tech bodies, the whales in the tech industry, continue to take that black-box approach. There's very smart people that work at those companies.

Jason Rigby (09:48):

This is amazing.

Alexander McCaig (09:49):

Yeah.

Jason Rigby (09:49):

That they've come up with this.

Alexander McCaig (09:50):

They're so intelligent. And intelligence is a double-edged sword. You always have to remember, you got to see full circle, like, where is the labor actually coming from? It's cool that they put the collective power of everyone working together on it, but they weren't forthright and honest for why it was happening. And they weren't talking about how they were making a big buck off of it.

Jason Rigby (10:09):

Yeah. I mean, they made no comment, but it'll be interesting too, when the lawsuit hits, we'll have to keep up to date on it and maybe make a part two on this.

Alexander McCaig (10:17):

Yeah. No, I thought that was terribly interesting. It's terribly interesting.

Jason Rigby (10:22):

I mean, just think of the future real quick before we go, and you can comment on this, where AI, I mean, you have machine learning now, but AI gets to the point to where it can know every book that's ever been in the world. And then we could ask it questions and it can make decisions based off of facts that it correlated through these books that it found these truths in.

Alexander McCaig (10:44):

Yeah. I think that'd be phenomenal.

Jason Rigby (10:46):

To access that wisdom.

Alexander McCaig (10:47):

It's like digitizing the Library of Alexandria before the damn Romans came in and burnt the thing down.

Jason Rigby (10:54):

Yeah. Sad.

Alexander McCaig (10:55):

Yeah. It was all that knowledge in one place. And you had to go and do it yourself, do your own indexing. But now we have the power of assimilating that with efficiency in a digitized format. There's incredible answers in that collective power, and I'm excited to see what that future looks like.

Jason Rigby (11:11):

Or Google can absorb Wikipedia-

Alexander McCaig (11:14):

And then Wikipedia goes to crap.

Jason Rigby (11:18):

Or maybe they can make it a not-for-profit like it is and then combine Google Books with Wikipedia, and there you go.

Alexander McCaig (11:24):

Or Google could-

Jason Rigby (11:24):

Could you imagine, here's an article ... or make a version of Wikipedia. Here's an article, here's what everybody's saying about it, and then here's what books are saying about it.

Alexander McCaig (11:35):

That's so cool because then it gives you all the different phase states and the different truths, depending on how they are in perspective.

Jason Rigby (11:41):

I mean, yeah. If I type in [crosstalk 00:11:44].

Alexander McCaig (11:43):

Societal perspective. A doctoral perspective and an author perspective, that's cool.

Jason Rigby (11:50):

Yeah. And then combine white papers with it, what a leading scientist is saying.

Alexander McCaig (11:54):

It's awesome. And then, it comes to some sort of relative output where you're like, "Oh, this is interesting." And then the thing would determine, this is probably not truthful at the moment, or it lacks any truth.

Jason Rigby (12:02):

Yeah. It would come out.

Alexander McCaig (12:04):

It'd be cool.

Jason Rigby (12:05):

Yeah. It'd probably be the number one research site in the world.

Alexander McCaig (12:08):

Yeah, it would. And there's no reason why Google couldn't donate to Wikipedia. What do they ask for like two bucks from people?

Jason Rigby (12:14):

Yeah, exactly.

Alexander McCaig (12:15):

We could have this whole thing done in 30 minutes if all the Wikipedia users ... Google, give them some money.

Jason Rigby (12:21):

They don't have a lot.

Alexander McCaig (12:22):

Yeah. They have very little.

Jason Rigby (12:23):

Razor-thin profit margins.

Alexander McCaig (12:26):

They're almost in the red; not a chance they're in the black.

Jason Rigby (12:34):

ABC. That's fine. Don't they have a big sign up at Google? And doesn't it say, "Do good," or something like that? I'm going to look it up real quick because I want to make sure [crosstalk 00:12:42].

Alexander McCaig (12:42):

Bring this up right now. Do you know how many ethics officers they've gone through at Google? Just that statement alone I just made is worrisome to think about why are you going through so many different ethics officers? What is possibly happening over there?

Jason Rigby (12:56):

No, it's a phrase used in Google's corporate code of conduct and it says, "Don't be evil." And they have that really big, "Don't be evil."

Alexander McCaig (13:04):

Evil's a function of perspective. They could be thinking they're doing a lot of good, but good for one person could be evil for another.

Jason Rigby (13:13):

Yeah.

Alexander McCaig (13:13):

It's way too ... Yeah, I get it.

Jason Rigby (13:17):

Yeah. It's way too Luciferic.

Alexander McCaig (13:19):

Yeah. It's some sort of a Luciferian agenda. It's nebulous in perspective because how Google manipulates data, you can manipulate that sentence any way you want.

Jason Rigby (13:32):

This is on Wikipedia.

Alexander McCaig (13:35):

Thank you. Wikipedia.

Jason Rigby (13:38):

"Don't be evil" is a phrase used in Google's corporate code of conduct, which it also formerly preceded as a motto. Following Google's corporate restructuring under the conglomerate Alphabet Inc. in October 2015, Alphabet took "Do the right thing" as its motto, also forming the opening of its corporate code of conduct. The original motto was retained in Google's code of conduct, now a subsidiary of Alphabet. In April 2018, the motto was removed from the code of conduct's preface and retained in its last sentence. So, they changed it from, don't be evil, to do the right thing.

Alexander McCaig (14:03):

Yeah. And it shouldn't even be, do the right thing. It should be, do something truthful.

Jason Rigby (14:09):

Yeah. How is what you're doing helping humanity?

Alexander McCaig (14:12):

Yeah. Just what is the right thing?

Jason Rigby (14:15):

Yes.

Alexander McCaig (14:16):

Who's to say that it was right-

Jason Rigby (14:17):

Globally.

Alexander McCaig (14:18):

When you act truthfully, that's obviously the right thing to do.

Jason Rigby (14:21):

Right.

Alexander McCaig (14:22):

Because that's just logically how that falls into place. But I don't know, it's just too biased in perspective [inaudible 00:14:31]. There's too many dogmas that could be built into saying, I'm doing the right thing.

Jason Rigby (14:34):

Yeah. Or don't be evil.

Alexander McCaig (14:36):

The church thought that the children's crusade was the right thing.

Jason Rigby (14:39):

Yeah. They could say don't be evil.

Alexander McCaig (14:41):

Yeah. Don't be evil but go out here and fight and slaughter.

Jason Rigby (14:46):

Yeah. Because then you get into this dogma that becomes ... I mean, look at right now with masks and not wearing masks and all that. I mean, look at how polarizing it is.

Alexander McCaig (15:01):

Yeah. It's very polarizing. It's almost hypocritical when I think about this statement. Google does so much work with data. Data's agnostic, non-dogmatic, it's not biased. It's not political. But your corporate policy already has biases and dogmas built into it. It's like you're contradicting the thing that supports who you are.

Jason Rigby (15:23):

Yeah. And then YouTube alone, I mean, so many people have gotten taken off of YouTube just based off of-

Alexander McCaig (15:30):

It's just a little hypocritical.

Jason Rigby (15:33):

Yeah. I think so.

Alexander McCaig (15:34):

I'm not saying Google's bad. I'm just saying that it's hypocritical.

Jason Rigby (15:38):

Yeah, we'll get zero views from this video because guess where this video's going on?

Alexander McCaig (15:42):

Yeah, because it's going on YouTube. It's just not very well thought through. That's all. They didn't take the time to think it through.

Jason Rigby (15:46):

Their algorithms, like, they're talking about Google, Google, Google, Google. Don't be evil. What?

Alexander McCaig (15:51):

Yeah. What is that? Corporate policy? Red flag. Do not let that go on YouTube.

Jason Rigby (15:57):

I know, but I do appreciate the technology. I appreciate everything that is happening in this world right now to make our lives better. And I think the end goal, I'm optimistic for humanity. And as the world becomes more flat and we know about that book, The World is Flat, and as the information begins to communicate, especially in third world countries, I'm super excited about the future of how it's going to elevate humanity. Just knowledge alone will elevate.

Alexander McCaig (16:27):

Yeah. Education, transparency, and availability of resources. And the newer iterations of TARTLE that we have coming out, everything in the system explains what it is, how it's working, and why it's there. So, it's not going to be like a reCAPTCHA event where you're like, "Why am I doing this?" It's not black-box like that. It's not a Google approach. You're going to know exactly what's going on and why it's there.

Jason Rigby (16:56):

I love that.

Alexander McCaig (16:57):

Everything has a reason and we're forthright and truthful about that.

Jason Rigby (17:00):

Yeah. And that's the way it should be.

Alexander McCaig (17:01):

Yeah. That's exactly how it should be.

Speaker 1 (17:03):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?