Tartle Best Data Marketplace
June 30, 2021

LGBTQ+ Artificial Intelligence Counselor

BY: TARTLE

An AI Counselor?

Ever heard of AI counselors? Unless you have been living under a rock for the last few years, you are probably well aware of the growing mental health problem in the Western world today. The causes are many: lack of purpose, despair over the state of the world, and, playing a bigger role than ever in the last year, a lack of human interaction.

Of course, regardless of the state of lockdowns in your area and the increase in suicide rates that has gone along with them, some groups struggle with suicide more than others.

The Trevor Project recognized that one of these groups is young people who identify as LGBTQ. They have a disproportionately high rate of suicide; whether that stems from rejection by others, their own confusion, or a combination of factors varies from case to case. The important thing is that the people behind the Trevor Project found that when these young people have an accepting adult who lets them know they care and treats them as a person worthy of respect, they are 40% less likely to report a suicide attempt.

While the Trevor Project is well intentioned, it is also woefully understaffed for the task. With an estimated 1.8 million LGBTQ youth contemplating suicide annually, the Trevor Project has only about 600 counselors to handle the demand. This has set them on a path toward new ways of serving those in distress, one of which is the use of AI as a counselor.

How can that work? How would someone respond to it? Knowing that you are being put in contact with a machine when what you really want is a person seems like it would be upsetting, and that intuition is understandable. However, there could still be a role for AI in handling some of the basics. Sometimes a person just needs a little encouragement when they reach out, not a full psychological evaluation. A properly trained AI can help sift through callers in those first few minutes. If it turns out the caller has a more pressing issue than even the best-trained AI can hope to deal with, it can put them in touch with a person.

Speaking of training those AIs, the Trevor Project is feeding years of collected conversations into its programs in order to teach them how to interact with people. By looking at the flow of the conversations and how people respond to various phrases and tones of voice, an AI can be trained to at least handle the more basic issues that come up, along the lines of the rough sketch below.
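To make the triage idea concrete, here is a minimal, hypothetical sketch of how past conversations that human counselors have already labeled could train a simple classifier that escalates likely high-risk messages to a person. The example data, labels, and function names are invented for illustration; the Trevor Project's actual models and data are not public and are far more sophisticated than this.

```python
# Hypothetical sketch only: the data, labels, and names below are invented
# for illustration and do not reflect the Trevor Project's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for years of past messages already labeled by human counselors.
messages = [
    "i just needed someone to talk to today",
    "school was rough but i'm doing okay",
    "i don't think i can keep going anymore",
    "i have a plan and i don't want to be here",
]
labels = ["routine", "routine", "high_risk", "high_risk"]

# A crude bag-of-words classifier, enough to show the triage step.
triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(messages, labels)

def handle(message: str) -> str:
    """Route an incoming message: escalate likely high-risk texts to a human."""
    if triage.predict([message])[0] == "high_risk":
        return "escalate_to_human_counselor"
    return "continue_with_ai_responder"

print(handle("today was actually a pretty good day"))
```

Even a filter this crude shows the shape of the workflow: the automated responder handles routine check-ins, and anything it flags as high risk goes straight to one of the roughly 600 human counselors.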

There are also those who might have been putting off opening up precisely because they are afraid to talk to a person. There may be issues of judgement and anxiety at play that would actually make the prospect of talking to an AI more enticing. Sometimes, people just need to vent and an AI presents an opportunity to do exactly that.

These AI counselors of course have far wider applications. One naturally thinks again of the separation caused by people adhering to lockdown orders around the world. Some have literally not seen their loved ones for over a year. That, in addition to the complete disruption of normal life for many, has sent the suicide rate through the roof, with numbers far greater than any hotline can deal with. Or think of Japan, where young people commit suicide at an alarming rate under the best of conditions, often in response to social pressure to be the best at everything. Given the pressure against being open about how one feels and the country's general acceptance of technology, an AI counselor might actually be preferable for many there.

TARTLE is eager to help in any way to develop these kinds of projects so that more people can be helped. That’s why we are asking you to sign up and share your data and experiences with just these kinds of endeavors. It’s one small way we can contribute to getting people the help they need. 

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.

Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast, with your host Alexander McCaig, and Jason Rigby. Where humanity steps into the future, and source data defines the path.

Alexander McCaig (00:25):

Let's get started here. And just to kick this off, I don't know anything about microaggressions, I don't look at this in any specific light, I just look at it as the value of a human being. This article loads this up, and it talks about LGBTQ communities. We're just referencing this, and the reason I'm saying this is because, people may blow something out of proportion. We're just talking about an article.

Jason Rigby (00:53):

Yeah, and I would ask that you would, instead of having it concerned with you, that you would look at it, in that other person, in service to someone else.

Alexander McCaig (01:05):

Yeah.

Jason Rigby (01:05):

Whenever you have an LGBTQ teen, and that's what we're going to talk about, the Trevor Project. In 2019, they did a study and this is crazy, Alex. LGBTQ youth with at least one accepting adult in their life were 40% less likely to report a suicide attempt in the previous year.

Alexander McCaig (01:24):

Okay. First of all, suicide [dumbs me out 00:01:26]. Nobody wants anybody else to end somebody else's life, or their own life. Quitting an experience like that, that's a shame. Frankly, I would love to have everybody around, one big old family. But people do find themselves, I don't know much about suicide, or the mental states, but people do find themselves contemplating ending their lives.

Alexander McCaig (01:50):

And this data here is showing that, when... If somebody can step in to show that they care, show that they're being supportive, and that they'll listen, you can decrease a suicidal thought by 40% across that community. That's unbelievable.

Jason Rigby (02:05):

Well, they believe that 1.8 million LGBTQ youth, in America, [inaudible 00:02:10] consider suicide each year.

Alexander McCaig (02:11):

Yeah. So you have 1.8 million people contemplating suicide.

Jason Rigby (02:14):

Almost 2 million teens.

Alexander McCaig (02:16):

Okay. Think about this data here. You have 1.8 million, contemplating suicide, and they have outreach groups, and this one here at this Trevor foundation only has 600 people to 1.8 million, that ratio is way out of scale. So their context here is like, how do we use all of these data inputs, these interactions of conversation with these 1.8 million Americans, in this community, and how do we apply that to a machine learning system to create an artificial intelligence feedback, that can act as that caring, supporting, listening adult.

Jason Rigby (02:53):

Yeah, and they took all their previous conversations, because it's mostly done through text, which is the medium that somebody in that age group who is hurting would want to use, especially if they're not willing to share with their friends or family, to sort of be able to remain anonymous, and to have this ability to speak, whether it's to an AI or a counselor.

Jason Rigby (03:13):

But they can take all this information, and they can put it into machine learning, and then this AI can begin to learn how these human counselors interacted with this. And it can learn this much with these. So you have 600 counselors every day, interacting through texts with these teenagers, and then now you have this AI learning each of those inputs, and then it's going to be able to carry on conversations just like that. And it could do it, you could have 6 billion counselors then.

Alexander McCaig (03:46):

Yeah, and then if you think about it, it's not to replace counselors, but maybe when the conversation gets to a point of elevation, the system would then trigger and be like, "I think we should speak to a real human being now."

Jason Rigby (03:56):

Yeah exactly.

Alexander McCaig (03:56):

And then that allows one of those 600 within that workflow to actually handle it, and take that direct time with that person that may be having those albeit, not good thoughts.

Jason Rigby (04:05):

Well, I mean, I want to get more macro with this. Because, I think this is a huge project in the sense of looking at, the role of AI with humanity.

Alexander McCaig (04:17):

I agree. This plays into a lot of very strong metaphors too, for how we even choose to interact person to person. And now there's going to be an interaction between AI on servers, with individuals. The big thing about that is like baking-in, like this idea of being inclusive. Remove machines out of the picture by assuming how many even know that stuff. Do we have trouble being inclusive in our own communities?

Jason Rigby (04:53):

Oh yeah, of course, 100%. Yes.

Alexander McCaig (04:54):

All the time, right? I'm asking, it's kind of like a pretty obvious question, rhetorical. Because the [inaudible 00:05:01] yeah. Well now we're towing into a thing that is, efficient at being many different people at once. And so, you got to make sure that if you were starting to magnify upon something that is going to have a greater outreach, you got to make sure that it is really inclusive. That we've totally thought through, how this thing processes.

Jason Rigby (05:24):

Yeah, because I mean, even Elon Musk, he talks about this as his biggest fear. One of my favorite movies is Ex Machina.

Alexander McCaig (05:29):

The film?

Jason Rigby (05:31):

Yeah. And I know there's some over-exaggeration on AI, with that looking like a human and all that, but when you see... What did that AI do? And what was the premise of the whole movie? Is, the AI took a weakness and valued emotion as a weakness-

Alexander McCaig (05:47):

And then exploited that.

Jason Rigby (05:48):

... and then exploited that as a human. And so if we're going to have AI interact with our suicidal teens-

Alexander McCaig (05:56):

It cannot do exploitation?

Jason Rigby (05:57):

Yes.

Alexander McCaig (05:58):

It can not create exclusion?

Jason Rigby (05:59):

And then that's something that it was finding, that it was doing. What was the one? I want to make sure.

Alexander McCaig (06:03):

The one in South Korea, that led to an example of-

Jason Rigby (06:06):

Right.

Alexander McCaig (06:07):

I think I highlighted in there. What did it say?

Jason Rigby (06:09):

I want to make sure that we're on the right one. The chatbot uses GPT for its baseline conversation abilities, a model trained on 45 million pages from the web.

Alexander McCaig (06:19):

Okay. So OpenAI, this very awesome set up.

Jason Rigby (06:23):

Yeah, the GPT-3, is the good one.

Alexander McCaig (06:27):

GPT-2 is currently being used; it went through a bunch of stuff to almost take inputs from the internet to learn from itself, to enhance the AI algorithm. Pretty smart concept. I forget what the guy's name is, the one who wears the fedoras, that we talked about?

Jason Rigby (06:38):

Yeah, super cool guy. Yeah, he's [crosstalk 00:06:40], AI guy.

Alexander McCaig (06:41):

Super cool. And so, because it's open source, a lot of people can jump on and use it. And so, in this stage 2 one, it was learning, but-

Jason Rigby (06:48):

Lee Luda was the chat.

Alexander McCaig (06:49):

In South Korea, it was called the Lee Luda chatbot.

Jason Rigby (06:51):

It was a 20 year old university student.

Alexander McCaig (06:54):

Well, it became a 20 year old university student, in that context. So what it was doing is, it was taking inputs from a specific group of people, and then it was doing that... It was actually learning like, bad behaviors and exclusivities, and the way we interact outside of interacting with a machine. Some of the abysmal, or negatively oriented thoughts, actions, and words, that we use as human beings, were being embedded into this algorithm.

Jason Rigby (07:20):

Yeah. This is what the article said. It said "It embedded deeply racist, sexist, and homophobic ideas."

Alexander McCaig (07:25):

Right. And that's not the algorithm.

Jason Rigby (07:27):

That's it learning from-

Alexander McCaig (07:27):

That's it learning from people that are deeply sexist, or whatever the three things were that were on there, right?

Jason Rigby (07:33):

Right.

Alexander McCaig (07:34):

That's an issue. So when designing the algorithm, the AI, it needs to be able to flag something that may be a bias, something that may have that sort of negative orientation that creates exclusivity, and not inclusiveness. Does that make sense?

Jason Rigby (07:49):

Yeah. No, that makes perfect sense.

Alexander McCaig (07:50):

And so, when the Trevor foundation moves into GPT-3, the next evolution of this OpenAI model, they're very conscious about how that algorithm is being inclusive, so that as they're having these conversations, and it's creating its own sort of chatbot to respond back to these 1.8 million individuals, it doesn't start to become biased, racist, or homophobic, any of those specific things, because you're already dealing with a community that is in a targeted spotlight.

Alexander McCaig (08:21):

They have sensitive emotional states, because of the way they're thinking, contemplating suicide. Okay. You have 1.8 million people contemplating suicide. The last thing you want to do is start to attack them more, when you thought your machine was doing good.

Jason Rigby (08:34):

Yes. Yeah, exactly. But I mean, I want to get into a bigger picture of this, because when you look at these AIs interacting with sensitivity towards humans, there's also a positive side. There's also a positive side, because if I could have an AI that was a psychologist, that would be free or very inexpensive, so I wouldn't have to pay.

Alexander McCaig (08:58):

Because we know psychologists are expensive.

Jason Rigby (09:00):

Yeah, exactly. What was it the other day? Oh, we were talking yesterday in Santa Fe.

Alexander McCaig (09:07):

Yeah.

Jason Rigby (09:08):

How many psychologists and psychiatrists, are up in Santa Fe? [crosstalk 00:09:11].

Alexander McCaig (09:11):

All the doctors are psychologists.

Jason Rigby (09:12):

Yeah. Of course, I'm not upper middle class, where I can afford something like this, if I'm middle class, lower middle class, and-

Alexander McCaig (09:21):

What resource do you have?

Jason Rigby (09:22):

Yeah, what resource do I have? And then I'm able to have total privacy, and talk to an AI, because if I know the AI's smart and understands, I may be more apt to talk to the AI, over a real person.

Alexander McCaig (09:34):

Especially, if it's programmed not to have a bias.

Jason Rigby (09:36):

Even if I knew that it's an AI, I may actually favorably want to talk to an AI, because I know that it's not a genuine person. You see what I'm saying? So I could be more... I would open up more to an AI. Even me as a person, I feel like I would open up more. This is really interesting. I feel like I would open up more to an AI, because it's not a human.

Alexander McCaig (09:59):

You have better context on this than I would, through your own life.

Jason Rigby (10:03):

Like on the emotional state?

Alexander McCaig (10:05):

No, I'm just saying you and your preference.

Jason Rigby (10:08):

Yeah. But I'm older than you, so which is interesting.

Alexander McCaig (10:10):

Yeah. So, and I don't-

Jason Rigby (10:13):

There's safety in talking to an AI thing. I don't want to be-

Alexander McCaig (10:17):

If I can presume-

Jason Rigby (10:19):

... fearful of rejection, because I'm talking to someone that would... I would just look at the AI as like, "You just took in vast amounts of knowledge, you have more knowledge than any human could ever have, let me try to tap into your knowledge, to figure out this problem that I have." That's kind of how I would look at it. And I know you wouldn't judge me because you're an AI.

Alexander McCaig (10:37):

Yeah. I think it's just a really interesting dynamic. So I just think there have to be flags and fail-safes in there that create these unifying outputs, right? Not inharmonious outputs.

Jason Rigby (10:47):

I think it's responsibility.

Alexander McCaig (10:51):

Whose responsibility, for designing it?

Jason Rigby (10:52):

Of Google or wherever they may be, because they've bought out these guys.

Alexander McCaig (10:54):

DeepMind?

Jason Rigby (10:55):

Yeah. Google and all those scientists, to be able to say, how can we make the system inclusive?

Alexander McCaig (11:00):

And when you think about designing systems around data, that everybody's interacting with, you got to make sure it's inclusive. If it's not inclusive, you're exacerbating the negative aspects of the climate, or the ability for people to get access to education. Educational access is an inclusive thing, human rights, giving those, and recognize them, is an inclusive [crosstalk 00:11:27].

Jason Rigby (11:27):

I mean, you have a problem, because you know, as well as I do, if you take an AI right now, the smartest AI out there, and you plug in all the worst problems, and then you put climate stability in there-

Alexander McCaig (11:36):

What's it going to say?

Jason Rigby (11:36):

It's going to tell you one thing.

Alexander McCaig (11:37):

Get rid of human beings.

Jason Rigby (11:38):

It's going to say, and it'll get real strategic and say, what areas do we need to get rid of human beings from the most?

Alexander McCaig (11:45):

I bet you someone's done that.

Jason Rigby (11:47):

You know they have.

Alexander McCaig (11:48):

They just don't talk about it.

Jason Rigby (11:49):

No, they're not going to talk about it.

Alexander McCaig (11:50):

But I bet you, it has.

Jason Rigby (11:51):

So, okay, we can't do that. We can't have an AI just eradicating people in South America, to protect the jungles.

Alexander McCaig (11:57):

Yeah. It's got to be an inclusive thing. How do you solve while being inclusive, rather than solving exclusive?

Jason Rigby (12:02):

Yes.

Alexander McCaig (12:04):

It's been such a thing in our brief technological evolution, that everything we do is destructive. It's all about these ideas of separation. Our nuclear technology is about splitting atoms, not bringing them together. Our advanced digital marketing, not us at TARTLE, but just in general, the market itself, is about putting people in buckets, separating them. Laws deal with separation.

Alexander McCaig (12:33):

We put borders on our countries, deals with separation, right? We put gas in our cars, it's not about creating something, it's about using something, and then it's spent. It's always destruction. It's almost sucking a resource away, rather than fusing it to create a new type of energy.

Jason Rigby (12:49):

But think how much, instead of being in cooperation with it, we're always fighting, pressure, pulling, pushing. You see what I'm saying? It's just, you know, and I know that's part of evolution, but it's just so interesting to me, to see this concept that humans have, where we have to fight everything.

Alexander McCaig (13:08):

Constantly fighting. Why is it, we are always fighting nature, or we're fighting cancer. Don't make it a fight.

Jason Rigby (13:17):

Mm-mm (negative).

Alexander McCaig (13:19):

No learning has ever happened really in fighting. It's just combating one negative against another negative. What happens when you have this double resonance going on here?

Jason Rigby (13:27):

Well, it also has the ability to be able to, instead of fight that, what you have, that perceived enemy, is to be able to say, how can I learn from this enemy? And how can they teach me?

Alexander McCaig (13:37):

I don't want to [crosstalk 00:13:38] fight cancer, I want to integrate with it, so I can understand how to balance it, right? It's just-

Jason Rigby (13:43):

Cancer doesn't have any emotions.

Alexander McCaig (13:45):

No, it's just-

Jason Rigby (13:46):

It's like Coronavirus [inaudible 00:13:47], out there.

Alexander McCaig (13:47):

... it's just systems that get out of balance, and so it's... Don't chop it up, burn it out, and do this, and separate. Find a way to bring your thoughts, and people, and humanity, and your algorithms, everything together to unify.

Jason Rigby (14:02):

Yeah. And we want to do a huge shout-out to the Trevor Project, and we'd love to have them on the show.

Alexander McCaig (14:08):

I'd love to talk to them. The fact that someone is focused on this, and they're worried about making sure that the algorithm is inclusive. They see the weaknesses in AI.

Jason Rigby (14:19):

Yeah. But the demand is so high. I mean, there's just so many teenagers, especially now... I was listening to a scientist today, and he's like, "We don't realize the value of human touch."

Alexander McCaig (14:30):

Oxytocin.

Jason Rigby (14:31):

And then when COVID happened, we took that away. And so now we have a group of kids that are being stuck in their houses, because that was the answer... The government's answer was "Stay in your house."

Alexander McCaig (14:42):

Separation. Again, every single time.

Jason Rigby (14:44):

All governments with, stay in your house. And even some governments were boarding people up in their house.

Alexander McCaig (14:50):

It's this idea of control, and when you do that control, you separate, you disunify, you create more chaotic environments. You can't move with nature, you're just always fighting nature.

Jason Rigby (15:00):

Yeah, and we're going to reach out to Trevor Project. We'd love to have them on the show, that would be awesome. And we encourage everybody to go to their website.

Alexander McCaig (15:09):

And I would love to thank them for their work. I would like to thank-

Jason Rigby (15:13):

All the volunteers.

Alexander McCaig (15:14):

I would like to thank the gentleman who came up with OpenAI. I would like to thank Trevor Project, I'd like to thank DeepMind, for looking at the inclusivity, all those specific things.

Jason Rigby (15:23):

Yeah. And of course, I think it's always great, an MIT article.

Alexander McCaig (15:27):

And I would like to thank those 1.8 million people, for being a part of this world.

Jason Rigby (15:31):

Yes.

Alexander McCaig (15:31):

Because I understand that, you may be going through something difficult, I get that. But thank you for helping us learn from your situation, so we can be there with you, and meet you where you are.

Jason Rigby (15:41):

I just love that those in that community are displaying courage with this, because for so long there has been this divisiveness in this world that we haven't needed, and it's created a lot of violence towards that community.

Jason Rigby (15:58):

And there are people in the community that are coming out, and just being able to have this conversation, and not being afraid. And the more conversations we have, and like Trevor Project and others, the more... And we're all about this building bridges-

Alexander McCaig (16:15):

Yeah, inclusively sharing?

Jason Rigby (16:17):

Yes. Then in that moment, we can have healing.

Speaker 1 (16:30):

Thank you for listening to TARTLE Cast, with your hosts, Alexander McCaig, and Jason Rigby, where humanity steps into the future, and where source data defines the path. What's your data worth?