Tartle Best Data Marketplace
June 25, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 1

BY: TARTLE

AI and Symbols

People have been working on Artificial Intelligence for years. No, not to create HAL 9000 or Skynet. Well, hopefully not. The goal is to create programs that are better at analyzing data, helping us to make better decisions. 

One of the primary obstacles to that goal is recognizing the meaning of symbols. Why should that be so hard? Program various symbols and their meanings into the algorithm and everything should be fine. Right? Wrong. Some symbols should be very easy to handle, such as a STOP sign. Program in the meaning of the word ‘stop’ along with the color and shape of the sign, and your automated car will be able to stop when it is supposed to. Sounds simple, doesn’t it? You’d think it would be.
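To see why that naive approach is so tempting, here is a minimal sketch of the ‘just program the meanings in’ strategy in Python. Everything in it (the `Sign` record, the lookup table, the `decide` function) is hypothetical, invented for illustration rather than taken from any real vehicle’s software:

```python
# A purely illustrative sketch of the naive "look up the symbol" approach.
# All names here (Sign, SYMBOL_MEANINGS, decide) are hypothetical.

from dataclasses import dataclass

@dataclass
class Sign:
    text: str    # e.g. "STOP"
    color: str   # e.g. "red"
    shape: str   # e.g. "octagon"

# Hard-coded table mapping a symbol's appearance to its meaning.
SYMBOL_MEANINGS = {
    ("STOP", "red", "octagon"): "halt the vehicle",
}

def decide(sign: Sign) -> str:
    # Works only when the world matches the table exactly. A STOP sign
    # hung as decor in a shop window still triggers "halt the vehicle".
    return SYMBOL_MEANINGS.get((sign.text, sign.color, sign.shape), "no action")

print(decide(Sign("STOP", "red", "octagon")))  # -> halt the vehicle
```

The table knows the symbol but nothing about where it appears, and that is exactly where the trouble starts.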

Yet STOP signs are also used as décor, included in storefront displays, or put to work saying something other than ‘stop at the intersection’. For an automated car trying to navigate busy city streets, this is an extremely daunting task. The car has to recognize not just the symbol but its context, taking into account where the symbol is located, its size, and the other factors that shape its meaning in the moment. If the vehicle’s AI can’t sort out the context and make a correct judgment as to whether the sign means ‘stop the vehicle’ or ‘wash hands before returning to work’, then it isn’t all that great. 

Imagine another example. If I give the middle finger to someone, it could be interpreted in a number of ways. One is the obvious ‘go away, I don’t like you’; another is that it could be humorous. Or the finger in question might simply hurt and is being held up to display a bruise or cut. We are able to intuit the context of the situation and interpret accordingly. However, missing just one piece of that context will lead to differing interpretations with potentially dangerous results.

Building programs capable of making even these very simple kinds of distinctions is more difficult than it might sound. This is because you can’t literally program every single variable into the software. At some point, your AI software will have to be able to truly function on its own. And to get there, it has to train.
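As a rough sketch of what that training looks like, the toy example below fits a tiny logistic-regression-style classifier over a few hand-picked context features (roadside placement, regulation size, mounting height). The features, data, and numbers are all invented for illustration; a real perception system would learn far richer features from images:

```python
# A minimal sketch of learning context from examples instead of
# hard-coding rules. Features and labels are invented for illustration.
import math

# Each example: (at_roadside, regulation_size, typical_height) -> should_stop?
DATA = [
    ((1.0, 1.0, 1.0), 1),  # real sign at an intersection
    ((0.0, 0.0, 0.0), 0),  # STOP printed on a storefront poster
    ((1.0, 0.0, 1.0), 0),  # roadside decor, wrong size
    ((1.0, 1.0, 0.0), 1),  # temporary construction sign, mounted low
]

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid: P(the sign means "stop")

# Plain gradient descent on log-loss: nudge the weights after each example.
w, b, lr = [0.0, 0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for x, y in DATA:
        err = predict(w, b, x) - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

poster = (0.0, 0.0, 0.0)
print(f"P(stop | storefront poster) = {predict(w, b, poster):.2f}")  # near 0
```

The point is not this particular algorithm (real systems are vastly more elaborate) but that the weights are learned from examples rather than enumerated by hand, which is what lets the program keep improving on variables nobody thought to program in.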

Think of training a dog. When you teach a dog to sit, does it hear the word ‘sit’, understand its meaning, and act accordingly? No. The dog recognizes the word but also reads the context of the command to ‘sit’, such as the tone of voice used, a light push to sit, and even facial expressions. All of that factors into understanding the simple meaning of a simple word. 

If it is that hard to explain how a dog responds to the command to sit, and there is that much to consider in a simple and common hand gesture, how much harder is it going to be to get an AI to explain the level of symbolism used in Dante? The answer: virtually impossible.

Fortunately, we don’t need these programs to do the impossible; we just need them to do a little better than the dog. The truth is, even that will be hard enough, hard but doable. The AI will need to be shown many different symbols in many different contexts before it finally ‘learns’ to recognize them and no longer needs to be trained. 

New methods of doing this very thing are being tested right now. What we hope is that the programmers involved understand the complexity of these systems. Whether or not they keep that complexity in mind is the difference between teaching these programs to control us by making decisions for us, and programming them to learn and to teach so that they help us make better decisions for ourselves. 

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.


Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path.

Alexander McCaig (00:26):

Good morning, everybody. Welcome to TARTLE Cast with your host this evening, Jason Rigby.

Jason Rigby (00:32):

Yes, I'm pointing at you.

Alexander McCaig (00:33):

You.

Jason Rigby (00:34):

You. You know who you are. You listener, you watcher. That one that watches us, but never subscribed to YouTube. They've never pushed the subscribe button because they're so afraid of commitment.

Alexander McCaig (00:45):

Yeah. They don't want to commit to us. They don't want to commit to themselves by signing up on TARTLE. Yeah. We're looking at you.

Jason Rigby (00:51):

That's the person we want to talk to. That one that's in the middle looking around.

Alexander McCaig (00:57):

You know what this reminds me of? It's like politics.

Jason Rigby (00:59):

Yes.

Alexander McCaig (01:01):

They're like, "We don't want the people on the right or the left. We can get those easy. It's the ones in the center."

Jason Rigby (01:06):

Yeah. The independent.

Alexander McCaig (01:07):

It's those independent folks.

Jason Rigby (01:09):

Yeah, exactly. Those are the ones we want to make a commitment to Tartle.co. Sign up today and hit subscribe on the YouTube.

Alexander McCaig (01:18):

No big deal. Get it done. No harm, no foul. Subscribe's not going to hurt you.

Jason Rigby (01:23):

No, because we get way more audio. I was looking at that this morning. It's way more audio than...

Alexander McCaig (01:29):

Visual?

Jason Rigby (01:29):

Yeah. Yeah. Than the video part. [crosstalk 00:01:32] So

Alexander McCaig (01:31):

I think it's easier for people to do other things with the audio on.

Jason Rigby (01:36):

I think we're in a 1920s, '30s boom again. Some podcast shows are getting more than CBS News at night.

Alexander McCaig (01:49):

CBS News is garbage.

Jason Rigby (01:50):

Or any of that. Any of the sitcoms or TV shows. They're getting hundreds of millions of views. It's just like you know there's not a hundred million people watching this one television show. Dancing with the Stars, really?

Alexander McCaig (02:06):

And if you think about it too, if it's a live news thing, right, they don't really get everyone's attention. People are listening or watching to our podcasts right this second.

Jason Rigby (02:16):

I always picture the CBS, ABC and NBC shows, the people that are watching that are just pill, tons of carbohydrates. Drinking light beer.

Alexander McCaig (02:28):

Angry with themselves and the world [crosstalk 00:02:32]

Jason Rigby (02:31):

Their nine to five job that they've hated for 30 plus years. But they've got a pension.

Alexander McCaig (02:39):

The real smut they get comes from CNBC [crosstalk 00:02:43].

Jason Rigby (02:43):

They're just waiting for something to trigger them so they can just yell at the TV.

Alexander McCaig (02:47):

Yeah.

Jason Rigby (02:49):

That's where you're from. Everybody just yelling at TVs and fighting with each other.

Alexander McCaig (02:53):

I watch people read the newspaper and get upset about it. Why are you getting mad at the newspaper?

Jason Rigby (03:01):

I know some pubs in Boston can be pretty rowdy at nighttime.

Alexander McCaig (03:04):

They get very rowdy. Yeah. There's a couple Irish ones in Southie that... Listen, I'm not Irish. I'm Scottish. I wouldn't even want to step in there. Very tribal is the only way [crosstalk 00:03:21].

Jason Rigby (03:21):

When Irish people came over here to the United States, you had the potato famine. You had everything going on, so there was super bad poverty. And then, and we know this, in the industrial revolution, there was no... That's where unions came about and everything else, because they were just taking Chinese workers and Irish workers and paying them pennies and having them work seven, eight days.

Alexander McCaig (03:45):

What'd they call the Irish workers who built the train tunnels under the Hudson?

Jason Rigby (03:51):

Yeah.

Alexander McCaig (03:51):

What are they called? Sand something or...

Jason Rigby (03:55):

Yeah, I don't remember, but I know they would even do... It was seven days a week and it was 12, 15 hour days, and then they would have a cot provided for you. That was it. And then you switched the cot with the next person. So you would tap them on the back.

Alexander McCaig (04:07):

You know what that reminds me of? Russia during world war two. We don't have enough rifles for everybody. So I'm going to give you ammo and you're going to run behind the guy with the rifle. So when he gets shot, you tap him on the back and be like, "All right, I'm going to take you out. Give me the rifle."

Jason Rigby (04:23):

Yeah. That's just crazy to me. [crosstalk 00:04:28]. You had these Irish immigrants that were just... They came over here. They had nothing, absolutely nothing, and then they just, dude, shirt wheeled. Banded together and then became a part of...

Alexander McCaig (04:42):

You need a community, especially when you're in a foreign land.

Jason Rigby (04:45):

I think the boxing and all that stuff that they... In the old days, they would box outside the pub. [crosstalk 00:04:51]. I think that was just a way to... They were just so frustrated with life. And that's what you saw over there, where you're from.

Alexander McCaig (05:03):

Just people just getting frustrated over just the most ridiculous stuff.

Jason Rigby (05:09):

When I look at the Irish community in Gaelic and all that, a lot of that is symbol oriented.

Alexander McCaig (05:16):

Well, yeah. I don't know if a lot of people know this, but the majority of Irish Celtic art comes from the Middle East. Oh, interesting. So what happens is you have... There were great [inaudible 00:05:32] that happened across Europe. So there were these very highly religious artifacts that certain churches or families would hold. And so people would go from... In the Middle East, in certain areas, the Holy Land, and move themselves across Europe to go visit these places, right? Those missionaries, those people that are on that... I don't know what you call that path. Gosh, there's one in Portugal too, that it's like a couple of hundred miles long.

Jason Rigby (06:03):

Oh, I know what you're talking about. Yeah.

Alexander McCaig (06:04):

Anyway, whatever, right? Pilgrims. Okay, they're pilgrimaging around, that's all that matters, but they carry their culture with them. They carry that artwork, those symbols, and those symbols then begin to infiltrate, not in a bad way, the consciousness, the behaviors and the culture of people that have no local representation of what that symbol means. So until there's some sort of explanation from the people in the Middle East explaining what this Catholic artwork actually means, then you can get an idea of its interpretation and its value. And then from that, it can begin to evolve into its own meaning specific to the people of Ireland or a Celt or a druidic person, whatever that might be.

Alexander McCaig (06:53):

So we get into this very interesting idea of symbols themselves. Okay. And this is our bridge here. Why do we use the word bridge? Bridge gives you a representation, a reference point in your mind. What do bridges do? Because our convention of us as human beings understand that bridge brings one point over here to another one that otherwise couldn't be traversed easily. So that's what the bridge does. It helps you unify. We use that word a lot. So is it so much that when I speak something to you, Jason, is it the words you're listening to or is it the visual representation in the mind creating almost the symbols within the mind to represent what is being said?

Jason Rigby (07:36):

Yeah. When you mentioned bridge, I thought of a bridge.

Alexander McCaig (07:42):

That's right. When people say, "Don't think of a giant pink elephant.", what do you do?

Jason Rigby (07:45):

Then I thought of C4 and then I thought...

Alexander McCaig (07:48):

Yeah, but that's what happens. The visuals. You go through stages of visuals, depending on your life and your own experience. And your experience designs what your interpretation of that symbol might be. Now, the question then becomes, when you look at a symbol, is the symbol a function of its own interpretation of itself? Can it stand alone objectively? Or do symbols carry their meaning from a subjective standpoint? From how us as viewers or the collective of society actually looks at a symbol? People are probably like, "What the heck are we all talking about? What does this even lead to?" So the idea here is that the obvious next stage for artificial intelligence that would be extremely beneficial to it is symbolic behavior. I was explaining this, this morning, to Amanda. If I give you the middle finger, maybe it causes some sort of emotional charge, right? Or we think that it has some sort of meaning telling someone to bugger off.

Jason Rigby (09:03):

Yeah. Or it can be funny.

Alexander McCaig (09:04):

Or it could be funny, right?

Jason Rigby (09:05):

Yeah. There's lots of different meanings of it.

Alexander McCaig (09:07):

So she's going to interpret it one way. I'm interpreting it one way with how I'm delivering it, but somebody else may see that and interpret it completely differently as a third party observing. And then the collective of everybody coming together looking at it says, "Well, it's an albeit negative thing."

Jason Rigby (09:24):

But a symbol has to have proper context. And that changes its meaning so there's variability with it.

Alexander McCaig (09:30):

So there's variability with it, right? So that becomes the question of, "Do symbols... Are they only interpreted subjectively or are they interpreted objectively?" Which means they can actually stand on their own. And if it is subjective, then how would artificial intelligence create a symbol or read symbols from a subjective standpoint? Because if I have the collective input of many different people, those data points coming together to say, "This is what we feel it is. Or this is..." What's the word that starts with a C? This is our contention? No, that's not the word.

Jason Rigby (10:07):

Yeah, contention would be like strife.

Alexander McCaig (10:10):

Yeah. It's not a strife. Oh, it's our convention, I'm sorry. Oh, one letter. It's our convention of its meaning as a collective. And that's the interesting part. So how would artificial intelligence then begin to create algorithms that made sense for the interpretation of this? But that also leads to something else. The person that is pre-programming the initial algorithm for the interpretation for the machine learning or deep machine learning to refine itself, is that the proper platform for it to begin off of?

Alexander McCaig (10:47):

So if I am inputting something, do I put the input of symbols and their meaning as something objective in my algorithm or a subjective algorithm? If it's something that's subjective, when it comes to machine learning and artificial intelligence, then I need to collectively take the understanding from all cultures across all periods of time and put them together. But if it is something objective, then what is it from an experiential stance of something that deals with a universal part that sits outside of human consciousness, but is defined by the laws of physics and other things of that nature? Is that how I'm going to define it.

Jason Rigby (11:23):

Let me get real practical. We've even taken the point of mapping out stars and then associating them with Archers.

Alexander McCaig (11:33):

Excellent.

Jason Rigby (11:33):

You know what I mean?

Alexander McCaig (11:34):

Excellent job. So there's an interesting connection that happens here psychologically with the brain, right? Is that we create these maps to help us define understanding of something that is otherwise separate. So if you look at the etymology of the word symbol, it's a Proto-Indo-European word, which means it goes way before any of the languages that might've developed in Europe at the time. And it kind of almost goes back towards Sanskrit. And what it's saying is that if you look at the word "sym," which comes from "syn," S-Y-N, it's a combination. It is defined as togetherness.

Alexander McCaig (12:23):

So if I'm looking at these stars, okay? I need to put them into togetherness. I have to bring them together, but bringing them together requires work, and that's the "bole" part, the B-O-L-E, which comes from the Proto-Indo-European word "gwel," G-W-E-L, and that means to cast. So I'm actually fusing these things together. Maybe it's not only just stars that we look at, but maybe I'm fusing multiple ideas together. So the "syn," the togetherness, plus the work, the fusing of those things define the word symbol. So a symbol is the bringing together of maybe many disparate things to actually define a picture that is representative of those things.

Jason Rigby (13:10):

Yeah. It would be the same as when I look at an image of a firetruck, let's just say. Or I see there's symbolism all over that. Everything has its own badges and... But when you see a firetruck, there's so many variables like where artificial intelligence had such a hard time, because now, you have this congruence of when you were a child and you play with firetrucks. You put the little ladder up and all that. And then you have this whole other idea.

Jason Rigby (13:42):

So we have an endearment to a firetruck, especially from when you're a child. And then you hear a fire truck and the firetruck is going to something perceived as bad [crosstalk 00:13:57]. Yes. Yeah. But then little kids are getting excited. See, if you're a computer, you're like, "This is a firetruck. This is for something that's... A fire that's destroying their place of residence." Which is important on Maslow's hierarchy, to have shelter. And then people are getting excited? These little kids are running outside? How would a computer even understand any of that? You see what I'm saying?

Alexander McCaig (14:21):

How can it conceptually understand that human life has to be protected?

Jason Rigby (14:24):

You would think we would run from a firetruck.

Alexander McCaig (14:26):

But people are excited in the idea of an imminent threat just because of-

Jason Rigby (14:31):

We have that psychological aspect of it too.

Alexander McCaig (14:33):

It's just a psychological aspect of it. And so, this poses a bigger question. What color is the firetruck?

Jason Rigby (14:38):

Yeah, exactly.

Alexander McCaig (14:39):

Fire truck's red. So then what's the symbolic interpretation when you see the color red? When someone gets angry-

Jason Rigby (14:45):

Or stop, or anger. When someone gets angry, [crosstalk 00:14:49]. Red, or if you go to the spiritual side of things, people think Red chakra, right? Root chakra. That's the base. So it's like this basal color, which is indicative of some sort of Maslow hierarchy very natural charge that is occurring and it's represented in large format. And that's also represented with noise. So I'm going to take noise in a specific format, I'm going to take color and I'm going to take a machine and I'm going to take the idea of what that machine is used for and I'm going to bring them together and cast it into a symbol, which is the fire truck.

Alexander McCaig (15:28):

Yes. Yeah.

Jason Rigby (15:29):

Now, when we did that, okay, we... There are a couple of things here that-

Alexander McCaig (15:36):

You created so a network of web of data.

Jason Rigby (15:39):

A web of data, right? And a good portion of that data is objective. Red. Red has a specific wavelength. Red has a specific representation. It is albeit red. That's what it is. Right. Sound is a thing based within physics itself, okay? And that machine is a function of engineering, okay? Physics, force, chemistry coming together to actually build this thing out, but the design of it is something that comes from human thought. This is now subjective. This is how a firetruck should look. Long and rectangular, right? The firetruck should only be used to then go help people, right?

Jason Rigby (16:26):

So a good portion of the firetruck, maybe 60% of it, is something that is in the strictly objective sense, but then we come in as human beings through our referential experience of interacting with a firetruck, whether direct or indirect to say that, "Okay, this is my subjective experience when I've dealt with the firetrucks." Either, "I was in the house that was burning down and my life was threatened and then the firetruck comes and saves me." Or, "I find joy as a small boy to look at that firetruck and be like, 'Wow, this is an exciting moment.'" But the people driving the fire truck are only thinking about business. We got a job to do. There's lives to save.

Jason Rigby (17:06):

So all these different interpretations come into play. So how is it that you would code that sort of interpretation into a machine learning algorithm or a deep learning algorithm so that it can be like, "This is the proper interpretation of it." So on that fundamental basis of the thought for how we actually interpret symbols is going to be a fundamental platform for how we tell artificial intelligence to interpret a symbol.

Alexander McCaig (17:31):

And that will be on the next episode.

Jason Rigby (17:33):

And that'll be on the next ep.

Speaker 1 (17:42):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?