June 26, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 5

BY: TARTLE

AI and Symbols Pt. 5

Here we are at part five (or is it 50?) of our series on training artificial intelligence to work with symbols: how to recognize and interpret them. Today we continue to wrestle with whether the method of training AI to do this should be based on agreed-upon cultural standards or on a universal standard. The truth is, it is a difficult question. Normally, a truly universal standard of interpretation would be preferable. However, symbols present unique challenges in that regard, because much of the meaning of nearly any symbol depends on the local culture, and on one's position within that culture.

Look back at the swastika we discussed some time ago. Pretty much everyone agrees that what the Nazis stood for is evil and that the swastika is a symbol of that evil. Yet the Nazis didn't regard their actions as evil; they regarded themselves as the good guys. And before the Nazis appropriated the symbol, the swastika was a benign symbol in multiple Eastern religions. The point is that this one symbol carries at least three very different meanings, and telling them apart depends on personal understanding and knowledge of context.

Or take a hand gesture, another subject we've touched on before. Consider a salute. The Nazi salute is an arm extended at a 45-degree angle. Americans salute with the upper arm parallel to the ground and the forearm bent to bring the fingertips to the corner of the right eye. Other cultures salute with a bow, or with a closed fist over the heart. To many outside a given culture, a particular salute will likely mean nothing; go to some secluded Amazon tribe and they won't recognize any of these gestures.

Take another: the ever-popular middle finger. Most in the Western world recognize its meaning right away. The same meaning was once conveyed in the plays of Shakespeare by biting one's thumb, and other gestures convey it in other ways. A Mongolian tribesman, however, is likely to wonder why you want him to see your middle finger. Perhaps he'll think something is wrong with it, or that you have a very strange way of pointing.

Now, it might seem at first that all of these different gestures, and the fact that any given culture might not know some or any of them, would be a silver bullet against any thought of establishing an objective standard for interpreting symbols. I would argue that is too superficial. Instead of looking at the gesture, the symbol itself, look at the concept it symbolizes. A salute conveys respect, and it does so regardless of the particular gesture being used. The middle finger conveys anger and active disrespect, again independent of the particular gesture. By digging past the appearance of the symbol to the idea it is meant to convey, we reach something much more like a universal standard. When we see these symbols in a context we understand, we know what they mean because we have learned, unconsciously and through years of observation, to associate them with other symbols like body language and tone of voice. When we see a gesture we don't recognize, we pay attention to those other elements, body language, tone of voice, environmental context, to get at the concept behind the gesture, as the sketch below illustrates.
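To make that concrete, here is a minimal sketch in Python of the fallback a human performs. Everything in it is a hypothetical illustration, not TARTLE code: the lookup table, the interpret_gesture function, and the context values are invented for this example.

```python
# A toy lookup from (culture, gesture) to the concept conveyed.
GESTURE_CONCEPTS = {
    ("us", "hand_salute"): "respect",
    ("jp", "bow"): "respect",
    ("us", "middle_finger"): "disrespect",
    ("elizabethan_en", "thumb_bite"): "disrespect",
}

def interpret_gesture(culture, gesture, tone, posture):
    """Return the concept behind a gesture, falling back on context."""
    concept = GESTURE_CONCEPTS.get((culture, gesture))
    if concept is not None:
        return concept
    # Unknown gesture: read the surrounding symbols instead, the way a
    # person leans on tone of voice and body language.
    if tone == "hostile" or posture == "aggressive":
        return "disrespect"
    if tone == "deferential" or posture == "formal":
        return "respect"
    return "unknown"

# A gesture the lookup has never seen still resolves to its concept:
print(interpret_gesture("mn", "middle_finger", "hostile", "aggressive"))
# -> disrespect
```

The point of the sketch is the fallback path: when the surface form of a symbol is unknown, the surrounding signals can still route us to the concept behind it.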

AI will have to be taught to do the same. It will have to learn what we mentioned last time, how to recognize the universal concept behind the local and subjective expression of it. Once we can figure out how to clear that hurdle, we will have really gotten somewhere with actually making AI as intelligent as it is artificial.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.


For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast, with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future, and source data defines the path.

Alexander McCaig (00:24):

Welcome back to part 10,000 of AI and symbols. Let's go to the third section of this article, which we're having the hardest time getting through, because it's such a distraction when we talk about how much this affects every part of our lives as human beings.

Jason Rigby (00:43):

And it's just an amazingly written paper too.

Alexander McCaig (00:45):

It's phenomenal.

Jason Rigby (00:45):

It's so deep.

Alexander McCaig (00:47):

It's phenomenal.

Jason Rigby (00:48):

I mean it's like every sentence we could talk about.

Alexander McCaig (00:50):

Ugh, all right, let's go.

Jason Rigby (00:51):

So, symbolic behavior. The previous section offered a view of symbols that emphasized the role of an interpreter. It claimed that an entity is not a symbol in any objective sense. But rather, get this, it's a symbol for an interpreter who treats it as such.

Alexander McCaig (01:04):

Yeah. So it's only a symbol if I treat it as a symbol.

Jason Rigby (01:07):

We will now ask, I love this statement, we will now ask, "How does an interpreter behave if it interprets something as a symbol?"

Alexander McCaig (01:15):

And the thing about this in the sense of the AI. So how will we want our AI to behave now that it's interpreting something as a symbol? What do we want it to do now that it recognizes, "Okay, here's the symbol. What do I do with my algorithm next to put out some sort of output? How do I deal with this in the sense of me being an AI algorithm and analyzing what this thing is? I am now in a point of reference, a vector, where I'm analyzing it, recognize what it is, now what is my choice of action next now that I realize that this symbol means something?"

Jason Rigby (01:46):

Yeah. And this is so cool. If we can identify the particular behavioral traits that are consequences of engaging with symbols, then we can use them as tangible goals for creating AI that is as symbolically fluent as humans.

Alexander McCaig (02:02):

Yeah. So we engage different ways with many different things. We can choose to join with it or run away from it. Okay? If I see a sign on a building, or here in New Mexico, if I'm walking around the desert and I see a post in the ground that has an arrow pointing down that says, "Radiation," and there's a skull and crossbones, I'm not going to walk over there. That tells me to move away from it. My interaction. So I'm walking around, I see the symbol. Then I start to interpret it. And then I'm saying, "Okay, now that I've interpreted, what's my behavior? Where am I going to go now that I understand that there's radiation buried here? I don't want to be in this area. This can lead to my death." And this is a very important stance now for taking the next step after a symbol has been recognized for that AI.
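What Alexander is describing is a recognize-interpret-act loop. A minimal sketch of it in Python, with every symbol name and the action policy invented purely for illustration:

```python
# Hypothetical symbol -> meaning table; names are illustrative only.
SYMBOL_MEANINGS = {
    "radiation_trefoil": "lethal_hazard",
    "skull_and_crossbones": "lethal_hazard",
    "arrow_down": "location_marker",
}

def choose_action(recognized_symbols):
    """Recognize -> interpret -> act: the meaning, not the mark, drives behavior."""
    meanings = {SYMBOL_MEANINGS.get(s, "unknown") for s in recognized_symbols}
    if "lethal_hazard" in meanings:
        return "move_away"
    if "unknown" in meanings:
        return "gather_more_context"
    return "continue"

# The desert signpost: an arrow plus a radiation warning.
print(choose_action(["arrow_down", "radiation_trefoil"]))  # -> move_away
```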

Jason Rigby (02:48):

Yeah. It says, "If we can identify the particular behavioral traits that are consequences of engaging with the symbols." So if the AI can say, "This behavior," every time they saw that sign, they were walking away from it. Now that's the consequences of the symbol.
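In other words, consistent behavior around an entity is evidence that the entity functions as a symbol for the agents involved. A hedged sketch of that inference, with the names and the threshold being assumptions for illustration:

```python
def looks_like_symbol(observations, threshold=0.8):
    """observations: (saw_entity, next_action) pairs from watching agents.

    If seeing the entity reliably predicts a behavior change (here,
    walking away), that regularity is evidence the entity is being
    treated as a symbol.
    """
    actions_after_seeing = [a for saw, a in observations if saw]
    if not actions_after_seeing:
        return False
    avoid = sum(a == "walk_away" for a in actions_after_seeing)
    return avoid / len(actions_after_seeing) >= threshold

# Nine observers walk away; one ignores the sign. The evidence still holds.
trace = [(True, "walk_away")] * 9 + [(True, "walk_toward")]
print(looks_like_symbol(trace))  # -> True (0.9 >= 0.8)
```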

Alexander McCaig (03:03):

What if I get that dude on YouTube, that psycho dude that's walking through Chernobyl filming it?

Jason Rigby (03:07):

Yeah.

Alexander McCaig (03:07):

And he's Geiger counting himself later because he realized he sucked up some radiation. Well, then that model fails at that point. So I mean, this is the ... I want to talk, I want to hit on this. If you continue to build on ground that is not firm, that truly does not apply to everything, then it becomes very weak at the top if you put your reliance, all that weight, up here. It's like those things at car dealerships, the inflatable guys that are flying back and forth?

Jason Rigby (03:34):

Yeah. [crosstalk 00:03:35]. Yeah.

Alexander McCaig (03:35):

That's essentially what's happening with that algorithm. There's no stability to it. I'm sorry. So go [crosstalk 00:03:40].

Jason Rigby (03:40):

No, no, no, no. I agree 100% with what you're saying. It says, "And further use their presence as evidence that a system interprets something as a symbol." So their presence as evidence that a [inaudible 00:03:52]. So what we hope to show is that the behavioral traits exhibited by symbol users indicate an active participation in an infrastructure of meaning by convention, with the accompanying understanding that meaning is conventional.

Alexander McCaig (04:08):

Okay. So that's just-

Jason Rigby (04:10):

That's a lot of words, but it's really easy to understand.

Alexander McCaig (04:11):

Yeah. It's just going back to our previous thing that, yes, this flag is here in this country, representative of this country, and everyone here agrees that this is the flag of Brazil. This is the flag of Australia. That's what they're talking about. Convention. Seeing it, recognizing it, being in relationship with it.

Jason Rigby (04:28):

Yeah. And active participation.

Alexander McCaig (04:29):

Yeah. You're actively participating in the convention of it. Yes. We all say-

Jason Rigby (04:33):

Of the infrastructure of that convention.

Alexander McCaig (04:35):

That's correct. Yeah. So if we all think that this is what this means, I'm going to participate and agree or disagree.

Jason Rigby (04:40):

Mm-hmm (affirmative). Yes.

Alexander McCaig (04:41):

Yay or nay.

Jason Rigby (04:42):

And he says, "We will begin with a few examples." So I think [inaudible 00:04:45] rolling through these examples.

Alexander McCaig (04:46):

I feel edgy. I'm like all over the place.

Jason Rigby (04:49):

Suppose a symbolic thinker creates a symbol. For example, a person can use the word [dax 00:04:54] as a command for the dog to sit. Consequently, their dog might learn to sit when it hears the word dax. Is the word dax a symbol to the dog?

Alexander McCaig (05:04):

No, it's not. The dog recognizes the tone and that you've reinforced the behavior of having a dog sit. [inaudible 00:05:12] it's not like the dog knew what the symbol was and you showed it to it, and then it sat down. No, you had it go through an action, a movement of their body, a mechanism of that. And it sits down. And then you say something during that occurrence. And it's not that the dog recognizes the symbol, it just then starts to associate that sound, dax, with sitting. And a dog doesn't know what the concept of sitting is. You just moved it into a position through this training regimen to get it to do a function it naturally does.
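The distinction Alexander is drawing is between interpretation and bare association. A minimal sketch of the latter, pure reward-driven association learning with no convention anywhere (all names and numbers are illustrative):

```python
from collections import defaultdict

# strength[sound][action]: learned association weights, all starting at 0.
strength = defaultdict(lambda: defaultdict(float))

def train(sound, action, reward, lr=0.1):
    """Strengthen a sound -> action link whenever it is rewarded."""
    strength[sound][action] += lr * reward

def respond(sound):
    """Pick the most strongly associated action; no convention, no meaning."""
    actions = strength[sound]
    return max(actions, key=actions.get) if actions else "do_nothing"

# Twenty rewarded trials of "dax" -> sit, as in the example above.
for _ in range(20):
    train("dax", "sit", reward=1.0)

print(respond("dax"))   # -> sit
print(respond("heel"))  # -> do_nothing: no association has ever formed
```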

Jason Rigby (05:46):

So now, that was voice. But now we're going to look at, similarly, consider apes that learn some simple sign language. Now we're getting into kind of symbols language. Apes can enact the motor program for certain gestures, and they can form associations between these gestures and some outcomes.

Alexander McCaig (06:04):

How do I ... So, gosh. Grass. What's first thing that came to your head when I said that?

Jason Rigby (06:12):

Watching a dog go back and forth with their back on the grass?

Alexander McCaig (06:15):

Okay. That's your association.

Jason Rigby (06:16):

Yeah, yeah. [crosstalk 00:06:17].

Alexander McCaig (06:16):

Okay? For me, when I think grass, green. Boom, done. That's it. That's my association. Of course, anything that is said has to have some sort of association, by and large. If I'm telling you a thought, you're essentially doing that visualization in your mind, but it's not the word that came first. You as an infant observed the world first, and only later got the audible inputs that were then attributed to that visual stimulus.

Jason Rigby (06:43):

That reinforces, yeah.

Alexander McCaig (06:44):

Thank you. Not vice versa.

Jason Rigby (06:46):

Right.

Alexander McCaig (06:47):

Go ahead.

Jason Rigby (06:47):

Yeah. He says, "Apes can enact the motor program for certain gestures."

Alexander McCaig (06:52):

So they can move their hands in certain ways.

Jason Rigby (06:54):

"And they can form associations between these gestures in some outcomes. For example, suppose they learned that a particular hand movement results in them receiving a treat."

Alexander McCaig (07:04):

I do the same thing with my dog. It's not [crosstalk 00:07:05].

Jason Rigby (07:05):

There's nothing inherent to the motor movement itself that entails that the ape should get a treat. So the gesture and its outcome are more or less arbitrarily linked, which aligns with the [inaudible 00:07:16] we previously outlined about symbols. So what they're talking about is-

Alexander McCaig (07:22):

Yeah, from a top down-

Jason Rigby (07:24):

Is the gesture linked to an action and a behavior?

Alexander McCaig (07:28):

Yeah. From a top down view. But that's what it is. The animal is learning-

Jason Rigby (07:34):

But is the gesture a symbol for the ape?

Alexander McCaig (07:36):

The gesture is not the symbol for the ape. What's symbolic for the ape is knowing that it can sit down or do one of these things. It's not that ...

Jason Rigby (07:48):

I don't think the AI should be messing with animals in the first place. I think that these are poor examples because here-

Alexander McCaig (07:55):

Yeah. They are poor examples, that's why I'm having trouble wrestling with this.

Jason Rigby (07:56):

Because it should only be humans. Because listen to what they say next. "Now, consider a human performing the gesture. Humans understand, to a degree unparalleled in other animals, that they are participating in a cooperative interaction involving a shared understanding of a situation." And I want to stop right there. Cooperative interaction and a shared understanding of a situation. No animals.

Alexander McCaig (08:18):

I've seen-

Jason Rigby (08:18):

They're not self-aware.

Alexander McCaig (08:19):

Okay. First of all. I have seen, and I've also got to talk about this, I've seen ants be more cooperative than human beings in a shared interaction. The whole definition of what they're saying-

Jason Rigby (08:32):

No, no they said a shared understanding of a situation.

Alexander McCaig (08:36):

You're telling me all these ants don't understand that when it rains, we got to get underground? We all don't understand we need to protect the queen?

Jason Rigby (08:42):

Yeah. But the difference, you know as well as I do, between the ant and the human is the self-awareness of-

Alexander McCaig (08:48):

Correct.

Jason Rigby (08:49):

... what they're doing. Whenever a Brazilian looks at a flag, he's not saying, "This is a flag that I must work for." There's the national anthem, the music behind the-

Alexander McCaig (08:59):

Everything that goes with it.

Jason Rigby (09:00):

... the motion, the thought processes. An ant doesn't say, "That tree inspires me."

Alexander McCaig (09:09):

You're right. You're right. I'm just looking at this fact here of how convention is defining the meaning of this symbol. The dog doesn't understand convention. You cannot say that this is how a symbol is defined if it does not apply to everything. Is that right or wrong? I don't know. There's a very difficult point here.

Jason Rigby (09:39):

Yeah. And so listen to what ... We're going to stick with humans here. Here we go. We're going to go down this. We're going down this human trail. "A human gestures with the knowledge that other humans understand the gesture's meaning, and can use the gesture themselves for their own ends. Moreover, they know that the gesture is useful because other humans agree on its meaning. They can even participate with others to alter the gesture's purpose," to be like, "I'd like the sweet treat, not the salty one."

Alexander McCaig (10:09):

What if I-

Jason Rigby (10:09):

Choice.

Alexander McCaig (10:10):

What if I give you the middle finger?

Jason Rigby (10:11):

Yeah.

Alexander McCaig (10:12):

What if you, but you're a tribe. Here's an interesting point, and we've seen this. This is actually, this is logged. If you go down into parts of the Amazon where they have those lost tribes, when the people were standing across the river bank from them, the way they hold their arms down, or have interactions symbolically with one another, it's completely different from what's going on with us. So if you gave them the middle finger it could be like a gesture of hello. Right?

Jason Rigby (10:42):

Yeah, they would perceive it-

Alexander McCaig (10:43):

Or it could mean absolutely nothing at all. So then how can you say that the convention truly defines the symbol? It doesn't work. It's truly not applicable across these different areas.

Jason Rigby (10:58):

I think they are way more rudimentary in their thought processes, and staying really one-dimensional, because listen to this. "To humans the gesture is not just a tool or a means to an end that might happen to involve others. It is a movement that others could similarly use or modify because of its shared conventional meaning." So they're staying on a really ... We're getting really philosophical with it, and really getting deep into it. They're staying on the purpose of, like, a dog wants a treat, so it sits. A monkey wants a treat, so it sits. We say, "Oh, there's two treats in front of us? Okay, I want salt, not sweet," which is a function of our-

Alexander McCaig (11:37):

It's a different sort of cooperative interaction.

Jason Rigby (11:39):

Yeah, it's the ego, the pleasure side. But whenever ... The problem that I'm having is this shared conventional meaning, because you can't say that what distinguishes animals from humans is shared conventional meaning. Animals are looking at it, which is self-involved: "I want to eat this and I need this treat. And if I do this, I get that." I get that. But you can't say an animal is different from a human because of conventional meaning only.

Alexander McCaig (12:08):

Yeah. That's-

Jason Rigby (12:08):

There're so many different variables.

Alexander McCaig (12:09):

It's so limited.

Jason Rigby (12:11):

Yeah.

Alexander McCaig (12:11):

And I think that-

Jason Rigby (12:12):

I hope you know where I'm-

Alexander McCaig (12:13):

Yeah, of course I do. And I say that limitation, when you start to put that lens on it, especially for the AI algorithm, when you start to define it that way, it actually weakens how this thing does its analysis.

Jason Rigby (12:24):

Yeah. Because you're telling this machine that there's one variable that distinguishes us.

Alexander McCaig (12:31):

That's just not true though.

Jason Rigby (12:32):

No. Yeah.

Alexander McCaig (12:33):

There're so many things. It's so complex within itself. So we talked about this in the previous episode, your focus is on writing an algorithm that would have to take in an infinite number of variables to find truth in the understanding of what a symbol means. And every time it changes. That is not a sustainable way to program artificial intelligence.

Jason Rigby (12:54):

No, that's what I'm saying. I mean, they may have to start with this and go this way just because it's so complex. But whenever you ... This can get really scary. Let's think of it this way. So if you think humans are a little bit better than animals, this AI is thinking, and that it's all conventional meaning, it's cooperative. Well, let's go to Nazi Germany. Let's look at that symbol and the conventional meaning of humans.

Alexander McCaig (13:22):

Think about the symbolism behind that.

Jason Rigby (13:24):

Yeah. And it created ... It almost destroyed. I mean-

Alexander McCaig (13:28):

It destroyed a lot of Europe for a long time.

Jason Rigby (13:30):

For a long time. Yeah. And we're still feeling repercussions from the Holocaust and everything else. So that one symbol created way more than just cooperation. Do you see how the web infiltrated to the point of killing millions of Jews? And I mean, this can go-

Alexander McCaig (13:46):

There's two conventional understandings too. It meant one thing for the Nazis. And the rest of the world was like, "Whoa, bad news." Nazis are like, "This is good." Everyone's like, "This is bad." So how can you say that that specific symbol then, that's really what it means?

Jason Rigby (14:01):

Now, is this shared conventional meaning for the greater good of humanity?

Alexander McCaig (14:05):

That's an interesting question.

Jason Rigby (14:06):

And then now you can judge that symbol based off of ... Now AI could judge that symbol based off, "Okay. Yeah, I see Germany was all about this, and there was death," and there'd have to be some moralistic rules in there, "so that is a bad idea, a bad symbol."

Alexander McCaig (14:21):

Well then who defines what the morals are?

Jason Rigby (14:23):

Yeah. Yeah. See, you're-

Alexander McCaig (14:24):

And then you got to say, "Well, which-"

Jason Rigby (14:25):

Well no, you would do it based off of humanity in general. Even that would be flawed, but it would still-

Alexander McCaig (14:30):

Well, not necessarily, if it's for the greater good of humanity for AI interpreting the data of a symbol. It would have to be truthful. It can't be subjective stuff. It has to be something that is legitimately timeless. It has to be a law like gravity. You see what I'm saying?

Jason Rigby (14:46):

Right.

Alexander McCaig (14:46):

Because if I put the subjective nature into it and I'm trying to uplift humanity, that is too flexible. It will change over time. It changed for the Buddhist symbol, right, for the swastika? That changed in a matter of like 10 years.

Jason Rigby (15:04):

Yeah. In 10 years a symbol that was used all over the world-

Alexander McCaig (15:06):

To describe peace and all these other things-

Jason Rigby (15:08):

Yeah, and Native Americans used it. Like you said, Tibetan Buddhists [crosstalk 00:15:12].

Alexander McCaig (15:12):

So you have a whole piece of world history that's been around for thousands of years.

Jason Rigby (15:17):

Yeah, and then from-

Alexander McCaig (15:18):

And it's changed in 10.

Jason Rigby (15:19):

From 1930s to, when did Hitler die, in 1946?

Alexander McCaig (15:23):

I-

Jason Rigby (15:24):

Well, I mean, now I was looking up-

Alexander McCaig (15:27):

We're not even really sure he shot himself.

Jason Rigby (15:29):

Yeah. We were talking about a certain character on YouTube, and he was saying he was part of this group and they have a flag. And I looked it up and they had a crazy like Nazi flag with the-

Alexander McCaig (15:39):

Yeah.

Jason Rigby (15:39):

And so it was like there's still a subgroup of people that identify with a horrible ideology, and that symbol is still being used today for hate.

Alexander McCaig (15:48):

That's what I mean. But they may think that it's elevating humanity. The only thing that elevates anything is truth.

Jason Rigby (15:53):

Yeah.

Alexander McCaig (15:53):

Hands down. Okay? And a law is a law if it's truthful and immovable and not bound by time. Hands down, that's it.

Jason Rigby (16:02):

And there's no way to have deep machine learning, because that's what this would be, without imposing upon the machine these laws that are for the greater good of humanity. Somebody has to take that and program that into-

Alexander McCaig (16:28):

Somebody has to put it in there.

Jason Rigby (16:30):

I mean, there's no ... I mean, because-

Alexander McCaig (16:33):

It's not just writing it itself.

Jason Rigby (16:33):

Because an AI, until it develops a consciousness, maybe in the future it will do that and inspire other AIs, and then we'll have like these AI conscious beings. But-

Alexander McCaig (16:43):

Would you want it to be on the subjective understanding of elevating? How many times do you see ... like even the Avengers film that came out. They had Ultron. This is for the greater good of humanity. I'm going to wipe out the majority of the Earth's population. Is that really the greater good? Is that a truthful thing to do?

Jason Rigby (16:59):

Well, it's horrific because if we had less population, the Earth would heal itself. And that was his whole premise.

Alexander McCaig (17:11):

That isn't ... Listen. That's obvious. We're consuming too much. That's an obvious thing. But the circumstance, the subjective idea that you have to kill people to fix it-

Jason Rigby (17:22):

Yeah, take their free will.

Alexander McCaig (17:24):

That's the opposite. And now you're going against something that is truthful and timeless. You know what I mean?

Jason Rigby (17:28):

Yeah. Yeah. Exactly. Yeah.

Alexander McCaig (17:29):

So the whole subjective nature of analyzing a symbol, it's weak here.

Jason Rigby (17:36):

Well, I mean that's where, when you look at this philosophical part of this, just to wrap this up because we've got to get it done, is it is the opportunity not to control, but to teach.

Alexander McCaig (17:46):

Yes.

Jason Rigby (17:46):

And the AI would have to learn that, because the AI is going to want to control. It's going to be its desire to control.

Alexander McCaig (17:53):

It shouldn't be programmed for making decisions-

Jason Rigby (17:54):

Solutions or control.

Alexander McCaig (17:56):

Here we go. It shouldn't be programmed for making decisions, it should be programmed for learning and teaching. That's all it does. That's its input output, and vice versa.

Speaker 1 (18:15):

Thank you for listening to TARTLE Cast, with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future, and the source data defines the path. What's your data worth?