Tartle Best Data Marketplace
June 25, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 3

BY: TARTLE

AI and Symbols pt 3

“Symbols lie at the root of intelligent action.” This quote, from the paper Symbolic Behavior in Artificial Intelligence, inspired this series. It says a lot about just how important symbols and their interpretation are to how we navigate our daily lives. Even if we are mindlessly going through life fulfilling only basic needs, we are still dependent to a degree on symbols and on interpreting them.

One of the chief aspects we’ve focused on so far is the subjectivity that seems inherent in the interpretation of a symbol. Many, including the authors of the paper in question, go so far as to contend that symbols only exist in terms of their interpretation. They argue from there that interpretation depends on agreed-upon, yet often shifting, behaviors and consensus in society. If that is the case, AI should be taught how to interpret symbols based on those behaviors. It’s easy to see why this might be a tempting notion. After all, in part two of this series we talked about how the Nazis changed the swastika’s meaning from a symbol of divinity to a symbol of hatred and oppression. Symbols can change in meaning depending on the context of the time and place. Yet this seems like a shaky basis for training an AI. One would hope an AI could be used universally, regardless of time and place. Basing one of the most important aspects of its training on a purely local and subjective standard means it would need to be retrained constantly. Worse, it would have to be trained differently based on local culture and customs, leading to multiple versions of the AI, versions that would have as much difficulty communicating with each other as different cultures do now.

So, how do we resolve this issue? If behavior, custom, and subjective interpretation don’t work, what does? We have to find some kind of objective standard to work from. Part of that is casting aside silly rhetorical tricks. If a tree falls in the woods and no one is there to hear it, of course it makes a sound. Or take one that is taken far too seriously, Schrödinger’s Cat. For those unfamiliar with it, the cat is a device used to illustrate the quantum-mechanical principle that subatomic particles can be in two states at once. That aside, the cat is a silly example. The idea is that if our feline is in a box and you don’t know whether it is alive or dead, it is somehow both. That’s ridiculous; the cat is either alive or dead, and the fact that you don’t know which doesn’t change anything. How does all this relate to our problem of symbols? Simple: the symbol exists on its own, independent of any interpretation. Just like the cat, just like the sound of the falling tree, the symbol does not need an interpreter simply to exist.

How do we do that? Wherever possible, we should look toward fundamental truths of the universe and ask how given symbols are grounded in them. Math in its various forms is an excellent tool for understanding the universe and even various symbols. Many, including medieval sword makers, understood this and incorporated mathematical proportions to imbue their products with rich symbolic meaning.

Naturally, turning strictly to math won’t always be possible. How does one use math to interpret the meaning of a flag, for example? Or a novel? In these cases, we should go to the original intent of the symbol’s creator. Yes, there may be additional meanings outside the creator’s intent, but those are accidental.

Given all of this, there is still the fact that meanings do change over time, and that certain understandings and expressions of symbols are local. How do we reconcile this? To paraphrase G.K. Chesterton, men don’t disagree much on what is good, but they disagree a great deal on how they understand that good. The idea is that while there are definitely universal truths, those truths will be expressed differently based on a variety of circumstances. A good place to start in training an AI would be to recognize that fact and look for the deeper truths that lie beyond the local understanding.

What’s your data worth?


Feature Image Credit: Envato Elements

For those who are hard of hearing, the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to Tartle Cast with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path.

Alexander McCaig (00:25):

Okay. We are back for part three of symbols and artificial intelligence here on Tartle Cast. And we want to start going through this actual paper and explain from the context of the authors of it, where they're headed with their thinking and how this should be applied to artificial intelligence.

Jason Rigby (00:45):

Yeah. And the article is Symbolic Behavior in Artificial Intelligence, from DeepMind. It says this: "We suggest that AI research explore social and cultural engagement as a tool to develop the cognitive machinery necessary for symbolic behavior to emerge."

Alexander McCaig (01:00):

Yeah, and -

Jason Rigby (01:01):

But I liked the social and cultural engagement. They are trying to get the AI to capture that.

Alexander McCaig (01:06):

Well, that's what I'm talking about. Remember, we talked in the previous episode about the subjective nature of it. The reason they have come to that sort of understanding is because, and they stated this, symbols are not a function of anything objective or intrinsic; they are subjective in the interpretation of the viewer. And on top of that, symbols need an agreed-upon link. That link, in my opinion, is cause and effect, but they think it's cultural, and that's why they want to look at that, which drives them to that initial claim, or statement of fact, in the article.

Jason Rigby (01:40):

Yeah. And they talk about symbols lie at the root of intelligent action. I love that statement. Symbols lie at the root of intelligent action. You were talking about the definition of symbol.

Alexander McCaig (01:54):

Yeah. To go back to the definition, 'sym' S-Y-M comes from S-Y-N, which means 'together' and 'bole' B-O-L-E is a throwing or a casting. So in the etymology of that, it's this Greek idea of bringing disparate things together and working with them to create an idea. And that's the intelligent action, essentially, is the bringing of those things together to create a new sort of concept or understanding.

Jason Rigby (02:24):

Yeah. And in this paper they say, our argument will center on the question. So this question is the whole basis of this white paper. How does a symbolic thinker behave if they interpret something as a symbol? And this is going to be a whole paragraph, so we're going to stop right there. How does a symbolic thinker behave?

Alexander McCaig (02:43):

Did you highlight that?

Jason Rigby (02:45):

Yeah.

Alexander McCaig (02:45):

I highlighted the same. I didn't even know. Totally separate point.

Jason Rigby (02:48):

If they interpret something as a symbol. So a symbolic thinker, which would be a human or an AI.

Alexander McCaig (02:55):

Yep.

Jason Rigby (02:56):

So we did an episode on that.

Alexander McCaig (02:58):

An elevated level of consciousness in itself.

Jason Rigby (03:00):

Yes. How is it behaving? That's a key word. So how are we behaving based off of how we viewed the symbol?

Alexander McCaig (03:09):

Right. When we look at that, when, if, how am I looking at it? If I look at a symbol and I'm emotionally charged, that may cause me to have a specific behavior. So now that's going to act as a data point and that data point can then be ingested into this machine learning system and spit out some sort of output.
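(An illustrative aside: the pipeline Alexander describes here, a viewer's reaction to a symbol becoming a data point that a learning system ingests, might be sketched roughly as below. The feature names and data shape are invented for illustration; nothing in this sketch comes from the paper.)

```python
# Minimal sketch of the pipeline described above: a viewer's reaction to a
# symbol becomes a data point that a learning system ingests. The feature
# names and shape are illustrative assumptions, not from the paper.
from dataclasses import dataclass

@dataclass
class SymbolObservation:
    symbol_id: str           # which symbol was shown
    familiarity: float       # 0.0 = never seen it, 1.0 = deeply familiar
    emotional_charge: float  # -1.0 (strong negative) .. 1.0 (strong positive)

def to_feature_vector(obs: SymbolObservation) -> list[float]:
    """Flatten an observation into the numeric row a model would ingest."""
    return [obs.familiarity, obs.emotional_charge]

# A viewer who has never seen the symbol contributes a near-empty signal,
# which is exactly the subjectivity problem discussed here: the data point
# reflects the viewer, not the symbol.
unknown = SymbolObservation("glyph_42", familiarity=0.0, emotional_charge=0.0)
charged = SymbolObservation("glyph_42", familiarity=0.9, emotional_charge=-0.8)
print(to_feature_vector(unknown), to_feature_vector(charged))
```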

Alexander McCaig (03:31):

Now, maybe I see a symbol and I have absolutely no idea what it means.

Jason Rigby (03:36):

Right.

Alexander McCaig (03:36):

No idea where it came from, and it has absolutely no effect on me. And what if you sit there, Jason, and you ask me, "Well, Alex, what does this mean to you?" But I've never seen it before. Or the other question is, "How does this make you feel?" Now it's getting into the very subjective stance for it before I had any other sort of experiential outside input from a cultural standpoint about what the symbol is actually representative of.

Jason Rigby (04:02):

Yeah. And that's the problem, can we put these behaviors, can they be replicated in machines?

Alexander McCaig (04:09):

Right. But the thing is-

Jason Rigby (04:10):

The behavior part.

Alexander McCaig (04:12):

Is the behavior the correct way to look at it? The reason I say that is that I can give you a behavior that lacks any sort of behavior at all, because I lack an understanding of how to interpret that symbol.

Jason Rigby (04:25):

You'd almost need tiers.

Alexander McCaig (04:27):

Behavior is a couple steps past actually knowing.

Jason Rigby (04:30):

Right. You could almost have a knowing behavior skill. So they really relate to this symbol. They really understand it. They really know. And so that behavior is this. Then next tier would be, they somewhat know about the symbol. They've had mild experience with it. So, that behavior is this. You see what I'm saying? Now you could define based off familiarity, the behavior.

Alexander McCaig (04:55):

Right.

Jason Rigby (04:55):

If that makes sense. And then you could tier that into AI and say, okay, level one, two, three, four, five, a familiarity with the behavior. And then you could program that into the AI. Now it has a reference point, a gauge or a spectrum-

Alexander McCaig (05:11):

To stand off of.

Jason Rigby (05:12):

To stand off of the behavior. If that makes sense, sorry?
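(An illustrative aside: Jason's tiering idea could be sketched as a small gauge like the one below. The tier names and thresholds are invented for this example, not drawn from the paper.)

```python
# Illustrative sketch of the familiarity tiers proposed above: behavior is
# gauged against how well the viewer knows the symbol. Tier names and
# thresholds are invented for this example.
from enum import IntEnum

class FamiliarityTier(IntEnum):
    UNKNOWN = 1      # never encountered the symbol
    GLIMPSED = 2     # seen it, but with no context
    ACQUAINTED = 3   # mild experience with it
    FAMILIAR = 4     # understands its common meaning
    KNOWING = 5      # deep, lived relationship with the symbol

def tier_for(familiarity: float) -> FamiliarityTier:
    """Map a 0..1 familiarity score onto the five-level gauge."""
    thresholds = [0.1, 0.3, 0.6, 0.85]
    tier = 1 + sum(familiarity >= t for t in thresholds)
    return FamiliarityTier(tier)

print(tier_for(0.0))  # FamiliarityTier.UNKNOWN
print(tier_for(0.7))  # FamiliarityTier.FAMILIAR
```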

Alexander McCaig (05:14):

But that scale, right? And this is an important fact about all of our human nature and even how people do things in marketing.

Jason Rigby (05:21):

Yes.

Alexander McCaig (05:22):

It's the scale at which we define a bucket, right? Or how we define that 12 inches is a foot.

Jason Rigby (05:32):

Yes.

Alexander McCaig (05:33):

Is that really a foot? Because then you start to look at the world mathematically in the sense of fractals.

Jason Rigby (05:39):

Right.

Alexander McCaig (05:40):

Which means any space within a space is infinite within that space. You just keep dividing it down, so essentially it has infinitely many subdivisions. Well then, if I'm telling the AI to look at something behaviorally, subjectively, to say that this is 12 inches and I need you to root yourself in that, when we as human beings don't actually know that for a fact, is that the proper way to look at something? Is behavior really the platform and the scale that symbols should be interpreted and understood off of?
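(A quick worked aside on the 12-inch example: a segment admits endlessly many subdivisions, yet the pieces still sum to a finite length,

\[
12 \;=\; \sum_{n=1}^{\infty} \frac{12}{2^{n}} \;=\; 6 + 3 + 1.5 + \cdots,
\]

so the "infinite" here lives in the count of subdivisions, not in the distance itself; the worry being raised is about the unit convention being arbitrary, not the length being unmeasurable.)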

Jason Rigby (06:14):

Yeah. I see what you're saying.

Alexander McCaig (06:14):

You see what I'm getting at here?

Jason Rigby (06:14):

Yeah. No, I see what you're getting at. I think it's, as we're going to see in this paper, I think it's a complex question and it's going to take years to figure this out. But listen to this, an intriguing aspect of our analysis is that the traits [shrouded 00:06:28] with symbolic behavior can be made thematically coherent if we reinterpret what symbols are and how they come to exist.

Alexander McCaig (06:36):

Right.

Jason Rigby (06:37):

And we talked about the last two episodes, but they reinterpret. That word really got to me.

Alexander McCaig (06:44):

We have an idea of what, and I'm holding up a coaster right here, and it has math on it. It has geometry. Well, I am saying that, okay, this coaster, how I interpret it, I interpret it as something beautiful. Are symbols strictly to be interpreted as something beautiful? Or are they to be interpreted as something that is mathematic? Are they to be interpreted as geometry? I'm classifying this as geometry.

Jason Rigby (07:11):

Yeah but there are so many variants to that one symbol.

Alexander McCaig (07:15):

There's so many variants to that one symbol.

Jason Rigby (07:15):

I guarantee you that symbol has been used in philosophy. It's been used in astronomy. It's been in-

Alexander McCaig (07:20):

So then how could you reach your basis, which this paper stands on? And this is where I'm going to contend with these other thinkers: the claim that behavior, the subjectiveness of symbology, in the historical linkage of how we interpret symbols, is the proper way to teach an AI system to look at symbols and understand them. I think that, in a logical sense, is fundamentally illogical and not a firmly rooted basis for us to begin to design off of. My contention, my thought, my logical understanding, is that if we are going to look at something, we have to make sure that it is 100% universal, baked into physics.

Alexander McCaig (08:04):

Baked into experience and the laws of this universe, so that a line is a line regardless of where it is. The reason we chose the word Tartle is because Tartle means one thing, in one place in the world. It is what it is. It stands alone.

Jason Rigby (08:20):

Right.

Alexander McCaig (08:21):

It doesn't have a mixed interpretation depending on where it is used across the globe. That, for me, is a fundamental base to stand off of, because watermelon might mean something completely different somewhere else, positive or negative. So at the root of this, when I think of how a symbol is defined, a symbol should strictly be rooted in a law of cause and effect. That allows it to objectively stand alone, and that seems like a firm platform for an artificial intelligence system to actually interpret those symbols.

Jason Rigby (08:53):

So you're talking about using the law of cause and effect, and then programming that law into the machine and then allowing it to judge these symbols based off of that?

Alexander McCaig (09:02):

Precisely, correct. Because if you look at cause and effect, right? -

Jason Rigby (09:05):

Right, you'd still have to look at a behavior.

Alexander McCaig (09:08):

[crosstalk 00:09:08]

Jason Rigby (09:10):

Because the cause and effect will define the behavior.

Alexander McCaig (09:12):

That's exactly right. So when you look at behavioral subjectiveness, that is not the primary thing that is actually driving the interpretation of the symbol; it's not its real meaning.
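(An illustrative aside: one way to picture the cause-and-effect grounding proposed in this exchange is to store a symbol with its observed cause-and-effect events rather than with assigned meaning labels. The sketch below is speculative; the structure and names are invented, not taken from the paper.)

```python
# Speculative sketch of the cause-and-effect grounding proposed above:
# a symbol is stored with observed (cause, effect) pairs rather than with
# culturally assigned meaning labels. Nothing here comes from the paper.
from collections import defaultdict

class SymbolLedger:
    def __init__(self):
        # symbol_id -> list of (cause, effect) observations
        self._observations = defaultdict(list)

    def record(self, symbol_id: str, cause: str, effect: str) -> None:
        """Log one observed cause-and-effect event involving the symbol."""
        self._observations[symbol_id].append((cause, effect))

    def grounding(self, symbol_id: str) -> list[tuple[str, str]]:
        """Return the raw event record; interpretation is left open."""
        return list(self._observations[symbol_id])

ledger = SymbolLedger()
ledger.record("flag_A", cause="flag raised", effect="army assembles")
ledger.record("flag_A", cause="flag raised", effect="festival begins")
print(ledger.grounding("flag_A"))  # both effects kept; neither is 'the' meaning
```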

Jason Rigby (09:20):

Yeah, you wouldn't say, follow my thought process here, you wouldn't say, okay, so we've had this flag for this country for 5,000 years, let's say different variations of the flag, the same flag, and the AI recognizes this. This country and these people went to war with the flag; it starts seeing all these pictures with the flag. It's putting all the puzzle pieces together, this AI is. Now it's saying those people in that region will go to war for that symbol. They will come together for that symbol, tribalism and all that. But when this AI begins to understand, and this is what I want to talk to you about, when this AI begins to understand and identify a symbol with a group of people, do you see what I'm saying? Because the Tartle symbol will do that. You know, as we build and as we grow, that symbol is going to be identified with us. You can't solve that part.

Jason Rigby (10:25):

I'm not necessarily pushing back. I'm asking a question. If you have just cause and effect as the answer.

Alexander McCaig (10:31):

Yes.

Jason Rigby (10:32):

And you filter everything with that. I'm not understanding, that's a good word, I'm not understanding how we could judge the effect. I get the cause, that's easy. The cause part is good, but how would you properly judge the effect? Because if I look at those people in Iceland, let's say, with that flag for 700 years, and I look at the cause and effect, then I say, well, this symbol has to do with war, period. But I've judged that symbol based off of time and off of how that group of people behaved around the symbol.

Alexander McCaig (11:04):

So you've essentially defined that symbol off of probability alone because that's time and I guess time and action.

Jason Rigby (11:13):

Are you talking about the effect of that tribal group of people and what they're doing?

Alexander McCaig (11:18):

No. Let me ask you something-

Jason Rigby (11:19):

When you go cause and effect-

Alexander McCaig (11:19):

Say that tribal group wakes up one day with divine inspiration and they say, we're never going to go to war, but they carry that same flag.

Jason Rigby (11:27):

Yes.

Alexander McCaig (11:28):

What then happens to the machine learning model?

Jason Rigby (11:29):

Right?

Alexander McCaig (11:30):

This is the same fundamental problem we have when we look in a subjective sense of defining who people are rather than let them define for themselves.

Jason Rigby (11:37):

Yes.

Alexander McCaig (11:38):

If you don't let something stand on its own and define itself through a law of cause and effect over so much time individually, then it's impossible for you to truthfully represent what the collective of those things actually means. And if you allow it to stand alone on something that is timeless and truthful, then I don't have to worry about the subjective representation of it. It means this here, regardless everywhere. Gravity means the same thing to everyone, everywhere.

Jason Rigby (12:07):

Because, you know, this is going to be such a hard task, because I'm thinking of how symbols change over time, and now you've got to categorize by timeframe: from 1620 to 1680, that symbol stood for this.

Alexander McCaig (12:19):

Yeah.

Jason Rigby (12:21):

And that group of people acted this way, and then from 1900 to 1940, it meant this.
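(An illustrative aside: the era-bucketed categorization Jason describes might look like the structure below. The dates and meanings echo his hypothetical example; the data shape is an assumption.)

```python
# Illustrative sketch of era-bucketed symbol meanings, echoing the
# hypothetical 1620-1680 / 1900-1940 example above. Dates and meanings
# are placeholders, not historical claims.
meaning_by_era = {
    "symbol_X": [
        ((1620, 1680), "stood for this"),
        ((1900, 1940), "meant this instead"),
    ],
}

def meaning_in(symbol_id: str, year: int) -> str | None:
    """Look up the era bucket containing the year, if any."""
    for (start, end), meaning in meaning_by_era.get(symbol_id, []):
        if start <= year <= end:
            return meaning
    return None  # the gaps are exactly the 'no firmness' problem raised next

print(meaning_in("symbol_X", 1650))  # stood for this
print(meaning_in("symbol_X", 1800))  # None
```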

Alexander McCaig (12:27):

But Jason, there's no firmness in that though. But that's what I'm saying.

Jason Rigby (12:31):

Yeah. As you begin to categorize these symbols, I'm thinking as a machine, as you begin to categorize these symbols, because this is what machines love to do, to categorize us, to put us in buckets. So as you begin to do that based off of that symbol, because the symbol is what we're telling it is the priority.

Alexander McCaig (12:54):

Yeah.

Jason Rigby (12:55):

And everything else filters off of that. So all this information you've collected from reading books, they're going to do matching, symbol here, symbol there, and then it's going to say, "Oh, okay, well what does that book say about that symbol on that page?" I mean -

Alexander McCaig (13:07):

Here we go.

Jason Rigby (13:07):

Do you see what I'm saying?

Alexander McCaig (13:08):

Here we go, bro. You ready for this?

Jason Rigby (13:10):

I'm still on effect though.

Alexander McCaig (13:12):

No, you're right. Let's think about that cause and effect. Let's talk about firmness right? In the idea that our current models like to put things in buckets one or the other.

Jason Rigby (13:20):

Yes.

Alexander McCaig (13:20):

I'm going to draw you a circle, okay. Is the circle full in the inside or is it empty?

Jason Rigby (13:27):

Now? That depends on whether I resonate with the circle to the point of saying, do I identify that there's enough to put myself inside the circle or will I stay on the outside and observe it?

Alexander McCaig (13:38):

What did I just do? I forced you to focus only on the internal aspects of it. I didn't tell you to focus on the fact that a circle is a line connected at both points.

Jason Rigby (13:47):

But by representing that symbol to me, you just put a responsibility on me for an action.

Alexander McCaig (13:53):

I did. Did I not?

Jason Rigby (13:54):

And machines love that.

Alexander McCaig (13:55):

And then that forced you to think a specific way that otherwise you wouldn't have thought if I didn't give you some initial input. So then essentially the idea of how I implanted that verbal thought with my mouth, into the visualization in your mind actually created a bias internally within you. So your thought actually didn't stand alone by itself at that point.

Jason Rigby (14:16):

Right?

Alexander McCaig (14:17):

You now took on some sort of outside influence that was subjective in how it teed you up to that. Now, if you look at the circle by itself, a line connecting, it doesn't really matter if it's empty or full.

Jason Rigby (14:31):

Right?

Alexander McCaig (14:31):

But the machine is requiring that sort of understanding because it's not rooting itself in the idea that a line connected only makes the symbol of a circle and that's all it is, is a circle. And so the issue with that too, is that a machine has to have one or the other. It lives in a world of binary. It's not in a quantum computing algorithm where it can actually take both states at the same time. Yes, the circle in a subjective stance is full and empty, right? But if I look at it from our current computational models, it has to be full or empty.

Alexander McCaig (15:07):

So we are right now essentially limited in the ability of a computer to truthfully process something. And we're forced to be subjective because we cannot match the fundamental, timeless idea of a circle just being a circle itself; the algorithms require a subjective input. So if we look at this paper and the way they're describing this over time, we are driving ourselves into this idea of having to say a symbol is subjective, because linguistically, when we look at it, we define it as black and white, where our mind can actually look at it and say, in a mental image, that it is both at the same time.
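(An illustrative aside: the binary-bucket limitation Alexander describes can be shown in miniature. A Boolean "full or empty" field forces one of two answers, while a richer type can keep the undecided reading. All names below are invented.)

```python
# Toy sketch of the binary-bucket problem discussed above: a Boolean field
# forces 'full or empty', while a richer type can keep the undecided reading.
# All names here are invented for illustration.
from enum import Enum

class CircleReading(Enum):
    FULL = "full"
    EMPTY = "empty"
    BOTH = "both"  # the reading a strict Boolean cannot represent

def binary_reading(is_full: bool) -> str:
    # The classic model: the question itself forces one of two buckets.
    return "full" if is_full else "empty"

def open_reading() -> CircleReading:
    # Leaving the symbol as it is: a line connected at both ends, where
    # full-versus-empty simply is not forced.
    return CircleReading.BOTH

print(binary_reading(True))  # full -- the bias is baked into the question
print(open_reading())        # CircleReading.BOTH
```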

Jason Rigby (15:51):

Well, they said this in the article and this is where [crosstalk 00:15:54] ... they said that the behavior of the symbol was an identity, but then they say symbols only exists with respect to an interpreter. And I was like-

Alexander McCaig (16:05):

That's the problem.

Jason Rigby (16:05):

If the symbol stands on its own. It exists outside of itself?

Alexander McCaig (16:11):

See, Schrödinger's cat. Is the cat dead or alive in the box? I don't even know if there's a cat in the box, though.

Jason Rigby (16:16):

Right?

Alexander McCaig (16:16):

So only when I open it do I then decide whether or not it's alive or dead. But when the box is closed, it's both at the same time; the cat can still stand alone in the universal sense.

Jason Rigby (16:26):

But they said the symbol only exists with respect to an interpreter. I was like, no, the symbol exists.

Alexander McCaig (16:30):

No. In the fact that they say it only exists because of someone interpreting it-

Jason Rigby (16:34):

The interpreter basing everything off of behavior.

Alexander McCaig (16:36):

Is already a fundamental issue.

Jason Rigby (16:37):

Right.

Alexander McCaig (16:38):

And then they're linguistically crippling themselves, and also as a function of behavior, to say that behaviors are the primary platform we define a symbol off of, rather than allowing a symbol to stand for itself, for what it really is.

Jason Rigby (16:49):

Yeah. Because now you're putting humanity first in the actions and the behavior of humanity. And you're putting the symbol, now you've got to filter all of it down to the symbol instead of the symbol being at the top and you filter down through humanity.

Alexander McCaig (17:05):

[inaudible 00:17:05] clear geometry and math existed before the biological organism showed up.

Jason Rigby (17:10):

Oh yeah. Yeah. You could see it.

Alexander McCaig (17:11):

Do you think the big bang was like, person first, then big bang, then symbol? No. It was big bang, no consciousness happening with a human being, no biological organism and its behavioral interpretation. It was just math, structure, physics, and chemistry, all occurring.

Jason Rigby (17:28):

Yeah. And if I was going to get into quantum physics on this, what I would say is how symbols only exist with respect to an observer.

Alexander McCaig (17:34):

Yeah. Now we're talking.

Jason Rigby (17:36):

You see what I'm saying?

Alexander McCaig (17:37):

Yeah. And so if you're looking at it only from the sense of the symbol's existence through observation-

Jason Rigby (17:42):

Yes.

Alexander McCaig (17:43):

Then you are essentially limiting also everything else in this world, because I'm only observing.

Jason Rigby (17:48):

Yes.

Alexander McCaig (17:49):

And then like the double slit experiment with the light photon, it changes depending on if the people are actually observing the experiment or not observing.

Jason Rigby (17:55):

Yes.

Alexander McCaig (17:56):

So now I'm having a quantum effect. And again, this is a problem with the computer and its own interpretation, right. It has to live in both states at the same time. But if you define it by both of those states, well, then does it become self-limiting in the real meaning of that symbol? Or do you allow the symbol to just be what it is? I allow the light photon to be what it is rather than say it is a particle or a wave.

Jason Rigby (18:18):

Yeah, because they say it only exists with respect to an interpreter. And then they say the symbol meaning is independent of the properties of their substrate. So I'm saying-

Alexander McCaig (18:26):

How could it be independent of the properties of the substrate? How can a symbol possibly be independent of itself? It is what it is. This is our problem: we continue to choose to define things as things they are not. This is the root of all the issues in how we interpret data and our world collectively. This is what causes our biases, our separation, war, religious fights, issues of police brutality, racism, all these separate things, these fundamental issues that we're looking at. It's because we root it subjectively in human behavior and don't allow it to stand alone, and then we fail to look for the unifying aspects of things that stand alone and to understand that we are all part of these standalone things.

Jason Rigby (19:05):

Well, I mean, when you say symbol meaning is independent of the properties of the substrate, now you're going against biology. You know what I mean? Because you're looking at it in the sense that the symbol is an entity. And it being an entity, which it is, it stands alone in and of itself. Now you're looking at it and saying the symbol's meaning is independent of its properties. No, it's all one.

Alexander McCaig (19:32):

And the paradox is solved.

Speaker 1 (19:43):

Thank you for listening to Tartle Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and the source data defines the path. What's your data worth?