Tartle Best Data Marketplace
June 25, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 4

BY: TARTLE

AI and Symbols Pt. 4

To elevate AI from a mere program that is very good within a narrow sphere to something approaching actual intelligence, we need to find a way to teach it how to recognize and interpret symbols. One of the main topics of discussion in this series has been whether we should take a subjective, interpretation-based approach or an objective, universal one to training AI in the nature of symbols. Before we go further down this road, though, we should take a step back to appreciate the enormity of the task. To do that, we need to recognize that there is disagreement over the very nature of symbols in the first place. What makes a symbol a symbol?

For the purposes of our discussion we should briefly look at the definition given by the authors of Symbolic Behavior in Artificial Intelligence, the paper that has been the basis of this series. They draw in part on a definition given by Newell and Simon in 1976, describing symbols as "a set of interrelated physical patterns that could designate any expression whatsoever." All right. What does that mean?

At first glance, one could read it as meaning that any symbol, any "set of interrelated physical patterns," can literally represent anything. That may or may not be how Newell and Simon intended it. If it is, we're not sure how to help them. Obviously certain patterns can only represent certain things. A statue of an elephant clearly represents an elephant and not a mouse. "But what if you call an elephant a mouse?" says the gadfly in the back. Then it represents something that someone calls a mouse. The point is, it's very clear what that statue represents, no matter what name you give the animal.

So, what else might the definition mean? A better reading, one that makes a lot more sense, is that some kind of "interrelated physical pattern" can be used to represent anything. More simply: literally everything can be symbolized. The immensely complicated and intense concept of love is symbolized with a heart. A circle is a circle wherever you go. If anyone in virtually any culture sees a small disc with a person's head on it, they know they are looking at some kind of money.

So, what should we use as the definition of a symbol? A simple phrase would be that a symbol is anything that represents something else, whether it be a sound, an action, a thing, or a concept. Now that we have that out of the way, let’s get back to the idea of an objective interpretation of symbols.
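That working definition can be made concrete in a few lines of code. The sketch below is purely illustrative (the table names and mappings are our own, not the paper's): it treats a symbol system as nothing more than a lookup table from patterns to referents, so the same pattern can designate different things under different conventions.

```python
# A symbol system as a bare mapping: any pattern can designate anything,
# so meaning lives in the convention (the table), not in the pattern itself.
western_pop = {
    "heart": "love",            # a complex concept compressed into one shape
    "disc_with_head": "money",  # a coin is recognized almost everywhere
}
folk_tradition = {
    "pentagram": "protection",  # the older association
}
horror_film = {
    "pentagram": "satanic ritual",  # the same pattern, a different convention
}

def interpret(pattern, convention):
    """Return what a pattern designates under a given convention, if anything."""
    return convention.get(pattern, "just a pattern")

print(interpret("pentagram", folk_tradition))  # protection
print(interpret("pentagram", horror_film))     # satanic ritual
print(interpret("pentagram", western_pop))     # just a pattern
```

The pentagram entries anticipate a point from the episode below: the same geometry reads as protection in one tradition and as something sinister in another, purely as a matter of convention.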

We’ve already talked about the fact the paper’s authors favor an interpretation based approach to training AI. They do correctly identify that symbols get a lot of their meaning from the culture in which they originate. Based on this, would it be fair to criticize an objective approach as being impossible or anemic at best. Impossible? No. Anemic? Perhaps. 

However, consider opera. Performances are still very often sung in Italian or German. Even when an opera is in your own language, the singing is often so stylized that you may not recognize a word of it. Yet, despite not understanding all the symbols being presented to you, you still pick something up. You can pick up on the tone of the music, the melodies, the pitch of the singer's voice, all of which convey meaning to the listener. In short, there are universal aspects to the symbols being presented that transcend particular cultures.
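Some of those universal cues can even be computed straight from the signal, with no knowledge of the language at all. As a toy illustration (our own sketch; the samples and threshold are made up), loudness is one culture-independent feature that a listener, or a program, can pick up on:

```python
import math

def rms_loudness(samples):
    """Root-mean-square amplitude of an audio signal: a language-independent cue."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def intensity(samples, threshold=0.5):
    """Classify a passage as 'intense' or 'soft' from amplitude alone,
    without decoding any words."""
    return "intense" if rms_loudness(samples) > threshold else "soft"

quiet_aria = [0.1, -0.2, 0.15, -0.1]   # hypothetical normalized samples
fortissimo = [0.9, -0.8, 0.95, -0.85]

print(intensity(quiet_aria))  # soft
print(intensity(fortissimo))  # intense
```

This is the kind of objective signal the opera example points at: you do not need the libretto to notice that the music has turned loud and urgent.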

This works with a variety of symbols. When we see a statue, we know what it represents. We can tell something of a character's mood from the expression in an illustration.

Because this universal element is identifiable, it seems clear that we can and should explore a universal basis to teaching an AI how to properly interpret symbols.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.


Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path, the path.

Alexander McCaig (00:24):

Hello, everybody. Welcome to TARTLE Cast on this beautiful snowy day here in New Mexico. We do need the snow because I like the slow melt. Doesn't make our ground flood and the topsoil erode too quickly. It's a nice way to put some of that water back in the dry, arid earth we have over here.

Jason Rigby (00:40):

Yeah, especially at such high altitude [crosstalk 00:00:43]-

Alexander McCaig (00:42):

Yeah, very high altitude. When you look at a snowflake, you know it's a snowflake. Each one has its own very distinct shape. But the snowflake is, oh my God, I'm onto something, the snowflake is bound to certain laws of physics that define how it can grow. Even though the structures may be random for its outcome, making each one characteristically itself, the way that it becomes characteristic of itself has to go through the same process and pattern that everything else does. And so even though as a symbol there's different variants of it, we know it's a snowflake. We know that that crystal grows regardless, bound by the same laws of physics or cause and effect that are happening here.

Alexander McCaig (01:29):

So then if you look at nature as an example for symbols, is it really then of the subjective sense that's necessary for culture to define what that symbol means? Or just the laws of physics are enough to say, "Yeah, regardless of who interpreted it, it's a snowflake regardless of how it looks."

Jason Rigby (01:47):

Um, yes. Yes.

Alexander McCaig (01:47):

Wouldn't you say? So then in the objective sense, the definition of what a symbol is and how to interpret it, maybe it's better for the AI strictly to stick to the objective nature of it, rather than looking at, as we have up here on the screen, the history and the culture that has gone into certain symbols, even religious ones, that define what they are and also how it's represented to a specific subgroup.

Jason Rigby (02:09):

Yeah. And I was thinking about that, whether it's objective or subjective, and I was thinking about the process of how humans learn. And then, of course, we're doing neural networks and all that, and we want to put that into AI. But I think when it comes to symbols, the best thing that you can do is view the AI as an evolution in the sense of it's learning and it takes time. Now you can speed up time, which is cool with AI.

Alexander McCaig (02:39):

Yeah.

Jason Rigby (02:39):

You can have it learn really, really quickly. It doesn't have to sit there and learn every 24 hours. It's not bound by time. So you can make it learn a year's worth in two hours.

Alexander McCaig (02:50):

Go through experiences faster than you can.

Jason Rigby (02:52):

Yeah, yeah. Yeah, exactly. As quickly as the computational processes of it. But whenever you look at this, it's like, why wouldn't you say, "Okay, this may take five, 10 years, but as far as the symbol side of things, let's allow this AI to build upon itself this knowledge." So yeah. We can have an objective view and look at, let's say, okay, a snowflake. Yeah, there's all these geometric patterns falling from the sky, which is really freaking wild if you think about it.

Alexander McCaig (03:22):

If you think about it, it's nuts. There are crystals falling from the sky.

Jason Rigby (03:25):

And all of them have a different pattern.

Alexander McCaig (03:27):

Yeah.

Jason Rigby (03:27):

And they're just absolutely intricate.

Alexander McCaig (03:29):

And they're all light enough. And they have enough of a structural integrity to hold their own even when they're in a pile. That is-

Jason Rigby (03:35):

Dude, there is like, I don't want to freak people out. There is math falling on you.

Alexander McCaig (03:39):

Yeah. You're just drowning in math right now.

Jason Rigby (03:43):

But when we look at, and I'm going to get into the second part of this on this amazing paper that we were talking about, but we look at properties of symbols and symbolic systems.

Alexander McCaig (03:52):

Yeah.

Jason Rigby (03:53):

And there's a symbolic behavior and artificial intelligence paper that we got and it was from, I want to say, what's it say, DeepMind?

Alexander McCaig (03:58):

DeepMind.

Jason Rigby (03:59):

Yeah. DeepMind. Yeah, so you guys can look this up. "Although symbols are conceptually and historically important in the study of AI, they do not have a canonical definition in the literature. Working definitions are often implicit or only vaguely referenced, presumably because we can lever intuitions gained from the study of logic, computer science, philosophy, mathematics, and semiotics. But the lack of depth of definition has led to tension and disagreement."

Alexander McCaig (04:24):

Yeah.

Jason Rigby (04:24):

And so, go ahead.

Alexander McCaig (04:26):

[crosstalk 00:04:26] we should just talk about that. So what actually defines a symbol as being a symbol? That's what they're saying.

Jason Rigby (04:34):

Mm-hmm (affirmative).

Alexander McCaig (04:35):

And because no one has come to an agreement over time as to how you would define a symbol developing, and then when a symbol becomes a symbol, that leaves them at odds with themselves. They're saying, "If we can't clearly define what a symbol is, how are we supposed to teach artificial intelligence systems to recognize symbols, regardless of whether you take a subjective stance or an objective stance for that observational understanding of it?"

Jason Rigby (05:00):

I actually liked the Newell and Simon definition. In 1976, according to the paper, they defined a symbol as a set of interrelated physical patterns that could designate any expression whatsoever. I think that word right there, interrelated, carries the whole concept.

Alexander McCaig (05:18):

Yeah. That's really cool.

Jason Rigby (05:19):

Mm-hmm (affirmative).

Alexander McCaig (05:19):

So that something that is interlinked, regardless, you'll be able to create a symbol out of it. Shouldn't there be a symbol for just about everything, right? If the universe is built off of math, right, then something has to equal something else. So either way you do that formula-

Jason Rigby (05:40):

Right.

Alexander McCaig (05:41):

... it's got to have some sort of structured output. It requires balance. The universe is always in balance. So if we look at these symbols here, not to get too way out there, metaphysically, they are all these different interlinking things that come to a definition of what it may be, to describe something. And if we look at these religious ones, like the om or the cross or the hamsa, right, or the yin and the yang or the pentagram, or the Star of David, each of these has so many historical events that are interlinked that define the culture around it, but that visual representation embodies all of those different things that have been brought together.

Jason Rigby (06:25):

Symbols have evolved. And we talked about this last time, so I don't want to get into that. But symbols haven't, like these symbols here especially, they've evolved over time to mean different things.

Alexander McCaig (06:33):

Correct.

Jason Rigby (06:34):

So, now that's where it gets crazy. And I like this, they said, "Symbols are subjective entities." And then they quote a philosopher, Charles Sanders Peirce, and he outlines, and I want you to get into this and we need to get into this, outlines three categories of relation.

Alexander McCaig (06:50):

Yeah.

Jason Rigby (06:50):

So relation is very important because in everything that we put value, there's always a relation to it, or we get our word relationship from.

Alexander McCaig (06:57):

Yeah.

Jason Rigby (06:58):

"Icons, indices and symbols whose definitions illuminate the rules of convention and establishing meaning."

Alexander McCaig (07:04):

So, icons?

Jason Rigby (07:05):

Right.

Alexander McCaig (07:05):

You're so muted out because of the-

Jason Rigby (07:08):

Oh, really?

Alexander McCaig (07:08):

... the covers on the... Yeah, it's much better.

Jason Rigby (07:10):

Oh, okay.

Alexander McCaig (07:10):

So you said, no, no, go ahead. Go and tell me, so [crosstalk 00:07:12]-

Jason Rigby (07:11):

So, icons, and we'll get into, let's just get into icons. So they talk about icons make reference by way of similarity. A sculpture of an elephant bears literal physical resemblance to a real elephant and hence, is iconic.

Alexander McCaig (07:23):

Yeah. It becomes iconic. We know that is an elephant.

Jason Rigby (07:26):

Right.

Alexander McCaig (07:26):

We know it's a sculpture of an elephant, even though it isn't truly an elephant-

Jason Rigby (07:30):

Yes.

Alexander McCaig (07:30):

... in the flesh.

Jason Rigby (07:31):

Right.

Alexander McCaig (07:31):

Yeah. All right. So, go ahead.

Jason Rigby (07:32):

And then indices, on the other hand, leverage some temporal or physical connection to things in which they refer. The mercury in a thermometer, for example, changes its height in accordance with the temperature.

Alexander McCaig (07:42):

Yeah. So we don't look at mercury as the mercury in the thing that says, "Oh, fantastic. It's going up and down."

Jason Rigby (07:49):

Right.

Alexander McCaig (07:49):

That mercury is representative of an idea of measurement within our consciousness that tells us it is either warm or cold.

Jason Rigby (07:58):

Right.

Alexander McCaig (07:59):

It's describing a state of being through our representative idea of something being higher or lower to say how it will have an effect on us, but it's not the fact that it's mercury moving up and down-

Jason Rigby (08:10):

Right.

Alexander McCaig (08:10):

... it's what that alludes to. It's what is saying, how do we then take the mercury, put it against a system of measurement to then represent another idea.
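That index relation, a physical quantity plus an agreed system of measurement, is easy to make concrete. The calibration below is a toy of our own invention (the constants are hypothetical): the mercury height on its own is just a physical fact, and it only means a temperature once a convention maps it to degrees.

```python
def height_to_celsius(height_mm, zero_mm=20.0, mm_per_degree=1.5):
    """Convert a mercury column height to a temperature reading.

    The height alone is just physics; the calibration constants (where
    zero degrees sits on the column, how many millimeters one degree
    adds) are the agreed convention that turns the height into meaning.
    """
    return (height_mm - zero_mm) / mm_per_degree

print(height_to_celsius(20.0))  # 0.0
print(height_to_celsius(50.0))  # 20.0
```

Swap the calibration constants and the very same column height "says" a different temperature, which is the point being made about indices.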

Jason Rigby (08:18):

Yeah. And according to this Charles Sanders Peirce philosopher, he said, "Symbols depend on agreed upon link, regardless of the actual physical or temporal characteristics of the medium."

Alexander McCaig (08:27):

Mm-hmm (affirmative).

Jason Rigby (08:27):

"It is only by agreement that a flag comes to symbolize one country rather than another."

Alexander McCaig (08:32):

Yeah. We know the flag of the United States.

Jason Rigby (08:35):

Right.

Alexander McCaig (08:35):

Everybody here in the United States agrees to it and across the globe, we know that that is the flag of the United States of America.

Jason Rigby (08:41):

Right.

Alexander McCaig (08:42):

We know-

Jason Rigby (08:42):

['merica 00:08:42].

Alexander McCaig (08:42):

'merica. We know what Australia looks like.

Jason Rigby (08:45):

Right.

Alexander McCaig (08:45):

We understand what the United Kingdom and Angola and the Democratic Republic of the Congo, each one of those things has its own representation, but it's an understanding and acceptance that everyone has said, "Yes, this is yours. Yes, we agree to that measurement of conscious thought to say that association is in fact an association with those people within that border."

Jason Rigby (09:07):

And they say, "The idealized notion of a symbol wherein meaning is established purely by convention."

Alexander McCaig (09:14):

Yeah. And that's we're looking at convention alone.

Jason Rigby (09:18):

Yeah, it's-

Alexander McCaig (09:19):

And you got to look up the definition of the word convention here. I had it up. Oh, [crosstalk 00:09:26]-

Jason Rigby (09:26):

And he says, "Well, first describe the relationship between representation and meaning as determined by convention." And in the later section titled Symbolic Behavior, which we'll get into, they discuss the role of the symbolic interpreter.

Alexander McCaig (09:38):

Yeah.

Jason Rigby (09:38):

So you're looking at representation, convention and interpreter.

Alexander McCaig (09:43):

Yeah. So maybe it represents the elephant, okay? Because there's a picture of an elephant. In convention, the agreement of all of us, we're convening, coming together, say, "Oh, that is in fact an elephant. It is in fact representative of the elephant." And then you have the third stance. I have someone viewing that elephant. Maybe they say, when they look at that elephant, that gives them a sense of fear. Maybe that elephant is strictly a representation of a place. And it's not about the elephant itself, but how that person has subjectively applied that visual to whatever their conscious thought is that allows them to say, "Oh, I'm thinking of a place or an emotion," rather than say, "Oh, it's a symbol of an elephant. It's just the elephant." You see what I'm saying?

Jason Rigby (10:24):

Yes. Yeah, no, I see. And that's what he says here. "Symbols meaning is rendered independent of the properties of the symbol's substrate." And we kind of got into that yesterday.

Alexander McCaig (10:33):

The substrate.

Jason Rigby (10:34):

Yeah. But I mean, the symbols meaning is rendered independent of the properties. So I can look at this and say, "There's a pentagram with the circle on it."

Alexander McCaig (10:40):

And what are the properties? It has mathematic properties.

Jason Rigby (10:42):

And there are so many, there's positive and negative towards the pentagram.

Alexander McCaig (10:45):

Yeah. But that's the subjective part.

Jason Rigby (10:47):

Yes.

Alexander McCaig (10:48):

But the substrate of mathematics they're saying is not fundamentally important for the definition of what that symbol means.

Jason Rigby (10:53):

Right.

Alexander McCaig (10:54):

They think that the definition of that symbol is then beyond that substrate of math and geometry and rather around the social cultural conventions about what happens when I look at a pentagram.

Jason Rigby (11:03):

Yeah. And they said they use a really easy example is the difference between written texts and oral speech.

Alexander McCaig (11:07):

Sure. If I go to deliver a speech, it's going to have a different type of effect. There's a visual. There's a lot going on and you receive it audibly. But if I hand you that same speech on paper, some would be like, "Oh, I'm just reading a person's speech." Doesn't have the same sort of impact.

Jason Rigby (11:22):

Yeah. And they said the mediums are distinct.

Alexander McCaig (11:24):

Yeah. One is vocal and one's a very physical one. And through those mediums gives off a different effect of the symbolism of what is actually occurring.

Jason Rigby (11:36):

Yeah. And he goes, this is really interesting, going back to Newell and Simon's 1976 definition, "It is important to speak about symbols with reference to a particular interpreter who generates a particular interpretation and to not speak about symbols as if they transmit objective meanings to any possible interpreter."

Alexander McCaig (11:53):

Yeah.

Jason Rigby (11:54):

We have to understand they're looking at it with AI because that's what the AI is going to do.

Alexander McCaig (11:58):

It's observing.

Jason Rigby (11:59):

Yeah. It's going to impose that here's a rule for this symbol so it's going to impose it for its interpretation and just the AI is going to think, "Well, everybody across the globe views it as simple as this, because this is what the rules [crosstalk 00:12:13]-

Alexander McCaig (12:13):

This is how I should look at it.

Jason Rigby (12:14):

Yes.

Alexander McCaig (12:15):

But that's half the issue there, is that you bake in that totally subjective idea into AI, and so when the AI becomes subjective, it lacks all objectivity. It's essentially out of balance. So then how do you properly weight it in your algorithm to say, "Look at it subjectively in this stance with this percentage, and look at it objectively in this stance of its substrate, its mathematical substrate, and its meaning," right, as to what are those key drivers? But that's kind of the decision there. So if the observer, the perspective of AI, is strictly a subjective perspective, you can see how that can become dogmatic and biased depending on the symbol that it receives.
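Alexander's weighting question can be sketched in a few lines. This is purely hypothetical (neither TARTLE nor the paper proposes these numbers): an interpreter that blends an objective, universal score with a culture-specific subjective score, so that neither view dominates.

```python
def blended_reading(objective_score, subjective_score, w_objective=0.6):
    """Blend an objective (universal) and a subjective (cultural) interpretation.

    Both scores are assumed to lie in [0, 1]; w_objective sets how much
    weight the universal reading gets versus the culture-specific one.
    """
    w_subjective = 1.0 - w_objective
    return w_objective * objective_score + w_subjective * subjective_score

# e.g. a pentagram: objectively just geometry (low "threat" score),
# subjectively loaded in some cultures (high "threat" score)
print(blended_reading(objective_score=0.1, subjective_score=0.9))  # roughly 0.42
```

The open question the episode raises is exactly how to choose that weight, and whether any fixed split can avoid baking bias into the system.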

Jason Rigby (12:54):

Yeah. And he goes, Shannon said this, and I think this is really cool. It's exactly what you're talking about. "Shannon realized that we can communicate symbols efficiently because we can exploit the fact that symbol meaning is imposed, and hence because we can indiscriminately factor meaning out of the equation when we shuttle signals around." Signals are really important. They use a great example.

Alexander McCaig (13:12):

Telephone?

Jason Rigby (13:12):

I want you to talk to that. Crucially, to a telephone or an electrical cable or drum, electrical pulses do not mean nor symbolize anything. And there is nothing you can glean from the physical properties of these pulses to tell what they mean to a human.

Alexander McCaig (13:25):

So just because a pulse signals-

Jason Rigby (13:28):

It happens. Yeah.

Alexander McCaig (13:28):

How am I supposed to understand it? A lot like the thermometer, okay? How am I supposed to understand that measurement? Or if I'm hearing tones come through a phone, how am I supposed to understand those tones into language? There has to be some sort of construct around it. So then is it not possible to say that a specific line or a squiggly or whatever it might be-

Jason Rigby (13:49):

Right.

Alexander McCaig (13:50):

... can just symbolically stand on its own. And the contention here, using the telephone metaphor, is that a tone needs a tool to render it into something audible so you can actually capture it and then subjectively apply your own experiences to it. If that doesn't exist, then there's actually no meaning that happens through that tone.

Jason Rigby (14:08):

Yeah. Because you know what-

Alexander McCaig (14:09):

Then let me ask you something then.

Jason Rigby (14:10):

Go ahead.

Alexander McCaig (14:11):

This is interesting. Say I go to an opera. Opera's in German or Russian or Italian. I don't know what the hell the person's saying, but I am receiving something from that signal. It is still triggering something regardless of me knowing the language.

Alexander McCaig (14:30):

So then is that necessarily true about the phone? Maybe someone's screaming at me through the phone. How am I recognizing that tone? Maybe it's in a different language and they are just totally pissed off. I can still sense something. There's still meaning to it beyond the idea of saying, "Oh, if I look at it objectively, it won't tell me anything." But if I objectively say, "This is audibly louder. This person is objectively aggravated"-

Jason Rigby (14:57):

Yes.

Alexander McCaig (14:57):

... then how is it to say that that sort of stance that they're taking specifically saying that if it's just tone alone nothing can be rendered from it? There's a lot that can be rendered just from a line itself standing alone, right? So that's where there's contention for me around that sort of idea philosophically.

Jason Rigby (15:14):

But they get into this and this is the part I liked. "A second consequence of construing symbol meaning as a matter of convention, therefore something that can be assembled from the perspective of one system but not another." So now we're getting into systems theory which I like. But, and he talks about this, "Bob, turning to communication theory again, suppose Bob establishes a coding scheme with Alice, but not with Sally. The sequences of electrical pulses that Bob sends Alice are symbolic to Alice-"

Alexander McCaig (15:40):

But not to Sally.

Jason Rigby (15:41):

Yeah. "They have associated meaning based on agreed convention." That's where I was like, "Oh, okay. There's clarity now." "But to Sally, they are just electrical pulses." So, we'll go back to this. This is a funny, but the pentagram thing, and they already did a whole documentary on this. Remember in the 1980s? Well, you probably weren't born yet.

Alexander McCaig (16:00):

Wicca.

Jason Rigby (16:01):

Yeah, yeah. No, well, you also had this huge thing where like, there was backmasking of records. And then there was like everybody's a Satanist and everybody...

Alexander McCaig (16:10):

Oh.

Jason Rigby (16:11):

And then did you ever, did you remember? And then all these cool rock bands are like, and so there was this huge movement, especially on the religious side of things and you can look it up and they made a documentary on it, how hysteria works.

Alexander McCaig (16:25):

Okay.

Jason Rigby (16:26):

All of a sudden now we've got police officers coming in and saying they have crimes on Satanist and there's babies. It's like this adrenal gland and all this BS with, like really do you think Tom Hanks is taking little kids and sucking blood out of them?

Alexander McCaig (16:41):

No.

Jason Rigby (16:42):

I mean, like really?

Alexander McCaig (16:43):

Come on.

Jason Rigby (16:43):

Come on.

Alexander McCaig (16:43):

Yeah, let's go.

Jason Rigby (16:44):

So, we've gone to QAnon and all this. We get to this point to where we have this. And this is what the documentary said is that now you have a detective coming in looking at a crime scene and there's a pentagram there, automatically that symbol-

Alexander McCaig (17:00):

Satanic worship.

Jason Rigby (17:01):

Yes. Okay. And they label, do you see what I'm saying?

Alexander McCaig (17:03):

Yeah.

Jason Rigby (17:04):

So to him, because of the-

Alexander McCaig (17:05):

What if I'm going around and-

Jason Rigby (17:07):

... [crosstalk 00:17:07] the meaning, like Sally.

Alexander McCaig (17:08):

Yeah.

Jason Rigby (17:09):

That's what I'm saying.

Alexander McCaig (17:09):

What if I'm a mathematician? I like doing my math as graffiti on walls?

Jason Rigby (17:13):

Yes.

Alexander McCaig (17:14):

What if I like Euclidean geometry?

Jason Rigby (17:16):

Well, I mean, a pentagram is also used for protection back in the-

Alexander McCaig (17:18):

Yeah I know.

Jason Rigby (17:19):

People used it as protection.

Alexander McCaig (17:20):

It was a symbol of protection-

Jason Rigby (17:20):

Right.

Alexander McCaig (17:21):

... and then if you invert it, it becomes more of a satanic symbol.

Jason Rigby (17:23):

Yeah.

Alexander McCaig (17:24):

But that's the contention, the idea around it.

Jason Rigby (17:26):

But that's a Satanist putting an identity to it. And then we turned around and was like the whole, we knew what happened here in the United States with witches back in the 1700s, the Puritan era. They were like, "You look like a witch. You have a bigger nose. Come with me."

Alexander McCaig (17:39):

Yeah. Come on. Let's see if you float. You know what I mean? You fireproof? No. So, if you look at that, say the person-

Jason Rigby (17:47):

No, this is funny. [crosstalk 00:17:47] Did you hear about the guy who had a heart attack? He was overweight?

Alexander McCaig (17:49):

This is funny? And he had a heart attack?

Jason Rigby (17:51):

No, no, no. This is funny.

Alexander McCaig (17:51):

No, all right. Go for it.

Jason Rigby (17:52):

No, no, no. So this guy was walking on a path next to this lady and the lady walked past him.

Alexander McCaig (17:57):

Yeah.

Jason Rigby (17:57):

This is Puritanical. You know, this is the great history of the United States, guys.

Alexander McCaig (18:01):

Okay.

Jason Rigby (18:01):

And when he was walking past her, he started to have a heart attack.

Alexander McCaig (18:05):

Okay.

Jason Rigby (18:06):

And then he dropped dead. So the town said she killed this guy-

Alexander McCaig (18:12):

Just because she was in the wrong place at the wrong time?

Jason Rigby (18:13):

Not that he was obese-

Alexander McCaig (18:15):

But what's the symbology?

Jason Rigby (18:17):

... and drank beer a lot.

Alexander McCaig (18:17):

But the symbology of her being there as a woman at that time-

Jason Rigby (18:21):

Yes. And there being a witness of it.

Alexander McCaig (18:23):

... then immediately classifies that woman as a witch.

Jason Rigby (18:25):

Yes.

Alexander McCaig (18:26):

Are you kidding me? She's like, "I don't even identify with witches."

Jason Rigby (18:30):

Yeah, exactly.

Alexander McCaig (18:31):

But you're saying it identifies to me. So when you look at that subjective stance, it becomes quite difficult. And so say you take that tonal, again, representation of the phone or electrical pulses.

Jason Rigby (18:38):

Yes.

Alexander McCaig (18:39):

Say I explain it to you very, very well in sign language because I'm deaf. So when I received those tones, I don't hear anything. So how can it possibly mean the same thing even though it's been explained, but received in a different manner? So even in the subjective sense, regardless-

Jason Rigby (18:57):

Yes.

Alexander McCaig (18:57):

... you can't fundamentally put the AI on that basis for interpreting the data of a symbol subjectively, because of the objective nature of what is actually occurring. And so that makes me wonder, is the information that's being transmitted something that is of an objective nature, something that is truly truthful and beneficial? Because when the subjective aspect comes into it, that's when things become difficult. When you start to render the truth in a different format, or start to say, "This is how I view the world." But if it steps away from true experience and observation of what's really going on in these natural laws, well, then you're like, "Eh, I'm not really sure about that. That doesn't make much sense. I am not able to interpret it in that same exact way," because your subjective nature is twisting the view of the truth or how I'm supposed to receive this symbol or electronic tone, rather than me looking at it objectively, whether I'm deaf, blind, happy, completely unable to think, regardless. We all just recognize that is what it is standing on its own, giving off that same sort of information.

Jason Rigby (19:57):

Yeah. And it depends on the environment that the person's in. And we'll use, I think there's a great example, Nike, the swoosh.

Alexander McCaig (20:04):

Yeah.

Jason Rigby (20:04):

So if you're in the United States, you're like, "I need to not be lazy. I need to eat well. 'Just do it' is their motto."

Alexander McCaig (20:12):

Yeah.

Jason Rigby (20:12):

"And I need to put these shoes on and go to the gym. And they're cool and I like them." If you're in a third world country, you may not be able to buy a pair of Nike's because they're not available, but you're going to idolize people that are wearing them and someday you hope that you can get to that level to where you could purchase a pair of those shoes. You see what I'm saying? But do you see how-

Alexander McCaig (20:34):

So you've got to make sure you're putting your AI in the proper environment then.

Jason Rigby (20:37):

Yes. Yes. That's where I was getting at. Yeah.

Alexander McCaig (20:38):

Because if AI is experiencing the data of this symbol and needs to interpret that data, is the experience in which it is interpreting it-

Jason Rigby (20:48):

Yes.

Alexander McCaig (20:49):

... one that is truly applicable to all experiences? But if it becomes subjective, then you can't really say that the AI has given you a proper output-

Jason Rigby (20:57):

Right.

Alexander McCaig (20:57):

... because it actually does not account for the totality of experiences because when something is subjective, it then becomes naturally limited.

Jason Rigby (21:04):

Yeah. And they said this, "In characterizing symbols it is often not enough to make reference to a fixed system into which they happen to be placed, or to treat symbols as isolated, independent entities. Instead, we must consider the bi-directional influences caused by a symbol's placement in a broader symbolic system."

Alexander McCaig (21:21):

So, well, that becomes a problem. Then you would need an infinite number of inputs to understand all the different subjective readings of a symbol and how it could possibly be represented in someone's mind or in a society.

Jason Rigby (21:32):

You would basically have to break up the world into all different categories. In Australia, how is this symbol viewed?

Alexander McCaig (21:37):

You have to break everyone's mind.

Jason Rigby (21:38):

Yeah. Yeah.

Alexander McCaig (21:40):

First of all, you don't have the computational power and it's a very inefficient way of understanding how a symbol should be interpreted.
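The blow-up Alexander is describing can be made concrete with a back-of-the-envelope sketch. The category counts below are illustrative assumptions, not real measurements, but they show how a per-context interpretation table multiplies out while an objective grounding stays at one record per symbol:

```python
# Sketch of the combinatorial blow-up in modeling a symbol subjectively.
# All category counts are illustrative assumptions, not real figures.
from math import prod

# Dimensions along which a symbol's meaning could vary subjectively.
context_dimensions = {
    "countries": 195,
    "languages": 7_000,
    "generations": 6,              # e.g. Boomer vs. Millennial
    "daily_new_people": 385_000,   # rough global births per day
}

# One interpretation record per combination of contexts, per symbol.
interpretations_per_symbol = prod(context_dimensions.values())
print(f"{interpretations_per_symbol:,} context combinations per symbol")

# An objective approach stores one grounding per symbol instead.
objective_records_per_symbol = 1
print(f"vs. {objective_records_per_symbol} record per symbol objectively")
```

Even with only four coarse dimensions, the subjective table runs into the trillions of entries per symbol, and grows again every day as new people are born.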

Jason Rigby (21:44):

Yes.

Alexander McCaig (21:44):

So just on that basis, what an outlandish, ridiculous statement for someone who probably doesn't even work with computers to make. Could you imagine putting something like that together, the feat of strength that would be required just to fundamentally take on that undertaking? And then, "Oh, here's your problem. A new baby has been born. I've got to go capture the thought of that baby and how the symbol is represented to it." Hundreds of thousands of babies are born every day.

Jason Rigby (22:10):

Yeah.

Alexander McCaig (22:10):

You would never be able to keep up.

Jason Rigby (22:11):

No, you couldn't. I mean, it's infinite. What does that symbol look like to a Boomer compared to a Millennial?

Alexander McCaig (22:15):

The AI would just cripple itself.

Jason Rigby (22:16):

Yeah, we don't have enough computational power.

Alexander McCaig (22:18):

And if the AI took a deductive pattern, it would realize that there has to be an objective stance: that regardless of the experience in which the symbol is received, it still stands on its own.

Jason Rigby (22:28):

But it can take vast amounts of data and bring it together. This is what I love about AI: the potential that it can take massive amounts of information and create unity.

Alexander McCaig (22:40):

Yeah. That's the cool part.

Jason Rigby (22:41):

Yeah.

Alexander McCaig (22:42):

But that's if you take that deductive approach and look for the interconnected line between all of us that threads us all together, that golden thread.

Jason Rigby (22:49):

Because to me, the knowledge and the wisdom of the symbol would be: how does it unify?

Alexander McCaig (22:54):

That's exactly right. Just think about the symbol of millions of people signed up on TARTLE. It's a symbol that we're here to unify.

Jason Rigby (23:00):

Yes.

Alexander McCaig (23:00):

We're here to share and understand our experiences together. We're here to support those systems that are trying to understand how something is represented and understood. We're here to stand for that. And we can all come together with those inputs to better enliven our world and how we look at it, and use a tool like a computer to help us analyze all of this more efficiently.

Jason Rigby (23:21):

So, let me ask you this, and we'll close on this, because we're going heavy on this. If I'm a think tank and I'm in Switzerland or Canada or the United States, and I'm sitting here and I'm saying, "I need to do a study. I have this group of people and I want to do a study, but I want to ask them questions about symbols."

Alexander McCaig (23:38):

Yeah.

Jason Rigby (23:38):

"And I want to collect that information and that data." How could they use TARTLE to do that?

Alexander McCaig (23:44):

This is great. So you would go on TARTLE. You'd sign up as a buyer, and then you'd go to generate your first data packet. Maybe you upload a picture of one symbol in this data packet and ask: "Where are you located? Did you grow up poor? How do you feel about yourself? How do you feel about this symbol? And does it mean anything to you culturally?"

Alexander McCaig (24:03):

You could ask those things and kind of test those waters. So now you're receiving human input to help examine this theory that everything should be analyzed in a subjective manner for the AI. And you can take all of those things; you can buy that data packet from millions of people across the globe. Now you've got some real source material to understand, from the people who actually interpret symbols and use them every day, what they mean to them, rather than me subjectively thinking, "This is how everybody should be looking at it."
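The data-packet flow Alexander walks through might look something like this in code. TARTLE's actual buyer API isn't shown in the transcript, so the function and every field name here are hypothetical, used only to illustrate the survey structure he describes:

```python
# Hypothetical sketch of the symbol-survey "data packet" described above.
# TARTLE's real buyer API is not public here; all names are invented
# for illustration only.

def build_symbol_survey_packet(symbol_image_url: str) -> dict:
    """Assemble a survey about one symbol, mirroring the questions above."""
    return {
        "attachment": symbol_image_url,  # picture of the symbol under study
        "questions": [
            "Where are you located?",
            "Did you grow up poor?",
            "How do you feel about yourself?",
            "How do you feel about this symbol?",
            "Does it mean anything to you culturally?",
        ],
    }

packet = build_symbol_survey_packet("https://example.com/swoosh.png")
print(len(packet["questions"]))
```

The point of the structure is that the same fixed packet goes out to every seller, so the subjective variation shows up in the answers that come back, not in the question set itself.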

Jason Rigby (24:33):

No, I love that. That's awesome. Well, Alex, I want to encourage everyone: whether you want to sell your data, whether you want to earn some money, whether you want to change your world and help with our Big 7, or whether you want to purchase data, that can all be accomplished at the marketplace. What's the website?

Alexander McCaig (24:50):

Oh, it's called TARTLE.co.

Speaker 1 (24:50):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and the source data defines the path. What's your data worth?