Tartle Best Data Marketplace
June 25, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 2

BY: TARTLE

AI and Symbols Pt. 2

Last time we talked about Artificial Intelligence (AI) and the difficulty it has recognizing the significance and meaning of symbols. We provided a rough outline of some of those difficulties and how they form the chief obstacle to making truly intelligent machines, as opposed to machines that are merely good at the one or two things they were designed for. In the next few pieces we'll go deeper, exploring the many kinds of symbols and how people take their recognition and use for granted, rarely appreciating all the complex processes involved in doing so.

Let’s begin with language. Any language is a series of sounds and/or written words, each of which is a symbol that stands for a thing, action, or concept. Even that first fact, that languages are usually both spoken and written, hints at the great deal of complexity involved. If we are fluent in a given language, we can easily hear a series of sounds and understand the written words that correspond to them. We further understand the thing, action, or concept that they represent. However, anyone just learning a new language will appreciate how difficult it is to wrap his head around all of these relationships, especially when trying to match the written language with the proper pronunciations. Just ask an English speaker learning French. Another difficulty, even among people, is that different cultures and their languages have concepts that don’t translate perfectly into other languages. All of these are reasons why Google Translate often produces such entertaining results.

Yet, all of that is in some ways the easy part. When we are trying to interpret the audible and written symbols of a language, it is relatively straightforward compared to trying to interpret other kinds of symbols. With a language, there are still meanings that can be checked using tools like dictionaries. What about paintings? On one level, a still life seems very simple. A bowl of fruit symbolizes a bowl of fruit. Yet, art is rarely so superficial. Very often, arguably always, the artist imbues his work with additional meaning. That meaning can be intended or not, something that makes the interpretation of art such a contentious and interesting subject. Many times, someone will look at a painting and see things in it that the artist could never have anticipated, yet are there nonetheless. 

Let’s not forget that the meaning of symbols can change over time as well. Perhaps the most famous example of this phenomenon is the swastika. Once, it was a fairly obscure symbol of divinity used in a variety of eastern religions. However, virtually no one can see it now and not think of the worst kinds of violence and bigotry in human history. The swastika has become the flag for fascism in the western mind since WWII, quite the change from its original use.

Another example from WWII shows how a single image can symbolize a great many things. The iconic picture of the raising of the US flag at Iwo Jima symbolizes victory, liberation, camaraderie and many other things besides. While an AI might pick up on some of that, a full appreciation of everything symbolized in that one image is impossible without some historical knowledge of the actual context behind it. 

While it might seem from this that the interpretation of symbols is a hopelessly subjective enterprise, the truth is that symbols still have a genuinely objective meaning, though it is largely dependent on context. The swastika does have an objective meaning, but one must be aware of it. A painting of an apple can symbolize many things, but it definitely symbolizes an apple and definitely does not symbolize an orange. The difficulty in determining the meaning of many different symbols lies precisely in sifting through the many subjective meanings of things in order to get to the objective, a task that AI has thus far not been up to. We’ll see later in this series whether there is any hope of overcoming this hurdle.
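To make the point concrete, the gap can be sketched in a few lines of code. Everything below is a hypothetical illustration (the lookup tables and labels are ours, not taken from any real model): a context-free lookup behaves like a naive classifier that has collapsed a symbol onto its single most frequent modern meaning, while a context-aware lookup keeps the meaning tied to its setting, which is closer to what genuine symbol interpretation requires.

```python
from typing import Optional

# Hypothetical "naive classifier": one label per symbol, learned from
# frequency in modern data, with all historical context collapsed away.
CONTEXT_FREE = {
    "swastika": "fascism",
}

# Hypothetical context-aware table: the same symbol carries different
# objective meanings depending on where and when it appears.
CONTEXTUAL = {
    ("swastika", "eastern religious art"): "divinity / good fortune",
    ("swastika", "1930s-40s Europe"): "fascism",
}

def interpret(symbol: str, context: Optional[str] = None) -> str:
    """Return the meaning for (symbol, context) if known; otherwise
    fall back to the context-free label, as a naive model would."""
    if context is not None and (symbol, context) in CONTEXTUAL:
        return CONTEXTUAL[(symbol, context)]
    return CONTEXT_FREE.get(symbol, "unknown")

print(interpret("swastika"))                           # fascism
print(interpret("swastika", "eastern religious art"))  # divinity / good fortune
```

The toy makes the hard problem visible by omission: the hand-built `CONTEXTUAL` table is exactly the thing an AI would have to learn for itself, across every symbol and every context, rather than have enumerated for it.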

What’s your data worth?


Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast, with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path.

Alexander McCaig (00:20):

Welcome back, everybody, to this little mini series on symbols and artificial intelligence here on TARTLE Cast. It was too much to put into one episode, so we wanted to break this up so that it's a little bit more palatable here.

Jason Rigby (00:39):

Yeah, and I kind of want to start us off, Alex, where we were continuing, but I want to get into understanding, I think I want to go a little bit more macro, understanding evolution and symbols. So if somebody is watching this for the very first time, this episode and they didn't go to the last one, whenever you look at as we as a species have evolved, from the very beginning of lighting a fire or whatever it is, I mean, we have symbols on caves.

Jason Rigby (01:06):

So, symbols have been a part of us. I say that symbols evolve as we evolve. Everything that we do every day, we're interacting with symbols. So, it's almost a, I don't want to say it's a separate language, but it is an integral part of humanity.

Alexander McCaig (01:25):

You're on to something very strong here. So if we go back to the oldest cave paintings we find in France, there's no written language. It's strictly symbolically based. So, I think the root of how we're looking at symbols and artificial intelligence in this paper, which we'll talk about in a second, is that if we consider human evolution, we started with symbols first, and that symbol was representative of these acts that happened in life.

Alexander McCaig (01:53):

So regardless of the person that came in the cave, it could be understood what was being said. But if you wrote it in a specific language, it'd be quite difficult for someone to come in and even figure out what that language might be. Even if that language died, how do I actually bring that language back?

Jason Rigby (02:09):

Yes.

Alexander McCaig (02:09):

We did an episode on that with MIT and bringing back these dead languages. But the fact here is that when I look at this transition is that the root actually of linguistics starts with symbols. Symbols are just a way that we have decided to turn into words so that we can communicate in a different format that is more audible.

Jason Rigby (02:31):

Yes.

Alexander McCaig (02:32):

Because I can't audibly sound out a symbol. I mean, I suppose I could, if I really wanted to think about my own sort of alien language that I wanted to create, but the base of our evolution in our existence started with those symbols and how we interpret threats and other things. Like you said, as we begin to evolve, our interpretation of those symbols change also. Then, we found efficiency in our communication by thinking that language was the thing that was best represented to pass off a thought or an idea, or an understanding collectively of those thoughts or ideas. But what we find is that words themselves, even the Anglo-sized versions of many thoughts or ideas, they lack a lot of value in actually explaining something's meaning.

Jason Rigby (03:14):

Well, I mean, even look at how beautiful and symbolic Japanese writing is.

Alexander McCaig (03:19):

Yeah, kanji.

Jason Rigby (03:22):

I mean, it's like symbol, symbol, symbol, symbol, symbol. Each of them stand in their own, even though it's an alphabet.

Alexander McCaig (03:27):

They stand on their own.

Jason Rigby (03:28):

Right.

Alexander McCaig (03:28):

But together, you have something quite special that's actually happening. So, it's part artwork, actually part linguistics. Now when I think about going into an art museum, okay, you can tell me what this painting was about. You can write it down in text underneath it on a little plaque and tell me who it was by and what year it was about that time, but I'm going to have my own interpretation, right, of a Monet, right? Some sort of impressionist art, or a Van Gogh or whatever it might be, and I'm going to say that this means and feels something different to me. Even though we all collectively understand that this is a piece of art, my interpretation of it is different. When I begin to communicate a message to you, Jason, the way you receive it is through your own interpretation.

Jason Rigby (04:16):

Right.

Alexander McCaig (04:17):

Now, if we understand the value in that sort of sharing of knowledge and the evolution of human experience, it would make more sense that it would behoove us to actually do things that are always more representative of symbols rather than representative of the word itself to describe it.

Jason Rigby (04:35):

Yeah, and it's so easy. I mean, now when you look at the global aspect of things, how many flags do we have?

Alexander McCaig (04:43):

Flags, right?

Jason Rigby (04:44):

Yeah.

Alexander McCaig (04:44):

But you got to consider something about the flag. It's the meaning of the flag. So, all flags are essentially the same. It's a cotton linen blend and it has color and image on it. So, we understand that the structure and it being on a pole is symbolic. Then, there's another representation of it. The fact that because it has a certain structure of colors and where it's geographically placed is representative of something else.

Jason Rigby (05:10):

Right. Yeah. I mean, you can look at Iwo Jima, perfect example of geographically placed.

Alexander McCaig (05:14):

Yeah.

Jason Rigby (05:14):

Then you can look at Braveheart and watch all the flags and the clan things.

Alexander McCaig (05:19):

Right, all the tarps and everything.

Jason Rigby (05:20):

Their last thing is, as they're dying in battle and giving up themselves for that, they're looking at the flag.

Alexander McCaig (05:26):

For that symbol.

Jason Rigby (05:27):

Right.

Alexander McCaig (05:27):

But each one of those gentlemen that are dying, and I say gentlemen because at that time it was strictly men that were in battle, each one of them had their own interpretation of how they felt about Scotland, how they felt about going to war and being at Iwo Jima.

Jason Rigby (05:42):

Right.

Alexander McCaig (05:42):

That was very specific to each person, okay?

Jason Rigby (05:44):

But I mean, you can look on the other Axis side with Germany. I mean, that symbol, unfortunately, they ruined the symbol because it was a beautiful-

Alexander McCaig (05:55):

The swastika, right?

Jason Rigby (05:56):

Yeah, it was a beautiful symbol from before. Which-

Alexander McCaig (05:58):

Yeah, so it was a Buddhist thing.

Jason Rigby (06:00):

... have you seen all those symbols from the Native Americans here in New Mexico?

Alexander McCaig (06:03):

No.

Jason Rigby (06:04):

There's like three or four tribes that have them and drew them on rocks here-

Alexander McCaig (06:10):

Of the petroglyphs?

Jason Rigby (06:11):

Yes, that had that symbol on it, and was done so long ago. He just took it, and then unfortunately ruined it.

Alexander McCaig (06:18):

Yeah, and people do that. So then, that really begs the question. If I input an algorithm into the machine learning for it to root itself in, am I destroying all future interpretation of what that symbol actually means?

Jason Rigby (06:37):

It's ruining the process for it to put a priority to... I mean, whenever they saw that symbol, how much did it bring everyone together in unity, whatever symbol it may be that inspires people?

Alexander McCaig (06:53):

Yeah.

Jason Rigby (06:53):

But you can put the peace symbol in, which we have a whole story with that, but the symbol in the sixties and seventies that brought, "Peace, man. Peace, love, joy. Yeah, yeah."

Alexander McCaig (07:03):

Yeah. It's interesting, when you-

Jason Rigby (07:04):

But do you see what I'm saying? We build tribes and clans, and even to the point of us, a machine would not understand this. Like, "Hey, this person's willing to die for symbol?" Like, "You're willing to die for that? That doesn't make sense to me."

Alexander McCaig (07:17):

But you said tribe and clan.

Jason Rigby (07:19):

Right.

Alexander McCaig (07:20):

That's something that's not unifying. So then essentially, the meaning of what that thing is collectively could not be truthful if it creates a separation of groups.

Jason Rigby (07:28):

Yes, yeah, but-

Alexander McCaig (07:29):

But the meaning would have to apply to everyone everywhere.

Jason Rigby (07:34):

Right.

Alexander McCaig (07:35):

That would truly define-

Jason Rigby (07:36):

That would be union, yeah, yeah.

Alexander McCaig (07:36):

... in an objective sense-

Jason Rigby (07:38):

Yeah, exactly.

Alexander McCaig (07:39):

... something that is truly universal in symbols.

Jason Rigby (07:42):

But, and you know this is as well as I do, having polarity, people have a choice. When you're confronted with a symbol, you have to say, "I like that, I don't like that. What is that?"

Alexander McCaig (07:53):

But that's the subjective part.

Jason Rigby (07:55):

My upbringing and all my pastors coming together.

Alexander McCaig (08:02):

Oh. So, you're being dogmatic towards it?

Jason Rigby (08:02):

Yes. We'd say, "Okay. Yeah, that's beautiful," or "That's ugly," or, "What is the symbol associated with?" You see what I'm saying?

Alexander McCaig (08:13):

Right. So now, you're taking the symbol and you're creating another symbol of it.

Jason Rigby (08:18):

Right. Yes.

Alexander McCaig (08:18):

So, how am I linking that symbol to other things that actually sit outside of that symbol, that really may not be representative of what that symbol means? When people think of a swastika, their first thought is Nazis, right?

Jason Rigby (08:30):

Right, right.

Alexander McCaig (08:31):

But the real representation of it was something quite mathematical and unifying to a specific subset of a religious group in an area way away from Germany.

Jason Rigby (08:39):

Yes. Yeah, yeah, exactly.

Alexander McCaig (08:41):

So the question is then, if we look at that and we're designing symbols and we want to actually put that idea of interpretation of symbol into AI for its own learning and how to look at that symbol and say, "Oh, what does this mean?" then you would have to look at it strictly from a universal stance of cause and effect within that symbol itself.

Jason Rigby (09:03):

Yes, yeah.

Alexander McCaig (09:05):

Line is line. Line is line must be accepted everywhere in the laws of physics, in the mind of a human being, everywhere. It cannot be something that is subjective, it has to be something that stands on its own. The creation of symbols can be a combination of things that stand on their own.

Alexander McCaig (09:23):

So then, the representation of the combination of these things, the sym plus the bol, right?

Jason Rigby (09:28):

Right.

Alexander McCaig (09:29):

Then defines a full characteristic or linkage of standalone items in an objective sense that define a total picture of something outside of any subjectivity of human representation. So that whether a machine looking at it, spitting out an output, says it's one thing, or a human looking at it, they both come to the same exact understanding. Then, you've removed all bias out of that dogmatism and subjective referential experience that a human being may have.

Jason Rigby (09:58):

We're going to get into this article, but it's so tricky, dude. This is an undertaking. I was thinking when you were talking about, you said lines. So we have a blue line, and we have this blue line thing with supporting cops, and then you have Black Lives Matter, which is a group that is trying to say, "Hey, there's racism and there's horrific things that are happening," but you put a Black Lives Matter symbol next to a blue line, right now in 2020 and 2021, we see that as being something that's polarizing, but-

Alexander McCaig (10:32):

But that's changed with time, right?

Jason Rigby (10:33):

But yes, exactly. That's where...

Alexander McCaig (10:34):

So, here's where you're going. So if time is changing, the interpretation of it is changing?

Jason Rigby (10:40):

Yes.

Alexander McCaig (10:40):

That means that it cannot altogether, in the infinite body of time within our physical realm, decide that as something truthful. The representation, the idea of it is not a truthful representation of what it is objectively. It is a subjective representation that changes with time. If you do things strictly on subjective interpretations of it-

Jason Rigby (11:00):

Yes.

Alexander McCaig (11:00):

... then you're moving further away from a truthful idea and an evolutionary stance of how to interpret a symbol.

Jason Rigby (11:06):

Yeah, and like you said, you have to look externally. If I'm at the airport and I walk in the door, and there's two yellow lines on the concrete, I'm automatically going to associate the lady at the desk, with me being in the facility of an airport, the location, and then I'm going to walk and go right in between those yellow lines, even subconsciously.

Alexander McCaig (11:29):

So, what have you done? I've taken this disparate point, and this one over here, and this one, and I brought them all together to form an idea.

Jason Rigby (11:36):

Yes. How does it do that? AI's like, "I can interpret languages, machine learning, that's easy. But now you're having me try to..."

Alexander McCaig (11:44):

Look at a symbol and understand how to interpret the symbol. My contention, and this is something that I think is essentially missed with this article, and again, we had to front load this whole thing.

Jason Rigby (11:54):

Yes. Yeah, yeah.

Alexander McCaig (11:55):

Right?

Jason Rigby (11:56):

Yeah. We haven't gotten to the article yet, guys, which is an amazing article.

Alexander McCaig (11:58):

Is that when we begin to speak and our language evolves, and as we continue to evolve, we have to make sure that we step on a platform that is truthful, a platform that is timeless, a platform that is truly rooted in some sort of universal law. So when you look at symbols, something that is so incredibly important, you need to make sure that the design of that symbol is truly universal, that each piece that brings it together can stand alone and collectively can stand alone.

Alexander McCaig (12:32):

The reason I say that is because if we all have a subjective interpretation, then it moves us away from the logical linear path of our own evolution. Rather, it drives us off into these areas where I'm going to interpret something one way, and I'm going to be in my own specific group and me on the other.

Jason Rigby (12:47):

Well, we have that now.

Alexander McCaig (12:48):

I know, and it creates that sort of dis-unification.

Jason Rigby (12:52):

Right.

Alexander McCaig (12:52):

We cannot truly learn from one another because we don't stand on the same firm, rooted ground of understanding for what something actually means.

Jason Rigby (12:59):

Yes.

Alexander McCaig (13:00):

If we can remove all the bias out of something, then we can begin to understand one another. If you interpret symbols as language, as if I begin to talk to you and you interpret those thoughts that I'm saying to some sort of visual picture in your mind, then I would want to make sure that the words that I use, they're unifying words that are timeless, words that are truthful, and they're not really subjective. They will all be objective. Because in the objective nature of it leads us towards something that is more truthful, so that when we do act, we act in the best interest for all and the best interest for ourselves at the same time. But if I live in a world that's completely subjective, it separates groups and causes me to live in a world that is, I guess I would call it, service to self at that point.

Jason Rigby (13:40):

Yeah. Whenever you look at this article, and we can kind of broach it for a few minutes and then we'll have to do another episode.

Alexander McCaig (13:49):

Yeah.

Jason Rigby (13:49):

That's fine. It's from DeepMind, it's called Symbolic Behavior in Artificial Intelligence. This is what we've been saying, but I highlighted this because I want you to speak to this, "A re-interpretation of what symbols are, how they come to exist, and how a system behaves when it uses them." I want to stop right there. How a system behaves, that's a key word, "behaves."

Alexander McCaig (14:13):

Yeah. Behavior, because behavior can be learned or programmed.

Jason Rigby (14:19):

Right.

Alexander McCaig (14:19):

So when we're looking at behavior in AI, how do we continue to replicate something and then possibly learn from it? Then, you're truly defining an artificial intelligence. But if you look at a symbol and you talk about behavior, is the AI really learning from the symbols? Because we would have to understand our own learning of them first.

Jason Rigby (14:39):

Yes.

Alexander McCaig (14:40):

This is really important here. So, this paper doesn't come to a solve, but it's more of a suggestion of a course of action interpreted off of history that we've had in the past. Here we go, bringing all these disparate points together to symbolically bring them into some formulated idea of what it means to move forward and how a machine should learn from symbols. So, this paper is but a symbol of our idea of how we interpret symbols.

Jason Rigby (15:06):

Yeah, and I like is how they say that, "Symbols as entities."

Alexander McCaig (15:11):

Entities. Remember, an entity, it has its own energy. It's something that needs to be able to stand alone. Right? An entity can do many different things, and we choose how much life we want to give to that entity, material or immaterial. So, that's the thing we're going to kind of dive into. If you wouldn't mind just touching on what the title of the article is and the people that wrote it, and then we'll close out this episode, and then we'll start to get into the paper on the next one.

Jason Rigby (15:36):

Yeah. Symbolic Behavior in Artificial Intelligence. Adam Santoro, Andrew Lampinen, hopefully I pronounced that right, Kory Mathewson, Timothy Lillicrap, and David Raposo. Raposo?

Alexander McCaig (15:49):

Yeah.

Jason Rigby (15:49):

Yeah.

Alexander McCaig (15:50):

So just before we close this out, shout-out to these thinkers. I think it's actually quite fantastic. We do a lot of reviews on white papers and other things around data and even articles. This was the most intellectually stimulating thing I've read in a long time.

Jason Rigby (16:08):

Yeah, that's pretty good. Symbolic Behavior in Artificial Intelligence, you can look it up by DeepMind. Thanks.

Alexander McCaig (16:11):

Right.

Speaker 1 (16:21):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?