Tartle Best Data Marketplace

AI and Symbols Pt. 4

In order to take AI from a mere program that is very good within a narrow sphere and elevate it to the level of actual intelligence, we need to find a way to teach it how to recognize and interpret symbols. One of the main topics of discussion in this series has been whether we should take a subjective, interpretation-based approach or an objective, universal one to training the AI in the nature of symbols. Before we delve further into this discussion, though, we should take a step back and recognize the enormity of the task. To do that, we need to acknowledge that there is disagreement over the very nature of symbols in the first place. What makes a symbol a symbol?

For the purposes of our discussion, we should briefly look at the definition given by the authors of Symbolic Behaviour in Artificial Intelligence, the paper that has been the basis of this series. They draw in part on a definition given by Newell and Simon describing a symbol as “a series of interrelated physical patterns that can designate any expression whatsoever”. All right. What does that mean?

At first glance, one could read it as meaning that any symbol, any “series of interrelated physical patterns”, can literally represent anything. That may or may not be the way Newell and Simon intended it. If it is, we’re not sure how to help them. Obviously, certain patterns can only represent certain things. A statue of an elephant clearly represents an elephant and not a mouse. “But what if you call an elephant a mouse?” says the gadfly in the back. Then it represents something that someone calls a mouse. The point is, it’s very clear what that statue represents, no matter what name you give to the animal.

So, what else might the definition mean? A better reading, one that makes a lot more sense, is that one can use some kind of “interrelated physical pattern” to represent anything. More simply, literally everything can be symbolized. The immensely complicated and intense concept of love is symbolized with a heart. A circle is a circle wherever you go. If anyone in virtually any culture anywhere sees a small disc with a person’s head on it, they know they are looking at some kind of money.

So, what should we use as the definition of a symbol? A simple phrasing would be that a symbol is anything that represents something else, whether that something else is a sound, an action, a thing, or a concept. Now that we have that out of the way, let’s get back to the idea of an objective interpretation of symbols.

We’ve already talked about the fact that the paper’s authors favor an interpretation-based approach to training AI. They correctly identify that symbols get much of their meaning from the culture in which they originate. Based on this, would it be fair to criticize an objective approach as impossible, or anemic at best? Impossible? No. Anemic? Perhaps.

However, consider an opera. Operas are still very often performed in Latin or German. Even if one is in your local language, the singing will often be so stylized that you may not be able to recognize anything. Yet, despite not understanding all the symbols being presented to you, you still pick up something. You can pick up on the tone of the music, the melodies, and the pitch of the singer’s voice, all of which convey meaning to the listener. In short, there are universal aspects of the symbols being presented that transcend particular cultures.

The same can be done with a variety of symbols. When we see a statue, we know what particular thing it represents. From the expression in an illustration, we can tell something of the mood of the character presented.

Because this universal element is identifiable, it seems clear that we can and should explore a universal basis for teaching an AI how to properly interpret symbols.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.