Here we are at the end of our extended series on AI and symbols, specifically how best to train an AI to recognize and interpret them. If you’ve stuck with us the whole way, all we can say is, “skol!” We’ve covered a lot of ground, but we’ve always been circling one particular question: should we train the AI based on localized customs and behaviors, or strive for a more universal approach that would yield meaningful results regardless of place or time? The latter would of course be the obvious choice. However, it’s complicated by the fact that not all symbols are the same. Certain symbols have meanings that can be at least partially inferred. Take sounds: based on tone and volume, a listener can infer a great deal about the intent behind a sound regardless of how familiar it is. Others, like numbers and shapes, are clear across cultures. Even a culture that for some reason isn’t familiar with the way most people write numbers could be acquainted with it easily enough, which would be handy if you found a lost tribe of Romans in the Alps still using Roman numerals.
Other symbols are a bit more complicated. Take street signs: anything other than a simple arrow will take a little while to decipher, and a traffic light in particular doesn’t fully translate on its own to another culture. Another example would be something like a flag or a religious symbol. These are much harder to understand for someone who isn’t familiar with their history and the communities they represent. They may represent universal truths, but they can’t be broken down and reduced the way a simple arrow or a number can. Indeed, not all symbols are the same.
With many simple symbols, it’s possible to train an AI to recognize them, since they are largely mathematical expressions. That’s easy for the machine, which runs on ones and zeroes anyway; it’s operating in its element. More complicated things like a traffic light or a stop sign can be learned too, though it will take a bit longer. The AI will have to observe how people react to them in order to discern their meaning. That is, if you want it to learn rather than simply being programmed.

Those more complicated symbols, the ones that at different points have inspired the best and the worst in humanity, are another matter entirely. Their meaning is inextricably linked with the cultures they represent. Such a symbol may well mean something definite and timeless, but no amount of mathematical analysis will uncover it. You and the AI would have to study the people and the beliefs that look to those symbols, and that is a much more complicated process. Yet, as complicated as it is, it’s something people do almost intuitively. Even with something unfamiliar, we can often look for similarities, elements that relate to something we already know. Of course, we might end up completely wrong, but self-correction, and awareness of the need for it, is something people are built for. It’s practically the basis of all scientific and philosophical inquiry.
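To make the easy end of that spectrum concrete, here is a minimal sketch, not anyone’s production pipeline, of training a classifier on simple symbols. It assumes Python with scikit-learn and uses its built-in handwritten-digit images as a stand-in for “simple” symbols, since recognizing them really does reduce to arithmetic over pixel values.

```python
# Minimal sketch (assumes scikit-learn is installed): recognizing "simple"
# symbols -- here, handwritten digits -- is just pattern matching over numbers.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

digits = load_digits()  # 8x8 grayscale images of the digits 0-9

# Hold out a quarter of the images to check the model on symbols it never saw.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A plain linear classifier is enough for symbols this regular.
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print("accuracy on unseen digits:", accuracy_score(y_test, model.predict(X_test)))
```

Nothing in that sketch gets you any closer to a flag or a religious emblem, though. The model only learns the shape of the symbol, not what a community means by it.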
So the question remains: can we train an AI to do that? Can we train an AI to understand that it might not fully understand? Can we teach it to keep looking for better answers after it has made an initial evaluation? Right now, it doesn’t look like it. The human mind is far more complex than a mere biological computer; there are processes at work that we haven’t begun to fathom. Will it one day be possible to fathom them and translate that into something a machine can work with? Possibly. But one thing is certain: no AI will be able to fully understand complex symbols until it can understand itself.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.