Tartle Best Data Marketplace

Preserving Language with Machine Learning

Everyone talks about different species going extinct. And for good reason. Anytime a species goes extinct, something unique and unrepeatable is lost. While most don't stop to think of it, the same is true for languages.

Over the course of human history, a great many languages have been lost to the sands of time. When a language disappears, it takes away more than just a few words or sounds; it takes with it a way of thinking, of seeing the world and expressing thoughts about it. When a language is lost, we lose the most important tool for understanding a culture. In fact, you could say that when a language dies, a culture dies with it. That's because every culture has certain concepts or ways of putting thoughts together that are simply lacking in others. Just as an example, German has a feature that lets speakers string multiple words together to create one new word that represents a new concept.

There are numerous old fishing villages in Ireland. In many ways, these villages are the last vestiges of the old Irish language. Not only are there still those who speak the language of their ancestors, but there are also words and concepts unique to each village, words for the different waves and for different tools that might not exist anywhere else.

There are a lot of different ways languages are lost. Sometimes, a language evolves so much that it becomes an entirely new one for all intents and purposes. Just try to read a copy of Beowulf in the original Old English. In the past, it was not unheard of for a conquering power to outlaw the language of their defeated enemy in order to destroy their culture and assimilate them into that of the victor. Other times, the loss of language is a function of trade. As an upstart company or industry moves in, people will adopt the language that opens up the most economic opportunity. Coupled with the fact that shifting economics can do away with the need for certain concepts, it's easy to understand how people might unconsciously let the old words and concepts disappear.

What can be done to preserve languages that are on the verge of being lost? There must be something beyond finding the couple of villages that still speak Gaelic and putting them in an isolated biosphere. What if we could actually use machine learning to help us preserve at-risk languages?

These old words can be collected into databases, like giant digital dictionaries. Not only the words and their meanings but the concepts and histories behind them can be stored in an easily searchable format. Not only that, but (as has been previously discussed here) machine learning is very good at recognizing patterns. As such, it can be used to help fill holes in the language: to determine the meaning of words whose definition no one recalls, or even to point toward whole words that have gone missing. Even better, machine learning can help researchers determine how words were pronounced or how entire sentences might have been put together, and so not only preserve a dying language but resurrect one already dead.
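To make the digital dictionary idea a little more concrete, here is a minimal sketch in Python. The entries, field names, and the simple pattern matching are illustrative assumptions, not an actual preservation system; a real project would pair a far richer database with trained language models.

```python
# Minimal sketch: a searchable lexicon of at-risk words plus a naive
# pattern-based lookup. All entries and fields here are hypothetical.
from difflib import get_close_matches

lexicon = {
    "currach": {"meaning": "small boat of wood and hide", "notes": "recorded in one village"},
    "meascan": {"meaning": "lump or pat of butter", "notes": "recorded in one village"},
}

def lookup(word: str):
    """Return the entry for a word, or the closest recorded spellings."""
    if word in lexicon:
        return lexicon[word]
    # Crude pattern matching stands in for what a trained model could do far
    # better: suggest which recorded word a half-remembered form points to.
    return get_close_matches(word, lexicon.keys(), n=3, cutoff=0.6)

print(lookup("curach"))  # likely -> ['currach']
```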

Why, though? Why is any of this important? Because these languages, these cultures are a part of our past and anyone with a hint of historical knowledge will tell you that if you want to know where we are heading, we need to know where we have been. If we want to preserve anything of our own culture, we had best learn why others disappeared in order to prevent ours from taking the same route. 

What’s your data worth? 

Algorithms and Dead Languages

Here is your fun fact for the day – Napoleon actually broke the Rosetta Stone. Go figure. In a way, it’s a great metaphor. The Rosetta Stone has been an incredible tool for translating multiple languages in the centuries since its discovery, proving itself a valuable aid in helping put back the pieces of many languages that tend to get broken and lost over time. The value though is not merely in being able to translate ancient languages, it’s in all the history that comes with being able to read ancient texts for the first time. Suddenly a whole perspective on historical events opens up, or knowledge of things we could never have known about otherwise is unlocked. Putting an ancient language back together doesn’t just open up words, it opens up literal worlds.

Now, the geniuses over at MIT have come up with another tool that we can use to unlock a few more. A new system developed by the Computer Science and Artificial Intelligence Laboratory (CSAIL) can actually decipher lost languages. Best of all, it doesn't need extensive knowledge of how the lost language compares with already known ones to crack the code. The program can actually figure out on its own how different languages relate to one another.

So, how does that wizardry work? One of the chief insights that makes CSAIL's program possible is the recognition of certain patterns. One of these is that languages only develop in certain ways: spellings can change in some ways but not others, because of how the sounds behind the letters shift. Based on this and other insights, the team was able to develop an algorithm that can pick out a variety of correlations between related words.
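As a rough illustration of that idea (and not a description of the CSAIL system itself), words descended from a common ancestor tend to stay close together under character-level edit distance, so related forms can be scored automatically. The word pairs below are hypothetical examples.

```python
# Toy illustration: if sound and spelling changes are constrained, candidate
# cognates tend to have small edit distances, which an algorithm can exploit.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance computed with a rolling row."""
    row = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, row[0] = row[0], i
        for j, cb in enumerate(b, 1):
            prev, row[j] = row[j], min(row[j] + 1,         # deletion
                                       row[j - 1] + 1,     # insertion
                                       prev + (ca != cb))  # substitution
    return row[-1]

# Hypothetical word pairs from a known language and a candidate relative.
for known, candidate in [("pater", "padre"), ("pater", "kalb"), ("mater", "madre")]:
    print(known, candidate, edit_distance(known, candidate))
```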

Of course, such a thing has to be tested before it can be trusted. If you don't test your language detector, you get bad languages. That's probably how the whole "the Maya said the end of the world would come in 2012" thing started. One intern with a bad translator program took it from, "And then I decided I could stop chiseling the years now. I'm a few centuries ahead," to "the earth will stop completely rotating in 2012." Fortunately, the researchers at MIT were a bit brighter than that. They took their program and tested it against several known languages, and it correctly pointed out the relationships between them and placed them in the proper language families. They are also looking to supplement their work with historical context to help determine the meaning of completely unfamiliar words, similar to what most people do when they come across a word they don't know: they look at the entire sentence and try to figure out the meaning from the surrounding context.

Led by Professor Regina Barzilay, the CSAIL team has developed an incredibly useful tool to help us understand not just the events of times gone by, but the way people thought back then. By better understanding the languages of the past, we can learn why people did what they did. We could gain valuable insight into cultures long dead to us. That knowledge will in turn help us to better understand our past and how we got to where we are. It gets us more information, information straight from the source, or at least closer to it. If TARTLE likes anything in the world, it’s getting information straight from the source. 

After all, that’s what we preach day in and day out around here. Getting our information from the source, minimizing false assumptions and bias when it comes to analyzing information. It’s great to see that same spirit at work in one of the world’s premier research centers and to see it being applied to our past. 

What’s your data worth?

An AI Counselor?

Unless you have been living under a rock for the last few years, you are probably well aware of the growing mental health problem in the Western world today. The causes of this issue are many: lack of purpose, despair over the state of the world, and, playing a bigger role than ever over the last year, the lack of human interaction.

Of course, regardless of the state of lockdowns in your area and the increased suicide rates that go along with them, some groups struggle with suicide more than others.

The Trevor Project recognized that one of these groups is people identifying as LGBTQ. This group has a disproportionately high rate of suicide; whether that stems from rejection by others, their own confusion, or a combination of factors will vary from case to case. The important thing is that the people behind the Trevor Project realized that when there is an adult who lets these young people know they care about them and treats them like important people worthy of respect, there is a 40% drop in instances of suicide.

While the Trevor Project is well intentioned, it is also woefully understaffed for the task. With 1.8 million people contemplating suicide annually, Trevor has only around 600 employees to handle the demand. This has set them on a path to finding new ways to serve those in distress. One of the ways being explored is the use of AI as a counselor.

How can that work? How would someone respond to this? Knowing that they are being put in contact with a machine when what they really want is a person would seem upsetting, and honestly, that intuition is fair. However, there could still be a role for AI in handling some of the basics. Sometimes a person just needs a little encouragement when they call, not a full psychological evaluation. A properly trained AI can help sift through the callers in those first few minutes. If it turns out that the person calling has a more pressing issue than even the best trained AI can hope to deal with, it can then put the caller in touch with a person.

Speaking of training those AIs, the Trevor Project is feeding its collected years of conversations into its programs in order to teach them how to interact with people. By looking at the flow of the conversations and how people respond to various phrases and tones of voice, an AI can be trained to at least handle the more basic issues that might come up.
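A minimal sketch of that kind of triage might look like the following, assuming a simple text classifier trained on labeled past messages. This is not the Trevor Project's actual system, and the example messages and labels are hypothetical; the point is only that anything scored as urgent gets routed straight to a human counselor.

```python
# Toy triage sketch: a classifier trained on hypothetical, hand-labeled
# messages routes urgent contacts to a human and handles routine ones itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "just need someone to talk to about a rough day",
    "feeling a bit lonely tonight",
    "i don't think i can keep going",
    "i have a plan to hurt myself",
]
labels = ["routine", "routine", "urgent", "urgent"]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression())
triage.fit(messages, labels)

def route(message: str) -> str:
    """Send urgent messages to a person; everything else gets a gentle check-in."""
    label = triage.predict([message])[0]
    return "human counselor" if label == "urgent" else "automated check-in"

print(route("a bit lonely, just want to vent"))
```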

There are also those who might have been putting off opening up precisely because they are afraid to talk to a person. There may be issues of judgement and anxiety at play that would actually make the prospect of talking to an AI more enticing. Sometimes, people just need to vent and an AI presents an opportunity to do exactly that.

These AI counselors of course have a far wider application. One will naturally think again of the separation caused by people adhering to lockdown orders around the world. There are some who have literally not seen their loved ones for over a year. That, in addition to the complete disruption of normal life for many, has sent the suicide rate through the roof. We're talking numbers far greater than any hotline can deal with. Or think of Japan. Young people there commit suicide at an alarming rate under the best of conditions, often in response to social pressures to be the best at everything. Given the pressure against being open about how one feels about such things and the country's general acceptance of technology, an AI counselor might actually be preferable for many there.

TARTLE is eager to help in any way to develop these kinds of projects so that more people can be helped. That’s why we are asking you to sign up and share your data and experiences with just these kinds of endeavors. It’s one small way we can contribute to getting people the help they need. 

What’s your data worth?

Facial Recognition and Consent

Facial recognition is quickly becoming a common tool in many aspects of life. It’s being used in stores to recognize customers as soon as they walk through the door. This can then feed back into Facebook and other social media in order to send you ads for the store. That of course gets fed into other algorithms so that you will be sent ads for similar stores. 

Another increasingly common use of facial recognition software is in device security. Phones, tablets, and PCs are now often unlocked by scanning the user’s face. If you don’t think your face is getting stored by Google, Apple and others to keep for some undisclosed purpose, then I have some swamp land on Tatooine I’d like to sell you. 

Then of course there is the security use of this software. You may have noticed cameras popping up here and there in a city near you. They’ve been in some places like Washington D.C. and London for years. These cameras constantly scan and record activity. Initially, this would have simply been to record any criminal activity so that the perpetrators could be swiftly apprehended. However, with facial recognition, they are constantly scanning faces in the crowd, looking for criminals. 

You might ask why that’s wrong. After all, don’t we want criminals apprehended? Of course we do. However, it should not come at the price of being treated as a criminal without having actually done anything. How many people were asked if they wanted cameras everywhere recording their every movement?

Come to think of it, how many people were asked if they wanted any of these new developments? Okay, when it comes to screen unlocking, it's fair to say that people agree to it when they buy the device and select their security preferences. But the rest of it? How many of us really want to be fed a bunch of ads just because we walked into the local GAP? Or even be bothered with a pop-up asking us to opt in or out? And why does anyone think we would all like to be scanned to see whether or not we are wanted for any crimes? How hard would it be for that kind of technology to be used to locate not just criminals but people the state does not approve of? Perhaps the most important question to ask is how: how do we find ourselves in a situation in which we even have to worry about the misapplication of this kind of technology?

There are too many reasons to explore here. However, one of the big ones is the simple fact that we have a hard time not doing something once we realize that we can, or even that we might be able to do a certain thing. Or to paraphrase Dr. Malcolm in Jurassic Park, "we are often so concerned with whether or not we can that we never stop to wonder if we should." We develop a new technology and, before we've even stopped to consider the implications, we are rushing ahead with new applications. Just think of nuclear technology. It has enormous potential for providing energy to the world but was first turned into a bomb. That tendency to leap before we look also manifests itself in the form of various justifications for whatever we are doing. For example, certain 'ethicists' openly wonder if consent is really necessary if people are being spied on without knowing about it: 'If they don't know, does it really matter?' The fact this question can even be asked and taken seriously by some should be deeply concerning to all. How many violations of liberties, how many crimes and injustices could be justified with exactly that same 'reasoning'?

How do we stop this? How do we fight this tendency of human nature without becoming luddites? By remembering that we are all individual human beings, full of dignity and worthy of respect as unique creations. If something is going to be happening to us, even something innocuous, we had better have a say in it. Only by treating each other in this way, with true respect, can we hope to preserve any kind of society that respects individuals and their choices.

What’s your data worth?

AI and Symbols pt 6

Here we are, the end of our extended series on AI and symbols, specifically how best to go about training an AI to recognize and interpret them. If you've stuck with us the whole way, all we can say is, "skol!" We've covered a lot of ground, but always circling around one particular question: should we train the AI based on localized customs and behaviors, or strive for a more universal approach that would yield meaningful results regardless of place or time? The latter of course would be the obvious choice. However, it's complicated by the fact that not all symbols are the same. Certain symbols have meanings that can at least be partially inferred, sounds for example. Based on tone and volume, a person can infer much about the specific intent behind a sound regardless of how familiar the listener is with it. Others, like numbers and shapes, are pretty clear across cultures. Even a culture that for some reason isn't familiar with the way most people write numbers can be acquainted with it easily enough. Which would be handy if you found a lost tribe of Romans in the Alps still using Roman numerals.

Other symbols are a bit more complicated. Take street signs. Anything other than a simple arrow will take a little bit to decipher. A traffic light in particular is something that doesn’t translate fully on its own to another culture. Another example would be something like a flag or religious symbol. These are much more difficult for someone who isn’t familiar with their history and the communities they represent to understand. They may represent universal truths but they can’t be broken down and reduced in the same way that a simple arrow or a number can. Indeed, not all symbols are the same. 

With many simple symbols it is possible to train the AI to recognize them, since they are largely mathematical expressions. That's easy for the machine, since it is based on ones and zeroes anyway; it's operating in its element. More complicated things like the traffic light or a stop sign can be learned, though it will take a bit longer. The AI will have to observe how people react to them in order to discern their meaning. That is, if you want it to learn rather than just be programmed. Those more complicated symbols, the ones that at different points have inspired the best and the worst in humanity, are another matter entirely. Their meaning is inextricably linked with the cultures they represent. A symbol may well mean something definite and timeless, but you'll never figure it out regardless of how much mathematical analysis you do on it. You and the AI would have to study the people and the beliefs that look to those symbols. That is a much more complicated process. Yet, as complicated as it is, it's something people do almost intuitively. Even with something unfamiliar, we can often look to similarities, elements that relate to something we are familiar with. Of course, we might end up completely wrong, but self-correction, and awareness of the need for it, is something people are built for. It's practically the basis of all scientific and philosophical inquiry.

So, the question remains. Can we train an AI to do that? Can we train an AI to understand that it might not fully understand? Can we teach it to keep looking for better answers after it has made an initial evaluation? Right now, it doesn’t look like it. The human mind is much more complex than a mere biological computer. There are processes at work that we haven’t begun to fathom. Will it one day be possible to fathom them and translate that into something a machine can work with? Possibly. But one thing is certain, no AI will be able to fully understand complex symbols until it can understand itself. 

What’s your data worth?

AI and Symbols Pt. 5

Here we are at part five (or is it 50?) of our series on training Artificial Intelligence to work with symbols, to recognize and interpret them. Today, we are going to continue to wrestle with whether the method of training AI to do this should be based on agreed-upon cultural standards or a universal standard. The truth is this: it is a difficult topic to contend with. Normally, it would definitely be preferable to go with a truly universal standard of interpretation. However, it has to be admitted that interpreting symbols presents unique challenges in that regard. This is because much of the meaning in nearly any symbol is dependent on the local culture. It also depends greatly on one's view within that culture.

Look back at the swastika that we discussed some time ago. Pretty much everyone agrees that what the Nazis stood for is evil and that the swastika is a symbol of all that evil. Yet, the Nazis didn't regard their actions as evil; they regarded themselves as the good guys. Then there is the fact that before the Nazis appropriated the symbol, the swastika was a benign symbol in multiple eastern religions. The point is, this one symbol has at least three very separate meanings that depend on personal understanding and knowledge of context.

Or take a hand gesture, another subject we've touched on before. In this case, consider a salute. The Nazis saluted with an arm extended at a 45-degree angle. Americans salute with the upper arm parallel to the ground and the forearm bent to bring the fingertips to the corner of the right eye. Other cultures may salute with a bow, or a closed fist over the heart. To many outside of a given culture, a particular salute will likely mean nothing. Go to some secluded Amazon tribe and they won't recognize any of those particular gestures.

Take another, the ever popular middle finger. Most in the western world will recognize its meaning right away. The same meaning was once conveyed in the plays of Shakespeare by the biting of one’s thumb. Other gestures convey the same meaning in other ways. However, a Mongolian tribe is likely to be wondering why you want them to see your middle finger. Perhaps they’ll think something is wrong with it or you just have a very strange way of pointing.  

Now, it might seem at first that all of these different gestures, and the fact that any given culture might not know of one or any of them, would be a silver bullet against any thought of establishing an objective standard for interpreting symbols. However, I would argue that is too superficial. Instead of looking at the gesture, the symbol itself, look at the concept it symbolizes. In the case of a salute, it conveys respect, and it does so regardless of the particular gesture being used. In the case of the middle finger, it conveys anger and active disrespect. Again, it does this independent of the particular gesture being used. By digging past the appearance of the symbol to the ideas it is meant to convey, we reach something much more like a universal standard. When we see these symbols in a context we understand, we know what they mean because we have learned to associate them with other signals like body language and tone of voice. We've learned this unconsciously through years of observation. When we see a gesture we don't recognize, we pay attention to those other elements, body language, tone of voice, environmental context, to get to the concept behind the gesture.
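One way to picture that separation between surface gesture and underlying concept is as a simple lookup keyed by culture. The sketch below uses purely illustrative entries; a real system would have to learn these associations from observation rather than from a hand-written table.

```python
# Sketch: map culture-specific surface gestures to the shared concept beneath
# them. Entries are illustrative only; a real system would learn these.
GESTURE_TO_CONCEPT = {
    ("american", "hand to brow"): "respect",
    ("east asian", "bow"): "respect",
    ("western", "middle finger"): "contempt",
    ("elizabethan", "bite thumb"): "contempt",
}

def interpret(culture: str, gesture: str) -> str:
    # Unknown surface forms fall back to "unknown", which is exactly where
    # context (tone, body language, environment) would have to take over.
    return GESTURE_TO_CONCEPT.get((culture, gesture), "unknown")

print(interpret("elizabethan", "bite thumb"))   # -> contempt
print(interpret("amazonian", "middle finger"))  # -> unknown
```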

AI will have to be taught to do the same. It will have to learn what we mentioned last time, how to recognize the universal concept behind the local and subjective expression of it. Once we can figure out how to clear that hurdle, we will have really gotten somewhere with actually making AI as intelligent as it is artificial.

What’s your data worth?

AI and Symbols Pt. 4

In order to take AI from a mere program that is very good within a narrow sphere and elevate it to the level of actual intelligence, we need to find a way to teach it how to recognize and interpret symbols. One of the main topics of discussion in this series has been whether we should take a subjective, interpretation-based approach or an objective, universal one to training the AI as to the nature of symbols. Before we delve further into this discussion, though, we should take a step back to appreciate the scale of the task. To do that, we need to recognize that there is disagreement over the very nature of symbols in the first place. What makes a symbol a symbol?

For the purpose of our discussion we should briefly look at the definition given by the authors of Symbolic Behavior in Artificial Intelligence, the paper that has been the basis of our discussion in this series. They draw in part on a definition given by Herbert Simon describing a symbol as "a series of interrelated physical patterns that can designate any expression whatsoever". All right. What does that mean?

At first glance, one could read it as meaning that any symbol, any "series of interrelated physical patterns," can literally represent anything. That may or may not be the way Simon intended it. If he did, we're not sure how to help him. Obviously certain patterns can only represent certain things. A statue of an elephant clearly represents an elephant and not a mouse. "But what if you call an elephant a mouse?" says the gadfly in the back. Then it represents something that someone calls a mouse. The point is, it's very clear what that statue represents, no matter what name you give to the animal.

So, what else might Simon's definition mean? A better reading, one that makes a lot more sense, is that one can use some kind of "interrelated physical pattern" to represent anything. More simply, literally everything can be symbolized. The immensely complicated and intense concept of love is symbolized with a heart. A circle is a circle wherever you go. If anyone in virtually any culture anywhere sees a small disc with a person's head on it, they know they are looking at some kind of money.

So, what should we use as the definition of a symbol? A simple phrase would be that a symbol is anything that represents something else, whether it be a sound, an action, a thing, or a concept. Now that we have that out of the way, let’s get back to the idea of an objective interpretation of symbols.

We've already talked about the fact that the paper's authors favor an interpretation-based approach to training AI. They do correctly identify that symbols get a lot of their meaning from the culture in which they originate. Based on this, would it be fair to criticize an objective approach as being impossible, or anemic at best? Impossible? No. Anemic? Perhaps.

However, consider an opera. They are still very often performed in Italian or German. Even if an opera is in your local language, the singing will often be so stylized that you may not be able to recognize anything. Yet, despite not being able to understand all the symbols being presented to you, you still pick up something. You can pick up on the tone of the music, the melodies, the pitch of the singer's voice, all of which convey meaning to the listener. In short, there are universal aspects to the symbols being presented that transcend particular cultures.

This can be done with a variety of symbols. When we see a statue, we know it represents a particular thing. We can tell from the expression of an illustration something of the mood of the character presented. 

Because this universal element is identifiable, it seems clear that we can and should explore a universal basis to teaching an AI how to properly interpret symbols.

What’s your data worth?

AI and Symbols pt 3

"Symbols lie at the root of intelligent action." This is a quote from Symbolic Behavior in Artificial Intelligence, the paper that inspired this series. The statement says a lot about just how important symbols and their interpretation are to how we navigate our daily lives. Even if we are mindlessly going through life fulfilling only basic needs, we are still dependent to a degree on symbols and on interpreting them.

One of the chief aspects that we've focused on so far is the subjectivity that seems inherent in the interpretation of symbols. Many, including the authors of the paper in question, will go so far as to contend that symbols only exist in terms of their interpretation. They argue from there that interpretation is dependent on agreed-upon, yet often shifting, behaviors and consensus in society. If that is the case, AI should be taught how to interpret symbols based on those behaviors. It's easy to see why this might be a tempting notion. After all, in part two of this series we talked about how the swastika's meaning was changed from a symbol of divinity to a symbol of hatred and oppression by the Nazis. Symbols can often change in meaning depending on the context of the time and place. Yet, this seems like a shaky basis for training an AI. One would hope that an AI would be something that could be used universally, regardless of time and place. To base one of the most important aspects of its training on a purely local and subjective standard means it would need to be constantly retrained. Even more, it would have to be trained differently based on local culture and customs, leading to multiple versions of the AI, versions that would have as much difficulty communicating with each other as different cultures do now.

So, how to resolve this issue? If behavior, custom, and subjective interpretation don't work, what does? We have to find some kind of objective standard to work from. Part of that is casting aside silly rhetorical tricks. If a tree falls in the woods and no one is there to hear it, of course it makes a sound. Or take one that is treated far too seriously, Schrödinger's Cat. For those unfamiliar with it, the cat is a device used to illustrate the quantum-mechanical principle that subatomic particles can be in two states at once. That aside, the cat is a silly example. The idea is that if our feline is in a box and you don't know whether it is alive or dead, then it is somehow both. That's ridiculous; the cat is either alive or dead, and the fact that you don't know which doesn't change anything. How does all this relate to our problem of symbols? Simple. The symbol exists on its own, independent of any interpretation. Just like the cat, just like the sound of the tree falling, the symbol does not need an interpreter simply to exist.

How to do that? Wherever possible we should be looking in the direction of fundamental truths of the universe and how given symbols are based on those. Math in its various forms is an excellent tool for understanding the universe and even various symbols. Many, including medieval sword makers, understood this and incorporated mathematical proportions to imbue their products with rich symbolic meaning.

Naturally, turning strictly to math won’t always be possible. How does one use math to interpret the meaning of a flag for example? Or a novel? In these cases, we should go to the original intent of the symbol’s creator. Yes, there may be additional meanings that are there outside of the creator’s intent, but those are accidental. 

Given all of this, there is still the fact that meanings do change over time, and that certain understandings and expressions of symbols are local. How do we reconcile this? To paraphrase the late G.K. Chesterton, "men don't disagree much on what is good, but they do disagree a great deal on how they understand that good." The idea is that while there are definitely universal truths, those truths will be expressed differently based on a variety of circumstances. A good place to start for training an AI would be to recognize that fact and look for the deeper truths that lie beyond the local understanding.

What’s your data worth?

AI and Symbols Pt. 2

Last time we talked about Artificial Intelligence (AI) and the difficulty it has with recognizing the significance and meaning of symbols. We provided a rough outline of some of the difficulties AI has in this area and how they form the chief obstacle to making truly intelligent machines, as opposed to machines that are good at the one or two things they are designed for. In this and the next few pieces we'll be going deeper, exploring the many kinds of symbols and how people take their recognition and use for granted, rarely fully appreciating all the complex processes involved in doing so.

Let's begin with language. Any language is a series of sounds and/or written words, each of which is a symbol that stands for a thing, action, or concept. Even just that first fact, that languages are usually both spoken and written, hints at the great deal of complexity involved. If we are fluent in a given language, we can easily hear a series of sounds and understand the written words that correspond to them. We further understand the thing, action, or concept that they represent. However, someone who is just learning a new language will appreciate just how difficult it is to wrap his head around all of these relationships. This is especially true when trying to match the written language with the proper pronunciations. Just try being an English speaker learning French. Another place where difficulty arises, even between people, is the fact that different cultures and their languages have concepts that don't translate perfectly into other languages. All of these are reasons why Google Translate often has such entertaining results.

Yet, all of that is in some ways the easy part. When we are trying to interpret the audible and written symbols of a language, it is relatively straightforward compared to trying to interpret other kinds of symbols. With a language, there are still meanings that can be checked using tools like dictionaries. What about paintings? On one level, a still life seems very simple. A bowl of fruit symbolizes a bowl of fruit. Yet, art is rarely so superficial. Very often, arguably always, the artist imbues his work with additional meaning. That meaning can be intended or not, something that makes the interpretation of art such a contentious and interesting subject. Many times, someone will look at a painting and see things in it that the artist could never have anticipated, yet are there nonetheless. 

Let’s not forget that the meaning of symbols can change over time as well. Perhaps the most famous example of this phenomenon is the swastika. Once, it was a fairly obscure symbol of divinity used in a variety of eastern religions. However, virtually no one can see it now and not think of the worst kinds of violence and bigotry in human history. The swastika has become the flag for fascism in the western mind since WWII, quite the change from its original use.

Another example from WWII shows how a single image can symbolize a great many things. The iconic picture of the raising of the US flag at Iwo Jima symbolizes victory, liberation, camaraderie and many other things besides. While an AI might pick up on some of that, a full appreciation of everything symbolized in that one image is impossible without some historical knowledge of the actual context behind it. 

While it might seem from this that the interpretation of symbols is a hopelessly subjective enterprise, the truth is that symbols still have a genuinely objective meaning, though it is largely dependent on context. The swastika does have an objective meaning but one must be aware of it. A painting of an apple can symbolize many things. But it definitely symbolizes an apple and definitely does not symbolize an orange. The difficulty in determining the meaning of many different symbols lies precisely in trying to sift through the many subjective meanings of things in order to get to the objective. A task that AI has thus far not been up to. We’ll see later in this series whether there is any hope of overcoming this hurdle or not.

What’s your data worth?

AI and Symbols

People have been working on Artificial Intelligence for years. No, not to create HAL 9000 or Skynet. Well, hopefully not. The goal is to create programs that are better at analyzing data, helping us to make better decisions. 

One of the primary obstacles to that goal is being able to recognize the meaning of symbols. Why should that be so hard? Program various symbols and their meanings into the algorithm and everything should be fine. Right? Wrong. There are some symbols that should be very easy to handle, such as a STOP sign. Program in the meaning of the word 'stop' and the color and shape of the sign, and your automated car will now be able to stop when it is supposed to. Sounds simple, doesn't it? You'd think it would be.

Yet, STOP signs have been known to be used as décor, or included on a storefront, or used to say something other than 'stop at the intersection'. For an automated car trying to navigate busy city streets, this is an extremely daunting task. It has to be able not just to recognize the symbol but to recognize its context. This means taking into account where the symbol is located, its size, and the other factors that make up the immediate context. If the vehicle's AI can't sort out the context and make a correct judgement as to whether to stop the vehicle or wash hands before returning to work, then it isn't all that great.
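As a rough sketch of what "recognizing the context" could look like (with purely illustrative fields and thresholds, not how any actual self-driving stack works), the detection itself is only one input; position and size have to agree before the car acts on it.

```python
# Toy sketch: recognizing the symbol is not enough; the vehicle also weighs
# context before acting. All fields and thresholds here are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # what the vision model thinks it saw, e.g. "stop_sign"
    confidence: float   # model confidence, 0..1
    height_m: float     # estimated real-world height of the sign
    roadside: bool      # mounted at the roadside vs. on a storefront or shirt

def should_stop(d: Detection) -> bool:
    # A regulation sign is roughly 0.6-0.9 m tall and stands at the roadside;
    # a novelty sign in a shop window should fail at least one of these checks.
    return (d.label == "stop_sign" and d.confidence > 0.8
            and 0.5 < d.height_m < 1.2 and d.roadside)

print(should_stop(Detection("stop_sign", 0.95, 0.75, True)))   # real sign
print(should_stop(Detection("stop_sign", 0.90, 0.20, False)))  # decor, ignore
```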

Imagine another example. If I give the middle finger to someone, it could be interpreted in a number of ways. One is the obvious, ‘go away, I don’t like you’, another is that it could be humorous. Another could be simply that the finger in question hurts and it’s being held up to display a bruise or cut. We are able to intuit the context of the situation and interpret accordingly. However, missing just one piece of that will lead to differing interpretations with potentially dangerous results.

Building programs capable of making even these very simple kinds of distinctions is more difficult than it might sound. This is because you can’t literally program every single variable into the software. At some point, your AI software will have to be able to truly function on its own. And to get there, it has to train.

Think of training a dog. When you teach a dog to sit, does it hear the word 'sit' and understand its meaning and act accordingly? No. What is going on is that the dog recognizes the word but is also able to understand the context of the command to 'sit', such as the tone of voice used, a light push to sit, and even facial expressions. All of that factors into understanding the simple meaning of a simple word.

If it is that hard to explain how a dog goes about responding to the command to sit, or if there is so much to consider in a simple and common hand gesture, how much harder is it going to be to get an AI to explain the level of symbolism used in Dante? Answer: virtually impossible.

Fortunately, we don’t need these programs to do the impossible, we just need them to do a little better than the dog. The truth is, that will be hard enough, hard, but doable at least. The AI will need to be taught how to recognize many different symbols before it finally ‘learns’ how to do so and no longer needs to be trained. 

New methods of doing this very thing are being tested right now. What we hope happens is that the programmers involved understand the complexities of all these systems. Whether or not they keep that complexity in mind is the difference between teaching these programs to control us by making decisions for us, or programming them to learn and to teach in order to help us make better decisions for ourselves.

What’s your data worth?