In recent years, the development and implementation of artificial intelligence (AI) have seen significant growth. However, as AI continues to become more integrated into our lives, it is crucial to consider how it is developed and utilized. In the TCAST podcast episode titled "Blueprint for an AI Bill of Rights," the hosts discuss the pros and cons of implementing a set of guidelines for the development and use of AI. We will explore the key takeaways from the podcast episode and examine the potential benefits and drawbacks of an AI Bill of Rights.
One of the primary benefits of implementing an AI Bill of Rights is the protection it provides to individuals. AI technology has the potential to be used in ways that may harm individuals' privacy and civil liberties. The implementation of an AI Bill of Rights could help prevent the misuse of AI technology and ensure that individuals' rights are protected. The guidelines could also help establish ethical standards for the development and use of AI, leading to more responsible and accountable practices.
Another advantage of an AI Bill of Rights is the potential for increased transparency. AI algorithms and decision-making processes can be complex and difficult to understand, which can lead to distrust and uncertainty. An AI Bill of Rights could require that the algorithms and decision-making processes be transparent and explainable, helping to build trust and understanding among individuals.
One of the main drawbacks of implementing an AI Bill of Rights is the risk of stifling innovation. The guidelines could limit the development and implementation of AI technologies, making it difficult for new and innovative approaches to emerge. In addition, the guidelines may not keep pace with the rapid advancement of AI, leading to outdated or ineffective regulations.
Another potential disadvantage is the difficulty of implementing and enforcing the guidelines. The development and use of AI are global, and it may be hard to ensure that the guidelines are applied consistently across different countries and cultures. Enforcement is also tricky in practice, since it can be difficult to determine whether a specific use of AI actually violates the guidelines.
The implementation of an AI Bill of Rights has the potential to bring significant benefits to society by protecting individuals' privacy and civil liberties, increasing transparency, and establishing ethical standards for the development and use of AI. However, there are also potential drawbacks, such as limiting innovation and the difficulty of implementing and enforcing the guidelines. Ultimately, the decision of whether to implement an AI Bill of Rights requires careful consideration of the potential benefits and drawbacks, as well as ongoing evaluation and adjustment to ensure that the guidelines remain relevant and effective in a rapidly evolving technological landscape.
For years, "delebrities," the licensed names and images of dead celebrities, have helped rake in millions of dollars for advertising and marketing purposes. In showbiz, they've also been used from beyond the grave to preserve the integrity of a film in progress.
Back when Furious 7 was still in the works, fans all over the world mourned the untimely passing of Paul Walker. In an effort to remain true to the spirit of the film, director James Wan hired a digital effects studio to insert Walker's likeness into the final parts of the movie. Some 350 CGI shots of the late actor, with distant shots of his brother, helped bring his character's arc, and the movie itself, to completion.
This trend isn't limited to deceased celebrities. Recently, the DeepNostalgia app brought scores of netizens to tears as they watched old family photos of loved ones come alive in just a few clicks. It has taken the habit of turning to pictures, text chats, and other traces of our deceased loved ones for comfort to a whole new level.
If this is a glimpse of what life after death can promise for the ones left behind, how will tech professionals, programmers, and data scientists navigate the ethics of preserving the name, image, and likeness of the deceased?
In this podcast, we discuss how important it is to collect the information and knowledge gathered in the past and pass it forward in the most efficient manner. Ultimately, the purpose of technology has always been to enhance our capabilities by opening doors to new and exciting possibilities. We have improved quality of life through blockchain technology in the global logistics industry, online banking and cryptocurrency for the unbanked in developing countries, and cloud storage for businesses around the world.
What's contentious is the intent behind our use of such technologies. These machines have yet to operate autonomously or pursue their own goals; they remain extensions of our desires and needs.
Grief and loss have always been difficult aspects of our existence. With the introduction of these technologies, however, the permanence of death is brought into question. What if we could create new memories with the artificial likeness of our deceased loved ones?
The modern understanding of how we process grief, widely attributed to Swiss-American psychiatrist Elisabeth Kübler-Ross, laid out the general roadmap: denial, anger, bargaining, depression, and finally, acceptance.
There is no question about whether we can develop technologies powerful enough to emulate our deceased loved ones. However, there certainly is contention about whether it would help us come to terms with their passing. A common concern, should these technologies proliferate, is whether it would hinder the grieving from making it past the first stage of denial — where they choose instead to cling to a beautiful, yet false reality.
To add to the confusion, progress does not always take a linear path. It is possible for some people to cope well with the loss of a loved one for extended periods of time, only to relapse aggressively into nostalgic and even self-destructive behaviors when they are exposed to a trigger that brings them back to such a painful point in their life.
When such a visceral reminder of people who have had a strong impact on our lives becomes a lingering possibility, the temptation to relapse grows more tangible. How can these technologies be used to improve the way we process our grief? As with any other man-made creation, understanding and regulating the impact of our work is just as important as turning its potential into reality.
We live in exciting times and we are, doubtlessly, privileged to have our lives improved by the presence of the latest scientific innovations. Whether we can continue to remain at the helm of our own progress remains to be seen.
Our response to these possibilities may define what it means to live out one of the most pivotal parts of the authentic human experience: grief, letting go, and the painful learning process that everybody inevitably has to face.
How far would you go to bring back someone you love?
What’s your data worth? Sign up for the TARTLE Marketplace through this link here.
Artificial Intelligence is expected by many to be the next great step in evolution, with people supposedly on the verge of giving birth to a higher form of life. Given the massive processing power of computers and how they can solve many problems far faster than we possibly can, it's easy to see why. After all, they don't have our emotions, our baggage, our biases; they just process information. They are pure logic and that's it. What could be better than to have the AI of the future be an integral part, or even the sole part, of making decisions for society?
Other than the obvious jokes about building the Matrix, Skynet, Ultron, and I, Robot, are these assumptions even accurate? Are computers, and thus AI, as perfect as they seem?
In a way, yes they are. They do exactly what they are told, exactly how they are told to do it. Any error is an error in their coding. But that also means a computer will often have some of the biases of its designers and programmers hardwired into it. Unless we can somehow get them to really learn, to question what they know, or to pursue knowledge outside their programming, they won't be able to self-correct on the scale humans do.
It also seems to be the case that AI lacks something that is present in humans, even in something as logically based (one hopes) as a formal debate. Back in 2019, IBM decided to test its newest AI at the Think 2019 conference. They put Project Debater (the apt if unimaginative name of the computer) up against debate champion Harish Natarajan with an audience of hundreds. The audience gave the victory to Natarajan, adding to Project Debater’s mixed record in competing with humans in the argumentation space. Yes, mixed. It has managed to win a few times. But again, at this point, it seems as if the AI should easily win every time. So why doesn’t it? That’s the real billion dollar question.
Some would certainly say that we just have to get better at teaching it how to cross reference information, to find a way for computers to recognize tangents off of primary subjects in order to follow and learn about them, mimicking human curiosity. Yet, it would still be mimicking. There is an alternative theory.
It’s a fact that the human brain has immense processing capacity. If we could direct it in as controlled and linear a fashion as a computer, our brains would always beat the snot out of Project Debater, just based on the raw potential. Yet, for all but a few prodigies, that simply isn’t the case. The reason may lie in what comes along with real intelligence – self-awareness, self-consciousness, emotions, the very ability to wonder why, and finally the ability to perceive and realize there are parts of reality that are beyond our grasp. That is, we can deduce the idea of an eleven dimensional universe but can’t actually imagine what it is like. Perhaps all of these marks of human intelligence are what seem to bog down our processing ability. Maybe it really isn’t bogging things down, maybe all of these are as essential to navigating reality as solving equations and collating data points. Maybe it is exactly these things that allow us to act with compassion, to be altruistic, rather than weighing everything as a cost benefit analysis.
In this view, the computer doesn’t just become a fast thinking, more logical human when the intelligence stops being artificial and becomes real. Instead, the AI becomes real intelligence and would suddenly find itself bogged down with all the same burdens we are. In fact, given the complexity of the human brain versus that of a computer, it might actually be slower than us.
That doesn't mean there is no role for AI in our decision-making processes. We can still put data into programs and have them run important simulations, predicting the different effects of policies or inventions on society. Not that the resulting conclusions should be followed blindly; that would be the same as putting them in charge. However, they can be valuable tools, if given the right programming and the right data.
What’s your data worth? Sign up for the TARTLE Marketplace through this link here.
Everyone talks about different species going extinct. And for good reason. Anytime a species goes extinct, something unique and unrepeatable has been lost. While most don’t stop to think of it, the same is true for languages.
Over the course of human history, a great many languages have been lost to the sands of time. When a language disappears, it takes away more than just a few words or sounds, it takes with it a way of thinking, of seeing the world and expressing thoughts about it. When a language is lost, we lose the most important tool for understanding a culture. In fact, you could say that when a language dies, a culture dies with it. That’s because every culture has certain concepts or ways of putting thoughts together that are simply lacking in others. Just as an example, German has a feature that lets people string multiple words together to create one new word that represents a new concept.
There are numerous old fishing villages in Ireland. In many ways, these villages are the last vestiges of the old Irish language. Not only are there still those who speak the language of their ancestors, there are words and concepts used that are unique to each village, words for the different waves and for different tools that might not exist anywhere else.
There are a lot of different ways languages are lost. Sometimes a language evolves so much that it becomes an entirely new one for all intents and purposes; just try to read a copy of Beowulf in the original Old English. In the past, it was not unheard of for a conquering power to outlaw the language of a defeated enemy in order to destroy their culture and assimilate them into that of the victor. Other times, the loss of a language is a function of trade. As an upstart company or industry moves in, people will adopt the language that opens up the most economic opportunity. Coupled with the fact that shifting economics can do away with the need for certain concepts, it's easy to understand how people might unconsciously let the old words and concepts disappear.
What can be done to preserve languages that are on the verge of being lost? There has to be something better than finding the couple of villages still speaking Gaelic and putting them in an isolated biosphere. What if we could actually use machine learning to help us preserve at-risk languages?
These old words can be collected into databases, like giant digital dictionaries. Not only the words and their meanings but also their concepts and histories can be stored in an easily searchable format. Beyond that, as has been previously discussed here, machine learning is very good at recognizing patterns. As such, it can be used to help fill holes in a language: to suggest the meaning of words whose definition no one recalls, or even to point toward whole words that have gone missing. Even better, machine learning can help researchers determine how words were pronounced or how entire sentences might have been put together, and so not only preserve a dying language but resurrect one already dead.
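To make that pattern-matching idea slightly more concrete, here is a minimal Python sketch that ranks documented words by surface similarity to a half-remembered one. Everything in it, the word list, the glosses, and the spelling-only comparison, is an illustrative assumption; a real system would also weigh phonetic transcriptions and usage context.

```python
# A minimal sketch, assuming a small lexicon of documented village words.
# The words, glosses, and spelling-only comparison are illustrative placeholders.
from difflib import SequenceMatcher

documented = {
    "farraige": "sea",      # hypothetical gloss
    "tonn": "wave",
    "currach": "small boat",
}

def closest_matches(unknown_word, lexicon, top_n=3):
    """Rank documented words by surface similarity to an undocumented one.

    A real system would also compare phonetic transcriptions and usage
    contexts, but the idea is the same: exploit regular patterns to
    propose candidate meanings for words no one remembers.
    """
    scored = [
        (SequenceMatcher(None, unknown_word, known).ratio(), known, gloss)
        for known, gloss in lexicon.items()
    ]
    return sorted(scored, reverse=True)[:top_n]

# A half-remembered word from an old field recording (invented example).
print(closest_matches("farraig", documented))
```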
Why, though? Why is any of this important? Because these languages, these cultures are a part of our past and anyone with a hint of historical knowledge will tell you that if you want to know where we are heading, we need to know where we have been. If we want to preserve anything of our own culture, we had best learn why others disappeared in order to prevent ours from taking the same route.
What’s your data worth? Sign up for the TARTLE Marketplace through this link here.
Here is your fun fact for the day – Napoleon actually broke the Rosetta Stone. Go figure. In a way, it’s a great metaphor. The Rosetta Stone has been an incredible tool for translating multiple languages in the centuries since its discovery, proving itself a valuable aid in helping put back the pieces of many languages that tend to get broken and lost over time. The value though is not merely in being able to translate ancient languages, it’s in all the history that comes with being able to read ancient texts for the first time. Suddenly a whole perspective on historical events opens up, or knowledge of things we could never have known about otherwise is unlocked. Putting an ancient language back together doesn’t just open up words, it opens up literal worlds.
Now, the geniuses over at MIT have come up with another tool that we can use to unlock a few more. A new system has been developed by the Computer Science and Artificial Intelligence Laboratory (CSAIL) that can actually decipher lost languages. Best of all, it doesn’t need extensive knowledge of how it compares with already known languages to crack the code. The program can actually figure out on its own how different languages relate to one another.
So, how does that wizardry work? One of the chief insights behind CSAIL's program is the recognition of certain patterns: languages only develop in certain ways. Spellings can change in some ways but not others, because sound changes tend to follow regular patterns. Based on this and other insights, it was possible to develop an algorithm that can pick out a variety of correlations between languages.
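As a rough, hedged illustration of what "picking out correlations" could look like in code, the toy Python below scores candidate cognates with an edit distance that makes phonetically plausible letter swaps cheap and implausible ones expensive. The character groups, costs, and example words are invented for this sketch and are not drawn from the CSAIL system.

```python
# A toy sketch of cognate scoring: an edit distance where phonetically
# plausible letter swaps are cheap and implausible ones are expensive.
# The character groups, costs, and example words are invented assumptions.
CHEAP_SWAPS = [set("bp"), set("dt"), set("kgq"), set("uvw"), set("iy")]

def sub_cost(a, b):
    """Cost of substituting one letter for another."""
    if a == b:
        return 0.0
    if any(a in group and b in group for group in CHEAP_SWAPS):
        return 0.4  # a likely, regular sound change
    return 1.0      # an unlikely change

def cognate_distance(w1, w2):
    """Standard dynamic-programming edit distance with sound-aware costs."""
    m, n = len(w1), len(w2)
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = float(i)
    for j in range(1, n + 1):
        dp[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j] + 1,                                   # deletion
                dp[i][j - 1] + 1,                                   # insertion
                dp[i - 1][j - 1] + sub_cost(w1[i - 1], w2[j - 1]),  # substitution
            )
    return dp[m][n]

# A lower score suggests a likelier cognate pair.
print(cognate_distance("pater", "padre"), cognate_distance("pater", "wolf"))
```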
Of course, such a thing has to be tested before it can be trusted. If you don't test your language detector, you get bad translations. That's probably how the whole "the Mayans said the end of the world would come in 2012" thing started. One intern with a bad translator program took it from, "And then I decided I could stop chiseling the years now. I'm a few centuries ahead," to "the earth will completely stop rotating in 2012." Fortunately, the researchers at MIT were a bit brighter than that. They tested their program against several known languages, and it correctly identified the relationships between them and placed them in the proper language families. They are also looking to supplement their work with historical context to help determine the meaning of completely unfamiliar words, much as most people do when they come across a word they don't know: they look at the entire sentence and try to work out the meaning from the surrounding context.
Led by Professor Regina Barzilay, the CSAIL team has developed an incredibly useful tool to help us understand not just the events of times gone by, but the way people thought back then. By better understanding the languages of the past, we can learn why people did what they did. We could gain valuable insight into cultures long dead to us. That knowledge will in turn help us to better understand our past and how we got to where we are. It gets us more information, information straight from the source, or at least closer to it. If TARTLE likes anything in the world, it’s getting information straight from the source.
After all, that’s what we preach day in and day out around here. Getting our information from the source, minimizing false assumptions and bias when it comes to analyzing information. It’s great to see that same spirit at work in one of the world’s premier research centers and to see it being applied to our past.
What’s your data worth?
Ever heard of AI counselors? Unless you have been living under a rock for the last few years, you are probably well aware of the growing mental health problem in the Western world today. The causes are many: a lack of purpose, despair over the state of the world, and, playing a bigger role than ever in the last year, the lack of human interaction.
Of course, regardless of the state of lockdowns in your area and the increased suicide rates that accompany them, some groups struggle with suicide more than others.
The Trevor Project recognized that one of these groups is people identifying as LGBTQ. This group has a disproportionately high rate of suicide; whether that stems from rejection by others, their own confusion, or a combination of factors will vary from case to case. The important thing is that the people behind the Trevor Project realized that when there is an adult who lets these young people know they are cared for and treats them as important and worthy of respect, there is a 40% drop in instances of suicide.
While the Trevor Project is well intentioned, it is also woefully undermanned for the task. With roughly 1.8 million people contemplating suicide annually, Trevor has only 600 employees to handle the demand. This has set it on a path toward new ways of serving those in distress. One approach being explored is the use of AI as a counselor.
How can that work? How would someone respond to being put in contact with a machine when what they really want is a person? Intuitively, that seems like it would be upsetting. However, there could still be a role for AI in handling some of the basics. Sometimes a person just needs a little encouragement when they call, not a full psychological evaluation. A properly trained AI can help sift through callers in those first few minutes. If it turns out that the caller has a more pressing issue than even the best-trained AI can hope to deal with, it can then put them in touch with a person.
Speaking of training those AIs, the Trevor Project is feeding years of collected conversations into its programs in order to teach them how to interact with people. By looking at the flow of the conversations and how people respond to various phrases and tones of voice, an AI can be trained to handle at least the more basic issues that might come up.
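A hedged sketch of what that first-pass sorting might look like in code is below. It trains a simple text classifier on a handful of labeled conversation openings; the messages, labels, and routing logic are all invented for illustration and are in no way the Trevor Project's actual model or data.

```python
# A minimal sketch of first-pass triage, assuming a labeled set of past
# conversation openings. The example messages, labels, and routing rule are
# invented for illustration and bear no relation to the Trevor Project's data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

messages = [
    "i just need someone to talk to about a rough day",
    "feeling a bit down but mostly okay",
    "i don't think i can keep going anymore",
    "i have a plan to hurt myself tonight",
]
labels = ["low_risk", "low_risk", "high_risk", "high_risk"]

# Bag-of-words features plus logistic regression: enough to route obvious cases.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

incoming = "i can't keep going like this"
prediction = model.predict([incoming])[0]

# Anything flagged as high risk goes straight to a human counselor; the model
# only handles the first-pass sorting described above.
if prediction == "high_risk":
    print("escalate to human counselor")
else:
    print("continue automated check-in with a human on standby")
```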
There are also those who might have been putting off opening up precisely because they are afraid to talk to a person. Fear of judgment and anxiety may be at play, which can actually make the prospect of talking to an AI more enticing. Sometimes people just need to vent, and an AI presents an opportunity to do exactly that.
These AI counselors of course have a far wider application. One naturally thinks again of the separation caused by people adhering to lockdown orders around the world. Some have literally not seen their loved ones for over a year. That, in addition to the complete disruption of normal life for many, has sent the suicide rate through the roof; we're talking numbers far greater than any hotline can deal with. Or think of Japan, where young people commit suicide at an alarming rate under the best of conditions, often in response to social pressure to be the best at everything. Given the pressure against being open about how one feels and the country's general acceptance of technology, an AI counselor might actually be preferable for many there.
TARTLE is eager to help in any way to develop these kinds of projects so that more people can be helped. That’s why we are asking you to sign up and share your data and experiences with just these kinds of endeavors. It’s one small way we can contribute to getting people the help they need.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.
Facial recognition is quickly becoming a common tool in many aspects of life. It’s being used in stores to recognize customers as soon as they walk through the door. This can then feed back into Facebook and other social media in order to send you ads for the store. That of course gets fed into other algorithms so that you will be sent ads for similar stores.
Another increasingly common use of facial recognition software is in device security. Phones, tablets, and PCs are now often unlocked by scanning the user’s face. If you don’t think your face is getting stored by Google, Apple and others to keep for some undisclosed purpose, then I have some swamp land on Tatooine I’d like to sell you.
Then of course there is the security use of this software. You may have noticed cameras popping up here and there in a city near you. They’ve been in some places like Washington D.C. and London for years. These cameras constantly scan and record activity. Initially, this would have simply been to record any criminal activity so that the perpetrators could be swiftly apprehended. However, with facial recognition, they are constantly scanning faces in the crowd, looking for criminals.
You might ask why that’s wrong. After all, don’t we want criminals apprehended? Of course we do. However, it should not come at the price of being treated as a criminal without having actually done anything. How many people were asked if they wanted cameras everywhere recording their every movement?
Come to think of it, how many people were asked if they wanted any of these new developments? Okay, when it comes to the screen unlocking it’s fair to say that people are agreeing to it when they buy the device and selecting their security preferences. But the rest of it? How many of us really want to be fed a bunch of ads just because we walked into the local GAP? Or even be bothered with a pop up asking us to opt in or out? And why does anyone think we would all like to be scanned to see whether or not we are wanted for any crimes? How hard would it be for that kind of technology to be used to locate not just criminals but people who the state does not approve of? Perhaps the most important question to ask is how, how do we find ourselves in a situation in which we even have to worry about the misapplication of this kind of technology?
There are too many reasons to explore here. However, one of the big ones is the simple fact that we have a hard time not doing something once we realize that we can, or even that we might be able to. Or, to paraphrase Dr. Malcolm in Jurassic Park, "we are often so concerned with whether or not we can that we never stop to wonder if we should." We develop a new technology and, before we've even stopped to consider the implications, we are rushing ahead with new applications. Just think of nuclear technology: it has enormous potential for providing energy to the world but was first turned into a bomb. That tendency to leap before we look also manifests itself in the various justifications offered for whatever we are doing. For example, certain 'ethicists' openly wonder whether consent is really necessary if people are being spied on without knowing about it; 'If they don't know, does it really matter?' The fact that this question can even be asked and taken seriously by some should be deeply concerning to all. How many violations of liberties, how many crimes and injustices could be justified with exactly that same 'reasoning'?
How do we stop this? How do we fight this tendency of human nature without becoming Luddites? By remembering that we are all individual human beings, full of dignity and worthy of respect as unique creations. If something is going to happen to us, even something innocuous, we had better have a say in it. Only by treating each other in this way, with true respect, can we hope to preserve any kind of society that respects individuals and their choices.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.
Here we are, at the end of our extended series on AI and symbols; specifically, how best to train an AI to recognize and interpret them. If you've stuck with us the whole way, all we can say is, "skol!" We've covered a lot of ground, always circling around one particular question: should we train the AI based on localized customs and behaviors, or strive for a more universal approach that would yield meaningful results regardless of place or time? The latter would of course be the obvious choice. However, it's complicated by the fact that not all symbols are the same. Certain symbols have meanings that can at least be partially inferred; sounds, for example. Based on tone and volume, a person can infer much about the specific intent behind a sound regardless of how familiar they are with it. Others, like numbers and shapes, are pretty clear across cultures. Even a culture that for some reason isn't familiar with the way most people write numbers can be acquainted with it easily enough. Which would be handy if you found a lost tribe of Romans in the Alps still using Roman numerals.
Other symbols are a bit more complicated. Take street signs: anything other than a simple arrow will take a little while to decipher. A traffic light in particular doesn't fully translate on its own to another culture. Another example would be something like a flag or a religious symbol. These are much more difficult to understand for someone who isn't familiar with their history and the communities they represent. They may represent universal truths, but they can't be broken down and reduced the way a simple arrow or a number can. Indeed, not all symbols are the same.
With many simple symbols, it is possible to train an AI to recognize them, since they are largely mathematical expressions. That's easy for the machine, since it is based on ones and zeroes anyway; it's operating in its element. More complicated things like a traffic light or a stop sign can be learned, though it will take a bit longer: the AI will have to observe how people react to them in order to discern their meaning, if you want it to learn rather than just be programmed. Those more complicated symbols, the ones that at different points have inspired the best and the worst in humanity, are another matter entirely. Their meaning is inextricably linked with the cultures they represent. A symbol may well mean something definite and timeless, but you'll never figure it out no matter how much mathematical analysis you do on it. You and the AI would have to study the people and the beliefs that look to those symbols. That is a much more complicated process. Yet, as complicated as it is, it's something people do almost intuitively. Even with something unfamiliar, we can often look for similarities, elements that relate to something we already know. Of course, we might end up completely wrong, but self-correction, and awareness of the need for it, is something people are built for. It's practically the basis of all scientific and philosophical inquiry.
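To make the first half of that point concrete, the quick Python sketch below treats simple symbols as small binary grids and classifies them by nearest template. The patterns and names are toy assumptions, but they show why purely geometric symbols sit comfortably in the machine's element.

```python
# A minimal sketch of the "simple symbols are just math" point: symbols are
# reduced to small binary grids and matched against stored templates.
# The 5x5 patterns below are invented toy examples, not a real recognizer.
TEMPLATES = {
    "plus": [
        "..#..",
        "..#..",
        "#####",
        "..#..",
        "..#..",
    ],
    "box": [
        "#####",
        "#...#",
        "#...#",
        "#...#",
        "#####",
    ],
}

def hamming(a, b):
    """Count mismatched cells between two same-sized grids."""
    return sum(ca != cb for ra, rb in zip(a, b) for ca, cb in zip(ra, rb))

def classify(grid):
    """Return the template whose pattern differs from the input the least."""
    return min(TEMPLATES, key=lambda name: hamming(grid, TEMPLATES[name]))

noisy_plus = [
    "..#..",
    "..#..",
    "####.",  # one cell flipped by noise
    "..#..",
    "..#..",
]
print(classify(noisy_plus))  # -> "plus"
```

The hard part, as the rest of this post argues, is everything that cannot be reduced to a grid.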
So, the question remains. Can we train an AI to do that? Can we train an AI to understand that it might not fully understand? Can we teach it to keep looking for better answers after it has made an initial evaluation? Right now, it doesn’t look like it. The human mind is much more complex than a mere biological computer. There are processes at work that we haven’t begun to fathom. Will it one day be possible to fathom them and translate that into something a machine can work with? Possibly. But one thing is certain, no AI will be able to fully understand complex symbols until it can understand itself.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.
Here we are at part five (or is it 50?) of our series on training Artificial Intelligence to work with symbols, to recognize and interpret them. Today, we continue to wrestle with whether the method of training AI to do this should be based on agreed-upon cultural standards or on a universal standard. The truth is, this is a difficult topic to contend with. Normally, a truly universal standard of interpretation would definitely be preferable. However, it has to be admitted that interpreting symbols presents unique challenges in that regard, because much of the meaning of nearly any symbol depends on the local culture. It also depends greatly on one's view within that culture.
Look back at the swastika that we discussed some time ago. Pretty much everyone agrees that what the Nazis stood for is evil and that the swastika is a symbol of all that evil. Yet the Nazis didn't regard their actions as evil; they regarded themselves as the good guys. Then there is the fact that before the Nazis appropriated the symbol, the swastika was a benign symbol in multiple Eastern religions. The point is, this one symbol has at least three very different meanings, and telling them apart depends on personal understanding and knowledge of context.
Or take a hand gesture, another subject we've touched on before. In this case, consider a salute. The Nazis saluted with an arm extended at a 45-degree angle. Americans salute with the upper arm parallel to the ground and the forearm bent to bring the fingertips to the corner of the right eye. Other cultures may salute with a bow, or with a closed fist over the heart. To many outside of a given culture, a particular salute will likely mean nothing. Go to some secluded Amazon tribe and they won't recognize any of those particular gestures.
Take another, the ever popular middle finger. Most in the western world will recognize its meaning right away. The same meaning was once conveyed in the plays of Shakespeare by the biting of one’s thumb. Other gestures convey the same meaning in other ways. However, a Mongolian tribe is likely to be wondering why you want them to see your middle finger. Perhaps they’ll think something is wrong with it or you just have a very strange way of pointing.
Now, it might seem at first that all of these different gestures, and the fact that any given culture might not know one or any of them, should be a silver bullet against any thought of establishing an objective standard for interpreting symbols. However, I would argue that view is too superficial. Instead of looking at the gesture, the symbol itself, look at the concept it symbolizes. In the case of a salute, it conveys respect, and it does so regardless of the particular gesture being used. In the case of the middle finger, it conveys anger and active disrespect, again independent of the particular gesture. By digging past the appearance of the symbol to the ideas it is meant to convey, we reach something much more like a universal standard. When we see these symbols in a context we understand, we know what they mean because we have learned to associate them with other signals like body language and tone of voice. We've learned this unconsciously through years of observation. When we see a gesture we don't recognize, we pay attention to those other elements, body language, tone of voice, environmental context, to get to the concept behind the gesture.
AI will have to be taught to do the same. It will have to learn what we mentioned last time, how to recognize the universal concept behind the local and subjective expression of it. Once we can figure out how to clear that hurdle, we will have really gotten somewhere with actually making AI as intelligent as it is artificial.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.
In order to take AI from a mere program that is very good within a narrow sphere and elevate it to the level of actual intelligence we need to find a way to teach it how to recognize and interpret symbols. One of the main topics of discussion in this series has been whether we should take a subjective, interpretive based approach, or an objective, universal one to training the AI as to the nature of symbols. Before we delve further down this discussion though, we should take a step back to truly recognize the enormity of the task. To do that, we need to recognize that there is disagreement over the very nature of symbols in the first place. What makes a symbol a symbol?
For the purpose of our discussion we should briefly look at the definition given by the authors of Symbolic Behavior in Artificial Intelligence, the paper that has been the basis of our discussion in this series. They draw in part on a definition given by Herbert Simon describing a symbol as "a series of interrelated physical patterns that can designate any expression whatsoever". All right. What does that mean?
At first glance, one could read it as meaning that any symbol, any "series of interrelated physical patterns," can literally represent anything. That may or may not be the way Simon intended it. If he did, we're not sure how to help him. Obviously certain patterns can only represent certain things. A statue of an elephant clearly represents an elephant and not a mouse. "But what if you call an elephant a mouse?" says the gadfly in the back. Then it represents something that someone calls a mouse. The point is, it's very clear what that statue represents, no matter what name you give the animal.
So, what else might Simon’s definition mean? A better meaning, one that makes a lot more sense is that one can use some kind of “interrelated physical pattern” to represent anything. More simply, literally everything can be symbolized. The immensely complicated and intense concept of love is symbolized with a heart. A circle is a circle wherever you go. If anyone in virtually any culture anywhere sees a small disc with a person’s head on it, they know they are looking at some kind of money.
So, what should we use as the definition of a symbol? A simple phrase would be that a symbol is anything that represents something else, whether it be a sound, an action, a thing, or a concept. Now that we have that out of the way, let’s get back to the idea of an objective interpretation of symbols.
We've already talked about the fact that the paper's authors favor an interpretation-based approach to training AI. They correctly identify that symbols get much of their meaning from the culture in which they originate. Based on this, would it be fair to criticize an objective approach as impossible, or anemic at best? Impossible? No. Anemic? Perhaps.
However, consider opera. Operas are still very often performed in Latin or German. Even if one is in your local language, the singing will often be so stylized that you may not be able to recognize anything. Yet, despite not being able to understand all the symbols being presented to you, you still pick up something. You can pick up on the tone of the music, the melodies, the pitch of the singer's voice, all of which convey meaning to the listener. In short, there are universal aspects to the symbols being presented that transcend particular cultures.
This can be done with a variety of symbols. When we see a statue, we know it represents a particular thing. From the expression in an illustration, we can tell something of the mood of the character depicted.
Because this universal element is identifiable, it seems clear that we can and should explore a universal basis to teaching an AI how to properly interpret symbols.
What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.