Tartle Best Data Marketplace
June 26, 2021

AI Artificial Intelligence Learning and Reading Human Symbols Part 6

BY: TARTLE

AI and Symbols pt 6

Here we are at the end of our extended series on AI and symbols, specifically how best to go about training an AI to recognize and interpret them. If you’ve stuck with us the whole way, all we can say is, “skol!” We’ve covered a lot of ground, but always circling around one particular question: should we train the AI on localized customs and behaviors, or strive for a more universal approach that would yield meaningful results regardless of place or time? The latter, of course, would be the obvious choice. However, it’s complicated by the fact that not all symbols are the same. Certain symbols have meanings that can be at least partially inferred; sounds, for example. Based on tone and volume, a person can infer much about the specific intent behind a sound regardless of how familiar the listener is with it. Others, like numbers and shapes, are pretty clear across cultures. Even a culture that for some reason isn’t familiar with the way most people write numbers can be acquainted with it easily enough, which would be handy if you found a lost tribe of Romans in the Alps still using Roman numerals.

Other symbols are a bit more complicated. Take street signs: anything other than a simple arrow will take a little while to decipher, and a traffic light in particular doesn’t fully translate on its own to another culture. Another example would be a flag or a religious symbol. These are much harder to understand for someone who isn’t familiar with their history and the communities they represent. They may represent universal truths, but they can’t be broken down and reduced the way a simple arrow or a number can. Indeed, not all symbols are the same.

With many simple symbols it is possible to train the AI to recognize them, since they are largely mathematical expressions. That’s easy for the machine, which runs on ones and zeroes anyway; it’s operating in its element. More complicated things like the traffic light or a stop sign can be learned too, though it will take a bit longer: the AI will have to observe how people react to them in order to discern their meaning. That is, if you want it to learn rather than simply being programmed. The more complicated ones, the symbols that at different points have inspired the best and the worst in humanity, are another matter entirely. Their meaning is inextricably linked with the cultures they represent. A symbol may well mean something definite and timeless, but you’ll never figure it out through mathematical analysis alone. You and the AI would have to study the people and the beliefs that look to those symbols. That is a much more complicated process. Yet, as complicated as it is, it’s something people do almost intuitively. Even with something unfamiliar, we can often look for similarities, elements that relate to something we already know. Of course, we might end up completely wrong, but self-correction, and awareness of the need for it, is something people are built to do. It’s practically the basis of all scientific and philosophical inquiry.
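As a rough illustration of the "simple symbols" case, here is a minimal sketch of training a classifier on handwritten digits, which reduce to consistent pixel patterns and are therefore easy for a machine to learn. The dataset and model here (scikit-learn's bundled digits set and a plain logistic regression) are illustrative choices of ours, not anything prescribed in this series, and nothing in this sketch touches the culturally loaded symbols discussed above.

# A minimal sketch: simple symbols such as digits are "just math" to a machine.
# Assumes a Python environment with scikit-learn installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = load_digits()  # 8x8 grayscale images of the digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a plain linear classifier is enough here
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))

A classifier like this learns the shapes quickly precisely because nothing about their meaning depends on culture; a flag or religious symbol could be "recognized" the same way, but its significance could not.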

So the question remains: can we train an AI to do that? Can we train an AI to understand that it might not fully understand? Can we teach it to keep looking for better answers after it has made an initial evaluation? Right now, it doesn’t look like it. The human mind is much more complex than a mere biological computer; there are processes at work that we haven’t begun to fathom. Will it one day be possible to fathom them and translate that into something a machine can work with? Possibly. But one thing is certain: no AI will be able to fully understand complex symbols until it can understand itself.

What’s your data worth? Sign up and join the TARTLE Marketplace with this link here.


Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Speaker 1 (00:07):

Welcome to TARTLE Cast with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path.

Alexander McCaig (00:24):

Well, welcome back to TARTLE Cast with myself, Alexander McCaig and Jason Rigby. We are moving through this DeepMind paper on symbols and AI because the totality of AI systems and our language and our thoughts, they bleed into this paper so much with everything that we do. And I think it's important that we continue to round this out and talk about it because of how applicable it is to everything around data.

Jason Rigby (00:56):

[crosstalk 00:00:56]. Yeah.

Alexander McCaig (00:58):

Go ahead.

Jason Rigby (00:58):

In the particular symbolic behavior, we're talking about behavior, we did the receptive and I want to get into the constructive today.

Alexander McCaig (01:07):

Yeah. Creating constructs.

Jason Rigby (01:08):

Yeah. And I'm going to read the first sentence, because I think it's so important, constructive. A second trait of symbolic behavior is the ability to form new conventions by imposing new meaning on effectively arbitrary substrates.

Alexander McCaig (01:21):

So let's... Okay. How would I... I'm trying to think of a good example.

Jason Rigby (01:28):

Well, I think the word is new conventions.

Alexander McCaig (01:30):

Yeah. I'm trying to think of a good example of a new convention. Okay. This is fine. The Soviet Union collapsed. Okay? And then you get all these small Eastern, not even Eastern, I would say these smaller Russian provinces popping up: Uzbekistan, Turkmenistan, Kazakhstan, Kyrgyzstan. But before, it was all just USSR. But then afterward everyone in the Soviet Union said, "That's no longer a thing." And then the rest of the world recognized this new convention, this political convention, that it's no longer a thing. These people are like, "We want our own independence here."

Alexander McCaig (02:05):

And so now the thoughts, the behaviors and ideas around the meaning of this thing have changed that substrate of what the actual value and representation on a map is for what you say, this is Russia. So if Russia was a symbol, if the hammer and sickle, okay, was a symbol of all these things put together, but then that symbol disappears, and now it just becomes that big old Russian star. I don't even know what the symbol of Russia is right now. Everybody now has their own convention for what that meaning is. And some people in some substrates, some subgroups may still look at Russia as this big communist beast, right?

Jason Rigby (02:47):

Or Germany still being Nazi-ism.

Alexander McCaig (02:49):

Or Germany still being Nazi-ism, right? So it's like, it has... Those sort of conventions are still there in that mindset for a certain substrate that is saying, "Well, how we define this?" But those evolve over time.

Jason Rigby (03:02):

Yeah. And they all work amongst themselves. So the constructive works with the receptive, it makes sense, whereas the previous behavioral trait, receptiveness, speaks to a system's ability to appreciate the meaning of a symbol that is imposed by someone else. This property refers to the dual, key word, dual ability to create a new symbol when doing so would be useful. And I think the Russian flag looks totally different.

Alexander McCaig (03:25):

Well, yeah. No, and that's the point here. So Ian and I had an old convention that we were all receiving. It's like a law of cause and effect to this point.

Jason Rigby (03:33):

Right.

Alexander McCaig (03:33):

Okay. We've created a new symbol, a new meaning for this. Now, are you going to receive this new construct? So you have the creator and the receiver at this point and that's what's happening here. And it's taken on that dual nature of the symbol itself. So as we create a new symbol or refine something or refine its meaning, somebody in that substrate has to receive it for that new conventional definition of what this symbol actually means or represents.

Jason Rigby (03:57):

Yeah. And I want to pull you away from that and push you into this direction.

Alexander McCaig (04:00):

Pull, push, push, pull.

Jason Rigby (04:01):

Yeah. Exactly. They said this, one substantial benefit of engaging with symbols, so here's the substantial benefit of how our behavior, because we're talking about behavior, can engage with symbols, is that one can reduce the mental burden of thinking about a complex concept by denoting it with a symbol.

Alexander McCaig (04:19):

Yeah, that's correct. So radiation, let's think about this. Radiation is an admittedly complex concept. It has a half-life. Too much of it can kill you, but you can't see it. Right? In some forms, right? Unless it's Cherenkov radiation or whatever it is, the blue one from inside of a nuclear power plant. But the idea here is that I'm going to represent a symbol for radiation, for this thing, right? It's this yellow symbol and it's got these weird rounded-out bottom pyramids with the cut top, and the three of them meet in the center, and then have yellow opposing that. I know that that's radiation. I know that there's complexity around that, that something is generating it. Right? And you don't have to know all the physics of it-

Jason Rigby (05:03):

And that symbol, everyone in the world knows that symbol.

Alexander McCaig (05:05):

But it's a warning. Right?

Jason Rigby (05:07):

And it don't matter what language you speak. If you're in Japan, that symbol is very important to them.

Alexander McCaig (05:12):

That's exactly right. Or if you think about the peace symbol, antiproliferation of nuclear weapons-

Jason Rigby (05:19):

Right.

Alexander McCaig (05:19):

... that is a complex political concept, a concept around weapons technology, mutually assured destruction, the future of humanity, all baked into one specific thing. Oh, and it's also taking Elder Futhark, which is this Scandinavian... What do they call those things? Gosh, runes.

Jason Rigby (05:41):

Yeah. Yeah, runes.

Alexander McCaig (05:42):

You take the Scandinavian runes and you're putting that together into the symbol. So I have weapons, politics, Scandinavian elements, social and emotional and political underpinnings, all these things coming together, but represented in one symbol.

Jason Rigby (05:56):

Well, you can also look at in a constructive way of how many empires have used an eagle.

Alexander McCaig (06:01):

Yeah. Think about the-

Jason Rigby (06:02):

I mean, the United States, Rome-

Alexander McCaig (06:03):

But what does it represent?

Jason Rigby (06:04):

Germany has an eagle, right?

Alexander McCaig (06:05):

Yeah. And think about the Ancient Egyptians.

Jason Rigby (06:08):

I think Russians have an eagle.

Alexander McCaig (06:09):

Yeah. Ancient Egyptians.

Jason Rigby (06:10):

Yeah, exactly. But each of those, that one symbol means many different things.

Alexander McCaig (06:16):

Right. And those conventions developed over time.

Jason Rigby (06:18):

Yeah.

Alexander McCaig (06:18):

The creator and the receiver had had sort of an interaction, but the complexity of that culture... So I would recognize what the hieroglyph might be for that eagle with the sun disk in it, right? I would know that this representation of the eagle as a symbol is one that is akin to Egyptian culture.

Jason Rigby (06:39):

Right. And so here's the problem with AI because you know, the articles on AI and this is what-

Alexander McCaig (06:43):

Yeah, let's get back into big data, please.

Jason Rigby (06:44):

Yeah, and the constructive side of things, where AI has issues. Evidence for this behavioral trait is scarce in current AI research. While much of what we do comprises creating models that engage with the conventions humans have already established, less work probes a model's capacity to construct new conventions by imposing meaning on arbitrary substrates.

Alexander McCaig (07:07):

Yeah. So the reason it lacks is because there's been so little human input to work with this model for the AI to learn from. So remember I said, in previous episodes, you need an infinite number of inputs because conventions change all the time. I said, that's a weakness of this model. Well, if you're going to continue to go down this path, then you would have to actually acquire all of this human input for all these people that have these conventions within these specific substrates to tell you how they define the specific thing.

Alexander McCaig (07:32):

So then as that evolves with people and you're pulling those inputs from them into your data model, that can help that artificial intelligence algorithm actually figure out how those conventions change over time. Does a curve normalize? Is it chaotic? How does it evolve, right? Is it in a cancerous format? Like what does that actually look like for the evolution of the thoughts around that symbol? It requires large amounts of input. And that's what they're saying is currently lacking.
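As a rough sketch of the idea above, here is one naive way to log how different groups ("substrates") interpret a symbol over time and watch the convention drift. The periods, groups, interpretations, and observations below are entirely hypothetical, made up purely for illustration; this is not how TARTLE or the DeepMind paper actually collects or models such data.

# A minimal, hypothetical sketch of tracking convention drift for one symbol.
from collections import Counter, defaultdict

# Hypothetical (period, group, interpretation) observations, e.g. survey answers
# about what a national symbol means to people in different subgroups.
observations = [
    (1991, "group_a", "communist state"),
    (1991, "group_b", "communist state"),
    (2000, "group_a", "federation"),
    (2000, "group_b", "communist state"),
    (2010, "group_a", "federation"),
    (2010, "group_b", "federation"),
]

# Tally interpretations per time period.
by_period = defaultdict(Counter)
for period, group, meaning in observations:
    by_period[period][meaning] += 1

# Print the share of each interpretation per period: the "curve" of the
# convention changing over time that a model would need to learn from.
for period in sorted(by_period):
    total = sum(by_period[period].values())
    shares = {m: round(c / total, 2) for m, c in by_period[period].items()}
    print(period, shares)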

Jason Rigby (07:56):

Yeah. And they say this, evidence for this behavioral trait might instead look like the following. So they're talking about AI here. A model is asked a question and, when explaining its answer, it invents a new symbol to streamline or clarify its reasoning process or communication. So now I'm getting excited because now the AI would create symbols...

Alexander McCaig (08:20):

So it's learned from past... So this [crosstalk 00:08:22].

Jason Rigby (08:22):

It's to simplify things. Yeah.

Alexander McCaig (08:22):

It's learned from past conventions of how they've evolved. So it's like, okay, I remember everything in my life. And so I remember how these things changed. Now, when somebody asks me a question, I can then create a symbol that is representative of a complex concept because I've received all this input from so many human beings over such a long period of time that I know how they think and transform things. So I can deliver them a symbol that they'll be like, "Oh, that makes a lot of sense."

Jason Rigby (08:49):

But don't you see that, this is how... I want to make sure I say this right. To me, this is the answer of AI bridging the gap with humans of communicating-

Alexander McCaig (09:04):

This is communication with a machine.

Jason Rigby (09:06):

And the machine... Because you remember, what was it, was it Contact or whatever that one where the aliens came in, they were doing all symbols.

Alexander McCaig (09:13):

Yeah, right here in New Mexico.

Jason Rigby (09:14):

Yeah. It was all symbols and symbols and it was changing and then a new symbol would pop up, and a new symbol would pop up. And I'm thinking-

Alexander McCaig (09:20):

It was a radio signal that [inaudible 00:09:22] a lot of symbols that ended up being a tesseract. And within that mathematical sequence they built a time machine. Great film.

Jason Rigby (09:28):

But I mean, if AI could turn around and say, okay, we've collected all this data, we've collected all this knowledge and wisdom from humanity, there's all these little complex... Because it can think greater than our mind in the sense of putting information together.

Alexander McCaig (09:42):

Not greater, quicker.

Jason Rigby (09:43):

Quicker. Yeah. Then why would it not create a language of symbols to be able to bridge our behavioral gaps?

Alexander McCaig (09:55):

It seems to me that's... I got hieroglyphs up on the screen.

Jason Rigby (09:57):

Well, I know. Yeah. That's what I'm getting at. You see what I'm saying?

Alexander McCaig (09:57):

Were the Egyptians ahead of their time? You know what I mean?

Jason Rigby (10:01):

Yeah.

Alexander McCaig (10:02):

They did build pyramids.

Jason Rigby (10:04):

Which we still can't do.

Alexander McCaig (10:05):

Yeah, and which we still can't do.

Jason Rigby (10:07):

And then they said, number two, agents cooperating on a team invent terminology so that they can communicate without exposing their strategy to their opponents.

Alexander McCaig (10:16):

Think about football.

Jason Rigby (10:17):

Yeah.

Alexander McCaig (10:18):

Or a guy, a third base coach. Is that what it is?

Jason Rigby (10:20):

Yeah.

Alexander McCaig (10:21):

And he's standing there like doing weird hand stuff.

Jason Rigby (10:23):

Yeah. Yeah. Yeah. And then you don't know what those are. Encrypting, encryption.

Alexander McCaig (10:25):

Encryption, blockchain technology.

Jason Rigby (10:26):

The U-boats. Yeah.

Alexander McCaig (10:30):

Yeah.

Jason Rigby (10:30):

This is number three, a model constructs an algebra... An algebra? Would that be how you would say it?

Alexander McCaig (10:32):

An algebra?

Jason Rigby (10:37):

Yeah. An algebra, and uses it to prove something meaningful.

Alexander McCaig (10:42):

That's a ridiculous sentence.

Jason Rigby (10:43):

Yeah. Yeah.

Alexander McCaig (10:44):

A model does what? Say that again.

Jason Rigby (10:45):

A model constructs an algebra and uses it to prove something meaningful. We withhold knowledge of a particular mathematical concept and its associated symbol and see whether a model proposes a way to solve a problem that invokes the concept and symbol. So I see what they're doing. They're just testing it. They're just testing it to say, hey, will it come up with the same conclusion that we've already come up with?

Alexander McCaig (11:05):

That's all they want to know.

Jason Rigby (11:08):

Yeah. That's a simple... But see, here's the craziness of it. So now we're going to throw in a mathematical concept, and machine learning loves math. And here's where I want to get with you. You're reducing the behavior of humans down to an equation.

Alexander McCaig (11:26):

How can you say that we're that linear?

Jason Rigby (11:29):

Do you see what I'm saying? The behavior... I was all on board with all of this. I love the idea of the language with the symbols. I love the idea of understanding reasoning processes. I love the idea of us exposing the strategy of the machine by testing it.

Alexander McCaig (11:46):

Remember how I said it's too complex? And they're saying, "Well, if it's going to be so complex, we need to put it into a mathematical algorithm."

Jason Rigby (11:51):

We always do this with everything.

Alexander McCaig (11:53):

Don't put me in a box to define who I am as a human being. And don't say that this math equation defines who I am and what I'm going to do next.

Jason Rigby (12:01):

Well, here's the problem with it. Okay. So now we already know the outcome. We've judged the outcome. So now we're going to teach the machine so that we know if it's right or not.

Alexander McCaig (12:14):

I'm going to impose my judgment.

Jason Rigby (12:16):

Yeah, on the machine.

Alexander McCaig (12:17):

Am I good at judging people?

Jason Rigby (12:19):

I think we're kind of fucked up in that, Alex. Look at the world right now.

Alexander McCaig (12:23):

Oh, I know. That's all it is. It's a system-

Jason Rigby (12:24):

So why would I want to create something that can just think more fucked up [inaudible 00:12:29].

Alexander McCaig (12:29):

Then it's like your algebra sentence they've [crosstalk 00:12:32].

Jason Rigby (12:32):

Yeah. Yeah. Yeah. Yeah.

Alexander McCaig (12:33):

But that's what it is. It's these consistent concepts of judgment.

Jason Rigby (12:37):

Yes. And we want it to think faster, judging faster.

Alexander McCaig (12:40):

The AI does not live in a world of self-definition.

Jason Rigby (12:43):

No.

Alexander McCaig (12:44):

It's only judgment, depending on the algorithm of the human being that wrote the algorithm of judgment.

Jason Rigby (12:49):

Until they develop consciousness, which I think they will be able to, but until they develop consciousness, they don't have the ability to be self-aware and you can't create self-awareness through a mathematical formula.

Alexander McCaig (13:01):

No, it won't happen. And if you just look at it, that's why, when we talk about these symbols, a symbol needs to stand alone. It cannot be judged or have a change in conventional value; otherwise it's not truly something that is resonant and representative, that means the same thing everywhere we are and all the time.

Jason Rigby (13:24):

Well, here's how I would put it. The symbol's at the top of the pyramid, the symbol-

Alexander McCaig (13:29):

I like this pyramid thing.

Jason Rigby (13:32):

And then from there, now we're going to create layers, depending on the culture, depending on the substrate, depending on the people, depending on this. Now that one symbol that's the truth, now it has all these different types of meanings that correlate to it.

Alexander McCaig (13:44):

Like distortions.

Jason Rigby (13:44):

Yes. The distortions, yes. That correlate to it. But the symbol does not change its meaning based off of a mathematical formula, which is like... Because I see what they're saying. Well, if 80% of the people believe this way, then this must really mean what the symbol means. Why can't the symbol stand alone as its own? We're not going to change the ox on there based off of our limited interpretation of what an ox is because an ox in India is totally different than an ox to us. An ox to us is a nice steak. In India, it's an animal to be worshiped.

Alexander McCaig (14:17):

Yeah, exactly.

Jason Rigby (14:18):

So that symbol means lots of different... I'm just picking [crosstalk 00:14:21].

Alexander McCaig (14:21):

Well, you know what that tells me? That tells me that it's not a truthful representation of what the symbol is.

Jason Rigby (14:24):

Yes.

Alexander McCaig (14:27):

If it's not standing alone and it means different things in different places, that's not really what it is.

Jason Rigby (14:31):

Yes.

Alexander McCaig (14:31):

That couldn't be its definition then.

Jason Rigby (14:33):

Yes. Yes.

Alexander McCaig (14:34):

That's the problem with this whole thing. And then to say that, listen one plus two equals three somewhere-

Jason Rigby (14:40):

Right.

Alexander McCaig (14:40):

Okay? And then you can go to another place and they'll say, "Okay, well, what about like negative one plus four?"

Jason Rigby (14:46):

Well, in different dimension it has different... I mean, mathematicians are doing that.

Alexander McCaig (14:49):

Yeah. And then negative one plus four equals three. But it's a different flavor of how we got to three.

Jason Rigby (14:53):

Yeah.

Alexander McCaig (14:54):

But in the end it's only three.

Jason Rigby (14:55):

Yes.

Alexander McCaig (14:55):

If I show you the number three, does that give you an emotional charge?

Jason Rigby (14:59):

Yeah, I mean-

Alexander McCaig (15:00):

No, we all just know-

Jason Rigby (15:02):

Right. Right. Right.

Alexander McCaig (15:02):

... that the number three is the number three.

Jason Rigby (15:04):

I always want to make it an infinity though if it's a three.

Alexander McCaig (15:06):

No, I know.

Jason Rigby (15:06):

I always want to put it together, two threes.

Alexander McCaig (15:08):

Yeah. You know, the eight.

Jason Rigby (15:10):

Yeah.

Alexander McCaig (15:10):

But that's the thing though. That's the cool part about that part of math. When I look at numbers, they can stand alone.

Jason Rigby (15:17):

Right.

Alexander McCaig (15:17):

They just are what they are.

Jason Rigby (15:19):

But that's what I'm saying. The symbol is the symbol period. That's a hieroglyphic. That's what it is period.

Alexander McCaig (15:30):

A human being is a human being period.

Jason Rigby (15:32):

How I associate with that symbol in the thought of that symbol is what creates my behavior.

Alexander McCaig (15:38):

That's correct. That's right. So it can stand alone. But how do me, myself choose to then go interact with that stand alone thing?

Jason Rigby (15:48):

And that's what I'm saying. So if you taught AI to say, hey, this is the symbol. This is it. You can associate with an ox or whatever you want to associate it with, but this is a symbol. But the truth of the symbol is not based on humanity's thought of the symbol.

Alexander McCaig (16:02):

No.

Jason Rigby (16:02):

The symbol stands alone.

Alexander McCaig (16:03):

The symbol stand alone.

Jason Rigby (16:04):

Just like three stands alone.

Alexander McCaig (16:05):

So don't try and define the symbol. Let it be its own thing. Let humanity decide for itself how it wants to go look at it, but keep them separate. They are standalone things. We're trying to put these two things together. [crosstalk 00:16:18].

Jason Rigby (16:18):

But you're not going to...

Alexander McCaig (16:23):

People have no time to [crosstalk 00:16:23].

Jason Rigby (16:23):

A symbol is not a problem. It's not a mathematical problem.

Alexander McCaig (16:25):

No.

Jason Rigby (16:25):

It's symbol's not a mathematical problem. A symbol's a symbol.

Alexander McCaig (16:27):

A person is not a mathematical problem.

Jason Rigby (16:28):

Yes. [crosstalk 00:16:29]. Oh yeah, don't tell Mark [crosstalk 00:16:31].

Alexander McCaig (16:31):

It just is what it is. No, but the thing is a human being is a human being. But then when we start to judge or say that it means one thing somewhere else, or because someone's a different skin color that it has a different sort of value or substrate to it, how do you know? How do you know that's the truth?

Jason Rigby (16:44):

Well, in India, is that wrong? Based on me being in the United States in Albuquerque, New Mexico, is that ox symbol how they believe that symbol? Are they wrong?

Alexander McCaig (16:53):

Yeah. Are they wrong?

Jason Rigby (16:54):

Well, 80% of the world eats meat, but not India.

Alexander McCaig (16:56):

Yeah. And if I'm [crosstalk 00:16:57].

Jason Rigby (16:57):

So they're wrong and we're right according to math.

Alexander McCaig (17:00):

Yeah. The math then says, oh, if over 51% says this, then that must be what it is. Paradox. Wrong. False. Not true, nothing about it. This whole thing is so freaking contradictory. Just really refine your thoughts. Years ago, these people here writing this paper, have you even figured out who you are? Do you know how you represent something to yourself in your own thoughts, even through your own data? Have you even looked at your own data?

Jason Rigby (17:27):

Do you remember Hunger Games, the movies?

Alexander McCaig (17:30):

Yeah, of course.

Jason Rigby (17:30):

And then the whole series. And then each little section had its own symbol.

Alexander McCaig (17:35):

Yeah.

Jason Rigby (17:36):

And then when she raised the symbol, it's like the war, raising the flag, Iwo Jima or whatever it might be.

Alexander McCaig (17:42):

Iwo Jima, yeah. Right.

Jason Rigby (17:45):

In times of... See, this is a beautiful part. We see humanity progressing and evolving through symbols and that's how important they are. And I love that AI, I love that people like this, brilliant, I mean, these people are way smarter than I am, are looking at symbols and they're actually writing a paper on symbols because a lot of people don't realize how important symbols are.

Alexander McCaig (18:10):

No.

Jason Rigby (18:10):

But when we look at our evolution, symbols go right along with our evolution.

Alexander McCaig (18:15):

They always have been.

Jason Rigby (18:15):

But then-

Alexander McCaig (18:16):

Yeah, from then, no.

Jason Rigby (18:16):

Whether we devolved from there, I don't know.

Alexander McCaig (18:18):

Or we brought it into big data on a computer. You know what I mean?

Jason Rigby (18:20):

Yes. Yes.

Alexander McCaig (18:21):

It's always been there with us. And the question is, are you willing to look at that symbol and have some sort of experience with it? Are you willing to look at your data, look at yourself, have an experience with it?

Jason Rigby (18:32):

So how could the AI have an experience with a symbol?

Alexander McCaig (18:35):

The AI can have an experience with the symbol when the AI has an experience with itself. The AI will never be able to define what a symbol is until the AI can truly know the experience and how to define itself. We need to figure out what really defines us before we start saying what this thing is that stands alone essentially means just through some sort of idea that we have.

Jason Rigby (18:57):

And if we want to find out more about ourselves and take control of our free will with data, how do we do that?

Alexander McCaig (19:06):

You go to tartle.co, T-A-R-T-L-E.co, and you start to collect all of those things about you. Take that introspective look and find that value within it and evolve.

Speaker 1 (19:25):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?