September 24, 2021

Operations of Organizations and Our Communities With Special Guest and Systems Thinker, Christian Lemp Part 2

Operations of Organizations and Our Communities
BY: TARTLE

If the first part of their discussion explored the parallels between social systems and AI technology, this second half provides insight into how Christian's work draws inspiration from an unlikely source: the natural world and the animal kingdom.

From there, he touches briefly upon the responsibility of modern tech professionals to be aware of the social implications of their work and operations, providing words of encouragement to listeners of the podcast within the industry.

Drawing Inspiration From the Natural World

Ants that find food lay down a trail of pheromones on their way back to the colony. That trail leaves a road for other ants to follow, which leads to the collective outcome of being able to feed everyone in the community. Similarly, honey bees coordinate with other bees to maintain their hive and protect the queen.

These are examples of biological systems that are naturally capable of self-regulating. So where is our capacity to solve problems the same way on a larger scale, in businesses and societies?
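To make the ant example concrete, here is a minimal agent-based sketch of a pheromone trail. It is an illustrative toy, not a model from the episode: the corridor length, evaporation rate, deposit amount, and bias cap are all assumed values chosen just to show the mechanism.

```python
import random

WIDTH = 20      # 1-D corridor: nest at cell 0, food at the last cell
DECAY = 0.95    # pheromone evaporates a little each tick
DEPOSIT = 1.0   # scent laid down along the route by a successful ant

pheromone = [0.0] * WIDTH

def walk_to_food():
    """One ant wanders toward the food, biased by the scent just ahead of it."""
    pos, steps = 0, 0
    while pos < WIDTH - 1:
        bias = 0.5 + min(pheromone[pos + 1], 0.4)  # stronger scent, stronger pull
        if random.random() < bias:
            pos += 1
        elif pos > 0:
            pos -= 1
        steps += 1
    return steps

def reinforce_trail():
    """A returning ant marks the whole corridor, strengthening the route."""
    for i in range(WIDTH):
        pheromone[i] += DEPOSIT

for tick in range(201):
    steps = walk_to_food()
    reinforce_trail()
    pheromone = [p * DECAY for p in pheromone]  # evaporation
    if tick % 50 == 0:
        print(f"tick {tick:3d}: reached food in {steps} steps")
```

Early trips are long random walks; later trips get short, even though no individual ant ever plans the route. That is the self-regulation the hosts are pointing at: simple local rules plus feedback produce a coherent collective outcome.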

Here, Christian discussed the possibility that our efforts are limited because we approach problem solving with a two-dimensional mindset, when in reality we should be looking at the scenario in three dimensions. For example, you may be able to see, hear, and touch a forest, but you won't be able to see what happens underneath the soil.

There is a call for us to “move away from the two-dimensional, polarizing world that sticks us in buckets and says, this thing is this or that, but there can't be a flexibility or the nuances of an entity in between that can actually move throughout dimensions.”

But is it possible to run multinational corporations and governments as efficiently as beehives without taking away an individual’s creative capacity, while ensuring that the system remains flexible enough to meet challenges brought about by outside forces?

Earlier Approaches to Organizational Systems

Modern organizations find themselves adapting to a strange new status quo: one where management must deal with remote employees and asynchronous work. It's a symptom of structural decentralization, where command and control have become less concentrated in a hierarchy.

The ability to make collective decisions while operating asynchronously is therefore a sign that a business has a strong internal culture, one that naturally reinforces good decision-making despite differences in time zones and in flows of information.

Prior to this, most organizations preferred an authoritarian approach to systems management, where the leader is responsible for planning out the entire route from start to finish and people are expected to follow. It works in instances where the leader has a clear vision and knows what needs to be done to achieve it across multiple levels. However, not many people enjoy working in an environment where they are only ever expected to follow someone else's vision. There is little to no room to foster genuine creativity at the micro level, on the ground.

Organizations also try the consensus approach, where everyone convenes to find a solution that pleases everyone. While it's a more democratic method, the process is slow and the end goal remains restrictive for the people on the ground.

Is It Time to Relax Our Approach?

Could a more relaxed approach to implementing a system be in order? Christian muses over a world where companies focus on establishing a strong organizational culture. Everyone hired would understand and align with the company's vision and mission, and would naturally work towards solutions both collectively and individually.

This alternative gives more flexibility to individuals and small teams when a new challenge arises. While people still need to attend meetings and management will continue to make room for mistakes, this approach gives people the opportunity to proactively think of how they can use their talents towards their goals instead of wedging them into a box—or turning them into drones. 

Diffusing even a small element of the decision-making process throughout the organization injects a diverse array of perspectives and skill sets. Upper management shouldn't carry the entire burden of thinking outside the box.

Remaining Ethical in Positions of Leadership and in Tech

Christian briefly discussed the responsibility of leaders to build diverse teams, especially in the tech industry and in artificial intelligence development. He drew on his personal experience working in the insurance domain to illustrate his point.

In this case, the company was using AI to scan aerial images and assess the value of a home, checking whether it would fit within their risk profile. However, they found that the AI system automatically excluded homes with a chain-link fence. Had this algorithm made it to market, it would not have underwritten any home with a chain-link fence—a common fixture in poor neighborhoods.

This would have created a bias against the people who need insurance the most, an unintended outcome of trying to solve a simple problem with AI without the added layer of human intervention. As much as possible, the teams behind AI development need to come from a wide array of backgrounds so that the creation of new technologies incorporates as many perspectives as possible.
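A simple disparity check is one way this kind of problem can be surfaced before launch. The sketch below is hypothetical: the `fence_type` feature, the toy model, and the sample data are stand-ins invented for illustration, not the insurer's actual system.

```python
from collections import defaultdict

def audit_approval_rates(records, predict):
    """Group the model's decisions by a candidate proxy feature and compare rates."""
    groups = defaultdict(list)
    for record in records:
        groups[record["fence_type"]].append(predict(record))
    for value, decisions in groups.items():
        rate = sum(decisions) / len(decisions)
        print(f"{value:>10}: approved {rate:.0%} of {len(decisions)} homes")

def biased_model(record):
    # Toy stand-in that reproduces the failure mode described above:
    # it silently learned that chain-link fences fall outside the risk profile.
    return record["fence_type"] != "chain-link"

sample = (
    [{"fence_type": "chain-link"}] * 40
    + [{"fence_type": "wood"}] * 35
    + [{"fence_type": "stone"}] * 25
)
audit_approval_rates(sample, biased_model)
```

A 0% approval rate for one group is the kind of red flag that tells a team the model has found a proxy for neighborhood wealth rather than property risk, which is exactly where diverse reviewers and human intervention earn their keep.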

Closing Remarks: To the Tech Professionals of the Future

Christian encourages professionals employed in data science, analytics, and technology to internalize the weight of their responsibility: their capacity to change the market and directly affect people through products and services. 

“People in positions of decision power, who are practitioners and implementing, have a responsibility to optimize for the right thing, and really be humble and understanding. And that's just something that leaders have to do,” he explained.

He also revealed that what stood out for him the most from TARTLE was the ability to “have a bottomless approach to data collection and ownership.”

TARTLE is our step towards a reality where people have better control over their own data. Currently, our personal information works for the benefit of the wealthiest people and the most powerful organizations in the world. The idea of getting paid for your Facebook account, Instagram posts, and Twitter feed may sound far-fetched, yet that data is exactly what makes money for these platforms. The TARTLE marketplace is our work towards inverting this model and returning power to where it truly belongs: the people.

What’s your data worth? Sign up for the TARTLE Marketplace through this link here.

Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Alexander McCaig (00:07):

We're back for part two with Christian here, and dealing with systems, simple, complex, pretty much anything that interacts with itself on this planet. That's what we want to dive into, philosophical and tactical approach. So do you want to carry on with your train of thought about these real-world examples?

Christian Lemp (00:24):

Yeah. So we were talking about real-world examples of cultural or social change from small groups of people, that there's a sort of tipping point, and how there's not always good outcomes. And I can think of countries that have gone from one course, Venezuela, for example. I'm not an expert in all geopolitics, but I know that when I was in high school and I was going to study abroad, we had students going there and, at the time, it was one of the most prosperous and stable countries to go to. And then I look at what's happening now, and that has, for reasons beyond my expertise, changed, right? And so what we're trying to say is, what is, like, the critical mass of people to have a positive change and to be able to change culture, to have better, deeper, longer term, more equitable values? Yeah.

Alexander McCaig (01:33):

Because we have seven and a half billion.

Christian Lemp (01:35):

Mm-hmm (affirmative).

Alexander McCaig (01:36):

If you've seen in systems, is there a threshold? Because people think of herd immunity, they're always like, "80%, if we can get 80% vaccinated, we're good." Does that system sort of have the same application? Is there a percentage threshold for a system that you've seen, before one entity within the organization actually starts to well off and then just really control, be the biggest cog in the wheel?

Christian Lemp (02:03):

I go back to, it depends on the model.

Alexander McCaig (02:05):

Okay.

Christian Lemp (02:05):

It depends on the model and what did we construct? Right? So what is changing? And what is numerically that threshold? What are we measuring? Right? We constructed some set of variables to measure.

Jason Rigby (02:22):

Well, let's use de-centralization. I think that'd be a great example, and what's happening right now. And if you want to speak to that, maybe our financial systems and stuff like that.

Christian Lemp (02:31):

So with decentralization, do you mean of like financial de-centralization or just kind of [inaudible 00:02:37]

Jason Rigby (02:37):

Well, I mean, I think you're seeing it. I think your brother's a perfect example of de-centralization at play and then COVID was a huge catalyst for de-centralization.

Alexander McCaig (02:47):

Cryptocurrency de-centralize.

Jason Rigby (02:49):

Yeah, I mean, I'm talking about old systems. I'm seeing that they're fragile now because of decentralization.

Christian Lemp (02:55):

So here's one. So one interest that I have in my research is, one of the observations with the traditional organization is less hierarchical in terms of control and command. And there's like a couple of ways that decentralization is informally playing out, I think in organizations. One, there's geographic, right? So I'm a remote employee,

Alexander McCaig (03:20):

Actual physical positioning.

Christian Lemp (03:21):

Physical position, right? But the effect of that too is decisions are being made at different times-

Alexander McCaig (03:31):

They're asynchronous?

Christian Lemp (03:31):

... and basically who's interacting. Yeah, we're asynchronous. And so we have kind of a different set of information now being used to make collective decisions within organizations. And I think a good example is when companies are able to do this well. They're able to actually operate asynchronously because, to me, that means they have a pretty robust, like, flow of information and they have a pretty strong internal culture to guide those decisions, versus where everyone has to get into a room and it has to be consensus with everyone before we can move forward. I think that's unhealthy and it's actually too slow.

Alexander McCaig (04:11):

Or you could say, fuck everybody, else's morals. Everyone's going to run by mine.

Christian Lemp (04:14):

Right.

Alexander McCaig (04:15):

And then from that, that's how the decisions are made. Right.

Christian Lemp (04:17):

Right. Or the other possibility there is we all have to be on the same page before we can move forward. And then, what that does is it actually extends the time to consensus, extends the time to be able to move forward.

Alexander McCaig (04:32):

Well, that's the democratic approach, which has naturally built to be slow.

Christian Lemp (04:36):

Right.

Alexander McCaig (04:36):

So what you're stating is, let's take the beehive approach, where we all know what the mission is. Protect the queen and honey.

Christian Lemp (04:44):

Right.

Alexander McCaig (04:44):

Right? Like, that's the end goal. So I don't care what flower you go to, what time of day you do it, we just know at the end of the day, this is where we want to be. And the bee moral, whatever it might be, will guide us to that processing. And we can then act efficiently towards having that end result.

Christian Lemp (05:07):

And so, this is where I think leaders really benefit from developing an understanding of complex systems in setting that goal. What's the bee and honey of our group?

Alexander McCaig (05:18):

Oh yeah.

Christian Lemp (05:19):

Right?

Alexander McCaig (05:19):

I like this.

Christian Lemp (05:20):

What's the bee and honey? Or what's the queen and honey? Because if you're trusting everybody to be operating quickly and efficiently and making independent, smaller decisions, then you got to make sure it's real clear what's the outcome we're going after, and make sure that that outcome doesn't come about through a whole lot of really divergent negative paths along the way.

Alexander McCaig (05:47):

"Oh, we made a lot of money. What happened?" Oh, I had to kill 30,000 people." "Oh, excellent." That's not what we want to do to get to our end goal.

Christian Lemp (05:55):

And the opioid crisis here in America, right?

Alexander McCaig (05:58):

Correct. And so, let's continue with the bee example, and I know Jason wanted to talk about this. Do systems thinkers or yourself look to nature for the designing and the testing and retesting of systems or models that you come up with?

Christian Lemp (06:19):

Yeah. Yes. Biology and physics are some of the sort of, I would say, foundational scientific domains from which many models can then be applied to other problems.

Alexander McCaig (06:38):

How so?

Christian Lemp (06:39):

This is one of the things that I find makes the field of complex systems so fascinating: it's, like, interdisciplinary by nature. And kind of embedded within the field is taking a model that works in one domain, constructing variables that kind of interact in the same way in a different domain, and then running that model and seeing what happens, right? So-

Alexander McCaig (07:02):

So, use something very simple?

Christian Lemp (07:05):

... so one that's classic is, ants and leaving the pheromones to go and... Right. So-

Alexander McCaig (07:13):

I know where to go.

Christian Lemp (07:15):

... so ants will leave a trail of pheromones to the food that they find and then they go back. And that indicates to the other ants where to go to find the food, right? And so they kind of help each other through this interesting system, leaving pheromones. And they actually have sort of this collective outcome of feeding themselves, right? Bees are another, how honey bees coordinate is another one. And then just in our own bodies, right? So a biological system has regulation, right? To control the sort of outcomes, right? That can happen, right? That's in DNA, genetics, all of this stuff can take this, sort of either from a philosophical angle, or actually look at the model and apply variables and the interactions, define them, and then run the model on a different domain and see what the outcomes are.

Jason Rigby (08:15):

Because I think looking at nature and what is, compared to what I'm viewing it is, with my bias. Because a philosophical approach, even though it could work, now you're creating a bias. Whereas if you're looking at nature and seeing a system that's in place, that's been in place for a long time, that's what it is.

Alexander McCaig (08:32):

Yeah. And nature is very respectful of life. Nature doesn't want to kill life, because it is life. So wouldn't that be the one thing you'd want to champion and follow after? I mean, it's what designed you.

Christian Lemp (08:43):

I think humanity would benefit a lot from getting back to nature.

Alexander McCaig (08:47):

Yeah.

Jason Rigby (08:47):

But I mean, even the Bayesian, we had a whole podcast on that, like that can be false.

Alexander McCaig (08:53):

Well, yeah. And I'll give an easy example. They talk about the coin toss as an example, heads or tails. What happens in the event you flip the coin and it lands on its side? The model fricking blows up. Then you're like, "What happened to my reality? I thought it was just binary. This side or this side."

Christian Lemp (09:11):

Throw out the data point?

Alexander McCaig (09:12):

Yeah. And then they're like, "Wait a minute. There's something in between both these data points. Like, what am I supposed to do?"

Christian Lemp (09:17):

And I find that awfully interesting, I was looking to nature to solve a specific problem, right? If I have a lattice structure, think about, you know the thing people grow roses on? And I want to search efficiently across that. That's a two dimensional plane, right? And so I'm like, "Okay. What's the most efficient way to look for certain things first, where do I start to look in a very large lattice?" And I was like, "How does nature do it?" And I haven't found the solve yet, but this is how I've been sort of thinking of things.

Christian Lemp (09:49):

I think about penguins. So a mother has her baby penguins, and they blast off for a bit, okay? And when they come back to return to the mother, after they've done their first round of feeding with the collective group, they come back in and there's millions of penguins on this one island. How do they know where to search efficiently first? They always do. There's a specific call across this, essentially, three-dimensional lattice structure of where they should search, and they can direct themselves efficiently there. But we have problems with our current systems approach and analysis, where it's like, we try to take a three-dimensional world, three-dimensional problems, and force them into two-dimensional analysis. It's like up or down, right? But there's so many vectors that cause specific behaviors, even within nature, to do specific things, even across boundaries that you wouldn't understand.

Christian Lemp (10:42):

For instance, if I want to analyze the forest, Jason talked about Bioneers off-air. Well, if I walked into the forest and the forest floor was radiated from old nuclear testing or something that may be going on, I can't see that, right? And I can't see what's going on underneath the soil. So I'm really viewing this as a two-dimensional, flat area and an interaction of all these points, these plants that sit on two dimensions. But I never considered the systems that were happening in the dimension underneath the one that was occurring right in front of me, and I didn't know how to look there. So how do we continue to look to nature and take this three-dimensional approach of solving, and move away from the two-dimensional, polarizing world that sticks us in buckets and says, this thing is this or that, but there can't be a flexibility or the nuances of an entity in between that can actually move throughout dimensions.

Alexander McCaig (11:36):

I think that comes down to creative problem solving. Conceptual blockbusting, interacting with diverse perspectives, listening to what others are coming with. That's what it comes down to. Basically, diversity. Diversity of thought, an open mind, and creative problem solving are how to sort through these really complicated, complex problems.

Christian Lemp (12:03):

So-

Jason Rigby (12:03):

Do you have any like a real-world example of something that you've maybe you've worked on in the past or something that you've researched that you saw a breakthrough because of this?

Christian Lemp (12:16):

Let me think.

Alexander McCaig (12:22):

I just had so many, I can't really pinpoint one.

Jason Rigby (12:23):

Well, I mean, we had the guest earlier was talking about-

Alexander McCaig (12:26):

David Weinberger?

Jason Rigby (12:27):

... about AI picking out people of color with facial recognition.

Alexander McCaig (12:32):

Racial bias.

Jason Rigby (12:33):

But then they brought in people of color and then these data scientists began to ask them questions, and began to understand their culture. And then they began to make changes in the systems from that.

Alexander McCaig (12:45):

Because they were open now to a diversity of thought.

Christian Lemp (12:49):

Yeah. So I remember I worked on one problem that, I think, is more of just seeing bias and then being open to observing and recognizing that it was there.

Alexander McCaig (13:06):

Let's talk about it.

Christian Lemp (13:08):

So there was a problem I worked on a while ago now, but it kind of stuck with me. This was in the insurance domain. And what was going on there was, they were using aerial images to try and assess the value of a home and whether or not it would fit within their risk profile, basically. And what they found was, they just kind of had an AI system go, they trained it. And none of the homes with a chain link fence fit within the risk profile. And had the algorithm gone to market (it didn't go to market), it wouldn't have underwritten anybody's home with a chain link fence.

Alexander McCaig (14:00):

What if I want to have this big, beautiful, expensive mansion, but I'm an idiot and throw a chain link around it and not a stonewall?

Jason Rigby (14:06):

Well, in Florida, you have a lot of chain link fence.

Alexander McCaig (14:07):

All the time. Yeah.

Jason Rigby (14:08):

I've seen those on very nice houses.

Christian Lemp (14:10):

Right. Well, the effect of this was, when they dug in and had kind of an open mind to think through the problem: if you go to an inner city, what neighborhoods usually have a chain link fence?

Jason Rigby (14:23):

Yes.

Alexander McCaig (14:23):

I mean, I lived in Philly, everything had a chain link fence there. Actually, the poorest parts of Philadelphia had more chain link than anything else.

Christian Lemp (14:31):

Yeah. So the effect of this algorithm, which didn't go to market is, poor neighborhoods wouldn't have been able to be underwritten. So imagine-

Alexander McCaig (14:41):

So this is the chain-link bias.

Christian Lemp (14:42):

... so then go ahead and imagine if that's the neighborhood where you can afford a home. And so you can't get insurance on your home, so you can't actually buy your home, so then you're stuck renting, and then you're economically affected, because buying a home is an avenue for wealth creation in America. And so you start to think about, what's the downstream effect sometimes of biases that can occur through this: optimize for one outcome only, and don't understand the why. Luckily this never got implemented, but it stuck with me because it was such an interesting example of an unintended outcome when trying to solve a pretty simple, basic problem with AI.

Alexander McCaig (15:32):

That's actually fabulous.

Jason Rigby (15:33):

Yeah. I love that. What about, let's stay on this role of bias.

Alexander McCaig (15:36):

Yeah. So if we consider that bias then, is there a check that you use in your approach after learning this, after seeing this and experiencing this firsthand, this chain link bias? That's what we're going to call it. The CBB or CLB. Sorry, I don't know what I'm talking about. And you talk about the downstream effects. How do you go back, then, before implementing models now, to look at what upstream could possibly create those downstream errors or biases? Do you have a known threshold? Do you have a process you take now to make sure that that chain-link issue does not happen again?

Christian Lemp (16:23):

So one is, doing your best to have a diverse team responsible for the AI solution. That's the first. Which is a problem, because often it's usually affluent white males who are over rep-

Alexander McCaig (16:43):

Could afford expensive degrees. All of that stuff.

Christian Lemp (16:45):

Exactly. It's overrepresented, but I think it's the responsibility of leaders to build diverse teams. Not just to say they have one, but the outcomes are better, it's better for everybody involved. And I also think that anybody in analytics or data science who's in a practitioner setting, especially who's going to market and affecting people directly through products and services, has a deep responsibility to understand these things and think about them, right? And this goes back to this theme of the conversation of what are we optimizing for? People in positions of decision power, who are practitioners and implementing, have a responsibility, I think, to optimize for the right thing and really be humble and understand the why that got there. And that's just something that leaders have to do.

Alexander McCaig (17:50):

I'm just like, "He's like a wall against David, from earlier."

Jason Rigby (17:56):

Well, there's also, and I know you could speak to this because you brought this up earlier, there's this whole IOR, or ROI base.

Alexander McCaig (18:03):

You are so dyslexic.

Jason Rigby (18:06):

Are there shareholder value?

Alexander McCaig (18:08):

Yeah.

Christian Lemp (18:08):

Yeah.

Jason Rigby (18:09):

So what about external pressure put upon teams?

Alexander McCaig (18:15):

Have you felt or experienced that?

Christian Lemp (18:16):

External pressure? Oh, absolutely. I mean, I work in public companies and there's always pressure in a public company to produce results quarterly.

Jason Rigby (18:28):

How about an outcome? Like, have you ever had that, where they push you to a certain outcome?

Christian Lemp (18:33):

Oh, pressed for a certain outcome. Yeah. Not recently. I would say, for probably the past six years of my career, I wanted to make sure I wasn't in the kind of setting where I was going to be pressured to compromise ethics in order to get results. Early in my career, I was in situations to please a client. And it's like, "Just make the numbers work."

Alexander McCaig (19:02):

Yeah.

Christian Lemp (19:02):

And I know that didn't sit right with me and I didn't want to ever do that. So-

Alexander McCaig (19:08):

So, that's why I left private equity.

Christian Lemp (19:10):

Because of that kind of pressure?

Alexander McCaig (19:11):

Because I'm actually in there, I'm like, "I can do some good. I can help people." And they're like, "No, make the numbers look this way." And then they send that to a bank? And then the bank underwrites that? I'm like, "This is not-

Christian Lemp (19:24):

But the bank knows.

Alexander McCaig (19:25):

And the bank knows too, right?

Jason Rigby (19:27):

Yeah. You're all in it.

Alexander McCaig (19:28):

Yeah, it's bad.

Christian Lemp (19:29):

And everybody's incentive system relies on these numbers on the spreadsheet to look a certain way so that they get their payday in that system, right? So to answer your question, "Do I feel pressured?" No. I actually seek out organizations where ethics is valued.

Jason Rigby (19:49):

What would you say to a data scientist, or maybe a chief technology officer, or something like that, who is feeling pressure? What are some of the things that you've done? Because you seem to be very self-aware. So what are some of the things that you've done in your thinking process, especially with systems, to make sure that you're staying ethical?

Christian Lemp (20:10):

At the end of the day, who do you want to be? I mean,

Alexander McCaig (20:12):

Fucken mic drop, dude.

Christian Lemp (20:15):

At the end of the day, who do you want to be? It's-

Alexander McCaig (20:17):

That's a mic drop.

Christian Lemp (20:20):

And it's hard to say, because I think there's a lot of incentive and pressure to, again, optimize for yourself. There's a lot of cultural messaging around becoming obscenely wealthy, and that by pursuing your own selfish interests, that's actually what you should be doing. Because again, there's messaging that at the end of the day, when we aggregate up, it's better for everybody through a whole bunch of interactions. So for people who do feel that pressure and don't want to, there are so many ways to make money if you're in data and analytics today that the right fit is out there, and you don't have to compromise your values.

Alexander McCaig (21:18):

I think that's incredible.

Jason Rigby (21:19):

I think where you guys are at is where we were at probably 10 years ago on the digital marketing side. On the digital marketing side, no one knew what was going on. And so you had the beginning of Facebook Ads, you had Google Ads kind of maturing. And so you could literally bullshit your way. They would just hire you, pay you big money, and then you could throw an ad together and it could be nothing, but it would work because there wasn't a lot of competition on the systems. So you created, and I don't know if it's the same situation, I mean, I know it requires a certain amount of education and stuff like that, but that report that's generated and given to someone, do you think the bias can't come from the AI as much as it's coming from a human with pressure?

Alexander McCaig (22:03):

Well, because the AI doesn't inherently-

Jason Rigby (22:06):

It's agnostic.

Alexander McCaig (22:06):

Exactly. It's agnostic, it has no bias.

Christian Lemp (22:07):

It has no opinion.

Alexander McCaig (22:07):

It has no opinion. It doesn't give a shit. If you tell it, "Make the Earth climate stable." It would be like, "Okay, kill everybody." It doesn't care.

Jason Rigby (22:16):

It's Marvel Thanos.

Alexander McCaig (22:18):

Yeah. It's-

Jason Rigby (22:18):

He wants to complete AI.

Alexander McCaig (22:19):

Yeah. He was all about it. And they also used that same approach with that other dude who was trying to blow up [Terazekistan 00:22:24] or whatever it was, right? So the way I observed this system is that the AI doesn't have the bias, we just give it our bias.

Christian Lemp (22:36):

Right. Through our construction of reality, our incentives, through our, go down the list. But we create it.

Jason Rigby (22:45):

What are some of the, if you don't mind, because you brought up the Santa Fe Institute, and I've been hearing a lot about that lately. And I know there's some controversial, I don't know if you want to use that word, maybe out-of-the-box thinking. What are some of the things that you've learned from there and what are-

Alexander McCaig (23:00):

It's like, yeah. There is.

Jason Rigby (23:03):

What are some of the things that you've learned there, in praise to them, and some of the things that you're seeing that's going on there?

Christian Lemp (23:10):

So I've not, like, worked with or formally researched with them. They have a great website called Complexity Explorer, which offers often free, really high-quality education in all sorts of topics in complex systems and mathematics and dynamical systems analysis. So-

Jason Rigby (23:32):

So anybody could go to that?

Christian Lemp (23:33):

Anybody can go.

Jason Rigby (23:34):

Our listeners, we're in 222 countries.

Christian Lemp (23:35):

Yeah.

Jason Rigby (23:36):

What was the website, again?

Christian Lemp (23:36):

complexityexplorer.com, I believe.

Alexander McCaig (23:40):

Okay.

Christian Lemp (23:40):

Mm-hmm (affirmative). I've taken a few of those courses. And one of the researchers who I really appreciate is Scott Page, Scott E. Page. And he was really my introduction to complex systems from the beginning. He has a great course on Coursera called Model Thinking, which he's actually turned into a book called The Model Thinker. And he has another book called The Difference. And it was the first time I was exposed to modeling as, like, constructing a set of interactions and using it to model out, like, how humans decide, and are there tipping points in the system? Is it stable? And then getting into some of, like, the, I would say, greatest hits of complex systems thinking, like cellular automata, and the basis for understanding that individual interactions and small decisions can have outsized outcomes. That's what I learned from him. And so-

Jason Rigby (24:45):

I can't tell you... I mean, we've seen that firsthand.

Christian Lemp (24:48):

Yeah.

Jason Rigby (24:49):

Case in point. I'll just use myself as an example, because I know it. I sat in a bathrobe and came up with this idea for TARTLE.

Christian Lemp (24:57):

Mm-hmm (affirmative).

Jason Rigby (24:58):

Just me.

Christian Lemp (24:59):

Yeah.

Jason Rigby (24:59):

Sitting around in a bathrobe, then shared it with my co-founder. Lo and behold, you're in 222 countries with that idea.

Christian Lemp (25:05):

Right.

Jason Rigby (25:06):

What the hell just happened? You know what I mean? I look back I'm like, "This is nuts." Yeah.

Christian Lemp (25:10):

So, Santa Fe Institute, I would say, is the epicenter of complex systems thinking. Years ago they collected a whole bunch of interesting minds here in Santa Fe who were thinking about problems in this way. And they continue, they do events in Santa Fe for the public. So yeah. All sorts of interesting research. Controversial? I don't know. I actually don't know if it's controversial. Definitely outside the box, but I don't know of anything controversial.

Alexander McCaig (25:41):

And I'm curious. We met when I was holding a public event.

Christian Lemp (25:48):

Mm-hmm (affirmative).

Alexander McCaig (25:50):

What was your impression of the system at that time, that was created here?

Christian Lemp (25:57):

Which system?

Alexander McCaig (25:58):

TARTLE.

Christian Lemp (25:59):

Oh, of TARTLE?

Alexander McCaig (25:59):

I'm interested, just your view of it.

Christian Lemp (26:05):

Let me think back.

Jason Rigby (26:06):

Say whatever the hell you want on air too.

Christian Lemp (26:08):

Yeah. I will. Let me think back. So when we met, at the time I was working on a lot of problems in the operations context of trying to understand individual kind of workflows, and how a choice about maybe what work you work on first affects maybe the whole chain of events. One of the challenges I was sorting through at the time we had met was that a lot of that data was just aggregated, and it was really hard to individually identify whose data is this. And so I'm thinking back and I'm jogging my brain, but what struck me was the ability to, instead of just working with aggregate views of data, actually start to connect, have a bottomless approach to data collection and ownership. That was one I remembered. So there was a connection I made to the kind of set of problems I was working on at the time.

Christian Lemp (27:22):

And then the second was the philosophy of kind of deconstructing the hierarchy and the centralization of data that's so valuable. Like if you take a look at The Fortune 500 today, it's all companies that have harnessed and won the "Winner take all" strategy to data ownership. And now, the wealthiest people in the world and the most powerful organizations in the world have taken everyone's data and done something with it. And I've never gotten paid by Facebook, or anything else. And so the approach that you presented, which was, "Let's invert this model," was really compelling to me.

Alexander McCaig (28:10):

That's awesome. Thank you for sharing that as a third party. I think this has been incredible. And I would just ask, is there anything you would want to leave the world with, to learn or resonate or ruminate on?

Christian Lemp (28:28):

Take your curiosity seriously, and see where it goes, and be humble and stick to it and see where you go, because that's where the creativity happens.

Alexander McCaig (28:40):

Creativity and tenacity, right?

Christian Lemp (28:41):

Mm-hmm (affirmative).

Alexander McCaig (28:42):

Cool deal. Christian, thank you so much for coming on.

Jason Rigby (28:43):

Yes, thank you, Christian.

Christian Lemp (28:44):

Thank you guys. Appreciate it.

Announcer (28:53):

Thank you for listening to TARTLE Cast, with your hosts, Alexander McCaig and Jason Rigby. Where humanity steps into the future, and where source data defines the path. What's your data worth?