Tartle Best Data Marketplace
September 19, 2021

Operations of Organizations and Our Communities With Special Guest and Systems Thinker, Christian Lemp Part 1

Operations of Organizations and Our Communities
BY: TARTLE

What can machine learning and data engineering tell us about how social systems are wired to function? As it turns out, these fields have more in common with our social world than meets the eye. 

Christian Lemp, an early TARTLE adopter and professional systems thinker, explains that he was drawn to his career path after years of observation and experience in different parts of the world. While Christian originally studied math and economics, which led to a short career in finance, he found himself more attracted to how different communities thought and interacted with each other. 

From there, he took a leap and entered the world of machine learning and data engineering. Christian helped find ways to understand organizations and optimize their work processes. However, he quickly realized that this work entailed untangling a series of systems that were all interconnected—some of which would require more creative, community-centric solutions. 

Interconnected Problems Require Interconnected Solutions

The deeper Christian delved into studying the operations of organizations, the more he saw that problems in his line of work could not be solved individually. Because the problems were so intertwined with one another, tackling them one at a time would, at best, reroute an issue to another part of the organization and, at worst, make the overall situation more dire.

Instead, organizations needed to come together and mutually arrive at one big solution that could address all the problems at once. 

Outside of organizations, this issue can also manifest on a cultural and national scale, especially in locations with diverse cultures. For example, the banner of the United States houses numerous states and regions, each with its own communities, and all of these communities are bound to have their own interests and biases. 

Is Efficiency An Absolute Good?

Given how complex all of that is, what place does mere efficiency have in our understanding of it? Not much, at least as efficiency is currently understood. Across the board, many things that seem as though they should be efficient don't wind up being so at all. 

For example, monocropping is a common practice among farmers: growing the same crop on the same plot of land year after year. While it is simpler to manage and looks highly efficient, monocropping makes the soil less productive over time because it depletes the soil's nutrients, reduces its organic matter, and can cause significant erosion. The short-term gains for the farmer eventually net a long-term loss because the practice hurts their soil.

If that is true for an activity like farming, how much more true is it as applied to human society? The fact is that there is simply too much going on in any society for it to be completely understood, much less controlled by any one individual. It’s just impossible. Yet when we get out of the way (for the most part), things seem to organize themselves into a symbiotic relationship. 

Short of that understanding, we tend to try to wedge people into different boxes. This effort is not only doomed to failure but will also sooner or later breed resistance, undermining the very solutions we are trying to provide.

Closing Thoughts

With all this information, it may feel like we’ve reached a dead end for the problems we face in our society: we can’t solve one problem at a time, and thinking of one big all-encompassing solution seems like an impossible task. 

However, the discussion with Christian suggests that there is one simple thing we are capable of doing that can help alleviate the situation: we can treat everybody we come across with dignity. Instead of forcing people to fit into a system based on our preconceived notions, we can give them the space to see where they fit in. Those in charge of creating systems should not be bending people around systems; rather, they should take the time to understand everyone in their complexity and build systems around people.

This is the kind of work that the TARTLE platform is putting in. We want to provide a safe space for people on the ground to take back control of their data and funnel it to causes and organizations that are important to them. When we give them the power to directly support what reflects their own personal ideals, we empower people to become more united and open to one another. 

What’s your data worth? Sign up for the TARTLE Marketplace through this link.

Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Alexander McCaig (00:05):

Hello, everybody. Welcome back to TARTLECast. You are here for a late afternoon session. We usually don't do this, but we wanted to bring down a special guest from Santa Fe. He is a personal friend of mine and was one of the early advocates for TARTLE. Thank you for your support.

Christian Lemp (00:21):

Mm-hmm (affirmative).

Alexander McCaig (00:23):

And he's also a phenomenal systems thinker. And I say that because, not that I'm any good judge of anything, but I have spoken to you quite a few times and your train of thought is on point. So thanks for coming on today.

Christian Lemp (00:36):

Thank you. Thanks for having me.

Alexander McCaig (00:38):

So we got to give a little bit of background about you. No one knows who you are.

Christian Lemp (00:43):

All right.

Alexander McCaig (00:44):

All right. So what is it that led you to system thinking, complex systems, things of that nature to what you're doing now professionally?

Christian Lemp (00:53):

Good question. I think for me it kind of started organically. I had an interest in people and different cultures, exploring the world, and I kind of just observed how people work and how people interact with each other. I didn't know systems science was a thing growing up. As my career progressed, I did my undergrad in math and economics, worked in New York for a while, dabbled in finance a bit, then diverted. I had lived in New York and really enjoyed it there, and then I got into data analytics professionally as a career.

Christian Lemp (01:36):

And I started doing machine learning, some AI work, and data engineering, which is what I do now. That work is a lot about data modeling and bringing ideas into a structured data set, representing ideas in data. When I was doing my master's degree, which I got in business analytics and project management, the focus was on operations. I got really interested in the operations of organizations. So I started working on optimizing workflows, optimizing how people think and the decisions that they make in real time in a large organization.

Alexander McCaig (02:13):

I wish I could optimize shit for Jason.

Jason Rigby (02:16):

I'm here, guys, I'm just hiding.

Alexander McCaig (02:17):

He's hiding in a corner, recluse.

Christian Lemp (02:24):

What I started to understand with the problems I was working on is that in a large organization, there are lots of independent units that kind of function independently, and they optimize independently. I started to notice that some of the operations problems I was working on couldn't be solved independently. They were all intertwined.

Alexander McCaig (02:52):

I know you're about to freaking blow your top and dive into this. You have individual entities making choices within a closed structure, within the limitations of what an organization might be or was defined as, correct?

Christian Lemp (03:06):

Correct.

Alexander McCaig (03:08):

With that, the analysis and the efficiency of that system shouldn't just come from looking at the one individual entity within it that you're talking about, but from looking at the collection of entities and how they work together. Is that what you're saying?

Christian Lemp (03:25):

Exactly.

Alexander McCaig (03:26):

Okay.

Christian Lemp (03:26):

Exactly. As I was working on these problems, in the way you just described really nicely, I started researching what kind of modeling techniques work for this. That's when I stumbled on a lot of work from the Santa Fe Institute. I went to the New England Complex Systems Institute for their winter course, which is a two-week intensive, and got exposed to systems thinking at a really mathematical, rigorous level. And then from there, it just took off for me. I was like, "This is the field that explains so many of the interdisciplinary interests I have, and I think it can be used to model organizations more effectively."

Alexander McCaig (04:15):

So this is interesting here. So even the three of us here in the studio, okay, that's a system.

Christian Lemp (04:22):

Mm-hmm (affirmative).

Alexander McCaig (04:23):

Now, what we view within our perspective as human beings, the limitation is sort of right here. We know, for a couple of steps of cause and effect, what happens from our interaction in this system. Does the math become more complex as the degrees of cause and effect increase past what we can normally see and observe in our own human minds? In our mind, we say, "Oh, that's really complex." Is it that we can't actually carry the memory of our thought and the chain of events, while a computer can carry the chain of events further? It's not necessarily more complex; the computer just has the ability to remember the chain and move it forward more efficiently?

Christian Lemp (05:11):

In one of my courses (I'm doing a PhD in systems science right now), two interesting points came up. One was from the founder of my program, George Klir, who was a fantastic philosopher and systems thinker. He referred to modeling and simulation, the ability to compute and simulate, as an intuition amplifier.

Alexander McCaig (05:37):

Interesting.

Christian Lemp (05:37):

Yeah.

Alexander McCaig (05:38):

See, now David Weinberger, who we had on earlier, would be on the polar end of that. He absolutely despises intuition. But again, he is also a philosopher who deals with systems science and chaos theory. So please go into this, because this is a nice balance to what the community heard before.

Christian Lemp (05:56):

So part of that is that my program opened up with a philosophical debate between a constructivist and a realist view of reality.

Alexander McCaig (06:07):

You're going to have to explain that for people.

Christian Lemp (06:09):

Sure. This is one I actually had to spend some time thinking about, and I changed my view after talking about it a lot. When we're modeling, what we're trying to do is put some rigor, some math, and some numbers to these sorts of fuzzy thoughts that come up as we observe reality. I guess the question is, "Is what we're observing actually real, or is what we're observing our construction of it? And when we model, are we doing our best to sort of model what we can, or is there some objective reality that we're working toward?"

Christian Lemp (06:56):

I started thinking, "Okay, there is some reality and we're just doing our best to reach it." Then over time I shifted and thought, "Actually, we're constructing these realities that we want."

Alexander McCaig (07:08):

As we interact with them?

Christian Lemp (07:10):

As we interact with them, they interact with our senses. We learn things. We have bias. We interact in society, and those experiences shape what we observe and what we highlight as important. And so we actually construct our own reality, in a way. To follow back on your question of where that fits in: when we're modeling, we are using our intuition to construct some reality of what's important. We have to simplify the world around us, and there are some patterns that we think are really important that the person next to us might not, because they construct a different reality.

Alexander McCaig (07:54):

This is a very interesting point, all right? And I don't want to lose this here. What Weinberger had spoken about is that this is a principle of understanding the world. He is saying that an individual or a collective may carry a principle, and they find that principle necessary for understanding the world because the world's too complex for a human to understand. But what you're saying is that the human defines its own world. They define their own complexity. I think he's looking at it from a third-party perspective, actually removing himself from it, and he's saying, "Let's just use the computer as a third party also to look at the interaction of all of these things. Let's not even worry about why things occur. Let's just look at what is happening and what the outcome is."

Jason Rigby (08:41):

His definition of a computer is amazing.

Alexander McCaig (08:43):

Yeah.

Jason Rigby (08:43):

He says it's just a bunch of switches.

Alexander McCaig (08:45):

He says it's just a bunch of switches. But when I'm looking at this, you think of a Bayesian model: "We're going to put in a statistical model and we'll continue to refine it and smooth that curve going forward." Well, that sounds a lot like how a human being, in their own perspective, would want to smooth out their own model going forward. If they were logical moving forward in their life, they would be consistently decreasing risk and refining toward a probability, something so finite, direct, and completely objective that it allows them to evolve faster. So what you are describing here is, I think, quite interesting: the human element is received more poignantly and more directly in your model of systems thinking than in his approach of looking at it as a third party.
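
For readers who want the intuition behind the Bayesian refinement mentioned here, a minimal sketch follows; the symbols are illustrative, not from the episode. Bayes' rule updates a belief about a hypothesis $\theta$ each time new data $D$ arrives:

$$p(\theta \mid D) = \frac{p(D \mid \theta)\, p(\theta)}{p(D)}$$

The posterior $p(\theta \mid D)$ then serves as the prior for the next observation, so uncertainty narrows step by step, which is the continual "refining and smoothing of the curve" the conversation gestures at.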

Christian Lemp (09:34):

Maybe. Maybe. I'm not deeply familiar with his background and how he's thinking about it. But for me, the intuition goes back to where I started: being interested in different cultures, being exposed to different cultures, living abroad for some time, and observing that my worldview and my values are different just based on my location right now, then taking that with me and saying that what matters depends on where you are, and also on your background, and also on the problem you're trying to solve. So for me, systems thinking comes back to a very human element. I'm very interested in how humans interact, how decisions are made, how groups make decisions together toward a common goal, especially when they have limited, separate, fragmented information, and stuff like that.

Alexander McCaig (10:38):

You know, coming to a decision is interesting, because you're interested in how the individual came to that decision. He looks at how AI comes to its decision, and it doesn't really even make a decision. It just says, "I find correlations, ones of great strength in deep learning. I associate those and give an output, irrelevant of the why or of what is actually going into it, just looking for those correlations." This is a personal bias that I have, but the way you bring in your thinking around this, using systems thinking and complex systems to understand the why of choice in the perspective of each individual reality, is actually much more beneficial for creating a great understanding amongst all of us.

Alexander McCaig (11:29):

Whereas the approach he might be taking, just from the way I felt it, is that he creates a distance between knowledge and understanding in that model, because there's no interest in the why. All we know is that we can A/B test to the hilt, look for the correlation, and pick whichever one has the best output. And even in marketing, Jason, you know this, that's the one you follow through with. Who cares what it is or whether it has any relation to your branding or whatever? It just works.

Christian Lemp (11:54):

Yeah. There's an interesting dichotomy between this top-down use of AI and bottom-up modeling for understanding.

Alexander McCaig (12:06):

Mm-hmm (affirmative). Are you a bottom-upper?

Christian Lemp (12:08):

I'm a both-er.

Alexander McCaig (12:09):

Oh, you're a both-er?

Jason Rigby (12:11):

I want to get back to this human thing because you said something very, very important.

Alexander McCaig (12:14):

Yeah, go ahead. Do it.

Jason Rigby (12:14):

I think this is really... I'm over here, guys, in the corner.

Alexander McCaig (12:18):

Who put baby in the corner?

Jason Rigby (12:20):

Do you feel that it's an indication of geolocation, the nature of humans? It's almost, not that they don't have free will, but that being planted into that geolocation creates a bunch of biases. You know what I mean? You have religion and all that. Do you feel that's the main indicator of action, that cultural system that they're in?

Christian Lemp (12:50):

It's a major one. I think what culture you were brought up in is often, there are cases where it's not, but it's often a function of where you are in the world. Especially now with being more remote, it's probably a little bit less so.

Alexander McCaig (13:13):

Because of the internet, like a flattening of cultures?

Christian Lemp (13:15):

Yeah, to some degree. I mean, even still, for example, if you're a digital nomad, and my brother happens to be one, he lives abroad right now. He adapts to the culture he's in to function, but I don't think he'll ever truly live as if he were born in the countries that he's living in.

Alexander McCaig (13:35):

But he makes decisions respective of the culture of the place he's in?

Christian Lemp (13:39):

Sure.

Alexander McCaig (13:39):

But deeper down, within himself, he knows where he came from?

Christian Lemp (13:43):

Right.

Alexander McCaig (13:44):

Don't you find that interesting? It could actually give a false bias if you were to put him and a group of these digital nomads into a system to analyze: "Oh, look how they're making decisions. This must be the most truthful way of describing who these people are and how they interact." But little do you know, they're only acting in accordance with the place, doing as the Romans do. "When in Rome." Correct? I find that very, very interesting.

Alexander McCaig (14:11):

There's an old quote that says, "All men are born unequal but die equal." So where you're born, the family you're in, anything of that probability, that's going to change circumstances, opportunity, what have you. But when you're dead, you're just dead. We all die in the same sort of fashion regardless, and that's the great equalizer: time. So when it comes to understanding humans and other humans within our system of Earth, which is a pretty closed system, because we're sitting on part of the crust with a limited amount of area for us to interact in, that tells me that even if we are born unequal, we can make choices toward that equalization of understanding through this new digital medium that we have.

Alexander McCaig (15:07):

And with the enhancement of system algorithms like yours and the ones you work with, as you receive more data coming from actual, legitimate input of the why from individual human beings, and then put them into that corporate structure like you spoke about in the beginning, which would be Earth, for instance, that's our margin in that closed system, well then, how do we all interact with our own perspectives, perceived realities, and whys in a way that can help us truly understand what's going on in the system as a whole? Am I off?

Christian Lemp (15:39):

I don't think so. I don't think you are. What I'm hearing is basically that if we can communicate more effectively and share information with each other more effectively, then we can make decisions from a common ground more effectively.

Alexander McCaig (16:00):

I think some people take the approach that efficiency takes greater precedence than equanimity, equality, certain rights, or respect for what it means to be a human being. And people with specific knowledge bases and backgrounds, academics, might take that approach because they think efficiency is best. But there's a cost to that system in general. Maybe, through analyzing the system, the natural inefficiencies of an individual's own evolution may prove beneficial if we take a more human approach rather than a strictly efficient one. Think about what's happened with monocropping. We all thought it was extremely efficient, right?

Christian Lemp (16:44):

Mm-hmm (affirmative).

Alexander McCaig (16:44):

The data's fantastic. Look at our economic yield, look at the efficiency. But really, the cost was much greater on down the road through the depletion of the soil, because we don't have that mixture and that regeneration actually occurring. So if you continue to apply that sort of philosophical framework to systems thinking, I actually think it's degrading for us, for what it means to evolve as human beings, if you analyze human beings strictly as a system of efficiencies. I don't know. That's my personal feeling.

Christian Lemp (17:16):

Mine too. I feel something happened and we adopted a dogma. We, the West, or very much the U.S., are probably the leaders in this kind of thinking: that if we optimize for our own selfish interests in the immediate term, ultimately what emerges out of that is a positive outcome for everyone. The system lifts if we are all selfish. I think that's a great marketing campaign if you're a politician, but I think it's a terrible way to operate a society.

Alexander McCaig (18:00):

Because it's almost like saying, "My individual closed system is completely closed off from everyone else." This is a poor application of how people use the ideas of Ayn Rand. They look at her and it's like, "Just do everything for yourself. Forget everybody else. Everything essentially fixes itself." That's lazy. Really, Ayn Rand was just very objective about what she could control herself. And if everybody took on that self-responsibility within the entire system, then the system increases in value. She said, "I can't understand others until I really understand myself too."

Alexander McCaig (18:32):

It's funny how these marketing campaigns come in and create these biases. But efficiency biases, economic biases, and academic biases actually play a role in the design of how we're analyzing and looking at other people. A lot of these systems have not asked individuals how they define themselves. We've talked about this with Jung; it's that statistical flattening when you look at people within these systems and states, saying, "The collective altogether, when they come together, we can just identify them as one."

Alexander McCaig (19:02):

But there are so many unique characteristic identifiers for that one entity moving through many different vectors of space-time within our system here on Earth. And you're telling me you only want to focus on efficiencies, and then say the application of what they do, the why, is not important for where it gets to, because you're only focused on the outcome of the system but not the why of how we actually got there? How do you really find understanding in a system then? How do you find understanding, when we're in a group, around the self and the other self if you completely ignore that why, the real key drivers of how we perceive our reality to get us where we want to go with our own evolution?

Christian Lemp (19:41):

I think some of the efficiency that we've optimized for, if you think about it for companies, is often some sort of profit. Like social media: at the end of the day, what returns the most profit to shareholders? And then we're going to optimize all of our engagement algorithms, all of our deep learning, and we might remove the why and the how we got there between a user and the engagement. If the engagement equals more revenue, well then, let's keep going with it. I think we've seen what that can do. Not positive results; a lot of divisiveness.

Alexander McCaig (20:25):

A lot of divisiveness. Jason, you know this, do you not?

Jason Rigby (20:29):

Oh, yeah. We talk about the buckets all the time.

Alexander McCaig (20:31):

Right. What happens when you're in a bucket? You are preconceiving a reality for this individual and you coerce them to make them feel like the bucket is actually who they are. You're defining their own identity rather than allowing them, through their own free will and choice, to identify for themselves.

Jason Rigby (20:49):

How do you get past that, though? [crosstalk 00:20:51] I want to ask you this because I have a friend of mine who is very environmentally friendly but super Second Amendment. He hunts, but he's super environmental. So you can't put him in a bucket. You, Alex, have all different kinds of things, and you drive a Subaru. I mean, I'm super-left on some things and super-right on others, so you can't put me in a political bucket. Especially people our age, and I'm probably the oldest here by far, of course. So is it efficiency that causes that? Why are they trying to put people in buckets? Is it easier to disseminate the data then?

Christian Lemp (21:36):

Yeah, because then you can target your audience. I think so. I mean, that's my opinion. Why put people in buckets? Because it's easier to message to a collective than to a lot of individuals. And you asked, "How do we kind of get away from that?" I think we have to start valuing deeper things about ourselves and our communities. We have to start valuing longer-term goals a bit more. And I think there's a way to do that and be prosperous as well.

Alexander McCaig (22:22):

So, that value would then be incumbent upon the collective of us coming together and deciding, as a collective, as the human race, where we want to be? Do we want to end up in nuclear annihilation? Do we want to end up in a completely material-focused world and let all the emotions and individual characteristics of us go away and just be perfectly efficient? I think that's a choice we have to make before we come here and design systems based on biases, systems that are deciding for people how we should make our decisions going forward.

Christian Lemp (22:59):

The hard part about that is it's an emergent outcome, right?

Alexander McCaig (23:03):

Mm-hmm (affirmative).

Christian Lemp (23:04):

We can't all, back to where we started, interact and make a choice together. We're making fragmented value choices based on our systems.

Alexander McCaig (23:13):

Do you feel like we can't come together and do that? And this is the participatory learning in models that Weinberger had spoken about. He said, "It's difficult to get everyone to interact." Well, if you can get 90% to interact, then that's a pretty good collective decision for where we want to hit. So you would need a tool that could effectively bring the world together to make decisions collectively. Right?

Christian Lemp (23:44):

Mm-hmm (affirmative).

Alexander McCaig (23:45):

I can't think of any other than the one we have currently created.

Christian Lemp (23:49):

I was going to say I can think of one really good one.

Jason Rigby (23:53):

I've heard people say, and I'm thinking the Santa Fe Institute talked about this, that if you could get a million people together, they could make a significant change. Is it something like that? Is it that small a percentage out of 7.5 billion?

Alexander McCaig (24:08):

That's a good question. Do the systems have a threshold or a tipping point where it just pours into everything else and becomes so pervasive within the system that that entity or group of entities then acts as the key driver of the mindset for the rest of those autonomous things?

Christian Lemp (24:27):

I mean, there are examples throughout history of tipping-point changes in society, and not all of them are good outcomes.

Jason Rigby (24:35):

Yeah.

Alexander McCaig (24:36):

Well, hold on. All right. Okay. All right. Well, then with that, listen, let's come back for part two, and let's talk about some of those real-world examples.

Christian Lemp (24:44):

All right.

Speaker 4 (24:52):

Thank you for listening to TARTLECast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?