October 14, 2021

Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility with Long-Time Affiliate of Harvard's Berkman Klein Center and Best Selling Author David Weinberger, Ph.D.

BY: TARTLE

Technology is quickly becoming the backbone of modern infrastructure. At the pace that it is progressing, it may someday become as ubiquitous and as vital to our economy as cement and concrete. However, AI is agnostic. Despite its immense computing capabilities, it will never be capable of human understanding and discernment.

One example of this is A/B testing, where researchers compare two versions of a marketing asset to see which one performs better. The test can show which campaign will run better, but it cannot explain why; on its own, it provides no new understanding.
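To make that concrete, here is a minimal sketch of how an A/B comparison is typically scored with a two-proportion z-test. It is illustrative only and not from the episode; the visitor and conversion numbers are invented. The output says which variant converted better and how likely the gap is to be real, but nothing about why visitors behaved differently:

import math

def ab_test(conversions_a, visitors_a, conversions_b, visitors_b):
    # Conversion rate of each variant.
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B perform the same.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, p_value

p_a, p_b, p = ab_test(120, 2400, 156, 2400)
print(f"A converted {p_a:.1%}, B converted {p_b:.1%}, p-value {p:.3f}")
# The test says B is (probably) the better campaign; it says nothing about why.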

With this limitation in mind, is it still beneficial for us to know the most probable outcome of a certain event, even if we don’t understand why or how it occurs?

Can Machine Learning Go Wrong?

At this point, David discussed a hypothetical scenario in which even something as uncontroversial as spam filtering could become a problem: suppose legitimate emails from businesses owned by people of color were found to be falsely marked as spam at a higher rate than emails from senders who are not people of color. Beyond the inefficiency, the AI would be applying an unfair standard to email and might even be damaging businesses on the basis of race.

The decision-making process behind sorting emails into the spam folder is compromised because the technology uses so many signals, in “deep, complicated, and multi-independent patterns of probability,” that it is near-impossible to comprehend without a great deal of time, money, and effort. In the meantime, this massive system keeps hurting communities that are already disenfranchised.
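In practice, this kind of disparity tends to surface through an audit of the filter’s error rates rather than by reading the model itself. Below is a minimal, illustrative sketch (not from the episode; the field names are hypothetical) that compares the false-positive rate, meaning legitimate mail wrongly marked as spam, across groups of senders:

from collections import defaultdict

def false_positive_rates(emails):
    # Each email is a dict with 'group' (sender group), 'is_spam' (ground truth),
    # and 'flagged' (what the filter decided). All field names are hypothetical.
    legit = defaultdict(int)            # legitimate emails seen, per group
    wrongly_flagged = defaultdict(int)  # legitimate emails marked as spam, per group
    for e in emails:
        if not e["is_spam"]:
            legit[e["group"]] += 1
            if e["flagged"]:
                wrongly_flagged[e["group"]] += 1
    return {g: wrongly_flagged[g] / legit[g] for g in legit}

# A large gap between groups' false-positive rates is the warning sign David describes:
# the filter looks like it is "working" overall while silently burying one group's
# legitimate mail.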

This brings to mind Microsoft’s Tay.ai, a chatbot on Twitter created by the tech giant in 2016 that was designed to mimic the conversational patterns of a 19-year-old girl. It would learn from continuous interaction with other users on the social media platform.

Immediately after its release, Tay became controversial after it started tweeting inflammatory and offensive comments. As a result, Microsoft was pushed to shut down the service only sixteen hours after it was launched. 

Social Justice And Technology

The Tay incident is a clear indication that the people responsible for programming AI have a corresponding social burden to fulfill, particularly in ensuring that their technology does not harm anyone. That burden grows even bigger when machine learning and AI are applied to other fields, such as medicine and smart transportation.

Beyond Tay.ai, computer scientists and engineers around the world find themselves at the helm of technologies with enormous potential. How do we address the inherent human biases these individuals bring to their work?

David points out that most of the people who have the knowledge to work with these complex technologies do not necessarily have the same depth of understanding of social justice. That gap has led to calls for participatory machine learning, also known as the design justice movement.

Giving Minorities A Seat At The Table

Participatory machine learning brings in people who are familiar with the relevant social justice issues, as well as the communities that would be most affected by a new technology, and gives them a role in planning and management.

Their input matters from the get-go because it shapes how these systems work. To illustrate, David painted the picture of an imaginary smart city that decides to use AI to reinvent its bus system.

Ultimately, the new bus stops, routes, and schedules do move people to their destinations faster, and the numbers confirm it. The caveat: those statistics are averages. The system looks successful because it moves affluent communities more efficiently, not the neighborhoods on the outskirts of the city. The people on the outskirts, who need efficient transportation the most to get to work, end up isolated from the system.
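A toy calculation shows how an average improvement can hide exactly this kind of harm. The numbers below are invented for illustration and are not from the episode:

# Hypothetical travel times (minutes) before and after the AI-optimized schedule,
# and hypothetical rider counts per area. All numbers are invented for illustration.
old_minutes = {"downtown": 30, "suburbs": 35, "outskirts": 55}
new_minutes = {"downtown": 20, "suburbs": 24, "outskirts": 60}
riders      = {"downtown": 50_000, "suburbs": 30_000, "outskirts": 20_000}

total_riders = sum(riders.values())
avg_saving = sum((old_minutes[a] - new_minutes[a]) * riders[a] for a in riders) / total_riders
print(f"Average minutes saved per rider: {avg_saving:.1f}")  # 7.3 -- looks like a clear win

for area in riders:
    print(area, old_minutes[area] - new_minutes[area], "minutes saved")
# downtown: 10, suburbs: 11, outskirts: -5 -- the riders who depend on the bus most
# actually get slower service, and the citywide average never shows it.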

At this point, it would be difficult to unravel all the work that went into making the new transportation system a success. That is why marginalized communities need to be consulted about the impacts of new infrastructure and technologies consistently, even after construction and installation are finished. Those responsible for creating these systems have a special responsibility to ensure that people who do not start on the same footing finally get a seat at the table.

David agrees that it may be a lengthier, more expensive process. After all, it will take more time, money, and effort to locate these people, recruit them, and ensure that everyone is on the same page. However, it is the cost that we need to pay if we want a shot at eliminating inequality. 

The Limits of Machine Learning

Beyond the cost of bringing people to the table, David acknowledges that technological progress is already expensive in and of itself. Machine learning systems require people who are highly educated in computer science and computer engineering, and the most important models require enormous amounts of computing infrastructure just to run.

Finally, lingering questions on data sharing and ownership prevent communities from fully utilizing what they have. To what extent do you own your data and what should your relationship with it be? What does it mean to own something?

We do not live in individual data cocoons that we own; we live in a community. And a public community cannot be run without public data, without the public sharing of information about one another.

The thoughts that drive my actions within this system of public information and data, however, are missed by algorithms, analysis, and machine learning. That is because people are often unwilling, or unable, to share why they are driven to take certain actions.

Ultimately, it appears that one of our most profound discoveries from machine learning is that the world is much more complex than we ever wanted to believe. Despite these sophisticated machines processing massive amounts of information, we do not have the capability to provide a completely accurate and precise prediction of what will happen.

This does not mean that the approximate knowledge we have now is worthless. It helps us appreciate our universe in a new way by teaching us to be comfortable with complexity. 

Are We Entitled to Understanding Anything?

In line with TARTLE’s mission to promote stewardship and collective responsibility, Alexander asked about the implications of machine learning for helping humans make better decisions and more informed choices based on the observable universe. In response, David posed a thought-provoking question: why do you think humans are entitled to understanding?

Machine learning and artificial intelligence are capable of taking us to greater heights without the interference of human cognitive biases. With that objective oversight, they have the potential to bring out the best in us as human beings living in a complex system.

As technology continues to innovate at an unprecedented pace, David leaves us with a parting message: machine learning will drive us to examine all the values that we hold, and sometimes to consider painful trade-offs between two or more equally important values.

“So don’t hold on too tightly to any one value; think about how you may have to give up on some of it in order to support other very important targets,” David concluded.

What’s your data worth? Sign up for the TARTLE Marketplace through this link here.

Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility Harvard Senior Researcher and Best Selling Author David Weinberger, Ph.D. by TARTLE is licensed under CC BY-SA 4.0


For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Alexander McCaig (00:00:07):

Hello, everybody. Welcome back to TARTLE Cast with Jason, myself and a very special guest, David Weinberger. David is a foremost thinker on AI evolution and human understanding of the universe. He's done great work all over the world and has worked for major companies like Google, you've probably heard of it. And from that, one of the eight or nine books that he's written, a more recent one here is called Everyday Chaos. And there's a natural evolution that is occurring in humanity with our technology AI in particular and that evolution of that AI has certain effects on our own systems for how we operate as human beings here on this planet in reference to everything else that is occurring around us.

Alexander McCaig (00:01:00):

So David, thank you for joining us on TARTLE Cast, thank you for writing this book and we are very excited to ask you some questions that have come to light after going through this material.

David Weinberger (00:01:13):

Thanks for having me.

Alexander McCaig (00:01:16):

For kicking us off, David, the book is albeit quite philosophical and I think because you've had such a detail oriented, very objective path of understanding the evolution of this technology since the early days of nuclear deterrence up to where we are now, that it's time to describe the path of this technology and our path as human beings with it, because it is evolving at such a profound rate, that what you're calling as the system or as understanding our universe would be benefited from the artificial intelligence systems. But what I find curious here is in the simplest format of the deep learning, it takes in inputs regardless of what the inputs are and then from that, derives a corresponding output after it has comparatively gone across a set of razor vectors to say that wow, there's something interesting happening here between a correlation of these data points in space and time that are leading to this outcome.

Alexander McCaig (00:02:33):

And it doesn't really care what the outcome is because AI is quite agnostic. It just says what is coming in as the data that is being fed. So I want to know, is that actually helping us in understanding what is going on here? Because if you consider the aspects of AB testing, you just know that something is occurring because you've changed and it's driving an outcome better, but the why is missing. So is it actually beneficial in this format that AI can go through these calculations at a faster rate than we can, to tell us what the most probable outcome will be? But is that truly beneficial if we don't understand the why for the occurrence of it though?

David Weinberger (00:03:25):

Yes, that's why we use it. That's the simplest answer.

Alexander McCaig (00:03:28):

Okay.

David Weinberger (00:03:29):

And let's take non-controversial, generally non-controversial uses of it such as predicting the weather or sorting spam from non-spam, which it does exceedingly well, so well that if you were around in the late '90s, early 2000s, it looked like spam might render email useless. And then actually it's slightly more complex than this, but a non machine learning algorithm, a Bayesian algorithm. And I should say, I'm not an expert by the way, I'm just a writer, please. Thank you for saying all those kinds of things. I'm just a writer, I'm not a first-hand computer scientist, but it means I am not a firsthand expert in any of this-

Alexander McCaig (00:04:21):

No problem.

David Weinberger (00:04:21):

... overall. Well, it is when I go wrong [inaudible 00:04:25]. And over time, that initial algorithm was replaced by a more capable set of machine learning algorithms and that's why if you do use email the way most older folks do, it's totally usable. Spam is a minor annoyance even though it's because machine learning is doable. So you take non-controversial examples like that, semi-controversial one is like auto complete, semi-controversial because it sometimes is embarrassingly wrong, it sometimes brings to light shameful biases in our culture. And it works really well, that's why we use it, so yeah. But in many instances, we don't understand, in many instances we don't particularly care to understand. I assume for a lot of the spam stuff, we don't need to know why it's working like the weird combination of words and in their near-miss to one another and the absence of words and all those sorts of patterns, which escape human notice.

David Weinberger (00:05:41):

And in some cases, maybe machine may be picking up on patterns that are so complex that we couldn't understand them if we wanted to, but I think we basically don't care so long as it's separating the spam from the not spam, which is something that we can test pretty objectively. Now let's make up a case. It starts, it turns out that I was silly sort of example, but not entirely, that legitimate email from people of color or black owned businesses or whatever, are being marked as spam at an unequal rate, falsely marked as spam at an unequal rate from email from people who are not people of color. And at that point, we would, I hope, be genuinely concerned, we'd be alarmed, we'd say oh, because it's an issue of fairness here. It's not simply being inefficient, it's being unfair and it may be damaging businesses on the basis of race. That's not acceptable.

David Weinberger (00:06:46):

And at that point, we may well want to know, why is it doing this? What are the words that are triggering it? I think that spam algorithms are simply enough, we probably could inspect it. I don't know, I'm not an expert. Let's assume if that's the case, in which case you would do a forensic examination and you would discover and fix, we hope. But in other cases in diagnosing diseases, you can easily have the same sorts of inequities where it works great for white people but it's sending people of color for mistreatment and that's really pretty serious. And the human body is way more complex than spam, it's insanely complex. Human metabolism is one of the most complicated systems around and it may be, and there are certainly cases in medical AI, diagnostic AI, that it's able to pick up on things that testing shows they're getting pretty, AI is getting right which means it's probabilistically right because AI only produces probabilistic results basically.

David Weinberger (00:07:55):

But if it says it looks like there's an 85% chance that this person based upon... I'm going to make this up but not entirely on a retinal scan, it looks like they may develop diabetes or I have heart problems, which normally retinal scans, humans can't pick up on. And in fact, this is one of the cases, that retinal scans, machine learning is able to predict things to state things probabilistically about the person's age and gender and heart condition and the like, elements of heart condition. If it turns out that it's working well for white people and terribly for black people, then it may be that it's using so many signals in such deep, complicated, multi-independent patterns of probability that we simply cannot understand how it's making its decisions, at which point, we have a problem because we have a system that is doing damage, that is inequitable in doing damage, and that's the most pressing issue.

David Weinberger (00:09:01):

And we don't know how to fix it by fixing the software directly. We don't know what the data is causing us, we don't know what the algorithms is causing it. This is a completely plausible situation and at that point, we have to make some decisions. So there are some things you can do to work with AI to tinker, because there's a huge amount of human decision making in developing any AI system, you don't just pour in the data and it does an analysis and it comes with results, you have very talented, experienced, and often intuitive computer scientists, computer engineers, who are building these systems and they're kicking with a whole bunch of parameters or dials to tweak it this way or that way.

David Weinberger (00:09:41):

And it's conceivable that they could do more or less blind tweaking and drive the bias out of it. That's logically possible. It's quite likely, I'm going to guess that it could be improved somewhat and we'll have to make a decision about what we want to do about that. It may not be an easy decision... Go ahead, yeah.

Alexander McCaig (00:10:04):

So that's driving the bias out, David. Now what about the inherent bias in those individuals? Those intuitive computer scientists that are defining the original fundamental pieces of the code that start that Bayesian statistical process to begin to refine itself over and over? The way I see it is this, I have a piece of stone and you like the Greeks and I'm carving something up in the Hellenistic period. If I carve too much and then leave that structure in place, it can either be too weak or I can't go to change it later because of the immutableness of stone itself. But I can come in and maybe carve off a little bit of this, change some of the angles and then I have myself a stronger pillar.

Alexander McCaig (00:10:57):

For these individuals that are going in designing these algorithms, you only have so much stone block to work with first, what defines that fundamentals, that perspective of reality which we want to observe and allow the footing for that AI to take off on. So how is it that we can almost prevent the biases ahead of time in the design of the initial aspects of that algorithm before data is actually put into it? What are the checks and balances that go into it? Because there are quite knowledgeable people that have come through this world and knowledge can also do a great amount of harm if understanding is not applied. Knowledge plus understanding defines our wisdom.

Alexander McCaig (00:11:37):

But if we exacerbate knowledge and don't understand the implications of our own biases and fundamentals that go into it, our stone block can become very weak and chipped away so far that the amount of support it gives into the future is not really what we need to step off of. Is that metaphor making sense?

David Weinberger (00:11:56):

It's making sense, we want to push back on it a little bit.

Alexander McCaig (00:11:58):

Please, please do.

David Weinberger (00:12:00):

Part of it that makes sense is that at least to me immediately is that... and it's a complicated question, as I think you know.

Alexander McCaig (00:12:10):

Yeah.

David Weinberger (00:12:14):

Machine learning systems, well, they learn from data and somebody makes a decision about what we want it to learn, what's the acceptable degree of accuracy, what sorts of trade-offs we're willing to make in the inaccuracies, and what data we're going to use to train it on? The thing that corresponds to the stone I think is the data. And unlike stone, data can easily be added to, subtracted to, it doesn't... well, I don't know what to say about its fragility. It's a really, really important... I also want just to add that I would never claim that you can drive all bias out of machine learning any more than I could say you can drive all bias out of human society. It doesn't mean we don't have a very strong obligation to do the best we can.

David Weinberger (00:13:16):

But as long as data represents humans and human activity, it's going to represent even biases and we can work against that. And that has become thankfully a very, very big issue and very well recognized in the AI community and beyond. So who gets to decide what data is going to learn from and one of the checks and balances? That's a really important, very hard question because in one sense, it helps to have some good idea of how computer, how machine learning systems work to be able to tell whether this dataset is going to give us the sorts of results at the level of accuracy that we want.

David Weinberger (00:14:06):

But the people who have that knowledge are not necessarily people who also have a deep understanding of social justice issues. So I don't think, even in cases where it seems biased, couldn't enter such as weather predictions but certainly in more social uses of machine learning, there is a strong movement for what's called participatory machine learning sometimes, called that, also design justice movement, which takes very seriously the involvement of people who are knowledgeable about the social justice issues including people who are most affected by the outcomes, who traditionally have been very much left out of the machine learning design and training processes.

David Weinberger (00:15:02):

I'll give you an example. The idea behind participatory machine learning before I give you an example, is trying to bring in the communities that are affected all the way through the process, bringing them in all the way through the process. And it can be difficult to do that, but personally, I think it's an important approach, it's the right thing to do generally.

Alexander McCaig (00:15:21):

Of course, that makes me feel good.

David Weinberger (00:15:26):

But it's really hard to do and to know you're doing it right.

Alexander McCaig (00:15:26):

No, I can agree with that. So that participation is something that we've been focused on here at our current company, TARTLE. We're in 222 countries and humans create vast amounts of data, but they're lacking the participation of how that data is used, value curated from it and how that has then come back to apply to the decision-making about the resources for their own life. So we were like, we need to flip that power asymmetry and bring them into the process, which is a very difficult thing to do, and ask them to share swaths of data, very personal, granular things with those individuals or those companies or those algorithms that are trying to do that sort of analysis so that we can eradicate those biases and truly take a very inclusive approach of understanding, not just having knowledge, but true understanding and using that AI to help find that thread that actually unites the greater majority of all of us to solve the problems that we're currently faced with.

Alexander McCaig (00:16:32):

The predictive models show we got 50 to 70 years if we continue on this current climate path and not taking responsibility or stewardship. Great, fantastic, thank you for telling me this is the limitation of my time. But the question is, are the people involved the ones that are actually effecting that system in the stewardship, in the change to alter that predictive model to say, I want to extend it to 100, 200, 300, 1,000 years, multiple generations? But they have been left out of it because there's no analysis of their own behaviors and how those behaviors are then shared with the people that design those systems that help predict those models to define what our future would look like. And you're right, that sort of participation, David, is a very difficult thing to get.

David Weinberger (00:17:19):

I think I'm not sure I'm understanding exactly who you had in mind, so let me take a step back and then be more specific. Because I think that was a boomer complaint. I am a boomer of course, and I'm fine with boomer complaints, but I want to make sure. So I'm going to give you an example of one type of participation. The top part that is usually, people have in mind when they talk about participatory machine learning, this is not entirely a made up case but because I'm always bad on facts or reality, I'm going to pose it as a hypothetical. It's roughly right though. Should just say it's hypothetical.

David Weinberger (00:18:07):

Imagine if you will, that a city decides to use AI to redo its bus stop schedule, it's bus routing and schedules. And they gather up all sorts of data, including from all parts of town. And a machine comes up with an optimized schedule and sure enough, they put it in place and it is moving more people faster to their destinations. So boom success, except that it turns out that that's on average and it's because it's moving the rich... I'll say white parts of town more efficiently than the poor parts of town, and of course the poor parts of town is the part of town that really need bus service because [inaudible 00:18:51] taking [inaudible 00:18:52] and the like.

David Weinberger (00:18:54):

And that was not known until the people who were affected by it spoke up and their thinking is oh, maybe if they had been involved from the very beginning, the city would not have implemented what was essentially a racist bus schedule with all good intentions, it was a 100% good intentions all around, including environmental impact. It's a more efficient system. But the voices of people who are marginalized and affected just simply weren't consulted. And so then the question becomes, okay, next time, let's make sure that we bring these people in and let's make sure that we don't have to rely upon petitions from them but even after the thing is installed, that we're doing evaluations, that we're checking with the community, would get early warning that we basically screwed the pooch on this.

David Weinberger (00:19:44):

All good intentions, maybe we can get these people involved throughout the process, these people being... this is a tough question, what exactly does that mean? Who counts as the participants who need to be consulted and in just very practical terms, how do you do that? You have town meetings, you form a board, many of these questions that are at least moderately technical, they can be explained to people who are not computer scientists, but you need people to have some patience and interest in it. So do you let them self-select themselves? The pragmatics of it are difficult but that's true in all representative democratic processes. It's tough and you figure it out.

David Weinberger (00:20:25):

So that's different than I think from your... and I'm completely in favor of that. I think that's different from the fact that the people who are... I'm 70, so I'll die in 15, 20 years, maybe a little bit more, I will escape the ravages of climate change, my children won't and my grandchildren won't but I will. So from my self centered perspective, it sounded to me like you were taking the necessary participants to be those of us who have the most power in the world, my demographic by and large, have the most money, have the most power, and who basically don't care about what happens after they die, which makes them very, very bad people by the way but that's what we're dealing with. That's a different type of participation and I may be reading into your comments, that's why I'm asking it.

Alexander McCaig (00:21:26):

You're almost reading in perfectly. I look for participation from everyone regardless of demographic resource holding, whatever it might be. The way we define it is if there is a human being that is interacting with that system, their voice is to be heard. If they are ones that ride the bus or don't ride the bus, there is an effect of their thought within that community and the outcome of those effects. So whether you are a baby boomer or your group has typically been excluded, you have the ability to participate through your own choice and freewill to do so of the sharing of that data. And that's what we're looking at. Go ahead.

David Weinberger (00:22:04):

Sorry, go ahead.

Alexander McCaig (00:22:06):

No, you do. I want you to stir up the gear here, go ahead. Please ask.

David Weinberger (00:22:10):

I have a few things to say. I'll just batch them. One is that I think there's a certain, I'll say it, naivety in thinking that everybody is on an equal footing and can have a voice, and I think you'd probably agree with this that it's incumbent upon the people who are creating these systems to take special steps to make sure that people who for lots of different reasons don't have as loud of voice get invited to the table. And it may be expensive to do that because you may have to engage, it's a harder process. It's harder to find, to recruit, to bring people up to speed so they can participate. It's harder but too bad.

David Weinberger (00:22:58):

Second of all, it's the same thing, there's a huge power dynamic here. It's expensive to create a machine learning system, it requires people who are generally highly educated at least in their computer science, computer engineering and some of the most important models, machine learning systems that is, require gigantic amounts of tech just to run. The systems themselves are so massive with so much data that there is a huge power dynamic that we have to be very, very aware of and I would like to see broken up as many others as much as possible.

David Weinberger (00:23:47):

That question is I think separable from the question of who owns data, how much does a person own their own data and what should the relationship be between that data? Well, do people own their own data, and what does that mean? I think that's a much harder question and I'm less in step with the majority I think on that thinking than others, but I also think my thinking is not very good on it, so I'm not a good person to talk with about it.

Alexander McCaig (00:24:21):

No, I think you're a fine person.

David Weinberger (00:24:23):

No, I'm telling you. I'm not, I don't know enough about it. But thank you for your confidence.

Alexander McCaig (00:24:27):

No, you are. I'm going to show you why I'm confident right now. David, you have your own thoughts, right? And these thoughts have also been captured in this book you have written.

David Weinberger (00:24:34):

Some.

Alexander McCaig (00:24:35):

Okay.

David Weinberger (00:24:36):

There's not a lot in there about privacy.

Alexander McCaig (00:24:38):

I know, I understand. It's not so much about the privacy but it's the fact that you are willing this thought to happen. And the capturing of that thought defines behaviors and then through that behavior, we have in a digital medium which would be data. So if I'm going to think something on my own accord, if I'm going to act, walk, talk and breathe on my own accord, and that is going to be recorded that within define that that is my data because I am the progenitor and through all provenance of action, it comes directly back to me in my thought processes I think.

David Weinberger (00:25:10):

No, it really doesn't though. If you run a stop sign, is the fact that you ran the stop sign your data? I don't think so.

Alexander McCaig (00:25:15):

Well, did I make the decision to run the stop sign or was I unaware?

David Weinberger (00:25:18):

I don't know, it doesn't matter. Whatever you like.

Alexander McCaig (00:25:21):

Why? I was the one operating the vehicle, correct?

David Weinberger (00:25:23):

Sure.

Alexander McCaig (00:25:24):

So the data [crosstalk 00:25:25]-

David Weinberger (00:25:25):

Well, it's not your data.

Alexander McCaig (00:25:26):

Well, it's not my data because I created that action?

David Weinberger (00:25:28):

Yeah, you did an action either intentionally or not.

Alexander McCaig (00:25:33):

Yeah.

David Weinberger (00:25:34):

It's not your data in the sense that you cannot choose to withhold it. If it comes up in a court where you're getting fined or whatever, you can't say, well, judge, that's my data. You can't admit the fact that either of the cops side or the camera side or my car recorded it, it's not your data. And that's true not just when you transgress, it's true of life in the world, we don't live in individual data cocoons that we own, we live out in a public. The public is impossible without public data, without public sharing of information about one another. So I'm not as... the separation between private data and public data, of course some data is private but it's private because we've chosen to make it private, not because it's data, not because it's something that we did.

David Weinberger (00:26:22):

My diary as opposed to my books, and I don't have one but if I did, my diary is private. It's private data because I chose to make it private and there's a set of these illegal set of, there is laws and regulations that say I'm entitled to do that. If the law said I could keep my stop sign transgressions private, then I can do that, but I can't, it's so much more complex than saying if I do it, it's my data, if I think it, it's my data. I don't think that's true, I don't think life is possible that way.

Alexander McCaig (00:26:55):

There's a function of, oh, I'm operating in a public open system or I'm operating within the closed system of my mind. And I think that is a large defining difference between the material and immaterial and how we capture that information. I have every right to share a thought but if I choose to drive a car, then I am agreeing to be a part of that public system. Yes, it is my data within that public system, and then we share within that as a collective in that public data pool of sharing. I'm not saying to withhold that, what I'm saying is that the thoughts that define me to act within that system, that force me to make choices to get in that car and operate on a public road amongst public data is a function of my choice and many times, that why, the why that drives me in my own thoughts, that cause me to operate in those very open complex systems is things that go missed. And when we look at artificial-

David Weinberger (00:27:50):

Go missed by what?

Alexander McCaig (00:27:52):

Missed by algorithms, analysis, many of the things, because people don't want or have the ability to share that why for that occurrence that drives them to do what they do. I'm not talking about the preventing [crosstalk 00:28:06]-

David Weinberger (00:28:07):

These are totally separable things. One is what rights do you have over "your data"? Which is very important and very vital issue right now. I have to disagree with punches of the thinking about it, but I think the majority are thinking about it, but I don't have trust in my opinion about this. And then there's the question of the data. Data is a readout on a dial, that's literally what it is, it's not a thing in the world. Not everything is data, this is... sorry, my background is as a philosophy teacher, it was many, many years ago. I have a PhD in it. I'm not pulling rank because I'm about to say is I'm a totally incompetent philosopher, was never great in it.

Alexander McCaig (00:28:58):

We're all incompetent.

David Weinberger (00:28:58):

But I'm supposed to be because I have a piece of paper and I've worked hard for it. And I was pretty good philosophy teacher, but I don't count myself as a philosopher much less as a competent or good philosopher. I'm explaining why I think about things the way that I do. And that's just there and my background, those are my interests.

Alexander McCaig (00:29:17):

Love it.

David Weinberger (00:29:18):

So when the computer revolution started in the 1950s, a little bit shorter, really started to take over as businesses started using computers in the 1950s and everything, information in the sense in which you use the term was invented in 1948, taking over a term that we still use in its old ways but it became a technical term. Everything started to look like information, everything, everything. Literature started to look like information, everything got read history was information, life itself, literally DNA, we still think of as information, but it's not, it's a molecule, it's a squiggly little molecule. It's very useful to read it as information but it's not information, it's a thing, it's a squiggly complex molecule that lends itself very fruitfully to data analysis, informational analysis, which I'm totally in favor of. I like science, I like genetics and all that stuff.

David Weinberger (00:30:25):

Data is literally something that's read off of some type of meter, whether it's a physical meter or something else. The temperature today is 75 in Boston. Okay, that's a reading off of a thermometer. That thermometer is divided in two conventional units, the degree of precision and accuracy is built into the thermometer. What we accept, I didn't give you a fraction and if I did, it probably wouldn't go past a hundredth because that's all we need, that's the data. Data is a reading off of a meter that is all that it is. And so machine learning only has data, that's what it deals with. And data in one way, it's a really... can I say crappy?

Alexander McCaig (00:31:11):

You can say whatever you want, are you kidding me? [crosstalk 00:31:13]

David Weinberger (00:31:14):

It's a really shitty representation of the world because the world is not consistent data. On the other hand, it turns out having massive amounts of this basically shitty representation of the world which depends upon what we're measuring, where and why it's totally inequitable. That's why weather is also, even though it's not about people, is subject to biases because we have different quantities of data in poorer parts of the world. And so it's less accurate.

David Weinberger (00:31:44):

So even that, because of the nature of data and the inequity of the world, turns out to be biased. So data is a paper thin metric. It turns out when you have massive amounts of it, machine learning can find correlations and correlations within correlations and many of them are spurious correlations, we hope those get overwhelmed by the better patterns. It turns out that you're able to make predictions and do classifications and do all the sorts of things that machine learning does better than humans can.

David Weinberger (00:32:17):

And that's at the cost sometimes of course of bias, we have bias in non machine learning systems as well, whenever human system. And B, sometimes in ways that we can't figure out how it's coming up with its results. So we get knowledge in the sense of oh, we can predict what the weather is pretty accurately, that's the type of knowledge, but we may not be able to explain exactly why. We are getting to the point where we can do diagnostics that are pretty accurate, accurate enough that we want to keep doing them, and we can't tell exactly how it knows it. As we get further and further into human metabolism which is a vast complex dynamic system, it is crazily complex even down to the individual cell level, just figuring out what happens when a compound touches the cell wall, sets off cascades of what people like to say with information, but if not, it's of chemicals unlike, that we can't predict it in some cases but machine learning can, and we don't know how.

David Weinberger (00:33:20):

We're going to get more and more used to the idea that the world is way more complex than we ever wanted to believe. And we're now ready to believe that because we have these machine learning systems that can take in this complex, gigantic and complex amount of information and making enough sense of it that we want to keep using the predictions of the machine learning system. But yet in many instances, we don't know how it works, and that tells us something about the world, that the world is incredibly, incredibly complex, far beyond our relatively simple ability. Maybe the best in the universe but still a relatively simple ability to make sense of the universe.

Alexander McCaig (00:34:04):

Then that would beg the question for the foreseeable future, do you see a change philosophically in how humans use AI rather than a tool, but as a third party witness and on top of that, a third party influencer in telling us how we should make our decisions because if it has the ability to observe many different end points at the same time and then refer that information back to us, do you see that actually enhancing the way we make our choices that would be beneficial to our evolution?

David Weinberger (00:34:47):

Beneficial to us, I'm not sure what to do with our evolution in this regard. Absolutely, that's why we use it, we currently use it. I use routing software maps all the time. Since I was born, I have a deficiency in my ability to navigate the real world. It is very pronounced and it's real. I have no sense of how roads connect, I can't visualize it. I use routing software all the time, I have no idea how it works. I'm using Google Maps, maybe the people at Google Maps know how it happens, maybe the machine learning system is intelligible to them. Not all of them are black boxes. But maybe it's not, I don't care. As long as it's working, I don't care.

David Weinberger (00:35:39):

Likewise, right now if I go to the doctor, for the past all of my life, I've gone to doctors and believed what they said based upon science that is a black box to me. Now, the difference is that conceivably in some universe, I could be a scientist and understand it and with some of these machine learning systems, that's not a possibility. I'll give you another example, this comes from a guy named... oh God, don't get old. When you get old, you can remember one name. It takes two people to remember both names, Michael [crosstalk 00:36:07].

Alexander McCaig (00:36:06):

It's probable that I do get old, so no cheers.

David Weinberger (00:36:11):

Michael, I just hadn't lost it. Anyway, we'll come back to him in a minute, who is a scientist. When the Higgs boson was discovered at the Large Hadron Collider, it's been probably seven or eight years ago now, it was awesome that humans are able to do this. But this guy, Michael, he's going to drive me crazy. Nielsen, Nielsen, Nielsen, Michael Nielsen, he's a complete believer, he's not like an idiotic science skeptic in this. 100% says yeah, we found it. But it's interesting to note that there is nobody, there was no person on the planet who could explain how the Collider that discovered it works. There are people who could explain subsystems, but it's too big a system for any one mind.

David Weinberger (00:37:06):

The difference between that and machine learning is, if you could find somebody who can understand each part of it, you can find an electrician who'll explain how this particular yoke works or physicist or chemist or et cetera, et cetera, et cetera, with machine learning, at this point, there are machine learning models for which there is not even a set of people who could understand it. Nevertheless, in some sense, the world has been a black box, it was forever. There is an end to our understanding in every case, we rely upon systems of authority and trust as we should and we should do that more and more. We seem to be moving away from that in some of the segments at least in the American society and that's a disaster.

David Weinberger (00:37:50):

But we operate in a world that pragmatically for us has been a black box since we started gaining knowledge. I don't know why the doctor said I should take this for cholesterol instead of that or why we think that bad cholesterol is bad for... I trust my doctor and there's a chain of authority that ends somewhere, and that's fine. So we've been in that world forever. We don't like to acknowledge it because we like to think that as with the Large Hadron Collider, that yeah, it's big but you can find specialists who understand every part of it.

David Weinberger (00:38:25):

But in fact, what's been going on since forever, almost literally forever in human history is that we have a little brain, it's like two and a half, three pounds, it's the fact and I probably got it wrong. We're just trying to understand the universe is 14 billion years old, where you're amazing at it, et cetera, et cetera, but our primary way of doing it has been to try to reduce what's going on to noble principles and generalizations.

David Weinberger (00:38:54):

Now the simplest example is Newtonian, there's Newton, a relative handful of equations explain motion, gravity motion. And they work and I'm not denying those general principles. But that's what we have counted as knowledge. The fact that we can never apply those principles or any principles in sufficient detail to claim complete knowledge, we just pass off as an interesting little quirk of the universe. So you go to apply Newton, I'm not arguing against Newton, I'm in favor of Newton's laws and all that's pretty radical.

David Weinberger (00:39:28):

If you go to apply it and you want to know when the coin will hit the bottom of the coral reef, the ground underneath the Tower of Pisa, you can get very, very accurate, but by plugging in the right variables for the height and the air pressure and mass at the earth and of the coin. But you can never, ever, ever account for every blade of grass, the springiness of the blade of grass when it drops, the variations in air pressure as a pigeon flies by and it changes, sets up a little swirl and changes the pressure, and it puts in some wind, or the pull of distant planets, of distant stars which is omnipresent, we don't factor that in not because it's not true, it's because we don't need to. We just want to know roughly when the coin's going to hit or what is a sufficient angle on and force on the cue ball to get the ball into the pocket? We don't care about the coefficient of friction of each piece of the field or the pull of each of distant stars.

David Weinberger (00:40:33):

So we have these laws and I'm happy to say yeah, there are monuments of human achievement that we have laws that we can apply, but we tend to skip the fact that we can never apply them completely accurately because we can't get all the data that affects what's happening. And that's fine, we don't need to, we stop where we need the accuracy. But that still always approximate knowledge putting up for us, amazing, we can get it. One more sentence, that has masked for us the fact that the universe is in fact so vastly complex, that everything, literally everything affects everything else all the time. Machine learning is making that chaotic interaction more obvious to us because now we get some benefit by getting a wider swath of unknowable interactions. I think I'm ranting, I'm sorry, but then [crosstalk 00:41:28]-

Alexander McCaig (00:41:30):

No, no, that's fine. You're a philosophy teacher, so I want to look at-

David Weinberger (00:41:33):

Was, was, was, was, was, was.

Alexander McCaig (00:41:33):

Was, was? You're a master of philosophy.

David Weinberger (00:41:37):

Yes, I am the world's greatest philosopher. Please, please quote that on Twitter. [crosstalk 00:41:42] My lawyers will call.

Alexander McCaig (00:41:46):

If we look at these AI scientists, AI engineers as being the teacher, and I want you to answer this from a philosophical point of view, and we look at the AI as being an individual, a super smart genius, a child and it's our responsibility to teach it, what are some of the things that you think from a philosophical view, it's a responsibility that we have to take? And I know we've talked about limiting bias and stuff, but is there anything else that you're seeing that we could change our course, especially with the Googles and the Facebooks and everything that they're doing with marketing and AIs? Because that leads the most ROI. But what are some of the things that this teacher can do to put us on the right course for teaching this super genius AI?

David Weinberger (00:42:33):

It's again a really important question, which is, I'm just buying time, that's all. Although it is true, really important question. On the one hand, I'm very pragmatic about it because we're putting aside for the moment the issue of bias, we're taking it for granted that that's going to be central in this. If it works, then I'm in favor of it, which is to say if it's a medical diagnostic system and we're just going to imagine, it's not biased, it's a separate problem. Amazingly, it's not biased. And it is able to predict the onset of you name it, Type 2 diabetes which we can do something to prevent.

David Weinberger (00:43:23):

And if it turns out empirically that it is as accurate as it says it is and maybe it's 98%, somehow we're not close to that now, maybe it's 75%. Whatever it is, if it's accurate within its stated range, then it's working. And in some sense, we know stuff that we didn't know. We know I have a 75% chance and I can change my decisions, I can make decisions based on that. I could say I don't care or I don't believe it or I could immediately cut out all sugars and carbs, et cetera, et cetera, or something in between.

David Weinberger (00:44:07):

I can go for further tests. I could wait and see, I could get more high blood tests, have my blood sugar tested more often. I have full range of things I can do. So if it works, it works. If autonomous vehicles become fully autonomous and the highways are filled with them, and I don't believe this statistic, but who knows? The standard statistic is that it could drive down traffic fatalities by 92%. Drives them down by half. That's a lot of saved lives, it's about 20,000 lives a year.

David Weinberger (00:44:40):

I feel like I don't have to know how it works so long as it's doing that and so long as not all the lives or white lives, let's start with that or people in the expensive cars, which is going to also skew ethically racially [inaudible 00:44:54]. And so long as it's failures or ones that we can manage, with a really, really vague because the choice will be, would you like to save 20,000 lives a year or would you rather kill 28,000 people who don't have to die? Kill 20,000 people in order for us to have a sense that yeah, we don't know how this thing works. I'm going to go for how to black box it, given the other things. There's also a possibility that machine learning will discover relationships that we do understand and oh, wow, didn't know that, at which point we have learned from it and that's always good.

David Weinberger (00:45:48):

And then for me especially in the Everyday Chaos book, the most important thing that it teaches us I think is not necessarily this or that new relationship, but the fact that it does these useful things and thus is doing a better job of reflecting our world than our prior processes then, by taking in more data without any presumptions about how that data goes together and finding intricate, sometimes quite minor relationships among the pieces that turn out to be reliable guides to what's going to happen. And that gives us a very different picture of how the world works than the one that we've had for a long time, which says the world is a set of swirling dust, that we don't really much care about it, except that it's controlled by a set of laws which we humans happen to be able to understand, what a coincidence!

David Weinberger (00:46:52):

Well, once you get used to machine learning and the fact that we sometimes don't know how it works, I find myself instead of thinking, oh yeah, so the world actually is insanely complex and now we have a tool that's able to find some of the patterns in that complexity. And it does so without being told what the general principles are, you just give the data, you don't tell it what we think the relationship is among the data. I'm actually going to come back to that in a second, because it's amazing astounding fact. It does it simply on the basis of data without generalizations being put into it, and it frequently doesn't have generalizations that come out of it. We know a good chance it's going to rain tomorrow, but there's no generalization about swirling massive heat in Louisiana and how that's affecting the cold spell and moving lender or whatever.

Alexander McCaig (00:47:44):

And this is interesting here. AI is as good as the amount of inputs it can receive. That's just what it is. Doesn't do well when it has very little inputs.

David Weinberger (00:48:04):

Well, it needs sufficient input.

Alexander McCaig (00:48:06):

It needs a sufficient amount for the algorithm to even process, right?

David Weinberger (00:48:10):

To come up with accurate enough results.

Alexander McCaig (00:48:11):

Correct.

David Weinberger (00:48:13):

And that's a moving target, right?

Alexander McCaig (00:48:13):

Absolutely. Do you and I both know that's a very moving target also depends on what you're looking to analyze? Now you used the term universe many, many times, and our understanding as human beings in relation to the universe is from what we see. Now, my understanding of physics is that 95 to 99% of all matter is actually unseen, but it envelops that space. So how is it that we come to understand our universe in chaotic systems when it only accounts in the material sense for the smallest amount in the totality of life and existence itself? Is AI then beneficial at dealing with the fact of not having inputs or knowing where to look when it doesn't have information that it needs for its own inputs? Can or do you see itself guiding itself to look to the 99% of the unseen material universe?

David Weinberger (00:49:26):

If humans can't see it and our tools can't see it, then neither can machine learning of course. I don't know anything about physics, so...

Alexander McCaig (00:49:34):

No, no, no, but you're on a good philosophical point here, so where does then that lead us to our understanding, if we are defining principles of this world strictly on the minimal amount of our universe that we can see? Or for instance, our societies?

David Weinberger (00:49:50):

Why do you think humans are entitled to understanding?

Alexander McCaig (00:49:56):

Now, this is a very interesting question.

Jason Rigby (00:49:58):

That's a great question.

Alexander McCaig (00:50:00):

It's a very interesting question. Understanding allows us to evolve and if we lack the understanding of the why or the drivers in a system of cause and effect within our own mind, which is immaterial and then moves into our material world, and if we lack the understanding for how that process occurred, how are we supposed to evolve? We would all be these hedonistic, self-absorbed, not worrying about the climate, not worrying about others, just moving about our existence.

David Weinberger (00:50:32):

I'm losing it somewhere, I'm sorry.

Alexander McCaig (00:50:38):

David-

David Weinberger (00:50:38):

Go ahead, yeah.

Alexander McCaig (00:50:41):

The right to understanding is a choice, understanding is a choice we want to take through our own responsibility. I could listen to you on this podcast and choose to not understand a word you said, or block it or wall it off in my mind, but I've taken on the responsibility to listen as perceptibly as I can to what you have to say so that I can understand the perspective coming from you. Because you to me as a stranger in some senses, are an enigma within your own thoughts and mind, I only knew you through the book and now I get to know you through your thoughts and this interaction here.

Alexander McCaig (00:51:20):

So it's not that I have a right to understanding, I have a right to my life and my thoughts but I have the choice to understand. And through that choosing to understand, will allow us to define your evolution and how much we want to take the responsibility for that understanding. So I am curious if AI can deal with that fact about driving the responsibility for human beings to want to understand, to show us a world we did not see before, to bring out the blooms of complexity, to look at our own minds and understand this very spacelessness and timelessness of our thoughts and see how that applies to our physical, so we can understand. And if I am very egotistical, self-absorbed, self-centered, then I would like to understand what happens within my system of me picking up this coffee mug and putting it down? Or how I have this interaction or how I approach you.

David Weinberger (00:52:24):

You're asking a lot of the computer, which, as a computer, does not understand anything. It is a series of switches. It is not conscious, it's not partially conscious, it doesn't understand; it's a set of switches that manipulate other sets of switches, and those switches represent data. Your choice to understand the world or not to understand it, to be a selfish asshat or not, that's not on the computer, that's up to you. And I'm not even sure it's up to you, but it's certainly not on the computer. I don't know who it's up to, because if it were up to people, then they wouldn't be selfish. Well, I don't know. Anyway.

David Weinberger (00:53:08):

The thing that I am most excited about with AI is not the advances it gives us in our ability to predict, classify, and categorize things, which can be very harmful but can also be extremely beneficial in medicine, in climate change for sure, in weather itself. Let me even set climate aside for the moment. Weather itself is one of the most complex systems; literally everything on the planet affects the weather. Likewise with the climate: everything in the universe affects the climate, including the seasons and the stuff that's very close at hand within the solar system.

David Weinberger (00:53:53):

So machine learning's ability to get me places, literally, to route me, or to analyze... it's used quite widely in climate science to provide models that help us see not only what is happening and what is likely to happen, but also to play "what if" with the climate. Say, what happens if we were to melt the ice caps? Which is probably not a good idea, by the way. That stuff is hugely, hugely important, really exciting; that's where a lot of the excitement about AI comes from.

David Weinberger (00:54:37):

That, and the fact that the misapplication of it can be used to suppress human judgment and to enforce racism and inequity, for example when it's used in the judicial system for sentencing, which is a really, really bad idea in my opinion. The thing that's really exciting to me about AI, and I think it's aligned with what you're saying, which you will tell me, is... can I give you a slightly extended example?

Alexander McCaig (00:55:04):

Yeah, by all means. Come on, give it to me, I'm right here. Throw it at me.

David Weinberger (00:55:11):

One of the standard markers of when machine learning actually started to take off, when it became a really important thing, was around 2011, I think, something like that. Facts, facts are for losers, if people remember these things. It was the ImageNet competition, in which computers were set loose on a set of images and asked to identify them, to classify them into various categories. And the standard way of doing it, which had been getting pretty good, was to program the computer so that it would recognize particular things in images. So, and I'm going to make this up, if it's classifying animals, you teach it that if there's a lot of contiguous gray and a curve, a white curve, that's probably an elephant.

David Weinberger (00:56:05):

But if it's blobby and soft, or if it's got wings, maybe it's a bird; you'd have to tell the computer what to look for. And with enough work, it gets pretty good at that. Then a team said, oh, you know what? Let's just gather up lots and lots of data and let's not tell it what we know about how things are classified. We'll just give it images, which are collections of pixels, colored dots in a grid, and we'll say: this set of pixels is an elephant, this one is a rhino, this one is a bird, et cetera, et cetera. We won't tell it anything we know about animals; we'll just label the stuff.

David Weinberger (00:56:47):

We know elephants are gray, we know zebras are striped; let's not tell it that. Let's just let it figure out on its own what it thinks is important. And it did, and it substantially beat the best of the programmed approaches. That's the amazing thing about machine learning: it is still almost always the case that we withhold from the data we give it to learn from what we know about how the data goes together. When Mount Sinai gave 700,000 health records to a prototype they called Deep Patient, 500 different categories of information, all of which is data, all numbers, they didn't tell it, oh, these are symptoms and these are medicines and these are outcomes and these are weights and these are ages. They said, here's a bunch of numbers. And not only won't we tell you what they are, we won't tell you what we know, what we humans know about the relationships between symptoms, medicines, and outcomes. You're not getting any of that, computer. And these systems started to do better than humans can.
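
To make that contrast concrete, here is a minimal sketch in Python. It uses tiny synthetic "images" rather than the actual ImageNet or Deep Patient data, and the rule, features, and model choices are all invented for illustration: one approach applies a hand-coded rule about what to look for, the other hands a classifier nothing but raw pixel values and labels and lets it work out what matters.

```python
# Minimal sketch: hand-coded rules vs. learning from raw labeled pixels.
# Everything here is synthetic and illustrative; it is not the ImageNet
# or Deep Patient setup, just the same idea in miniature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_image(label):
    """Toy 8x8 'image' flattened to 64 raw numbers. Both classes have similar
    overall brightness, but the bright patch sits in a different corner."""
    img = rng.normal(0.3, 0.05, size=(8, 8))
    if label == 0:
        img[0:3, 0:3] += 0.5   # bright patch top-left
    else:
        img[5:8, 5:8] += 0.5   # bright patch bottom-right
    return img.clip(0, 1).ravel()

labels = rng.integers(0, 2, 1000)
X = np.array([make_image(y) for y in labels])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

# Approach 1: a hand-coded rule ("lots of overall brightness means class 1").
# It encodes what we *think* matters, and here it misses the real distinction.
rule_preds = (X_test.mean(axis=1) > 0.35).astype(int)
print(f"hand-coded rule accuracy: {(rule_preds == y_test).mean():.3f}")

# Approach 2: give the model only raw pixels plus labels, no hints about
# what to look for, and let it work out which pixels actually matter.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"learned-from-pixels accuracy: {model.score(X_test, y_test):.3f}")
```

In this toy, the rule built from our prior assumption lands near chance, while the model that was told nothing about the images finds the distinguishing pixels on its own, which is the flavor of the shift being described here.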

David Weinberger (00:57:44):

This is the most astounding thing about it, I think: not only does it not rely on the kind of knowledge we had developed over thousands of years, which is about generalizations, aspirin cures headaches, zebras have stripes; you suppress that knowledge and it does a better job. What a slap in the face. What a slap in the face to thousands of years of human inquiry. What it tells us is that the universe is all about the particulars, the patterns of details in the particular. And for me, this is very ennobling about life.

Alexander McCaig (00:58:19):

This is my favorite part about life, and you are articulating it wonderfully well. You talk about zebras and Deep Patient and the Egyptians chewing on willow bark and how that leads to Bayer aspirin. I look at this and I understand that there are very specific, intricate, complex characteristics of human beings. Most of the time they go unnoticed, or we generalize ideas about human beings by putting them in buckets, because it helps people simply define something, but it does so with a complete lack of understanding. So I love the fact that AI can help us get past the bias we've had for thousands of years, depending on our religions, the way we think, the way our parents brought us up, the way we've artificially or synthetically viewed our world. This will afford us the opportunity to see that we can be absolutely, wonderfully unique human beings, characteristically, in a complex system, and to look at that directly and in a very objective manner. That excites me.

David Weinberger (00:59:28):

Good, it excites me too [crosstalk 00:59:29]-

Alexander McCaig (00:59:29):

You and I are going to parade through the streets with our clothes off.

David Weinberger (00:59:34):

I'm going to find a difference between us in a minute. I haven't taught philosophy since 1986, so I'm very much an ex, but this is exactly what philosophy teachers do: they get agreement, and then they say, well, let me throw this at you, and then they drive at it. So they basically teach people how not to believe things; it's a problem with the profession. So, A: part of this is that we are so complex, and our ideas, our senses, our intuitions are based so much on the interrelationships of small pieces that we may not be able to identify, that we may not be able to analyze or understand why. I don't trust that guy, and I can't tell you why. We may be wrong; intuition can be terrible; we may be wrong. It'd be nice to be able to say, oh, you know why? It turns out I once didn't trust somebody who had the same ear lobes. It could be that.

Alexander McCaig (01:00:39):

Yeah, shame on you, Jason.

David Weinberger (01:00:40):

My problem was that. This guy has ear lobes.

Jason Rigby (01:00:46):

If you saw these ear lobes, you would have [crosstalk 01:00:48].

David Weinberger (01:00:50):

So it would be nice to be able to know these things, but it may simply be too complex. And B: machine learning in general, I think, teaches us this. Its ability to give us anything like an individual understanding is pretty much non-existent and may never exist. The general way it works is a slap in the face to the idea that humans can and should and must understand everything, where understanding means reducing the particular, finding which generalization a particular fits under. That's where the slap in the face is. The idea that it's going to help me identify suspicious people is probably something we don't even want it to do, because it's going to be so biased [crosstalk 01:01:37]-

Alexander McCaig (01:01:36):

But do you see ear lobes on that collective? That's suspicious.

David Weinberger (01:01:41):

There is this famous example of somebody feeding in Facebook photos of, I think this is the way it works, the portraits of people who identify their sexual orientation, their preferences. And it claimed to have gaydar, to be able to identify, based on their photos, who's gay and who isn't. And this caused some concern, because especially where gay rights are suppressed, it's a freaking nightmare to think that CCTV cameras in public might start identifying people as gay, and that they'd be oppressed because of it. So there was a great deal of pushback on it because of the potential for misuse, but also because it seemed like really bad science, and it in fact turned out to be based upon irrelevant features, not upon the set of the face or somebody having a "gay nose" or whatever ridiculous thing.

Alexander McCaig (01:02:38):

We've seen this before with the Nazis, what they would teach: here's a Jew, here's a Gentile. And I don't want to say it, but a lot of that study did come from Harvard University, which was adopted, but that was the model. It's like, let's [crosstalk 01:02:52] file, and then we're going to use that to define who a person is. That's a dangerous road to go down.

David Weinberger (01:02:58):

Even though there are some safe generalizations about Jewish faces, I own one myself. Still, it's not something we ever want to do, and we certainly don't want to trust a machine that may not be capable of being interrogated about how it came up with its identification, and then apply sanctions based on it. There are places, for example in the judicial system, where sentencing-recommendation and bail-recommendation software is still being used despite evidence that it's not great. And even if it were pretty good, pretty accurate at predicting who's going to be a recidivist, who's going to be sent back to jail, or who's going to jump bail, I think we generally believe, in the judicial system, in credit scores, and in other areas, that the right to know how a decision was made is more important than the decision being as accurate as possible.

David Weinberger (01:04:02):

For example, with FICO scores, the companies that do credit scoring are constrained by law in the U.S., and thankfully so I think, not to use machine learning; to evaluate a relative handful of factors; to rigorously exclude any information about protected classes like religion, gender, sexual orientation, et cetera; and to always be able to give you a reason why you were rejected for the loan, say, where the reason has to be something you can do something about. "Well, it's because you're Black" is clearly unfair, and you can't do anything about it. "You live in a poor part of town": you can't really be expected to move to a nicer part of town, so that's not actionable.

David Weinberger (01:04:45):

But "it's because your credit card payments have been late for the past few months", that seems to me completely reasonable. The very same companies are using machine learning to spot credit card fraud, where none of those constraints hold, but I think that's probably okay, because it initiates an investigation, they send a warning to the user, and there are various forms of remediation. I may be wrong about that, but it seems way, way better than using machine learning for credit scoring.
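
A minimal sketch of the kind of constrained, explainable scoring described here, with hypothetical factor names, weights, and cutoffs invented purely for illustration (this is not how FICO actually computes anything): a handful of factors, no protected-class information, and every denial tied to a reason the applicant can act on.

```python
# Illustrative scorecard, not a real credit model: a few transparent factors,
# each with a fixed weight and an actionable piece of advice attached.
FACTORS = {
    # factor name: (weight per unit, actionable advice if it hurts the score)
    "missed_payments_last_6mo": (-40, "Bring credit card payments up to date."),
    "credit_utilization_pct":   (-1,  "Pay down revolving balances."),
    "years_of_credit_history":  (+5,  "Keep existing accounts open and in good standing."),
}

BASE_SCORE = 700
APPROVAL_CUTOFF = 650

def score_applicant(applicant: dict) -> tuple[int, list[str]]:
    """Return (score, adverse reasons). Each reason points at something the
    applicant can actually change, never at who they are or where they live."""
    score = BASE_SCORE
    contributions = []
    for name, (weight, advice) in FACTORS.items():
        delta = weight * applicant.get(name, 0)
        score += delta
        contributions.append((delta, advice))
    # Adverse reasons: the factors that pulled the score down the most.
    reasons = [advice for delta, advice in sorted(contributions)[:2] if delta < 0]
    return score, reasons

applicant = {"missed_payments_last_6mo": 2,
             "credit_utilization_pct": 60,
             "years_of_credit_history": 4}
score, reasons = score_applicant(applicant)
decision = "approved" if score >= APPROVAL_CUTOFF else "denied"
print(decision, score, reasons)
```

The design choice is the point: a model this small may well be less accurate than an unconstrained learner, but every output can be explained and contested, which is the trade the law makes on purpose in this domain.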

Alexander McCaig (01:05:16):

Yeah.

David Weinberger (01:05:16):

And we as a culture, have to make those decisions.

Alexander McCaig (01:05:24):

We as a culture have to make those decisions, the AI will not make that decision for us.

David Weinberger (01:05:29):

No.

Alexander McCaig (01:05:30):

Thank you. It's funny how we end up at these thoughts, it only takes an hour.

David Weinberger (01:05:37):

So this is basically a therapy session, the doorknob moment. And by the way, I've got to mention that in the dream, the woman who was seducing me was wearing my mother's clothing. [crosstalk 01:05:46]

Alexander McCaig (01:05:50):

I've got to put that into a machine learning model; something's not going to turn out well. I'm sensing an Oedipal issue. David, it's truly been fantastic. Actually, we're going to need to bring you back on, because there's so much to unpack here, and if you're comfortable doing it, we'd love to have you back. And as we close this up, I'd like you to share with us, and with the rest of the world listening, one thing you could leave them to think about for their future. What would you leave them with that would help guide them toward understanding?

David Weinberger (01:06:38):

This requires wisdom. I'll say something AI-related, or something that AI teaches us, which is: pay attention to trade-offs in every moral decision, especially including ones about fairness. One of the things that machine learning is teaching us, and we talk about this all the time, is that all of the values we hold are subject to trade-offs, and they can be really, really painful trade-offs. So don't hold on too tightly to any one value; think about how you may have to give up some of it in order to support other very important values.
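
As a concrete, if artificial, illustration of that point, here is a small Python sketch with synthetic data and made-up numbers. Two groups have different base rates of being qualified; a single accuracy-oriented threshold then selects them at different rates, while per-group thresholds tuned to equalize selection rates give up some accuracy. Neither policy is obviously "the fair one"; you have to choose which value to trade away.

```python
# Synthetic demonstration of a fairness trade-off: equalizing selection rates
# across groups vs. maximizing overall accuracy. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

group = rng.integers(0, 2, n)                        # group 0 or group 1
base_rate = np.where(group == 0, 0.6, 0.4)           # different qualification rates
qualified = rng.random(n) < base_rate
score = rng.normal(np.where(qualified, 0.7, 0.4), 0.15)  # same score model for both groups

def evaluate(thresholds):
    """Accept applicants whose score exceeds their group's threshold."""
    accepted = score > thresholds[group]
    accuracy = (accepted == qualified).mean()
    rates = [accepted[group == g].mean() for g in (0, 1)]
    return accuracy, rates

policies = {
    "single shared threshold":   np.array([0.55, 0.55]),
    "equalized selection rates": np.array([0.59, 0.51]),  # tuned by hand for this toy
}
for name, t in policies.items():
    acc, (r0, r1) = evaluate(t)
    print(f"{name}: accuracy={acc:.3f}, selection rate group0={r0:.2f}, group1={r1:.2f}")
```

The exact figures are arbitrary; the shape of the result, that you cannot maximize both goals at once here, is the kind of trade-off being described.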

Alexander McCaig (01:07:17):

I actually have... Okay, you just gave me goosebumps.

David Weinberger (01:07:22):

I guess so long.

Jason Rigby (01:07:25):

David, how would people find out? I encourage everyone to buy the book, but how-

David Weinberger (01:07:29):

What is the name of that book by the way?

Alexander McCaig (01:07:30):

Everyday Chaos.

Jason Rigby (01:07:32):

Everyday Chaos. I love the cover, it's beautiful. But he's got the rest of the book, this is [crosstalk 01:07:38]-

Alexander McCaig (01:07:37):

Yeah, yeah, yeah, actually [crosstalk 01:07:39]-

Jason Rigby (01:07:39):

He's got it all full of little notes and stuff, but how can somebody find out more about you?

David Weinberger (01:07:46):

Weinberger.org.

Alexander McCaig (01:07:48):

Okay, Weinberger.org.

Jason Rigby (01:07:51):

We'll put that in the show notes.

Alexander McCaig (01:07:52):

We'll make sure to put that out there. David, listen, thank you again for this and we do look forward to having you again on soon.

David Weinberger (01:07:58):

Thanks for the questions and for your openness in conversation.

Alexander McCaig (01:08:02):

Most definitely.

David Weinberger (01:08:02):

I appreciate that. Okay.

Speaker 4 (01:08:11):

Thank you for listening to TARTLE Cast with your hosts, Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. And what's your data worth?