May 11, 2022

Are We Really In Control Of Tech Development and Innovation? The Truth With Bernd Stahl
BY: TARTLE

Here are some issues worth mulling over when it comes to the development of AI: data protection, biased decision-making, employment implications, the social impact on the environment, and the future of warfare.

What are your thoughts regarding the impact of tech innovation on humanity? Don’t you think you deserve to have a say when it comes to how technology is being developed?

In this episode, Alexander McCaig welcomes Bernd Stahl back on the TCAST. The pair navigate some important questions regarding the ethics of artificial intelligence and the impact of its progress.

The Trickle-Down Effects of AI and Tech

One issue Alexander raised concerns the top-down, hierarchical approach that organizations of all kinds have used throughout human history. This approach usually backfires because all the power to decide is concentrated in the hands of an elite few, leaving the people who form the base of the hierarchy dependent on them.

However, AI could be developed in a way that lets resource-holders directly analyze the general public, using ethically sourced data to power their algorithms. Through direct interaction with the communities they want to research and develop, the benefits of future technology can better trickle down to those who need them the most.

But this system also poses risks, especially when the people being studied have no say in how much data is taken from them. This leads Alexander McCaig to raise the idea of allowing people to choose which data can enter these systems and be analyzed.

Discussing the Deadlock in AI and Ethics

To this, Bernd Stahl points to the magnitude of such an initiative. While the European Union is trying to achieve it through the General Data Protection Regulation (GDPR), he notes that it can be difficult for an individual data subject to communicate their preferences to data collectors. In addition, data regulation needs to account for changes in consumers' preferences over time, because as humans, we aren't very consistent in what we want.

Due to the complexity of this relationship, history has developed in a way that allowed companies to accumulate the resources and the power to benefit the most from AI. There is strong recognition that this imbalance is problematic: it is inequitable, it does not help humanity evolve or advance equality, and it does not give the disenfranchised the best chances to succeed in life through technology.

So Alexander McCaig raised the question: what if we could work towards a collective decision on how we define the ethics and outcomes of AI systems? If we agree on our vision of the future and focus our efforts on it as a collective, we could better inform AI algorithms and help direct them along a clearer path, one that favors humanity.

However, Bernd Stahl pointed out that vast differences in preferences and moral stances can make it difficult for us to agree on one clear direction. 

Closing Thoughts

AI is capable of helping us; this is undisputed. One area where Bernd Stahl believes AI can be put to its best use is medicine. Personalized medicine, for example, can look at an individual's genome sequencing and medical history and run them against other samples. This could empower scientists with the information they need to tailor a specific combination of pharmaceuticals, giving individuals bespoke treatment.
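To make that matching idea concrete, here is a minimal, hypothetical sketch in Python: find the past records most similar to an individual's genomic markers and medical history, then rank treatments by how well they worked for those similar patients. Every name, field, and number below is an illustrative assumption, not TARTLE's system or any real clinical tool.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    markers: set               # genomic markers, e.g. {"BRCA1", "CYP2D6*4"}
    history: set               # medical-history flags, e.g. {"migraine"}
    treatment: str = ""        # treatment received (empty for a new patient)
    outcome: float = 0.0       # 0.0 = no effect .. 1.0 = full recovery

def similarity(a: PatientRecord, b: PatientRecord) -> float:
    """Jaccard overlap of the combined marker/history sets (1.0 = identical)."""
    sa, sb = a.markers | a.history, b.markers | b.history
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def rank_treatments(patient: PatientRecord,
                    records: list[PatientRecord],
                    k: int = 5) -> list[tuple[str, float]]:
    """Rank treatments by average outcome among the k most similar records."""
    nearest = sorted(records, key=lambda r: similarity(patient, r),
                     reverse=True)[:k]
    outcomes: dict[str, list[float]] = {}
    for r in nearest:
        outcomes.setdefault(r.treatment, []).append(r.outcome)
    ranked = [(t, sum(v) / len(v)) for t, v in outcomes.items()]
    return sorted(ranked, key=lambda x: x[1], reverse=True)
```

A real system would use far richer models than this nearest-neighbor toy; the sketch only shows why large pools of consented, combined records are the raw material personalized medicine depends on.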

But are we maximizing the potential of our technologies, or allowing those seated in power to take a hefty cut of the benefits for their vested interests?

Bernd Stahl likened the creation of AI to the discovery of fire. If we could ask the first person who figured out what fire was to predict how vital it would become to the evolution of humanity, they would have no idea how to respond.

We can only work on what we know. This is why Bernd Stahl recommends looking at AI with continuous reflection and assessment. We need to be capable of seeing how AI is developing and providing our feedback. 

And that’s why your data is worth billions.

Sign up for the TARTLE Marketplace through this link here.

Are We Really In Control Of Tech Development and Innovation? with Prof. Bernd Stahl by TARTLE is licensed under CC BY-SA 4.0


Feature Image Credit: Envato Elements

For those who are hard of hearing – the episode transcript can be read below:

TRANSCRIPT

Alexander McCaig (00:08):

Hello, Bernd. Welcome back to TARTLEcast. I'm super happy to have you back, my friend, and I want to dive into a hot topic for everybody. And I know recently you had a pretty decent work come out on artificial intelligence and ethics. So what I want to do is address that central question posed in the book itself that you wrote, and that's around: how is it that we still receive those tangible benefits of artificial intelligence but maintain a focus on a very high quality of ethics? And so, to get into that, it would be great if you could define two things for me here. One would be... or three things. One would be: how do you see artificial intelligence? How do you define what a benefit is from artificial intelligence? And how do you define the ethics?

Bernd Stahl (01:00):

Right. So, firstly, thank you very much for inviting me again.

Alexander McCaig (01:05):

Yeah. We're [inaudible 00:01:06] happy to have you here.

Bernd Stahl (01:06):

I'm delighted to be here. So, I mean, these are three big and different questions. Maybe start with defining artificial intelligence. What is it? Right?

Alexander McCaig (01:18):

Mm-hmm (affirmative).

Bernd Stahl (01:21):

I think one of the challenges of the ethics of AI debate is this very question of the definition of AI. AI, of course, if you look at it historically, has been around since the 1950s; there's nothing particularly novel about it, and it's traditionally been seen as part of computer science. It was a branch of computer science. There have been long debates around the ethics of AI for decades. So again, there's nothing very new. I think what's new is that recently we've seen significant successes in AI applications that actually made a difference.

Bernd Stahl (01:56):

And those seem to have something to do with the combination of the availability of computing power, better and more refined algorithms, and the large amounts of data that are around, all of which combine to allow the successful deployment of particular types of machine learning: artificial neural network driven machine learning approaches. So I think that's really what the core of a lot of the AI debate is, but that is not all there is to AI. There are further fields of AI which aren't really covered by this, and I think the question of the delimitation is an interesting one.

Alexander McCaig (02:32):

Yeah.

Bernd Stahl (02:34):

So the second part was, how do we retain the benefits-

Alexander McCaig (02:40):

Yes, right.

Bernd Stahl (02:40):

... and address the issues? Well, then, the benefits are highly visible and there's a lot of discourse on how we can achieve them. And this is something where most governments in this world, certainly in the richer countries, have jumped on the bandwagon and everybody says we need to promote AI, we need to invest in AI, we need to invest in education on AI. And the idea is that this will lead to mostly financial benefits. So this is supposed to benefit companies who then can do stuff with it, which then will trickle down in some way to the rest of society. I think that's sort of the baseline narrative.

Bernd Stahl (03:23):

At the same time, we are aware that this may raise all sorts of issues from data protection, from biases being introduced into decision-making, employment implications, to the big social questions of what that would do to the environment, how does that change the future of warfare, all the way to the futuristic scenarios of what do we do if we have a properly autonomous machine, right?

Alexander McCaig (03:50):

Right.

Bernd Stahl (03:50):

And I think that's a whole bunch of different things brought together in this discussion.

Alexander McCaig (03:56):

So, when I look at this, we've seen in the past that when we take a lot of top-down approaches, hierarchical approaches to our systems, it usually backfires on us. Even in, say for instance, financial markets, or when someone creates nationalistic societies, or one person has the power, there's generally this rattling that happens in the supporting foundational base, with the people themselves.

Alexander McCaig (04:26):

And when I look at this artificial intelligence systems idea that they have, where if we develop this and then really enable the resource holders to come in and analyze the general public, just using them for their own algorithmic decisions, and they can grab whatever data they want, and it'll say, well, there's great benefit, we can deliver to the people through the trickle-down effect. I think it lacks the interaction that is required from the people that are actually supporting those systems.

Alexander McCaig (05:01):

So the way I see it, you need a human being coming in and saying: it's all well and good that there's efficiency in computing power, that we can better predict with these models through this synthetic neural network that we're creating. But I also want to say that I don't want you to analyze certain facets of my life. It's all great that we can have efficiencies, but at what cost to somebody's choice?

Alexander McCaig (05:29):

And I think that comes down to one of the issues with the ethics: I should be able to choose which data I want to allow to go into those systems to be analyzed. So the collective of society, the way I see this, wants to come in and say, we are coming together, and we want to be much more transparent with our choices, our behaviors, and our decision-making.

Alexander McCaig (05:49):

And we want to willingly put that into these systems so that the people that do have the resources can analyze it, and then we can receive that benefit. That's a little bit different of a story than someone saying, well, look at all this free information that's flying around, AI's becoming much more efficient, or computing power's increasing.

Alexander McCaig (06:07):

So if we can just ingest more information and collect more of it all the time on people before they even know it, we can start to get some sort of arbitrage effect with our economic efficiencies, and then later on, society will probably benefit. But at the cost of what I would say would be the data rights of that individual. So I think there's a little bit of a to-and-fro here between the resource holders and the ones that actually help generate the resources for those resource holders. Do I seem off with that?

Bernd Stahl (06:36):

Yeah, no, no. I pretty much agree. I think the way that you've described it is really very much along the lines of what the European Union wants to achieve. So this idea that as the data subject you have a say over what happens to your data, and that you benefit from whatever is done with it. And certainly where I live, this is probably fairly uncontentious. I think it's more a question of the practicalities now: how do you actually do this? How do you do this as an individual data subject?

Bernd Stahl (07:05):

How do you communicate to the people who want to do something with your data what your preferences are? That is extremely difficult. And also, I mean, there are a lot of studies showing that we as individuals are not very consistent, right? So we may feel very precious about our privacy in some situations and much less so in other situations.

Bernd Stahl (07:24):

And dealing with these sorts of practical questions is exceedingly complicated. And as a consequence, well, maybe not as a consequence, but maybe that's just the way it has historically developed: the companies that have the resources have been able to benefit from AI. And there is a strong recognition that this can be problematic for reasons of equity, of distribution, of equality of chances in life. But the question then is, well, what do you do about it? Right. And that is something that is much more debated at the moment.

Alexander McCaig (08:02):

Yeah. And it's like, what do you want to do with it? It's like a double-edged sword, right? We can create a sword and cut down the crops in the field, or the brush, and then actually lay crops and turn it into a farm, or we can use it to create harm. And I think that at the core, before the artificial intelligence algorithms are actually employed, you have to take that sort of foundational look at the ethics.

Alexander McCaig (08:34):

And I think there has to be a collective decision on how we all define the ethics and how we want the outcome of this system to actually work. Because when we look at this sort of testing that people do, even the way artificial intelligence is designed, or deep thinking systems, anything like that, they don't know what the outcome is. They just want to try and drive their way to the most efficient outcome. And I think they haven't considered what the vision of the future is first; they're just essentially letting this thing efficiently decide what gets us to whatever point it decides is most efficient.

Alexander McCaig (09:09):

But maybe that most efficient point isn't necessarily best for a human being, or a collective, or a nation. Right? So let's look at this example: I want to create an artificial intelligence system, and I say, "I want to solve the climate issue." Well, the system would just say, "Well, just get rid of human beings," right? If you just left it to its own devices and it analyzed all the data, it would say, "Well, human beings are the first issue. That's it, hands down, get rid of them." Well, that doesn't work. But I think that sort of idea of how those systems actually operate is applied everywhere, because no one's looking at the more humanistic element that is really missing from the development of that AI.

Alexander McCaig (09:51):

And so when we talk about receiving that tangible benefit, well, the Earth would receive that tangible benefit, but we would receive no benefit from that sort of analysis. And that metaphor can be applied to so many other systems. So I think that we have to focus, as a collective, first on what our vision of the future is, and actually tell these artificial intelligence algorithms, by programming it in, what our level of ethics or morals actually is.

Alexander McCaig (10:16):

What we define as that level and what we shouldn't trample on. To say, for instance, "I want to solve the climate problem and I want to protect human life," right? Now there's a total difference in what's going on here. But we have to define from the get-go what it is that needs to be solved, rather than the thing telling us over time what needs to be solved. Are you following?

Bernd Stahl (10:38):

I would disagree with your example because [inaudible 00:10:42] I think it always overestimates what an AI can do. Solving the climate issue is such a complicated question; if it can be solved at all, it's certainly not something that current AI could do. The current AI machine learning algorithms that we see at the moment are very good at certain things, clustering, pattern recognition and so on, but they're not anywhere near understanding a question. You could not communicate with them in natural language in a way that they would actually understand, right?

Bernd Stahl (11:13):

So whether that will ever change or not is a different question, but certainly at the moment they're nowhere near that. The typical question you would ask such a system is: is this particular sample from a pathology slide cancer or not? Right? That's the sort of thing they can deal with.

Bernd Stahl (11:31):

And there, I think the issue is how that relates to the ethics and collective preferences. The other point, where I agree with you in principle: it would be nice if society collectively could decide where we want to go and then use technology for that purpose. But I think, practically again, it's very unlikely that we're going to get there, because we do have different preferences. We do have different moral stances.

Bernd Stahl (11:58):

We have different things that we would like to see achieved. So getting this consensus as to what society should be doing, I think that's where it falls down. Even if we say, well, society should be equitable and distribution should be just, we wouldn't agree on what exactly that means in practice. And that's, I think, where again it becomes difficult in practice.

Alexander McCaig (12:22):

So then... well, that would naturally lead to a limitation on how far you can develop artificial intelligence, because you can make systems really a hundred percent efficient, but there will be a human cost in the end, later on, much further into the future, in terms of the development of AI itself. And I think when we separate both of these things, that's where we're going to start to come into a little bit of trouble here.

Alexander McCaig (12:51):

And "artificial intelligence" is a little bit of a misnomer; I probably should have labeled that first. It doesn't really understand the language of what we're talking about. It does pattern recognition, right, over a very large sample set, which goes back and re-refines through some sort of Bayesian algorithm to say, oh, we're going to keep fixing our curve.

Alexander McCaig (13:09):

So it's more accurate in its predictability over time. It doesn't know what you and I are actually saying. It doesn't feel or intuit what is actually occurring. It can only predict over time what is happening. So, if I look at that from a logical stance, how do we solve that sort of limitation? Because the way I see it, AI will continue to advance and leave us behind, right? Which it will do. But if we don't come together and actually speak to where we see the vision of it, how is it then that we see the future?

Alexander McCaig (13:47):

I feel like AI will continue to increase in its own benefits, for its own technological value, but the benefits we receive will actually drop off, because we lack that sort of collective value in seeing what we want from it. I feel like people are just using it and testing it for so many things. And that toes a very, you know, tough line between what's ethical and what really isn't.

Bernd Stahl (14:15):

Yeah. Yes. I'm not entirely sure where my prediction would be, how it's going to play out, how it's going to develop further. Certainly AI research is an extremely vibrant area and we can expect a lot more progress. Whether that will actually lead to a qualitative change, where AI has the ability to progress itself and so on, I'm not so sure about. But I do agree that it's not obvious that the technical developments we see at the moment will benefit society as a whole or humanity as a whole.

Bernd Stahl (14:45):

I think that's also... Again, it's a fundamental question that we will find very difficult to answer. If you'd asked the first person to use fire whether that was really a good idea, and whether they had thought about the consequences, they would've found that very difficult to answer. And, similarly, I think it's impossible to predict what technology will do, which is why in this book I suggested that what might be more useful is to think about a systems approach to AI.

Bernd Stahl (15:14):

So AI not as a technology that's out there and that does things, but AI as part of a number of interlocking socio-technical systems. And what we need to think about is how we, as a society, can structure these systems in a way that will help them to keep in mind ethical aims, such as promoting human flourishing. This is the one we picked in the book. But the idea is not: we know what's right, and we therefore tell you what to do. Rather, we build our socio-technical systems in a way that they can reflect on what they're doing and keep on discussing these questions. So there is no one answer right now, but there is a structured approach that will hopefully help us not to completely forget the question.

Alexander McCaig (15:55):

Right. And you know, when reading it, frankly, my limited understanding of information systems and critical systems theory is that computation does not happen without human input. And I feel like the way the general public receives this, the way governments and other businesses look at it, is: oh, we don't need the human input, we don't need any of that, this thing's going to just solve it on its own. Right? But we actually have to have that level of interaction with it for the thing to truly flourish. And if we're looking at those designs, and if we want to integrate it into the rest of our systems, we also have to understand how we as individuals interact with this. You see what I'm saying?

Bernd Stahl (16:42):

Yeah. I do agree with that. So they're not standalone systems and they're not autonomous. Even if you talk about autonomous vehicles, for example, even the most autonomous vehicles are not autonomous in the sense that you and I are. Hopefully; we assume that we are. But as a simple example, if we had a truly autonomous vehicle, it might just tell us: no, today you stay home, I can't be bothered to go. That would be an autonomous vehicle that I could sympathize with.

Bernd Stahl (17:12):

But the main point is that these systems, current technologies as we have them, have no autonomy. They cannot decide about their aims and objectives. They are very good at finding optimal ways of doing things, but they're not good at thinking at all. And they're certainly not in a position to decide what should be done. And I think that needs to be clear.

Bernd Stahl (17:35):

And that is also something that, arguably, from an ethical perspective should not change. So even if it were possible to develop truly autonomous systems, and again, I'm on the fence whether that's possible, but even if it were possible, we probably shouldn't do it, because this is a very unique human thing. We are autonomous. We can set our own aims; machines shouldn't do that.

Alexander McCaig (17:56):

Well, yeah. So then, I wonder if we can just play this out for a second. We set our own aims. So for instance, I say my aim is: I would like to start a technology company, and I'd like to have a great benefit to humanity through them sharing up their information. And then you may have, in the far future, something that becomes truly autonomous, right? This thinking machine. And it comes up to you and says, well, our aim is that we want to grow our own civilization, or we want to lead.

Alexander McCaig (18:29):

We dictate that we want this space. Or, here's the worst part: we no longer want you to interact with our systems, right? We want to remove ourselves from your system. Autonomy is a very special thing, unique to human nature, and it comes down to a function of choice. And when is it, really, that the baton is passed to say, "Hey, you can now make your own choices for what you want to analyze, where you want to go, what you want to do"?

Alexander McCaig (18:59):

And then we sort of pass over those values that we have as human beings to something that's actually synthetic. Does that then drive us into a place where it creates sort of artificial rights at that point? Because I think we have very organic, real rights as human beings. But then when we look at these, they're computers, right? They're just computing. Well, then you have sort of these synthesized rights that also come with it. But how does that sort of interaction work?

Alexander McCaig (19:25):

These sit in my mind as two completely different systems, but people are trying to bring them together. Like, I get it that we're bringing artificial intelligence into the current systems we have; that's fine. But in the far future, when they become further developed, it really does become its own thing, and so do the rights. Do you see that transitioning over? Are people saying, oh, we need to respect what's going on here, because maybe it is driving such a high, tangible value?

Bernd Stahl (19:54):

So I personally don't see that. I am very skeptical about the possibility of AI systems ever coming to the point where they would deserve to have rights ascribed to them. Right? But that doesn't mean I'm right. And it certainly is a topic that a lot of people find interesting: will there be machine rights at some point, will there be things like machine personality, personhood?

Bernd Stahl (20:19):

So I personally am not convinced, but the question is of course out there. To me, it's partly a science fiction question. I think the reason why we are so excited about this is that you see this sort of stuff happening in science fiction. But I think the more interesting thing is what these discussions say about us: how we perceive the world around us, how we interact with it, and how we think the problems that we face will be coming back to haunt us at some point.

Bernd Stahl (20:49):

So the way we describe possible future truly autonomous machines really reflects how we think we interact with the world. Will they be beneficial? Will they be nice to us? Will they kill us off? Will they put us in a zoo? This really is the question of how we have in the past interacted with other alien types of cultures, human beings, other sentient animals. So I think that's really the most-

Alexander McCaig (21:15):

Other types of like beings or [crosstalk 00:21:16]

Bernd Stahl (21:16):

And these things may become truly autonomous, and they may kill us off, but I don't think that's much more likely than aliens landing on the Earth or, you know, a meteorite or whatever. So I kind of completely ruled [crosstalk 00:21:29]

Alexander McCaig (21:28):

This is a really interesting point, because you've flipped it on its head. There's an inverseness to this, and that inverseness is that we look at these systems not really as their own evolution, but also use them as a catalyst for us to ask our own questions about how we want to evolve. And I think that is fundamentally a much stronger question, because we get to decide how we want to evolve and how we would like to interact with this thing that will naturally evolve on its own.

Alexander McCaig (22:02):

Because as things become more economically efficient, what have you, Moore's law is out the window, it doesn't matter anymore, but as systems grow and become more advanced, the question is: how are we going to use those to come back and have our own introspection on ourselves, and on us as a collective, to say... well, what is it that we actually really want?

Alexander McCaig (22:20):

How is it that we're going to actually choose to develop within our own future? Where's the continuity of our evolutionary path that says this is what's actually best for us? And I think this is probably one of those starting points for us, when people look at data privacy, data ethics, right? And computers, algorithms tracking all those cookies, all those things of that nature.

Alexander McCaig (22:43):

It's not really so much about the computers, right? And those systems. It's the fact that we're now becoming more aware of how we're actually interacting with those things. And that sort of interaction is then coming back and having society ask questions that were otherwise going unnoticed, because people weren't paying attention to them.

Alexander McCaig (23:05):

Now that we see this development, we realize, wow, this is actually having a lot of changes within our economics and society and schooling and governments itself. Maybe it's time to start asking these questions that we otherwise wouldn't have asked. And it's only acting as that sort of catalyst.

Bernd Stahl (23:28):

Yes. I think there is an aspect where these technical developments force us to reflect on where we are, what we are, and what we want to be. I'm very skeptical about the possibility of answering these questions, because I don't think we individually have a view on what we are and what we want to be. That shouldn't stop us from asking the question, but I would temper the expectation that we will get an answer to it.

Bernd Stahl (23:55):

And if you look at the history of technology, we never really asked these questions. These things play out alongside the technical development, right? If you look, for example, at the mobile phone, or the smartphone, I think it is possibly the most revolutionary technology we've seen in centuries. It really changes the way we interact. We have become sort of a system of cyborgs, right?

Bernd Stahl (24:18):

So we all look at our phones all the time. We are hugely connected. That was never decided upon. That was never discussed. That's playing out as we speak, right? And I think the trick is not to assume that we know what the consequences will be and make rational decisions beforehand, but to have ways of thinking about them and reflecting on them as developments occur. And I think that, really, to me, is why the socio-technical systems approach is important: to be able to ask the questions and continue to ask the questions and continue to have those discussions [crosstalk 00:24:50]

Alexander McCaig (24:50):

What I would see as a fantastic artificial intelligence system is one that actually tees you up, where it says, "Have you thought about asking yourself this question?" Right? It tracks over time: oh, there's been a lack of introspection for the past 60 days, why don't we tee up Bernd with a question? Bernd, how are you feeling about yourself? How do you feel about the world? How do you see your future?

Alexander McCaig (25:17):

And I think that would be a very evolutionary approach to using these systems and these machines. Rather than saying, oh, great, this technology has come out and I've essentially formed my world around how this thing becomes used. So it's funny: I want to put the human foot first, before the technological one, but I just feel like so much of what has happened today is they put the technological foot first and then humans end up adapting to it later. And I just feel like that's not the right approach. And you could disagree, but I feel like that's going to end up harming us in the end rather than doing a great amount of good.

Bernd Stahl (26:06):

Yeah. Yes. Well, I think that's a function of the way we've set up our technical development. The incentives for companies are to make money; they build stuff and they hope people will use it, and people do. And so it's sort of a mixture: the Apples and Googles of this world will build stuff that they think their customers want. But then, at the same time, by building stuff, they form expectations around what's possible.

Bernd Stahl (26:32):

And then they also form expectations about what people want to achieve with those technologies. So, I think it's a two-step, or no... the technical and the human side go hand in hand. But I do agree that in many cases we now end up having to organize our world around the technology that we are given. So certainly from an individual perspective, we have very little influence on what happens, and we need to adapt to what's out there. And that may not be desirable, but I think it's very difficult to see how that would be different without a completely radical change of maybe-

Alexander McCaig (27:06):

And so then [crosstalk 00:27:09] with that use and development, when reviewing artificial intelligence and the studies you've done on it, in terms of these critical systems, information systems, where do you see the greatest gain or advantage we would receive from implementing it into the systems we currently have in place? What would be one of those key target areas that you see driving the highest benefit for society, given where it currently is in its own computing power?

Bernd Stahl (27:45):

So I don't have sort of the killer app for AI. What current machine learning systems can do is pattern recognition, sort of statistics on steroids, as they say. I think there are lots of possible applications of that, and the one I think has the most potential for benefit is somewhere in the area of medicine.

Bernd Stahl (28:10):

So I think these systems may actually bring us a lot closer to personalized medicine, where we get drugs that actually work for us, not just for everybody else. So I think that is where I can see a significant benefit for individuals and for society as a whole. There is also the attempt to bring these systems to bear on the sustainable development goals, on the grand challenges humanity faces, from hunger to pollution. And there's a lot of work being done on that. But to me, it's not quite as-

Alexander McCaig (28:44):

Like, I guess, for an example on the medical side, Bernd, would that be: there's great computational power, so let's look at the genome sequencing, pair that with their medical history, and then have this thing learn and run it against other samples and say, "Let's actually tailor a specific cocktail of pharmaceuticals, or whatever it might be, to have a very bespoke treatment for that human being itself"?

Bernd Stahl (29:07):

Yep. That's exactly the idea behind personalized medicine. So instead of you getting the same aspirin that everybody else who has a headache gets, the analysis will try to understand why you have a headache, what the mechanisms are, whether they're molecular or environmental or whatever they are, and then give you a treatment that is specific to you, and that therefore has a much higher chance of being successful. I mean, that's one of the promises and there's a lot of work being done on that. And I think the more-

Alexander McCaig (29:39):

In terms of the participation to make something like that work, that would be the health record systems or the records of the hospital, the computer systems, the doctor, and the patient. And then that model repeating itself millions of times over, every single time that sort of interaction happens, and learning across all of those, correct?

Bernd Stahl (30:05):

Yeah. So I think the trick will be somewhere in these health record systems. All the hospitals have their own data, but in order to learn from all the data, from the huge amount of data that is out there, you would have to find ways of analyzing the data that would be acceptable, privacy-preserving and so on.

Bernd Stahl (30:23):

But at the same time you can access the relevant data points from the various data sets. So you would need some immensely complicated federated data infrastructure, but that seems to be the way this will work. And so there will be huge amounts of data, which allow you to understand biomarkers, to understand molecular pathways and so on. And for that to work, you can't do this on a spreadsheet. You need advanced machine learning systems and nothing else [crosstalk 00:30:50]

Alexander McCaig (30:50):

Where all those pieces of information come together, correct? You know, and...

Bernd Stahl (30:59):

Not necessarily, no, not really. So I think the centralized database model for that won't work. It would be too big and it would be too vulnerable. And certainly in Europe it would be mostly illegal; you're not allowed to take hospital data out of the hospital. So the trick then will be to have federated data structures. So you have the ability to do distributed queries across a number of databases in many different places, which are privacy-preserving, which are in line with patient consent, and so on. So, this is work that is being done. It's technically-
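For readers who want a concrete picture of the federated pattern Bernd describes, here is a minimal, hypothetical sketch: each hospital answers a query locally and returns only an aggregate count, never raw rows. The class names, the predicate interface, and the suppression threshold are illustrative assumptions, not a description of any real infrastructure.

```python
from typing import Callable, Dict, List

class HospitalNode:
    """One site in the federation; its raw records never leave this object."""
    MIN_COUNT = 10  # suppress small counts to reduce re-identification risk

    def __init__(self, records: List[Dict]):
        self._records = records  # stays on-site; only aggregates are shared

    def count_matching(self, predicate: Callable[[Dict], bool]) -> int:
        n = sum(1 for r in self._records if predicate(r))
        return n if n >= self.MIN_COUNT else 0  # privacy threshold

def federated_count(nodes: List[HospitalNode],
                    predicate: Callable[[Dict], bool]) -> int:
    """Broadcast the query; only counts cross each site's boundary."""
    return sum(node.count_matching(predicate) for node in nodes)

# Example: count consented migraine records across sites without moving them.
sites = [HospitalNode([{"dx": "migraine", "consent": True}] * 12)]
print(federated_count(sites, lambda r: r["dx"] == "migraine" and r["consent"]))
```

The design choice is the one Bernd names: the query travels to the data rather than the data traveling to a central store, which keeps raw records inside each hospital's legal boundary.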

Alexander McCaig (31:33):

No, I think that's fantastic. Even on the TARTLE Marketplace itself, individuals from all countries can access their health records. They can get into their genome sequencing and they can speak about their behaviors and other social determinants of health, and then combine those things together and share that with those who are looking to do that research.

Alexander McCaig (31:54):

But because it is decentralized within that model, it's very respecting of the consent of those individuals. So I think it's positive to hear that what we have worked on is also a great focus, and that you see the medical field as really where those big gains, with massive amounts of consented data acquired across many different systems coming together, can really drive tangible, positive health outcomes for patients all across the world. You know?

Bernd Stahl (32:26):

Yeah. That is certainly the promise, right? And whether we will see this, I mean personalized medicine has been promised for decades, it's not there yet. I think that has a lot to do with technical challenges. It also has to do with ethical challenges, but I think the dream and the aspiration is still there and I think it's probably worth pursuing [inaudible 00:32:46]

Alexander McCaig (32:45):

And so then, with the medical field, in a very simple metaphor for people in terms of stages of development, how could you rationally tell them where artificial intelligence is right now? If you were to break it down into the simplest layman's terms, for someone who didn't know about going back and understanding statistics and things of that nature, and if we're looking at the stages of development of man or whatever it might be, where would you say AI currently is at?

Bernd Stahl (33:16):

That is a very difficult question to answer.

Alexander McCaig (33:24):

Come on, don't dodge it, I'm interested. And this will help other people who aren't really involved in AI; they can look at it and be like, "Oh, I understand." So if this is where it is in its beginning stages, whether it's an amoeba or the thing has legs, I'm interested to know.

Bernd Stahl (33:42):

Okay. So if you take the overall development of life, from the first microbe to maybe humans or something else at the end of it, I think one of the problems with that is we don't know where it ends. We think we're maybe the crown of evolution, but we're probably not. But I think it's still a fairly early stage. So, even with regard to the things we could imagine these technologies doing, I think they currently can't do a lot. So maybe they are frogs at this point, but there's certainly a lot of space for them to move on. So they're something beyond the microbial stage, but they're... [crosstalk 00:34:26]

Alexander McCaig (34:26):

The general public looks at everything and goes, AI, Terminator. There's a lot of people who think like that, and frankly, there's fear around the development of the technology. And I think that when you speak about it, and in what you write about in your book, the message is that we need to look at this a little bit more rationally: at where it really is now and where the first benefits we're going to see from it will come.

Alexander McCaig (34:52):

But this is not some sort of walking robot that is going to be completely autonomous, controlling what it does and demanding its own rights. And I think people are just making that sort of scientific leap far before it's actually going to happen. And don't get me wrong, science fiction does, a good amount of the time, turn into science fact, but we just tend to be way, way ahead of the curve in terms of timing.

Bernd Stahl (35:15):

Yeah. And I think that is one of the key issues around the AI ethics debate, right? On the one hand, you have the people who understand the nitty-gritty of the technology, who are worried about very specific issues around introducing bias through biased data sets and that sort of stuff, which are significant ethical issues. And they can really hurt people in an important way.

Bernd Stahl (35:38):

But these are very practical, right? And they have absolutely nothing to do with the autonomy of the machine itself. And on the other hand, you have this sort of science fiction discussion, and all of this falls under the umbrella of ethics of AI. And that's why I think one of the key problems is the delineation of what it is you're actually talking about. Am I talking about what machine learning currently can do? In that case, I think there are certain things we can put in place to make sure that known problems don't arise. Or are we talking about the medium-term future, or the long-term future, in which case the [crosstalk 00:36:07]

Alexander McCaig (36:07):

From reading the book, and even from listening to this, what is it you would want people, professionals, academics to take away from this sort of conversation and from what you have written? What is that end idea that you want to stick with them, so that when they do think about AI, Bernd Stahl comes in, chirping in their ear, saying, "I want you to remember this"? So what would that be?

Bernd Stahl (36:34):

So what I would want people to remember is that-

Alexander McCaig (36:37):

Here we go.

Bernd Stahl (36:39):

There is no solution to the ethics of AI, but what we need to do in order to address it is have a process that allows us to continuously reflect on it. So that needs to be there. There needs to be a sensitivity, there needs to be a way of expressing concerns, there needs to be a way of reflecting on them and doing something about them. Completely independent of this idea of robot rights, or that we will sort out all the problems. We won't sort out all the problems, but we need to be willing to continue-

Alexander McCaig (37:06):

That's actually really juicy. Well, I sincerely appreciate you coming back on to talk about this, my friend. This has been a blast, and I'm probably going to have to bring you on again, because just that final statement you made is something I think we're going to have to unpack in a whole other episode by itself. So thank you for giving me the time of day on that. Will do, thank you.

Bernd Stahl (37:32):

Thank you very much, Alex, have a good day then. Okay.

Speaker 3 (37:32):

Thank you for listening to TARTLEcast with your hosts Alexander McCaig and Jason Rigby, where humanity steps into the future and source data defines the path. What's your data worth?