Tartle Best Data Marketplace

Technology is quickly becoming the backbone of modern infrastructure. At the pace it is progressing, it may someday become as ubiquitous and as vital to our economy as cement and concrete. However, AI is agnostic: despite its immense computing capabilities, it will never be capable of human understanding and discernment.

One example of this is A/B testing, where researchers compare two versions of a marketing asset to see which one performs better. While it can show which campaign will run better, it cannot explain why, so it provides no new understanding.
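The comparison itself is straightforward statistics. Below is a minimal sketch of an A/B test as a two-proportion z-test, using only the standard library; the campaign counts are hypothetical, and real analyses typically use a statistics package rather than a hand-rolled test. Note that the output is only "B converts better than A," never an explanation of why.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert better than A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # One-sided p-value from the normal CDF
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
    return z, p_value

# Hypothetical campaign data: conversions out of impressions
z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

A small p-value says variant B's higher conversion rate is unlikely to be chance, which is all the test can say.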

With this limitation in mind, is it still beneficial for us to know what the most probable outcome for a certain event would be—even if we don’t understand the why or how for its occurrence?

Can Machine Learning Go Wrong?

At this point, David discussed a hypothetical scenario where even something as uncontroversial as spam filtering could become a problem: suppose legitimate emails from businesses owned by people of color were falsely marked as spam at a higher rate than emails from other businesses. Beyond the inefficiency, the AI would become an unfair gatekeeper and could even damage businesses on the basis of race.

The decision-making process behind sorting emails into the spam folder is compromised because the technology uses so many signals, in “deep, complicated, and multi-independent patterns of probability,” that they are near-impossible to comprehend without a lot of time, money, and effort. At that point, this massive system is hurting communities that are already disenfranchised in the first place.
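While the filter's internal reasoning may be opaque, the disparity itself is measurable. The sketch below computes the false-positive rate per sender group from hypothetical audit data (the group labels and counts are invented for illustration); auditing outcomes like this is a common first step even when the model cannot be explained.

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """records: (group, actually_spam, flagged_as_spam) tuples.
    FPR = legitimate emails wrongly flagged / all legitimate emails, per group."""
    flagged = defaultdict(int)
    legit = defaultdict(int)
    for group, actually_spam, flagged_as_spam in records:
        if not actually_spam:          # only legitimate mail can be a false positive
            legit[group] += 1
            if flagged_as_spam:
                flagged[group] += 1
    return {g: flagged[g] / legit[g] for g in legit}

# Hypothetical audit: (sender group, actually spam?, filter flagged it?)
audit = [("A", False, False)] * 95 + [("A", False, True)] * 5 \
      + [("B", False, False)] * 80 + [("B", False, True)] * 20
rates = false_positive_rate_by_group(audit)
print(rates)  # group B's legitimate mail is flagged four times as often
```

A gap like this flags the harm without requiring anyone to untangle the underlying probability patterns.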

This brings to mind Microsoft’s Tay.ai, a chatbot on Twitter created by the tech giant in 2016 that was designed to mimic the conversational patterns of a 19-year-old girl. It would learn from continuous interaction with other users on the social media platform.

Immediately after its release, Tay became controversial after it started tweeting inflammatory and offensive comments. As a result, Microsoft was pushed to shut down the service only sixteen hours after it was launched. 

Social Justice And Technology

It’s a clear indication that the people responsible for programming AI have a corresponding social burden to fulfill, particularly in ensuring that their technology does not harm anyone. This burden grows even larger when machine learning and AI are applied to other fields, such as medicine and smart transportation.

Beyond Tay.ai, computer scientists and engineers around the world find themselves at the helm of technologies with enormous potential. How do we address the inherent human biases these individuals carry?

David reveals that most people with the knowledge to work on these complex technologies do not necessarily have the same depth of understanding of social justice. This has led to a call for participatory machine learning, otherwise known as the design justice movement.

Giving Minorities A Seat At The Table

Participatory machine learning brings in people who are familiar with social justice issues, as well as the communities that would be most affected by new technologies, and gives them a position in planning and management.

Their input is important from the get-go because it does have an impact on how these systems work. To further explain, David painted the picture of an imaginary emerging smart city that decided to use AI to reinvent its bus system. 

Ultimately, all the new bus stops, routes, and schedules succeed in moving people to their destinations faster, and the numbers echo that success. However, there is a caveat: these statistics are averages. They mask the fact that the system serves affluent communities far more efficiently than those on the outskirts of the city. Those living on the outskirts, who need efficient transportation the most for work and productivity, become isolated from the system.
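The averaging trap is easy to demonstrate. The toy numbers below (hypothetical commute-time improvements, in minutes saved per rider) show a citywide average that looks like a clear win even though riders on the outskirts actually lost service:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical commute-time improvements (minutes saved per rider)
downtown = [12, 14, 15, 13, 16]   # affluent, well-served neighborhoods
outskirts = [-2, 0, -3, 1, -1]    # riders the new routes bypass

overall = mean(downtown + outskirts)
print(f"citywide average: {overall:+.1f} min")   # looks like a success
print(f"downtown:         {mean(downtown):+.1f} min")
print(f"outskirts:        {mean(outskirts):+.1f} min")
```

Reporting per-community figures alongside the average is the simplest guard against this kind of hidden disparity.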

At this point, it would be difficult to unravel all the work put into making the new transportation system a success. It’s important for the marginalized to be consistently consulted on the impacts of new infrastructure and technologies, even after construction and installation are finished. Those responsible for creating these systems have a special responsibility to ensure that those who do not have the same footing finally get a seat at the table.

David agrees that it may be a lengthier, more expensive process. After all, it will take more time, money, and effort to locate these people, recruit them, and ensure that everyone is on the same page. However, it is the cost that we need to pay if we want a shot at eliminating inequality. 

The Limits of Machine Learning

Beyond the cost of bringing people to the table, David acknowledges that technological progress is already expensive in and of itself. Machine learning systems require individuals who are highly educated in computer science and computer engineering, and they depend on massive computing infrastructure to run.

Finally, lingering questions on data sharing and ownership prevent communities from fully utilizing what they have. To what extent do you own your data and what should your relationship with it be? What does it mean to own something?

We do not live in individual data cocoons that we own. We live in a community. This public community cannot be run without public data, and public sharing of information about one another. 

The thoughts that drive my actions within this system of public information and data, however, are missed by algorithms, analysis, and machine learning. This is because people are often unwilling, or simply unable, to share why they are driven to take certain actions.

Ultimately, it appears that one of our most profound discoveries from machine learning is that the world is much more complex than we ever wanted to believe. Despite these sophisticated machines processing massive amounts of information, we do not have the capability to provide a completely accurate and precise prediction of what will happen.

This does not mean that the approximate knowledge we have now is worthless. It helps us appreciate our universe in a new way by teaching us to be comfortable with complexity. 

Are We Entitled to Understanding Anything?

In line with TARTLE’s mission to promote stewardship and collective responsibility, Alexander asked about the implications of machine learning for helping humans make better decisions and more informed choices based on the observable universe. To this, David posed a thought-provoking question: why do you think humans are entitled to understanding?

Machine learning and artificial intelligence are capable of taking us to greater heights without the interference of human cognitive biases. With their objective oversight, they have the potential to bring out the best in us as human beings living in a complex system.

As technology continues to innovate at an unprecedented pace, David leaves us with a parting message: machine learning will drive us to examine all the values that we hold, and sometimes to consider painful trade-offs between two or more equally important values.

“So don’t hold on too tightly to any one value; think about how you may have to give up on some of it in order to support other very important targets,” David concluded.

What’s your data worth? Sign up for the TARTLE Marketplace through this link here.

Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility Harvard Senior Researcher and Best Selling Author David Weinberger, Ph.D. by TARTLE is licensed under CC BY-SA 4.0