Hands-on Buddhism for Ethical AI

By guest author Agnes Fekete of MOSTLY AI, a synthetic data company working on putting ethical and explainable AI into everyday practice


Soraj Hongladarom, a professor of philosophy and ethics expert at Chulalongkorn University in Bangkok, Thailand, put forward a manifesto calling for the values of Buddhism to serve as guidance for building ethical AI systems. In what follows, I will try to build on his ideas and show how these principles can be implemented in practice.



Reduce suffering for all (aka compassion)

In current ethics guidelines, much emphasis is put on values such as autonomy and the rights of individuals. However, these values of Western philosophy lack the most important perspective: that of the common good. Missing this perspective is particularly dangerous for a technology whose strength lies in automation, ready to make or implement decisions a million times over. AI's great strength is scale; its weakness is bias.

How can we make sure that an AI system reduces suffering for all, rather than making things better for some and worse for others? The answer lies in the training data. We have the power to augment training data to reflect the world not as it is but as we would like it to be. When approaching AI training, we should ask ourselves: How can we fix past mistakes embedded in the data? How can we reduce the suffering that already exists in the world? Aiming for a fair outcome is a good start, and to get there, fairness must be defined on a case-by-case basis.

MOSTLY AI's fairness research did just that with two famous datasets: the racially biased COMPAS dataset, used in law enforcement to assess the likelihood that a defendant will reoffend, and the US Census dataset, which captures income inequality between men and women. In both cases, real-life suffering is embedded in the data. An AI system trained on this data without any augmentation would simply automate that suffering and scale inequality. However, by introducing demographic parity into the dataset via synthetization, we can remove these imbalances and provide fair synthetic training data for AI systems. Demographic parity is satisfied when each segment of a protected class (such as gender or race) receives the favorable outcome at the same rate. Fairness still needs to be defined case by case so that the intent to be fair isn't lost in the implementation phase: other mathematical fairness definitions, such as equal opportunity or equalized odds, should also be considered as tools to eliminate suffering, not just for individuals but for all, making sure that discrimination doesn't simply resurface in another group.
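To make this concrete, below is a minimal sketch of how demographic parity can be measured on tabular data. The toy dataframe and its column names ("gender", "outcome") are hypothetical stand-ins for datasets like COMPAS or the US Census, not MOSTLY AI's actual synthetization pipeline:

```python
import pandas as pd

# Toy data standing in for a real dataset; 1 = favorable outcome
# (e.g., a high-income label or a low recidivism-risk score).
df = pd.DataFrame({
    "gender":  ["male", "male", "male", "male",
                "female", "female", "female", "female"],
    "outcome": [1, 1, 1, 0, 1, 0, 0, 0],
})

# Demographic parity asks: does each segment of the protected class
# receive the favorable outcome at the same rate?
rates = df.groupby("gender")["outcome"].mean()
print(rates)

# A common summary metric is the largest gap between group rates;
# a gap of 0 means perfect demographic parity.
parity_gap = rates.max() - rates.min()
print(f"Demographic parity gap: {parity_gap:.2f}")
```

A synthetization step that enforces demographic parity would, in effect, generate or re-sample records until this gap approaches zero for every protected segment.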


Do no harm

While some AI algorithms are benign, others carry the potential to cause harm. The European Commission's proposal to regulate AI differentiates between AI systems along similar lines, setting up four categories: unacceptable risk, high risk, limited risk, and minimal risk. However, its definition of risk refers to citizens' rights, livelihoods, or safety, and leaves the risk to the safety and wellbeing of communities or society at large out of the picture. Just think of Facebook's AI serving users content that could influence the outcome of an election, or of social media algorithms designed to maximize engagement, ultimately fueling rage, sowing discord, and radicalizing susceptible individuals. Many have criticized the EU's AI regulation for being too lax toward systems allocated to the minimal risk category; without proper supervision, how could anyone deem an AI system to be of minimal risk? The 'do no harm' principle needs to be implemented at the regulatory level, and with zest.


The practice of self-cultivation

Professor Hongladarom advises everyone taking part in AI development to self-cultivate: to train continuously in order to get closer to the goal of eliminating suffering. The goal is not as important as the practice itself, and if we put the emphasis on practice instead of the goal, even the impossible becomes less intimidating. Engineers should try their best to break out of algorithmic thinking and understand the biases embedded in their data. Data scientists, the monastic order of ethical AI, should do their best to explain complex AI ethics problems to the layperson. Customer service representatives should also have a voice, since they practice compassion and witness the results of bias and unjust practices day in and day out. Companies and governing bodies should facilitate these conversations by providing the space for this very important practice. Diverse, interdisciplinary ethical AI boards should be set up to govern AI practices, and data literacy should be increased across organizations to raise awareness of potential ethical problems. We should all practice self-cultivation.

Accountability

While the intent behind the EU Commission's attempt to regulate AI systems is to create accountability, there is serious doubt whether this is possible without proper regulatory oversight. Accountability starts with explainable AI, and some go as far as demanding a certification system that would put weight behind AI governance. The catch is that models are meaningless without the data they were trained on, yet sharing that data is problematic from a privacy standpoint. Highly representative synthetic data can serve as a drop-in replacement, enabling model documentation, model validation, and model certification. To understand a model's decisions, we need to systematically explore how its output changes as the input data varies, an approach also referred to as local interpretability. These explorations are impossible without the plasticity that AI-generated synthetic data offers. Regulations need to take this into account if they are to be taken seriously and adhered to across industries.
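To illustrate what such local interpretability can look like in practice, here is a minimal sketch that probes a trained model by perturbing one input feature at a time and observing how the prediction shifts. The model, features, and data are hypothetical; in a real audit, the probes would be drawn from representative synthetic data rather than random noise:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A stand-in for a production model trained on tabular data.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # three numeric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth
model = LogisticRegression().fit(X, y)

# Local interpretability by perturbation: take one record, vary a single
# feature over a range, and watch how the predicted probability responds.
record = X[0].copy()
for delta in (-1.0, -0.5, 0.0, 0.5, 1.0):
    probe = record.copy()
    probe[0] += delta                          # perturb feature 0 only
    p = model.predict_proba(probe.reshape(1, -1))[0, 1]
    print(f"feature_0 {delta:+.1f}  ->  P(y=1) = {p:.3f}")
```

If small perturbations of a sensitive attribute swing the prediction dramatically, that is exactly the kind of finding a documentation or certification process should surface.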


The Buddhist approach to regulating AI

The Buddhist philosophical system differs from Western schools of thought in the great emphasis it places on interrelationships. I believe this is the missing piece we need to make AI regulations really work. The interrelationships between algorithms and data, and between decisions optimized for individuals and their impact on communities and society at large, need to be mapped and understood. Contemplating these dependencies should be part of the regulatory process to ensure that AI systems truly do no harm.

