Embedding fairness and ethics in collective decision-making

Mohsin, Farhad
Electronic thesis
Computer science
Collective decision-making is the problem of aggregating individual preferences to make a joint choice, and voting is one of the most commonly studied methods for doing so. The task can generally be divided into two parts: (1) preference learning and (2) preference aggregation. In the preference aggregation domain, judging the quality of voting rules in terms of well-defined properties and paradoxical behaviors is an important topic. Additionally, with the recent growth of algorithmic decision-making, concerns regarding fairness and ethics are increasingly discussed in the voting domain. For example, how can we explicitly guarantee decisions that are fair towards minority groups? How can we learn preferences in an ethical domain? Can we embed ethical properties in the preference aggregation method? This dissertation aims to answer these questions. With the advent of machine learning techniques, particularly for preference learning and similar use cases, we also ask: how can we better use these techniques to help make better collective decisions?

The first part of this work introduces a new notion of group fairness in voting. First, we explicitly consider the group identity of agents in order to make collective decisions that are fair towards the minority; this explicit consideration of agent identities is in contrast to the commonly assumed anonymity property, which requires that all agents be treated identically. Next, we analyze group fairness guarantees for existing voting rules, focusing on economic efficiency, and observe a trade-off between economic efficiency and group fairness. We then turn to designing voting rules that are both fair and efficient, developing machine learning-based techniques for automatically designing new voting rules that achieve different trade-offs between fairness and efficiency.
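The tension between economic efficiency and group fairness described above can be illustrated with a small sketch. The Borda-style satisfaction measure, the group labels, and the five-voter profile here are illustrative assumptions for this example, not the dissertation's actual definitions:

```python
from collections import defaultdict

def satisfaction(ranking, alt):
    """Borda-style utility: m-1 points for a top choice, 0 for last place."""
    return len(ranking) - 1 - ranking.index(alt)

def efficiency(profile, alt):
    """Utilitarian measure: average satisfaction over all voters."""
    return sum(satisfaction(r, alt) for r in profile) / len(profile)

def group_fairness(profile, groups, alt):
    """Egalitarian measure: the lowest average satisfaction of any group."""
    by_group = defaultdict(list)
    for ranking, g in zip(profile, groups):
        by_group[g].append(satisfaction(ranking, alt))
    return min(sum(v) / len(v) for v in by_group.values())

# A 3-voter majority and a 2-voter minority with opposed favourites,
# plus a compromise alternative c that both groups rank second.
profile = [('a', 'c', 'b')] * 3 + [('b', 'c', 'a')] * 2
groups = ['majority'] * 3 + ['minority'] * 2

best_for_efficiency = max('abc', key=lambda x: efficiency(profile, x))              # 'a'
best_for_fairness = max('abc', key=lambda x: group_fairness(profile, groups, x))    # 'c'
```

In this toy profile, the majority favourite a maximizes average satisfaction, while the compromise c maximizes the worst-off group's satisfaction; a rule must trade one objective against the other.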
Next, we work on learning agent preferences in moral dilemmas. Since the ethical domain is high-stakes, we explore learning explainable models to represent agent preferences. We collect a new dataset of agent preferences in moral dilemmas, experiment with learning heuristic models such as lexicographic preferences, and efficiently aggregate the individual models into a social-level preference model for moral dilemmas. Lastly, we present a slightly different but related line of work on verification algorithms for no-show paradoxes under popular voting rules. Here, we develop algorithms based on integer linear programming and heuristic search that algorithmically verify, given a preference profile, whether a group no-show paradox can occur under a specific voting rule.
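As a point of reference for the verification task, the sketch below is a brute-force baseline, not the dissertation's ILP or heuristic-search algorithms: it enumerates abstaining coalitions directly, which is only feasible for small profiles. The plurality-with-runoff rule, its lexicographic tie-breaking, and the 12-voter profile are assumptions made for this example:

```python
from itertools import combinations

def prefers(ranking, x, y):
    """True if this voter ranks alternative x above alternative y."""
    return ranking.index(x) < ranking.index(y)

def plurality_runoff(profile):
    """Two-round rule: the top two plurality scorers advance and the
    majority between them decides; ties are broken lexicographically."""
    alts = sorted(profile[0])
    counts = {a: sum(1 for r in profile if r[0] == a) for a in alts}
    first, second = sorted(alts, key=lambda a: (-counts[a], a))[:2]
    first_votes = sum(1 for r in profile if prefers(r, first, second))
    return first if 2 * first_votes >= len(profile) else second

def has_group_no_show_paradox(profile, rule):
    """Brute-force check: does some coalition of voters obtain a strictly
    preferred winner by abstaining? Exponential in the number of voters."""
    original = rule(profile)
    n = len(profile)
    for k in range(1, n):
        for coalition in combinations(range(n), k):
            rest = [profile[i] for i in range(n) if i not in coalition]
            winner = rule(rest)
            if winner != original and all(
                prefers(profile[i], winner, original) for i in coalition
            ):
                return True
    return False

# 12-voter profile: the winner is a, but if three b > c > a voters abstain,
# c enters the runoff and wins -- an outcome those voters prefer to a.
paradox_profile = ([('a', 'b', 'c')] * 4
                   + [('b', 'c', 'a')] * 5
                   + [('c', 'a', 'b')] * 3)
```

The exponential enumeration of coalitions is exactly what an ILP formulation or a guided heuristic search replaces when profiles grow beyond toy size.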
School of Science
Full Citation: Rensselaer Polytechnic Institute, Troy, NY