Towards scalable and generalizable multi-objective learning, provably

Authors
Chen, Lisha
Issue Date
2025-05
Type
Electronic thesis
Thesis
Language
en_US
Keywords
Electrical engineering
Abstract
Many learning tasks in today's real-world systems inherently involve multiple objectives. Such problems must balance several performance metrics, including fairness, safety, privacy, and accuracy, or reconcile potentially conflicting objectives from different entities that are optimized jointly to facilitate data and knowledge sharing. A fundamental question is how to handle multiple objectives in a principled manner that allows pursuing individual objectives while preserving the benefits of collaborative learning. In this context, this thesis puts forth a grounded framework to address multi-objective learning from three complementary aspects: modeling, optimization, and generalization. The research enhances multi-objective knowledge extraction with theoretical guarantees on optimization, generalization, conflict mitigation, and their trade-offs in future systems. The contributions of this thesis can be summarized as follows. In the first part, we study multi-objective optimization algorithms. 1) We revisit stochastic variants of the multi-gradient descent algorithm (MGDA) in the unconstrained setting, and propose a new variant together with a framework for analyzing both the optimization error and conflict avoidance. Applying this framework to existing stochastic algorithms yields improved analyses. 2) When pre-specified preferences over objectives are available, we consider a formulation through constrained vector optimization, where the preferences are modeled through a cone-induced partial order and the constraint functions. 3) To prioritize optimality over preference satisfaction, we consider an alternative formulation through optimization on the Pareto set. Under formulations 2) and 3), efficient gradient-based algorithms are developed with convergence-rate guarantees. In the second part, we study the generalization of multi-objective learning.
1) In the unconstrained setting, we provide an algorithm-dependent generalization bound for stochastic variants of MGDA. Combined with the optimization-error and conflict-avoidance analysis, this forms a unified framework for analyzing the three errors and their trade-offs. 2) For the meta-learning problem, a special case of multi-objective learning, we analyze the modeling and generalization errors in the mixed linear regression setting. The results suggest that model adaptation helps reduce modeling error, that increasing the number of tasks or objectives helps reduce generalization error, and that uncertainty modeling through Bayesian inference further reduces generalization error.
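To make the abstract's central primitive concrete, the following is a minimal sketch of a deterministic MGDA step for two objectives: each iteration moves along the minimum-norm convex combination of the per-objective gradients, which is a common descent direction for both objectives. The quadratic objectives, step size, and starting point here are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

# Illustrative two-objective problem (assumed, not from the thesis):
# minimize f1(x) = ||x - a||^2 and f2(x) = ||x - b||^2 jointly.
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

def grads(x):
    """Gradients of f1 and f2 at x."""
    return 2 * (x - a), 2 * (x - b)

def mgda_direction(g1, g2):
    """Min-norm element of the convex hull of {g1, g2}.

    Closed-form solution of min_{t in [0,1]} ||t*g1 + (1-t)*g2||^2.
    """
    diff = g1 - g2
    denom = diff @ diff
    t = 0.5 if denom == 0 else float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return t * g1 + (1 - t) * g2

x = np.array([2.0, 2.0])
for _ in range(200):
    d = mgda_direction(*grads(x))
    x = x - 0.1 * d  # step along the common descent direction

# x approaches a Pareto-stationary point on the segment between a and b;
# from this symmetric start it converges to (0.5, 0.5).
```

When the min-norm combination is zero, no direction decreases both objectives simultaneously, which is exactly the Pareto-stationarity condition the abstract's convergence guarantees target; the stochastic variants studied in the thesis replace the exact gradients with noisy estimates.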
Description
May2025
School of Engineering
Publisher
Rensselaer Polytechnic Institute, Troy, NY