On connections between mean field games and deep generative models

Authors
Huang, Han
Issue Date
2024-03
Type
Electronic thesis
Thesis
Language
en_US
Keywords
Mathematics
Abstract
Deep generative models have exploded in popularity and made frequent headline appearances in the past few years, from generating photo-realistic images and contest-winning artworks with diffusion models, to discovering millions of promising new materials with graph neural networks, to training large language models such as GPT-4 that exhibit glimmers of artificial general intelligence. The field of generative modeling has seen unprecedented growth in its applications and is rapidly transforming various aspects of everyday life. Nevertheless, there remains a pressing need for a systematic theoretical understanding of generative approaches. While we understand each model well in isolation, it is generally difficult to compare different methods because they are derived from distinct motivating principles. Hence, we need to contextualize different generative approaches under a unifying framework to make them commensurable. Mean-field games (MFG), a versatile framework for modeling density evolution under customizable preferences, has emerged as a promising candidate for this purpose. In this thesis, we explore and formalize the connection between MFG and normalizing flows (NF), a prominent family of generative models, and then outline extensions to other methods such as diffusion models. With this insight, we introduce transport costs to regularize NF optimization and demonstrate their effectiveness at controlling the Lipschitz constant of the trained flows. On the other hand, MFG is also a powerful modeling tool that finds application in game theory, economics, finance, and industrial planning, so devising algorithmic solutions for MFG is interesting in its own right. With a bridge between MFG and generative modeling, we harness advancements in scalable and expressive neural architectures to solve high-dimensional MFGs accurately and efficiently, a setting that is especially challenging for classic optimization techniques. Starting with solving single-instance MFGs with flexible flow parametrizations, we then take it one step further and learn mappings that output optimal trajectories for distinct MFGs without re-training. Our proposed approach leverages attention-based layers to build sampling-invariant parametrizations for continuous operators and is the first work on the unsupervised learning of high-dimensional MFG solution maps. Finally, we study the inverse MFG problem and develop the first learning-based framework for its solution. Our approach leverages bilevel optimization to simultaneously infer optimal agent trajectories and the unseen obstacle in the MFG setup. Our method is robust across various complexity levels and serves as an effective regularization for trajectory likelihood estimation in data-scarce scenarios.
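
As a rough illustration of the transport-cost regularization mentioned in the abstract, the sketch below adds a penalty on how far a normalizing flow moves each sample to the usual maximum-likelihood objective. It is a minimal sketch under stated assumptions, not the thesis's actual formulation: the flow interface, the static squared-displacement surrogate for the dynamic MFG transport cost, and the weight lambda_transport are all illustrative.

import torch

def nf_loss_with_transport(flow, x, lambda_transport=0.1):
    # Assumed interface: flow(x) returns the latent code z and per-sample log|det Jacobian|.
    z, log_det = flow(x)
    d = z.shape[1]
    # Log-density of z under a standard normal prior.
    log_pz = -0.5 * (z ** 2).sum(dim=1) - 0.5 * d * torch.log(torch.tensor(2.0 * torch.pi))
    nll = -(log_pz + log_det).mean()                 # negative log-likelihood term
    transport = ((z - x) ** 2).sum(dim=1).mean()     # transport-cost regularizer ||f(x) - x||^2
    return nll + lambda_transport * transport

Larger values of lambda_transport favor flows that move mass less, which is one heuristic way to limit how steep (and hence how Lipschitz) the learned map becomes.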
Description
March 2024
School of Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY