System risk quantification and decision making support with integrated artificial reasoning framework

Kim, Junyung
Electronic thesis
Nuclear engineering
Decision-making involves the identification and selection of options, guided by a predetermined set of criteria and the personal preferences of the decision-maker. Each option presents a unique trajectory and profile in transitioning the system from its current state to the next, and uncertainties are typically associated with each transition. To make risk-informed decisions, a probabilistic assessment of the state transition that takes into account the control actions and the current system state is necessary. Probabilistic risk assessment (PRA) can be utilized as an analytical tool to address the probabilistic aspect of the decision-making process. Several risk assessment methodologies in the field of PRA have evolved to address risk issues in a constantly changing environment, and the probabilistic dynamics framework has been proposed in a state-discretization form. This thesis introduces machine-learning-aided state discretization and the integrated artificial reasoning framework (IARF) to improve the capabilities of the probabilistic dynamics framework by increasing the explainability of state trajectories and enhancing control over the number of system states. Unlike the conventional equal-width discretization approach, machine-learning-aided state discretization determines state boundaries based on the similarity of simulation data. It supports managing the number of system states so that the state-explosion issue arising from the curse of dimensionality can be avoided. IARF is a physics-based approach to defining system structure in a dynamic Bayesian network (DBN) and can support operational decision-making with explainability and traceability. Outcomes from machine-learning-aided state discretization and IARF are used as system state transition models in a Markov decision process (MDP) to find optimal solutions under given operational constraints.
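The contrast between equal-width and similarity-based discretization can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the function names, the one-dimensional k-means clustering used as the similarity measure, and the simulated temperature data are all assumptions made for the example.

```python
import numpy as np

def equal_width_bins(data, n_states):
    """Conventional discretization: n_states equal-width intervals."""
    return np.linspace(data.min(), data.max(), n_states + 1)

def similarity_bins(data, n_states, n_iter=50, seed=0):
    """Illustrative 1-D k-means: cluster simulation samples by similarity,
    then place state boundaries midway between adjacent cluster centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(data, size=n_states, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        for k in range(n_states):
            if np.any(labels == k):
                centers[k] = data[labels == k].mean()
    centers = np.sort(centers)
    mids = (centers[:-1] + centers[1:]) / 2.0
    return np.concatenate(([data.min()], mids, [data.max()]))

# Hypothetical process variable with two regimes (e.g., a temperature
# that is mostly nominal but occasionally degraded).
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(300.0, 5.0, 500),
                       rng.normal(420.0, 15.0, 100)])
print("equal-width :", equal_width_bins(data, 4))
print("similarity  :", similarity_bins(data, 4))
```

Because the similarity-based boundaries follow the data's cluster structure, sparsely visited regions of the state space need not be split into many near-empty states, which is one way the number of system states stays controllable.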
Solving the MDP amounts to finding a solution of the Bellman equation, which can be derived from the conditional probability equations of the constructed DBN. System operators can capture stochastic system dynamics as multiple subsystem state transitions based on their physical relations and uncertainties (e.g., component degradation processes or random failures). Optimization of the operational mode of a high-temperature gas reactor with a balance of plant and a hydrogen production facility was performed to illustrate the merits of the suggested approach.
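A standard way to solve the Bellman optimality equation over a discretized state space is value iteration; the toy example below is a sketch of that generic technique, not the thesis's model. The three states, two actions (loosely labeled as operational modes), transition probabilities, and rewards are all invented for illustration.

```python
import numpy as np

# Hypothetical 3-state, 2-action MDP. P[a, s, s'] is the probability of
# moving from state s to s' under action a; R[s, a] is the immediate reward.
P = np.array([
    [[0.9, 0.1, 0.0],   # action 0: e.g., electricity-generation mode
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]],
    [[0.7, 0.3, 0.0],   # action 1: e.g., hydrogen-production mode
     [0.0, 0.7, 0.3],
     [0.0, 0.1, 0.9]],
])
R = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [-1.0, 0.1]])
gamma = 0.95  # discount factor

def value_iteration(P, R, gamma, tol=1e-8):
    """Iterate V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
    until convergence; return the value function and a greedy policy."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a]
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

V, policy = value_iteration(P, R, gamma)
print("optimal values:", V)
print("optimal policy:", policy)
```

In the framework described above, the transition matrices `P` would come from the DBN's conditional probability equations rather than being specified by hand, and operational constraints would restrict the admissible actions per state.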
School of Engineering
Full Citation
Rensselaer Polytechnic Institute, Troy, NY