Averting AI catastrophe: Improving democratic intelligence for technological risk governance

Authors
Garvey, Colin K
Other Contributors
Winner, Langdon
Akera, Atsushi
Adali, Sibel
Fortun, Michael
Kinchy, Abby J.
Woodhouse, Edward J.
Issue Date
2019-08
Keywords
Science and technology studies
Degree
PhD
Terms of Use
This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute (RPI), Troy, NY. Copyright of original work retained by author.
Abstract
Concerns about the negative social impacts of artificial intelligence (AI) continue to grow as rapid technological developments bring the promises and threats of AI into reality. Though long dismissed by AI scientists, developers, and entrepreneurs as irrational fears of an ignorant public duped by an unscrupulous media, public concerns are being borne out as a growing body of evidence suggests that AI, as now practiced, poses significant risks to a majority of humankind. What are the risks of AI, and who is creating them? Reliance on technical experts for the definition of relevant categories carries with it the risk of reproducing both the “hype” surrounding AI and experts’ exclusive focus on technological, rather than sociological, sources of risk. I therefore take a political approach to risk, broadening my focus to include the activities of the creators, owners, and users of AI, as well as those whom they impact. Through participant observation at AI conferences, semi-structured interviews with experts, and textual analysis of primary and secondary literature, my dissertation examines how AI scientists, developers, entrepreneurs, funders, and users create risks, what those risks are, and whom they put at risk. I organize this empirical data into seven dimensions of what I call the “AI risk horizon”: military, political, economic, social, environmental, psycho-physiological, and existential risk. Drawing from STS literatures on the governance of technology, I show how risks in all seven dimensions of the horizon emerge from the technocratic political structure of decision-making processes in AI research and development. Despite endangering a majority of people, a minority of elites stand to benefit marvelously from AI. In short, one person’s risk is another’s profit. My central question, then, is: What can be done to intervene and mitigate the scope and magnitude of these risks?
This dissertation uses a twenty-point framework to evaluate barriers to better risk governance and propose strategies for overcoming them.
Description
August 2019
School of Humanities, Arts, and Social Sciences
Department
Dept. of Science and Technology Studies
Publisher
Rensselaer Polytechnic Institute, Troy, NY
Relationships
Rensselaer Theses and Dissertations Online Collection
Access
Restricted to current Rensselaer faculty, staff and students in accordance with the Rensselaer Standard license. Access inquiries may be directed to the Rensselaer Libraries.