Computer-assisted human annotation for animal identification

Authors
Beard, Audrey
Other Contributors
Stewart, Charles V.
Su, Hui
Yener, Bülent, 1959-
Issue Date
2020-08
Keywords
Computer science
Degree
MS
Terms of Use
Attribution-NonCommercial-NoDerivs 3.0 United States
This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.
Abstract
Photographic wildlife censusing (PWC) -- in which animals are surveilled by way of photography, entered into a database, and counted -- has historically required significant labor from human annotators, largely because well-annotated training datasets are small and rare. One framework, photographic mark-recapture (or sight-resight), leverages photographs taken by volunteers, scientists, and camera traps, and requires identifying individual animals based on visual similarity. State-of-the-art methods for this kind of PWC use a detect-classify-rank-verify-annotate pipeline. We focus on the latter three steps, in an effort to attract to them the broader community interest that the other constituent components (detection and classification) have enjoyed for decades. To that end, we formalize the Computer-Assisted Human Annotation (CAHA) problem and explore several metrics and evaluation protocols that measure algorithmic correctness and expected human labor, including the trade-off between them.
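The thesis itself is not reproduced in this record, so the following is only a rough, hypothetical sketch of what the rank-verify-annotate portion of such a pipeline and a correctness-versus-labor summary might look like. Every name here (rank_candidates, annotate_with_human, human_verify, the toy gallery) is an illustrative assumption, not code or terminology from the thesis.

```python
# Hypothetical sketch of the rank-verify-annotate steps of a
# detect-classify-rank-verify-annotate pipeline. Names and data are
# illustrative only; they are not drawn from the thesis.

def rank_candidates(query_embedding, gallery):
    """Rank known individuals by visual similarity to a query photo.

    gallery: list of (individual_id, embedding) pairs.
    Returns ids sorted by descending cosine similarity.
    """
    def similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    scored = [(similarity(query_embedding, emb), iid) for iid, emb in gallery]
    return [iid for _, iid in sorted(scored, reverse=True)]


def annotate_with_human(query_embedding, gallery, human_verify, max_checks=5):
    """Ask a (simulated) human to verify top-ranked matches one at a time.

    human_verify(candidate_id) -> bool stands in for the human annotator.
    Returns (assigned_id_or_None, number_of_human_decisions).
    """
    checks = 0
    for candidate in rank_candidates(query_embedding, gallery)[:max_checks]:
        checks += 1
        if human_verify(candidate):
            return candidate, checks   # match confirmed by the annotator
    return None, checks                # treated as a new individual


if __name__ == "__main__":
    # Toy example: two known individuals, one query that truly is "zebra_B".
    gallery = [("zebra_A", [1.0, 0.0]), ("zebra_B", [0.6, 0.8])]
    query = [0.5, 0.9]
    truth = "zebra_B"

    assigned, labor = annotate_with_human(query, gallery, lambda cid: cid == truth)
    # A crude correctness-versus-labor summary in the spirit of the trade-off
    # the abstract describes: accuracy alongside human decisions per query.
    print(f"correct={assigned == truth}, human decisions={labor}")
```

In this toy run the correct individual is ranked first, so a single human decision suffices; ranking errors would increase the labor count, which is the kind of trade-off the abstract's metrics are meant to capture.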
Description
August 2020
School of Science
Department
Dept. of Computer Science
Publisher
Rensselaer Polytechnic Institute, Troy, NY
Relationships
Rensselaer Theses and Dissertations Online Collection
Access
CC BY-NC-ND. Users may download and share copies with attribution in accordance with a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License. No commercial use or derivatives are permitted without the explicit approval of the author.