Author
Kaur, Chintan
Other Contributors
Wen, John T.; Julius, Anak Agung; Wu, Wencen;
Date Issued
2016-05
Subject
Computer and systems engineering
Degree
MS;
Terms of Use
This electronic version is a licensed copy owned by Rensselaer Polytechnic Institute, Troy, NY. Copyright of original work retained by author.;
Abstract
Human-directed control of a mobile redundant articulated manipulator is a challenging task using traditional joystick-based approaches. For individuals suffering from quadriplegia, an additional constraint - the inability to move and feel both arms and both legs - makes the task significantly harder. In this research, we propose a language-based shared control strategy for navigating home environments.; A spoken or typed natural language interface is more intuitive and offers the flexibility to command using high-level descriptions such as "Go to a certain place," in addition to giving finer-scale control over the position and orientation of the robot. We contribute a vocabulary of instructions for use with mobile-manipulator robotic systems, with a focus on retrieving objects in the environment. Each command is paired with a robot action and/or available sensory information by converting raw language instructions into a machine-processable representation. New locations on the map, as well as new objects with their corresponding fiducial markers, can be learned and updated. We also facilitate conditional loop instructions using perceived knowledge about the presence of objects in the field of view.; Two control modes are used. For the low-level control, user-specified translational motion of the manipulator end-effector determines the combined arm and base motion such that the mobile base moves only when the manipulator arm is at its workspace boundary, while otherwise only the arm moves to fulfill the user command. An additional on-the-fly equality constraint is imposed when motion is desired while keeping a user-selected object aligned with the end-effector center. In prior work, the end-effector orientation is either fixed with respect to the base or manually controlled by switching between translation and rotation modes, which makes picking objects placed behind other objects, or picking small objects placed away from the edge, impossible or extremely difficult. Our proposed approach, which combines autonomous orientation with user-controlled translation, makes such tasks easier.; In the second control mode, autonomous navigation to landmarks is provided using the navigation stack and localization framework available in ROS. Switching between the two control modes takes place in the background based on the instruction type.; Thus, a complete end-to-end system for fetching an object is presented, wherein the user can instruct the robot to reach a desired location and then bring the object into view using either direct or conditional instructions. This is followed by instructions to move toward the object, align the end-effector center with it, and grasp it. Finally, the robot can be instructed to bring the object back. We implement the proposed algorithm in simulation on a dual-arm Baxter robot mounted on a wheelchair base. Example scenarios showcase the effectiveness of the algorithm, and related videos are provided. We also discuss possible future extensions of this research.;
Description
May 2016; School of Engineering
Department
Dept. of Electrical, Computer, and Systems Engineering;
Publisher
Rensselaer Polytechnic Institute, Troy, NY
Relationships
Rensselaer Theses and Dissertations Online Collection;
Access
Restricted to current Rensselaer faculty, staff and students. Access inquiries may be directed to the Rensselaer Libraries.;