Date of Graduation

Fall 2018

Degree

Master of Science in Computer Science

Department

Computer Science

Committee Chair

Anthony Clark

Abstract

Recently, the use of autonomous robots for exploration has expanded drastically, largely due to innovations in hardware technology and the development of new artificial intelligence methods. The wide variety of robotic agents and operating environments has led to the creation of many unique control strategies that cater to each specific agent and its goal within an environment. Most control strategies are single purpose, meaning they are built from the ground up for each given operation. Here we present a single reinforcement learning control solution for autonomous exploration intended to work across multiple agent types, goals, and environments. The solution includes a memory of past actions and rewards to efficiently analyze an agent’s current state when planning future actions. The agent’s objective is to safely navigate an environment and collect data to achieve a defined goal. The control solution is first compared with random and heuristic control schemes. To test for adaptability, the controller is then subjected to changes in the agent’s sensors, environments, and goals. Control strategies are compared by examining goal completion rates, the number of actions taken, and the agent’s remaining health and energy at the end of a simulation. Results indicate that the newly developed control strategy is adaptable to new situations. A reinforcement learning based controller, such as the one presented in this research, could help provide a universal solution for controlling autonomous robots in the field of exploration.
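
To illustrate the general idea described in the abstract, the following is a minimal Python sketch (not code from the thesis) of a tabular Q-learning agent whose state folds in a short memory of recent actions and rewards, so past outcomes influence future action selection. The grid-corridor environment, reward values, and all names below are hypothetical assumptions, not details taken from this work.

import random
from collections import defaultdict, deque

class MemoryQAgent:
    """Tabular Q-learning agent whose state includes a memory of recent (action, reward) pairs."""
    def __init__(self, n_actions, memory_len=3, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.n_actions = n_actions
        self.memory = deque(maxlen=memory_len)            # recent (action, reward) pairs
        self.q = defaultdict(lambda: [0.0] * n_actions)   # Q-table keyed by (observation, memory)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _state(self, obs):
        # Combine the raw observation with the action/reward memory.
        return (obs, tuple(self.memory))

    def act(self, obs):
        if random.random() < self.epsilon:                # epsilon-greedy exploration
            return random.randrange(self.n_actions)
        values = self.q[self._state(obs)]
        return values.index(max(values))

    def learn(self, obs, action, reward, next_obs, done):
        state = self._state(obs)
        self.memory.append((action, round(reward, 2)))    # remember what was just tried
        next_state = self._state(next_obs)
        bootstrap = 0.0 if done else max(self.q[next_state])
        target = reward + self.gamma * bootstrap
        self.q[state][action] += self.alpha * (target - self.q[state][action])

# Toy usage: a 1-D corridor where the agent must reach position 9.
def step(pos, action):
    pos = max(0, min(9, pos + (1 if action == 1 else -1)))
    return pos, (1.0 if pos == 9 else -0.01), pos == 9

agent = MemoryQAgent(n_actions=2)
for _ in range(200):
    pos, done = 0, False
    agent.memory.clear()                                  # start each episode with empty memory
    while not done:
        a = agent.act(pos)
        nxt, r, done = step(pos, a)
        agent.learn(pos, a, r, nxt, done)
        pos = nxt

In this sketch the memory simply becomes part of the table key; the thesis controller may represent and use its action-reward memory quite differently.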

Keywords

autonomous, robotics, control, memory, reinforcement, learning, exploration

Subject Categories

Artificial Intelligence and Robotics

Copyright

© Keith August Cissell

Open Access
