Term of Award

Summer 2018

Degree Name

Master of Science in Applied Engineering (M.S.A.E.)

Document Type and Release Option

Thesis (restricted to Georgia Southern)

Copyright Statement / License for Reuse

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.


Department

Department of Mechanical Engineering

Committee Chair

Biswanath Samanta

Committee Member 1

JungHun Choi

Committee Member 2

Minchul Shin


Abstract

A study is presented on the development of an intelligent robot through the use of off-board edge computing and deep learning neural networks (DNNs). The robot was developed by combining a convolutional neural network (CNN) object detection/classification system built on the TensorFlow Object Detection API with a recurrent neural network (RNN) speech recognition system provided by Mozilla DeepSpeech, both running off-board. The robot was capable of taking voice commands from the user and of not only seeing but also understanding its environment in order to perform certain tasks. A TurtleBot with an Xbox Kinect camera was used as the mobile robot platform. Both networks ran on remote devices, with the Robot Operating System (ROS) serving as the communication medium between the hardware. The CNN model, based on the single shot multibox detector (SSD) MobileNet architecture, was run on the edge computing device, an Nvidia Jetson TX2, while the RNN model was run on a laptop CPU due to software limitations. The CNN model was additionally trained to increase the speed of the pre-trained model provided by the TensorFlow Object Detection API on the TX2 with a video feed. Because the bounding box predictions were slightly unsteady, distance estimation was mostly obtained with a laser scanner, with some programs using the bounding box height from the CNN model within its limitations. Multiple program nodes, developed in Python for the speech task and in C++ for robot control, were combined with a logic network integrating both models to form a dynamic robotic system. The program actions included manual control, following an object, finding and moving to an object, and moving to a desired location point. With the use of a memory program, the TurtleBot could tag identified objects and retain their global positions for future use.
The system worked successfully, with the robot able to perform various tasks through voice command control and to retain the global positions of tagged objects as “memory.” The effectiveness of the robot system was illustrated through different tasks in different scenarios. The scope of further work is outlined.
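The abstract notes that some programs estimated distance from the CNN's bounding box height. A minimal sketch of that idea, assuming a standard pinhole-camera model: the focal length and real object height below are hypothetical illustration values, not figures from the thesis, which relied primarily on the laser scanner because the boxes were unsteady.

```python
# Pinhole-camera sketch of distance estimation from a bounding box height.
# focal_length_px and real_height_m are hypothetical values for illustration;
# a real system would calibrate the camera and measure the target object.

def estimate_distance(box_height_px, real_height_m, focal_length_px):
    """Approximate distance (m) as f * H / h under the pinhole model."""
    if box_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_length_px * real_height_m / box_height_px

# Example: a 0.5 m tall object whose box spans 100 px, with f = 525 px,
# is estimated to be about 2.6 m away.
d = estimate_distance(100, 0.5, 525.0)
```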
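The logic network that maps recognized speech to robot actions (manual control, follow, find, go to a location) could be sketched as a simple dispatcher. The function and command phrases below are assumptions for illustration, not the thesis's actual node code; a real implementation would run as a ROS node and publish velocity or navigation-goal messages.

```python
# Illustrative dispatcher mapping a DeepSpeech-style transcript to an
# action label and optional target. Phrases and return format are
# hypothetical; the thesis integrated this logic across ROS nodes.

def dispatch_command(transcript):
    """Map a recognized speech transcript to (action, target)."""
    text = transcript.lower().strip()
    # Check the more specific phrases first.
    if text.startswith("follow "):
        return ("follow", text.split(" ", 1)[1])
    if text.startswith("find "):
        return ("find", text.split(" ", 1)[1])
    if text.startswith("go to "):
        return ("goto", text[len("go to "):].strip())
    if text in ("stop", "halt"):
        return ("stop", None)
    return ("manual", None)  # fall back to manual teleoperation
```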
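The "memory" behavior, tagging identified objects and retaining their global positions for later tasks, could be sketched as a small store keyed by object label. The class name and (x, y) pose format are assumptions for illustration; in the thesis the poses would come from the robot's localization in the global map via ROS.

```python
# Minimal sketch of the object-memory program: record the global position
# of each identified object so the robot can later "find and move to" it.
# Names and the 2-D pose format are hypothetical.

class ObjectMemory:
    def __init__(self):
        self._poses = {}  # label -> (x, y) global coordinates

    def tag(self, label, x, y):
        """Record (or update) the global position of a detected object."""
        self._poses[label] = (x, y)

    def recall(self, label):
        """Return the stored pose, or None if the object was never tagged."""
        return self._poses.get(label)

mem = ObjectMemory()
mem.tag("chair", 2.4, -1.1)   # tagged during exploration
mem.tag("chair", 2.5, -1.0)   # position updated on a later sighting
```

A "move to object" command would then recall the stored pose and send it as a navigation goal instead of searching the environment again.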

Research Data and Supplementary Material