Technologies

Since our project involves much more hardware and electrical engineering work than most other Computer Science Capstone projects, we have a wide range of technologies to showcase, both hardware and software.

Raspberry Pi

[Figure: A Raspberry Pi 4.]

A Raspberry Pi is a miniature, portable computer that runs a Linux operating system. Dr. Shenkin will need a microcomputer such as a Raspberry Pi mounted on the drone in order to run the navigation system and store the mapping data for later use. Unfortunately, the Pi may not have enough processing power to perform the calculations required for 3D mapping. Some Pi users work around this by using the Pi as a Wi-Fi hub, sending the data back to a larger computer that can do the heavy lifting; however, this is not an option in the remote reaches of the rainforest. With all this in mind, Dr. Shenkin will be satisfied if our system works on any Linux machine or VM.

LiDAR Sensor

[Figure: An RPLIDAR A1 LiDAR sensor.]

Light Detection and Ranging (LiDAR) sensors are highly useful tools for mapping environments. A LiDAR works by firing pulses of infrared light in many different directions, then waiting for each pulse to reflect back to the sensor. The sensor calculates where the pulse was reflected using the angle at which the pulse was fired and the time the pulse took to return. The output for each pulse is the coordinates of a point in space: the location of the solid surface the pulse struck and reflected off. A LiDAR sensor collects millions of these points over time, and they can be stored as one long list called a point cloud.
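As a rough illustration of that time-of-flight calculation, the sketch below converts a single pulse from a 2D scanner such as the RPLIDAR A1 into a point. The function name and sample values are our own, chosen for illustration rather than taken from any real driver.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def pulse_to_point(angle_rad: float, round_trip_s: float) -> tuple[float, float]:
    """Convert one LiDAR pulse (firing angle, round-trip time) to a 2D point.

    The pulse travels out to the object and back, so the one-way
    distance is half of (speed of light * round-trip time).
    """
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0
    x = distance * math.cos(angle_rad)
    y = distance * math.sin(angle_rad)
    return (x, y)

# A point cloud is simply a long list of such points.
point_cloud = [
    pulse_to_point(math.radians(deg), t)
    for deg, t in [(0.0, 6.7e-8), (90.0, 1.3e-8)]  # two example pulses
]
print(point_cloud)  # first point lands roughly 10 m straight ahead
```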

Depth Camera

[Figure: An Intel RealSense depth camera, model D435i.]

Our depth camera contains multiple modules that enable different types of vision. One is a conventional RGB module, which captures light as a normal camera would. Another is a stereo module, which replicates human depth perception through binocular vision: it captures the scene through two separate lenses at the same time, viewing each point from two slightly different angles, then calculates the distance to those points using trigonometry. The camera produces an exceptionally high volume of point cloud data as output.
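That trigonometry is commonly expressed with the standard pinhole stereo model, in which depth is proportional to the lens baseline and inversely proportional to disparity (how far a point shifts between the two images). The sketch below uses illustrative numbers, not the D435i's actual calibration:

```python
def disparity_to_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: depth = focal length * baseline / disparity.

    focal_px     -- focal length of the lenses, in pixels
    baseline_m   -- distance between the two lenses, in meters
    disparity_px -- horizontal shift of the point between the two images
    """
    return focal_px * baseline_m / disparity_px

# Illustrative values: a point shifted 20 pixels between the two views
# of a camera with a 640-pixel focal length and a 50 mm baseline.
print(disparity_to_depth(640.0, 0.05, 20.0))  # -> 1.6 (meters)
```

Note the inverse relationship: nearby objects shift a lot between the two views, while distant objects barely shift at all, which is why stereo depth becomes less precise at long range.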

ROS (Robot Operating System)

Despite the name, ROS is not an operating system that runs a computer. Instead, it is a framework of software libraries and tools for building robot programs. These include drivers, which let the user interface with hardware components, and libraries, which make processing sensor data more accessible.
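In ROS, drivers publish sensor data on named topics and other programs subscribe to them. The minimal ROS 2 node below, written in Python with rclpy, listens for point clouds; the /points topic name is an assumption, since the actual name depends on the sensor driver:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2

class CloudListener(Node):
    def __init__(self):
        super().__init__('cloud_listener')
        # Subscribe to the point cloud topic published by a sensor driver.
        self.create_subscription(PointCloud2, '/points', self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # Each message carries a batch of points from the sensor.
        self.get_logger().info(f'Received cloud with {msg.width * msg.height} points')

def main():
    rclpy.init()
    rclpy.spin(CloudListener())

if __name__ == '__main__':
    main()
```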

Simultaneous Localization and Mapping (SLAM)

[Figure: A map created by a LiDAR performing SLAM inside a building.]

Simultaneous Localization and Mapping (SLAM) is a procedure in which a sensor simultaneously tracks its own location and maps its surroundings. The sensor estimates how far it has moved since it was turned on, and can therefore track its own position in space. Using this estimate, the point clouds picked up by the LiDAR can be plotted in their proper positions in real time, eventually producing a complete map of the surrounding environment.
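To illustrate how a tracked pose lets points be plotted in their proper positions, the sketch below (our own simplification, not a real SLAM implementation) transforms a 2D point from the sensor's local frame into the shared map frame:

```python
import math

def to_map_frame(point_local, pose):
    """Rotate and translate a sensor-frame point into the map frame.

    pose is the sensor's estimated (x, y, heading) in the map,
    which SLAM continuously updates as the sensor moves.
    """
    px, py = point_local
    x, y, theta = pose
    map_x = x + px * math.cos(theta) - py * math.sin(theta)
    map_y = y + px * math.sin(theta) + py * math.cos(theta)
    return (map_x, map_y)

# A point seen 2 m straight ahead of a sensor sitting at (1, 1)
# and facing 90 degrees lands at roughly (1, 3) on the map.
print(to_map_frame((2.0, 0.0), (1.0, 1.0, math.pi / 2)))
```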

Nav2 (Navigation for ROS)

[Figure: Nav2 in action, mapping a room and determining safe routes to travel.]

Nav2 is a library for ROS that implements robot movement and obstacle avoidance. Given a map of the environment and a goal to reach, the robot searches for the easiest path to the goal while minimizing the risk of colliding with obstacles. Since we do not have access to a physical drone, we test Nav2 with a simulated robot in a virtual environment.
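Nav2 exposes navigation through ROS actions; the sketch below sends a single goal pose using the nav2_simple_commander helper package. It assumes Nav2 is already running (for example, against a simulated robot) and that the global frame is named 'map', as in a standard setup:

```python
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()
navigator.waitUntilNav2Active()  # wait for Nav2 to finish starting up

# Ask Nav2 to plan and drive to a point 2 m ahead in the map frame.
goal = PoseStamped()
goal.header.frame_id = 'map'
goal.header.stamp = navigator.get_clock().now().to_msg()
goal.pose.position.x = 2.0
goal.pose.orientation.w = 1.0  # facing along the map's x-axis

navigator.goToPose(goal)
while not navigator.isTaskComplete():
    pass  # Nav2 replans around obstacles as the robot moves

print(navigator.getResult())
```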