Our Project

The vision for our project, as supplied by our sponsor, Dr. Shenkin, is laid out in our Capstone project proposal.

The Problem

Our sponsor, Dr. Shenkin, studies the 3D structure of forests. His current workflow involves traversing the forest on foot and using a tripod-mounted LiDAR to scan the canopy from ground level. Groups of researchers trudge through the forest until they find an adequate spot, where they set up the 360-degree LiDAR device to map the nearby surroundings.

[Image: Ground LiDAR in the jungle]
A crew of researchers operating a LiDAR at ground level. (Source: Dr. Shenkin)

This process takes a large investment of time and manual labor: a crew of 2-3 researchers needs a full week to cover a one-hectare (100 m × 100 m) plot of forest. Researchers also have no choice but to scan from fixed vantage points at ground level, which is poorly suited to capturing the details of tall trees. Surrounding overgrowth blocks the line of sight to the upper canopy, so some details are missed and forest ecologists receive an incomplete picture of 3D forest structure. Since forest structure dictates forest function, such as a tree's resilience to climate change and the survival strategies it employs, incomplete data leads to an inaccurate assessment of what the forest is doing. A more detailed picture of forest function matters because it would allow researchers to simulate these functions more accurately and determine how the forest ecosystem responds to changing environmental variables.

Our Solution

To resolve the issues in our client's workflow, we have been tasked with creating a navigation system for a fully autonomous drone. This system will detect objects in the drone's path and find safe directions of travel to maneuver around them, as sketched below. Over time, the system builds a map of the space that can be saved for future use.
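As a rough illustration of the obstacle-avoidance idea (a minimal sketch, not our actual flight code), the following ROS node reads a 2D laser sweep and reports the heading with the most clearance. The /scan topic name, the SAFE_DISTANCE threshold, and the widest-clearance heuristic are all assumptions made for this example.

```python
#!/usr/bin/env python
# Minimal sketch: pick the clearest heading from a 2D laser scan.
# The /scan topic and the simple max-clearance heuristic are
# illustrative assumptions, not our production navigation logic.
import math

import rospy
from sensor_msgs.msg import LaserScan

SAFE_DISTANCE = 2.0  # assumed clearance (meters) to call a heading safe


def scan_callback(scan):
    # Each range reading corresponds to an angle in the scan sweep.
    best_angle, best_range = None, 0.0
    for i, r in enumerate(scan.ranges):
        if math.isinf(r) or math.isnan(r):
            continue  # skip invalid returns
        if r > best_range:
            best_range = r
            best_angle = scan.angle_min + i * scan.angle_increment
    if best_angle is not None and best_range > SAFE_DISTANCE:
        rospy.loginfo("Clearest heading: %.2f rad (%.2f m free)",
                      best_angle, best_range)
    else:
        rospy.logwarn("No safe heading found; obstacles in all directions")


if __name__ == "__main__":
    rospy.init_node("heading_picker")
    rospy.Subscriber("/scan", LaserScan, scan_callback)
    rospy.spin()
```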

An autonomous drone provides a faster, easier, and more effective alternative to Dr. Shenkin’s current workflow. Instead of a team of researchers, a drone would be able to scout a plot of forested land without human intervention. The drone would be capable of flying up into the canopy, collecting data that is inaccessible to researchers confined to ground level. Additionally, while a tripod-mounted LiDAR must be set up at fixed spots throughout the forest, a drone-mounted sensor can take rapid, continuous snapshots as it moves. This drastically increases the speed of the mapping process while producing a more complete picture of the forest environment.

Our navigation system runs in an Ubuntu environment and is built on the ROS framework. ROS, short for Robot Operating System, is an open-source robotics platform widely used in industry; it provides the message passing, drivers, and libraries that make robots easier to program and understand. ROS links the robot itself to its sensors to enable autonomous movement, object detection, and SLAM (Simultaneous Localization and Mapping). See our Technologies page for more information on our hardware and software components.
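To give a concrete sense of how ROS links components together, here is a minimal publish/subscribe sketch; the /drone/status topic name is an assumption for illustration, not part of our actual system.

```python
#!/usr/bin/env python
# Minimal sketch of ROS's publish/subscribe model: one node publishes
# messages on a topic, and any other node can subscribe to them.
# The /drone/status topic name is an illustrative assumption.
import rospy
from std_msgs.msg import String

if __name__ == "__main__":
    rospy.init_node("status_reporter")
    pub = rospy.Publisher("/drone/status", String, queue_size=10)
    rate = rospy.Rate(1)  # publish once per second
    while not rospy.is_shutdown():
        pub.publish(String(data="navigation system alive"))
        rate.sleep()
```

Any other node, or the standard command-line tool rostopic echo /drone/status, can then consume these messages without the publisher knowing who is listening; this decoupling is what lets sensor drivers, SLAM, and visualization run as independent components.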

Project Architecture

[Image: Architecture diagram]
Our high-level system architecture, showing our central computer, the ROS framework and libraries, and the sensor hardware.

Our sensors, such as LiDARs and depth cameras, collect data from the surrounding environment. This information is published into ROS, where it can be accessed through ROS's libraries; for instance, a LiDAR driver exposes its readings as a cloud of coordinate points. A SLAM component can then plot these points in 3D space. Once the points are plotted, a visualization program can render them to a user display, allowing the user to watch the map being generated in real time. At the same time, the points can be written to the onboard SD card for permanent storage, as in the sketch below.
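The following minimal sketch shows the storage end of that pipeline: a node subscribes to incoming LiDAR point clouds and appends the raw coordinates to a file. The /lidar/points topic and the /media/sd/points.xyz path are assumptions about our setup, made for illustration.

```python
#!/usr/bin/env python
# Minimal sketch: log incoming LiDAR points to onboard storage.
# The /lidar/points topic and /media/sd/points.xyz path are
# illustrative assumptions about the drone's configuration.
import rospy
from sensor_msgs import point_cloud2
from sensor_msgs.msg import PointCloud2

OUTPUT_PATH = "/media/sd/points.xyz"  # assumed SD card mount point


def cloud_callback(cloud):
    # Unpack the (x, y, z) coordinates from the ROS point cloud message
    # and append them to a plain-text file for permanent storage.
    with open(OUTPUT_PATH, "a") as f:
        for x, y, z in point_cloud2.read_points(
                cloud, field_names=("x", "y", "z"), skip_nans=True):
            f.write("%.3f %.3f %.3f\n" % (x, y, z))


if __name__ == "__main__":
    rospy.init_node("point_logger")
    rospy.Subscriber("/lidar/points", PointCloud2, cloud_callback)
    rospy.spin()
```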