Navigation Helper Helmet

Team name: Eye of the future

In this capstone project, we plan to develop a helmet/eyeglass that assists visually impaired people to perceive their surrounding environment and navigate more easily.

Client: Abolfazl Razi
GTA: Han Peng (hp263@nau.edu)


Latest feature updates:


⦁ Feature update on April 22, 2021
User interface: We have added a face sample collection function to the user interface. This feature allows users to collect new face recognition samples and store them by category.

User interface: We have added new voice prompts to the user interface. For example, it can now tell the user that the motor has been reset.

Helmet: We re-made the helmet using acrylic sheets.



⦁ Feature update on March 11, 2021
Rotation system: The stepper motor can drive the camera and LIDAR to rotate together, swinging left and right over a range of -90 to +90 degrees.

LIDAR system: It can detect distances from 0 to 400 cm with an error of no more than 5 cm.

Voice prompt system: It can describe the type of detected object and its distance and direction relative to the user. When the detected object is a person familiar to the user and a face sample has been collected, the system announces the person's identity. If the detected person is not on record, the system notifies the user that the person is unknown.

User interface: The user can now adjust the LIDAR detection range and the motor speed, and turn video recording or motor rotation on or off.



⦁ Feature update on January 11, 2021
Face recognition: It can now distinguish strangers from acquaintances. Jingwei Yang, Junlin Hai, and Bo Sun have been added as familiar people.

Object recognition: Many kinds of objects can be recognized, such as tables, chairs, displays, water bottles, trees, cars, and bicycles.

Voice prompt system: The voice prompt system can already announce Junlin Hai's and Jingwei Yang's names.

Description:

In this project, we plan to design and implement a low-cost helmet-based monitoring system that uses high-definition cameras, an infrared (IR) camera for night vision, and a LiDAR to perceive the environment and translate it into voice prompts.
The basic operation of the device is to monitor the environment and provide voice-based hints to the user.

Client Description:



Abolfazl Razi
Position: Assistant Professor
College: School of Informatics, Computing and Cyber Systems
Main research: Wireless networking and smart health (Director of the Wireless Networking and Smart Health lab)
Email: abolfazl.razi@nau.edu
Link: https://www.cefns.nau.edu/~ar2843/

Potential benefits and applications:


Our product helps users understand their surroundings by describing the environment with voice. For example, when there is someone in front of the user, the product announces "someone in front of you." When the person in front of the user is an acquaintance, the voice system says that person's name. If there is an obstacle in front of the user, the voice system tells the user the type of obstacle and its distance from the user.
The main application of our product is for people with visual impairments, such as blind people. The product is intended to help visually impaired people live and communicate normally.

Requirements:


1. The product can detect targets through 360 degrees.
2. The product should have a voice command function.
3. The product should have object recognition.
4. The product should have face recognition.
5. The product should provide at least 5 hours of use per charge.
6. All electronic components should be inside the equipment to ensure that they will not be damaged in a harsh environment.
7. The product should be comfortable and should dissipate heat well enough that it can be worn continuously for at least 2.5 hours.


Project Design Depiction:


[Project design diagram]

Our product includes five subsystems: the User Interface (UI), the Rotation System, the Recognition System, the LIDAR System, and the Voice Prompt System. We control the entire product through the UI, including turning the device on or off and restarting it. Through the UI we can also set parameters for the other subsystems, including the LIDAR monitoring range and the motor's on/off state and speed. The product's functions are realized by the rotation, recognition, LIDAR, and voice systems. The rotation system drives and controls the rotation of the camera and LIDAR to determine the position of the target, such as "left" or "right." The recognition system identifies the target to determine the type of object; if the target is a person, it also determines that person's name. The LIDAR system judges the distance between the target and the user and limits the recognition system's monitoring range. Finally, the voice system collects all of this data and organizes it, using the previously designed language logic, into a complete description for broadcast. Users can receive the voice information through audio output devices such as wired earphones, Bluetooth earphones, or speakers.
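The snippet below is a minimal sketch of how the subsystem outputs (stepper angle, LIDAR distance, and the recognition label and name) could be combined into one spoken sentence. The function names, angle thresholds, and exact wording are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch of combining subsystem outputs into one voice prompt.
# All names, thresholds, and wording here are illustrative assumptions.

def direction_from_angle(angle_deg):
    """Map the stepper angle (-90 to +90 degrees) to a spoken direction."""
    if angle_deg < -30:
        return "on your left"
    if angle_deg > 30:
        return "on your right"
    return "in front of you"

def build_prompt(label, name, angle_deg, distance_cm):
    """Combine recognition, rotation, and LIDAR results into one sentence."""
    where = direction_from_angle(angle_deg)
    if label == "person":
        who = name if name else "an unknown person"
        return f"{who} is {where}, about {distance_cm:.0f} centimeters away."
    return f"There is a {label} {where}, about {distance_cm:.0f} centimeters away."

# Example: a familiar person detected 120 cm away at +45 degrees.
print(build_prompt("person", "Jingwei Yang", 45, 120))
```

In this sketch the direction wording comes only from the current motor angle; the real system could of course use finer angle bins or combine several detections before speaking.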


Visual depiction:



Raspberry Pi 4B

Camera Module

MakerFocus TFmini-S Micro LiDAR Module

28BYJ-48 Stepper Motor + ULN2003 Driver

SD card

Python



Our Products

Team Members

Jingwei Yang

Team Leader

Skill: Programming
Email: jy375@nau.edu

Bo Sun

Treasurer

Skill: Hardware setup
Email: bs968@nau.edu

Alfred Gunasekara

Meeting Recorder

Skill: Programming
Email: ag3232@nau.edu

Junlin Hai

Secretary

Skill: Programming
Email: jh2884@nau.edu

Progress Page

[Gantt chart]


This Gantt chart is our project plan for the spring semester of 2021. From the chart, we can see that our plan starts on January 11 and ends on March 14. Capstone ends on March 26, so our plan leaves 12 days of buffer to ensure completion of the project. The three diamond-shaped icons in the Gantt chart represent the milestones in our plan; completing a milestone represents a significant breakthrough in our product. The shaded part of the Gantt chart is our critical path, and our project will expand on and supplement these tasks; whether the tasks on the critical path are completed will determine the success of our project. Most of the planned tasks will be completed by multiple team members, which saves time.


Gantt Chart Description (updated March 11):
 According to our previous Gantt chart, all of our tasks should have been completed before March 14th, but the overall plan was delayed because of additional requests from our client. We had planned to complete the 3D model of the helmet on February 17th and then 3D print it. However, because the client believed the helmet could be improved, we went through many helmet designs, and at this time the helmet design is still not completely finished.
 The main reason for the delay is these repeated helmet designs. Because we do not have relevant design experience, there are problems in many details. For example, when we use the camera, the edge of the helmet blocks a part (about 1/8) of the camera's image. In addition, on the printed model we found that some connectors could not be passed through or aligned. From this problem we gained some experience: we should simulate the model with other materials before printing the actual product to understand whether the model will hurt the product's function, so that we can make improvements in advance.
 In addition, our team completed most of the software tests on March 10th. Therefore, we only need to wait for the final model to run the final hardware test and the joint hardware and software test to complete all the plans.


Gantt Chart Description (updated April 22):
 Since the last update, we have completed all the tasks in the Gantt chart as of April 10. However, due to some later changes, we also did some additional work. For example, we re-made our helmet and finished its production on March 30.
 Our team also prepared the poster and speeches for Expo, and completed the recording of the related videos and the poster design and production on April 4.
 On April 16, our team completed the final report of the Capstone project.
 On April 21, we completed the team presentation of our Capstone project. This means that our entire Capstone ended smoothly.



Test Description

Test 1:
Requirements tested: The motor rotates 45 degrees each time.
Test case name: The angle of each rotation of the stepper motor.
Type: White box
Tester: Alfred Ranasinghege Madushan Gunasekara
Date: February 22, 2021

 Our team tested whether the angle of each rotation of the motor is 45 degrees. The requirement for this test is “2.2 The motor rotates 45 degrees each time.” Since we know the principle and can deduce the expected result, it is a white box test.
 As a setup step, we turn on the stepper motor to verify that it can work normally. According to the 28BYJ-48 product manual, it is a 4-phase, 5-wire stepper that needs 2048 steps per revolution. From this we know that if the motor should move only 45 degrees each time, the stepper should move only 2048/8 = 256 steps each time. Therefore, we add a counter with an initial value of 0 to the stepper's code and increase it by one each time the stepper moves a step. Every time the counter reaches 256, we stop the motor for 30 seconds so that the tester can measure the angle the motor has moved; the total angle after each movement should be 45 degrees higher than the previous record. Since each input is 256 steps and the same number of steps separates each measurement, the difference should always be 45 degrees, so this test is a unit test (matrix).
 The specific operation is as follows. After each 256-step movement of the motor, we record the angle between the current position and the starting position. We can easily calculate the ideal result from the principle above and then subtract the actual result from the ideal result. If the absolute value of the difference is not more than 5 degrees, the function passes.
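A minimal sketch of the counting loop described above, assuming a 28BYJ-48 driven through a ULN2003 from a Raspberry Pi with RPi.GPIO. The GPIO pin numbers, step sequence, and per-step delay are illustrative assumptions rather than the project's actual wiring or code.

```python
# Sketch of the 45-degree test loop: count 256 steps (2048 per revolution / 8),
# then pause 30 s so the tester can measure the angle. Pins are assumed wiring.
import time
import RPi.GPIO as GPIO

PINS = [17, 18, 27, 22]            # IN1-IN4 on the ULN2003 (assumed pins)
SEQ = [[1, 0, 0, 1], [1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]]

GPIO.setmode(GPIO.BCM)
for pin in PINS:
    GPIO.setup(pin, GPIO.OUT, initial=0)

counter = 0                         # counts steps since the last pause
for step in range(2048):            # one full revolution = 8 x 45 degrees
    for pin, level in zip(PINS, SEQ[step % 4]):
        GPIO.output(pin, level)
    time.sleep(0.002)               # per-step delay (assumed)
    counter += 1
    if counter == 256:              # 256 steps should correspond to 45 degrees
        counter = 0
        time.sleep(30)              # pause so the angle can be measured

GPIO.cleanup()
```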


Test 2:
Requirements tested: The user can adjust the speed of the motor.
Test case name: Motor speed control test.
Type: White box
Tester: Jingwei Yang
Date: February 24, 2021

 This test aims to determine whether the user can accurately control the motor's speed through the user interface. The requirement for this function is "5.2 The user can adjust the speed of the motor." Since we know the principle and can deduce the expected result, it is a white box test.
 In our user interface, the user can manually set the motor speed. We change the motor's speed by changing the delay time between each step of the stepper motor. Because the 28BYJ-48 stepper takes 2048 steps per revolution, the time used for each revolution should be 2048 * (delay time). The user's input must be an integer greater than 2, and the delay time, in seconds, is (the number entered by the user) / 1000. Therefore, the time required for each rotation of the motor is 2048 * (number entered by the user) / 1000 seconds. Because the input of this test is an integer and the output follows a regular pattern, this test is a unit test (matrix).
 During the actual test, we performed a total of 12 test cases. The first and second cases failed because the motor cannot rotate normally with those inputs; every case after that worked normally. However, due to mechanical error, the actual measured value is slightly larger than the theoretical value, so our team considers an error of less than 1 second a pass, since the idea is that every time the parameter increases by 1, the actual rotation time increases by about 2 seconds. Through testing, we found that the larger the input parameter, the larger the error produced. Therefore, we limit the parameters that users can enter to between 2 and 10.
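A short sketch of the timing relationship above, computing the per-step delay and the ideal per-revolution time for a user-entered speed parameter. The function and variable names are illustrative, not the project's UI code.

```python
# Per-step delay = n / 1000 s for a user-entered integer n, so one full
# revolution of the 28BYJ-48 (2048 steps) should ideally take 2048 * n / 1000 s.
STEPS_PER_REV = 2048

def delay_seconds(n):
    """Per-step delay for a user-entered speed parameter n."""
    return n / 1000.0

def expected_revolution_time(n):
    """Ideal time for one full revolution at speed parameter n."""
    return STEPS_PER_REV * delay_seconds(n)

for n in range(2, 11):              # the UI limits n to roughly 2..10
    print(n, f"{expected_revolution_time(n):.3f} s")
```

This makes the "+1 parameter, about +2 seconds per revolution" expectation from the test explicit: each increment of n adds 2.048 s to the ideal rotation time.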


Test 3:
Requirements tested: The user can adjust the range of LIDAR detection, the adjustable range is 30 cm to 300 cm.
Test case name: LIDAR control function detection.
Type: White box
Tester: Bo Sun
Date: February 25, 2021

 This test checks that the user can control the LIDAR's monitoring range by entering a parameter. The test requirement is "5.1 The user can adjust the range of LIDAR detection; the adjustable range is 30 cm to 300 cm." Since we know the principle and can deduce the expected result, it is a white box test.
 By limiting the LIDAR measurement range, we can effectively reduce the frequency of object recognition so that the voice prompt function can describe the user's environment more effectively. This also reduces unnecessary interference and improves the accuracy of the voice prompt system's descriptions. For this test, we place a chair in advance and then use the user interface to adjust the LIDAR range; the LIDAR measures the distance between the user and the chair. When the distance between the user and the chair exceeds the configured range, the program prints "Out of Range." When the distance is within the configured range, the program prints the value in cm. Because in this test we need to frequently change the chair's position, reset the system, and set the parameters, our team considers this test a unit test (step-by-step).
 In the test, we first place the chair in a position and measure the distance between the user and the chair. We then enter a LIDAR parameter greater than the chair's distance so the chair can be detected, record the measured position of the chair, and compare it with the predicted value. Next, we place the chair outside the configured range and compare the result with the prediction. Finally, we adjust the parameter to be greater than the distance between the chair and the user and again compare the predicted value with the actual value. By repeating these steps, we can judge whether the user can accurately control the LIDAR's monitoring range. Because the LIDAR itself has errors and measurement errors also occur, we allow an error of 5 cm.
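A minimal sketch of the range check described above, assuming the TFmini-S is read over UART with pyserial on the Raspberry Pi. The serial port, baud rate, and the simplified frame-sync handling are assumptions for illustration, not the project's actual code.

```python
# The TFmini-S streams 9-byte frames: 0x59 0x59 header, then the distance in
# centimeters as a low byte and a high byte. Port and baud rate are assumed.
import serial

MAX_RANGE_CM = 200                  # range parameter the user sets in the UI

def read_distance_cm(port):
    """Read one TFmini-S frame and return the distance in cm (or None)."""
    if port.read(2) != b"\x59\x59":  # simplified header sync
        return None
    frame = port.read(7)
    if len(frame) < 7:
        return None
    return frame[0] + frame[1] * 256  # distance = low byte + high byte * 256

with serial.Serial("/dev/serial0", 115200, timeout=1) as lidar:
    while True:
        dist = read_distance_cm(lidar)
        if dist is None:
            continue
        if dist > MAX_RANGE_CM:
            print("Out of Range")
        else:
            print(dist)             # distance in cm; 5 cm error is tolerated
```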


Test 4:
Requirements tested: (1) Face recognition of familiar people.
(2) The voice system can use voice to describe whether the person in front of the user is familiar to the user.
(3) Describe the direction by the angle of rotation of the motor.
(4) The distance between the object and the user can be described according to the measurement result of LIDAR.

Test case name: User integration testing
Type: White box
Tester: Junlin Hai
Date: February 28, 2021

 This test's primary purpose is to determine whether the voice system can accurately describe different people in different locations. Its corresponding requirements are: “1.4 Face recognition of familiar people. 4.3 The voice system can use voice to describe whether the person in front of the user is familiar to them. 4.4 Describe the direction by the angle of rotation of the motor. 4.5 The distance between the object and the user can be described according to the measurement result of LIDAR.” Because we understand the internal organization of the systems used in this test and can predict the results, our group considers it a white box test.
 Depending on the face recognized, the voice prompt system will produce different voice descriptions, and according to the person's location and distance, the voice system will also change the wording it uses so that the description is more accurate. Because this test uses more than three different subsystems to complete a specific function, we consider it an integration test.
 We used four different detection subjects in this test: Jingwei, Bo, Alfred, and Jingwei's roommate. First, we let Jingwei stand in different positions to be detected; we anticipated that, according to Jingwei's location, the voice system would change its descriptions of direction and distance. Then we detect different people. Since Jingwei, Bo, and Alfred are all familiar people, the voice system should state their names, but Jingwei's roommate is not recorded, so the voice system should notify the user, "You don't know this person." The main content of steps two through six is to check whether the voice system can accurately describe the detected object's location, while steps one through nine test whether the voice system can accurately distinguish familiar people from strangers. Due to LIDAR measurement errors, the actual results may deviate from the predictions in the distance descriptions, but if the error is within 5 cm, we consider the test passed.
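As a sketch of the expected identity behavior in this test, the snippet below checks that familiar names are announced while an unrecorded face falls back to the unknown-person message. The set of names follows the test description; the helper function and the exact announcement wording are illustrative assumptions.

```python
# Expected-output check for the face identity part of this integration test.
# The phrase wording ("This is ...") is illustrative, not the project's exact text.
FAMILIAR = {"Jingwei Yang", "Bo Sun", "Alfred Gunasekara"}

def expected_identity_phrase(recognized_name):
    """What the voice system should say about a recognized face."""
    if recognized_name in FAMILIAR:
        return f"This is {recognized_name}."
    return "You don't know this person."

# The four detection subjects used in the test.
assert expected_identity_phrase("Jingwei Yang") == "This is Jingwei Yang."
assert expected_identity_phrase("Bo Sun") == "This is Bo Sun."
assert expected_identity_phrase("Alfred Gunasekara") == "This is Alfred Gunasekara."
assert expected_identity_phrase("Jingwei's roommate") == "You don't know this person."
```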


Documents Page


Testing Results Report



Testing Workbook (download)


Poster


Final Project Presentation


Capstone Project Report

Contact Us

If you have any questions, feel free to contact us.

Our team leader's contact information:

Email: jy375@nau.edu
Phone number: (928) 380-7824