Mini Grant Winner: Personal Navigator

This blog post has been prepared by Kevin Chen, winner of the MakerSpace Mini Grant for February 2019.

Kevin is a first-year student majoring in Computer Science.

SiteSeer is a cost-effective application that improves the accessibility of travel for the visually impaired, helping them navigate urban environments. Tasks that may seem simple to sighted people, such as crossing streets, finding directions, and safely reaching a destination, can be difficult for those who are visually impaired. This project directly addresses the demand for more disability-friendly applications for the blind.

The physical components of this project include a Raspberry Pi 3 Model B+ (with a power supply), a microcomputer that operates the other hardware. For this project, the Raspberry Pi calculates the route and produces each direction needed to reach a destination. Using the camera, the Raspberry Pi applies computer vision and machine learning to detect objects in the street scene, such as green and red traffic lights. An HDMI cable connects the Raspberry Pi to an external monitor, and a Pi Camera produces the live feed that the computer vision uses to caution the user when he or she is attempting to cross the street. The SD card stores the Raspbian Stretch operating system, which runs the Python scripts. The Raspberry Pi is powered through a Micro USB-to-USB cable connected to the power supply, a portable charger.
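
To illustrate how the Pi Camera fits into this pipeline, the sketch below captures a single frame that could later be sent to the vision model; it assumes the standard picamera library on Raspbian Stretch, and the resolution, delay, and file name are illustrative choices rather than the project's actual settings.

```python
# Minimal sketch, assuming the picamera library that ships with Raspbian;
# resolution, delay, and output file name are illustrative placeholders.
from time import sleep
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1024, 768)

camera.start_preview()
sleep(2)                          # give the sensor time to adjust to the lighting
camera.capture('snapshot.jpg')    # this frame is what gets sent for classification
camera.stop_preview()
```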

On the 3D-printed headset, which was designed around a pair of commercial glasses frames, a touch sensor was attached. This hardware component activates the Raspberry Pi and opens the dialogue that asks for a destination and plans the user's route. To connect the touch sensor to the glasses, 21 female-to-female extension wires were used. FPC ribbon extension wires connect the Raspberry Pi, worn at the waist (belt buckle), to the Pi Camera mounted on the glasses.
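
A minimal sketch of how such a touch sensor could trigger the dialogue is shown below; it assumes the sensor's output is wired to GPIO pin 17 and drives the pin high when touched, and the callback is only a placeholder for the real speech dialogue.

```python
# Minimal sketch, assuming a touch sensor wired to GPIO 17 that pulls the pin
# high on touch; gpiozero (bundled with Raspbian) treats it like a button.
from gpiozero import Button
from signal import pause

touch = Button(17, pull_up=False)

def start_navigation_dialogue():
    # Placeholder: the real project would start the speech dialogue that asks
    # the user for a destination and begins planning the route.
    print("Touch detected - starting navigation dialogue")

touch.when_pressed = start_navigation_dialogue
pause()   # keep the script alive, waiting for touches
```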

Other accessories were used to increase the usability of the product. For example, a cord protector was shaped to encompass the loose FPC ribbon extension wires and female-to-female extension wires, both protecting and stylizing the prototype. A magenta and white Raspberry Pi 3 case was used to protect and aesthetically enhance the Raspberry Pi, and 3M double-sided tape securely attached the pocket clip to the case. The total cost of all physical components is $149.00, as itemized in Table 1; the MakerSpace mini-grant provided the additional funding that covered these expenses.

The software components of this project are divided into two areas: the Raspberry Pi and the mobile application. For the Raspberry Pi, the software components include the Raspbian Stretch operating system, Python 3, and Google Cloud Platform. Raspbian Stretch is a Linux-based operating system that runs Python executables on the Raspberry Pi, and Python 3 is the language in which the scripts that run on the Pi were written.

For the mobile application, the software components include the Swift programming language, Xcode, and Google Cloud Platform. Swift was used to build the iOS mobile application, which provides audio feedback to the user and tracks the user's location. The application was built in Xcode, Apple's development environment, and serves as the user-facing interface of the project. For storage, Google Cloud was used along with its application programming interface and data-storage service, Firebase.

The technical software design of the project breaks down into three categories: computer vision, GPS tracking, and the iOS mobile application. Computer vision is used to detect and identify objects in the user's environment, using Python on the Raspbian Stretch operating system together with Google Cloud services such as AutoML Vision and Cloud Storage. GPS tracking guides navigation using the Google Maps Directions API and Apple's GPS services. The iOS mobile application provides the front-end, back-end, and audio capabilities of the project; it was written in Swift in Xcode and uses Google services such as Firebase and Speech-to-Text alongside Apple's Text-to-Speech.
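
To make the GPS-tracking piece concrete, the sketch below shows one way a walking route could be requested from the Maps Directions API in Python; the API key, coordinates, and destination are placeholders, and error handling is omitted.

```python
# Minimal sketch, assuming the public Google Maps Directions API and the
# requests library; key, origin, and destination are placeholder values.
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://maps.googleapis.com/maps/api/directions/json"

params = {
    "origin": "40.7291,-73.9965",          # user's current GPS fix (example)
    "destination": "Washington Square Park, New York, NY",
    "mode": "walking",
    "key": API_KEY,
}

route = requests.get(URL, params=params).json()

# Each leg of the route is broken into steps; reading these out one at a time
# is how turn-by-turn directions could be delivered to the user.
for step in route["routes"][0]["legs"][0]["steps"]:
    print(step["html_instructions"])
```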

The project utilized several software components to run the mobile application and the Raspberry Pi. For both the Raspberry Pi and the mobile app, different modules of Google Cloud Platform were implemented: Firebase, to communicate between the Raspberry Pi and the mobile app; Cloud Storage, to store live snapshots from the camera; AutoML Vision, to act as the computer vision model that classifies the user's environment; the Maps Directions API, to calculate the route and directions to the user's destination; and Speech services (Speech-to-Text and Text-to-Speech), so the user can exchange information with the Raspberry Pi.
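
As a rough illustration of the Firebase link between the two devices, the sketch below uses the Firebase Admin SDK for Python on the Pi side; the service-account file, database URL, and node names are all placeholders, and the mobile app would read and write the corresponding nodes with its own Firebase SDK.

```python
# Minimal sketch, assuming the Firebase Admin SDK and a Realtime Database;
# the credential path, database URL, and node names are placeholders.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {
    "databaseURL": "https://example-project.firebaseio.com"
})

# Publish the latest detection so the phone app can speak it to the user.
db.reference("status").set({
    "detection": "red_light",
    "message": "Wait - the signal is red",
})

# Read back the destination the phone app wrote after the voice dialogue.
destination = db.reference("destination").get()
print(destination)
```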

For the mobile application, the project utilized Swift in the Xcode development environment to program the app, drawing on several modules: CoreLocation, to track the user's location and use GPS to position the user correctly along the route; Speech services, to use the phone's hardware to record and produce audio; and Firebase, to communicate and send data to the Raspberry Pi.

For the Raspberry Pi, the project used the Google Cloud client libraries to access Firebase, AutoML Vision, and Cloud Storage, taking a snapshot of the user's environment and predicting the objects in it with a machine learning model.
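
The sketch below shows roughly what that prediction call could look like with the AutoML Vision client library; the project ID, region, model ID, and file name are placeholders, and library versions on Raspbian may differ.

```python
# Minimal sketch, assuming a trained AutoML Vision classification model and the
# google-cloud-automl client library; all IDs and paths are placeholders.
from google.cloud import automl_v1beta1 as automl

client = automl.PredictionServiceClient()
model_name = client.model_path("my-project", "us-central1", "ICN1234567890")

# Read the camera snapshot captured on the Pi.
with open("snapshot.jpg", "rb") as image_file:
    payload = {"image": {"image_bytes": image_file.read()}}

# Ask the cloud model what it sees (e.g. red light vs. green light).
response = client.predict(model_name, payload)
for result in response.payload:
    print(result.display_name, result.classification.score)
```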


Resources:
Dexter: https://www.dexterindustries.com/howto/use-google-cloud-vision-on-the-raspberry-pi/
Google: https://cloud.google.com/vision/automl/docs/tutorial
