
INDOOR AND OUTDOOR NAVIGATION ASSISTANCE SYSTEM FOR VISUALLY IMPAIRED PEOPLE USING YOLO TECHNOLOGY


International Research Journal of Engineering and Technology (IRJET) | Volume: 09 Issue: 05 | May 2022 | www.irjet.net | e-ISSN: 2395-0056 | p-ISSN: 2395-0072

AKILA S, MONAL S, PRIYADHARSHINI A, SHYAMA M, SARANYA M (ASSISTANT PROFESSOR)
DEPARTMENT OF BIOMEDICAL ENGINEERING, MAHENDRA INSTITUTE OF TECHNOLOGY, NAMAKKAL

Abstract— Good vision is a priceless gift, but vision loss is becoming more common these days. To assist blind individuals, the visual world must be turned into an aural world capable of informing them about objects and their spatial placements: objects recognized in the scene are given names and then transformed to speech. Everyone deserves to live freely, including those who are impaired, and in recent decades technology has focused on empowering disabled people to have as much control over their lives as possible. In this paper, an assistive system for the blind is proposed that uses YOLO, a deep-neural-network detector, for fast and reliable object detection within images, and OpenCV under Python to tell the user what is around them. The results obtained suggest that the proposed model successfully allows blind users to move around in an unknown indoor or outdoor environment through a simple user interface. Artificial intelligence is used to recognize and evaluate items and to convey the information via a speaker. In the proposed work, we employ darknet YOLO approaches to address the shortcomings of the present system by reducing the time needed to recognize multiple objects in a frame.

Keywords— Assist Blind, OpenCV, Python, Deep Neural Networks

INTRODUCTION

Millions of people throughout the world struggle to comprehend their surroundings because of visual impairment. Although they find alternative techniques for dealing with daily tasks, they face navigational challenges as well as social difficulties. For example, finding a particular room in an unfamiliar setting is quite difficult for them. Furthermore, it is hard for blind and visually impaired people to tell whether someone is speaking to them or to someone else during a conversation. The purpose of this research is to investigate the feasibility of employing the sense of hearing to comprehend visual objects. The visual and auditory senses are strikingly similar in that both visible objects and sounds can be spatially localized: with two ears, we can determine the spatial location of a sound source through hearing alone. The goal of the project is to assist blind persons in navigating via the output of a processor. Object extraction, feature extraction, and object comparison are all part of the approach used in this project. Artificial intelligence is used to recognize and evaluate items and convey the information via a speaker, and the darknet YOLO approach reduces the time needed to recognize multiple objects. We were inspired to create this project by the need for navigation assistance among blind people, as well as by a broader look at the advanced technology becoming available in today's environment. Technology exists to make human tasks easier, and in this project we apply it to address the difficulties of visually impaired persons. The project's goal is to aid users in navigation through the use of technology, and our engineering profession encourages us to do so.
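As a rough illustration of the pipeline described above (darknet YOLO run under OpenCV in Python, with detected object names collected for announcement), consider the following sketch. The configuration, weight, and label file names, the input size, and the confidence threshold are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of the detection stage: load a darknet YOLO model with
# OpenCV's DNN module and return the names of objects found in a frame.
# File names, input size, and threshold are assumed, not taken from the paper.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
with open("coco.names") as f:
    class_names = [line.strip() for line in f]

def detect_objects(frame, conf_threshold=0.5):
    """Return the class names YOLO detects in a single frame."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    names = []
    for output in outputs:
        for detection in output:
            scores = detection[5:]          # per-class confidences
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold:
                names.append(class_names[class_id])
    return names

# Example: grab one camera frame and report what was seen.
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print("Detected:", ", ".join(set(detect_objects(frame))) or "nothing")
cap.release()
```

In a complete system, the returned names would be passed to a text-to-speech stage so the user hears them through a speaker, as described above.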

RELATED WORKS

Many attempts at object identification and recognition have been made with deep learning algorithms such as CNN, R-CNN, and YOLO. A literature review is undertaken in this study to better understand some of these algorithms. [1]

Using YOLOv3 and the Berkeley DeepDrive dataset, Aleksa Ćorović et al. (2018) developed a method for detecting traffic participants. The system can recognize five object classes (truck, car, traffic sign, pedestrian, and light) in various driving circumstances (snow, overcast and bright sky, fog, and night). The accuracy rate was 63%. [2]


Rohini Bharti et al. (2019) applied smart glasses to assist visually challenged persons. The system was built using a CNN with TensorFlow, a custom dataset, OpenCV, and a Raspberry Pi, and it can detect sixteen different classes with a 90 percent accuracy rate. [3]

Omkar Masurekar et al. (2020) developed an object detection model to aid visually impaired individuals. They utilized YOLOv3 and a custom dataset with three classes (bottle, bus, and mobile). For sound generation, Google Text-to-Speech (gTTS) is employed. The authors found that recognizing the objects in each frame took eight seconds and that the accuracy reached 98 percent. [4]
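For reference, the gTTS library mentioned in [4] can be driven with only a few lines; the spoken phrase and output file name below are illustrative assumptions, not details from that work.

```python
# Sketch of announcing a detected object with Google Text-to-Speech (gTTS).
# The phrase and file name are illustrative.
from gtts import gTTS

announcement = gTTS(text="bottle ahead", lang="en")
announcement.save("announcement.mp3")  # play this file through the speaker
```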


