A Deep Learning-Based Smart Assistive Framework for Visually Impaired People

2022 IEEE International Conference on Omni-layer Intelligent Systems (COINS)

Abstract
According to the World Health Organization (WHO), millions of visually impaired people worldwide face considerable difficulty moving about independently and often need help from sighted people. Finding their way to an intended destination in an unfamiliar place is a major challenge for them. This paper aims to help these individuals move to any place on their own. To this end, we developed an intelligent system for visually impaired people using a deep learning (DL) algorithm, the convolutional neural network (CNN) architecture AlexNet, to recognize the situation and scene objects automatically in real time. The proposed system consists of a Raspberry Pi, ultrasonic sensors, a camera, breadboards, jumper wires, a buzzer, and headphones. The breadboards and jumper wires connect the sensors to the Raspberry Pi. The sensors detect obstacles and potholes, while the camera serves as a virtual eye for the visually impaired person by recognizing obstacles in any direction (front, left, or right). The system informs the user about nearby objects and automatically calculates how far he or she is from an obstacle. A voice message then alerts the user to the obstacle and gives directions via the earphones. The experimental results show that the AlexNet architecture achieved an impressive 99.56% validation accuracy with a validation loss of 0.0201.
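The abstract states that the system computes the distance between the user and an obstacle from the ultrasonic sensors. The paper's exact formula is not given here; the sketch below uses the standard time-of-flight calculation for a common ultrasonic module such as the HC-SR04 (the sensor model and the one-metre alert threshold are assumptions, not taken from the paper): the echo pulse covers the round trip, so distance = (speed of sound × pulse duration) / 2.

```python
# Hedged sketch of the distance step described in the abstract.
# Assumptions (not from the paper): HC-SR04-style sensor, 100 cm alert threshold.

SPEED_OF_SOUND_CM_S = 34300  # ~343 m/s in air at 20 °C

def obstacle_distance_cm(echo_pulse_s: float) -> float:
    """Distance to the obstacle in cm, given the echo pulse width in seconds.
    The pulse spans the round trip, hence the division by 2."""
    return SPEED_OF_SOUND_CM_S * echo_pulse_s / 2

def should_alert(distance_cm: float, threshold_cm: float = 100.0) -> bool:
    """Whether to trigger the buzzer/voice alert; the threshold is illustrative."""
    return distance_cm < threshold_cm

# A 2 ms echo pulse corresponds to roughly 34.3 cm.
print(obstacle_distance_cm(0.002))
print(should_alert(obstacle_distance_cm(0.002)))
```

On the actual device, `echo_pulse_s` would be measured by timing the sensor's echo pin with a GPIO library; the pure-Python calculation above is independent of that hardware detail.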
Keywords
Convolutional Neural Network, Raspberry Pi, Ultrasonic Sensors, wayfinding system, situation awareness, smart cane