UB ScholarWorks

A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired


dc.contributor.author Elmannai, Wafa
dc.date.accessioned 2018-10-29T15:46:17Z
dc.date.available 2018-10-29T15:46:17Z
dc.date.issued 2018-08-15
dc.identifier.citation W. Elmannai, "A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired", Ph.D. dissertation, Dept. of Computer Science and Engineering, Univ. of Bridgeport, Bridgeport, CT, 2018. en_US
dc.identifier.uri https://scholarworks.bridgeport.edu/xmlui/handle/123456789/3931
dc.description.abstract According to a World Health Organization report, the world has approximately 285 million visually impaired (VI) people: 39 million are estimated to be blind, and 246 million are estimated to have impaired vision. An important factor motivating this research is that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life and support the mobility of VI people. Unfortunately, none of these systems provides a complete solution, and they are very expensive. Therefore, this work presents an intelligent framework that embeds several types of sensors in a wearable device to support the VI community. The proposed work integrates sensor-based and computer-vision-based techniques to produce an efficient and economical visual-assistance device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A video dataset of 30 videos, averaging 700 frames per video, was fed to the system for testing. The proposed sequence of techniques used for the real-time detection component achieved a 96.53% accuracy rate, based on a wide detection view provided by two camera modules and a detection range of approximately 9 meters; a 98% accuracy rate was obtained on a larger dataset. However, the main contribution of this work is a novel collision avoidance approach based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped the input frames, dividing each frame into three areas vertically and splitting it horizontally at 1/3 of its height, in order to specify the urgency of any obstacle present within that frame.
In addition, using fuzzy logic, we were able to provide precise information that helps the VI user avoid frontal obstacles. The strength of the proposed approach is that it aids VI users in avoiding 100% of all detected obstacles. Once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device is therefore accurate, reliable, user-friendly, lightweight, and economically accessible; it facilitates the mobility of VI people and requires no prior knowledge of the surrounding environment. Finally, the proposed approach was compared with the most efficient existing techniques and was shown to outperform them. en_US
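The abstract's frame-partitioning idea (three vertical bands, a horizontal split at 1/3 of the frame height, and fuzzy rules that grade obstacle urgency) can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the dissertation's actual fuzzy membership functions and depth thresholds are not given in the abstract, so the crisp rules, thresholds, and function name below are assumptions for illustration only.

```python
# Illustrative sketch (assumed rules, not the dissertation's code): map an
# obstacle's pixel position and estimated depth to a frame region and a
# coarse urgency label. Region layout follows the abstract: three equal
# vertical bands and a horizontal split at 1/3 of the frame height.

def obstacle_urgency(x, y, depth_m, frame_w=640, frame_h=480):
    """Return (band, urgency) for an obstacle centered at pixel (x, y)."""
    # Three equal vertical bands: left / center / right.
    if x < frame_w / 3:
        band = "left"
    elif x < 2 * frame_w / 3:
        band = "center"
    else:
        band = "right"

    # Pixels below the 1/3-height line are treated as the near/ground zone.
    near_zone = y > frame_h / 3

    # Fuzzy-style rules collapsed to crisp labels (thresholds are assumed).
    if band == "center" and near_zone and depth_m < 3.0:
        urgency = "high"    # obstacle directly ahead and close
    elif near_zone and depth_m < 6.0:
        urgency = "medium"  # off-center or mid-range obstacle
    else:
        urgency = "low"     # distant, or above the horizontal split
    return band, urgency

print(obstacle_urgency(320, 400, 2.0))  # obstacle dead ahead and near
print(obstacle_urgency(100, 100, 8.0))  # distant obstacle, upper left
```

A full fuzzy controller would replace the crisp thresholds with overlapping membership functions over depth and position and defuzzify the rule outputs into a steering cue; the grid mapping shown here is the part the abstract describes explicitly.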
dc.language.iso en_US en_US
dc.subject Assistive wearable devices en_US
dc.subject Computer vision systems en_US
dc.subject Mobility limitation en_US
dc.subject Obstacle collision avoidance en_US
dc.subject Obstacle detection en_US
dc.subject Visual impairment en_US
dc.title A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired en_US
dc.type Thesis en_US
dc.institute.department School of Engineering en_US
dc.institute.name University of Bridgeport en_US

