Individuals with visual impairments (those who are blind or have low vision) lack equitable access to a vast amount of information from the environment. As a result, they face barriers such as the inability to independently read printed text, identify people and places, and anticipate physical obstacles. Although technology, and in particular the panoply of smartphone apps, can help overcome some of these barriers (e.g., through GPS-based location awareness and AI-based interpretation of the camera view), numerous challenges remain, even in everyday scenarios such as shopping or going to a restaurant. A panel of people with visual impairments, mobile technology experts, vision rehabilitation professionals, and academics will discuss the barriers and facilitators of current smartphone technology as used by individuals with visual impairments in everyday real-world scenarios. The discussion is intended to foster collaborations between technology leaders and academics to develop smartphone technology that responds to the needs of the growing population of individuals with vision loss.
Jeremy Cooperstock, Professor, Electrical and Computer Engineering, McGill University
Short Bio: Jeremy Cooperstock (Ph.D., University of Toronto, 1996) is a professor in the Department of Electrical and Computer Engineering, a member of the Centre for Intelligent Machines, and a founding member of the Centre for Interdisciplinary Research in Music Media and Technology at McGill University. He directs the Shared Reality Lab, which focuses on computer mediation to facilitate high-fidelity human communication and the synthesis of perceptually engaging, multimodal, immersive environments. He led the development of the Intelligent Classroom, the world's first Internet streaming demonstrations of Dolby Digital 5.1, multiple simultaneous streams of uncompressed high-definition video, a high-fidelity orchestra rehearsal simulator, a simulation environment that renders graphic, audio, and vibrotactile effects in response to footsteps, and a mobile game treatment for amblyopia. Cooperstock's work on the Ultra-Videoconferencing system was recognized with an award for Most Innovative Use of New Technology from ACM/IEEE Supercomputing and a Distinction Award from the Audio Engineering Society. The research he supervised on the Autour project earned the Hochhausen Research Award from the Canadian National Institute for the Blind and an Impact Award from the Canadian Internet Registration Authority, and his Real-Time Emergency Response project won the Gold Prize (brainstorm round) of the Mozilla Ignite Challenge. Cooperstock has worked with IBM at the Haifa Research Center, Israel, and the T.J. Watson Research Center in Yorktown Heights, New York, and with the Sony Computer Science Laboratory in Tokyo, Japan, and was a visiting professor at Bang & Olufsen, Denmark, where he conducted research on telepresence technologies as part of the World Opera Project.
Cooperstock led the Enabling Technologies theme of the Networks of Centres of Excellence on Graphics, Animation, and New Media (GRAND). He is currently an associate editor for IEEE Transactions on Haptics, Frontiers in Virtual Reality (haptics specialty section), the IEEE World Haptics Conference, the IEEE Haptics Symposium, and the Journal of the Audio Engineering Society, and was guest editor of a special issue of Multimodal Technologies and Interaction on Multimodal Medical Alarms.
Joe Nemargut, School of Optometry at the Université de Montréal
Short Bio: Joe Nemargut was appointed as an assistant professor in 2021 at the School of Optometry at the Université de Montréal. His expertise is in vision rehabilitation, particularly the travel and mobility of persons living with vision loss. He is the only professor in Canada certified in orientation and mobility and one of the few in the world involved in applied research in this area. He trains both vision rehabilitation specialists and researchers. His research projects include the use of technology to improve travel with low vision, 3D tactile maps for navigation, telerehabilitation in orientation and mobility, physical activity in children with visual impairments, and the impact of face masks on navigation, among other topics. He recently received funding from both the Centre de recherche interdisciplinaire en réadaptation du Montréal métropolitain (CRIR) and the Health Ministry.
Natalina Martiniello, Concordia University
Short Bio: Natalina Martiniello is a postdoctoral fellow at Concordia University, focusing on accessibility and inclusion for individuals with visual impairments. She is also a course instructor in the graduate program in Vision Impairment and Rehabilitation at the Université de Montréal. Prior to this, she worked as a Certified Vision Rehabilitation Therapist, specializing in braille literacy and assistive technology for individuals with visual impairments across the age spectrum. As a person with lived experience, she is passionate about braille literacy and access to information. She is the Past President of Braille Literacy Canada, a director of the International Council on English Braille, and the Vice-Chair of the International Network of Researchers with Vision Impairment and their Allies (INOVA).
Nathalie Gingras-Royer, School of Optometry of the Université de Montréal
Short Bio: Nathalie Gingras-Royer is a Master's student in vision science, in the visual impairment intervention option, at the School of Optometry of the Université de Montréal. She holds a bachelor's degree in anthropology. As a visually impaired person, she is skilled in the use of adaptive technologies and participates in research projects aimed at promoting the autonomy and social participation of people living with visual limitations, particularly through the use of inclusive technologies.