Audio Rendering of Mathematical Content
Approaches to render mathematical content in audio.
New ATM design to improve accessibility for people with visual impairments.
Rethinking IDE accessibility for visually impaired developers by improving the glanceability, navigability, and alertability of IDE information.
Spatial Audio UI to enhance IDE usability for developers with visual impairments.
Published in International Conference on Natural Language Processing, 2014
Text-to-speech (TTS) systems hold promise as an information access tool for literate and illiterate users alike, including people with visual impairments. Current TTS systems can convert typical text into natural-sounding speech. However, auditory rendering of mathematical content, specifically equation reading, is not a trivial task: equations must be read so that structure such as parentheses, superscripts, and subscripts is conveyed to the listener accurately. In this paper, we first analyse the acoustic cues which humans employ while speaking mathematical content to (visually impaired) listeners and then propose four techniques which render the observed patterns in a text-to-speech system. A minimal illustrative sketch of such cue rendering follows the citation below.
Recommended citation: Potluri, V., Rallabandi, S., Srivastava, P., and Prahallad, K. (2014). "Significance of Paralinguistic Cues in the Synthesis of Mathematical Equations." International Conference on Natural Language Processing.
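As a rough, hypothetical sketch of how prosodic cues like these might be rendered in a TTS pipeline (not the paper's actual implementation), the Python example below maps a small equation tree to SSML, using pauses to delimit parenthesized groups and pitch shifts for superscripts and subscripts. The tree encoding, pause lengths, and pitch offsets are all illustrative assumptions.

```python
# Hypothetical sketch: render an equation tree as SSML so a TTS engine
# conveys grouping and super/subscripts through pauses and pitch.
# The tree format and cue values are assumptions, not the paper's design.

def render(node):
    """Return an SSML fragment for one equation node."""
    kind = node[0]
    if kind in ("sym", "op"):              # spoken literally: "a", "plus"
        return node[1]
    if kind == "group":                    # parenthesized subexpression:
        inner = " ".join(render(child) for child in node[1])
        # short silences delimit the group instead of saying "open paren"
        return f'<break time="250ms"/> {inner} <break time="250ms"/>'
    if kind == "sup":                      # superscript: raised pitch
        return (render(node[1]) +
                f' <prosody pitch="+15%">{render(node[2])}</prosody>')
    if kind == "sub":                      # subscript: lowered pitch
        return (render(node[1]) +
                f' <prosody pitch="-15%">{render(node[2])}</prosody>')
    raise ValueError(f"unknown node kind: {kind}")

def to_ssml(tree):
    return f"<speak>{render(tree)}</speak>"

# (a + b)^2, spoken as "a plus b ... two" with pauses marking the group
# and a pitch rise marking the exponent.
tree = ("sup",
        ("group", [("sym", "a"), ("op", "plus"), ("sym", "b")]),
        ("sym", "two"))
print(to_ssml(tree))
```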
Published in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018
Programming environments like Visual Studio are widely used to enhance programmer productivity. However, inadequate accessibility prevents visually impaired (VI) developers from taking full advantage of these environments. In this paper, we focus on the accessibility challenges VI developers face in using Graphical User Interface (GUI) based programming environments. Based on a survey of VI developers and on two of the authors' personal experiences, we categorize the accessibility difficulties into Discoverability, Glanceability, Navigability, and Alertability. We propose solutions to some of these challenges and implement them in CodeTalk, a plugin for Visual Studio. We show how CodeTalk improves the developer experience and share promising early feedback from VI developers who used our plugin. A small hypothetical sketch of the Alertability idea follows the citation below.
Recommended citation: Potluri, V., Vaithilingam, P., Iyengar, S., Vidya, Y., Swaminathan, M., & Srinivasa, G. (2018, April). CodeTalk: Improving Programming Environment Accessibility for Visually Impaired Developers. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 618). ACM. https://dl.acm.org/citation.cfm?id=3174192
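One way a plugin might realize Alertability is to map IDE diagnostics to short, distinguishable tones ("earcons") so that problems are heard rather than only shown as visual squiggles. The sketch below is an illustration under assumed severity names and tone parameters; it is not CodeTalk's actual implementation.

```python
# Hypothetical sketch of "Alertability": map IDE diagnostics to earcons
# so a screen-reader user is alerted to problems without visual cues.
# Severity names and tone choices are illustrative assumptions.
import winsound  # standard library; Windows-only, like Visual Studio

# Each severity gets its own (frequency_hz, duration_ms) earcon.
EARCONS = {
    "error":   (880, 200),  # higher, longer tone demands attention
    "warning": (660, 120),
    "info":    (440, 80),
}

def announce(diagnostics):
    """Play an earcon for each diagnostic and echo its text."""
    for diag in diagnostics:
        freq, dur = EARCONS.get(diag["severity"], EARCONS["info"])
        winsound.Beep(freq, dur)
        # A real plugin would also hand this string to the screen reader.
        print(f'{diag["severity"]} at line {diag["line"]}: {diag["message"]}')

announce([
    {"severity": "error", "line": 12, "message": "';' expected"},
    {"severity": "warning", "line": 30, "message": "unused variable 'x'"},
])
```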
Published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 2, Issue 3, 2018
The emergence of augmented reality and computer vision based tools offers new opportunities to visually impaired persons (VIPs). Solutions that help VIPs in social interactions by providing information (age, gender, attire, expressions, etc.) about people in the vicinity are becoming available. Although such assistive technologies already collect and share this information with VIPs, the views, perceptions, and preferences of sighted bystanders about such information sharing remain unexplored. Bystanders may be willing to share more information for assistive uses, but it remains to be explored to what degree they are willing to share various kinds of information, and what might encourage additional sharing based on the contextual needs of VIPs. In this paper, we describe the first empirical study of the information sharing preferences of sighted bystanders of assistive devices.
Recommended citation: Tousif Ahmed, Apu Kapadia, Venkatesh Potluri, and Manohar Swaminathan. 2018. Up to a Limit? Privacy Concerns of Bystanders and Their Willingness to Share Additional Information with Visually Impaired Users of Assistive Technologies. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 3, Article 89 (September 2018), 27 pages. DOI:https://doi.org/10.1145/3264899 https://dl.acm.org/citation.cfm?id=3264899
Published in ASSETS '19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 2019
Blind and visually impaired (BVI) individuals are increasingly creating visual content online; however, there is a lack of tools that allow these individuals to modify the visual attributes of the content and verify the validity of those modifications. In this poster paper, we discuss the design and preliminary exploration of a multi-modal and accessible approach for BVI developers to edit visual layouts of webpages while maintaining visual aesthetics.
Recommended citation: Venkatesh Potluri, Liang He, Christine Chen, Jon E. Froehlich, and Jennifer Mankoff. 2019. A Multi-Modal Approach for Blind and Visually Impaired Developers to Edit Webpage Designs. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’19). Association for Computing Machinery, New York, NY, USA, 612–614. DOI:https://doi.org/10.1145/3308561.3354626 https://dl.acm.org/doi/10.1145/3308561.3354626
Published in CHI '21: ACM Conference on Human Factors in Computing Systems, 2021
Visual semantics provide spatial information like size, shape, and position, which are necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and information seeking and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen reading technology like touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with necessary information to engage with visual semantics.
Recommended citation: Venkatesh Potluri, Tadashi E Grindeland, Jon E. Froehlich, Jennifer Mankoff. 2021. Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3411764.3445040
Gave a talk to participants of the Engineering the Eye workshop, organized by the Camera Culture Group at the MIT Media Lab and the LV Prasad Eye Institute, on the current state of assistive technology for the visually impaired, its challenges, and its possibilities.
We attempt to replicate the speech patterns humans follow when reading math aloud, and to go beyond them with the help of speech and non-speech cues to render math (equations and pie charts) in audio.
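As a toy sketch of the non-speech idea, the example below sonifies a pie chart by playing one tone per slice, with duration encoding the slice's share of the whole and pitch distinguishing slices. The mappings (tone lengths, pitch steps, the sample data) are illustrative assumptions, not the talk's actual system.

```python
# Toy sketch: convey a pie chart with non-speech cues. Each slice plays
# as one tone; duration encodes the slice's share, pitch distinguishes
# slices. All mappings here are illustrative assumptions.
import winsound  # standard library; Windows-only

def sonify_pie(slices, total_ms=3000, base_hz=330, step_hz=110):
    total = sum(slices.values())
    for i, (label, value) in enumerate(slices.items()):
        share = value / total
        duration = max(80, int(total_ms * share))  # bigger slice, longer tone
        print(f"{label}: {share:.0%}")             # spoken label in a real UI
        winsound.Beep(base_hz + step_hz * i, duration)

sonify_pie({"rent": 50, "food": 30, "savings": 20})
```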
I was invited to give a talk at Google on improving programming environment accessibility for blind and visually impaired developers.
I was invited to give a talk on improving programming environment accessibility for blind and visually impaired developers at the Software Development Diversity and Inclusion (SDDI) workshop.
Organized a one-day introductory workshop on assistive technology in collaboration with Frontline Eye Hospital, Chennai, focused on spreading awareness and demonstrating the possibilities assistive technology opens up. The target audience was parents of children with visual impairments, rehabilitation trainers, and persons with visual impairments.
This was a seven-day workshop organized in collaboration with Frontline Eye Hospital, Chennai. The goal was to train participants in the use of assistive technology to perform basic tasks like word processing, basic accounting, email, and recreation.