Approaches to render mathematical content in audio.
New ATM design to improve accessibility for people with visual impairments.
Rethinking IDE accessibility for visually impaired developers by improving the glanceability, navigability, and alertability of IDE information.
Spatial Audio UI to enhance IDE usability for developers with visual impairments.
Published in International Conference on Natural Language Processing, 2014
Text-to-speech (TTS) systems hold promise as an information access tool for literate and illiterate users alike, including people with visual impairments. Current TTS systems can convert typical text into natural-sounding speech. However, auditory rendering of mathematical content, specifically equation reading, is not a trivial task. Mathematical equations have to be read so that grouping cues such as parentheses, superscripts, and subscripts are conveyed to the listener accurately. In this paper, we first analyse the acoustic cues that humans employ while speaking mathematical content to (visually impaired) listeners and then propose four techniques that render the observed patterns in a text-to-speech system.
Recommended citation: Potluri, V., Rallabandi, S., Srivastava, P., and Prahallad, K. (2014). "Significance of Paralinguistic Cues in the Synthesis of Mathematical Equations." International Conference on Natural Language Processing.
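To make the bracketing problem concrete, here is a minimal sketch of one way a TTS front end might verbalize grouping: inserting short pauses (rendered as commas) at bracket boundaries so structure is audible. The tokenization, word choices, and pause encoding are illustrative assumptions, not the paper's actual techniques.

```python
def speak_equation(tokens):
    """Convert a tokenized arithmetic expression into a spoken string,
    inserting pauses (commas) at bracket boundaries so grouping is audible."""
    words = {"+": "plus", "-": "minus", "*": "times",
             "/": "divided by", "^": "to the power of"}
    out = []
    for tok in tokens:
        if tok == "(":
            out.append("open bracket,")   # pause after opening a group
        elif tok == ")":
            out.append(", close bracket")  # pause before closing a group
        elif tok in words:
            out.append(words[tok])
        else:
            out.append(tok)  # operands are spoken as-is
    return " ".join(out).replace(" ,", ",")

print(speak_equation(["(", "a", "+", "b", ")", "/", "c"]))
# → open bracket, a plus b, close bracket divided by c
```

A real system would replace the literal "open bracket" words with prosodic cues (pauses, pitch changes, or non-speech sounds), which is the direction the paper's techniques explore.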
Published in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 2018
In recent times, programming environments like Visual Studio are widely used to enhance programmer productivity. However, inadequate accessibility prevents Visually Impaired (VI) developers from taking full advantage of these environments. In this paper, we focus on the accessibility challenges faced by VI developers in using Graphical User Interface (GUI) based programming environments. Based on a survey of VI developers and on two of the authors' personal experiences, we categorize the accessibility difficulties into Discoverability, Glanceability, Navigability, and Alertability. We propose solutions to some of these challenges and implement them in CodeTalk, a plugin for Visual Studio. We show how CodeTalk improves developer experience and share promising early feedback from VI developers who used our plugin.
Recommended citation: Potluri, V., Vaithilingam, P., Iyengar, S., Vidya, Y., Swaminathan, M., & Srinivasa, G. (2018, April). CodeTalk: Improving Programming Environment Accessibility for Visually Impaired Developers. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (p. 618). ACM. https://dl.acm.org/citation.cfm?id=3174192
Published in Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 2, Issue 3, 2018
The emergence of augmented reality and computer vision based tools offers new opportunities to visually impaired persons (VIPs). Solutions that help VIPs in social interactions by providing information (age, gender, attire, expressions, etc.) about people in the vicinity are becoming available. Although such assistive technologies already collect and share this information with VIPs, the views, perceptions, and preferences of sighted bystanders about such information sharing remain unexplored. Bystanders may be willing to share more information for assistive uses, but it remains to be explored to what degree they are willing to share various kinds of information, and what might encourage additional sharing based on the contextual needs of VIPs. In this paper, we describe the first empirical study of the information sharing preferences of sighted bystanders of assistive devices.
Recommended citation: Tousif Ahmed, Apu Kapadia, Venkatesh Potluri, and Manohar Swaminathan. 2018. Up to a Limit? Privacy Concerns of Bystanders and Their Willingness to Share Additional Information with Visually Impaired Users of Assistive Technologies. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 2, 3, Article 89 (September 2018), 27 pages. DOI:https://doi.org/10.1145/3264899 https://dl.acm.org/citation.cfm?id=3264899
Published in ASSETS 19: The 21st International ACM SIGACCESS Conference on Computers and Accessibility, 2019
Blind and visually impaired (BVI) individuals are increasingly creating visual content online; however, there is a lack of tools that allow these individuals to modify the visual attributes of the content and verify the validity of those modifications. In this poster paper, we discuss the design and preliminary exploration of a multi-modal and accessible approach for BVI developers to edit visual layouts of webpages while maintaining visual aesthetics.
Recommended citation: Venkatesh Potluri, Liang He, Christine Chen, Jon E. Froehlich, and Jennifer Mankoff. 2019. A Multi-Modal Approach for Blind and Visually Impaired Developers to Edit Webpage Designs. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’19). Association for Computing Machinery, New York, NY, USA, 612–614. DOI:https://doi.org/10.1145/3308561.3354626 https://dl.acm.org/doi/10.1145/3308561.3354626
Published in ASSETS 20: The 22nd International ACM SIGACCESS Conference on Computers and Accessibility, 2020
In graduate school, people with disabilities use disability accommodations to learn, network, and do research. However, these accommodations, often scheduled ahead of time, may not work in many situations due to the uncertainty and spontaneity of the graduate experience. Through a three-person autoethnography, we present a longitudinal account of our graduate school experiences as people with disabilities, highlighting nuances and tensions of situations when our requested accommodations did not work and the alternative coping strategies we used. We use retrospective journals and field notes to reveal the impact of our self-image, relationships, technologies, and infrastructure on our disabled experience. Using post-hoc reflection on our experiences, we then close by discussing personal and situated ways in which peers, faculty members, universities, and technology designers could improve the graduate school experiences of people with disabilities.
Recommended citation: Dhruv Jain, Venkatesh Potluri, and Ather Sharif. 2020. Navigating Graduate School with a Disability. In The 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '20). Association for Computing Machinery, New York, NY, USA, Article 8, 1–11. DOI:https://doi.org/10.1145/3373625.3416986 https://dl.acm.org/doi/10.1145/3373625.3416986
Published in CHI 21: ACM Conference on Human Factors and Computing Systems, 2021
Visual semantics provide spatial information like size, shape, and position, which are necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops), and information seeking and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen reading technology like touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with necessary information to engage with visual semantics.
Recommended citation: Venkatesh Potluri, Tadashi E Grindeland, Jon E. Froehlich, Jennifer Mankoff. 2021. Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users. In CHI Conference on Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan. ACM, New York, NY, USA, 14 pages. https://doi.org/10.1145/3411764.3445040 https://dl.acm.org/doi/abs/10.1145/3411764.3445040
Published in Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '22), 2022
COVID-19 accelerated the trend toward remote software development, increasing the need for tightly-coupled synchronous collaboration. Existing tools and practices impose high coordination overhead on blind or visually impaired (BVI) developers, impeding their abilities to collaborate effectively, compromising their agency, and limiting their contribution. To make remote collaboration more accessible, we created CodeWalk, a set of features added to Microsoft’s Live Share VS Code extension, for synchronous code review and refactoring. We chose design criteria to ease the coordination burden felt by BVI developers by conveying sighted colleagues’ navigation and edit actions via sound effects and speech. We evaluated our design in a within-subjects experiment with 10 BVI developers. Our results show that CodeWalk streamlines the dialogue required to refer to shared workspace locations, enabling participants to spend more time contributing to coding tasks. This design offers a path towards enabling BVI and sighted developers to collaborate on more equal terms.
Recommended citation: Venkatesh Potluri, Maulishree Pandey, Andrew Begel, Michael Barnett, and Scott Reitherman. 2022. CodeWalk: Facilitating Shared Awareness in Mixed-Ability Collaborative Software Development. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '22). Association for Computing Machinery, New York, NY, USA, Article 20, 1–16. https://doi.org/10.1145/3517428.3544812 https://dl.acm.org/doi/10.1145/3517428.3544812
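The core design idea of conveying collaborators' actions through sound effects and speech can be sketched as a simple event-to-cue mapping. The event names, cue strings, and dictionary structure below are hypothetical illustrations, not CodeWalk's actual implementation or the Live Share API.

```python
# Hypothetical sketch of mapping collaborator workspace events to audio
# cues, in the spirit of CodeWalk's sound-effect and speech notifications.
EVENT_CUES = {
    "cursor_moved": "earcon: short tick",
    "edit": "earcon: typewriter click",
    "file_switched": "speech: '{user} opened {file}'",
}

def cue_for(event):
    """Return the audio cue a BVI collaborator would hear for an event dict,
    or None for events that should stay silent."""
    template = EVENT_CUES.get(event["type"])
    if template is None:
        return None  # unknown events produce no audio
    return template.format(**event)

print(cue_for({"type": "file_switched", "user": "Ada", "file": "main.py"}))
# → speech: 'Ada opened main.py'
```

The design choice this illustrates is substituting lightweight, always-on audio for the verbal back-and-forth ("which file are you in?") that otherwise dominates mixed-ability pairing sessions.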
Gave a talk to workshop participants on the current state of assistive technology for people with visual impairments, its challenges, and its possibilities, at the Engineering the Eye workshop organized by the Camera Culture Group, MIT Media Lab, and the LV Prasad Eye Institute.
We attempt to replicate the speech patterns humans follow, and go beyond them with the help of cues (speech and non-speech), to render math (equations and pie charts) in audio.
I was invited to give a talk on improving programming environment accessibility for blind and visually impaired developers at Google.
I was invited to give a talk on improving programming environment accessibility for blind and visually impaired developers at the Software Development Diversity and Inclusion (SDDI) workshop.
Organized a one-day workshop on assistive technology in collaboration with Frontline Eye Hospital, Chennai. This was an introductory workshop focused on spreading awareness and demonstrating the possibilities assistive technology opens up. The target audience included parents of children with visual impairments, rehabilitation trainers, and persons with visual impairments.
This was a seven-day workshop organized in collaboration with Frontline Eye Hospital, Chennai. The goal was to train participants in the use of assistive technology to perform basic tasks such as word processing, basic accounting, email, and recreation.