A power user typically works with multiple screens (monitors) to take in different windows at a glance. A proficient developer is no different: one screen for the IDE, one for the documentation or tutorial relevant to the program under development, and one more for web, email, and personal use. Narrowing to writing code alone, IDEs like Visual Studio present the complex task of program development and debugging through multiple, logically distinct and structured windows and panes. The ability to switch between contexts simply by glancing from one window or screen to another is powerful. That power, and the productivity it brings, is unavailable to visually impaired developers, who currently depend on a screen reader for program development.

Information access is a very linear, sequential process for a visually impaired user, whereas it is a quick, non-linear process for sighted users. Screen reader users are limited to a single-dimensional, linear channel: the single voice of the screen reader, coupled with numerous keyboard shortcuts and actions to traverse and acquire the complex information they need. This limitation becomes especially evident when switching among the many windows of an application like Visual Studio.

Soundscape for Visual Studio (SVS) is a spatial audio UI that converts the complex multi-window GUI of Visual Studio into a soundscape surrounding the developer, coupled with interaction techniques that provide capabilities routinely used by sighted developers but hitherto inaccessible to visually impaired persons (VIPs). SVS renders the current state of VS as a spatial audio cloud of spoken text and non-speech audio that surrounds the developer. For instance, sighted developers receive a great deal of important information from IDEs through visual cues such as syntax coloring, squigglies, and tooltips; these cues inform the user without obstructing the primary task of reading code. Through SVS, we explore the design of an audio experience that conveys this information without interfering with screen reader behavior. SVS would also be able to convey different pieces of information (from the output, stack trace, and locals windows) simultaneously through this audio cloud. Through improvements to the audio user experience, our goal is to see whether the user can dynamically extract the information needed for program development at productivity levels much higher than the current screen-reader-based process.
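To make the spatial audio cloud concrete, the sketch below maps a few IDE windows to positions around the listener and derives stereo gains with constant-power panning. The window names, angles, and function names here are hypothetical illustrations of the general idea, not SVS's actual implementation or API.

```python
import math

# Hypothetical layout: each window is assigned an azimuth around the listener
# (0 degrees = straight ahead, negative = left, positive = right).
# These window names and angles are illustrative only.
WINDOW_AZIMUTHS = {
    "editor": 0.0,
    "output": -60.0,
    "stack_trace": 60.0,
    "locals": 45.0,
}

def pan_gains(azimuth_deg: float) -> tuple[float, float]:
    """Constant-power stereo panning: map an azimuth in [-90, 90] degrees
    to (left, right) gains whose squares sum to 1, so perceived loudness
    stays constant as a source moves across the soundstage."""
    az = max(-90.0, min(90.0, azimuth_deg))       # clamp to front hemisphere
    theta = (az + 90.0) / 180.0 * (math.pi / 2)   # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

def place_window(window: str) -> tuple[float, float]:
    """Return the (left, right) gains for one of the hypothetical windows."""
    return pan_gains(WINDOW_AZIMUTHS[window])
```

Because the gains are power-normalized, several windows can sound concurrently (e.g. output on the left while the editor reads ahead) without any one source position being artificially louder, which is the property the simultaneous audio cloud relies on.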

Project collaborators

Sujeath Pareddy and Dr. Manohar Swaminathan