Accessibility
(Allen School)
Thursday, November 29, 2018, 3:30 pm
EEB-105
Abstract
Please join us for a rapid tour of accessibility research in the Allen School at the CSE Colloquium this Thursday, Nov 29th. We will have six talks on a range of topics, including: 3D-printed tactile overlays for visually impaired smartphone users, data-driven analyses of mobile app accessibility, sound awareness feedback systems for people who are deaf and hard of hearing, large-scale analyses of online studies with people with disabilities, new tools that model and visualize pedestrian infrastructure, and new block-based programming environments for blind children. Come learn about state-of-the-art accessibility research over some light refreshments and support your fellow researchers.
Talk 1: Interactiles: 3D Printed Tactile Interfaces on Phone to Enhance Mobile Accessibility
Speaker: Xiaoyi Zhang, CSE PhD student
Abstract: The absence of tactile cues such as keys and buttons makes touchscreens difficult to navigate for people with visual impairments. Increasing tactile feedback and tangible interaction on touchscreens can improve their accessibility. However, prior solutions have either required hardware customization or provided limited functionality with static overlays. Prior investigation of tactile solutions for large touchscreens also may not address the challenges on mobile devices. We therefore present Interactiles, a low-cost, portable, and unpowered system that enhances tactile interaction on Android touchscreen phones. Interactiles consists of 3D-printed hardware interfaces and software that maps interaction with that hardware to manipulation of a mobile app. The system is compatible with the built-in screen reader without requiring modification of existing mobile apps. We describe the design and implementation of Interactiles, and we evaluate its improvement in task performance and the user experience it enables with people who are blind or have low vision.
Bio: Xiaoyi Zhang is a PhD candidate in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, working with Professor James Fogarty. He will join Apple Seattle as a researcher in December 2018. He has authored 17 publications and 2 patents in human-computer interaction, with a focus on accessibility and personal informatics. He has interned at Apple, Google Research, and Microsoft Research, and built a team that won LAHacks, HackUCLA, and Facebook Hackathon. He received his bachelor's degree in computer science from UCLA. More information can be found at www.XiaoyiZhang.me.
Talk 2: An Epidemiology-Inspired Large-Scale Analysis of Mobile App Accessibility
Speaker: Anne Ross, CSE PhD student
Abstract: Mobile application (app) accessibility for people with disabilities is often considered at the level of a single app, but rarely at the larger scale of the entire app "ecosystem": all apps in an app store, their companies, developers, and user influences. We constructed an epidemiology-inspired conceptual framework to expand how app accessibility is approached. I will present the first large-scale, population-level assessment of app accessibility, based on this framework. We analyzed 5,753 free Android apps for the prevalence of missing labels on three types of image-based buttons. Missing labels create barriers for screen reader users, such as people who are blind or have low vision. I will then present an assessment of a potentially influential environmental factor: the Android Studio automated Lint tests. The results of these analyses motivate further work in enhancing app accessibility and suggest potential improvements to existing tools.
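To make the "missing label" barrier concrete: an image-based Android control can only be announced meaningfully by a screen reader such as TalkBack if it carries a text label, typically an android:contentDescription attribute. The sketch below is a minimal, hypothetical check over a single layout XML file; the widget list, file path, and heuristics are illustrative assumptions, not the analysis pipeline used in this study.

    # Hypothetical sketch (not the authors' pipeline): flag image-based widgets in an
    # Android layout XML file that lack an android:contentDescription, the label a
    # screen reader such as TalkBack needs in order to announce the control.
    import xml.etree.ElementTree as ET

    ANDROID_NS = "http://schemas.android.com/apk/res/android"
    IMAGE_WIDGETS = {"ImageButton", "ImageView"}  # illustrative subset of image-based controls

    def missing_labels(layout_path):
        """Return tags of image-based widgets in one layout file with no contentDescription."""
        root = ET.parse(layout_path).getroot()
        flagged = []
        for elem in root.iter():
            tag = elem.tag.split("}")[-1]  # strip any XML namespace prefix
            if tag in IMAGE_WIDGETS or tag.endswith("FloatingActionButton"):
                label = elem.get(f"{{{ANDROID_NS}}}contentDescription")
                if not label:  # attribute absent or empty string
                    flagged.append(tag)
        return flagged

    # Example (hypothetical path): print(missing_labels("res/layout/activity_main.xml"))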
Bio: Annie is a PhD student in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. Her broad interest is in human-computer interaction. She is passionate about enabling diverse groups of people to access, interact with, and communicate information through technology. She is currently working with James Fogarty and Jacob O. Wobbrock on making Android applications more accessible to people with disabilities. She has interned at Microsoft Research in the Enable and Ability group and received her B.S. in computer science from Colorado State University. More information can be found at https://homes.cs.washington.edu/~ansross/.
Talk 3: Sound Awareness Feedback for People who are Deaf or Hard of Hearing
Speaker: Dhruv Jain, CSE PhD student
Abstract: Sound awareness has wide-ranging impacts for people who are deaf or hard of hearing (DHH), from being notified of safety-critical information like a ringing fire alarm to more mundane but still useful sounds like the clothes dryer ending a cycle. Sound awareness also affects social interactions with hearing people. For example, knowing not only that someone is speaking but also where to focus visual attention during conversations is a prerequisite for effective speechreading, where visual signals such as lip movement and body language are used to interpret speech. In my talk, I will briefly discuss our three efforts in providing sound awareness feedback to DHH people: (1) GlassEar, a system that visualizes the direction of sound sources on a head-mounted display (HMD); (2) HoloCaptions, augmented-reality real-time captioning on an HMD; and (3) HomeSounds, a smart-home-based general sound awareness system.
Bio: I am a second year PhD student in CSE, working on assistive technology for people who are deaf and hard of hearing with Profs. Jon Froehlich and Leah Findlater. You can find more about me here: dhruvjain.info.
Talk 4: Volunteer-Based Online Studies With Older Adults and People with Disabilities
Speaker: Qisheng Li, CSE PhD student
Abstract: There are few large-scale empirical studies with people with disabilities or older adults, mainly because recruiting participants with specific characteristics is even harder than recruiting young and/or non-disabled populations. Analyzing four online experiments on LabintheWild, we show that volunteer-based online experiments that provide personalized feedback attract large numbers of participants with diverse disabilities and ages and allow robust studies with these populations that replicate and extend the findings of prior laboratory studies. We additionally analyzed participants’ feedback and forum entries that discuss LabintheWild experiments to understand their motivations to take part in such studies – findings that we use to inform design guidelines for online experiment platforms that adequately support and engage people with disabilities.
Bio: I am a second-year Ph.D. student working with Katharina Reinecke on 1) supporting and engaging people with disabilities on online experiment platforms, and 2) developing technology for researchers to recruit and study people with disabilities and older adults at scale.
Talk 5: Modeling the Pedestrian Network: AccessMap and OpenSidewalks
Speaker: Nick Bolten, PhD Candidate in ECE
Abstract: Pedestrians have diverse and complex mobility needs and preferences that are not taken into account by typical transportation modeling and data. A typical pedestrian mobility model treats individuals as slow cars that can also use parks and college campuses, with no consideration for personal and environmental challenges such as steep hills, a lack of sidewalks on a busy street, raised curbs between the sidewalk and the street, or surface qualities like material composition or uplifts. The AccessMap project develops interactive maps for pedestrian mobility through research and design of visual metaphors, automatic trip planning, and the overall user experience made possible by a richer pedestrian data model. AccessMap is currently live for the city of Seattle, with 500 unique users per month, and other cities are staged for release in the immediate future. The OpenSidewalks project helps define and gather open, detailed pedestrian information that is normally missing or collected in an unwieldy format, so that projects like AccessMap become possible anywhere on Earth. The OpenSidewalks project's tools and outreach have resulted in the mapping of the sidewalks and crosswalks of the city of San Jose, California; a detailed pedestrian path map of the University of Washington; several in-progress mapping efforts in other states; and an improved crosswalk model in OpenStreetMap, through collaborations with members of the OpenStreetMap community, transit authorities, and volunteers.
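One way to picture how a richer pedestrian data model changes trip planning: attributes such as incline and raised curbs can be folded into per-user edge costs rather than a single "slow car" distance. The toy sketch below uses networkx with made-up attribute names (length, incline, raised_curb) and arbitrary weights; it is an illustrative assumption, not AccessMap's routing implementation.

    # Toy illustration (not AccessMap's code): route over a small pedestrian graph where
    # edge costs grow with incline, and edges a given traveler cannot negotiate
    # (raised curbs, too-steep segments) are excluded entirely.
    import networkx as nx

    def make_cost(max_incline=0.08, avoid_raised_curbs=True):
        """Return a networkx weight function reflecting one (hypothetical) mobility profile."""
        def cost(u, v, attrs):
            if avoid_raised_curbs and attrs.get("raised_curb", False):
                return None  # None tells networkx to treat the edge as unusable
            incline = abs(attrs.get("incline", 0.0))
            if incline > max_incline:
                return None  # too steep for this profile
            return attrs["length"] * (1.0 + 10.0 * incline)  # penalize steeper segments
        return cost

    G = nx.Graph()
    G.add_edge("A", "B", length=100, incline=0.02)
    G.add_edge("B", "C", length=80, incline=0.00, raised_curb=True)
    G.add_edge("A", "C", length=250, incline=0.01)
    print(nx.shortest_path(G, "A", "C", weight=make_cost()))  # -> ['A', 'C']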
Bio: I am a PhD candidate working with Dr. Anat Caspi, director of the Taskar Center for Accessible Technology, on modeling pedestrian accessibility and mobility, namely through the AccessMap and OpenSidewalks projects. We also teach a VIP course in which students carry out pedestrian accessibility engineering, design, and research projects from start to finish.
Talk 6: Blocks4All: Making Blocks-Based Programming Environments Accessible to Blind Children
Speaker: Richard Ladner, CSE Professor Emeritus
Abstract: Blocks-based programming environments are a popular tool for teaching children to program, but they rely heavily on visual metaphors and are therefore not fully accessible to children with visual impairments. We evaluated existing blocks-based environments and identified five major accessibility barriers for visually impaired users. We explored techniques to overcome these barriers through an interview with a teacher of the visually impaired and formative studies of a touchscreen blocks-based environment with five children with visual impairments. We then developed Blocks4All, an accessible blocks-based programming environment informed by prior research and the results of our formative studies, and evaluated it with another five blind children.
http://stemforall2018.videohall.com/presentations/1078
Bio: Richard E. Ladner, Professor Emeritus in the Paul G. Allen School of Computer Science & Engineering, graduated from St. Mary's College of California with a B.S. in 1965 and received a Ph.D. in mathematics from the University of California, Berkeley in 1971, at which time he joined the faculty of the University of Washington. In addition to his appointment in the Department of Computer Science & Engineering, he held Adjunct appointments in the Department of Electrical Engineering and in the Department of Linguistics.
After many years of research in theoretical computer science, he has turned his attention to accessibility technology research, especially technology for deaf, deaf-blind, hard-of-hearing, and blind people. In addition to research, he is active in promoting the inclusion of persons with disabilities in computing fields. He is the Principal Investigator for the National Science Foundation-funded AccessComputing and AccessCSforAll.