
Participation in the CSUN Conference - Annexes









CSUN 99

by Jean-Marie D'Amour

Annexes (continued)

Recent Developments in Accessible Web-based Multimedia

Geoff Freed
Project Manager, Web Access Project
CPB/WGBH National Center for Accessible Media
WGBH Educational Foundation
125 Western Ave.
Boston, MA 02134
voice/tty: 617 492-9258
fax: 617 782-2155
e-mail: geoff_freed@wgbh.org

 

Introduction

For millions of Americans, the World Wide Web is an exciting new tool for learning and communicating. For millions of disabled computer users, however, the Web's enhanced graphics, audio, and video capabilities are out of reach. The Web Access Project, begun in 1996 by the CPB/WGBH National Center for Accessible Media (NCAM), is part of the global effort to help lower or remove accessibility barriers from the Web. NCAM is a research and development facility dedicated to the issues of media technology for people with disabilities in their homes, workplaces, schools and communities. NCAM is the latest media access initiative of WGBH, Boston's public broadcaster, which founded The Caption Center in 1972 and Descriptive Video Service in 1990. With a background in making the content of media accessible, NCAM's contribution has focused largely, but not exclusively, on this aspect of Web access.

This paper will describe NCAM's efforts to make Web-based multimedia more accessible to users who are deaf, hard of hearing, blind or visually impaired. In order to make multimedia more accessible, NCAM has developed techniques which apply broadcast-based accessibility technologies-- closed captions and audio descriptions-- to the Web. As of this writing, NCAM has experimented with at least four methods, using Apple's QuickTime (TM) software, Microsoft's Synchronized Accessible Media Interchange (SAMI) format, the World Wide Web Consortium's (W3C) Synchronized Multimedia Integration Language (SMIL), and WGBH's MAGpie authoring software.


QuickTime and Captioning

Apple's QuickTime 3.0 and MoviePlayer, which comes with QuickTime, allow captions and descriptions to be added to a movie using either a Macintosh or PC. Previous versions of QuickTime software may also be used, but only on the Macintosh platform. Whatever version of QuickTime is used for creation, though, the end result may be played back on either a Macintosh or PC.

A QuickTime movie is made up of separate video and audio tracks. At least one multimedia player (MoviePlayer version 2.1 or higher) allows the user to toggle the tracks on and off. Because they are discrete, a movie may have multiple audio and video tracks, any number of which may be selected by the user. A user can, for example, select the appropriate language track at the time of playback.

In addition to video and audio tracks, multiple text tracks may be included with the clip. A text track becomes, for access purposes, a caption track, but can also be used to provide foreign-language subtitles or even as a search engine indexed by keywords. If the user views the movie clip directly from the Web site using streaming software, the caption track is open-- that is, it can't be turned off. However, if the clip is downloaded and played locally using QuickTime software, the caption track may be toggled on or off, thus simulating closed captions. (Note: QuickTime 3.0 allows this toggling on either the Macintosh or PC; previous versions of QuickTime allow toggling only on the Macintosh.) If the clip is downloaded and played using any other multimedia player, the captions remain open.
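
To make the idea concrete, a caption track can be authored as a plain text file and imported into the movie. The sketch below uses QuickTime's text-descriptor import syntax; the descriptor values, timings and caption text are hypothetical and are not taken from NCAM's materials-- it is only meant to show the shape of such a file.

    {QTtext}{font:Geneva}{plain}{size:12}
    {textColor:65535,65535,65535}{backColor:0,0,0}
    {width:320}{justify:center}{timeScale:600}{timeStamps:absolute}
    [00:00:00.00]
    Welcome to the demonstration clip.
    [00:00:03.00]
    The caption text changes at each time stamp.
    [00:00:06.50]

Each bracketed time stamp marks when the following line of text appears; MoviePlayer turns a file like this into a text track that sits alongside the video and audio tracks.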

A captioned movie clip, therefore, contains the normal video and audio tracks plus the additional text track. Unlike broadcast captions, which obscure a portion of the visible picture, captioned movie clips display the text track in a small window below the video (although QuickTime 3.0 allows the window to be positioned virtually anywhere). In its experiments, NCAM was able to fit approximately 19 rows of text below a movie clip before running out of space on the computer monitor. However, displaying more than three rows of text at once may prove impractical as the viewer may have difficulty reading the captions and keeping up with the video.

Sample captioned QuickTime movie clips and step-by-step details of the captioning process may be found at NCAM's Web site.


QuickTime and Audio Descriptions

Not only is it possible to add text tracks to a QuickTime movie clip, it is also possible to add extra audio tracks-- specifically, an audio description track, which increases a movie clip's accessibility for people who are blind or visually impaired. Audio descriptions of QuickTime clips are similar to those found on certain television programs or home videos. Brief narration describing key visual elements is inserted into the pauses in the dialog. This narration makes it easier for blind or visually impaired users to follow the action of a movie clip. The narration track is recorded separately and, using QuickTime's MoviePlayer 2.1 or greater, pasted into the movie. (QuickTime 3.0's MoviePlayer will add sound using either a PC or Macintosh; earlier versions of MoviePlayer will add sound using a Macintosh only.) As with captioned QuickTime movies, the user may toggle the audio description track on and off, depending on the movie playback device being used. To view several different examples of described movie clips, visit NCAM's Web site, where instructions on creating described movie clips can be found as well.


Microsoft SAMI

While QuickTime captioning and description methods require authors to encode accessibility features into the multimedia file itself, research is underway to simplify this process. In the fall of 1998, Microsoft(R) released a new accessibility authoring format and associated tools called the Synchronized Accessible Media Interchange (SAMI) format. SAMI synchronizes the primary media (video, for example) with externally stored and referenced caption or audio description content. Because SAMI is based on HTML, it can be adopted easily by those already familiar with Web-page authoring. This also allows developers to easily add or point to captioning content for Web-based or offline multimedia, such as CD-ROM. SAMI files are text files, so they can be read by any operating system. Caption or description files can be stored and transmitted from the same location as the primary media or can be played in sync with media originating anywhere on the Web, as long as the time codes and references are properly matched. The SAMI file format specification is available to the public as an open standard (no licensing fees).

A SAMI captioning or description file contains timecode information which corresponds to elapsed time in a multimedia source file, such as audio, video or animation. The source file can be played by Microsoft's Media Player, which synchronizes it with a SAMI file to render the captions or descriptions at the appropriate time. The user can toggle on or off either the captions or descriptions. Users also have great flexibility in adjusting the appearance and presentation of captions to suit their needs and preferences. SAMI supports captioning in multiple languages, and is also well suited for synchronized text highlighting. For more information, including sample SAMI multimedia clips, visit the Microsoft Accessibility site.
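
As an illustration of what such a file looks like, here is a minimal sketch of a SAMI caption file; the class name, timings and caption text are hypothetical, and a real file would follow the full SAMI specification.

    <SAMI>
    <HEAD>
      <TITLE>Sample caption file</TITLE>
      <STYLE TYPE="text/css"><!--
        P { font-family: Arial; font-size: 12pt; color: white; background-color: black; }
        .ENUSCC { Name: "English captions"; lang: en-US; SAMIType: CC; }
      --></STYLE>
    </HEAD>
    <BODY>
      <SYNC Start=0>
        <P Class=ENUSCC>&nbsp;
      <SYNC Start=1500>
        <P Class=ENUSCC>First caption, displayed 1.5 seconds into the clip.
      <SYNC Start=4000>
        <P Class=ENUSCC>Second caption.
    </BODY>
    </SAMI>

Because the captions live in this separate, HTML-like text file, the player can keep them in sync with an audio or video source stored anywhere on the Web, as long as the Start values match the elapsed time in that source.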


The W3C's Synchronized Multimedia Integration Language (SMIL)

To ease the authoring process of TV-like multimedia presentations on the Web, the W3C has designed the Synchronized Multimedia Integration Language (SMIL). Released in June of 1998, SMIL allows for the creation of time-based, streaming multimedia presentations in which separate components-- video, audio, text captions or audio descriptions-- are played in synchronization (a minimal SMIL sketch appears at the end of this section). To support the authoring of caption and description tracks, NCAM is developing MAGpie, a captioning and audio-description authoring tool. WGBH's experience in captioning thousands of clips for Microsoft Encarta, as well as providing descriptive narration for DVDs, has helped inform the design of the tool. MAGpie will be available from the NCAM Web site.
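
The following is a minimal SMIL sketch, assuming hypothetical file and region names: it plays a video clip in parallel with a caption text stream, and the SMIL 1.0 test attribute system-captions lets a player render the captions only when the user has asked for them.

    <smil>
      <head>
        <layout>
          <root-layout width="320" height="290"/>
          <region id="videoregion" left="0" top="0" width="320" height="240"/>
          <region id="captionregion" left="0" top="240" width="320" height="50"/>
        </layout>
      </head>
      <body>
        <par>
          <video src="clip.mov" region="videoregion"/>
          <textstream src="captions.rt" region="captionregion" system-captions="on"/>
        </par>
      </body>
    </smil>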


Benefits of Accessible Movie Clips

Deaf and hard-of-hearing Web users are the immediate and obvious beneficiaries of captioned movie clips. However, the benefits extend beyond this audience. Those using computers which lack sound capability, for example, can view captioned clips and follow the soundtrack visually rather than aurally. Also, as many educators have already discovered, captions used in conjunction with both audio and video can be a valuable tool for improving reading skills of children and adults.

A captioned movie's text track can also be used as a reference tool: some movie players have a "search" feature which allows the user to scan the text track for a specific keyword or phrase, making it easy to locate a specific spot in the movie clip. Depending on the software, this search function works even when a text track is hidden.

Another useful feature of a captioned movie clip is the transcript which is generated as part of the captioning process. Displaying a link to the movie's transcript allows the user to read the text before deciding if it is worth the time to download and view the movie. At a minimum, transcripts may be used by those who do not have any video-playback capability, as a partial substitute for the clip itself. For maximum accessibility, transcripts should always be used in conjunction with audio-only clips.

Like captions, the benefits of audio descriptions reach beyond the primary audience of blind or visually impaired users. Preliminary research has shown that described movies or television programs can help reinforce concepts or vocabulary in classroom situations. The same can be true for Web-based multimedia. Even more importantly, a Web-based movie clip is not limited to being played only in real-time. That is, the clip may be stopped, started and randomly accessed at will, or different audio and/or video tracks may be paused while other tracks continue to play. For example, during a clip that deals with a complex math equation, the video may be paused while the audio-description track delivers an in-depth explanation of the equation displayed on the screen. When applied to science or math multimedia, this technique allows for greater understanding of concepts that might otherwise go by the viewer too quickly.

Adding captions or descriptions to Web-based multimedia has one further potential benefit-- preservation of bandwidth. As more and more people log on to the Web, and as content providers utilize byte-intensive multimedia, access for all users will become slower and slower. As accessibility technologies are perfected, however, users will be able to request and download specific media components. That is, a blind user will be able to ignore the video portion of a movie clip but retrieve the program audio and descriptions only, thus avoiding the transfer of large amounts of unneeded data. Likewise, a deaf user may only want to download the video and caption portions of a clip, ignoring all audio.

For more information on multimedia access technology, visit the NCAM Web site.


Increasing the Accessibility of the Web through style sheets, scripts, and "plug-ins"

Wendy Chisholm
chisholm@trace.wisc.edu

Mark Novak
novak@trace.wisc.edu

Trace R&D Center
5901 Research Park Blvd.
Madison, WI 53719

Abstract

The W3C WAI Page Authoring Guidelines (Vanderheiden, et al, 1998a) contain nineteen general concepts that Web page authors should follow to make their pages more accessible and usable, not only for people with disabilities but also for newer page-viewing technologies (mobile and voice), for electronic agents such as indexing robots, and so on. In an accompanying document, Techniques for "W3C WAI Page Authoring" (Vanderheiden, et al, 1998b), each of the Page Authoring Guidelines is further explained, with one or more techniques that may be used to satisfy the guideline.

In this paper/presentation, we will talk about and demonstrate how scripts and style sheets can be implemented today, and still work on systems that do not support scripts and style sheets ("Transform gracefully"). We also talk about and demonstrate how the data in a table can be presented and navigated both via scripting and by an accompanying application ("Context and navigation").


Introduction

The W3C WAI Page Authoring Guidelines are organized around three basic concepts for creating accessible Web sites:

  • Make sure pages transform gracefully across users, techniques, and situations.
  • Provide context and orientation information for complex pages or elements.
  • Maximize usability by following good design practices.

The document stresses that "Accessibility does not mean minimal page design, it means thoughtful page design (Vanderheiden, et al, 1998a)." Therefore, newer technologies such as scripts and style sheets should not be avoided, but designed and used with care. In this paper/presentation, we will show how scripts and style sheets can be used to increase the accessibility of a page. We will also show how marking up a table according to the HTML 4.0 specification will allow other programs, and eventually browsers, to restructure the data in a table so that a user can understand and navigate the table via speech, the keyboard, etc.

Note. We use many acronyms in this paper. Fear not, they are defined at the end.


Ensuring that scripts transform gracefully

In the WAI Page Authoring Guidelines, under the first category "A. Make sure pages transform gracefully across users, techniques, and situations," the ninth guideline states:

A.9 Ensure that pages using newer W3C features (technologies) will transform gracefully into an accessible form if the feature is not supported or is turned off.

Some more recent features that are not completely backwards compatible include frames, scripts, style sheets, and applets. Each release of HTML has included new language features. For example, HTML 4.0 added the ability to attach style sheets to a page and to embed scripts and applets into a page.

Older browsers ignore new features, and some users configure their browsers not to make use of new features. These users often see nothing more than a blank page or an unusable page when new features do not transform gracefully.
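
As a sketch of one way to handle this for scripts (using the HTML 4.0 NOSCRIPT element, which is not discussed in the paper itself), equivalent static content can be offered to browsers that do not run the script; the menu function and link targets below are hypothetical.

    <SCRIPT type="text/javascript">
      // Script-capable browsers get a link that opens a dynamic site menu.
      // showMenu() is a hypothetical function defined elsewhere on the page.
      document.write('<a href="javascript:showMenu()">Open the site menu</a>');
    </SCRIPT>
    <NOSCRIPT>
      <!-- Browsers with scripting unsupported or turned off still get working navigation. -->
      <ul>
        <li><a href="products.html">Products</a></li>
        <li><a href="support.html">Support</a></li>
      </ul>
    </NOSCRIPT>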

Under the second category, "B. Provide context and orientation information for complex pages or elements," the third guideline states:

B.3 Ensure that tables (not used for layout) have necessary markup to be properly restructured or presented by accessible browsers and other user agents. Many user agents restructure tables to present them. Without appropriate markup, the tables will not make sense when restructured. Tables also present special problems to users of screen readers. These guidelines benefit users who are accessing the table through auditory means (e.g., an Automobile PC which operates by speech input and output) or viewing only a portion of the page at a time (e.g., users with blindness or low vision using speech or a braille display, or other users of devices with small displays, etc.).

Techniques (a combined example follows this list):

  • Provide summaries for tables (via the "summary" attribute on TABLE). [Priority 3]
  • Identify headers for rows and columns (TD and TH). [Priority 2]
  • Where tables have structural divisions beyond those implicit in the rows and columns, use appropriate markup to identify those divisions (THEAD, TFOOT, TBODY, COLGROUP, the "axis" and "scope" attributes, etc.). [Priority 2]
  • Provide abbreviations for header labels (via the "abbr" attribute on TH). [Priority 3]
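
The sketch below shows several of these techniques together-- "summary" on TABLE, TH header cells with "scope" and "abbr", and THEAD/TBODY structural divisions; the table contents are hypothetical.

    <TABLE summary="For each month, the number of talking book loans by format.">
      <CAPTION>Talking book loans per month</CAPTION>
      <THEAD>
        <TR>
          <TH scope="col" abbr="Month">Month</TH>
          <TH scope="col" abbr="Cassette">Cassette loans</TH>
          <TH scope="col" abbr="DAISY">DAISY loans</TH>
        </TR>
      </THEAD>
      <TBODY>
        <TR>
          <TH scope="row">January</TH>
          <TD>1200</TD>
          <TD>300</TD>
        </TR>
        <TR>
          <TH scope="row">February</TH>
          <TD>1150</TD>
          <TD>340</TD>
        </TR>
      </TBODY>
    </TABLE>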

The following current interim technique is discussed in guideline A.12 ("Use interim accessibility solutions so that assistive technologies and older browsers will operate correctly."): until user agents and screen readers are able to handle text presented side-by-side, all tables that lay out text in parallel, word-wrapped columns require a linear text alternative (on the current page or some other). [Priority 2]


Scripts to navigate tables

We looked at solving "the table problem" using client-side JavaScript. Our approach was to:

  • experiment with JavaScript to determine what information was available within the TABLE element of HTML 4, the DOM, and the event model of Microsoft Internet Explorer 4.0;
  • if possible, collect all this information into a structure or format that maintained the row/column header integrity;
  • and, allow navigation within this structure while providing individual cell data information to the user.

To associate row/column header information with each data cell, we used an approach recommended in the HTML 4.0 specification (Raggett, et al, 1998). Essentially, the steps in pseudocode are as follows (a JavaScript sketch appears after this list):

  • from a given data cell, iterate left until a row header is found
  • from a given data cell, iterate up until a column header is found
  • in both cases, stop if the edge of the table is reached -or- if another data cell is encountered after a header cell.
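
A minimal sketch of that search, written against the generic HTML table DOM rather than the Internet Explorer 4.0 object model used in our experiments, and assuming a simple grid with no ROWSPAN or COLSPAN and header cells marked up as TH:

    // Given a TD element, collect the nearest row and column header cells.
    function headersFor(cell) {
      var row = cell.parentNode;            // the TR containing the data cell
      var table = row;
      while (table.tagName !== 'TABLE') {   // climb past TBODY/THEAD to the TABLE
        table = table.parentNode;
      }

      var rowHeaders = [];
      var colHeaders = [];

      // From the given data cell, iterate left toward the edge of the table;
      // collect TH cells and stop if a data cell is met after a header cell.
      for (var c = cell.cellIndex - 1; c >= 0; c--) {
        var left = row.cells[c];
        if (left.tagName === 'TH') {
          rowHeaders.push(left);
        } else if (rowHeaders.length > 0) {
          break;
        }
      }

      // Same scan, but upward through the data cell's column.
      for (var r = row.rowIndex - 1; r >= 0; r--) {
        var above = table.rows[r].cells[cell.cellIndex];
        if (above && above.tagName === 'TH') {
          colHeaders.push(above);
        } else if (colHeaders.length > 0) {
          break;
        }
      }

      return { rowHeaders: rowHeaders, columnHeaders: colHeaders };
    }

With the headers in hand, a script can announce or display "row header / column header / cell value" for whatever cell currently has focus, which is the basis of the navigation described next.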

Note that even though additional table information is sometimes available under the HTML 4.0 standard (e.g., the "headers" and "axis" attributes), this information was collected but not used by our experimental scripts.


Use of a separate plug-in or executable program to navigate tables

Our next approach was to use the extended object model exposed by IE 4, and access a Web page using a combination of C++ and COM via an executable program.

Advantages to this approach:

  • not limited to running within a Web page
  • not limited to running within a particular internet domain (security)
  • access to keyboard and mouse information ahead of the browser if needed
  • access to additional System resources (file system, hardware ports, display, etc.) if needed

Since the introduction of IE 3, Microsoft has supported a component architecture that allows increasing levels of access to and control over its browser from an external application. Using COM, an external application can attach to IE and perform a variety of tasks, such as limiting the Internet addresses a user can access or providing an audio indication when a page has finished loading. Using COM also allows an external application to access the DOM and the event model of IE, and therefore to access and possibly control all the elements of a Web page.

DAISY work at TPB - establishing a new talking book model

Kjell Hansson
IT-officer
Swedish Library of Talking Books and Braille
SE-122 88 Enskede / Sweden
Phone: +46-8-399350 Fax: +46-8-6599467
E-mail: kjell.hansson@tpb.se / hansson@ibm.net

DAISY getting real in Sweden
Lars Sönnebo, IT-advisor
Swedish Library of Talking Books and Braille
S-122 88 Enskede / Sweden
www.tpb.se

 

The report to the Government

The commission

Investigate the existing model, asking libraries and users (readers) to find out what is wrong and what works well.

Make a survey of current and future technology that may be of use for talking book purposes - for production, distribution and reading.

Describe possible methods and set-ups for production, distribution and book reading.

Define new goals and suggest how to go forward.

Results

Report delivered to the Swedish Ministry of Education in November 1998. It was also distributed to libraries and other involved institutions.

New projects have been started, aiming to test and then implement the new technology to make it work on a full scale.


The Swedish model for talking books

What is produced at TPB?

TPB is a governmental institution with a central responsibility for producing literature in alternative formats for people with print impairments. The main areas of production are talking books, Braille books and e-text books. The production of talking books on cassette has up to now been the bulk of TPB's work.

TPB's goal is to produce 25% of the total yearly output of Swedish literature in alternative formats. In practical figures, this means 3000 to 3500 new titles per year. The main part of this production is talking books.

Who borrows from TPB?

Material produced by TPB is lent, not sold. The Swedish legislation for talking book production allows TPB to make talking books or alternative format versions of almost any published printed material, provided that there are no commercial elements involved.

Individuals do not borrow their books directly from TPB. Instead, all lending goes via local libraries, who in turn borrow from TPB. An exception to this is print impaired university students, who receive their textbook material directly from TPB.

No in-house production

TPB uses commercial companies for the actual recording work. The work is supervised by TPB's staff to ensure the best possible quality. The contracted producers use professional narrators and proof-readers.

The local libraries also have stocks of frequently read talking book titles, which means they only have to borrow from TPB when they haven't got what a reader is asking for.

Current collection

TPB has over 55,000 talking book titles in its library - a total of about half a million playing hours. The collection grows by over 3000 titles per year.


TPB and DAISY

Invented by TPB

The original concept for DAISY was invented and defined by TPB as an internal project started in 1993. The first tools for production and reading of DAISY books were also developed by TPB during 1994 and 1995.

In 1996, the international DAISY Consortium was formed. Since then, the development of the concept and tools has been carried out by the Consortium.

TPB and DAISY

TPB has an active role in the DAISY Consortium, and will be one of the first members to make use of the new talking book concept on a large scale.

TPB works to maintain well-established expertise in the efficient use of DAISY technology. This experience can then be transferred to TPB's subcontracted producers. TPB has produced over 200 DAISY titles, using the first version of the DAISY talking book format (called 'DAISY 1'). During the autumn of 1998, a shift in technology and standards will take place with the arrival of the next generation of the talking book format - DAISY 2.

DAISY 2

The first versions of the new tools for producing titles in DAISY 2 format were released in October 1998. TPB will take part in testing and evaluation work together with other DAISY Consortium members, and will also start experimental production in DAISY 2 during the last months of 1998.

An experimental studio will be set up locally at TPB, which will be used for DAISY 2 production on a somewhat larger scale. This set-up will be used to optimise the production process and also to train TPB's producers when the time is ripe to contract them for more regular DAISY 2 production.


The mission and the vision

The new talking book production and distribution system outlined in TPB's report to the Government will offer a number of significant improvements over the old system, which is based on talking books stored on compact cassette and distributed by post to the local libraries. Some of the main benefits are listed below.

Digital standardised format

DAISY 2 is a flexible format for talking books in a broad sense - the books can contain narrated human speech (audio) as well as e-text, images etc. in any mixture. Due to copyright restrictions however, TPB will mainly make use of the audio capabilities of DAISY 2, until the day the legislation for talking books has been changed.

Structured audio

The underlying concept of DAISY is structure, which means DAISY books containing audio are "structured audio books". The concept of structured audio offers DAISY book readers a host of new possibilities when it comes to efficient navigation and reading control. The new possibilities will be of most evident benefit to "advanced" readers such as students, but will also offer a lot of help to readers of almost any kind of literature.

Better sound quality

Digital audio technology can be used to offer talking book readers much higher audio quality than was possible using compact cassettes. The quality of digital audio does not degrade as it is copied, which means it is possible to listen to a DAISY book with the same fidelity as in the recording studio.

Extended services to readers

As said above, the introduction of DAISY for talking books will in itself mean a lot to talking book readers. The report also points out a number of new and improved services that can be provided by local libraries to the end users, such as shorter delivery times, customised distribution formats to fit different kinds of playback equipment, no waiting time for borrowing popular titles, and so on.

Most of these new services will depend on the use of wideband digital communication networks for talking book distribution. This new form of distribution may eventually completely replace sending DAISY books by post on a data carrier such as CD-ROM; instead, the talking book media can be produced by the local library.

Preparing for the future - e-text, talking newspapers, hybrid books

DAISY technology may be used for other types of production than just plain talking books. TPB aims to start using DAISY as the format for pure e-text books as well, as soon as agreements have been made with the publishers. TPB will also promote the use of the DAISY format for "talking newspapers", i.e. newspapers where the text has been narrated.

TPB regards the concept of the "hybrid book" as a very attractive kind of publication for the future. A hybrid book will contain information in two parallel formats: e-text and audio. Text and audio will be fully synchronised, which will make it a powerful form of media for, e.g., dyslexic readers.

MS Phone: Accessible By Design

David Bolnick
Microsoft Corporation
Redmond, WA

The Microsoft Cordless Phone System is much more than an ordinary phone or answering machine. It is the first 900MHz cordless phone that links to the PC to help the user manage calls and messages with greater control and flexibility. By combining the power of the PC and the Microsoft Call Manager software, the Microsoft Cordless Phone System helps users manage their calls.

Once the software is installed and the user has entered some names in the address book, they can start using the phone system like any other phone, in any room in their house. The Microsoft Call Manager software running on the PC gives the user more convenience and flexibility than other phones because it includes Voice Command dialing, enhanced Caller ID features, and a sophisticated voice mail system. The user places calls by speaking into their phone and saying a simple phrase like "Call Mom" or "Call the Office." The Microsoft phone recognizes up to 40 different names and numbers. In addition, the user can navigate through their messages using voice commands. The phone has Caller ID. As a result the user can let their phone or PC announce the caller's name before they pick up the phone. The user can either answer the phone, or let the call go to voice mail. The Microsoft Phone also supports Private Greetings. The user can create private greetings for different callers, like "Hi Tom, we're not home right now. Please leave a message." Caller Priorities (assign priorities to incoming calls) lets high priority callers ring through while the Do Not Disturb feature is turned on. The system can also send calls right to voicemail and block unwanted calls.

With the Microsoft Phone, users can create multiple mailboxes for personal or business calls and track calls automatically. The Call Manager application records name, phone number, date, time and duration of the last 1,000 calls made by the user. Phone users can retrieve messages stored on their PC from anywhere - the cordless handset, the PC, or from a remote phone. The 40 channel, 900MHz cordless phone gives the user greater range and clarity than a standard cordless phone.

Microsoft Phone includes many features that benefit individuals with disabilities. From TTY compatibility to the shape and positioning of buttons for ease of use by individuals who are blind, Microsoft Phone is the latest example of a Microsoft product that addresses accessibility by design. In addition, the Microsoft Phone was the first Microsoft product to benefit from a review by the company's Access Review Boards. These boards cover issues for users with a range of disabilities. Microsoft's Hardware Group incorporated the boards' feedback, which shaped the final product and made it accessible.

Microsoft Phone has the following accessibility features for users who are hearing- or visually-impaired, or have some disability that makes it difficult to pick up the handset:

  • The printed manuals are available on the CD-ROM in a format that can be read by a screen reader.
  • A TTY window is available in the Microsoft Call Manager software.
  • A full audio description of the buttons on the handset is available by pressing the Help button (0) on the handset, and then pressing 1.
  • Voice commands help to reduce keypad use.
  • Blind and visually-impaired users can use the voice command "Report System Status" at the handset to hear what features of the answering system are turned on or off.
  • The earpiece on the handset is hearing-aid-compatible.
  • The buttons on the handset provide both tactile and audio feedback.
  • Variously shaped buttons on the handset provide easier orientation for blind and visually impaired users.
  • Indicator lights on the handset provide visual feedback when the phone is ringing, in use, or on hold, when you have messages, and when Do Not Disturb is turned on.
  • The enclosed hook-and-loop fastener strip can be attached to the bottom of the charging cradle to enable a user to dial the handset without having to remove it from the cradle. Attach one piece of the fastener along the bottom front lip of the cradle, and the other piece to a hard surface where you want to locate the phone.
  • The loudspeaker enables a user to take notes without holding the handset.

AMERICAN FOUNDATION FOR THE BLIND National Technology Program

11 Penn Plaza, Suite 300

New York, NY 10001

Phone: (212) 502-7642 FAX: (212) 502-7773

e-mail: techctr@afb.net

A Survey of Windows Screen Reader Users:

Recent Improvements in Accessibility

C.L. EARL, J.D. LEVENTHAL

Reprinted with permission from the Journal of Visual Impairment & Blindness, Vol. 93, No. 3; copyright 1999 by the American Foundation for the Blind, 11 Penn Plaza, Suite 300, New York, NY 10001.

A Survey of Windows Screen Reader Users: Recent Improvements in Accessibility

Crista Earl and Jay Leventhal

The purpose of the survey reported here was to gather information about Windows accessibility from the perspective of people who are visually impaired (who are either blind or have low vision) and use screen readers. A previous survey (Leventhal & Earl, 1997) revealed that even experienced users had difficulty accessing Windows. The current survey shows a much greater comfort level, though some areas, such as formal training and access to databases, continue to be a problem.

The survey

The survey was conducted from August 22 to October 15, 1998. Over 400 people were contacted by telephone or E-mail, 200 of whom responded.

Questions

Respondents were asked what hardware they use; whether they use a braille display or screen magnification in addition to synthetic speech; what Windows or DOS applications they use; what methods they used to learn Windows; why they began using Windows; and how comfortable they feel using Windows. Respondents were also asked if they were able to perform successfully each of a list of tasks in the Windows environment and to comment on the performance of those tasks. They were then asked to list any additional tasks or Windows applications they would like to be using but were not.

Participants

The survey participants were drawn from among the 526 members of the American Foundation for the Blind's Careers and Technology Information Bank (CTIB) who use Windows screen readers. CTIB is a network of visually impaired people who have agreed to consult with other visually impaired people about how they perform their jobs and the technology they use. Of the 200 people who responded, 83% have a college degree, 40% have a graduate degree, 93% are currently employed, and 62% have no useful vision.

Among the respondents, 22% work in the assistive technology field, 22% are computer programmers or network administrators, 5% are attorneys or judges, 7% are rehabilitation counselors or teachers, 4% are secretaries or receptionists, 5% are college professors or directors of university services, 6% are administrators in rehabilitation or education, and 4% are scientists. Clearly, the survey participants are highly successful visually impaired users who might be expected to use Windows applications and Windows screen readers with a higher level of success than would a random sample of visually impaired computer users.

Responses

The respondents used the following Windows-based synthetic speech programs: JAWS (Job Access with Speech) for Windows from Henter-Joyce: 68%; Window-Eyes from GW Micro: 35%; WinVision from Artic Technologies: 15%; ASAW (Automatic Screen Access for Windows) from MicroTalk Software: 6%; Window Bridge from Syntha-Voice Computers: 3%; ScreenPower for Windows from TeleSensory Corp.: 3%; and outSPOKEN for Windows from ALVA Access Group: 3%. (Note: The survey results may not total 100% because of rounding and because many respondents used more than one program.) Twenty-four percent of the respondents reported using more than one Windows-based screen reader, 27% reported using a braille display in addition to synthetic speech, and 11% used screen magnification along with a screen reader.

Almost all of the respondents reported using word processors--Microsoft Word and Corel WordPerfect; E-mail packages--Qualcomm's Eudora and Microsoft Outlook; and World Wide Web browsers--Microsoft's Internet Explorer and Netscape's Navigator. In contrast, only 37% reported using a Windows spreadsheet and only 17% were using a Windows database.

Tasks performed in Windows

Respondents were presented with a list of 22 Windows tasks and asked if each was something that they do easily, do with difficulty, cannot do, or had never attempted. Because 89% submitted their responses via E-mail, it was not surprising that most respondents replied that they could read and reply to E-mail messages easily. What was surprising was that although 92% of the respondents had tried to use Windows Help, 55% had difficulty with or could not use this feature.

Similarly, a high percentage of respondents (56%) had difficulty filling out forms on the Internet, an essential skill for users for whom the Internet is the only means of access to otherwise printed materials. More than three-quarters of the respondents had never played games or joined a chat group, and 40% had never used a Windows database or spreadsheet.

In comparison with the respondents to the previous survey (Leventhal & Earl, 1997), the current users are doing more with Windows and using a wider variety of Windows applications at a more sophisticated level. In the previous survey, only the following tasks could each be accomplished by more than half of the respondents: navigating from window to window, formatting a document in a word processor, running a spell checker, installing new applications, and reading and replying to E-mail messages. In the current survey, of the 22 tasks listed, only the following were attempted by fewer than half the respondents: looking up items in an encyclopedia, playing games, participating in a chat group, scheduling and checking appointments, using mainstream optical character recognition software, entering and reading data in a spreadsheet or database, and doing advanced formatting (such as preparing complex tables) in a word processor. In addition, tasks not generally attempted by beginners (managing files, changing colors or sounds, and installing software) were considered easy by a large number of the respondents (66%, 48%, and 38%, respectively).

Although most participants still used DOS for some applications or for file management--75% versus 95% in the previous survey--most used Windows regularly and successfully. Two typical responses were the following: "I was led to believe that using Windows would be very difficult but have found the transition quite smooth" and "I really feel comfortable in Windows now and never thought I would be."

In spite of the positive comments about Windows in general, many users mentioned specific applications or categories of applications they wanted to use but could not access. Most notably, databases were among applications considered inaccessible.

Training

To compound the challenge that accompanies learning any new system, a large number of respondents had never had formal training. In the previous survey, 36% of the respondents had had some formal training. Among the current respondents, the percentage is higher (48%) but still low. Further, not all respondents gave high marks to the formal training they did receive. General Windows training classes not specifically designed for visually impaired people were rated lower by the group than were books about Windows (3.5 compared to 5.3).

The following are some typical responses about training: "There needs to be a greater awareness of professionals in the field of rehabilitation for the blind that computer skills are no longer a luxury... when it comes to employment" and "Training remains a major problem, as instructors do not seem to have the necessary knowledge or teaching skills to impart information adequately." A third respondent wrote: "One of the biggest problems and concerns that I have with sighted instructors of Windows screen readers and various Windows applications is that many times when the student gets stuck [the instructors] just pick up the mouse and fish them out of the problem. I think a lot of them need to learn how to do it blindfolded and then be very comfortable in using keystrokes."

What it all means

One of the clearest findings of this survey is that the longer respondents had been using Windows the higher their comfort level. On a scale of 0-10, with 0 being "totally clueless" and 10 being "a real expert," the average comfort level was 6.4. People who began using Windows in 1998 had an average comfort level of 5.0, whereas the average rating for those who began using Windows before 1995 was 8.1.

It was interesting to note that 84% of the respondents agreed that reading and replying to E-mail messages was easy, but the group showed less agreement about the ease of word processing programs. This discrepancy might be attributable to the respondents' lack of experience with Windows word processors, since 36% of them still used a DOS word processor. The difficulty and range of word processing tasks compared to those of E-mail programs is probably also a factor.

The authors were surprised by the respondents' frustration about performing specific tasks, especially using the Help feature, running a spell checker, and installing applications. At the same time, much of the frustration expressed in the earlier survey is nearly gone. Respondents still mentioned the difficulties involved in learning Windows and its applications, but they were much more positive about using Windows in general. Few respondents mentioned problems that could be interpreted as errors in their screen readers' off-screen model or other complete screen-reader failures. Only a few mentioned unlabeled graphics as problems. A problem that was mentioned often (at least 96 times) was the amount of time it took to select an accessible application and learn to use it. The following are some representative comments: "My main frustration is not having the teaching materials [written specifically for visually impaired users] that I need to learn" and "Be prepared to spend a lot of time learning and have a lot of patience because it is easy to get lost and have to start over."

Recommendations

The previous survey of Windows screen reader users concluded that more training for visually impaired users was greatly needed. This conclusion is still valid, in light of the fact that over half the respondents have not received formal training.

As one respondent wrote:

"I think the transition from DOS to Windows was the most shattering experience I've had to date as a person with a disability. My performance dropped to half for over six months and I truly did not feel I could compete... I would have gladly purchased some training intervention just to straighten this out, but it was not available." In addition, respondents made it clear that improvements in training are essential.

Many survey participants recommended that users take what they learn from formalized approaches and go on to explore on their own. One respondent offered the following techniques: "The first thing I always do when I am trying to learn a new application is to look at the choices on the menu bar and pull-down menus and take note of any available shortcuts... I experiment with the tab and arrow keys to see what options I can access." Another recommendation was for users to get a solid foundation in Windows and screen reader basics. As one respondent wrote: "I have found that it is easier to learn new Windows applications if you have a good basic knowledge of Windows concepts, such as navigating dialog boxes."

Many respondents recommended Internet resources, such as listservs and newsgroups, as sources for learning about Windows. Some respondents especially suggested listservs devoted to a particular speech package.

Manufacturers of screen readers have gone a long way to improve access. To benefit fully from these improvements, users need to stay informed about updates to their screen readers, learn to use new features, and inform their screen reader manufacturers about bugs in their products.

One respondent eloquently summed up the situation: "Windows is sure easier to use than it was just a couple of years ago. But that wouldn't be possible without the dedicated efforts of our adaptive technology providers. I'm sure there will be more bumps in the road, but if we all stick together and support our respective software and hardware manufacturers we will all continue to survive and, yes, even thrive."

Jay D. Leventhal, senior resource specialist, and Crista L. Earl, resource specialist, Technical Evaluation Services, National Technology Program, American Foundation for the Blind, 11 Penn Plaza, Suite 300, New York, NY 10001; <techctr@afb.net>.
