J Vis Impair Blind. Author manuscript; available in PMC 2010 Jan 5.
Published in final edited form as:
J Vis Impair Blind. 2005; 99(4): 219–232.
PMCID: PMC2801896
NIHMSID: NIHMS159056
PMID: 20054426

Personal Guidance System for People with Visual Impairment: A Comparison of Spatial Displays for Route Guidance

Jack M. Loomis, Ph.D., professor, James R. Marston, Ph.D., postdoctoral researcher, Reginald G. Golledge, Ph.D., professor, and Roberta L. Klatzky, Ph.D., professor

Abstract

This article reports on a study of route guidance using a navigation system that receives location information from a Global Positioning System receiver. Fifteen visually impaired participants traveled along 50-meter (about 164-foot) paths in each of five conditions that were defined by the type of display interface used. One of the virtual displays—virtual speech—led to the shortest travel times and the highest subjective ratings, despite concerns about the use of headphones.

In the past decade, a number of research projects around the world have demonstrated the feasibility of using satellite signals from the Global Positioning System (GPS) to locate blind travelers in outdoor environments, to guide them along routes to destinations, and to provide them with information about nearby points of interest (Golledge, Klatzky, Loomis, Speigle, & Tietz, 1998; Helal, Moore, & Ramachandran, 2001; LaPierre, 1998; Loomis, Golledge, Klatzky, Speigle, & Tietz, 1994; Makino, Ishii, & Nakashizuka, 1996; Petrie, Johnson, Strothotte, Raab, Fritz, & Michel, 1996). Indeed, several commercial navigation systems, notably the BrailleNote GPS from Pulse Data and the Trekker from VisuAide, are now available and are being purchased in significant numbers. Most of the devices in these projects, including the commercial ones, use text to convey information to the user, by way of electronic braille displays or synthesized speech.

For many navigation tasks, a GPS-based personal guidance system for people who are blind needs to give only simple textual instructions, such as “go straight for three blocks, turn left, and proceed two blocks” (Gaunet, in press; Gaunet & Briffault, in press). With skills learned from professional orientation and mobility (O&M) instructors, persons who are visually impaired (that is, are blind or have low vision) can readily follow curbs and use traffic noise to maintain the correct heading on a sidewalk. However, given that automobile and aircraft navigation systems invariably supplement textual information with pictorial information that is displayed on a video screen, indicating a desire by drivers and pilots for spatial displays, it is reasonable to assume that blind travelers may also want more than just textual information in certain situations. For example, a traveler may want to locate a specific doorway to a retail or office building, a specific bus or transit stop, or a mailbox or pay phone. In addition, travel is not always on defined and bounded sidewalks that parallel streets. Some navigation may take place in large, open areas, such as parks, large concourses, plazas, and parking lots, and in large and ill-defined areas, such as a university campus. In these situations, some type of display that provides direct spatial information about the egocentric directions and distances of environmental locations via hearing or touch would seem useful as a substitute for or supplement to textual information that is conveyed by synthesized speech or a refreshable braille display. The idea is that the stored locations within the computer's spatial database would appear on demand around the user at their correct locations (directions and distances) or, at least, at their correct directions. A recent survey showed that visually impaired people have considerable interest in spatial displays for assistance with navigation (Golledge, Marston, Loomis, & Klatzky, 2004).

From the beginning, our Personal Guidance System project at the University of California, Santa Barbara (UCSB), has considered using spatial displays to convey information about the environment, rather than conveying the same information through synthesized speech alone. The original idea was to convey the locations of both waypoints and off-route landmarks (such as points of interest) using auditory virtual reality (Loomis, 1985). In this conception, synthesized speech would be conditioned by a virtual acoustic display so as to appear to come from different locations within the auditory space of the user, ideally coinciding with the actual locations of the waypoints and off-route landmarks.

More recently, our group has proposed another type of spatial display, one involving a haptic pointer interface, or HPI (Loomis, Golledge, & Klatzky, 2001). The HPI mimics the functioning of a Talking Signs® receiver, which is used in conjunction with the Talking Signs system of remote infrared signage (Crandall, Brabyn, Bentzen, & Myers, 1999; Hatakeyama et al., 2004; Loughborough, 1979; Marston & Golledge, 2003). In this system, each “sign” consists of an infrared transmitter with a stored “utterance” that identifies the location (such as an entrance to a building or a telephone booth). When the user's handheld receiver is pointed in the direction of one of the transmitters, the user hears the identifying utterance. Precise sensing of the direction to the transmitter is achieved by sweeping the hand left and right and noting the direction of the greatest signal strength.

For the HPI that was used in the current study, the user holds a pointer (a small rectangular stick) to which an electronic compass is attached. Auditory information from the computer is displayed using a shoulder-mounted speaker. In another version of the HPI, which was implemented by the second author, an actual Talking Signs receiver is interfaced to the computer. The electronic compass is mounted on top of the receiver, and the auditory information is displayed using the receiver's speaker. With this version, it is possible to switch seamlessly from localizing real Talking Signs to localizing locations that are stored in the computer's database. With either version of the HPI, the computer senses the orientation of the handheld device and outputs acoustic information (speech or tones) when the orientation is within some tolerance of the direction to the waypoint or location that is stored in the database. The experience of using the HPI is similar to that of using a Talking Signs receiver.
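To make the HPI's pointing logic concrete, the following is a minimal Python sketch of the core computation, assuming that compass and waypoint directions are given in degrees; the function names are ours, not the system's, and the 10-degree default anticipates the tolerance used later in the study.

```python
def relative_bearing(pointer_deg, target_deg):
    # Signed difference between the waypoint direction and the pointer
    # direction, normalized to the range [-180, 180) degrees.
    return (target_deg - pointer_deg + 180.0) % 360.0 - 180.0

def hpi_should_sound(pointer_deg, target_deg, tolerance_deg=10.0):
    # Emit the acoustic signal only while the handheld pointer is aimed
    # within the angular tolerance of the stored waypoint direction.
    return abs(relative_bearing(pointer_deg, target_deg)) <= tolerance_deg
```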

Another type of spatial display, proposed by Ertan, Lee, Willets, Tan, and Pentland (1998), is an array of tactile stimulators that is worn on the torso. Many other spatial displays using hearing and touch are possible, including an array of tactile stimulators that is worn around the neck, a variant of the HPI with a tactile stimulator that vibrates when the hand is pointing in the direction of a waypoint, and still another variant of the HPI that signals body orientation, rather than hand orientation, to the computer.

This article reports on an experiment that compared five different spatial displays in a task involving route guidance. As in previous work that compared four different auditory display modes (Loomis, Golledge, & Klatzky, 1998), the performance measures and subjective judgments by the participants in this study were the basis for judging the relative effectiveness of the different interfaces.

Method

Participants

Fifteen people who were legally blind (9 men and 6 women) were recruited from the Santa Barbara area; their average age was 39 (SD = 14.7). Of the 15, 8 were born legally blind, 1 became blind at age 5, and 6 became legally blind when they were teenagers or older. The etiology of their blindness varied: 3 participants had retinopathy of prematurity, 2 had retinitis pigmentosa, 2 had glaucoma, 1 had macular degeneration, 1 had lost muscle control in the eye, and the other 6 had some type of retinal or optic nerve damage. Six participants had no useful vision, 2 could see shapes at arm's length, and the other 7 could identify objects at arm's length. Eleven participants were regular cane users, 2 used guide dogs, and 2 did not use any navigation aid in their regular travel. All but 1 had some type of O&M training.

Test paths

Three different paths were created within the campus database and located in a part of the UCSB campus that was flat, free of obstacles, and covered with concrete. These paths could be used in both directions, giving six paths for testing. Each path was 50 meters (164 feet) long and had six turns, with seven segments of variable length. Each path had three 45-degree turns, two 90-degree turns, and one 135-degree turn, although in different orders. Each participant experienced the five display conditions using five different paths.
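As an illustration of how such a path can be represented, the following sketch builds planar waypoint coordinates from segment lengths and turn angles. The specific segment lengths and the order of turns are hypothetical, since the article reports only their totals.

```python
import math

# Hypothetical segment lengths (meters) summing to 50, and the six turn
# angles used in the study (three 45-degree, two 90-degree, one 135-degree)
# in one possible order. Positive = left turn, negative = right (assumed).
segments = [8, 7, 6, 9, 7, 6, 7]
turns = [45, -90, 45, 135, -45, 90]

def path_waypoints(start=(0.0, 0.0), heading_deg=0.0):
    # Convert segment lengths and turn angles into waypoint coordinates,
    # as a path might be stored in the campus database.
    x, y = start
    pts = [(x, y)]
    h = heading_deg
    for i, length in enumerate(segments):
        x += length * math.cos(math.radians(h))
        y += length * math.sin(math.radians(h))
        pts.append((x, y))
        if i < len(turns):
            h += turns[i]
    return pts
```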

Hardware and software

Although we have developed two lighter-weight versions of the UCSB Personal Guidance System, for this experiment we opted to use a bulkier and heavier version with a GPS receiver that provides more accurate positioning. Admittedly, the bulky system we used would be unacceptable to blind travelers, but lightweight systems with the same accuracy will eventually become commonplace. The version that we used consisted of a Toshiba notebook computer, Trimble 12-channel differential GPS receiver, Honeywell magnetic sensor, peripheral interface card, and stereo headset. The GPS receiver provided the user's location with an absolute accuracy that was generally better than 1 meter when it was receiving corrections from the Omnistar differential correction service (provided by a geostationary satellite). All the equipment was carried in a backpack that was mounted on an aluminum frame (see Figure 1). The receiver and antenna were contained within a dome-type case that was mounted atop a mast that was attached to the frame. The receiver was powered by a lead acid rechargeable battery. The weight of the entire assembly was just 2.3 kilograms (about 5 pounds). The small magnetic sensor was used as a solid-state compass to provide heading information and could be attached to the user's headset as a head-direction sensor (see Figure 1A), worn on the torso as a body-direction sensor (see Figure 1B), or attached to the pointer held in the hand (see Figure 1C). For several of the display interfaces, auditory information was provided via a small amplified loudspeaker that was positioned just in front of the participant's shoulder. The application software consisted of a database of the UCSB campus in DXF format, a Microsoft SAPI 4.0 text-to-speech engine, a DirectX 8 DirectSound 3D virtual auditory display, and a custom menu-driven and console-based user interface.
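The overall sensing-and-display cycle can be pictured with the following sketch; the gps, compass, and display objects and their methods are hypothetical stand-ins for the actual hardware interfaces, not the system's software.

```python
import time

def guidance_loop(gps, compass, display, route):
    # Hypothetical top-level loop: read the differentially corrected GPS
    # position and the compass heading, then hand both to the active
    # display, advancing through the route's waypoints as each is reached.
    waypoints = list(route)
    current = 0
    while current < len(waypoints):
        position = gps.read_position()    # (x, y); absolute error < 1 m
        heading = compass.read_heading()  # degrees; head, torso, or hand
        display.update(position, heading, waypoints[current])
        if display.reached(position, waypoints[current]):
            current += 1                  # activate the next waypoint
        time.sleep(0.1)
```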

Figure 1.

Photographs of the second author wearing the three hardware configurations of the UCSB Personal Guidance System used in the experiment. A. The configuration used with the two virtual displays, with the electronic compass mounted on top of the headphone strap. B. The configuration used with the body-pointing display, with the electronic compass mounted in front near the waist. The sound was delivered by the speaker that was mounted just in front of the shoulder. C. The configuration used with the two HPI displays. The electronic compass was mounted on the small pointer that was held in the hand. The sound was delivered by the speaker that was mounted in front of the shoulder.

Spatial displays

All five spatial displays tested in the experiment presented information in auditory form and made use of a magnetic compass. The following are detailed descriptions of the five interfaces. The implementations were chosen so as to contrast the functional attributes that were of particular interest: speech versus nonspeech information, spatialized versus nonspatialized auditory output, the presence or absence of acoustic distance cues, and monitoring of the user's hand orientation versus body orientation. These different functional attributes necessitated different physical display features. In particular, the auditory information was delivered either by headphones or by a speaker that was worn near the shoulder, and the magnetic compass was mounted on the top of the headphone strap to sense head orientation, near the waist to sense body orientation, or on the pointer that the participant held in one of his or her hands (see Table 1).

Table 1

Features of the five displays and the details of their implementation.

Feature | Virtual speech | Virtual tone | HPI tone | HPI speech | Body pointing
Auditory source | Headphones | Headphones | Shoulder-mounted speaker | Shoulder-mounted speaker | Shoulder-mounted speaker
Compass location | Head | Head | Handheld pointer | Handheld pointer | Torso
Nonspeech acoustic signal | None | Tone (5 per second); swept tone (2.3 per second) | Tone (5 per second) | None | Tone (5 per second)
Acoustic off-course indicator | None | Yes (type of tone) | Yes (tone if pointing is accurate; spoken bearing if off by more than 90 degrees) | Speech (“straight,” “left,” or “right”; spoken bearing if off by more than 90 degrees) | Yes (tone if pointing is accurate; spoken bearing if off by more than 90 degrees)
Acoustic directional spatialization | Yes | Yes | None | None | None
Acoustic distance spatialization | Intensity | Intensity | Intensity | None | Intensity
Haptic pointer | None | None | Handheld | Handheld | Torso
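One way to picture Table 1 is as a set of display configurations. The following encoding is purely illustrative; the field names paraphrase the table's rows and do not reflect the actual system's software.

```python
# Hypothetical encoding of Table 1 as display configurations.
DISPLAYS = {
    "virtual speech": dict(audio="headphones", compass="head",
                           spatialized=True, distance_cue="intensity",
                           nonspeech_signal=None),
    "virtual tone":   dict(audio="headphones", compass="head",
                           spatialized=True, distance_cue="intensity",
                           nonspeech_signal="tone 5/s; swept tone 2.3/s"),
    "HPI tone":       dict(audio="shoulder speaker", compass="hand pointer",
                           spatialized=False, distance_cue="intensity",
                           nonspeech_signal="tone 5/s"),
    "HPI speech":     dict(audio="shoulder speaker", compass="hand pointer",
                           spatialized=False, distance_cue=None,
                           nonspeech_signal=None),
    "body pointing":  dict(audio="shoulder speaker", compass="torso",
                           spatialized=False, distance_cue="intensity",
                           nonspeech_signal="tone 5/s"),
}
```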

During testing in each of the five interface conditions, the participants were guided over one of the six paths. During a trial, display information guided a participant toward the next waypoint. When the participant arrived within 2.1 meters (about 7 feet) of the waypoint, the next waypoint was activated. For all five types of displays, activation of the next waypoint was accompanied by synthesized speech of this form: “Next waypoint X at Y Z,” where X is the distance in feet, Y is the relative bearing in degrees, and Z is the direction in which to go (left, right, or straight).
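A minimal sketch of this waypoint-activation logic follows, under an assumed counterclockwise-positive sign convention and with illustrative helper names; it is not the system's actual code.

```python
import math

ACTIVATION_RADIUS_M = 2.1   # about 7 feet, as described above
M_TO_FT = 3.28084

def maybe_advance(position, waypoints, current, heading_deg, speak):
    # Advance to the next waypoint once the user is within 2.1 m of the
    # current one, then announce "Next waypoint X at Y Z" (X = feet,
    # Y = degrees, Z = turn direction). Positive bearing = left (assumed).
    x, y = position
    wx, wy = waypoints[current]
    if math.hypot(wx - x, wy - y) <= ACTIVATION_RADIUS_M:
        current += 1
        if current < len(waypoints):
            nx, ny = waypoints[current]
            dist_ft = round(math.hypot(nx - x, ny - y) * M_TO_FT)
            bearing = math.degrees(math.atan2(ny - y, nx - x))
            rel = (bearing - heading_deg + 180.0) % 360.0 - 180.0
            side = ("straight" if abs(rel) <= 10
                    else ("left" if rel > 0 else "right"))
            speak(f"Next waypoint {dist_ft} at {abs(round(rel))} {side}")
    return current
```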

Virtual speech

This display was similar to the virtual mode in the earlier study (Loomis et al., 1998). The participant wore headphones with the electronic compass attached to the strap (see Figure 1A). The computer continuously issued synthesized speech that indicated the current distance (in feet) to the next waypoint; an utterance giving distance (such as “24”) was given 72 times per minute. The speech was spatialized by the virtual acoustic software, so that it appeared to come from the direction of the waypoint. (Virtual acoustic software converts a monaural input signal into a binaural output signal. The binaural signal contains cues for direction and distance and, when played through earphones, appears to come from a particular direction and distance in space.) The intensity of the sound of spatialized speech increased as the participant approached the waypoint.
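A rough sketch of this display's scheduling follows, with the binaural rendering delegated to a hypothetical renderer object; DirectSound 3D served this role in the actual system, and the helper names here are assumptions.

```python
import time

UTTERANCE_PERIOD_S = 60.0 / 72.0   # 72 distance utterances per minute

def virtual_speech_loop(get_waypoint, renderer):
    # Speak the remaining distance (in feet) about 72 times per minute,
    # placing each utterance at the waypoint's direction and distance so
    # that intensity grows as the waypoint is approached.
    while True:
        azimuth_deg, distance_ft = get_waypoint()
        renderer.set_source(azimuth_deg, distance_ft)
        renderer.speak(str(round(distance_ft)))   # e.g., "24"
        time.sleep(UTTERANCE_PERIOD_S)
```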

Virtual tone

The hardware configuration for this display was the same as for the virtual-speech display, but the participant heard tones, rather than synthesized speech. The tones were spatialized and appeared to come from the direction of the next waypoint. If the participant's head was pointing toward the waypoint within a tolerance of 10 degrees on either side, a tone was issued (a roughly triangular waveform with a periodicity of 560 Hz). The on-course tone appeared five times per second, with a duration of 160 milliseconds and a silent interval of 40 milliseconds. If the relative bearing (the difference between the direction to the waypoint and the pointing direction of the head) exceeded 10 degrees, a frequency-swept tone (a “whooping” sound) was issued at the rate of 2.3 times per second. Spatialized speech indicating the distance in feet to the next waypoint (such as “24”) was provided every 8 seconds (7.5 times per minute). The sound of the speech, on-course tones, and off-course tones all increased in intensity as the waypoint was approached. One tone or the other was always present (except for the time needed to switch between them), regardless of whether speech was present.
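For concreteness, the on-course signal can be approximated as follows; the sample rate and the exact synthesis details are our assumptions, not reported specifications.

```python
import numpy as np

SAMPLE_RATE = 44100  # Hz, assumed

def on_course_tone(seconds=1.0):
    # Approximate the on-course signal: a 560 Hz triangular waveform in
    # 160 ms bursts separated by 40 ms of silence (five bursts per second).
    t = np.arange(int(SAMPLE_RATE * 0.160)) / SAMPLE_RATE
    # Triangular wave: (2/pi) * arcsin(sin(2*pi*f*t)) spans -1..1.
    burst = (2.0 / np.pi) * np.arcsin(np.sin(2.0 * np.pi * 560.0 * t))
    gap = np.zeros(int(SAMPLE_RATE * 0.040))
    cycle = np.concatenate([burst, gap])      # 200 ms per burst cycle
    return np.tile(cycle, int(seconds / 0.2)).astype(np.float32)
```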

HPI tone

With this display, the participant held the haptic pointer in one hand (see Figure 1C). Whenever the hand pointed within 10 degrees of the direction to the next waypoint, the computer issued a rapid sequence of beeping tones to the shoulder-mounted speaker. The tones were the same as the on-course indication for the virtual-tone display. The intensity of the tones increased as the waypoint was approached. Speech indicating the distance in feet to the next waypoint was issued every 8 seconds. Whenever the relative bearing exceeded 90 degrees, the computer issued synthesized speech giving the relative bearing, rounded to the nearest 10 degrees (for example, “120 left”). The presence of speech did not influence the presence or absence of tones.
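A compact sketch of this behavior follows; the sign convention (positive = left) is an assumption, and play_tone and speak are placeholder callbacks.

```python
def hpi_tone_update(rel_bearing_deg, play_tone, speak):
    # Beep while the hand points within 10 degrees of the waypoint; if the
    # relative bearing exceeds 90 degrees, speak it rounded to the nearest
    # 10 degrees, e.g., "120 left".
    mag = abs(rel_bearing_deg)
    if mag <= 10:
        play_tone()
    elif mag > 90:
        side = "left" if rel_bearing_deg > 0 else "right"
        speak(f"{round(mag / 10) * 10} {side}")
```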

HPI speech

This display was similar to the previous display, except that synthesized speech was presented instead of tones. If the relative bearing to the next waypoint was within 10 degrees, the word straight was presented. For relative bearings greater than 10 degrees but less than 90 degrees, the words left and right signaled the direction to turn. For relative bearings greater than 90 degrees, speech was issued giving the relative bearing. The intensity of the speech did not vary with the distance to the next waypoint.
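The corresponding mapping from relative bearing to spoken output might be sketched as follows, assuming the same rounding rule and sign convention as the HPI-tone display.

```python
def hpi_speech_message(rel_bearing_deg):
    # "straight" within 10 degrees, "left"/"right" between 10 and 90
    # degrees, and the spoken bearing beyond 90 degrees (rounding to the
    # nearest 10 degrees is assumed). Positive bearing = left (assumed).
    mag = abs(rel_bearing_deg)
    side = "left" if rel_bearing_deg > 0 else "right"
    if mag <= 10:
        return "straight"
    if mag <= 90:
        return side
    return f"{round(mag / 10) * 10} {side}"
```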

Body pointing

This interface was identical to the HPI-tone display except that the compass was mounted on the torso at the waist (see Figure 1B). The participants had to aim their bodies toward the next waypoint (within 10 degrees) to receive the tones indicating alignment.

Table 1 delineates the physical and corresponding functional features of the five displays and summarizes how they differed from each other in terms of these features. In this research, we gave less emphasis to choosing displays that differed in just two or three features. Instead, our primary goal was to compare the two basic designs (virtual sound and HPI) and, on the basis of extensive discussions and pilot testing, to choose specific implementations of the two basic designs that we judged would function well. We believed that both basic designs would be effective, but we did not have any predictions about how the different displays would differ in terms of the participants' route-following performance and subjective evaluations.

Procedure

General

Each participant was tested with each of the five displays on five different paths during a single session. The testing session for each participant lasted about two hours (with rest breaks). The scheduling of the session was determined well in advance on the basis of the expected accuracy of the positioning of the GPS for the test site (available on a web site provided by Trimble Navigation). We tried to schedule sessions only when both the position dilution of precision and the horizontal dilution of precision were less than 2.0 for the entire session. (Position dilution of precision is a number relating to the precision of GPS data for positions in 3-dimensional space and reflects the geometry of the current satellite configuration. A small number indicates good precision. Horizontal dilution of precision is similar, but relates to positions varying only over the ground surface, with altitude being irrelevant.) Using a high-accuracy GPS receiver under these conditions, absolute error was usually less than 1 meter (about 3 feet), as judged by how close the participants walked to the physical locations that corresponded to the stored waypoints.
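The scheduling rule reduces to a simple predicate over forecast dilution-of-precision values; the forecast data structure in this sketch is illustrative, not the format provided by Trimble Navigation.

```python
def usable_windows(forecast, threshold=2.0):
    # Keep only the forecast windows in which both the position and the
    # horizontal dilution of precision are below the 2.0 criterion.
    # `forecast` is an illustrative list of (time, pdop, hdop) tuples.
    return [t for (t, pdop, hdop) in forecast
            if pdop < threshold and hdop < threshold]
```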

Interview and familiarization

The participants were asked about the cause and severity of their visual impairments and reported what type of obstacle-avoidance aid they used. A 5-point scale was used to compare their sense of mobility to that of others on nine important travel tasks. To assess their knowledge of angles expressed in degrees, several values were spoken by the experimenter, and the participants were asked to point in those directions relative to straight ahead. The participants were then given two tactile maps to familiarize them with the path-following task and procedure. First, they each traced a finger over a tactile map that had an example of a route with various turn angles. Then they examined a map with circles that represented the 2.1-meter (about 7-foot) radius around each waypoint, which made them aware that activation of each succeeding waypoint would occur before they arrived at the precise location of the current waypoint. Finally, they were read a script that explained how the experiment would proceed.

Field test

The participants with any residual vision were blindfolded and then led to the field-test site. All the participants had some experience using a long cane and were required to use it in the experiment in whichever hand they preferred. The other hand was used to hold the haptic pointer in the two conditions in which it was used. When the testing began, the participants were read a short description of the first display they were about to experience. The correct hardware configuration was arranged, after which they tried using the system with that display on a practice route of approximately 128 meters (about 420 feet). After they completed the practice route, they were allowed to ask questions about the system or the display being tested.

Next, the participants were taken to the starting point of one of the six test paths. The display was activated, and they then attempted to walk accurately and without much hesitation over the seven segments of the path. Because the paths were defined only within the computer database, successful completion of a path was possible only if the navigation system functioned properly and a participant was able to use the information correctly. The trajectory was recorded at a sampling rate of 1 per second. In addition, the time to complete the path was recorded. After they completed the path, the participants were led back to the start and traversed the path again. Following the two tests, the participants were asked to offer any positive or negative comments about the display, as well as any other comments. The entire procedure, including practice, was repeated for the remaining four displays. The order of the displays was randomized across the participants, as was the assignment of the different paths to the different display conditions.

Posttest interview

After the field test was concluded, the participants were asked to evaluate the five displays. They were asked to rank the five displays (rank 5 = best) and to assign each a rating from 1 to 10 (10 = best). Their final comments were solicited at this time.

Results

On all 10 test trials, each of the 15 participants was able to follow the test path and arrive at the final waypoint without undue delay, an impressive feat that has only recently become possible with the advent of wearable, highly accurate GPS systems. Across all participants and displays, the mean distance actually traveled over the 50-meter (164-foot) paths was just 62 meters (about 203 feet), and the mean travel time was 110 seconds, corresponding to an average walking speed of 0.56 meters (about 2 feet) per second, a good speed considering that there were six turns on each path. Figure 2 gives examples of very good and very poor performance on the same path, with travel distances of 59.4 meters (about 195 feet) and 115.2 meters (about 378 feet), respectively.

Figure 2.

Examples of the measured walking trajectories. A. An example of a particularly poor performance, in which the participant walked 115.2 meters (378 feet) for a 50-meter (164-foot) path. B. An example of a particularly good performance, in which the participant walked 59.4 meters (about 195 feet) for the same 50-meter (164-foot) path. The solid line represents the intended path, and the asterisks represent the walked path.

Figure 3 presents data on the participants' mean performance for the five displays. It includes plots for the time taken to complete the path and the total distance traveled, along with standard errors of the mean. It is also interesting to note how quickly the best-performing participants traversed the paths. The mean travel times for the six fastest participants for the five displays were HPI tone = 100 seconds, HPI speech = 103.8 seconds, body pointing = 92.3 seconds, virtual tone = 86.4 seconds, and virtual speech = 70.7 seconds.

Figure 3.

Performance measures obtained for the 15 participants. Both mean travel time (in seconds) and mean travel distance (in meters) are plotted for the five display conditions. The error bars represent one standard error of the mean.

One-way analyses of variance (ANOVAs) that were computed for the means in Figure 3 revealed a significant effect of display on travel time, F(4,56) = 4.92, p < .01, but no effect on travel distance. For travel time, post hoc tests (Fisher's protected least-significant difference) indicated these significant contrasts at the .05 level: Virtual speech led to faster travel than all other displays except virtual tone, and virtual tone led to faster travel than HPI tone.
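For readers who wish to check the degrees of freedom, a one-way repeated-measures ANOVA over a participants-by-displays matrix yields df = (4, 56) for 15 participants and 5 displays, as the following sketch (run on simulated, not actual, data) shows.

```python
import numpy as np

def rm_anova(data):
    # One-way repeated-measures ANOVA over a (participants x conditions)
    # matrix; with 15 participants and 5 displays this yields df = (4, 56),
    # matching the reported test. `data` would hold, e.g., travel times.
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, df_cond, df_err

# Example with simulated data: 15 participants, 5 displays.
rng = np.random.default_rng(0)
print(rm_anova(rng.normal(110, 20, size=(15, 5))))
```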

Figure 4 presents the participants' evaluative judgments. It includes plots for the mean ranks (from 1 to 5 with 5 as the most preferred) and the mean ratings (from 1 to 10 with 10 the best), along with the standard errors of the mean. One-way ANOVAs that were computed for the means of Figure 4 revealed significant effects of display on both rank, F(4,56) = 6.97, p < .001, and rating, F(4,56) = 5.42, p < .001. For both measures, post hoc tests indicated these significant contrasts: Virtual speech was judged better than all the other displays except body pointing, and body pointing was judged better than both HPI speech and virtual tone.

Figure 4.

Subjective judgments by the 15 participants. Both mean rank (best = 5) and mean rating (best = 10) are plotted for the five display conditions. The error bars represent one standard error of the mean.

Also important are the many comments that the participants made about the various displays. Some of the more salient comments are summarized next.

HPI speech and HPI tone

The participants made a number of negative comments about having to hold the pointer when using HPI speech and HPI tone, especially the need to keep it level (because tilting the compass gives erroneous readings). However, a few participants liked the pointer because it was faster for locating a waypoint than turning the body with the body-pointing display. Opinion was divided about speech versus tones. Some participants thought that the tones required less cognitive effort for judging alignment with the waypoint, but others found speech to be more informative (especially for the direction of errors) and the tones to be more annoying.

Body pointing

Several participants thought that the hands-free operation with body pointing was a distinct advantage over the similar HPI-tone display. Conversely, several noted the sluggishness or difficulty of scanning with the body.

Virtual speech and virtual tone

There were many negative comments about the headphones and the auditory signals from virtual-speech and virtual-tone displays blocking environmental sounds, which reinforced the comments about the use of headphones expressed in the survey by Golledge et al. (2004). Conversely, the participants liked the quick and informative signals provided by the two displays in addition to the hands-free operation. Several complained about the continuous presence of tones in the virtual-tone display, and one person said that the speech was more localizable than the tones. Several participants also liked the continuous updating of distance to the next waypoint when using virtual speech, a feature that is unique to this display.

Discussion

The fact that all 15 participants were able to complete all tests involving the 50-meter (164-foot) paths with a mean travel distance of only 62 meters (about 203 feet) and a mean travel time of 110 seconds is an important result by itself. It means that the technology has promise for guiding people who are visually impaired precisely through space when precise GPS data are available. Eventually, small but accurate GPS receivers that use differential correction will be readily available. It can be expected that precise guidance like that observed here will be possible with such receivers, coupled with directional information from an electronic compass, such as the one used in this study, or from a directional gyro.

Because all the displays that were used in this study made use of a compass and the rapid updating of information about the direction to the next waypoint using the hand, body, or virtual sound, they are all examples of what we have termed spatial displays. A nonspatial display is one that uses speech or braille output to provide more general directions for travel without frequent updates about the direction of the next waypoint. We did not evaluate nonspatial displays in this study and thus cannot speak to their effectiveness for precise route guidance as studied here, but surely the performance of any system that lacks a directional sensor like a compass will suffer, as indicated by our earlier research (Loomis et al., 1998).

In comparing the performance and subjective assessments of the different displays, it is important to note that the virtual-speech display provided continually updated information on the distance to the next waypoint, whereas the other displays provided this information only every 8 seconds. For the short route segments (averaging 7.1 meters, or about 23 feet), this difference is not likely to have contributed much to the variation in the participants' performance, but it probably influenced the participants' subjective evaluations, since several participants commented positively on this feature of the display.

Because the travel distance did not vary significantly with the different displays, we focused on travel time. The two virtual displays led to the fastest mean travel times (see Figure 3). In our previous study (Loomis et al., 1998), the virtual display also led to the shortest travel time compared with three other headphone speech displays. The probable reason for the advantage in the current experiment is that when the participant arrived at a waypoint, the direction to the next waypoint was immediately apparent through perceptual localization. For the other displays, the hand or body had to rotate into alignment with the direction of the next waypoint before its direction could be precisely known. The finding that the distance did not vary significantly across the displays further indicates that the differences in time are attributable to the turns, rather than to the straight segments of the path.

The subjective evaluations indicated that the participants judged virtual speech to be the best, but part of the reason, as we indicated earlier, was the availability of information about the distance to the next waypoint. The next-best option was body pointing, which was judged better than both the HPI-speech and the virtual-tone displays. Comments by a few participants indicated some possible reasons: the absence of headphones and the freedom from holding anything in the hand.

It is interesting that the virtual displays were rated highly despite a large number of negative comments about the headphones blocking environmental sounds. The participants liked the information provided by the displays in the test situation, but, when they thought about how the displays would be used in real navigation, they were concerned about the blocking of environmental sounds. There are two aspects to this blocking: actual occlusion of the acoustic signals reaching the ear canals and the perceptual masking of environmental sounds by the auditory signals from the display. The problem of occlusion may be dealt with in several ways. We have informally experimented with bone-conduction earphones, which do not occlude other sounds. However, because of the much faster conduction of sound in bone than in air, the binaural direction cues (interaural intensity difference and interaural time difference) are distorted. In addition, the acoustic efficiency of bone-conduction earphones is much lower than that of normal earphones, which entails higher power consumption, and they can be uncomfortable. As an alternative, we are now evaluating small air tubes that conduct sound into the ear canals from acoustic transducers that are positioned above the ears. These air tubes produce little occlusion of environmental sounds, a fact that has been acknowledged by the blind people who have tried them in our preliminary research. As for masking of environmental sounds, it is likely to be a problem that is common to all forms of auditory display. If so, virtual displays using earphones would still seem to have great promise once the problem of occlusion is resolved.

This experiment was concerned with route guidance using 50-meter (164-foot) multisegment paths in an area that was free of obstacles and environmental cues, like curbs and traffic sounds. Because of the contrived nature of the task, the results of the experiment are not readily generalizable to other navigation tasks that travelers who are blind confront. More research needs to be done in different settings with different navigation tasks and a diversity of visually impaired users to evaluate the relative effectiveness of text-based displays and various forms of spatial display. It is possible, for example, that traveling along the regular grid of streets in a city may be easier with conventional speech displays, whereas negotiating a complex path through a suburban park with many trees and bushes, while maintaining awareness of location within the larger environment, may be easier with a spatial display. Preliminary results from research we are conducting have supported these expectations.

Acknowledgments

This research was supported by Grant 09740 from the National Eye Institute to the University of California, Santa Barbara (Jack Loomis, principal investigator) and Grant SB020101 from the National Institute on Disability and Rehabilitation Research to the Sendero Group (Michael May, principal investigator). The authors thank Jerry Tietz for configuring the hardware and developing the custom software used in the experiment as well as for the suggestion to use tones in connection with the haptic pointer interface. The authors also thank two anonymous reviewers for their helpful suggestions, one of whom generously took time to create Table 1.

Contributor Information

Jack M. Loomis, Department of Psychology, University of California, Santa Barbara, CA 93106; loomis@psych.ucsb.edu.

James R. Marston, Department of Geography, University of California, Santa Barbara, CA 93106; marstonj@geog.ucsb.edu.

Reginald G. Golledge, Department of Geography, University of California, Santa Barbara, CA 93106; golledge@geog.ucsb.edu.

Roberta L. Klatzky, Department of Psychology, Carnegie Mellon University, Baker Hall 342C, Pittsburgh, PA 15213; klatzky@andrew.cmu.edu.

References

  • Crandall W, Brabyn J, Bentzen B, Myers L. Remote infrared signage evaluation for transit stations and intersections. Journal of Rehabilitation Research and Development. 1999;36:341–355. [PubMed] [Google Scholar]
  • Ertan S, Lee C, Willets A, Tan H, Pentland A. Digest of the Second International Symposium on Wearable Computers. IEEE Computer Society; Washington, DC: 1998. A wearable haptic navigation guidance system; pp. 164–165. [Google Scholar]
  • Gaunet F. Verbal guidance rules for a localized wayfinding aid intended for blind pedestrians in urban areas. Universal Access in the Information Society. in press. [Google Scholar]
  • Gaunet F, Briffault X. Exploring the functional specifications of a localized wayfinding verbal aid for blind pedestrians: Simple and structured urban areas. Human-Computer Interaction. in press. [Google Scholar]
  • Golledge RG, Klatzky RL, Loomis JM, Speigle J, Tietz J. A geographic information system for a GPS based personal guidance system. International Journal of Geographical Information Science. 1998;12:727–749. [Google Scholar]
  • Golledge RG, Marston JR, Loomis JM, Klatzky RL. Stated preferences for components of a personal guidance system for nonvisual navigation. Journal of Visual Impairment & Blindness. 2004;98:135–147. [Google Scholar]
  • Hatakeyama T, Hagiwara F, Koike H, Ito K, Ohkubo H, Bond CW, Kasuga M. Remote infrared audible signage system. International Journal of Human-Computer Interaction. 2004;17:61–70. [Google Scholar]
  • Helal A, Moore S, Ramachandran B. Drishti: An integrated navigation system for visually impaired and disabled; Proceedings of the 5th International Symposium on Wearable Computer; Oct, 2001. [Online]. Available: http://www.harris.cise.ufl.edu/projects/publications/wearableConf.pdf. [Google Scholar]
  • LaPierre C. Personal navigation system for the visually impaired. Department of Electronics, Carleton University; Ottawa, Ontario, Canada: 1998. Unpublished master's thesis [Online]. Available: http://lapierre.jammys.net/master's. [Google Scholar]
  • Loomis JM. Digital map and navigation system for the visually impaired. Department of Psychology, University of California; Santa Barbara: 1985. Unpublished manuscript. [Google Scholar]
  • Loomis JM, Golledge RG, Klatzky RL. Navigation system for the blind: Auditory display modes and guidance. Presence: Teleoperators and Virtual Environments. 1998;7:193–203. [Google Scholar]
  • Loomis JM, Golledge RG, Klatzky RL. GPS-based navigation systems for the visually impaired. In: Barfield W, Caudell T, editors. Fundamentals of wearable computers and augmented reality. Lawrence Erlbaum; Mahwah, NJ: 2001. pp. 429–446. [Google Scholar]
  • Loomis JM, Golledge RG, Klatzky RL, Speigle JM, Tietz J. Proceedings of the First Annual International ACM/SIGCAPH Conference on Assistive Technologies (Assets ′94), Marina Del Rey, CA, October 31–November 1. Association for Computing Machinery; New York: 1994. Personal guidance system for the visually impaired; pp. 85–91. [Google Scholar]
  • Loughborough W. Talking lights. Journal of Visual Impairment & Blindness. 1979;73:243. [Google Scholar]
  • Makino H, Ishii I, Nakashizuka M. Proceedings of the 18th annual meeting of the IEEE EMBS. IEEE Engineering in Medicine and Biology Society; Piscataway, NJ: 1996. Development of navigation system for the blind using GPS and mobile phone connection. [Google Scholar]
  • Marston JR, Golledge RG. The hidden demand for activity participation and travel by people with vision impairment or blindness. Journal of Visual Impairment & Blindness. 2003;97:475–488. [Google Scholar]
  • Petrie H, Johnson V, Strothotte T, Raab A, Fritz S, Michel R. MoBIC: designing a travel aid for blind and elderly people. Journal of Navigation. 1996;49:45–52. [Google Scholar]