Stefan Münzer*, Hubert D. Zimmer*, Maximilian Schwalm*, Jörg Baus+ & Ilhan Aslan+
*Department of Psychology, Brain and Cognition Unit
+German ...
In the present study, three different navigation assistance conditions (differing with respect to presentation modality and the presentation of additional survey-like information) are compared with a map-based condition in a pedestrian wayfinding situation. In the assistance conditions, route information was always presented as view-based pictures at intersections (i.e. pictures of the environment as seen from the participants’ current point of view) together with a direction command. In two of three navigation assistance conditions, the direction command was presented auditorily-verbally (i.e. as a speech command together with the view-based picture of the intersection), while in one of the conditions, the direction was visually indicated as a red line included in the picture of the intersection. This variation was intended to check whether a combination of modalities (auditory-verbal plus visual) would lead to better learning. Multimedia learning theory (e.g. Mayer & Moreno, 1998) and cognitive load theory (Chandler & Sweller, 1991; Sweller & Chandler, 1994; Sweller, Van Merrienboer & Paas, 1998) predict that a combination of the auditory-verbal with the visual modality should be superior. However, recent studies conducted in our laboratory have demonstrated a robust picture superiority effect for route learning over combinations of verbal and pictorial information in route instructions.
Two of the three navigation assistance conditions provided additional information that was intended to enhance spatial learning. A visual animation showed, from a bird's eye view, the shape of the current way segment from the last intersection to the present one and its continuation to the next intersection. Thus, the animation provided an allocentric view of three intersections, including their spatial relations. Animations are generally thought to enhance the understanding of dynamic processes (e.g. Mayer & Anderson, 1992; Mayer & Moreno, 2002; but see Hegarty, Kriz & Cate, 2003). In the present experiment, the animations illustrated the movement along the shape of the walked way as an aid for relating the egocentric views of the landmarks, the actual movement in the environment, and the shape of the way seen from a bird's eye view. It was hypothesized that this additional context information would support the acquisition of survey knowledge.
2.1 Participants and Design
Sixty-four subjects took part in the study (33 female, 31 male). The mean age was 24 years (range 17–45). All participants were first-time visitors to the zoo environment and were paid for their participation. The four experimental conditions mentioned in the introduction were realized in a between-subjects design, with sixteen participants in each condition. There were equal numbers of female and male participants in each condition (except, by accident, in the map condition, with 9 females and 7 males).
2.2 Materials
The zoo of Saarbrücken (Germany) was chosen as the real environment. A route through the zoo was selected as the walking path. The route consisted of 16 decision points and 15 pathway segments (see Figure 1). It was the same for all subjects in all conditions.
In three out of four conditions, participants were equipped with a personal digital assistant (PDA; Hewlett-Packard iPAQ 5450 Pocket PC). This PDA served as the pedestrian navigation system. When a participant had reached a critical intersection along the route, a picture of that intersection appeared on the screen of his/her PDA. This picture (a photograph) always corresponded to the actual view from the participant's perspective. The participant's PDA was connected wirelessly via Bluetooth to a second PDA operated by the experimenter, who could send a signal to the participant's PDA.
Upon receiving this signal, the next navigation information appeared on the participant's PDA screen. The signal was sent at pre-defined positions along the route, so that every participant received the navigation information at the same positions.
The presentation was varied as follows:
Visual + context: Before a direction command was presented, a visual animation showed a partial route comprising the previous, the current and the next intersection, depicting their spatial relations from a bird's eye view. Additionally, the shape of the way was progressively filled in by an animation effect. When the animation started, the view-based picture of the current intersection was small, like a thumbnail preview, and was visually attached to the representation of the current intersection. At the end of the animation, this view-based picture zoomed to a larger size until it filled the entire screen (see Figure 2 a). The direction was indicated by a red line in this picture of the intersection (as depicted in Figure 2 a).
Auditory + context: In this condition, the picture of the intersection was complemented with an auditory-verbal command (e.g. "turn left"). There was a red dot in the picture indicating the exact position at which the command should be applied (see Figure 2 b). In this condition, the same spatial context animation was presented as in the visual + context condition.
Auditory: The picture of the intersection was presented, and the direction command was provided auditorily-verbally as in the auditory + context condition. However, no context was presented (see Figure 2 c).
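The three navigation assistance presentations described above differ only in which components accompany the intersection photograph. This can be summarized as a small lookup, sketched here in Python; all names are illustrative and not taken from the study's actual software:

```python
# Sketch of the stimulus components presented at each decision point,
# per navigation assistance condition as described in the text.
# Component names are hypothetical labels, not from the DFKI software.

def presentation(condition):
    """Return the set of components shown at an intersection."""
    base = {"intersection_photo"}  # every condition shows the view-based picture
    if condition == "visual+context":
        # bird's-eye animation of previous/current/next intersection,
        # then the photo with a red line indicating the direction
        return base | {"context_animation", "red_direction_line"}
    if condition == "auditory+context":
        # same context animation; the direction is spoken, and a red dot
        # marks the position at which the command applies
        return base | {"context_animation", "speech_command", "red_position_dot"}
    if condition == "auditory":
        # spoken command as above, but without the context animation
        # (the red position dot is assumed to be present as in
        # auditory + context; the text does not state this explicitly)
        return base | {"speech_command", "red_position_dot"}
    raise ValueError(f"unknown condition: {condition}")
```

On this reading, the auditory + context and auditory conditions differ only in the context animation, which is the contrast the survey knowledge hypothesis rests on.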
Map: In the map-based wayfinding condition, participants did not use PDAs. They were shown a sheet of paper (DIN A4) with a map fragment of the zoo. Each fragment covered a part of the original route involving three or four intersections. The map fragments showed only the shape of the ways, but no landmarks. The original route had been broken down into four such partial routes, and a fragmentary map had been prepared for each.
Additionally, there were photographs of the three or four intersections of this part of the route depicted on the paper. These photographs were the same as used on the PDA computers in the navigation assistance conditions, but without location or direction information. The pictures were numbered to indicate their order. The start and destination positions on the map were marked, and the critical intersections (decision points) were indicated by dots (see Figure 3).
With this information, the route to be taken could be inferred. The map fragments thus required participants to find their way from a map under controlled conditions, with landmark information (i.e. pictures of the intersections) similar to that in the navigation assistance conditions.
The knowledge acquired by the participants was tested with a route recognition test and a test of survey knowledge. The route recognition test assessed the ability to remember the correct direction given a picture of an intersection. The same pictures that had been presented during the walk were shown, without any direction information, on a tablet PC (Acer Travelmate C110) in randomized order. The images contained sensitive areas marked by red rectangles that subjects tapped to indicate the direction they had taken at each decision point (see Figure 4). Thirteen items of the route recognition test required a choice between two alternatives; three items required a choice among three alternatives.
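From these item counts, the guessing baseline for the route recognition test follows directly: a guess is correct with probability 1/2 on two-alternative items and 1/3 on three-alternative items. A short calculation:

```python
# Expected accuracy under pure guessing for the route recognition test:
# 13 items with two response alternatives, 3 items with three alternatives.
two_alt_items = 13
three_alt_items = 3
total_items = two_alt_items + three_alt_items  # 16 decision points

expected_correct = two_alt_items * (1 / 2) + three_alt_items * (1 / 3)
chance_accuracy = expected_correct / total_items

print(f"chance accuracy: {chance_accuracy:.1%}")  # 46.9%, i.e. roughly 50 %
```

This is the "about 50 %" chance level against which performance is compared in the Results.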
Survey knowledge was tested by a spatial relocation task. In this task, thumbnail pictures of the intersections had to be placed at their correct locations on a roadmap of the zoo. No landmarks were shown on this map. The pictures were the same as those used in the route recognition test. Subjects were presented with the map on a tablet PC (Acer Travelmate C110) and had to drag and drop the pictures of the intersections onto the map (see Figure 5).
The software for the pedestrian navigation system on the PDA computers as well as the route recognition and the survey knowledge test software on the tablet PC was developed and provided for the present study by the German Research Center for Artificial Intelligence (DFKI, Saarbrücken, Germany).
2.3 Procedure
The experiment consisted of two parts: the guided walk served as an incidental study phase, and afterwards the two tests were administered. Participants were not informed that they would be tested for route and survey knowledge; the study was introduced as a usability study of navigation assistance systems.
In the three navigation assistance conditions, the PDA computer was handed to the subject. In the auditory-verbal conditions, subjects wore one in-ear earphone. During the tour, the experimenter followed with a separate PDA computer at a distance of about 10 meters.
When the subject had passed a pre-defined position, the experimenter sent a signal from his/her PDA to the subject's PDA, which then presented the information relevant for the approached intersection (the view-based picture of the current intersection, direction information and animation), depending on the condition. In the map-based wayfinding condition, the participant was shown a map fragment. The participant had to infer which direction he/she would have to go at the intersections, and was asked to indicate the direction by drawing an arrow on the picture for each of the intersections. The participant was asked to memorize the directions and the views of the three or four intersections for this segment of the walk. The map fragment was then taken away, and the participant walked the partial route from memory. This procedure was repeated for all four route segments until the original route was completed. Walking the tour in the zoo environment took approximately 25 minutes.
In the test phase, the route recognition test was administered first, followed by the survey knowledge test. In neither test was there a time limit for giving the answer in a particular trial. Only accuracy was scored. Since no laboratory room was available at the zoo, participants completed the tests on a tablet PC suitable for outdoor use. Testing lasted about 25 minutes.
3. Results
Route memory performance was evaluated on the basis of the number of correctly remembered directions in the route recognition test. Overall, participants performed quite well (7 % to 25 % errors; see Table 1). The 95 % confidence intervals do not overlap with chance performance, which is about 50 % (see Figure 6). The four conditions differed. A one-way between-subjects analysis of variance with navigation condition as factor revealed a significant difference in the relative number of false directions, F(3, 60) = 7.14, MSE = 0.0135, p < 0.001. No significant difference was obtained between the three conditions in which navigation assistance was used, F(2, 45) = 0.85, and all pair-wise comparisons were far from significance. However, route memory after map-based wayfinding was significantly better than after walking guided by navigation assistance, F(1, 60) = 19.54, MSE = 0.0135, p < 0.001. Thus, route memory performance was good in the navigation assistance conditions, but nearly perfect in the map-based wayfinding condition. An additional analysis including gender as a factor yielded no significant gender effects on route memory performance.
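The reported F values can be checked against the F distribution. The sketch below computes upper-tail p-values in pure Python via the regularized incomplete beta function (a standard continued-fraction evaluation); it is an independent check on the reported statistics, not part of the study's analysis software:

```python
import math

def _betacf(a, b, x, max_iter=300, eps=3e-12):
    # Continued fraction for the regularized incomplete beta function
    # (modified Lentz's method).
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c = 1.0
    d = 1.0 - qab * x / qap
    if abs(d) < 1e-30:
        d = 1e-30
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        aa = m * (b - m) * x / ((qam + m2) * (a + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30:
            d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30:
            c = 1e-30
        d = 1.0 / d
        h *= d * c
        aa = -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))
        d = 1.0 + aa * d
        if abs(d) < 1e-30:
            d = 1e-30
        c = 1.0 + aa / c
        if abs(c) < 1e-30:
            c = 1e-30
        d = 1.0 / d
        delta = d * c
        h *= delta
        if abs(delta - 1.0) < eps:
            break
    return h

def betai(a, b, x):
    # Regularized incomplete beta function I_x(a, b).
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    front = math.exp(math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                     + a * math.log(x) + b * math.log(1.0 - x))
    if x < (a + 1.0) / (a + b + 2.0):
        return front * _betacf(a, b, x) / a
    return 1.0 - front * _betacf(b, a, 1.0 - x) / b

def f_sf(F, d1, d2):
    # Upper-tail probability of the F distribution:
    # P(F_{d1,d2} > F) = I_{d2/(d2 + d1*F)}(d2/2, d1/2)
    return betai(d2 / 2.0, d1 / 2.0, d2 / (d2 + d1 * F))

# p-values for the F statistics reported above
print(f"overall effect:       p = {f_sf(7.14, 3, 60):.4g}")   # < .001
print(f"assistance conditions: p = {f_sf(0.85, 2, 45):.4g}")  # n.s.
print(f"map vs. assistance:    p = {f_sf(19.54, 1, 60):.4g}") # < .001
```

The computed p-values are consistent with the pattern reported: the overall effect and the map-versus-assistance contrast fall well below .001, while the comparison among the three assistance conditions is far from significance.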