Ants can navigate over long distances between their nest and food sites using visual cues [1, 2]. Recent studies show that this capacity is undiminished when walking backward while dragging a heavy food item [3-5]. This challenges the idea that ants use egocentric visual memories of the scene for guidance [1, 2, 6]. Can ants use their visual memories of the terrestrial cues when going backward? Our results suggest that ants do not adjust their direction of travel based on the perceived scene while going backward. Instead, they maintain a straight direction using their celestial compass. This direction can be dictated by their path integrator but can also be set using terrestrial visual cues after a forward peek. If the food item is too heavy to enable body rotations, ants moving backward drop their food on occasion, rotate and walk a few steps forward, return to the food, and drag it backward in a now-corrected direction defined by terrestrial cues. Furthermore, we show that ants can maintain their direction of travel independently of their body orientation. It thus appears that egocentric retinal alignment is required for visual scene recognition, but ants can translate this acquired directional information into a holonomic frame of reference, which enables them to decouple their travel direction from their body orientation and hence navigate backward. This reveals substantial flexibility and communication between different types of navigational information: from terrestrial to celestial cues and from egocentric to holonomic directional memories.
This study examined how finger-touch input performance (i.e., task completion time, failure status, and error rate) and subjective ratings (i.e., performance and physical demand) are influenced by touchscreen gesture type and direction. Twenty participants performed one-touch (i.e., drag and swipe) and two-touch (i.e., pinch and spread) gesture tasks on a tablet, using several major directions (i.e., 8 directions for one-touch and 4 directions for two-touch gestures). The results showed that swipe was approximately 4.5 times faster than drag, but pinch and spread showed no significant difference in task completion time. Dragging and pinching showed more failures or higher error rates compared to swiping and spreading, respectively. One-touch gestures in the horizontal directions were rated as having higher performance and lower physical demand than those in the vertical and diagonal directions. Two-touch gestures in the horizontal directions took the shortest time but caused more failures and higher error rates. Practitioner Summary: This study provides evidence for the effects of touchscreen gesture type and direction on human performance and subjective ratings, which varied depending on the number of fingers used. Designers should arrange related touchscreen components accordingly, to improve finger-touch input performance and reduce user workload.
Misunderstanding each other's intentions is one of the most common causes of shipping accidents. By sending out a number of waypoints ahead and displaying them on the Electronic Chart Display and Information System (ECDIS), a ship's intentions would be clearly visible to other ships. Displaying ships' intentions would be a major change compared to navigation today. It could be very beneficial, but it could also have unintended consequences. This paper reports on findings from an evaluation looking for unintended consequences of this change using system simulation. During the simulation, an unanticipated behavior was observed: bridge crews started to click and drag waypoints to negotiate crossing situations ahead of time. The behavior could be compared to reaching agreements over the VHF. However, further research is needed to evaluate this new behavior and how it aligns with the COLREGs.