Article

Perceived timing of vestibular stimulation relative to touch, light and sound

Department of Psychology, Multisensory Integration Laboratory, Centre for Vision Research, York University, Toronto, ON M3J 1P3, Canada.
Experimental Brain Research 05/2009; 198(2-3):221-31. DOI: 10.1007/s00221-009-1779-4
Source: PubMed

ABSTRACT

Different senses have different processing times. Here we measured the perceived timing of galvanic vestibular stimulation (GVS) relative to tactile, visual and auditory stimuli. Simple reaction times to perceived head movement (438 ± 49 ms) were significantly longer than those to touches (245 ± 14 ms), lights (220 ± 13 ms), or sounds (197 ± 13 ms). Temporal order and simultaneity judgments both indicated that GVS had to occur about 160 ms before the other stimuli to be perceived as simultaneous with them. This lead was significantly smaller than the lead predicted by the reaction-time differences, consistent with an incomplete tendency to compensate for differences in processing times.
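The abstract's central comparison can be made concrete with a short sketch (not the paper's analysis code; only the mean reaction times and the ~160 ms observed lead come from the abstract). If perceived simultaneity were set purely by processing time, the vestibular stimulus would need to lead each other modality by the difference in simple reaction times; the observed lead is smaller than every such prediction:

```python
# Mean simple reaction times (ms) reported in the abstract
rt = {"vestibular": 438, "touch": 245, "light": 220, "sound": 197}

# Lead actually observed in the temporal-order/simultaneity judgments
observed_lead_ms = 160

for modality in ("touch", "light", "sound"):
    # Prediction if simultaneity tracked processing time exactly
    predicted_lead = rt["vestibular"] - rt[modality]
    print(f"{modality}: predicted lead {predicted_lead} ms, "
          f"observed ~{observed_lead_ms} ms")
```

Every predicted lead (193, 218 and 241 ms) exceeds the ~160 ms observed, which is the sense in which compensation for processing-time differences appears incomplete.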

Available from: Laurence R Harris
  • Source
    • "Such a process, to be effective, is created through development and learning, as suggested by Hebb (1949); it uses post-movement feedback loops to carry environmental information (at latencies >15 ms; Liddell and Sherrington 1924; Lisberger 1984; Miles et al. 1986; Myklebust 1990; Corden et al. 2000; Barnett-Cowan and Harris 2009), which explains why, once the learning of a task has been completed, movement execution becomes faster and more accurate. The completion of this process can take a very long time when mastering a language or becoming a world-class athlete."
    ABSTRACT: In this review, we examine the importance of having a body as essential for the brain to transfer information about the outside world to generate appropriate motor responses. We discuss the context-dependent conditioning of the motor control neural circuits and its dependence on the completion of feedback loops, which is in close agreement with the insights of Hebb and colleagues, who have stressed that for learning to occur the body must be intact and able to interact with the outside world. Finally, we apply information theory to data from published studies to evaluate the robustness of the neuronal signals obtained by bypassing the body (as used for brain-machine interfaces) versus via the body to move in the world. We show that recording from a group of neurons that bypasses the body exhibits a vastly degraded level of transfer of information as compared to that of an entire brain using the body to engage in the normal execution of behaviour. We conclude that body sensations provide more than just feedback for movements; they sustain the necessary transfer of information as animals explore their environment, thereby creating associations through learning. This work has implications for the development of brain-machine interfaces used to move external devices. Note: on page two, last paragraph, line 4 from bottom of the document: '...Fig. 1ab afferents' should read '...1a and 1b afferents.'
    Experimental Brain Research 09/2015; DOI:10.1007/s00221-015-4423-5
  • Source
    • "Postural control provides an experimental context appropriate to highlight the interaction of multiple sensory inputs originating from different sensory systems (Hatzitaki et al., 2004). Body stability strongly depends on the non-linear aspects of the sensory fusion process and its temporal dynamics (Black and Nashner, 1984; Jeka et al., 2000; Horak and Hlavačka, 2002; Barnett-Cowan and Harris, 2009; Rowland and Stein, 2014). In turn, this depends to a large extent on the nature of the signals involved and their spatiotemporal relationship (Hlavačka et al., 1999). "
    ABSTRACT: Maintaining equilibrium is basically a sensorimotor integration task. The central nervous system (CNS) continually and selectively weights and rapidly integrates sensory inputs from multiple sources, and coordinates multiple outputs. The weighting process is based on the availability and accuracy of afferent signals at a given instant, on the time-period required to process each input, and possibly on the plasticity of the relevant pathways. The likelihood that sensory inflow changes while balancing under static or dynamic conditions is high, because subjects can pass from a dark to a well-lit environment or from a tactile-guided stabilization to loss of haptic inflow. This review article presents recent data on the temporal events accompanying sensory transition, on which basic information is fragmentary. The processing time from sensory shift to reaching a new steady state includes the time to (a) subtract or integrate sensory inputs; (b) move from allocentric to egocentric reference or vice versa; and (c) adjust the calibration of motor activity in time and amplitude to the new sensory set. We present examples of processes of integration of posture-stabilizing information, and of the respective sensorimotor time-intervals while allowing or occluding vision or adding or subtracting tactile information. These intervals are short, in the order of 1-2 s for different postural conditions, modalities and deliberate or passive shift. They are just longer for haptic than visual shift, just shorter on withdrawal than on addition of stabilizing input, and on deliberate than unexpected mode. The delays are the shortest (for haptic shift) in blind subjects. 
    Since automatic balance stabilization may be vulnerable to sensory-integration delays and to interference from concurrent cognitive tasks in patients with sensorimotor problems, insight into the processing time for balance control represents a critical step in the design of new balance- and locomotion-training devices.
    Frontiers in Systems Neuroscience 10/2014; 8:190. DOI:10.3389/fnsys.2014.00190
  • Source
    • "In RT tasks, the difference between sensory modalities gives an approximate value of the lag that one modality must have relative to another for the participant to perceive them as simultaneous. From RT results, the time needed to react to a visual stimulus is about 150–220 ms (e.g., Brenner and Smeets, 2003; Barnett-Cowan and Harris, 2009), although this value can vary with factors such as the intensity of stimulation (e.g., Schiefer et al., 2001). However, one must take into account that RT is a behavioral measure, so the values obtained contain not only the signal-processing time but also the time needed to react."
    ABSTRACT: Many actions involve limb movements toward a target. Visual and proprioceptive estimates are available online, and by optimally combining both modalities during the movement (Ernst and Banks, 2002), the system can increase the precision of the hand estimate. The notion that both sensory modalities are integrated is also motivated by the intuition that we do not consciously perceive any discrepancy between the felt and seen positions of the hand. This coherence as a result of integration does not necessarily imply realignment between the two modalities (Smeets et al., 2006). For example, the two estimates (visual and proprioceptive) might be different without either of them (e.g., proprioception) ever being adjusted after recovering the other (e.g., vision). The implication that the felt and seen positions might be different has a temporal analog. Because the actual feedback from the hand at a given instantaneous position reaches brain areas at different times for proprioception and vision (shorter for proprioception), the corresponding instantaneous unisensory position estimates will be different, with the proprioceptive one being ahead of the visual one. Based on the assumption that the system optimally integrates the available evidence from both senses online, we introduce a temporal mechanism that explains the reported overestimation of hand positions when vision is occluded for active and passive movements (Gritsenko et al., 2007) without the need to resort to initial feedforward estimates (Wolpert et al., 1995). We set up hypotheses to test the validity of the model, and we contrast simulation-based predictions with empirical data.
    Frontiers in Psychology 02/2014; 5:50. DOI:10.3389/fpsyg.2014.00050
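The optimal cue combination invoked in the excerpt above (Ernst and Banks, 2002) has a standard form worth making explicit: each unisensory estimate is weighted by its reliability (inverse variance), so the fused estimate both sits closer to the more reliable cue and has lower variance than either cue alone. A minimal sketch, with invented hand-position numbers (only the weighting rule comes from the cited work):

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Reliability-weighted (maximum-likelihood) fusion of two cues."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_b)  # weight on cue A
    w_b = 1 - w_a                                # weight on cue B
    fused_mean = w_a * mean_a + w_b * mean_b
    fused_var = 1 / (1 / var_a + 1 / var_b)      # never exceeds min(var_a, var_b)
    return fused_mean, fused_var

# Hypothetical estimates of hand position (cm): vision precise, proprioception less so
mean, var = fuse(10.0, 0.5, 11.0, 2.0)
print(mean, var)  # fused estimate lies nearer the more reliable (visual) cue
```

With these numbers the visual cue gets weight 0.8, giving a fused mean of 10.2 cm and a fused variance of 0.4, below the visual variance of 0.5, which is the precision gain the excerpt's "optimally combining" refers to.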