Title | Supramodal representation of temporal priors calibrates interval timing
Authors | Zhang, Huihui; Zhou, Xiaolin
Journal | JOURNAL OF NEUROPHYSIOLOGY
Year | 2017
Keywords | supramodal prior; Bayesian modeling; interval timing; INTERNAL CLOCK; MULTISENSORY INTEGRATION; AUDITORY DOMINANCE; TIME PERCEPTION; DISCRIMINATION; MEMORY; VISION; HUMANS; UNCERTAINTY; MODALITY
DOI | 10.1152/jn.01061.2015
Abstract | Human timing behaviors are consistent with Bayesian inference, according to which both previous knowledge (prior) and current sensory information determine final responses. However, it is unclear whether the brain represents temporal priors exclusively for individual modalities or in a supramodal manner when temporal information comes from different modalities at different times. Here we asked participants to reproduce time intervals in either a unisensory or a multisensory context. In unisensory tasks, sample intervals drawn from a uniform distribution were presented in a single visual or auditory modality. In multisensory tasks, sample intervals from the two modalities were randomly mixed; visual and auditory intervals were drawn from two adjacent uniform distributions, with the conjunction of the two being equal to the distribution in the unisensory tasks. In the unisensory tasks, participants' reproduced times exhibited classic central-tendency biases: shorter intervals were overestimated and longer intervals were underestimated. In the multisensory tasks, reproduced times were biased toward the mean of the whole distribution rather than the means of intervals in individual modalities. The Bayesian model with a supramodal prior (distribution of time intervals from both modalities) outperformed the model with modality-specific priors in describing participants' performance. With a generalized model assuming the weighted combination of unimodal priors, we further obtained the relative contribution of visual intervals and auditory intervals in forming the prior for each participant. These findings suggest a supramodal mechanism for encoding priors in temporal processing, although the extent of influence of one modality on another differs individually. NEW & NOTEWORTHY Visual timing and auditory timing influence each other when time intervals in the two modalities are drawn from two adjacent distributions and are randomly intermixed. A Bayesian model with a supramodal prior (distribution of intervals from both modalities) outperforms the model using sensory-specific priors in describing participants' performance. A generalized model further reveals that the prior is represented as a weighted average of the distributions of time intervals from the two modalities, with weights that differ across individuals.
Funding | National Basic Research Program of China (973 Program) [2015CB856400]
Indexing | SCI(E); SSCI
Document Type | ARTICLE
Volume | 118
Issue | 2
Pages | 1244-1256
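The central-tendency bias described in the abstract falls out of Bayes-least-squares estimation: the observer's estimate is the posterior mean, which is pulled from the noisy measurement toward the center of the prior. As a rough illustration (not the authors' implementation; the grid resolution, noise level, and interval range here are assumed for demonstration), a minimal sketch with Gaussian measurement noise and a uniform prior over the full interval range:

```python
import numpy as np

def bls_estimate(measurement, prior_lo, prior_hi, sigma, grid_n=1000):
    """Bayes least-squares (posterior-mean) estimate of a time interval,
    given a noisy measurement and a uniform prior on [prior_lo, prior_hi].
    All parameters here are illustrative, not taken from the paper."""
    t = np.linspace(prior_lo, prior_hi, grid_n)            # candidate true intervals (prior support)
    like = np.exp(-0.5 * ((measurement - t) / sigma) ** 2)  # Gaussian measurement likelihood
    post = like / like.sum()                                # uniform prior: posterior ∝ likelihood
    return float((t * post).sum())                          # posterior mean = BLS estimate

# With a supramodal prior spanning the whole range (here 0.6-1.2 s, an
# assumed example), short intervals are pulled up and long ones down:
short = bls_estimate(0.65, 0.6, 1.2, sigma=0.1)   # > 0.65 (overestimated)
long_ = bls_estimate(1.15, 0.6, 1.2, sigma=0.1)   # < 1.15 (underestimated)
```

In the paper's multisensory condition, the key comparison is whether `prior_lo`/`prior_hi` span only one modality's sub-distribution or the conjunction of both; the data favored the latter (supramodal) prior.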
Language | English
Content Type | Journal Article
Source URL | [http://ir.pku.edu.cn/handle/20.500.11897/469040]
Collection | School of Psychological and Cognitive Sciences
Recommended Citation (GB/T 7714) | Zhang, Huihui, Zhou, Xiaolin. Supramodal representation of temporal priors calibrates interval timing[J]. JOURNAL OF NEUROPHYSIOLOGY, 2017.
APA | Zhang, Huihui, & Zhou, Xiaolin. (2017). Supramodal representation of temporal priors calibrates interval timing. JOURNAL OF NEUROPHYSIOLOGY.
MLA | Zhang, Huihui, et al. "Supramodal representation of temporal priors calibrates interval timing". JOURNAL OF NEUROPHYSIOLOGY (2017).
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.