Serial Order in Behavior: Unifying Acoustic Meaning and Rhythm in Audition, Speech, and Music

If you have a question about this talk, please contact Sarah Hawkins.

Please note the change of venue to the English Faculty.

How do conscious percepts of auditory events arise, whether in recognizing discrete acoustic sources or auditory streams of speakers or music? What are the functional units and processing levels that control such conscious percepts? How are cognitive working memories designed to temporarily store sequences of acoustic items, and to support learning and stable memory of the unitized acoustic units, called list chunks, that organize conscious recognition? How does a hierarchy of processing stages generate a working memory representation of acoustic event sequences that is increasingly rate-invariant and frequency-normalized?

How do familiar invariant representations of sequence meaning interact with, and help to determine, the perceived rhythm with which a sequence is heard? How are new rhythms flexibly used to recall the same sequence, as when a song or speech utterance is produced with a different rhythm? In particular, how does the cortical stream for generating invariant sequences interact with the cortical stream for representing auditory scene analysis and its frequency-specific and rhythm-sensitive properties, as occurs during the perception of pitch and timbre? Why do all working memories, whether linguistic, spatial, or motor, share basic neural designs, and thus generate similar temporal order and error distribution properties? How does the brain integrate contextual information over many milliseconds to disambiguate noise-occluded acoustic signals? How are sound sequences heard in noise consciously perceived in the correct temporal order, even when noise-occluded sounds are disambiguated by contexts that may occur many milliseconds before or after each sound is presented?

These questions receive a unified answer in Adaptive Resonance Theory, or ART, which is currently the most advanced theory of how primate brains learn to attend, recognize, value, and predict a changing world. ART predicts that all conscious states are resonant states, that consciously heard acoustic sequences are represented by resonant waves, and that perceived silence is a temporal discontinuity in the rate at which such a resonant wave evolves through time. ART has begun to classify the resonances that underlie conscious experiences of seeing, hearing, knowing, and feeling, as part of its analysis of how brain processes of consciousness, learning, expectation, attention, resonance, and synchrony interact.
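As background to the abstract above, the following is a minimal, illustrative sketch of the classic ART-1 match/reset cycle for binary input patterns with fast (one-shot) learning. The class name, method names, and parameters (vigilance, alpha) are expository assumptions, not the notation of the talk; the laminar cortical and working-memory models discussed in the talk are far richer than this toy.

    import numpy as np

    class ART1:
        """Toy ART-1 network: unsupervised binary categories (illustrative sketch)."""

        def __init__(self, n_features, vigilance=0.75, alpha=1e-6):
            self.n = n_features
            self.rho = vigilance    # vigilance: minimum match required for resonance
            self.alpha = alpha      # choice parameter: biases toward specific templates
            self.templates = []     # one binary template per committed category

        def present(self, x):
            """Present one input; return the index of the resonating category."""
            x = np.asarray(x, dtype=bool)
            # Rank committed categories by the choice function
            # T_j = |x AND w_j| / (alpha + |w_j|).
            ranked = sorted(
                range(len(self.templates)),
                key=lambda j: np.sum(x & self.templates[j])
                              / (self.alpha + np.sum(self.templates[j])),
                reverse=True,
            )
            for j in ranked:
                # Vigilance test: does the chosen template match the input well enough?
                match = np.sum(x & self.templates[j]) / max(np.sum(x), 1)
                if match >= self.rho:
                    # Resonance: learn by intersecting the template with the input.
                    self.templates[j] = x & self.templates[j]
                    return j
                # Mismatch: reset this category and search the next-best one.
            # No category resonates: commit a new one coded by the input itself.
            self.templates.append(x.copy())
            return len(self.templates) - 1

    net = ART1(n_features=6, vigilance=0.6)
    for pattern in ([1, 1, 1, 0, 0, 0],
                    [1, 1, 0, 0, 0, 0],
                    [0, 0, 0, 1, 1, 1]):
        print(net.present(pattern))   # prints 0, 0, 1

In this run, the first two patterns share a category (the second prunes the first's template), while the third commits a new one. Raising the vigilance makes the match test stricter, so the network carves the input space into more, narrower categories.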

There will be a short wine reception after the talk. If you intend to come, please tell Sarah Hawkins (sh110@cam). Likewise, let her know if you wish to join us for dinner on Friday evening (all welcome), or to speak privately with Stephen Grossberg about your work.

This talk is part of The Centre for Music and Science (CMS) series.
