Bayesian learning of visual chunks by human observers
If you have a question about this talk, please contact Philip Sterne.

Efficient and versatile processing of any hierarchically structured information requires a learning mechanism that combines lower-level features into higher-level chunks. We investigated this chunking mechanism in humans with a visual pattern-learning paradigm. Based on Bayesian model comparison, we developed an ideal learner that extracts and stores only those chunks of information that are minimally sufficient to encode a set of visual scenes. Our ideal Bayesian chunk learner not only reproduced the results of a large set of previous empirical findings in the domain of human pattern learning, but also made a key prediction that we confirmed experimentally. In accordance with Bayesian learning but contrary to associative learning, human performance was well above chance when pair-wise statistics in the exemplars contained no relevant information. Thus, humans extract chunks from complex visual patterns by generating accurate yet economical representations and not by encoding the full correlational structure of the input.

This talk is part of the Inference Group series.
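To make the idea of Bayesian model comparison over chunks concrete, here is a minimal, self-contained sketch (not the speaker's actual model; all names and the Beta-Bernoulli setup are my own illustrative assumptions). It compares the marginal likelihood of a "chunk" hypothesis, in which two visual features A and B always appear together as one unit, against an "independent features" hypothesis, in which each feature has its own unknown occurrence rate, given a set of binary scene observations.

# Illustrative sketch only: Bayesian model comparison between a "chunk"
# hypothesis (A and B always co-occur) and an "independent features"
# hypothesis, on binary scene data. Function and variable names are assumed.
from math import lgamma

def log_beta_bernoulli(k, n, a=1.0, b=1.0):
    # Log marginal likelihood of k successes in n Bernoulli trials,
    # integrating out the unknown rate under a Beta(a, b) prior.
    return (lgamma(a + k) + lgamma(b + n - k) - lgamma(a + b + n)
            - (lgamma(a) + lgamma(b) - lgamma(a + b)))

def log_evidence_independent(scenes):
    # Model 1: features A and B occur independently, each with its own rate.
    n = len(scenes)
    ka = sum(a for a, _ in scenes)
    kb = sum(b for _, b in scenes)
    return log_beta_bernoulli(ka, n) + log_beta_bernoulli(kb, n)

def log_evidence_chunk(scenes):
    # Model 2: A and B form a single chunk, so they are either both
    # present or both absent in every scene.
    if any(a != b for a, b in scenes):
        return float("-inf")  # data contradict a deterministic chunk
    n = len(scenes)
    kc = sum(a for a, _ in scenes)
    return log_beta_bernoulli(kc, n)

# Scenes coded as (A present, B present); here the two features always co-occur.
scenes = [(1, 1), (0, 0), (1, 1), (1, 1), (0, 0), (1, 1)]
print("log evidence, chunk model:      ", log_evidence_chunk(scenes))
print("log evidence, independent model:", log_evidence_independent(scenes))

On perfectly correlated data the chunk model wins because it explains the same observations with one free parameter instead of two; this is the sense in which an ideal Bayesian learner prefers the minimally sufficient, economical chunk representation over encoding the full correlational structure.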