LCAP GitHub repository


The D-REX model v2 is designed to explore the computational mechanisms by which the brain extracts statistical regularities from dynamic sounds along multiple features. Using a Bayesian inference framework to perform sequential prediction in the presence of unknown changepoints, the model can probe how statistical properties are collected along multiple perceptual dimensions (e.g., pitch, timbre, spatial location). Perceptual parameters can be used to fit the model to individual behavior.
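
To make the inference concrete, below is a minimal MATLAB sketch of sequential prediction under unknown changepoints, in the spirit of Bayesian online changepoint detection (Adams & MacKay, 2007). It is not the D-REX code: the Gaussian observation model, the constant hazard rate, and all variable names are illustrative assumptions.

    % Minimal sketch, NOT the D-REX API: Gaussian sequence with one
    % changepoint, conjugate Normal prior on the mean, known variance.
    rng(1);
    x = [randn(1,100), 3 + randn(1,100)];    % toy feature sequence
    T = numel(x);
    hazard = 1/200;                          % prior changepoint probability
    mu0 = 0; kappa0 = 1; sigma2 = 1;

    logR = 0;                 % log posterior over run lengths
    mu = mu0; kappa = kappa0; % per-run sufficient statistics
    predMean = zeros(1,T);

    for t = 1:T
        % Model-averaged one-step prediction before observing x(t)
        w = exp(logR - max(logR)); w = w / sum(w);
        predMean(t) = w * mu';

        % Predictive log-density of x(t) under each run length
        predVar = sigma2 * (1 + 1./kappa);
        logpred = -0.5*log(2*pi*predVar) - 0.5*(x(t) - mu).^2 ./ predVar;

        % Run-length growth vs. changepoint reset
        logGrowth = logR + logpred + log(1 - hazard);
        logCP = log(sum(exp(logR + logpred))) + log(hazard);
        logR = [logCP, logGrowth];
        logR = logR - max(logR);
        logR = logR - log(sum(exp(logR)));   % normalize

        % Update per-run posterior over the mean
        mu = [mu0, (kappa.*mu + x(t)) ./ (kappa + 1)];
        kappa = [kappa0, kappa + 1];
    end

    plot(1:T, x, '.', 1:T, predMean, '-');
    legend('observations', 'model prediction');

The model-averaged prediction tracks the first segment's mean, then adapts after the changepoint as probability mass shifts to short run lengths.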

The download includes a README, the model code, and a helper function for displaying model output; all code is in MATLAB.


The D-REX model is designed to explore the computational mechanisms by which the brain extracts statistical regularities from dynamic sounds. Using a Bayesian inference framework to perform sequential prediction in the presence of unknown changepoints, the model can be used to test alternative statistics collected by the brain while listening to ongoing sounds. Perceptual parameters can be used to fit the model to individual behavior.
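
As a hedged illustration of the parameter-fitting idea, the MATLAB sketch below fits a single "memory" parameter to simulated yes/no change-detection responses by grid search over the response likelihood. This is one way such a fit could look, not the released fitting code: the sliding-window observer, the logistic response rule, and all names are assumptions for illustration.

    % Toy fit of a perceptual "memory" parameter m (how many past samples
    % the observer averages) to simulated listener responses. NOT D-REX code.
    rng(2);
    x = [randn(1,60), 2.5 + randn(1,60)];      % toy sound feature sequence
    resp = [zeros(1,60), rand(1,60) < 0.8];    % simulated change responses

    mGrid = 2:2:20;                            % candidate memory sizes
    c = 1.5;                                   % fixed response criterion
    LL = zeros(size(mGrid));
    for i = 1:numel(mGrid)
        m = mGrid(i);
        p = zeros(size(x));
        for t = 2:numel(x)
            win = x(max(1, t-m):t-1);          % sliding "memory" window
            s = abs(x(t) - mean(win));         % surprisal proxy
            p(t) = 1 ./ (1 + exp(-(s - c)));   % logistic response rule
        end
        p = min(max(p, 1e-6), 1-1e-6);         % avoid log(0)
        LL(i) = sum(resp .* log(p) + (1-resp) .* log(1-p));
    end
    [~, best] = max(LL);
    fprintf('Best-fitting memory parameter: %d samples\n', mGrid(best));

The same grid-search logic extends to other perceptual parameters, with each candidate value scored by the likelihood of the individual listener's responses.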

The download includes a README, the model code, and a helper function for displaying model output; all code is in MATLAB.

Published EEG and psychophysics data.

The stream segregation model leverages the multiplexed, non-linear representation of sounds along the auditory hierarchy and learns the local and global statistical structure that emerges naturally in complex, real-world sounds. The architecture has three key components: (1) a stochastic RBM layer that encodes the two-dimensional input spectrogram into localized spectro-temporal bases based on short-term feature analysis; (2) a dynamic aRBM that captures long-term temporal dependencies across spectro-temporal bases, characterizing the transformation of sound from fast-changing details to slower dynamics; and (3) a temporal coherence layer that mimics the Hebbian process of binding local and global details together, mediating the mapping from feature space to the formation of auditory objects.
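
The temporal coherence stage can be illustrated in isolation. The MATLAB sketch below is an assumption-laden toy, not the released model and not the RBM layers: it builds four channel envelopes, two co-modulated and two in anti-phase, and groups channels by the correlation of their slow envelopes over time, the basic computation behind coherence-based binding.

    % Toy temporal-coherence demo, NOT the released code: channels whose
    % slow envelopes co-vary over time are bound into one stream.
    rng(3);
    fs = 100;
    t = 0:1/fs:5-1/fs;                         % 5 s of envelope time
    envA = 0.5 + 0.5*sin(2*pi*2*t);            % stream A: 2 Hz modulation
    envB = 0.5 + 0.5*sin(2*pi*2*t + pi);       % stream B: anti-phase
    E = [envA; envA; envB; envB] + 0.05*randn(4, numel(t));

    C = corrcoef(E');                          % pairwise coherence matrix
    disp(C);                                   % high within, low across streams

    % Naive grouping: bind channels positively correlated with channel 1
    stream1 = find(C(1,:) > 0);
    fprintf('Channels bound with channel 1: %s\n', mat2str(stream1));

Channels 1 and 2 are grouped into one stream while the anti-phase channels 3 and 4 fall out of it, mirroring how coherent modulation over time binds features into a single auditory object.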

