Optimizing the motion-sensitive channel in a biologically plausible manner to compensate for its omission in current saliency models.
Many current saliency models are feature-based and do not consider motion or temporal change when computing a saliency map. This model is a purely bottom-up, object-based saliency model that incorporates temporal change when computing a dynamic visual saliency map for any video sequence. Current work consists of optimizing the motion-sensitive channel in a biologically plausible manner. Continuing work involves implementing the model on FPGA hardware.
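As a concrete illustration of the kind of computation a motion-sensitive channel performs, the sketch below builds a toy motion map from the rectified difference between consecutive grayscale frames. This is a hypothetical minimal example, not the model's actual channel; the real channel, its parameters, and its biologically plausible optimization are not specified in this summary.

```python
import numpy as np

def motion_channel(prev_frame, curr_frame, threshold=0.05):
    """Toy motion-sensitive channel (illustrative assumption, not the
    model described above): rectified temporal difference between two
    grayscale frames, thresholded to suppress small pixel noise."""
    diff = np.abs(curr_frame.astype(np.float64) - prev_frame.astype(np.float64))
    motion = np.where(diff > threshold, diff, 0.0)
    # Normalize to [0, 1] so the map could be combined with other feature channels.
    peak = motion.max()
    return motion / peak if peak > 0 else motion

# Example: a bright 2x2 patch shifts one pixel to the right between frames.
f0 = np.zeros((8, 8)); f0[3:5, 2:4] = 1.0
f1 = np.zeros((8, 8)); f1[3:5, 3:5] = 1.0
sal = motion_channel(f0, f1)  # nonzero only where the patch appeared or vanished
```

In a full pipeline, a map like `sal` would be one channel fused with static feature channels (e.g., intensity, color, orientation) to yield the dynamic saliency map.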