Najim Dehak's laboratory mission:
Researching new technologies for a better society
Our lab focuses on two verticals: (1) Speech & Language and (2) Artificial Intelligence in Medicine
In recent years, voice assistants such as Apple’s Siri on smartphones and smart speakers such as Amazon’s Alexa and Google Home have made speech a common way to communicate with machines around the world. This rapid development in human-machine interaction favors speech interfaces as a natural and easy way to communicate with machines and mobile devices. Our research focuses on extracting useful and varied kinds of information from human voices. The speech signal is complex and carries a tremendous amount of diverse information, including but not limited to: the linguistic message itself (the most important information, which humans use for everyday communication), the language spoken, speaker characteristics (e.g., identity, age, and gender), the speaker’s emotional state, and possible degree of intoxication. The last two characteristics, emotional state and degree of intoxication, can enable a broad spectrum of applications, including health-related ones. For example, a patient’s emotional state could be identified by a dialogue system capable of producing reports for a physician based on several interactions between the patient and the system.
In the area of medicine, our commitment is to use artificial intelligence and machine learning techniques to find biomarkers that reduce the diagnosis time of neurological diseases, predict the frailty of subjects, and optimize the decisions to be made for patients in critical care. Over the last decade, interest in the use of artificial intelligence in medicine has grown, especially since the arrival of new paradigms employing deep neural networks that can boost diagnosis and assessment accuracy in multiple scenarios. With these motivations, and to fulfill our commitment, our team combines expertise in biomedical engineering, human language technologies, signal processing, and machine learning, and collaborates with a multidisciplinary group at the Johns Hopkins Hospital to propose new diagnostic and prognostic tools that provide faster and more accurate diagnoses. Our approaches leverage information from different sources: speech, eye movement, and handwriting, in order to get a broader view of the motor and cognitive functioning of the body.
12/02/2021: Our lab, in collaboration with the University of Illinois, has been awarded a grant from the NSF Program on Fairness in Artificial Intelligence in Collaboration with Amazon (FAI). Our proposal's title is "FAI: A New Paradigm for the Evaluation and Training of Inclusive Automatic Speech Recognition".
12/01/2021: Our PhD student Saurabhchand Bhati will give an invited talk titled "Segmental Contrastive Predictive Coding for Unsupervised Acoustic Segmentation" at the ISCA SIGML Seminar Series on December 13, 2021. More info at: https://homepages.inf.ed.ac.uk/htang2/sigml/seminar/
10/31/2021: This week, we presented our work entitled "Vowel space area metrics in dysarthric speakers undergoing speech and singing therapy" at the Asilomar 2021 conference. Find our poster here.
07/02/2021: Our lab was awarded the Venture Discovery Fund (VDF) from the Richman Family Precision Medicine Center of Excellence in Alzheimer’s disease (RC PMCoE-AD). This is a collaborative project with Dr. Esther Oh (PI), Dr. Najim Dehak (Co-PI), Dr. Laureano Moro Velazquez (Co-I), and Dr. Quincy Samus (Co-I) (Department of Psychiatry and Behavioral Sciences). The project is titled "SynchroAD: New biometric signals for the detection and evaluation of Alzheimer’s disease using artificial intelligence."