Working for a better future
Najim Dehak received his PhD from the École de Technologie Supérieure (ÉTS) in Montreal in 2009. During his doctoral studies he worked with the Computer Research Institute of Montreal (CRIM) in Canada. He is best known as a leading developer of the i-vector representation for speaker recognition, which he first introduced during the 2008 summer workshop of the Center for Language and Speech Processing at Johns Hopkins University. The i-vector approach became the state of the art in speaker recognition and remains one of the most widely used speech representations across the speech community.
Dr. Dehak is currently a faculty member in the Department of Electrical & Computer Engineering at Johns Hopkins University. Prior to joining Johns Hopkins, he was a research scientist in the Spoken Language Systems Group at the MIT Computer Science and Artificial Intelligence Laboratory. His research interests center on machine learning approaches to speech processing, audio classification, and health applications. He is a senior member of the IEEE and a member of the IEEE Speech and Language Technical Committee.
Jesús Villalba received his MSc in Telecommunication Engineering (2004) and PhD in Biomedical Engineering (2014) from the University of Zaragoza, Spain. His PhD thesis focused on several topics related to speaker recognition in adverse environments. During his PhD studies, he interned at the Brno University of Technology (BUT) and collaborated with the company Agnitio. After graduating, he became a research engineer at Cirrus Logic International, working on robust speaker recognition. In October 2016, he joined the Johns Hopkins Center for Language and Speech Processing (CLSP) as a postdoctoral fellow, and in December 2019 he was appointed assistant research professor. His current research interests center on extracting information from speech, such as speaker identity, language, age, and emotion. He is also interested in speaker diarization and unsupervised learning for speech-related applications.
Laureano Moro-Velazquez collaborates with colleagues in the Departments of Neurology and Critical Care at the Johns Hopkins School of Medicine to develop new tools for diagnosing neurodegenerative diseases and for assessing frailty and resilience in the elderly. He also works with a team at CLSP studying new methods for automatic speech recognition in under-resourced languages.
In addition to performing research, Moro-Velazquez designed and teaches Machine Learning for Medical Applications, a course for both undergraduate and graduate students. He received the Johns Hopkins University Teaching as Research fellowship in 2020.
Moro-Velazquez received a mobility grant from the Spanish Ministry of Economy and Competitiveness in 2017 and serves as a reviewer for a variety of publications, including the IEEE/ACM Transactions on Audio, Speech, and Language Processing; the IEEE Journal of Selected Topics in Signal Processing; and the IEEE Transactions on Neural Systems and Rehabilitation Engineering.
Previously, Moro-Velazquez was a postdoctoral fellow at CLSP, mentored by Najim Dehak, associate professor in the Department of Electrical and Computer Engineering. Moro-Velazquez received his PhD with honors (cum laude) in Systems and Services Engineering for the Information Society (2018) and his master's degree in Telecommunications Engineering (2006) from the Universidad Politécnica de Madrid (Technical University of Madrid), where he also earned his undergraduate degree in Sound and Image Engineering (2003). Between receiving his master's and his PhD, Moro-Velazquez worked at Brüel & Kjaer from 2008 to 2013 as an acoustic engineer and head of training, as well as technical manager of a certified laboratory.
Piotr Zelasko is an assistant research scientist in the Center for Language and Speech Processing (CLSP) and an expert in automatic speech recognition (ASR) and spoken language understanding (SLU).
His current research focuses on applying multilingual and cross-lingual speech recognition systems to categorize the phonetic inventory of previously unknown languages, and on improving defenses against adversarial attacks on both speaker identification and automatic speech recognition systems. He is also addressing the question of how to structure a spontaneous conversation into high-level semantic units called dialog acts, and is working on Lhotse and k2, a next-generation software ecosystem for speech processing research.
Zelasko co-organized the Kaldi Community Forum, a virtual conference on future directions in automatic speech recognition, during the summer of 2020. A member of the research group of Najim Dehak, associate professor in the Department of Electrical and Computer Engineering, Zelasko also mentors the group's PhD and master's students.
Before joining Johns Hopkins, Zelasko worked as a machine learning consultant for Avaya (2017-2019) and as a software engineer for Techmo (2015-2017). He is also a member of the International Speech Communication Association (ISCA), the Institute of Electrical and Electronics Engineers (IEEE), and the IEEE Signal Processing Society (IEEE SPS).
Zelasko received his PhD (2019), master's (2014), and undergraduate (2013) degrees in acoustic engineering from the AGH University of Science and Technology in Kraków, Poland.