Hopkins course explores artificial intelligence and deep learning

June 11, 2019
Students present the Occlusion project, which can identify human shapes in visual images

Machines are now predicting stock market changes, detecting cancer, translating documents, and even composing symphonies—all thanks to an exciting new subset of artificial intelligence known as deep learning.

This spring, the Whiting School of Engineering introduced Hopkins undergraduate and graduate students to the field's basic concepts in the course Machine Learning: Deep Learning.

In deep learning, artificial neural networks—computer algorithms modeled after the human brain—learn to perform specific tasks by analyzing large amounts of training data. Deep learning is rapidly becoming a hallmark of many new technologies, such as Spotify’s recommended song feature or safety mechanisms in self-driving cars.
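To make that idea concrete, the sketch below shows a tiny neural network learning a task from labeled training data. It is an illustrative example only, not material from the course: the data is synthetic and the architecture is arbitrary.

```python
# Minimal sketch: a neural network adjusts its internal weights by repeatedly
# comparing its predictions against labeled training data (synthetic here).
import torch
from torch import nn

# Toy training set: 256 examples with 10 features each, labeled 0 or 1.
features = torch.randn(256, 10)
labels = (features.sum(dim=1) > 0).long()  # a simple, learnable rule

# A small feedforward network: two layers of learned weights.
model = nn.Sequential(
    nn.Linear(10, 16),
    nn.ReLU(),
    nn.Linear(16, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# Training loop: predict, measure the error, adjust the weights, repeat.
for epoch in range(100):
    optimizer.zero_grad()
    predictions = model(features)
    loss = loss_fn(predictions, labels)
    loss.backward()   # compute how each weight contributed to the error
    optimizer.step()  # nudge the weights to reduce that error

accuracy = (model(features).argmax(dim=1) == labels).float().mean()
print(f"training accuracy: {accuracy:.2f}")
```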

“Our students need to acquire a solid understanding of the underlying theory and gain hands-on experience with today’s tools, so they can push the boundaries of the field,” said course instructor Mathias Unberath, an assistant research professor of computer science and a member of the university’s Laboratory for Computational Sensing and Robotics and Malone Center for Engineering in Healthcare. “Proficiency in machine learning techniques like deep learning is highly sought after in the job market, both for academic and industry positions, so I hope this course contributes to ensuring the success of our graduates.”

Hopkins engineers and computer scientists are now using deep learning to tackle problems once thought to be too complex for computers to solve.

For example, a team of Hopkins students has developed an algorithm that detects humans in videos and images even when the person is partially occluded. Created by students He Crane Chen, Pengfei Guo, and Yunmo Chen, the algorithm, called the Occlusion Project, could be used to build more sophisticated video surveillance systems.

Last month, the Deep Learning course culminated with a poster and demo session during which students presented 16 projects on deep learning applications.

For their project, Emily Cheng, a computer science major in the Class of 2019; Alexander Glavin, a rising senior in biomedical engineering; and Disha Sarawgi, a robotics graduate student, created an image segmentation program that can detect lung metastasis from medical scans.

First, their network analyzes a large set of medical scans and segments out those that contain tumors. Then, the model isolates the tumor data and identifies whether the patient is at risk for lung metastasis.
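The sketch below illustrates how such a two-stage pipeline can be wired together: a segmentation network isolates likely tumor regions, and a second classifier predicts risk from the segmented region. The architectures, input sizes, threshold, and names are illustrative assumptions, not the students' actual implementation.

```python
# Schematic two-stage pipeline: segment tumor regions, then classify risk.
# All model details below are illustrative assumptions.
import torch
from torch import nn

class SegmentationNet(nn.Module):
    """Stage 1: predict a per-pixel tumor probability mask for a scan."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, scan):
        return self.net(scan)  # (batch, 1, H, W) mask with values in [0, 1]

class RiskClassifier(nn.Module):
    """Stage 2: classify metastasis risk from the tumor-masked scan."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, 2)  # two classes: at risk / not at risk

    def forward(self, masked_scan):
        x = self.features(masked_scan).flatten(1)
        return self.head(x)

# Running the stages on a dummy single-channel 128x128 scan.
scan = torch.randn(1, 1, 128, 128)
segmenter, classifier = SegmentationNet(), RiskClassifier()

mask = segmenter(scan)                    # isolate likely tumor pixels
tumor_only = scan * (mask > 0.5).float()  # keep only the segmented region
risk_logits = classifier(tumor_only)      # predict metastasis risk
print(risk_logits.argmax(dim=1))          # 0 = low risk, 1 = at risk (illustrative)
```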

“Our biggest challenge was the classification. The model could easily identify which scans had tumors, but it was much harder to teach the model to learn which scans were at risk for lung metastasis,” said Cheng. “With some more work, we think our findings could prove helpful for developing an automated system for diagnosing lung metastasis.”

Read the full story on The Hub >>
