Author: Salena Fitzgerald

To help musicians, producers, and even listeners better understand the complex nature of FM synthesis, an applied mathematics and statistics sophomore at the Whiting School of Engineering has developed a set of visualization tools, one of them interactive, that reveal what makes its uniquely rich tones so compelling.

Frequency modulation (FM) synthesis creates sound by combining sine waves—those smooth, pure tones used when people take a hearing test—with one wave modulating the frequency of another. Musicians bring together those waves to form songs.

“The formula looks simple—one sine function inside another—but the sounds it makes are surprisingly rich and dynamic,” said Matt Wang. 
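That nested-sine formula can be sketched in a few lines of Python. (A minimal illustration; the function name, parameters, and defaults here are mine, not taken from Wang's code.)

```python
import numpy as np

def fm_tone(carrier_hz, mod_hz, index, duration=1.0, sr=44100):
    """One sine function inside another: the carrier's phase is
    pushed back and forth by the modulator, scaled by `index`."""
    t = np.arange(int(sr * duration)) / sr
    return np.sin(2 * np.pi * carrier_hz * t
                  + index * np.sin(2 * np.pi * mod_hz * t))

# With index 0 the modulator vanishes and you get a plain sine;
# raising the index turns the same formula into a much richer tone.
pure = fm_tone(220, 110, 0.0)
rich = fm_tone(220, 110, 5.0)
```

Despite the one-line formula, the `rich` signal contains a whole family of extra frequencies beyond 220 Hz, which is what the quote above is pointing at.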

He shared his findings on April 29 at Design Day, the Whiting School of Engineering’s annual event showcasing students’ solutions to real-world challenges.

To better understand and visualize how FM synthesis works, Wang turned to spectrograms: visual representations of sound frequencies over time.

“If a sound is simple, you see one bright stripe,” Wang explained. “But FM synthesis showed shifting, layered textures, which corresponded directly to the perceived changes in the audio: vibration and fluctuating tones.”
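What a spectrogram computes can be sketched with a minimal short-time Fourier transform in NumPy. (A bare-bones sketch, not the plotting code behind Wang's figures; the frame and hop sizes are arbitrary choices.)

```python
import numpy as np

def stft_spectrogram(signal, sr, frame=256, hop=128):
    """Slice the signal into overlapping, Hann-windowed frames and take
    each frame's magnitude spectrum; rows are frequencies, columns time."""
    window = np.hanning(frame)
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    freqs = np.fft.rfftfreq(frame, 1 / sr)
    return freqs, np.abs(np.fft.rfft(frames, axis=1)).T

sr = 8000
t = np.arange(sr) / sr
simple = np.sin(2 * np.pi * 440 * t)                                  # one bright stripe
fm = np.sin(2 * np.pi * 440 * t + 2.0 * np.sin(2 * np.pi * 220 * t))  # layered texture

freqs, spec = stft_spectrogram(simple, sr)
# For the pure tone, the same ~440 Hz bin dominates every time slice;
# running the same function on `fm` spreads energy across many bins.
```

Plotting `spec` as an image is exactly the "one bright stripe" versus "layered textures" contrast Wang describes.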

To help others see what they hear, Wang created a series of audio-aligned visualizations that animate how FM frequencies evolve over time. One video shows how key frequency “peaks” shift and interact, offering an intuitive glimpse into the math behind music.

“You can drag through each frame and watch how the frequencies rise and fall,” he said. “It’s like watching music breathe.”

Starting with Dexed, a software plugin that emulates the sounds produced by Yamaha’s legendary DX7 synthesizer, Wang dove into the mechanics of FM synthesis by creating and manipulating sounds himself. Dexed lets users adjust six sound generators, called “operators,” each with its own frequency, amplitude, and shape. The way these operators are connected shapes the final sound.  

“One cool thing I noticed,” Wang said, “was that pairing a 100 Hz frequency with a slightly different one, like 110 Hz, created richer, more resonant sounds than using something mathematically neat like a 200 Hz frequency. This mismatch seems to add harmonic tension and complexity, which could be part of what makes these FM tones feel so expressive.” 

This observation, though simple, points to the nuanced way our brains perceive timbre and tone. The math might say one thing, but the ear responds to something deeper—less predictable, and more intriguing, he said. 

Beyond visualization, Wang explored reconstructing complex FM sounds with simpler mathematical models. By identifying key frequency “peaks” and tracking their amplitude and phase over time, he recreated a streamlined version of the original signal using only sine waves. This approach strips the sound down to its mathematical core, helping to reveal what makes FM synthesis so expressive.  

“It’s like breaking a symphony into a few clear notes and then building it back up,” he said. “I wanted to show that you can recreate something intricate with just a few essential parts—if you understand the math behind it.” 

With smoothing techniques like phase tracking (preserving the position within each waveform cycle) and amplitude normalization (adjusting volume levels to a consistent range), the reconstructed sounds come surprisingly close to the originals.  
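The reconstruction idea reduces to a single-frame sketch: pick the strongest spectral peaks, record each one's frequency, amplitude, and phase, rebuild the signal from plain sinusoids, and normalize the result. (Wang tracks peaks frame by frame over time; this toy version takes one FFT over the whole signal, and the function name and parameters are mine.)

```python
import numpy as np

def reconstruct_from_peaks(signal, sr, n_peaks=5):
    """Keep only the strongest spectral peaks (frequency, amplitude,
    phase) and rebuild the signal as a sum of sinusoids, normalized
    to a consistent amplitude range."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    top = np.argsort(np.abs(spectrum))[-n_peaks:]    # strongest bins
    t = np.arange(len(signal)) / sr
    out = np.zeros_like(t)
    for k in top:
        amp = 2 * np.abs(spectrum[k]) / len(signal)  # bin magnitude -> amplitude
        out += amp * np.cos(2 * np.pi * freqs[k] * t + np.angle(spectrum[k]))
    return out / max(np.abs(out).max(), 1e-12)       # amplitude normalization

sr = 8000
t = np.arange(sr) / sr
# A stand-in "complex" signal: two partials instead of a full FM patch.
original = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 600 * t)
approx = reconstruct_from_peaks(original, sr, n_peaks=4)
```

Keeping the phase of each peak (rather than just its magnitude) is what lets the rebuilt sines line up with the original waveform instead of merely matching its spectrum.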

“It’s not perfect,” Wang admitted, “but the goal is to make FM synthesis more understandable and explainable through math while also opening doors to creating more intuitive sound design tools for musicians and producers.”