Compiled, with no installers or audio drivers to worry about. Just drag the file into your DAW's plugin folder and go. Includes VST, AU, and AAX formats for Mac. Windows coming soon!
Pendulum Waveshaper Demo
Description:

 

  • This is an audio waveshaper distortion plugin that mutates an input sound based on the motion of a double pendulum. The user controls the length and speed of the pendulum in real time.
  • The waveshaping algorithm creates two peak harmonics by making two copies of the input signal, squaring one and cubing the other. The motion of the pendulum defines how much of each harmonic you hear. 
  • As the pendulum moves, we calculate the distance from its tip to two points on the plane, called harmonic1 and harmonic2. In the visualizer, you can see one point grow as the other shrinks, reflecting both the changing distances and the changing volume of each harmonic.
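  • Why squaring and cubing? If the input is a pure sine wave, the squared copy contains energy at twice the input frequency (sin^2(x) = 0.5 - 0.5*cos(2x)) and the cubed copy contains energy at three times the input frequency plus some of the original (sin^3(x) = 0.75*sin(x) - 0.25*sin(3x)), which is what gives you two distinct harmonics to blend between.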
Process:

 

This plugin was written in C++ using the RackAFX IDE with Visual Studio. I focused on the audio processing algorithm first, developing a waveshaper that creates two distinct harmonics in any input signal. 

 

I prototyped the double pendulum visualizer in Processing, then moved it into JavaScript with P5.js so that I could add user interface controls and other JavaScript tricks later. This let me focus on implementing the motion algorithms and tuning my pendulum length and speed controls without complicating things with audio.

Code Sample:

 

This is the heart of the algorithm, which is called every time a new audio sample comes in.

 

float yL = wet * (
		pow(xL, 3.0) * (distance1 / attenuatorAmount) + 
		pow(xL, 2.0) * (distance2 / attenuatorAmount)
	) + dry * xL;

yL = yL / (abs(yL) + 1);

 

Variables:

 

  • xL
    Current input value from left channel. The code is repeated for the right channel with a variable called xR. 
  • wet, dry
    Two sliders on the UI that balance how much of the dry, unmodified signal you hear against how much of the waveshaped, wet signal.
  • distance1, distance2
    The distance from the tip of the double pendulum to two points representing the volume of the two harmonics created by the two exponents. Calculated based on pendulum speed.
  • attenuatorAmount
    A constant value that translates the pixel distances stored in distance1 and distance2 into an audio-friendly range for mixing harmonic 1 with harmonic 2.
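
If it helps to see the pieces in one place, here is a rough, self-contained sketch of the same per-sample math as a standalone function. In the real plugin these values live as member variables and the work happens inside the RackAFX processing callback, so processSample and its signature are purely for illustration:

#include <cmath>

// A sketch of one sample of waveshaping: distance1/distance2 set how loud the
// cubed and squared harmonics are, wet/dry blend the shaped and original signal,
// and the final division is a soft clip that keeps the result between -1 and 1.
float processSample(float x, float wet, float dry,
                    float distance1, float distance2,
                    float attenuatorAmount)
{
	float shaped = std::pow(x, 3.0f) * (distance1 / attenuatorAmount) +
	               std::pow(x, 2.0f) * (distance2 / attenuatorAmount);

	float y = wet * shaped + dry * x;

	return y / (std::fabs(y) + 1.0f);
}

The plugin runs this math once per sample for the left channel (xL) and once for the right (xR).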

 

The input signal is cubed and then multiplied by the distance from the pendulum tip to the harmonic1 point. The smaller the distance, the quieter that harmonic is. 

pow(xL, 3.0) * (distance1 / attenuatorAmount)

 

Add in the input signal squared, scaled by the distance from the pendulum tip to the harmonic2 point. 

+ pow(xL, 2.0) * (distance2 / attenuatorAmount)

 

The cubed and squared signals are multiplied by the wet amount. Then we add in some of the dry signal. 

wet * (
	pow(xL, 3.0) * (distance1 / attenuatorAmount) + 
	pow(xL, 2.0) * (distance2 / attenuatorAmount)
) + dry * xL;
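
For example, pulling wet down to 0 and dry up to 1 removes the pendulum-controlled harmonics entirely, leaving only the original signal (still run through the soft clip below), while the opposite setting gives you nothing but the waveshaped sound.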

 

To prevent clipping, we scale yL to a value between -1 and 1. Then yL goes to the output buffer.

yL = yL / (abs(yL) + 1);
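
For example, if the waveshaped sum comes out at 3.0, the output is 3.0 / (3.0 + 1.0) = 0.75, and -3.0 comes out as -0.75, so the result stays inside the -1 to 1 range no matter how hot the waveshaped signal gets.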

 

Challenges:

 

Once the audio and visual halves were both working separately, putting them together was an excellent challenge. My favorite bug: the audio seemed to be working, but the pendulum display was jumping around randomly, even though the JavaScript version used the same math and worked fine.

 

Solution:

 

After a long staring contest, and a lot of mashing the 'slow down' button, I figured out that the motion was not random - it was just far too fast. Then I remembered: audio processing has to happen much faster than visual processing. Our eyes can make do with far fewer frames per second than our ears, so it was a reasonable assumption that the visuals were not being redrawn 44,100 times per second by default, especially since drawing is computationally expensive.
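
To put rough numbers on it: audio runs at 44,100 samples per second, while a display typically refreshes somewhere on the order of 30 to 60 frames per second, so every drawn frame was skipping hundreds of intermediate pendulum positions - more than enough to make perfectly smooth motion look like random jumping.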

 

Instead of updating the physical model of the pendulum on every new audio frame, I changed the code to update the physical model whenever the visuals are redrawn:

 

case GUI_TIMER_PING:
{
	if(m_pWaveFormView)
	{
		getAcceleration1();
		getAcceleration2();
		getCoordinates();
		getDistances();
		updateVelocityAndAccel();
		m_pWaveFormView->setCoordinates(x1, y1, x2, y2, harmonic1x, harmonic1y, harmonic2x, harmonic2y, distance1, distance2);
		m_pWaveFormView->invalid();
	}
	return NULL;
}

 

We calculate the acceleration of pendulum 1 and 2, we get the coordinates of the tip, we calculate the distances from that tip to each harmonic point, we update the velocity and acceleration for the next iteration, and then we draw the pendulum in the WaveFormView object.
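
The coordinate and distance steps are the most straightforward part of that chain. The real implementations aren't shown here, but getCoordinates() and getDistances() boil down to standard double-pendulum geometry, something along these lines (angle1, angle2, length1, and length2 are stand-ins for whatever the actual member variables are named, and sin, cos, sqrt, and pow come from <cmath>):

void getCoordinates()
{
	// Tip of the first arm, hanging from the pivot at the origin
	x1 = length1 * sin(angle1);
	y1 = length1 * cos(angle1);

	// The second arm hangs from the tip of the first
	x2 = x1 + length2 * sin(angle2);
	y2 = y1 + length2 * cos(angle2);
}

void getDistances()
{
	// Straight-line distance from the pendulum tip (x2, y2) to each harmonic point
	distance1 = sqrt(pow(x2 - harmonic1x, 2.0) + pow(y2 - harmonic1y, 2.0));
	distance2 = sqrt(pow(x2 - harmonic2x, 2.0) + pow(y2 - harmonic2y, 2.0));
}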

 

The audio is using the same physical model values as the visualizer; it's just polling them more often. This also added the unintended side effect (read: completely planned feature) of making the plugin sound much smoother, because the pendulum values were no longer being recalculated at audio rate.

 

This was a memorable bug because I had to think about it for a while, but then all I had to do was move those function calls over to the visual callback, and the difference was startling.

Technologies Used:
C++, RackAFX, Visual Studio, Processing, P5.js, Ableton Live