KJ Ha

Listening Machines Final

Demo video

Music: "River Flows In You" by Yiruma



From the very beginning, I had a clear idea of the visual form I wanted: a sound visualizer for meditative music that can calm and soothe people.


First attempt:

Although I was satisfied with the visuals, I felt the mechanism behind it was far too simple: only the volume of the input sound is processed, controlling the size of the shapes.
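For reference, a minimal sketch of that first mechanism, assuming a plain p5.sound mic input (the names here are illustrative, not the actual project code): the mic level alone drives the size of one ellipse.

let mic;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn(); // microphone input from p5.sound
  mic.start();
}

function draw() {
  background(230);
  let volume = mic.getLevel(); // amplitude between 0.0 and 1.0
  let size = map(volume, 0, 1, 10, width); // louder input -> bigger shape
  ellipse(width / 2, height / 2, size);
}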



Second attempt:

Here, the frequency of the input sound affects the x position of the particles. This time, though, the result wasn't satisfying in terms of the visual elements.
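A rough sketch of what that mapping could look like, assuming the FFT-based approach that survives as a commented-out line later in the code (spectrum[0] driving x):

let mic, fft;

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  fft = new p5.FFT(); // frequency analyzer from p5.sound
  fft.setInput(mic);
}

function draw() {
  background(230);
  let spectrum = fft.analyze(); // 1024 amplitude values in the 0-255 range
  let x = map(spectrum[0], 0, 255, 0, width); // lowest bin's energy -> x position
  ellipse(x, height / 2, 20);
}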


While I was searching for inspiration, I found this example on the p5.js website.

I decided to combine the fading effect from the example with the sound elements.



Third attempt:

Both the frequency and the volume of the input sound now feed into the outcome. The user can also change the size of the ellipses with the slider in the top-left corner.
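In miniature, the slider part works like this (a simplified sketch, not the actual project code):

let sizeSlider;

function setup() {
  createCanvas(400, 400);
  sizeSlider = createSlider(5, 100, 30); // min, max, default size
  sizeSlider.position(10, 10); // top-left corner
}

function draw() {
  background(230);
  ellipse(width / 2, height / 2, sizeSlider.value()); // slider value sets the diameter
}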



Fourth (Final) attempt:




Color Modes

There are five different color modes for users to choose from.


cloudColors = [
    color(0, 0, 0), // black
    color(60, 78, 96), // blue
    color(255, 194, 0), // yellow
    color(213, 8, 66), // red
    color(81, 137, 82) // green
];

---

let cloudColor = cloudColors[sliders.color.value()]; // pick the palette entry from the color slider
cloudColor.setAlpha(this.lifespan / 2); // fade out as the particle's lifespan shrinks
fill(cloudColor);

I've used the setAlpha() function to achieve a natural fading effect, letting the transparency increase gradually as each particle's lifespan runs out.
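Put together, the fading mechanism boils down to something like this minimal particle class (simplified; the real sketch tracks more state):

class Particle {
  constructor(x, y) {
    this.x = x;
    this.y = y;
    this.lifespan = 255; // fully opaque at birth
  }

  update() {
    this.lifespan -= 2; // fade a little every frame
  }

  show(baseColor) {
    // copy the palette color so setAlpha doesn't mutate the shared entry
    let c = color(red(baseColor), green(baseColor), blue(baseColor));
    c.setAlpha(this.lifespan / 2); // lower lifespan -> more transparent
    noStroke();
    fill(c);
    ellipse(this.x, this.y, 20);
  }

  isDead() {
    return this.lifespan <= 0;
  }
}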



Screenshots of the five color modes: Black, Blue, Yellow, Red, Green



Volume and Frequency


Two variables, xPos and yPos, are in charge of processing the sound information.

        
// URL of the CREPE pitch detection model loaded by ml5.pitchDetection
const model_url = 'https://cdn.jsdelivr.net/gh/ml5js/ml5-data-and-models/models/pitch-detection/crepe/';
---
let volume = mic.getLevel(); // current amplitude of the mic input, 0.0-1.0
checkVolumeThresholds(volume);
let mappedVolume = map(volume, minVolume, maxVolume, 0.0, 1.0);

// xPos = map(spectrum[0], 0, 255, 0, width); // earlier FFT-based mapping
xPos = map(freq.toFixed(2), 0, 850, 20, width - 20); // detected pitch -> x position
yPos = map(mappedVolume, 0.0, 1.0, 0, height); // volume -> particle size (see below)

---
function listening() {
  console.log('listening');
  pitch = ml5.pitchDetection(
    model_url,
    audioContext,
    mic.stream,
    modelLoaded
  );
}

function modelLoaded() {
  console.log('model loaded');
  pitch.getPitch(gotPitch); // start the detection loop
}

function gotPitch(error, frequency) {
  if (error) {
    console.error(error);
  } else {
    if (frequency) {
      freq = frequency; // keep the last confident estimate
    }
    pitch.getPitch(gotPitch); // keep polling for new pitch values
  }
}
        

To process the frequency values of the input sound, I first tried the FFT.analyze() function; however, at least for this particular sketch, I found that the pitch detection model gave a better result.


Right now, 850 is set as the maximum frequency that can be processed, but for higher-pitched songs a bigger number would work better. Adding an additional slider to manually set the maximum frequency might solve this issue.
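Sketching that fix, a hypothetical maxFreq slider (it doesn't exist in the current code) could replace the hard-coded 850:

let maxFreqSlider;

function setup() {
  // min 400 Hz, max 2000 Hz, default 850 Hz, step 50 -- the range is a guess
  maxFreqSlider = createSlider(400, 2000, 850, 50);
}

// in draw(), the mapping would then become:
// xPos = map(freq, 0, maxFreqSlider.value(), 20, width - 20);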


For the volume, I named the variable yPos for convenience, but what it really does is control the size of the particles.



Speed




let sliderConfigs = [
  {
    name: "speed",
    min: 100,
    max: 1000,
    current: 200,
    step: 100,
  },
  // ... (the other sliders follow the same shape)
];

---

// Schedule the next set of particles
next = millis() + sliders.speed.value();

The "speed" slider determines when the next set of particles will appear. If the faster the tempo of the song, the smaller the number is recommended.



