
Synth Parameter Learning ML (Python, PyTorch)

  • Writer: Niccolo Abate
  • Nov 11, 2023
  • 1 min read

Intro:

This code sample covers an ML exercise and a very early, rough prototype of an audio learning tool for novel sound creation, preset generation, and software evaluation. Specifically, I will show the basic synth implementation and the ML model creation and training.


The goal of the project is to train a network to learn the relationship between the synth's output audio and the parameters that produced it, so that the model can fit synth parameters to arbitrary input audio samples. The final vision is a tool that takes an audio sample and returns the parameters giving the best-fit approximation of that sample, within the constraints of the synth it was trained on.


The full notebook can be viewed on Google Colab to see the complete training environment, and the production notebook can be run to see the prototype in action.


Code Overview:

The code shown below covers just the core components, omitting much of the miscellaneous data handling (synth parameter handling, dataset generation, etc.). Specifically, it shows the simple FM synth implementation, including high- and low-frequency oscillators and simple ADS envelopes, as well as the ML model definition and training.


The rest of the code structure is viewable in the full notebook.


Code:


Simple FM synth, vectorized NumPy implementation:
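As a rough illustration of the approach, here is a minimal sketch of a two-operator FM voice with a simple piecewise-linear ADS envelope, vectorized in NumPy. The function names (fm_synth, ads_envelope), the fixed 44.1 kHz sample rate, and the example settings are assumptions for illustration, not the exact code from the notebook.

import numpy as np

SAMPLE_RATE = 44100  # assumed sample rate

def ads_envelope(n_samples, attack, decay, sustain, sample_rate=SAMPLE_RATE):
    # Piecewise-linear attack / decay / sustain envelope, built in one
    # vectorized np.interp call. Assumes attack + decay < total duration.
    t = np.arange(n_samples) / sample_rate
    breakpoints = [0.0, attack, attack + decay, t[-1]]
    levels = [0.0, 1.0, sustain, sustain]
    return np.interp(t, breakpoints, levels)

def fm_synth(carrier_freq, mod_freq, mod_index, env_params, duration=1.0,
             sample_rate=SAMPLE_RATE):
    # Basic two-operator FM: a (low- or high-frequency) modulator oscillator
    # modulates the phase of the carrier, and the result is shaped by the envelope.
    n_samples = int(duration * sample_rate)
    t = np.arange(n_samples) / sample_rate
    modulator = np.sin(2.0 * np.pi * mod_freq * t)
    carrier = np.sin(2.0 * np.pi * carrier_freq * t + mod_index * modulator)
    return carrier * ads_envelope(n_samples, *env_params, sample_rate=sample_rate)

# Example: 440 Hz carrier, 110 Hz modulator, modulation index 2.0,
# with a 10 ms attack, 200 ms decay, and a 0.7 sustain level.
audio = fm_synth(440.0, 110.0, 2.0, (0.01, 0.2, 0.7))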


PyTorch model creation and training:
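As an illustrative sketch rather than the notebook's exact architecture, a small PyTorch regression model and training loop for mapping rendered audio to normalized synth parameters could look like the following. The class name SynthParamNet, the MLP layout, and the hyperparameters are assumptions; the loader is assumed to yield (audio, parameters) pairs generated by the synth.

import torch
import torch.nn as nn

class SynthParamNet(nn.Module):
    # Simple MLP mapping a fixed-length audio frame to a vector of synth parameters.
    def __init__(self, n_samples, n_params, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_samples, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_params),
            nn.Sigmoid(),  # parameters assumed normalized to [0, 1]
        )

    def forward(self, x):
        return self.net(x)

def train(model, loader, epochs=20, lr=1e-3, device="cpu"):
    # Standard supervised regression loop: predict parameters from audio and
    # compare against the ground-truth parameters used to render that audio.
    model.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for epoch in range(epochs):
        total = 0.0
        for audio, params in loader:
            audio, params = audio.to(device), params.to(device)
            optimizer.zero_grad()
            loss = criterion(model(audio), params)
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"epoch {epoch + 1}: mean loss {total / len(loader):.6f}")

# At inference time the idea is to pass an arbitrary audio sample through the
# trained model and render the predicted parameters with the synth for comparison,
# e.g. params = model(torch.from_numpy(audio).float().unsqueeze(0))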

