Thanks to everyone who tuned in to our virtual meetup yesterday. We’ve posted the individual videos for each talk below.
Themes discussed included creating an audio plug-in with MATLAB, getting a job as an audio programmer, and making your own audio AI classifier. Enjoy!
Gabriele Bunkheila (Senior Product Manager, MathWorks)
“Creating Audio Plugins with MATLAB”
Have you ever wanted to write your first audio plugin but found C++ too challenging to get started? Are you passionate about audio signal processing but don’t yet have deep enough software engineering skills? Do you use MATLAB to code your audio algorithms and would love to turn them into VST, AU, or other types of audio plugins?
In this short session, you will learn how to write your first VST plugin using only MATLAB code. We will cover all the basic ideas required to create a plugin, including structuring code for real-time efficiency and defining interactive interfaces for parameter tuning. We’ll make use of practical coding examples – prior programming experience will be beneficial but is not required.
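As a taste of the approach covered in the talk, here is a minimal sketch of what a MATLAB plugin class can look like, assuming MATLAB’s Audio Toolbox: a class inherits from `audioPlugin`, declares a tunable parameter via `audioPluginInterface`/`audioPluginParameter`, and implements a `process` method that runs on each audio buffer. The class name `SimpleGain` and the parameter range are illustrative, not from the talk.

```matlab
% Illustrative sketch (requires MATLAB Audio Toolbox).
% A gain plugin: one tunable parameter, one frame-based process method.
classdef SimpleGain < audioPlugin
    properties
        Gain = 1;  % tunable parameter, exposed in the plugin UI
    end
    properties (Constant)
        % Declares the plugin's parameter interface: a linear 0–2 gain knob
        PluginInterface = audioPluginInterface( ...
            audioPluginParameter('Gain', 'Mapping', {'lin', 0, 2}));
    end
    methods
        function out = process(plugin, in)
            % Called once per audio frame; must be real-time safe
            out = in * plugin.Gain;
        end
    end
end
```

From there, `validateAudioPlugin SimpleGain` checks the class for real-time compliance, and `generateAudioPlugin SimpleGain` compiles it into a VST/AU binary — no C++ required.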
If you are a student, make sure you check out the MATLAB Plugin AES Student Competition, now in its third edition in 2020. More information at aes.org/students/awards/mpsc/
Spencer Rudnick (they/them) (Software Engineer, Ableton)
“How to Get a Job as an Audio Programmer”
If you’ve ever thought about what it takes to become a professional music software developer, but you aren’t sure how to get there, then this talk is for you.
Spencer Rudnick transitioned from web development to music software development, finding their dream job with the help of Twitter and the web audio community. Along the way, they learned C++, dipped their toes into the world of DSP, and contributed to open-source web audio tools.
Scott Hawley (Professor of Physics, Belmont University)
“Vibrary: Train Your Own AI Audio Classifier”
Producers and composers typically have vast audio sample libraries, purchased from different vendors who supply different metadata. Finding the sounds you want can be tricky. Some apps employ pre-trained machine learning models to help, but they may not support the search criteria users actually want.
I’ll be talking about Vibrary, an open-source, JUCE-based consumer product that helps music producers and composers *train their own* machine learning models to find the audio clips and samples they want on their hard drives. It uses a client-server model: the GUI client app, developed with Art+Logic, Inc., communicates with a GPU-powered ’training server’ in the cloud that trains a neural network to recognize the user’s intended categories of audio tags.
Vibrary can also serve as a GUI front end for other machine learning models created by other researchers and companies, who may swap in their own models on the training server. I’ll show a demo of this, offer ways to try out this work-in-progress project, and invite developers to submit their own improvements to the project!
About the Audio Programmer Meetups
We host monthly presentations from those looking to share their discoveries in music tech and software development. Some example topics include:
Exploring cutting edge technologies for audio development
Best practices for real-time programming
Music information retrieval
If you are interested in presenting a talk or demo, please submit a proposal at https://theaudioprogrammer.com/submit