Sensory Percussion Tools for Max/MSP

Not sure how many people here use Max/MSP or Max for Live stuff, but I’m posting this here for now as a teaser, since I’m still in the development process. I’m basically building a set of abstractions, tools, and patches for using Sensory Percussion pickups natively in Max (without using the SP software at all).

Here’s a little teaser video showing some of the bits:

I’ve had a one-pickup set for a few years now but never really implemented it in a meaningful way, as I found the SP software too limiting (for what I wanted to do), and sending MIDI over from the software to Max was also too limited in terms of resolution. The learning and matching were great, as was the onset detection (press rolls!), but it wasn’t worth the overhead.

Well more recently I got around to just building a bunch of what I want in Max using some machine learning tools as part of a research project I’ve been a part of for a couple of years (FluCoMa).

The video only shows a couple of the bits I’m working on at the moment, but I wanted to put some feelers out there in case others are interested in this kind of thing.

Things that are currently planned:

  • super low-latency onset detection (the SP software is about 8-12ms slower, mainly because it has to do all the machine learning funny business)
  • microphone correction via convolution (so you can use the “audio” from the Sensory Percussion pickup without it sounding terrible)
  • audio descriptor analysis (this one is huge, as you can map things like “brightness” (spectral centroid) and “noisiness” (spectral flatness) to parameters, rather than using trained zones that you morph between, which break if you put stuff on/off the drum; see the sketch after this list)
  • training and matching (like the normal Sensory Percussion software)
  • a corpus-based sampler where you can load a huge folder of samples, and then “navigate” via audio descriptors by playing your drum(s) without needing to map specific samples to specific regions
  • high-resolution signal-rate control for use with Eurorack modules like the ES-8
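For the descriptor analysis point above, here’s a rough offline Python sketch of what those analyses actually compute. This is just to illustrate the math; the window size is a placeholder, and in the actual patches this happens per-onset in Max (via FluCoMa objects) at much lower latency:

```python
import numpy as np

def spectral_descriptors(frame, sr=44100):
    """Loudness, brightness, and noisiness of one analysis frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)

    loudness = 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)  # RMS in dB
    centroid = np.sum(freqs * mag) / (np.sum(mag) + 1e-12)  # "brightness", in Hz
    flatness = np.exp(np.mean(np.log(mag + 1e-12))) / (np.mean(mag) + 1e-12)  # "noisiness", 0-1
    return loudness, centroid, flatness
```

Each hit gets reduced to a handful of continuous numbers like these, which you can then map to whatever you want.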

I’ve got a bunch more utility-type things and other little bits of code to support stuff as well.

Some of the bits of code will be “ready to use”, including some Max for Live devices, but a lot of it will be building blocks for more complex and unique things, for those who want to explore that side of things.

As soon as I get a few bits built and polished I’ll start posting them to my GitHub.

Oh, if anyone is curious what all that stuff on my snare is, I’ve been working on loads of extensions to my snare over the years (Kaizo Snare, Transducers).

Hi Rodrigo!

Thanks for sharing, I’m really interested in this kind of research.

Looking forward to checking out your devices!

Giovanni

Thanks man!

Oh my god. This sounds very, very, very interesting! Thank you so much for pushing this further!
I use Ableton as my main DAW, and the VST was a great direction with the ability to record with lower latency, but the latency is still a pain point for me, and how it works together with Ableton is still a bit… not so intuitive and seamless. How do you manage to get around the machine learning of the SP plugin and still use its capabilities?

“microphone correction via convolution (so you can use the “audio” from the Sensory Percussion pickup without it sounding terrible)”

Could you expand on this? :slight_smile:

…I hope it doesn’t all go the very complex and nerdy route, and that some of these amazing things become accessible, usable, and user-friendly, because there’s so much potential in these sensors, I feel.

Because of how it’s built, the pickup itself functions as a regular microphone. So what the SP software/plugin gets is audio, and then it does its magic on that.

So what I do instead is take the audio directly into Max and do my own stuff with it.

The onset detection was the first thing I worked on, and it took me a while to fine-tune the settings to get it tracking as well as the SP software. I think I’ve gotten there, and because it bypasses the machine learning stuff, if you want really low latency, you can have it. If you want audio descriptors or machine learning classification, you can have that too; it just takes a bit longer, since you have to wait for a useful analysis window to pass before it can analyze and match anything.
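To give a sense of the shape of the idea (not the actual patch, which runs at signal rate in Max), here’s a toy offline onset detector in Python: a fast amplitude envelope is compared against a slow one, and an onset is flagged when the fast one jumps a given amount above the slow one, with a refractory period to avoid double triggers. All the numbers here are placeholders you’d tune by ear:

```python
import numpy as np

def detect_onsets(x, sr=44100, on_db=9.0, min_gap_ms=25.0,
                  fast_ms=1.0, slow_ms=50.0):
    """Flag an onset when the fast envelope exceeds the slow one by on_db."""
    fast_coef = np.exp(-1.0 / (sr * fast_ms / 1000.0))
    slow_coef = np.exp(-1.0 / (sr * slow_ms / 1000.0))
    min_gap = int(sr * min_gap_ms / 1000.0)
    fast = slow = 0.0
    last_onset = -min_gap
    onsets = []
    for n, s in enumerate(np.abs(x)):
        fast = fast_coef * fast + (1.0 - fast_coef) * s  # one-pole smoothing
        slow = slow_coef * slow + (1.0 - slow_coef) * s
        if 20 * np.log10((fast + 1e-12) / (slow + 1e-12)) > on_db \
                and n - last_onset >= min_gap:
            onsets.append(n)  # onset position in samples
            last_onset = n
    return onsets
```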

This is more of a little perk since it’s easy enough to do. Because the SP pickup is a microphone, it can be processed like anything else. The “audio” that comes from it is a bit shit sounding because it’s really a magnetic sensor, which gives you a fast and clear transient but doesn’t have a full-bodied sound, and is quite hissy.

What I’ve done (similar to this video where I did a similar thing for a hihat contact mic) is create a recording using the SP pickup and a nice mic (an Earthworks DM20), then feed the audio from those into a set of tools for impulse response processing (another tech project I was involved with a few years ago). What it does is create an “inverse EQ” of the sound of the SP pickup (more or less) and apply that in a way that tries to make the SP pickup sound like the DM20. You end up with a custom EQ curve, which in this case looks like this:

(green is the raw measurement, black is the smoothed version, and red is the inversion)

The final impulse response looks like this:

(black is the final one, red is a filter to cut some lows, and green is the final step from the previous process)

This isn’t something you’d have to mess with to use it, but since you asked what’s going on, I wanted to explain that bit.
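For the technically curious, here’s a deliberately crude Python sketch of the core idea: divide the reference mic’s spectrum by the pickup’s spectrum (magnitude only, with a bit of regularization so quiet bins don’t blow up), and take the inverse FFT to get a correction impulse response you then convolve the pickup signal with. This assumes the two recordings are time-aligned captures of the same hits, and the real toolchain does proper smoothing, trimming, and low-cut filtering on top:

```python
import numpy as np

def correction_ir(pickup, reference, n_fft=4096, reg=1e-3):
    """Derive a rough correction IR that maps pickup -> reference timbre."""
    P = np.abs(np.fft.rfft(pickup, n_fft))
    R = np.abs(np.fft.rfft(reference, n_fft))
    H = R / (P + reg)  # "inverse EQ": boost what the pickup lacks
    ir = np.fft.irfft(H, n_fft)
    ir = np.roll(ir, n_fft // 2) * np.hanning(n_fft)  # center and window it
    return ir
```

Once you have the IR, correcting the live signal is just a convolution with it.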

Here’s a video of the same kind of process, along with audio, but for a hihat contact mic with a 3d-printed mount I made:

Actually, not sure if I can attach audio, but here is the impulse response for correcting an SP pickup on a snare drum:
correctionSensoryPercussion.wav.zip (15.5 KB)

I’m going to try to make it user friendly, but sort of Max-facing. If you just want a super plug-and-play option, the native SP software/plugin are there and do what they do well. The bits I’ll build should be able to be put together and used without having to get under the hood for everything, and there will be some more standalone-ish options (like a training/matching thing, the corpus sampler, etc…), but I’ll try to make it in a way that lets you go as deep as you want with it.

Here’s some performance footage of the corpus-based sampler in action.

So this is 3094 samples that are all pre-analyzed and then played back, driven by audio analysis coming from the SP pickup and a DPA 4060. So rather than individually mapping samples to zones or regions (or things like center-to-edge), the loudness, brightness (spectral centroid), and noisiness (spectral flatness) of each attack are used to find the nearest match in the sample database.

Not only is this super expressive, but it’s much easier to set up, since you don’t have to decide on what samples you want, and you don’t have to set up splits, round robins, etc…
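Under the hood, the matching itself is conceptually simple: normalize the descriptors so no single one dominates the distance, then do a nearest-neighbor lookup per attack. A toy Python version (the corpus file names here are made up for the example; in the patches the database and tree live in Max):

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical pre-analyzed corpus: one (loudness, centroid, flatness)
# row per sample, with file paths in the same order.
features = np.load("corpus_features.npy")            # shape: (n_samples, 3)
paths = open("corpus_paths.txt").read().splitlines()

lo, hi = features.min(axis=0), features.max(axis=0)
tree = cKDTree((features - lo) / (hi - lo + 1e-12))  # normalized 0-1 space

def nearest_sample(loudness, centroid, flatness):
    """Return the path of the corpus sample closest to this attack."""
    query = (np.array([loudness, centroid, flatness]) - lo) / (hi - lo + 1e-12)
    _, idx = tree.query(query)
    return paths[idx]
```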

Wow Rodrigo, this is incredible.

I’m so into it. How soon do you imagine some of these tools being available? The corpus-based sampler is of particular interest to me.

Thanks!

Depending on people’s geek level, I can put up greener versions of things to get people going while I still sort things out.

Stuff like the onset detection and correction is done. It’s the corpus-based sampler and general machine learning/matching stuff that I’m still tweaking and playing with.

For example, I recently worked out some spectral compensation, where the way you play not only impacts which sample gets triggered but also applies the spectral envelope (basically the EQ) of what you played to the sample. Imagine it as a per-sample “matching EQ”. It sounds really fantastic, as you get another layer of nuance to the whole thing.
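In rough offline Python terms, the compensation amounts to estimating a coarse spectral envelope of the incoming hit, comparing it band by band against the matched sample’s own envelope, and applying the resulting gains to the sample. The band count and FFT size are placeholders; the real version is more careful about windowing and overlap:

```python
import numpy as np

def matching_eq(sample, hit, n_bands=24, n_fft=4096):
    """Apply the hit's coarse spectral envelope to the matched sample."""
    S = np.fft.rfft(sample, n_fft)
    H = np.abs(np.fft.rfft(hit, n_fft))
    edges = np.linspace(0, len(S), n_bands + 1, dtype=int)
    out = S.copy()
    for a, b in zip(edges[:-1], edges[1:]):
        # per-band gain: how much louder/quieter the hit is than the sample
        gain = (H[a:b].mean() + 1e-12) / (np.abs(S[a:b]).mean() + 1e-12)
        out[a:b] *= gain
    return np.fft.irfft(out, n_fft)[:len(sample)]
```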

This isn’t a musical demo by any stretch, but so you can see/hear what I mean:

So there are a couple more things like this that I’m messing around with to figure out what kind of features to put into it.

I do have another corpus-based sampler, though it’s more for mosaicking than matching samples one-to-one (here’s a talk on it if you’re interested, and it can be downloaded for free here).

Wow, this other corpus-based sampler is so impressive and is actually something I’ve been searching for for over a year now. It pretty much contains every function I could’ve hoped for, especially the pitch quantization (loud/pitch knobs). It reminds me a little of Tatsuya Takahashi’s Granular Convolver, which is a similar device built on a Raspberry Pi and put into a hardware enclosure. Your device seems a lot more thorough, though.

Looking forward to playing around with it!

That is spectacular, Rodrigo! Thank you so much for getting back to me in such detail. I am here with a big grin on my face. Love what you are doing, man.

I missed the email notification about your reply, so sorry for the delay. But if you want someone to test some of this, please write me! :blush:

This is great! I just found this topic because I want to integrate SP into Max/MSP but didn’t know how. I have some questions; it would be great if you could guide me a little bit.

  • Why are you using SP if you’re using it as a microphone input?
  • How is SP communicating with Max/MSP?
  • What is that LED panel that you have around your snare?

I saw your work (Kaizo Snare, Transducer Snare) and I must say it’s pretty inspiring. Thank you very much for showing us your development :slight_smile:

Hey, awesome, glad I’m not the only one out there trying to do this.

This will make more sense with the answer to the second question, but I’m only using the SP sensor in these videos; I don’t use the software at all. After some initial testing I found that I was getting a bit of latency when sending MIDI from the SP software to Max, and on top of that, the controllers (Speed, Velocity, Timbre) are only sent as low-resolution (0-127) MIDI values.

On top of that, I wanted to integrate SP with my own samples/samplers and didn’t want to make everything again inside their sampler.

The way the sensor connects to the SP software is as “audio” anyway, so what I’m “hearing” in Max is the same thing the SP software is “hearing”. It’s just a matter of figuring out what to do with that signal.

It isn’t at all in these videos. I haven’t opened up the SP software in a long time…

I did make a simple patch a while back (which I can share if you want) that takes all the MIDI messages from the SP Software and routes it, along with a simple visualizer, for use in Max.

That’s a Novation Dicer. It’s actually a DJ controller meant to sit on a turntable. It turns out the radius of a turntable is close enough to that of a snare that it fits comfortably kind of “inside” the rim. I have a 3d-printed mount that holds it in place and attaches it to the snare.

I’ve tried a few other things, ROLI Blocks, etc…, but most things are either too big or clunky to fit nicely on the snare. I recently backed a thing on Kickstarter that I hope will replace the Dicer. Fingers crossed.

I meant the sensor! Why do you prefer to use the sensor if you’re not using the software? Would a microphone or a set of microphones be better?

That’s why I want to integrate the sensor into Max, actually. With Ableton I was getting some latency too.

I would love to check out that patch if you’re willing to upload it.

That controller looks amazing, the Novation Dicer looks very handy too. I will get one of those whenever I can.

Thank you very much, Rodrigo. Looking forward to seeing that patch and your answers. I’m Andrés, by the way, from Chile :slightly_smiling_face:

Believe me, I’ve tried. The main thing is that you get a really clean transient from the SP sensor. It’s not actually a microphone but a magnetic Hall effect sensor, which is also what helps it avoid crosstalk from other drums. With the onset detection algorithm I’ve been using I can get a really instant response, but with an acoustic mic I just can’t get the very tight press rolls, mainly because in the acoustic world the sound of the drum is still decaying, whereas with the magnetic sensor, each attack gives a clean spike to get a signal from.

As a point of reference, the top channel is a DPA 4060, and the bottom channel is the SP sensor:

Indeed. When chasing those last few milliseconds of latency, every piece of the puzzle counts. Their overall approach is cool if you like their sampler and want to work with it, but that’s not for me.

Here you go. There’s a comment field showing what CC messages I assigned all the controller stuff to.

SP Basic Setup.maxpat.zip (7.4 KB)

Love this!

Thanks for what you’re doing!

Thanks! Will definitely check it out this week.

Hi Rodrigo,

Don’t want to bother you, but I’m not into coding at all and I’m messing around with the combine 1.0 patch.
How do I create a JSON file from WAV files in order to load custom samples in the patch?

Thanks in advance!
Giovanni

Ah cool. Glad you’re digging it.

This part, for now, has to run in Max itself. But you can use this:
Create Corpus.zip (133.0 KB)

Super!

Thanks a lot, will try working with it.

It seems something’s missing; it doesn’t go forward with the analysis.