Thanks!
Depending on people’s geek level, I can put up greener versions of things to get people going while I still sort things out.
Stuff like the onset detection and correction is done and all that. It's the corpus-based sampler and the general machine learning/matching stuff that I'm still tweaking and playing with.
For example, I recently worked out some spectral compensation, where the way you play not only affects which sample gets triggered but also applies the spectral envelope (basically the EQ) of what you played to the sample. Imagine it as a per-sample "matching EQ". It sounds really fantastic, as you get another layer of nuance to the whole thing.
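To make the "matching EQ" idea a bit more concrete, here's a rough numpy sketch of the general technique (this isn't my actual implementation, and the function names and parameters are just made up for illustration): take a smoothed spectral envelope of what you played, divide it by the envelope of the chosen corpus sample, and apply that ratio as a filter to the sample before it plays back.

```python
import numpy as np

def spectral_envelope(x, n_fft=2048, smooth_bins=32):
    """Rough spectral envelope: smoothed magnitude spectrum of a mono frame."""
    mag = np.abs(np.fft.rfft(x, n=n_fft))
    kernel = np.ones(smooth_bins) / smooth_bins
    return np.convolve(mag, kernel, mode="same") + 1e-8  # floor avoids divide-by-zero

def matching_eq(played, sample, n_fft=2048, max_gain_db=18.0):
    """Impose the spectral envelope of `played` onto `sample` (per-sample matching EQ)."""
    gain = spectral_envelope(played, n_fft) / spectral_envelope(sample, n_fft)
    # clamp the correction so quiet bins don't get boosted into oblivion
    limit = 10 ** (max_gain_db / 20.0)
    gain = np.clip(gain, 1.0 / limit, limit)
    spec = np.fft.rfft(sample, n=n_fft) * gain
    return np.fft.irfft(spec, n=n_fft)

# toy usage: a dark input hit pulls a brighter corpus sample toward its own spectrum
sr = 44100
t = np.arange(2048) / sr
played = np.sin(2 * np.pi * 200 * t) * np.hanning(2048)   # dark, tonal input
sample = np.random.randn(2048) * np.hanning(2048)          # bright, noisy corpus hit
out = matching_eq(played, sample)
```

In practice you'd do this per onset/frame and crossfade between frames, but the core move is just that envelope ratio applied as a filter.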
This isn't a musical demo by any stretch, but it should let you see/hear what I mean:
So there are a couple more things like this that I'm messing around with, to figure out what kind of features to put into it.
I do have another corpus-based sampler, though it's geared more toward mosaicking than matching samples one-to-one. (Here's a talk on it if you're interested, and it can be downloaded (for free) here.)