{"id":5472,"date":"2011-10-16T22:46:51","date_gmt":"2011-10-17T05:46:51","guid":{"rendered":"http:\/\/www.nearfuturelaboratory.com\/?p=5472"},"modified":"2017-08-18T17:58:47","modified_gmt":"2017-08-18T17:58:47","slug":"supercollider-a-class-at-la-public-school","status":"publish","type":"post","link":"https:\/\/blog.nearfuturelaboratory.com\/2011\/10\/16\/supercollider-a-class-at-la-public-school\/","title":{"rendered":"SuperCollider: A Class at LA Public School"},"content":{"rendered":"
I’m taking a class through the LA chapter of The Public School<\/a> on the SuperCollider<\/a>, like..application, I guess it is. It’s more of a programming environment for making and processing sound. Good fun stuff. I really want to invest more time and energy in the audio project, and this seemed like a good way to start on that goal.<\/p>\n Ezra Buchla<\/a> is teaching it.<\/p>\n SuperCollider is a rather terse programming environment with a kinda curious setup that requires services\/servers to actually run the programs you write \u2014\u00a0or that are interpreted. I’m assuming this was done to allow a distributed model of processing when things get hairy, or maybe it’s just a fetish of the network and its possibilities for elastically distributing processing. In any case, what I’m most interested in is being able to process sound in real time, and a little less interested in generative sound synthesis.<\/p>\n I never realized (never having checked) how rich and sort of \u2014\u00a0overwhelmingly weedy the SuperCollider API is. I mean \u2014\u00a0there’s tons there. I wish some embedded hardware-y stuff could actually consume\/interpret it somehow so that I could have really portable sound processing capabilities. What I’d like is a way to process sound without having to do it on a laptop \u2014 it should be possible in something the size of a 1\/4″ stereo audio jack.<\/p>\n But, to contradict myself, I may in fact also be somewhat interested in generative sound synthesis \u2014\u00a0making sounds with things, objects and algorithms. 
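To give a flavor of that client\/server split, here is a minimal sketch (assuming a stock SuperCollider install with working audio input): the sclang interpreter is the client, and it sends synth graphs over the network protocol to the scsynth server, which does the actual DSP. The specific frequency and channel numbers here are just illustrative.

```
// sclang is the client; scsynth is the audio server that does the DSP.
s.boot;  // boot the default local server

// Real-time processing: take live hardware input, ring-modulate it, play it out.
(
{
    var in  = SoundIn.ar(0);     // first hardware input channel
    var mod = SinOsc.ar(300);    // a simple sine modulator (300 Hz, arbitrary)
    Pan2.ar(in * mod, 0);        // center-panned stereo output
}.play(s);
)
```

Because the client only talks to the server over the network, the same code could in principle target a scsynth instance on another machine, which seems to be the distributed-processing angle.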
It’s on the 2012 list of things to become some kind of music maker, and meeting up with Henry Newton-Dunn<\/a> (who made BlockJam<\/a>, a precursor and prior art of Siftables\/Sifteo<\/a> by a good 6 or 7 years) at the AIGA Pivot conference a couple of days ago<\/a> was fortuitous, because I remembered that he was a DJ back in Tokyo. We had some excited conversations about sound and audio and DJ’ing and the software to do all that. I feel a collaboration in the near future!<\/p>\n In fact, here’s Henry himself \u2014 this is probably from when we first met in, like..2005 in Tokyo. Some hepcat spot. Check out that Hi-Fi “Set” in the background!<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[23,24,203],"tags":[1181,1186,878,1227],"yoast_head":""