I'm posting here to celebrate a few fresh things from today:
1) ossia score 3.5.3 was released :)
2) We're going to give a lab on interactive graphics on embedded platforms at SIGGRAPH in Vancouver in August, which teaches how to do real-time visuals with interaction on Raspberry Pi:
https://s2025.conference-schedule.org/presentation/?id=gensu...
3) The Ars Electronica prize results were announced today, and two works using ossia-max, our Max/MSP binding, were recognized at Ars Electronica 2025:
- Organism + Excitable Chaos by Navid Navab and Garnet Willis got the Digital Musics & Sound Art Golden Nica
https://calls.ars.electronica.art/2025/prix/winners/16969/
- On Air by Peter van Haaften, Michael Montanaro and Garnet Willis got a Digital Musics & Sound Art honorary mention
https://calls.ars.electronica.art/2025/prix/winners/17358/
Earlier this year, ossia was also featured at the Venice Biennale, where it was used for the Pavilion of Ireland: https://www.innosonix.de/pavilion-of-ireland-at-the-venice-b...
This is really cool! Live music, game shows, holiday light displays, and anything in between can hugely benefit from this kind of tech.
The whole Who Wants To Be a Millionaire sequence comes to mind (where, on an arbitrarily timed cue, the lights physically rotate downwards, synchronized with the electronic score and the floor panel animations, to put pressure on the contestant). From a bit of research, they needed to do a fair amount of work for that, which arguably could have been "orchestrated" from software like this: https://www.tpimeamagazine.com/robe-rig-lights-who-wants-to-...
> Synching the lighting consoles to receive MIDI triggers from the show’s gaming computer which activates specific commands for sound and video related to screen content was an intense task that took plenty of work and lateral thinking. Additionally, more signals from the lighting console were used to access the media server operating a series of pixel SMD effects inbuilt in the set – so there was a lot of synching happening!
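For a sense of scale, the "MIDI trigger fires a cue" part of that pipeline is conceptually tiny. Here is a minimal sketch in Python with mido and python-osc; the port, OSC addresses and note numbers are made up for illustration, not taken from the actual show:

    # Minimal sketch: fire a cue when the game computer sends a MIDI note.
    # Requires: pip install mido python-rtmidi python-osc
    import mido
    from pythonosc.udp_client import SimpleUDPClient

    # Hypothetical note-to-cue mapping, not the real show data.
    CUES = {60: "/cue/lights_down", 61: "/cue/floor_panels"}

    osc = SimpleUDPClient("127.0.0.1", 9000)  # assumed media-server OSC port

    with mido.open_input() as port:           # first available MIDI input
        for msg in port:
            if msg.type == "note_on" and msg.note in CUES:
                osc.send_message(CUES[msg.note], 1)

The hard part the article describes is everything around this: agreeing on the mappings, and keeping several departments' worth of gear in sync with them.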
I'm also aware of software like https://lightkeyapp.com/en - but ossia score seems to treat temporal flexibility/coding/behavior as its primary focus, whereas Lightkey centers on the physical layout of the lighting at any given moment. Arguably the feature sets should merge - Blender's ability to have multiple views that emphasize or de-emphasize the timeline comes to mind!
These things shouldn't be gated behind massive investments. Anyone who can put a few cheap tablets on stands and plug in a MIDI keyboard should have best-in-class visualization capabilities, and be able to iterate on that work as more professional hardware becomes available. It's one of the things I love about open source.
I love this sort of thing. I wish there were a better alternative to ISF[1]; it's quickly showing its age. The kind of GPU-sandboxed graph construction that this enables would be really powerful with the right "linker". I'm thinking about drafting a proposal for wesl[2] to have a more ergonomic reflection and metadata system, to make this kind of quick-and-scrappy pipeline construction feel first class in shader tooling. Slang has something like this, as do GDShader and Unity's shader tooling.
[1]: https://isf.video/
[2]: https://github.com/wgsl-tooling-wg/wesl-rs
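For anyone unfamiliar with ISF: each shader is plain GLSL preceded by a JSON blob in a leading block comment, and hosts reflect on that JSON to build UIs and wire up inputs. A rough sketch of that reflection step (the file name is hypothetical), which also hints at why it feels dated: the metadata is stringly typed and lives entirely outside the shader's own type system:

    # Rough sketch of how a host reflects on an ISF shader's metadata.
    # An ISF .fs file starts with /*{ ...JSON... }*/ followed by GLSL.
    import json

    source = open("example.fs").read()   # hypothetical ISF file
    start = source.index("/*") + 2
    end = source.index("*/")
    meta = json.loads(source[start:end])

    # INPUTS entries look like {"NAME": "level", "TYPE": "float", ...}
    for inp in meta.get("INPUTS", []):
        print(inp["NAME"], inp["TYPE"], inp.get("DEFAULT"))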
Wow, ossia has come a long way! Pretty impressive for a solo-dev project I have to say :)
Previous Show HN, 13-sept-2018: https://news.ycombinator.com/item?id=17982771
Related discussion, 26-sept-2020: https://news.ycombinator.com/item?id=24600824
Chataigne ftw
Chataigne is really good software, but I'm not sure the two are that comparable...
Nowadays ossia is more about the content creation part, with a whole graphics pipeline amenable to VJing and real-time audio-reactive visuals, where you can for instance run AI models like StreamDiffusion (https://streamable.com/zfrbo3) or just play with VST plug-ins and drum machines to make beats (https://streamable.com/fc02so)
All the recent artworks I've worked on involving ossia have used it exclusively, for light, sound and video design for instance, whereas, if I'm not mistaken, Chataigne is more commonly used in conjunction with software such as Live or TouchDesigner.
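For what it's worth, the interop story between all of these tools is usually just OSC, so mixing them is cheap. A hedged sketch of poking a parameter in any of them from outside (the host, port and address are placeholders, not any tool's actual defaults):

    # Sketch: set a remote parameter over OSC, the lingua franca between
    # ossia, Chataigne, TouchDesigner, Live (via Max), and friends.
    # Requires: pip install python-osc
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 9997)    # placeholder host/port
    client.send_message("/scene/brightness", 0.8)  # placeholder address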
How does it compare to Millumin?
I think this general class of software is called Show Control. There are commercial and open source projects that also do it in some form:
https://en.wikipedia.org/wiki/MIDI_Show_Control
https://v-control.com/
https://qlab.app/
https://troikatronix.com/
https://derivative.ca/
Plus there are a variety of video DJ platforms like VDMX and ArKaos GrandVJ, which have some of this functionality, and a lot of free and commercial DMX-512 lighting control software and hardware that can be interfaced with these show control systems. QLab is widely used in the stage show industry. Chataigne is one I hadn't heard of.
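The MIDI Show Control layer those systems share is refreshingly simple: a single universal SysEx message. A sketch of sending a lighting "GO" for cue 5 with Python's mido, assuming I'm reading the MSC spec right (0x01 is the general-lighting command format, and command 0x01 is GO):

    # Sketch: send an MSC "GO" for cue 5 to a lighting console.
    # SysEx layout: F0 7F <device_id> 02 <format> <command> <data> F7
    # (mido adds the F0/F7 framing itself.)
    # Requires: pip install mido python-rtmidi
    import mido

    device_id = 0x01                  # console's MSC device ID (assumed)
    cue = b"5"                        # cue numbers travel as ASCII digits
    data = [0x7F, device_id, 0x02, 0x01, 0x01] + list(cue)

    with mido.open_output() as port:  # first available MIDI output
        port.send(mido.Message("sysex", data=data))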
It seems OK to me, although better documentation would help a lot, I think. The site says the documentation is currently in progress, so if it is not good enough now, hopefully it will be later.
I'm only a dabbler in this space, but none of those things were incomprehensible to me.
I remember stumbling across this a couple of years ago and, despite being interested in this field, finding the website confusing.
The GitHub readme seems fairly clear now, though.