Singularity – Introduction

Here’s a post I wrote some time ago on the Unity forums, detailing my ideas for Singularity. In it I make some generalisations and definitely leave out a lot of details and subtleties, but it makes a nice introduction to what I’m doing.

The State of Game Audio

The traditional sample-based approach to game audio is old and dated.

Over the course of the last two decades, game graphics have evolved from bitmap sprites to near photo-realistic imagery running at a solid 60 frames per second. We have shaders, massively parallel calculations running on dedicated hardware, and much more. With today’s and tomorrow’s hardware you can literally trace a ray of light as it bounces from surface to surface (and even through them!) towards the camera, creating crystal clear pictures with ever-increasing fidelity.

Some of these developments are slowly starting to transfer to game audio, but not nearly enough! Games across the entire spectrum, from AAA to indie, still resort to ancient sample-based approaches for audio. Middleware packages such as Wwise or FMOD offer real-time effects processing, which is a step forward, but they don’t let you create your own synthesis model and generate sound from scratch on the fly. Furthermore, these packages seem to be aimed mostly at AAA first-person-shooter titles, making it difficult to do something radically different with them. And lastly, only the latter of those packages is available to you if you are a small-time developer.

This inhibits the development of game audio as a more integral part of game design. The result is that audio in most games is still mostly, and sometimes even literally, an afterthought. In my opinion game audio is at least 10 years behind game graphics, both in technological capabilities and in how those capabilities are used.

Audio Design Process

The huge gap in technology means that audio development is a parallel process.

A typical game designer writes a design document with little attention to how the game is supposed to sound. With tools such as Unity, a studio can start prototyping a game within weeks, allowing very agile development methodologies. Sound designers and composers typically get none of these benefits. They are called in late in the process, get their assignment, produce the end result, and that’s it. Game developers embraced agile development a long time ago, while audio designers are still stuck with the waterfall approach. This is not by choice, but by necessity.

Game audio is still mostly linear content, applied in a non-linear context. Situations in games change continuously, and the audio and music should be able to adapt to this instantly.

All this means that audio is never central to a game design, while audio and music are actually a great area for innovation! There is a huge amount of creative potential here, completely untapped.

Laptop Performance & Live Coding

Stepping outside of the world of game development for a moment, let’s look at what musicians can do with technology.

The world of electronic music has plenty of tools that enable rapid, on-the-fly development of audio. Think of Ableton Live and Reason. There are also plenty of packages that enable a programmer’s approach to audio, such as Pure Data, Max/MSP, SuperCollider and ChucK. Performers can even take these tools on stage, start from a blank slate, and entertain crowds within minutes! That’s how powerful they are.

So I’m left to think: Why have we not integrated these tools into our game development environments?

Check some of these videos out for an idea of what this technology can do:
(They’re long videos, so scroll through them if you’re impatient. ;))

Singularity – A flexible, real-time, general purpose audio engine

So I’ve been looking at my options for integrating that kind of technology directly into Unity.

My first thought was Pure Data. PD, with its visual programming paradigm, is very easy to get into, and the software is quite mature. However, PD does not support object-oriented programming, which means that its architecture does not map well onto a game engine.

Next I looked at ChucK. ChucK has great ideas about managing and playing with time, and its language contains very simple, yet very powerful semantics. ChucK’s implementation, however, is still immature, resulting in unstable performance.

It appears SuperCollider is probably the most suitable for integration with Unity. Its language is well-defined, its implementation seems very robust and fast, and it is very feature-rich.

Regardless of the eventual implementation: The idea is that you get full control over the in-game audio.


SuperCollider is multiplatform, so builds for Windows, Mac and Linux should be possible. Support for exotic platforms such as mobile devices and game consoles will likely require significant modifications to the SuperCollider source code, but should nevertheless be possible. SuperCollider uses a client-server model, meaning that the server could run alongside both a game and the editor, with Unity providing an integrated graphical front-end. Communication with the server is possible through Open Sound Control.
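To make the client-server idea concrete, here’s a minimal sketch of how an Open Sound Control message for the server is encoded: a null-padded address string, a null-padded type-tag string, then big-endian arguments. (Python is used purely for illustration; the real implementation would live in .NET/Mono. The `/s_new` command and its arguments come from the SuperCollider server command reference; the rest is illustrative.)

```python
import struct

def osc_pad(data: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    typetags, payload = ",", b""
    for arg in args:
        if isinstance(arg, int):
            typetags += "i"
            payload += struct.pack(">i", arg)   # big-endian int32
        elif isinstance(arg, float):
            typetags += "f"
            payload += struct.pack(">f", arg)   # big-endian float32
        elif isinstance(arg, str):
            typetags += "s"
            payload += osc_pad(arg.encode("ascii"))
        else:
            raise TypeError(f"unsupported argument type: {type(arg)}")
    return osc_pad(address.encode("ascii")) + osc_pad(typetags.encode("ascii")) + payload

# /s_new creates a synth node: synthdef name, node ID, add action, target group.
msg = osc_message("/s_new", "default", 1000, 0, 0)
```

Everything the client does, from spawning synths to setting parameters, reduces to sending small packets like this one to the server.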

Several things need to be built:

Essential components:

  • Open Sound Control implementation in .NET/Mono
  • Client-side (Unity) representations of server objects and their composition. These should provide easy-to-use access to the SuperCollider server, straight from Unity scripts.

These components should be optimised to introduce as little overhead and latency as possible.
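As an illustration of what such a client-side representation could look like, here is a rough sketch of a hypothetical `Synth` handle that creates, controls, and frees a node on scsynth over UDP. (Again, Python for illustration only; the class name and defaults are my assumptions, not Singularity’s actual API. The `/s_new`, `/n_set` and `/n_free` commands and scsynth’s default port 57110 come from SuperCollider itself.)

```python
import socket
import struct

def _pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

class Synth:
    """Hypothetical client-side handle for a synth node on an scsynth server."""
    _next_node_id = 1000

    def __init__(self, synthdef: str, host: str = "127.0.0.1", port: int = 57110):
        self.node_id = Synth._next_node_id
        Synth._next_node_id += 1
        self._addr = (host, port)
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # /s_new: synthdef name, node ID, add action (0 = head), target group (0 = root).
        self._send("/s_new", synthdef, self.node_id, 0, 0)

    def set(self, param: str, value: float):
        # /n_set adjusts a control on a running node.
        self._send("/n_set", self.node_id, param, float(value))

    def free(self):
        self._send("/n_free", self.node_id)

    def _send(self, address: str, *args):
        tags, payload = ",", b""
        for a in args:
            if isinstance(a, int):
                tags, payload = tags + "i", payload + struct.pack(">i", a)
            elif isinstance(a, float):
                tags, payload = tags + "f", payload + struct.pack(">f", a)
            else:
                tags, payload = tags + "s", payload + _pad(str(a).encode("ascii"))
        packet = _pad(address.encode("ascii")) + _pad(tags.encode("ascii")) + payload
        self._sock.sendto(packet, self._addr)
```

From a game script this would read naturally: create a handle, nudge its parameters as the game state changes, free it when done.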

Future components:

  • Fully featured SuperCollider client inside the Unity editor. This includes a code editor, a port of the sclang interpreter, and possibly a GUI for editing SynthDefs.
  • SuperCollider server compiled as dynamic link library for each supported platform, for complete integration into any game.

As a Unity Basic user, this package would let you freely experiment with SuperCollider running externally. For Pro users, it would mean you can seamlessly integrate SuperCollider into your builds, with the end-user none the wiser.


If you’re still with me, I’m very interested in hearing what you think! Already have an idea for a game using this technology? See some pitfalls in the implementation? Think all of this is nonsense? Do tell!

Example Ideas

Modeling wind and airflow: For my current game, Volo, I need to model the complex noise you hear when your head moves through air at very high speeds (say, like sticking your head out the window while driving at 80 mph). The thing is, the resulting sound is entirely dependent on your head’s orientation relative to the airflow! This is not something you can do effectively with samples, as changing your orientation by even a couple of degrees causes dramatic changes in the character of the sound.
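As a toy illustration of why synthesis fits here (this is not Volo’s actual model, just a sketch of the idea): white noise fed through a one-pole low-pass filter whose coefficient depends on the head’s angle to the airflow, so turning your head continuously reshapes the sound.

```python
import math
import random

def wind_sample_block(angle_deg: float, n: int = 512, seed: int = 0) -> list:
    """Toy wind model: white noise through a one-pole low-pass filter
    whose cutoff depends on the head's angle to the airflow."""
    rng = random.Random(seed)
    # Facing the airflow (0 degrees) gives bright, hissy noise;
    # turning away (180 degrees) darkens it.
    brightness = 0.5 * (1.0 + math.cos(math.radians(angle_deg)))
    a = 0.05 + 0.9 * brightness  # filter coefficient in (0, 1)
    out, y = [], 0.0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = a * x + (1.0 - a) * y  # one-pole low-pass
        out.append(y)
    return out
```

A real model would involve far more (turbulence, resonances of the ear and helmet, Doppler), but even this single parameter already produces a continuum of sounds no finite set of samples can cover.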

Binaural mixing: Using filtering systems based on the human ear (HRTF), you can process sounds such that the signal contains spatial cues that the brain can understand. This results in highly realistic sound localisation which has to be heard to be believed: Virtual Barbershop Demo – Use Headphones

How about a music game where the music doesn’t remain static but actually changes as the player plays? I’ve recently done a project in which a player could use a Guitar Hero controller to actually play guitar, and produce feedback-fuelled Jimi Hendrix solos.

  • Robin

    Your thoughts on implementing SuperCollider in Unity are very exciting!

    I had/have some ideas about implementing VST usage (instruments specifically) in Unity, but I think it’s a bit far-fetched and hard to do… I don’t know what’s possible with the Pro version as opposed to Basic.

    Let me know the progress on this thing and if you need any testing and/or ideas.


    • Thanks for the support!

      Sounds like you want the same thing. VSTs let you create anything you want, how you want, and when you want. The problem with VSTs is that you can’t really separate the audio processing logic from their GUIs, meaning you have a lot of overhead. Also, they can only receive MIDI data, which makes it hard to set up complicated logic. I can think of many cases where an audio engine will have to make decisions based on a game’s current state, which is very hard to communicate through MIDI.

      SuperCollider gives you a lean, mean, efficient audio processing server. It can do most — if not all — of the audio processing that you normally do with VSTs, and more. It is completely open, so you can write your own VST-like plugins if anything is missing. As for GUI-based interaction: I plan to create a VST-like GUI system in the Unity editor with knobs and sliders, so you can tweak synth settings visually. With any luck you can have VST-like power right from within Unity.

      Ambitious and very long-term? Definitely! That makes me all the more happy that you’d like to do some testing or brainstorming. So yeah, if you think of any cool game concepts or features you’d like to see: drop me a line! 🙂

      P.S. I’m quite curious how you found this post so fast. I’ve only just put up this new blog, and I didn’t really post anything on the old one. Magic? 😉


  • joel

    great comment on the lack of sophistication in game audio. good luck with your project it looks fantastic. ping me if you want some feedback or help with your audio projects

  • Hey. Fantastic idea. How far did you get implementing supercollider in unity?

  • Hey David.

    I’m still working on it. Right now I have Open Sound Control communication set up, and have representations of SC server objects (synths, busses, etc.) in Unity Script. Several things to do now:

    – Handle OSC messages from SC back to Unity with a thread-safe queueing system.
    – Build a sequencer/clock system
    – Build tools that make development more efficient (graphical editors for sound logic)
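    The first item, the thread-safe queueing system, could be sketched like this (Python purely for illustration; the real code would be C# in Unity): a network thread enqueues incoming OSC replies, and the game’s main thread drains the queue once per frame without ever blocking.

```python
import queue
import threading

incoming = queue.Queue()  # thread-safe by construction

def network_thread(messages):
    # In the real system this would block on a UDP socket receiving
    # OSC replies from scsynth; here we just enqueue the given messages.
    for msg in messages:
        incoming.put(msg)

t = threading.Thread(target=network_thread,
                     args=([("/n_go", 1000), ("/done", "/quit")],))
t.start()
t.join()

# The main (game) thread drains the queue each frame without blocking.
drained = []
while True:
    try:
        drained.append(incoming.get_nowait())
    except queue.Empty:
        break
```

The key point is that the game loop never waits on the network: it only picks up whatever replies have arrived since the previous frame.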

    I’ve encountered one major problem; a bit of a show-stopper. On Windows you really want to use ASIO drivers in order to keep audio latency very low. ASIO drivers want exclusive access to your sound card. Unity uses FMOD as its default audio engine, and once FMOD is initialised, ASIO will not work. I’ve contacted Unity Technologies to see if they will eventually support turning off FMOD; they say they will, but not soon.

    So my goals for complete integration are pushed back quite a bit, but at least I can still develop tools and proof of concepts.

    I’ll post a more detailed report soon. Thanks for your interest. 🙂

  • I’d be extremely interested in being an early alpha tester for whatever tools you develop at this point, as it would mean the world to me to have access to a synthesis engine from within Unity, to prototype a couple of projects in my immediate pipeline.

    Let me know if that’d be ok with you, or whether I can help test, debug, etc. 😉 (I have experience with debugging and I’m a good reporter)



  • That’s very good to hear! I need to make the framework a bit more robust for it to be really usable, but as soon as it is I’ll let you know.

  • Great. You can get me on

  • Nek

    My reaction – time to learn Unity 3D. I’m familiar with the tools you mention, and SuperCollider is a great choice! I can see a multitude of use cases. Imagine a physical playground game (think LittleBigPlanet) where objects play notes on collisions/actions and you can build a chain-reaction construction that plays a melody.

    • Yep, that’s exactly the kind of stuff I hope to enable. Glad to see it sparked your imagination. 🙂

  • David Plans

    Hey again. I was speaking to Samantha Kalman @ Unity in Copenhagen a couple of weeks ago, and mentioned the difficulty of building sequencer/clock systems in Unity. Since this is one of the things you mentioned you want to accomplish in building the SuperCollider integration, I thought I’d say: she said to look carefully at the Time.deltaTime function, which I now have.

    But I’m wondering, since that function essentially works inside the normal update function (and is therefore frame-based), it might be difficult to create a sequencing/clock engine out of it.

    Did you find that’s the case? Did you have to go elsewhere for it?

    Just wondering how it’s going…excited to try it out!


  • admin

    Heya David,

    Your hunch is correct, I’m sorry to say. While you can use Unity’s built-in Time class to find out the time in any given frame with great precision, you really can’t act on it.

    If your game is running at around 60 frames per second, then you only have 60 moments each second in which you can *do* something (like triggering a sound). Adding to that imprecision, the time between frames varies constantly. You could use FixedUpdate to mitigate that effect a bit, but setting it to a high frequency would make any physics calculations mega-expensive. During a frame, all you can use Time.deltaTime for is determining that yes, you are indeed 35 milliseconds late.

    Unity’s Update functionality is fully geared towards graphics and physics, not real-time audio control. You have no way to tell it to ‘wake up’ after *exactly* 12 milliseconds have passed. You can only tell it to ‘wake up’ in the *first frame* after those 12 milliseconds. Your event is thus not triggered after 12 milliseconds, but after 12 milliseconds plus whatever time remains until the next frame.

    SuperCollider doesn’t have this problem, for the simple reason that it has its own clock, running in its own thread. That clock has been carefully designed for audio processing, and has been refined to perfection over many years. It can do any kind of sequencing you would ever want. Well, that, and you can actually use it to generate sounds in real-time. 😉

    Err, hope that makes sense. I’ll draw some fancy graphs soon to help explain it a little better.
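    To put rough numbers on the frame-quantization effect described above (a sketch, assuming an idealised, perfectly steady 60 Hz update loop):

```python
import math

FRAME_MS = 1000.0 / 60.0  # ~16.67 ms between Update() calls at 60 fps

def actual_fire_time(scheduled_ms: float) -> float:
    # An event can only fire on the first frame boundary at or after its
    # scheduled time; everything in between is invisible to Update().
    return math.ceil(scheduled_ms / FRAME_MS) * FRAME_MS

# A sound scheduled at 12 ms actually triggers at ~16.67 ms, almost 5 ms
# late; and since real frame times drift, the error differs per event.
```

A jitter of several milliseconds per event is inaudible in graphics terms but clearly audible in a musical sequence, which is why a dedicated audio clock is needed.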

  • David Plans

    Ah-ha! Yup that makes a lot of sense now. Again, whenever you have a codebase that’s ready for alpha (or whatever) testing, let me know! 🙂

  • David Plans

    Hey. Sorry to nag you again, but I just thought of a couple of things: I ported a bunch of C++ code meant to do live spectral matching to a C# plugin for Unity. Because I really didn’t want to deal with Unity’s own audio engine, I bypassed it, using libsndfile instead, which worked fine. Now, if I remember correctly, Unity is now bundled with libsndfile, right? So other than timed triggers (which you can do at the C# level), couldn’t we just go for that?

    Also, just wondered…do you think you’d be up for making Singularity an open project? If you sourceforged it or githubbed it, you’d certainly get help. I know it’s a bit of a worry having other people messing, but you can decide what level of control you want?

    I just really want to see this get farther…


  • Hi,
    David Plans made me aware of your project. Let me chime in and say that I’m also very interested in your undertaking, which would be of great value for a research project of mine starting in early 2011.
    I’m not sure in which way I can be of help at the moment, but I’d certainly test things as soon as it’s feasible.
    Thanks for your efforts, Thomas

  • Hi Tinus. If this work comes to fruition, a lot of people will have lots of fun : ) I think you’ve chosen the right sound engine as the sc-server is super slick, good sounding, and effective. There are quite a few systems that have successfully implemented their own mechanism to control the sc-server.

    I’m not sure about the possibility of your idea of bundling sc-lang (the interpreter and the language) in Unity, as it’s GPL licensed. Unity would have to be open as well then, no?

    Good luck with the good work!

  • davide

    This is a great idea! Having a sound engine in Unity would offer a lot of opportunities!
    In my opinion a lot of nice things could be done with just the sc-server, even without low latency and without sc-lang!

  • @David Plans: Heya, cool stuff! I don’t know if it still comes bundled with libsndfile; they seem to be using FMOD for the output currently. How’s the latency on consumer-grade sound cards with your implementation? Timed triggers and sequencing are something we’ll have to do in C# for Singularity too, since SuperCollider’s timing mechanisms live in its client, not the audio server.

    I’ve not worked on Singularity for a while now. I just can’t manage to work on it and Volo at the same time, and Volo has my preference right now. The problem is that there’s only a handful of people interested in this audio technology; most people will not be convinced of its merit until it is demonstrated in a mind-blowing way. I’ll probably dive back into it once Volo has matured.

    That said, open-source is the way to go. I’ll put it all online at some point. 🙂

    @Thomas Grill: Thanks for your interest! I’ll keep that in mind.

    @Thor: Hope so! Yeah, I’m not sure about the licensing either. All I know is that something like this just needs to be made, regardless of whatever. I’m hoping that licensing can be worked out with some common sense. Singularity will be opened up, there’s no doubt about that. 🙂

    @Davide: Thanks! Always nice to hear that other folks share your way of thinking. 🙂

  • David Plans

    @Tinus: You can bypass whatever Unity uses at the C++ level, so as long as you write a plugin that includes the libsndfile header files, it’s fine. No noticeable latency on a fairly crap PC, and none on a MacBook Pro…

    Do you think it’s really too early to open Singularity to the rest of us? Whilst you work on Volo, we could take a look and take this forward with you…while the interest is there.

  • Hey again, David!

    Apologies for getting back to you so late! That seems like a neat trick, I’ll have to try it out soon. A potential dynamic library containing the SuperCollider server could just include that header as a temporary fix.

    I’m currently preparing for my final exams and finishing my thesis (which details the design goals for Singularity; I’ll post it here soon). Unfortunately, yes, I do think it is a bit too early to put the code online (read: it is a huge broken mess right now). However, I’m getting some breathing room in about a week, in which I’m planning to get things up and running online. 🙂