Here’s a post I wrote some time ago on the Unity forums, detailing my ideas for Singularity. In it I make some generalisations and definitely leave out a lot of details and subtleties, but it makes a nice introduction to what I’m doing.
The State of Game Audio
The traditional sample-based approach to game audio is old and dated.
Over the course of the last two decades, game graphics have evolved from bitmap sprites to near photo-realistic imagery running at a solid 60 frames per second. We have shaders, massively parallel calculations running on dedicated hardware, and much more. With today’s and tomorrow’s hardware you can literally trace a ray of light as it bounces from surface to surface (and even through them!) towards the camera, creating crystal clear pictures with ever-increasing fidelity.
Some of these developments are slowly starting to transfer to game audio, but not nearly enough! Games across the entire spectrum, from AAA to indie, still resort to ancient sample-based approaches for audio. Middleware packages such as Wwise or FMOD offer real-time effects processing, which is a step forward, but they don’t let you build your own synthesis model and generate sound from scratch on the fly. Furthermore, these packages seem to be aimed mostly at AAA first-person-shooter titles, making it difficult to do something radically different with them. And lastly, only the latter of those packages is available for use if you are a small-time developer.
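To make concrete what “generating sound from scratch on the fly” means, here’s a minimal sketch of procedural synthesis: computing audio samples directly from a mathematical model instead of playing back a recorded file. This is purely illustrative (a hypothetical `synthesize_tone` helper, not anything from Singularity, Wwise, or FMOD); it renders a decaying sine tone, the kind of buffer a game could compute per-frame and feed to the audio hardware.

```python
import math

def synthesize_tone(freq=440.0, duration=0.5, sample_rate=44100, decay=5.0):
    """Generate a decaying sine tone as a list of float samples in [-1, 1].

    freq        -- pitch in Hz
    duration    -- length of the buffer in seconds
    sample_rate -- samples per second
    decay       -- exponential decay rate of the amplitude envelope
    """
    n = int(duration * sample_rate)
    samples = []
    for i in range(n):
        t = i / sample_rate
        envelope = math.exp(-decay * t)          # amplitude fades over time
        samples.append(envelope * math.sin(2 * math.pi * freq * t))
    return samples

buf = synthesize_tone()
print(len(buf))  # → 22050 samples for half a second at 44.1 kHz
```

Because every parameter is live, a game could tie `freq` or `decay` to physics data (impact speed, material, distance) and get a different sound for every collision — something a fixed sample can only approximate by layering pre-recorded variations.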
This inhibits the development of game audio as a more integral part of game design. The result is that audio in most games is still mostly, and sometimes even literally, an afterthought. In my opinion game audio is at least 10 years behind game graphics, both in technological capabilities and in how those capabilities are used.