Mixing and encoding multiple audio streams at the same time #615
-
The TL;DR: it's a question of what your goal is. If you want to DIY for the joy of making audio software (say, for podcasts or live DJ mixing), have fun and enjoy the journey. Otherwise, this sounds like the sort of thing that is already handled by an audio interface (such as the Komplete Audio 6 or one of many equivalents) together with free or bundled Digital Audio Workstation software, such as Ableton Live Lite. It depends on which computer platform you are targeting, of course, as well as how you are sourcing the audio.
-
@orcmid My goal is to produce a single .wav file that live-captures and merges both the microphone and the system audio using miniaudio. Capturing system audio and the microphone (via BlackHole or WASAPI) into two separate .wav files with two different devices is straightforward; what isn't clear is how to merge two different live audio sources, since the node examples in the header files only work with audio files already present on disk. Both the microphone and the system audio can be mono or stereo.
-
There's no built-in support for your specific scenario, but you can certainly do it. Create your two capture devices and output each one into its own buffer; when both buffers pass a certain threshold, mix them and write the result out to the wav file. The annoying part is the synchronization of the two devices: they'll each be running on their own thread, so you'll need to account for subtle timing differences. Also, starting and stopping won't be atomic, so you might need a way to mitigate differences in your start/stop times (for example, start and stop both devices, but only begin writing data once a global flag is set).
-
I'm looking into how to master the miniaudio library, and I'm quite impressed by both the implementation and the documentation in the header file.
Nonetheless, I have a use case in which I need to export and encode a single .wav audio file from two input audio sources (two different devices). I started looking at the node example to get my head around it, but I have no file to load into memory to initialize the ma_decoder object, since the captured audio comes from the devices themselves.
Could someone point me to a method for achieving this?
Thank you!