During real-time communication, you can pre- and post-process the audio data to achieve the desired playback effects. The RTC Native SDK uses the IAudioFrameObserver class to provide raw data functions. You can pre-process the captured audio frames before they are sent to the encoder, and post-process the received audio frames after they are decoded.
This article describes how to use raw audio data with the IAudioFrameObserver class.
Refer to the sample project on GitHub to learn how to use raw audio data in your project.
Before using the raw data functions, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Call or Start Live Interactive Streaming.
Follow these steps to implement the raw data functions in your project:
1. Call registerAudioFrameObserver to register an audio frame observer.
2. Implement onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onMixedAudioFrame to receive and process the raw audio data.
The following diagram shows how to implement the raw data functions in your project:
The following diagram shows the data transfer with the IAudioFrameObserver class:
With onRecordAudioFrame, onPlaybackAudioFrame, onPlaybackAudioFrameBeforeMixing, or onMixedAudioFrame, you can:
- Get the audio data from AudioFrame.
- Process the audio data in AudioFrame and return it to the SDK or the custom renderer.
Call registerAudioFrameObserver to register an audio frame observer.
// Register audio frame observer
BOOL CAgoraOriginalAudioDlg::RegisterAudioFrameObserver(BOOL bEnable, IAudioFrameObserver *audioFrameObserver)
{
    // Create an AutoPtr instance using IMediaEngine as the template.
    // See AgoraBase.h in the SDK for the implementation of the AutoPtr class.
    agora::util::AutoPtr<agora::media::IMediaEngine> mediaEngine;
    // The AutoPtr instance calls queryInterface and gets a pointer to the
    // IMediaEngine instance via the IID, then calls registerAudioFrameObserver
    // through that pointer.
    mediaEngine.queryInterface(m_rtcEngine, agora::AGORA_IID_MEDIA_ENGINE);
    if (mediaEngine.get() == NULL)
        return FALSE;
    int nRet = 0;
    if (bEnable)
        // Register the audio frame observer
        nRet = mediaEngine->registerAudioFrameObserver(audioFrameObserver);
    else
        // Unregister the audio frame observer
        nRet = mediaEngine->registerAudioFrameObserver(NULL);
    return nRet == 0 ? TRUE : FALSE;
}
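The registration code above assumes an IAudioFrameObserver implementation already exists. The following is a simplified, SDK-free sketch of the same observer pattern; all names here are illustrative stand-ins, not the actual Agora types:

```cpp
#include <cstdint>
#include <vector>

// Illustrative stand-in for the SDK's AudioFrame
struct AudioFrame {
    std::vector<int16_t> buffer; // 16-bit PCM samples
    int channels = 1;
    int samples = 0;             // samples per channel
};

// Illustrative stand-in for IAudioFrameObserver
class IAudioFrameObserver {
public:
    virtual ~IAudioFrameObserver() = default;
    virtual bool onRecordAudioFrame(AudioFrame& frame) = 0;
};

// Example observer: doubles every sample (no clamping, for brevity)
class GainObserver : public IAudioFrameObserver {
public:
    bool onRecordAudioFrame(AudioFrame& frame) override {
        for (auto& s : frame.buffer)
            s = static_cast<int16_t>(s * 2);
        return true;
    }
};

// Illustrative engine: holds at most one registered observer, mirroring
// registerAudioFrameObserver(observer) to register and
// registerAudioFrameObserver(NULL) to unregister
class MediaEngine {
public:
    int registerAudioFrameObserver(IAudioFrameObserver* observer) {
        observer_ = observer; // a null pointer unregisters
        return 0;             // 0 indicates success, as in the sample above
    }
    // Capture path: forwards each recorded frame to the observer, if any
    bool deliverRecordedFrame(AudioFrame& frame) {
        return observer_ ? observer_->onRecordAudioFrame(frame) : true;
    }
private:
    IAudioFrameObserver* observer_ = nullptr;
};
```

Once registered, the observer sees every frame the capture path delivers; passing a null pointer restores pass-through behavior.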
Once you obtain the raw audio data, you can pre-process or post-process it.
Get the recorded raw audio data, amplify the volume, and send the audio data to the SDK
// Get the recorded raw audio data, amplify the volume, and send the audio data to the SDK
bool COriginalAudioProcFrameObserver::onRecordAudioFrame(AudioFrame& audioFrame)
{
    // Buffer size in bytes: each sample is 16 bits (2 bytes) per channel
    SIZE_T nSize = audioFrame.channels * audioFrame.samples * 2;
    short *pBuffer = (short *)audioFrame.buffer;
    // Double each sample, clamping to the 16-bit range to avoid overflow
    for (SIZE_T i = 0; i < nSize / 2; i++)
    {
        int nSample = pBuffer[i] * 2;
        if (nSample > 32767)
            pBuffer[i] = 32767;
        else if (nSample < -32768)
            pBuffer[i] = -32768;
        else
            pBuffer[i] = (short)nSample;
    }
    return true;
}
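The clamped doubling in onRecordAudioFrame generalizes to any integer gain. Below is a self-contained sketch of the same saturating amplification on a raw 16-bit PCM buffer; the helper name is hypothetical, not part of the SDK:

```cpp
#include <cstddef>
#include <cstdint>

// Amplify 16-bit PCM samples in place by an integer gain, saturating at the
// int16_t limits exactly as the onRecordAudioFrame sample above does
void AmplifyPcm16(int16_t* samples, std::size_t count, int gain) {
    for (std::size_t i = 0; i < count; ++i) {
        int amplified = samples[i] * gain; // widened to int, so no overflow here
        if (amplified > 32767)
            samples[i] = 32767;
        else if (amplified < -32768)
            samples[i] = -32768;
        else
            samples[i] = static_cast<int16_t>(amplified);
    }
}
```

Saturation matters here: without the clamp, a loud input doubled past 32767 would wrap around and produce harsh distortion.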
Get the playback raw audio data
// Get the playback raw audio data
bool COriginalAudioProcFrameObserver::onPlaybackAudioFrame(AudioFrame& audioFrame)
{
    return true;
}
Get the mixed recorded and playback audio frame
// Get the mixed recorded and playback audio frame
bool COriginalAudioProcFrameObserver::onMixedAudioFrame(AudioFrame& audioFrame)
{
    return true;
}
Get the audio frame of a specified user before mixing
// Get the audio frame of a specified user before mixing
bool COriginalAudioProcFrameObserver::onPlaybackAudioFrameBeforeMixing(unsigned int uid, AudioFrame& audioFrame)
{
    return true;
}
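Because onPlaybackAudioFrameBeforeMixing identifies the sender by uid, it enables per-user processing, for example muting one remote user before mixing. A minimal, SDK-free sketch (the helper name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Zero out a user's 16-bit PCM buffer when their uid matches the uid to mute,
// mirroring a per-user check inside onPlaybackAudioFrameBeforeMixing
void MutePcm16IfUser(unsigned int uid, unsigned int mutedUid,
                     int16_t* samples, std::size_t count) {
    if (uid == mutedUid)
        std::fill(samples, samples + count, static_cast<int16_t>(0));
}
```

Frames from other users pass through untouched, so only the targeted user is silenced in the final mix.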
To update the sampling rate of the audio data in the callbacks, you can call the following methods:
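These methods take a target sample rate, channel count, and a samplesPerCall value (for example, setRecordingAudioFrameParameters in the Agora native SDK; exact names and signatures vary by SDK version). As a sketch, samplesPerCall for a given callback interval follows directly from the sample rate and channel count; the helper below is hypothetical, not an SDK function:

```cpp
// Hypothetical helper: compute the samplesPerCall argument of the
// set*AudioFrameParameters methods from the target sample rate, channel
// count, and callback interval in milliseconds (e.g. a 10 ms frame)
int SamplesPerCall(int sampleRateHz, int channels, int intervalMs) {
    return sampleRateHz * channels * intervalMs / 1000;
}
```

For example, 44100 Hz stereo audio delivered every 10 ms yields 882 samples per callback.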