Generally, Agora SDKs use default audio modules for capturing and rendering in real-time communications.
However, these default modules might not meet your development requirements. For such scenarios, Agora provides a solution to enable a custom audio source and/or renderer. This article describes how to do so using the Agora Native SDK.
Agora provides an open-source demo project on GitHub. You can view the source code on GitHub or download the project to try it out.
Before proceeding, ensure that you have implemented the basic real-time communication functions in your project. For details, see Start a Voice Call or Start Interactive Live Audio Streaming.
Refer to the following steps to implement a custom audio source in your project:

1. Before calling joinChannel, call setExternalAudioSource to specify the custom audio source.
2. Call pushExternalAudioFrame to send the audio frames to the SDK for later use.

The following diagram shows how the audio data is transferred when you customize the audio source: after capturing audio frames from the custom source, call pushExternalAudioFrame to send them to the SDK.

Refer to the following code samples to implement the custom audio source in your project.
// Specifies the custom audio source
engine.setExternalAudioSource(true, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_COUNT);
// The local user joins the channel
int res = engine.joinChannel(accessToken, channelId, "Extra Optional Data", 0);
public class RecordThread extends Thread
{
private AudioRecord audioRecord;
public static final int DEFAULT_SAMPLE_RATE = 16000;
public static final int DEFAULT_CHANNEL_COUNT = 1, DEFAULT_CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
private byte[] buffer;
// Set to true from another thread to stop audio capture
private volatile boolean stopped = false;
RecordThread()
{
int bufferSize = AudioRecord.getMinBufferSize(DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG,
AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, DEFAULT_SAMPLE_RATE, DEFAULT_CHANNEL_CONFIG,
AudioFormat.ENCODING_PCM_16BIT, bufferSize);
buffer = new byte[bufferSize];
}
// Starts audio capture. Reads and sends the captured frames until audio capture stops.
@Override
public void run()
{
try
{
// Start audio recording
audioRecord.startRecording();
while (!stopped)
{
// Reads the captured audio frames
int result = audioRecord.read(buffer, 0, buffer.length);
if (result >= 0)
{
// Sends the captured audio frames to the SDK
CustomAudioSource.engine.pushExternalAudioFrame(
buffer, System.currentTimeMillis());
}
else
{
logRecordError(result);
}
Log.d(TAG, "Data size: " + result);
}
release();
}
catch (Exception e)
{e.printStackTrace();}
}
...
}
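The buffer size above comes from AudioRecord.getMinBufferSize, but it can help to sanity-check how many bytes a given PCM configuration produces per unit of time. The following plain-Java sketch (illustrative arithmetic only, not part of the Agora API) shows the calculation for 16-bit PCM:

```java
public class PcmFrameMath {
    // Bytes of 16-bit PCM produced in `ms` milliseconds at the given
    // sample rate and channel count: samples * channels * 2 bytes/sample.
    // Multiply before dividing to avoid integer-division truncation.
    static int bytesFor(int sampleRate, int channels, int ms) {
        return sampleRate * ms / 1000 * channels * 2;
    }

    public static void main(String[] args) {
        // 16 kHz mono (DEFAULT_SAMPLE_RATE / DEFAULT_CHANNEL_COUNT above):
        // a 10 ms frame is 16000 * 10 / 1000 * 1 * 2 = 320 bytes.
        System.out.println(bytesFor(16000, 1, 10));
        // One second of the same stream is 32000 bytes.
        System.out.println(bytesFor(16000, 1, 1000));
    }
}
```

Comparing such a figure against the size AudioRecord reports is a quick way to confirm the sample rate and channel count you pass to setExternalAudioSource match what you actually capture.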
Refer to the following steps to implement a custom audio renderer in your project:

1. Before calling joinChannel, call setExternalAudioSink to enable and configure the external audio renderer.
2. After joining the channel, call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.

The following diagram shows how the audio data is transferred when you customize the audio renderer: call pullPlaybackAudioFrame to retrieve the audio data sent by a remote user.

Refer to the following code samples to implement the custom audio renderer in your project:
// Enables the custom audio renderer
rtcEngine.setExternalAudioSink(
true, // Enables external audio rendering
44100, // Sampling rate (Hz). You can set this value as 8000, 16000, 32000, 44100, or 48000
1 // The number of channels of the external audio source. This value must not exceed 2
);
// Retrieves remote audio frames for playback
rtcEngine.pullPlaybackAudioFrame(
data, // The data type is byte[]
lengthInByte // The size of the audio data (byte)
);
Capturing and playing the audio data in these examples requires methods from outside the Agora SDK, such as Android's AudioRecord and AudioTrack classes.
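To make the pull model concrete, the following plain-Java sketch simulates the render loop with a hypothetical AudioPuller interface standing in for rtcEngine.pullPlaybackAudioFrame; the interface and the frameBytes10Ms helper are illustrative assumptions, not part of the Agora SDK. In a real app, each pulled buffer would be handed to a playback API such as Android's AudioTrack:

```java
import java.util.Arrays;

public class PullLoopSketch {
    // Hypothetical stand-in for rtcEngine.pullPlaybackAudioFrame: fills
    // `data` with up to `lengthInByte` bytes of remote audio, returns 0 on success.
    interface AudioPuller {
        int pull(byte[] data, int lengthInByte);
    }

    // Bytes of 16-bit PCM in one 10 ms frame: (sampleRate / 100) samples * channels * 2.
    static int frameBytes10Ms(int sampleRate, int channels) {
        return sampleRate / 100 * channels * 2;
    }

    // Pulls `frames` consecutive 10 ms frames and returns the total bytes pulled.
    // In a real renderer, each iteration would pass `buffer` to AudioTrack.write().
    static long drain(AudioPuller engine, byte[] buffer, int frames) {
        long total = 0;
        for (int i = 0; i < frames; i++) {
            if (engine.pull(buffer, buffer.length) == 0) {
                total += buffer.length;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        byte[] buffer = new byte[frameBytes10Ms(44100, 1)]; // 882 bytes per 10 ms
        // A fake engine that returns silence, so the loop structure can run anywhere
        AudioPuller silence = (data, len) -> { Arrays.fill(data, (byte) 0); return 0; };
        System.out.println(buffer.length);
        System.out.println(drain(silence, buffer, 100)); // 100 frames = 1 second
    }
}
```

The key design point is that the renderer drives the timing: you pull a fixed-size frame on your own playback schedule rather than waiting for the SDK to push data to you.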