iOS Development - AudioToolbox: Recording with AudioQueue


Both recording and playback with AudioQueue use a triple-buffer scheme: three buffers are allocated once and cycled repeatedly, which avoids the overhead of repeated memory allocation.
To record with AudioQueue, you generally only need to understand the following:

  1. How to initialize
  2. How to start, pause, and stop
  3. How to configure the recording parameters
  4. How to obtain the recorded data
  5. How to handle abnormal situations

Like most modules, recording needs to be initialized. For audio, initialization means configuring the
sample rate, channel count, and format, all described by an AudioStreamBasicDescription.

Workflow

  1. Fill in the format parameters so the AudioQueue knows what kind of data we want
  2. Create the AudioQueue instance and register a record callback; the callback fires after each chunk of audio has been captured
  3. Create three empty buffers and enqueue them into the AudioQueue. We reuse these three buffers: when the record callback fires, it hands back one buffer, which is automatically removed from the queue; after reading its data we enqueue it again
  4. Start recording and handle the record callbacks

Parameter configuration and initialization

    _aqc.mDataFormat.mSampleRate = 16000.0; // sample rate
    _aqc.mDataFormat.mBitsPerChannel = 16; // bits per sample for one channel in a frame
    _aqc.mDataFormat.mChannelsPerFrame = 1; // channels per frame
    _aqc.mDataFormat.mFormatID = kAudioFormatLinearPCM; // data format: PCM, AAC, ...
    _aqc.mDataFormat.mFramesPerPacket = 1; // frames per packet
    _aqc.mDataFormat.mBytesPerFrame = (_aqc.mDataFormat.mBitsPerChannel / 8) * _aqc.mDataFormat.mChannelsPerFrame;
    _aqc.mDataFormat.mBytesPerPacket = _aqc.mDataFormat.mBytesPerFrame * _aqc.mDataFormat.mFramesPerPacket;
    _aqc.mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _aqc.frameSize = kFrameSize; // set to 2048 here
    // Create the input queue. Parameters: format, callback, user data (self/this),
    // callback run loop (NULL = an internal thread), run loop mode, flags, out queue
    AudioQueueNewInput(&_aqc.mDataFormat, AQInputCallback, (__bridge void *)(self), NULL, kCFRunLoopCommonModes, 0, &_aqc.queue);
    // Allocate the three buffers and add them to the queue
    for (int i = 0; i < kNumberBuffers; i++)
    {
        AudioQueueAllocateBuffer(_aqc.queue, _aqc.frameSize, &_aqc.mBuffers[i]);
        AudioQueueEnqueueBuffer(_aqc.queue, _aqc.mBuffers[i], 0, NULL);
    }
    
    _aqc.run = 1;
    AudioQueueStart(_aqc.queue, NULL); // start the queue

You may have the following questions at this point:
1. What is the sample rate?
First you need to understand audio quantization: the process of converting an analog signal into a digital one.
The analog signal is a continuous waveform. To store it in memory, the waveform over a span of time, say t = 1 s, must be converted entirely into numbers (a digital signal). For example, the waveform value at 0.0001 s might be 1.4 and the value at 0.0002 s might be 3.8; the number of bits used to store each such value is the bit depth, mBitsPerChannel. The sample rate is then simply the sampling frequency: how many slices 1 s is divided into when recording waveform values. Common sample rates are 16 kHz, 44.1 kHz, and so on.

2. What is a frame of data?

A "frame" here is a slice of the source audio: the audio is divided into frames, and one or more frames are sent out in each packet, so a packet may contain several frames. (Note that in the AudioStreamBasicDescription above, a frame has the narrower meaning of one sample across all channels.)
A frame of audio can be mono or stereo; a stereo frame is twice the size of a mono one.
Size of a PCM frame: PCM buffer size = sample rate × duration × bit depth / 8 × channels (bytes).
For 1 s of 16 kHz, mono, 16-bit audio:
16000 × 1 × 16 / 8 = 32000 bytes.
If 1 s is divided into 20 frames (i.e. 50 ms per frame),
each frame is 32000 / 20 = 1600 bytes.

So with kFrameSize = 2048 set in the code above, the queue fills 32000 / 2048 = 15.625 buffers per second, i.e. about 16 callbacks per second are needed to keep up.

Start, pause, and stop

AudioQueueStart(_aqc.queue, NULL); // start the queue
AudioQueuePause(_aqc.queue); // pause
AudioQueueStop(_aqc.queue, true); // stop

When finishing, remember to release the related resources:
for (int i = 0; i < kNumberBuffers; i++) {
    AudioQueueFreeBuffer(_aqc.queue, _aqc.mBuffers[i]);
}
AudioQueueDispose(_aqc.queue, true); // disposes the queue and releases its remaining resources

Data handling

static void AQInputCallback (void                   * inUserData,
                             AudioQueueRef          inAudioQueue,
                             AudioQueueBufferRef    inBuffer,
                             const AudioTimeStamp   * inStartTime,
                             UInt32          inNumPackets,
                             const AudioStreamPacketDescription * inPacketDesc)
{
    GSAudioSendEngine *engine = (__bridge GSAudioSendEngine *) inUserData;
    
    if (!engine) {
        ALog(@"engine is dealloc");
        return;
    }
    // The actual captured time: pauses and stops are not counted in
    // inStartTime, so this is the real duration of recorded data
    NSTimeInterval playedTime = inStartTime->mSampleTime / engine.aqc.mDataFormat.mSampleRate;
    printf("inNumPackets %u record time %f\n", (unsigned int)inNumPackets, playedTime);
    if (inNumPackets > 0) {
        // hand the captured data out to the consumer
        [engine processAudioBuffer:inBuffer withQueue:inAudioQueue];
    }
    
    if (engine.aqc.run) {
        AudioQueueEnqueueBuffer(engine.aqc.queue, inBuffer, 0, NULL);
    }
    
}

Handling audio interruptions, foreground/background transitions, and route changes

  1. If an app knows whether it adopts a background-audio mode, it does not need to check for that mode at runtime. The _isBackgroundAudioMode check exists here only because this code ships as an SDK
  2. Listening for the audio-interruption notification alone is not enough: when separate pieces of code affect each other, route-change notifications are triggered and must be handled as well
  3. Foreground/background transitions also need handling. For an app without background audio, if the AudioSession is not deactivated when entering the background, an interruption notification is received when it is activated again. See this article
  4. A more rigorous app also handles the AVAudioSessionMediaServicesWereResetNotification notification. See this article

- (void)handleRouteChange:(NSNotification *)notification {
    //    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSString *seccReason = @"";
    NSInteger reason = [[[notification userInfo] objectForKey:AVAudioSessionRouteChangeReasonKey] integerValue];
    //  AVAudioSessionRouteDescription* prevRoute = [[notification userInfo] objectForKey:AVAudioSessionRouteChangePreviousRouteKey];
    switch (reason) {
        case AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory:
            seccReason = @"The route changed because no suitable route is now available for the specified category.";
            break;
        case AVAudioSessionRouteChangeReasonWakeFromSleep:
        case AVAudioSessionRouteChangeReasonOverride:
        case AVAudioSessionRouteChangeReasonCategoryChange:
        case AVAudioSessionRouteChangeReasonOldDeviceUnavailable:
        case AVAudioSessionRouteChangeReasonNewDeviceAvailable: {
            seccReason = [NSString stringWithFormat:@"AVAudioSession Route change Reason is %ld (wakesleep:6,override:4,change:3,oldUnavailiable:2,newDevice:1)",(long)reason];
            // When the route changes (e.g. a third-party Audio Unit component initializes,
            // or the session's Mode/Category is reset), the queue may stall and stop
            // producing data, so reset it here
            [self resetAudio];
        }
            
            break;
        case AVAudioSessionRouteChangeReasonUnknown:
        default:
            seccReason = @"The reason for the change is unknown.";
            break;
    }
    ALog(@" handleRouteChange reason is %@", seccReason);
}

- (void)resetAudio {
    if (_aqc.run) {
        [self stop];
        [self start];
    }else {
        [self start];
    }
}

- (void)handleAudioSessionInterruption:(NSNotification*)notification {
    if (!self.isRunning) return;
    ALog(@"handleAudioSessionInterruption:%@",notification);
    NSNumber *interruptionType = [[notification userInfo] objectForKey:AVAudioSessionInterruptionTypeKey];
    NSNumber *interruptionOption = [[notification userInfo] objectForKey:AVAudioSessionInterruptionOptionKey];
    
    switch (interruptionType.unsignedIntegerValue) {
        case AVAudioSessionInterruptionTypeBegan:{
            // • Audio has stopped, already inactive
            // • Change state of UI, etc., to reflect non-playing state
            if (_isBackground) {
                if (_isBackgroundAudioMode) {
                    [self stop]; //should stop whatever in background or app is not background audio type
                }
            }else{
                [self stop];
            }
            
            _isAudioInteruptBegan = YES;
        } break;
        case AVAudioSessionInterruptionTypeEnded:{
            // • Make session active
            // • Update user interface
            // • AVAudioSessionInterruptionOptionShouldResume option
            if (interruptionOption.unsignedIntegerValue == AVAudioSessionInterruptionOptionShouldResume) {
                // Here you should continue playback.
                // Apps that do not require user input to begin audio playback (such as games) can ignore this flag and always resume playback when an interruption ends.
                NSError *error;
                [[AVAudioSession sharedInstance] setActive:YES withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
                if (error) {
                    NSLog(@"error: %@", error.description);
                }
                if (_isBackground) { // in the background
                    // the app supports background audio and no other app is occupying audio
                    if (_isBackgroundAudioMode && ![AVAudioSession sharedInstance].secondaryAudioShouldBeSilencedHint) {
                        [self resetAudio];
                        _audioCookedValue = 1;
                    }else{
                        // cannot reset here; defer the reset until entering the foreground
                        _audioCookedValue = 2;
                    }
                }else{ // if audio entry via UITextView / UITextField was interrupted, a reset is needed
                    [self resetAudio];
                    _audioCookedValue = 1;
                }
                
                ALog(@"AVAudioSessionInterruptionOptionShouldResume");
            }else {
                if (![AVAudioSession sharedInstance].secondaryAudioShouldBeSilencedHint) {
                    if (_isBackground) {
                        if (_isBackgroundAudioMode) {
                            [self resetAudio];
                            _audioCookedValue = 1;
                        }else{
                            _audioCookedValue = 2;
                        }
                    }else{ // if audio entry via UITextView / UITextField was interrupted, a reset is needed
                        [self resetAudio];
                        _audioCookedValue = 1;
                    }
                    ALog(@"AVAudioSessionInterruptionOptionKey is 0 - UnknowError");
                }else {
                    ALog(@"secondaryAudioShouldBeSilencedHint is YES");
                }
            }
        }
            break;
        default:
            break;
    }
}

- (void)appDidBecomeActive {
    if (!self.isRunning) return;
    _isBackground = NO;
    _isAudioInteruptBegan = NO; // once back in the foreground, treat any audio interruption as over
    
    if (_audioCookedValue == 2) { // a reset was deferred until returning to the foreground
        [self resetAudio];
    }
    _audioCookedValue = 0;
    if (!_isBackgroundAudioMode) {
        [[AVAudioSession sharedInstance] setActive:YES error:nil];
        // reactivate the session here to avoid later receiving a
        // spurious interruption-began notification
    }
}

- (void)appWillResignActive {
    if (!self.isRunning) return;
    
    _isBackground = YES;
    if (!_isBackgroundAudioMode) {
        [self stop];
        NSError *error = nil;
        if (![AVAudioSession sharedInstance].secondaryAudioShouldBeSilencedHint) {
            // deactivate the audio session to avoid confusing interruption notifications later
            [[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error];
        }
        
        if (error) {
            ALog(@"inactive session error %@",error.description);
        }
        // the AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation option is needed
        // to avoid receiving a spurious interruption-began notification
    }
}

The demo is available at https://github.com/shengpeng3344/AudioQueueRecordSample/

Learning never ends; let's keep learning from each other.


Copyright notice: this is an original article by shengpeng3344, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.