Help with AudioQueueServices!

Apprentice
Posts: 16
Joined: 2010.01
Post: #1
I need to set certain sounds in my app to play after a delay. I think I need to use Audio Queue Services — does anyone have any general info on how to implement this? I have gone through Apple's documentation, but it's still a little hard to understand how to translate it into code.
Moderator
Posts: 3,579
Joined: 2003.06
Post: #2
You don't want to mess with low(er)-level stuff like Audio Queues. Use something like AudioServicesPlaySystemSound and AVAudioPlayer instead. In the iPhone Application Programming Guide, under Multimedia Support, read the sections about AudioServicesPlaySystemSound and AVAudioPlayer. Ignore the Audio Queue stuff.
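For the "play after a delay" part specifically, the timing doesn't need anything from the audio APIs at all; it's just bookkeeping you can poll from your game loop or a timer. Here's a minimal sketch in plain C of that idea — all of the names here are hypothetical, and the actual playback call where the comment indicates would be AudioServicesPlaySystemSound or a message to an AVAudioPlayer:

```c
#include <stddef.h>

// Hypothetical scheduler: a small table of pending sounds, each with an
// absolute fire time, polled once per frame.
#define MAX_PENDING 16

typedef struct {
    int    soundID;   // whatever handle your playback API uses
    double fireTime;  // absolute time at which to play
    int    active;
} PendingSound;

static PendingSound pending[MAX_PENDING];

// Schedule soundID to play `delay` seconds after `now`. Returns 0 if full.
static int ScheduleSound(int soundID, double now, double delay)
{
    for (size_t i = 0; i < MAX_PENDING; i++)
    {
        if (!pending[i].active)
        {
            pending[i].soundID = soundID;
            pending[i].fireTime = now + delay;
            pending[i].active = 1;
            return 1;
        }
    }
    return 0;
}

// Call once per frame; returns the ID of a sound that is due, or -1.
// In a real app this is where you'd actually trigger playback, e.g. with
// AudioServicesPlaySystemSound.
static int PollDueSound(double now)
{
    for (size_t i = 0; i < MAX_PENDING; i++)
    {
        if (pending[i].active && now >= pending[i].fireTime)
        {
            pending[i].active = 0;
            return pending[i].soundID;
        }
    }
    return -1;
}
```

That keeps the delay logic out of the audio code entirely, so the playback side can stay as simple as the high-level APIs above.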
Member
Posts: 27
Joined: 2010.01
Post: #3
AnotherJake Wrote:You don't want to mess with low(er)-level stuff like Audio Queues. Use something like AudioServicesPlaySystemSound and AVAudioPlayer instead. In the iPhone Application Programming Guide, under Multimedia Support, read the sections about AudioServicesPlaySystemSound and AVAudioPlayer. Ignore the Audio Queue stuff.

Looking over your gmbtrack code, it would seem you're the best person here to ask a few questions about iPhone audio.

If you remember from my other post, I am trying to get audio playing in real time from video decoded with libavcodec (FFmpeg).

I tried using Audio Queues but am not getting very far. From the (really lousy) documentation I could find, it seems like I would need an additional callback that listens for the audio stream.

You mentioned trying RemoteIO. Do you know of any good resources on it — books, anything? I can't seem to find much.

Also, any idea (or any pointer to where to look) how SDL generates its audio? They have no problem with audio, and on the iPhone they are most likely using Audio Queues.

Any help would be appreciated.

Here's what the app does so far.

Discovers and shows content from UPnP servers (including PlayOn).
Allows the user to pick a video and passes it to ffmpegViewController.

Video-wise it can play any stream — if only I could figure out how to do the audio.
Moderator
Posts: 3,579
Joined: 2003.06
Post: #4
A quick glance at the SDL source code suggests to me that they're using RemoteIO.

Figuring out RemoteIO is not particularly easy because, as usual, the documentation is nearly non-existent or rather poor -- at least the last time I looked. I put together a tiny demo that might get you started. Make a new OpenGL ES app from the template in Xcode and add the AudioToolbox framework to the project. Then copy the code below and paste it over what's in the <myAppName>AppDelegate.m file. I called mine iPhoneAUTest, so it wound up being iPhoneAUTestAppDelegate.m; you can simply change that to whatever you want. Here's the code:

Code:
#import "iPhoneAUTestAppDelegate.h"
#import "EAGLView.h"
#import <AudioUnit/AudioUnit.h>
#include <math.h> // for sinf

static bool            silent = false;
static float        volume = 0.25f;
static AudioUnit    audioUnit;

static void RenderAudio(void *left, void *right, unsigned numSamples, unsigned dataSize)
{
    // early out
    if (silent)
    {
        memset(left, 0, dataSize);
        memset(right, 0, dataSize);
        return;
    }
    
    // do something like this to generate your own audio
    unsigned i;
    for (i = 0; i < numSamples; i++)
    {
        // do a cheesy sine wave in the left channel so we have something to hear,
        // and just copy it over to the right channel for demonstration
        static unsigned long sampleIndex = 0;
        float myFloatSampleLeft = sinf((float)sampleIndex++ * 0.1f) * volume;
        float myFloatSampleRight = myFloatSampleLeft;
        
        // output your audio here, in this case, sample by sample, converting from float to int
        ((AudioSampleType *)left)[i] = (AudioSampleType)(myFloatSampleLeft * 32000.0f);
        ((AudioSampleType *)right)[i] = (AudioSampleType)(myFloatSampleRight * 32000.0f);
    }
    
    /*
        do something like this instead for decoding audio using some codec
    
    for (i = 0; i < numSamples; i++)
    {
        if (myStreamIndex >= myNumSamplesInBuffer)
            myDecodeAnotherBufferWorthOfSamples();
        
        // could do memcpy from your buffer here instead for better efficiency
        ((AudioSampleType *)left)[i] = (AudioSampleType)myBuffer[myStreamIndex++];
        ((AudioSampleType *)right)[i] = (AudioSampleType)myBuffer[myStreamIndex++];
    }*/
}

static OSStatus AudioOutputCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    unsigned    dataSize = ioData->mBuffers[0].mDataByteSize;
    unsigned    numSamples = dataSize / sizeof(AudioSampleType); // 16-bit
    
    RenderAudio(ioData->mBuffers[0].mData, ioData->mBuffers[1].mData, numSamples, dataSize);
    return noErr;
}

static void DisableAudioOutputiPhone(void)
{
    AudioOutputUnitStop(audioUnit);
    
    AURenderCallbackStruct        callback;
    callback.inputProc = NULL;
    callback.inputProcRefCon = NULL;
    AudioUnitSetProperty(audioUnit,
                        kAudioUnitProperty_SetRenderCallback,
                        kAudioUnitScope_Global,
                        0,
                        &callback,
                        sizeof(AURenderCallbackStruct));
    
    AudioUnitUninitialize(audioUnit);
    AudioComponentInstanceDispose(audioUnit);
}

static void EnableAudioOutputiPhone(void)
{
    OSStatus                    status;
    AudioStreamBasicDescription    audioFormat;
    AudioComponentDescription    desc;
    
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  0,
                                  &flag,
                                  sizeof(flag));
    
    // 16-bit int output
    //audioFormat.mSampleRate = 44100.0;
    audioFormat.mSampleRate = 22050.0;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFormatFlags |= kAudioFormatFlagIsNonInterleaved; // split into left/right for the callback
    UInt32 sampleSize = sizeof(AudioSampleType); // 16-bit signed int
    audioFormat.mBytesPerPacket = sampleSize;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mBytesPerFrame = sampleSize;
    audioFormat.mChannelsPerFrame = 2;
    audioFormat.mBitsPerChannel = 8 * sampleSize;
    
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  0,
                                  &audioFormat,
                                  sizeof(audioFormat));
    
    AURenderCallbackStruct    callbackStruct;
    callbackStruct.inputProc = AudioOutputCallback;
    callbackStruct.inputProcRefCon = NULL;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  0,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    
    status = AudioUnitInitialize(audioUnit);
    if (status != noErr)
    {
        printf("%s ERROR: AudioUnitInitialize failed\n", __FUNCTION__);
    }
    
    AudioOutputUnitStart(audioUnit);
}

@implementation iPhoneAUTestAppDelegate

@synthesize window;
@synthesize glView;

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    EnableAudioOutputiPhone();
    [glView startAnimation];
    return YES;
}

- (void)applicationWillResignActive:(UIApplication *)application
{
    [glView stopAnimation];
}

- (void)applicationDidBecomeActive:(UIApplication *)application
{
    [glView startAnimation];
}

- (void)applicationWillTerminate:(UIApplication *)application
{
    [glView stopAnimation];
}

- (void)dealloc
{
    [window release];
    [glView release];

    [super dealloc];
}

@end