Using Sound in iPhone OS
iPhone OS offers a rich set of tools for working with sound in your application. These tools are arranged into frameworks according to the features they provide, as follows:
■ Use the Media Player framework to play songs, audio books, or audio podcasts from a user’s iPod library.
■ Use the AV Foundation framework to play and record audio using a simple Objective-C interface.
■ Use the Audio Toolbox framework to play audio with synchronization capabilities, access packets of incoming audio, parse audio streams, convert audio formats, and record audio with access to individual packets.
■ Use the Audio Unit framework to connect to and use audio processing plug-ins.
■ Use the OpenAL framework to provide positional audio playback in games and other applications.
Support for OpenAL 1.1 in iPhone OS is built on top of Core Audio.
To allow your code to use the features of an audio framework, add that framework to your Xcode project, link against it in any relevant targets, and add an appropriate #import statement near the top of relevant source files. For example, to provide access to the AV Foundation framework in a source file, add a #import
<AVFoundation/AVFoundation.h> statement near the top of the file. For detailed information on how to add frameworks to your project, see the Xcode documentation.
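For example, a source file that uses several of these frameworks might begin with import statements such as the following (a sketch; include only the frameworks your target actually links against):
#import <MediaPlayer/MediaPlayer.h>      // iPod library access
#import <AVFoundation/AVFoundation.h>    // AVAudioPlayer, AVAudioRecorder, AVAudioSession
#import <AudioToolbox/AudioToolbox.h>    // Audio Queue Services, System Sound Services, Audio File Stream Services
#import <AudioUnit/AudioUnit.h>          // audio processing plug-ins
#import <OpenAL/al.h>                    // positional audio playback
#import <OpenAL/alc.h>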
This section on sound provides a quick introduction to implementing iPhone OS audio features, as listed here:
■ To play songs, audio podcasts, and audio books from a user’s iPod library, see “Playing Media Items with iPod Library Access”
■ To play and record audio in the fewest lines of code, use the AV Foundation framework.
■ To provide full-featured audio playback including stereo positioning, level control, and simultaneous sounds, use OpenAL.
■ To provide lowest latency audio, especially when doing simultaneous input and output (such as for a VoIP application), use the I/O unit or the Voice Processing I/O unit.
■ To play sounds with the highest degree of control, including support for synchronization, use Audio Queue Services.
■ To parse audio streamed from a network connection, use Audio File Stream Services.
■ To play user-interface sound effects, or to invoke vibration on devices that provide that feature, use
System Sound Services.
Be sure to read the next section, “The Basics: Hardware-Assisted Codecs, Audio Formats, and Audio Sessions”, for critical information on how audio works on iPhone. Also read “Best Practices for iPhone Audio” , which offers guidelines and lists the audio and file formats to use for best performance and best user experience.
When you’re ready to dig deeper, the iPhone Dev Center contains guides, reference books, sample code, and more. For tips on how to perform common audio tasks, see Audio & Video Coding How-To's. For in-depth explanations of audio development in iPhone OS, see Core Audio Overview, Audio Session Programming Guide, Audio Queue Services Programming Guide, System Audio Unit Access Guide, and iPod Library Access Programming Guide.
=> The Basics: Hardware-Assisted Codecs, Audio Formats, and Audio Sessions
To get oriented toward iPhone audio development, it’s important to understand a few things about the hardware and software architecture of iPhone OS devices.
iPhone Hardware and Software Audio Codecs
iPhone OS applications can use a wide range of audio data formats. Starting in iPhone OS 3.0, most of these formats can use software-based encoding and decoding. Software decoding supports simultaneous playback of multiple sounds in all formats although, for performance reasons, you should consider which format is best in a given scenario, as described in “Preferred Audio Formats in iPhone OS”.
Hardware-assisted decoding generally entails less of a performance impact than does software decoding. If you need to maximize the video frame rate in your application, use uncompressed audio, or use hardware-assisted decoding of your compressed audio content.
The following iPhone OS compressed audio formats can employ hardware-assisted decoding for playback:
■ AAC
■ ALAC (Apple Lossless)
■ MP3
The device can play only a single instance of one of these formats at a time using hardware-assisted decoding.
For example, if you are playing a stereo MP3 sound, a second simultaneous MP3 sound will use software decoding. Similarly, you cannot simultaneously play an AAC and an ALAC sound using hardware. If the iPod application is playing an AAC sound in the background, your application plays AAC, ALAC, and MP3 audio using software decoding.
To play multiple sounds with best performance, or to efficiently play sounds while the iPod is playing in the background, use linear PCM (uncompressed) or IMA4 (compressed) audio.
To learn how to check at runtime which hardware and software codecs are available on a device, read the discussion for the kAudioFormatProperty_HardwareCodecCapabilities constant in Audio Format Services Reference and read Technical Q&A QA1663, “Determining the availability of the AAC hardware encoder at runtime.”
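For example, a runtime check along the following lines asks whether an AAC decoder can currently run in hardware. This is a sketch based on the documented semantics of the property; the helper name and minimal error handling are illustrative, not the literal Q&A listing.
#include <AudioToolbox/AudioToolbox.h>

// Sketch: returns true if the device can decode AAC in hardware at the moment of the call.
// The property takes an array of AudioClassDescription structures and answers how many of
// them, starting from the first, the hardware can service simultaneously.
static Boolean CanUseHardwareAACDecoder (void) {
    AudioClassDescription request = {
        kAudioDecoderComponentType,           // asking about a decoder...
        kAudioFormatMPEG4AAC,                 // ...for the AAC format...
        kAppleHardwareAudioCodecManufacturer  // ...implemented in hardware
    };
    UInt32 count = 0;
    UInt32 size  = sizeof (count);
    OSStatus status = AudioFormatGetProperty (
        kAudioFormatProperty_HardwareCodecCapabilities,
        sizeof (request), &request,
        &size, &count
    );
    return (status == noErr) && (count >= 1);
}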
Audio Playback and Recording Formats
The audio playback formats supported in iPhone OS are the following:
■ AAC
■ HE-AAC
■ AMR (Adaptive Multi-Rate, a format for speech)
■ ALAC (Apple Lossless)
■ iLBC (internet Low Bitrate Codec, another format for speech)
■ IMA4 (IMA/ADPCM)
■ linear PCM (uncompressed)
■ μ-law and a-law
■ MP3 (MPEG-1 audio layer 3)
The audio recording formats supported in iPhone OS are the following:
■ AAC (on supported devices only)
■ ALAC (Apple Lossless)
■ iLBC (internet Low Bitrate Codec, for speech)
■ IMA4 (IMA/ADPCM)
■ linear PCM
■ μ-law and a-law
Here is a summary of how iPhone OS supports audio formats for single or multiple playback:
■ Linear PCM and IMA4 (IMA/ADPCM). You can play multiple linear PCM or IMA4 sounds simultaneously in iPhone OS without incurring CPU resource problems. The same is true for the AMR and iLBC speech-quality formats, and for the μ-law and a-law compressed formats. When using compressed formats, check the sound quality to ensure it meets your needs.
■ AAC, MP3, and ALAC (Apple Lossless). Playback for AAC, MP3, and ALAC sounds can use efficient hardware-assisted decoding on iPhone OS devices, but these codecs all share a single hardware path.
The device can play only a single instance of one of these formats at a time using hardware-assisted decoding.
The single hardware path for AAC, MP3, and ALAC playback has implications for “play along” style applications, such as a virtual piano. If the user is playing a song in one of these three formats in the iPod application, then any audio your application plays along over it will use software decoding.
Audio Sessions
Core Audio’s audio session APIs let you define your application’s general audio behavior and design it to work well within the larger audio context of the device it’s running on. These APIs are described in Audio Session Services Reference and AVAudioSession Class Reference. Using these APIs, you can specify such behaviors as:
■ Whether or not your audio should be silenced by the Ring/Silent switch
■ Whether or not your audio should stop upon screen lock
■ Whether other audio, such as from the iPod, should continue playing or be silenced when your audio starts
The audio session APIs also let you respond to user actions, such as the plugging in or unplugging of headsets, and to events that use the device’s sound hardware, such as Clock and Calendar alarms and incoming phone calls.
The audio session APIs provide three programmatic features, described in below Table.
Table: Features provided by the audio session APIs
There are two interfaces for working with the audio session:
■ A streamlined, Objective-C interface that gives you access to the core features and is described in AVAudioSession Class Reference and AVAudioSessionDelegate Protocol Reference.
■ A C-based interface that provides comprehensive access to all basic and advanced audio session features and is described in Audio Session Services Reference.
You can mix and match audio session code from AV Foundation and Audio Session Services; the two interfaces are completely compatible.
An audio session comes with some default behavior that you can use to get started in development. However, except for certain special cases, the default behavior is unsuitable for a shipping application that uses audio.
By configuring and using the audio session, you can express your audio intentions and respond to OS-level audio decisions.
For example, when using the default audio session, audio in your application stops when the Auto-Lock period times out and the screen locks. If you want to ensure that playback continues with the screen locked, include the following lines in your application’s initialization code:
NSError *setCategoryErr = nil;
NSError *activationErr = nil;
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryPlayback
error: &setCategoryErr];
[[AVAudioSession sharedInstance]
setActive: YES
error: &activationErr];
The AVAudioSessionCategoryPlayback category ensures that playback continues when the screen locks. Activating the audio session puts the specified category into effect.
How you handle the interruption caused by an incoming phone call or Clock or Calendar alarm depends on the audio technology you are using, as shown in the table below.
Table: Handling audio interruptions
Every iPhone OS application, with rare exceptions, should actively manage its audio session. To learn how, read Audio Session Programming Guide. To ensure that your application conforms to Apple recommendations for audio session behavior, learn about those recommendations in “Using Sound” in iPhone Human Interface Guidelines.
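For example, if you use the AV Foundation interface, your audio session delegate might respond to an interruption along these lines. This is a minimal sketch: the player property and the playbackWasInterrupted flag are illustrative names for state your controller would keep.
- (void) beginInterruption {
    // The system has already stopped your audio; note that an interruption is in progress.
    playbackWasInterrupted = YES;
}

- (void) endInterruption {
    // Reactivate the session and resume playback if it was interrupted.
    [[AVAudioSession sharedInstance] setActive: YES error: nil];
    if (playbackWasInterrupted) {
        [self.player play];
        playbackWasInterrupted = NO;
    }
}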
=> Playing Audio
This section introduces you to playing sounds in iPhone OS using iPod library access, System Sound Services, Audio Queue Services, the AV Foundation framework, and OpenAL.
Playing Media Items with iPod Library Access
Starting in iPhone OS 3.0, iPod library access lets your application play a user’s songs, audio books, and audio podcasts. The API design makes basic playback very simple while also supporting advanced searching and playback control.
As shown in the figure below, your application has two ways to retrieve items. The media item picker, shown on the left, is an easy-to-use, pre-packaged view controller that behaves like the built-in iPod application’s music selection interface. For many applications, this is sufficient. If the picker doesn’t provide the specialized access control you want, the media query interface will. It supports predicate-based specification of items from the iPod library.
Figure: Using iPod library access
As depicted on the right side of the figure, your application then plays the retrieved media items using the music player provided by this API.
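For example, using the media query interface, a few lines like the following queue up every song in the user’s iPod library on the music player and start playback (a sketch; the method name is illustrative):
#import <MediaPlayer/MediaPlayer.h>

- (void) playEntireLibrary {
    MPMusicPlayerController *musicPlayer =
        [MPMusicPlayerController applicationMusicPlayer];
    // A media query with no predicates matches every song in the iPod library.
    MPMediaQuery *everySong = [MPMediaQuery songsQuery];
    [musicPlayer setQueueWithQuery: everySong];
    [musicPlayer play];
}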
=> Playing UI Sound Effects or Invoking Vibration Using System Sound Services
To play user-interface sound effects (such as button clicks), or to invoke vibration on devices that support it, use System Sound Services. This compact interface is described in System Sound Services Reference. You can find sample code in the SysSound sample in the iPhone Dev Center.
Note: Sounds played with System Sound Services are not subject to configuration using your audio session. As a result, you cannot keep the behavior of System Sound Services audio in line with other audio behavior in your application. This is the most important reason to avoid using System Sound Services for any audio apart from its intended uses.
The AudioServicesPlaySystemSound function lets you very simply play short sound files. The simplicity carries with it a few restrictions. Your sound files must be:
■ No longer than 30 seconds in duration
■ In linear PCM or IMA4 (IMA/ADPCM) format
■ Packaged in a .caf, .aif, or .wav file
In addition, when you use the AudioServicesPlaySystemSound function:
■ Sounds play at the current system audio volume, with no programmatic volume control available
■ Sounds play immediately
■ Looping and stereo positioning are unavailable
The similar AudioServicesPlayAlertSound function plays a short sound as an alert. If a user has configured their device to vibrate in Ring Settings, calling this function invokes vibration in addition to playing the sound file.
Note: System-supplied alert sounds and system-supplied user-interface sound effects are not available to your application. For example, using the kSystemSoundID_UserPreferredAlert constant as a parameter to the AudioServicesPlayAlertSound function will not play anything.
To play a sound with the AudioServicesPlaySystemSound or AudioServicesPlayAlertSound function, first create a sound ID object, as shown in the following listing.
Creating a sound ID object
// soundFileURLRef (a CFURLRef) and soundFileObject (a SystemSoundID) are
// declared as instance variables in the class interface.
// Get the main bundle for the app
CFBundleRef mainBundle = CFBundleGetMainBundle ();
// Get the URL to the sound file to play. The file in this case
// is "tap.aiff"
soundFileURLRef = CFBundleCopyResourceURL (
mainBundle,
CFSTR ("tap"),
CFSTR ("aif"),
NULL
);
// Create a system sound object representing the sound file
AudioServicesCreateSystemSoundID (
soundFileURLRef,
&soundFileObject
);
Then play the sound, as shown in the following listing.
Playing a system sound
- (IBAction) playSystemSound {
AudioServicesPlaySystemSound (self.soundFileObject);
}
In typical use, which includes playing a sound occasionally or repeatedly, retain the sound ID object until your application quits. If you know that you will use a sound only once (for example, in the case of a startup sound), you can destroy the sound ID object immediately after playing the sound, freeing memory.
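For a one-shot sound such as a startup chime, the cleanup might look like this (a sketch using the sound ID object and URL created in the earlier listing):
// Play the startup sound once, then release the sound ID object and the file URL.
AudioServicesPlaySystemSound (soundFileObject);
AudioServicesDisposeSystemSoundID (soundFileObject);
CFRelease (soundFileURLRef);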
Applications running on iPhone OS devices that support vibration can trigger that feature using System Sound Services. You specify the vibrate option with the kSystemSoundID_Vibrate identifier. To trigger it, use the AudioServicesPlaySystemSound function, as shown in the following listing.
Triggering vibration
#import <AudioToolbox/AudioToolbox.h>
#import <UIKit/UIKit.h>
- (void) vibratePhone {
AudioServicesPlaySystemSound (kSystemSoundID_Vibrate);
}
If your application is running on an iPod touch, this code does nothing.
=> Playing Sounds Easily with the AVAudioPlayer Class
The AVAudioPlayer class provides a simple Objective-C interface for playing sounds. If your application does not require stereo positioning or precise synchronization, and if you are not playing audio captured from a network stream, Apple recommends that you use this class for playback.
Using an audio player you can:
■ Play sounds of any duration
■ Play sounds from files or memory buffers
■ Loop sounds
■ Play multiple sounds simultaneously (although not with precise synchronization)
■ Control relative playback level for each sound you are playing
■ Seek to a particular point in a sound file, which supports application features such as fast forward and rewind
■ Obtain audio power data that you can use for audio level metering
The AVAudioPlayer class lets you play sound in any audio format available in iPhone OS, as described in “Audio Playback and Recording Formats”.
To configure an audio player:
1. Assign a sound file to the audio player.
2. Prepare the audio player for playback, which acquires the hardware resources it needs.
3. Designate an audio player delegate object, which handles interruptions as well as the playback-completed event.
The following code illustrates these steps. It would typically go into an initialization method of the controller class for your application. (In production code, you’d include appropriate error handling.)
Configuring an AVAudioPlayer object
// in the corresponding .h file:
// @property (nonatomic, retain) AVAudioPlayer *player;
// in the .m file:
@synthesize player; // the player object
NSString *soundFilePath =
[[NSBundle mainBundle] pathForResource: @"sound"
ofType: @"wav"];
NSURL *fileURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];
AVAudioPlayer *newPlayer =
[[AVAudioPlayer alloc] initWithContentsOfURL: fileURL
error: nil];
[fileURL release];
self.player = newPlayer;
[newPlayer release];
[player prepareToPlay];
[player setDelegate: self];
The delegate (which can be your controller object) handles interruptions and updates the user interface when a sound has finished playing. The delegate methods for the AVAudioPlayer class are described in AVAudioPlayerDelegate Protocol Reference. The following listing shows a simple implementation of one delegate method.
This code updates the title of a Play/Pause toggle button when a sound has finished playing.
Implementing an AVAudioPlayer delegate method
- (void) audioPlayerDidFinishPlaying: (AVAudioPlayer *) player
successfully: (BOOL) completed {
if (completed == YES) {
[self.button setTitle: @"Play" forState: UIControlStateNormal];
}
}
To play, pause, or stop an AVAudioPlayer object, call one of its playback control methods. You can test whether or not playback is in progress by using the playing property. The following code shows a basic play/pause toggle method that controls playback and updates the title of a UIButton object.
Controlling an AVAudioPlayer object
- (IBAction) playOrPause: (id) sender {
// if already playing, then pause
if (self.player.playing) {
[self.button setTitle: @"Play" forState: UIControlStateHighlighted];
[self.button setTitle: @"Play" forState: UIControlStateNormal];
[self.player pause];
// if stopped or paused, start playing
} else {
[self.button setTitle: @"Pause" forState: UIControlStateHighlighted];
[self.button setTitle: @"Pause" forState: UIControlStateNormal];
[self.player play];
}
}
The AVAudioPlayer class uses the Objective-C declared properties feature for managing information about a sound, such as the playback point within the sound’s timeline, and for accessing playback options such as volume and looping. For example, you can set the playback volume for an audio player as shown here:
[self.player setVolume: 1.0]; // available range is 0.0 through 1.0
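Other declared properties work the same way. For example (a sketch; metering requires that you opt in before querying levels):
self.player.currentTime = 0.0;      // seek back to the start of the sound
self.player.numberOfLoops = -1;     // loop indefinitely until stopped
self.player.meteringEnabled = YES;  // opt in to audio level metering
[self.player updateMeters];         // refresh the meter values
float level = [self.player averagePowerForChannel: 0];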
=> Playing Sounds with Control Using Audio Queue Services
Audio Queue Services adds playback capabilities beyond those available with the AVAudioPlayer class.
Using Audio Queue Services for playback lets you:
■ Precisely schedule when a sound plays, allowing synchronization
■ Precisely control volume on a buffer-by-buffer basis
■ Play audio that you have captured from a stream using Audio File Stream Services
Audio Queue Services lets you play sound in any audio format available in iPhone OS, as described in “Audio Playback and Recording Formats”. You also use this technology for recording, as explained in “Recording Audio”. Audio Queue Services is described in Audio Queue Services Reference. For sample code, see the SpeakHere sample.
=> Creating an Audio Queue Object
To create an audio queue object for playback, perform these three steps:
1. Create a data structure to manage information needed by the audio queue, such as the audio format for the data you want to play.
2. Define a callback function for managing audio queue buffers. The callback uses Audio File Services to read the file you want to play. (In iPhone OS 2.1 and later, you can also use Extended Audio File Services to read the file.)
3. Instantiate the playback audio queue using the AudioQueueNewOutput function.
The following listing illustrates these steps using ANSI C. (In production code, you’d include appropriate error handling.)
The SpeakHere sample project shows these same steps in the context of a C++ program.
Creating an audio queue object
#include <AudioToolbox/AudioToolbox.h>

static const int kNumberBuffers = 3;
// Create a data structure to manage information needed by the audio queue
struct myAQStruct {
AudioFileID mAudioFile;
AudioStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kNumberBuffers];
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mDone;
};
// Define a playback audio queue callback function
static void AQTestBufferCallback(
void *inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inCompleteAQBuffer
) {
myAQStruct *myInfo = (myAQStruct *)inUserData;
if (myInfo->mDone) return;
UInt32 numBytes;
UInt32 nPackets = myInfo->mNumPacketsToRead;
AudioFileReadPackets (
myInfo->mAudioFile,
false,
&numBytes,
myInfo->mPacketDescs,
myInfo->mCurrentPacket,
&nPackets,
inCompleteAQBuffer->mAudioData
);
if (nPackets > 0) {
inCompleteAQBuffer->mAudioDataByteSize = numBytes;
AudioQueueEnqueueBuffer (
inAQ,
inCompleteAQBuffer,
(myInfo->mPacketDescs ? nPackets : 0),
myInfo->mPacketDescs
);
myInfo->mCurrentPacket += nPackets;
} else {
AudioQueueStop (
myInfo->mQueue,
false
);
myInfo->mDone = true;
}
}
// Instantiate an audio queue object
AudioQueueNewOutput (
&myInfo.mDataFormat,
AQTestBufferCallback,
&myInfo,
CFRunLoopGetCurrent(),
kCFRunLoopCommonModes,
0,
&myInfo.mQueue
);
=> Controlling the Playback Level
Audio queue objects give you two ways to control playback level.
To set playback level directly, use the AudioQueueSetParameter function with the kAudioQueueParam_Volume parameter, as shown in the following listing. The level change takes effect immediately.
Setting the playback level directly
Float32 volume = 1; // linear scale, range from 0.0 through 1.0
AudioQueueSetParameter (
myAQstruct.audioQueueObject,
kAudioQueueParam_Volume,
volume
);
You can also set playback level for an audio queue buffer by using the AudioQueueEnqueueBufferWithParameters function. This lets you assign audio queue settings that are, in effect, carried by an audio queue buffer as you enqueue it. Such changes take effect when the buffer begins playing.
In both cases, level changes for an audio queue remain in effect until you change them again.
Indicating Playback Level
You can obtain the current playback level from an audio queue object by:
1. Enabling metering for the audio queue object by setting its kAudioQueueProperty_EnableLevelMetering property to true
2. Querying the audio queue object’s kAudioQueueProperty_CurrentLevelMeter property
The value of this property is an array of AudioQueueLevelMeterState structures, one per channel. The following listing shows this structure:
The AudioQueueLevelMeterState structure
typedef struct AudioQueueLevelMeterState {
Float32 mAveragePower;
Float32 mPeakPower;
} AudioQueueLevelMeterState;
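A sketch of the two steps, using the audio queue created earlier (the stereo assumption and the variable names are illustrative):
// 1. Enable level metering on the audio queue object.
UInt32 enableMetering = 1;    // true
AudioQueueSetProperty (
    myInfo.mQueue,
    kAudioQueueProperty_EnableLevelMetering,
    &enableMetering,
    sizeof (enableMetering)
);

// 2. Query the current meter state; one array element per channel (stereo assumed here).
AudioQueueLevelMeterState meters[2];
UInt32 metersSize = sizeof (meters);
AudioQueueGetProperty (
    myInfo.mQueue,
    kAudioQueueProperty_CurrentLevelMeter,
    meters,
    &metersSize
);
// meters[0].mAveragePower now holds the average level of the first channel.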
Playing Multiple Sounds Simultaneously
To play multiple sounds simultaneously, create one playback audio queue object for each sound. For each audio queue, schedule the first buffer of audio to start at the same time using the AudioQueueEnqueueBufferWithParameters function.
Audio format is critical when you play sounds simultaneously on iPhone. To play simultaneous sounds, use the linear PCM (uncompressed) audio format or certain compressed audio formats, as described in “Audio Playback and Recording Formats”.
Playing Sounds with Positioning Using OpenAL
The open-sourced OpenAL audio API, available in iPhone OS in the OpenAL framework, provides an interface optimized for positioning sounds in a stereo field during playback. Playing, positioning, and moving sounds works the same way with OpenAL as it does on other platforms. OpenAL also lets you mix sounds. OpenAL uses Core Audio’s I/O unit for playback, resulting in the lowest latency.
For all of these reasons, OpenAL is your best choice for playing sound effects in game applications on iPhone OS–based devices. However, OpenAL is also a good choice for general iPhone OS application audio playback needs.
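As a brief sketch of the OpenAL style of playback (it assumes you have already created a device and context and filled an OpenAL buffer named pcmBuffer with PCM data; the variable names are illustrative):
#include <OpenAL/al.h>

ALuint source;
alGenSources (1, &source);                             // create a sound source
alSourcei   (source, AL_BUFFER, pcmBuffer);            // attach the PCM data
alSource3f  (source, AL_POSITION, -1.0f, 0.0f, 0.0f);  // place it to the listener's left
alSourcePlay (source);                                 // start playback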
=> Recording Audio
Core Audio provides support in iPhone OS for recording audio using the AVAudioRecorder class and Audio Queue Services. These interfaces do the work of connecting to the audio hardware, managing memory, and employing codecs as needed. You can record audio in any of the formats listed in “Audio Playback and Recording Formats” .
Recording with the AVAudioRecorder Class
The easiest way to record sound in iPhone OS is with the AVAudioRecorder class, described in AVAudioRecorder Class Reference. This class provides a highly streamlined, Objective-C interface that makes it easy to provide sophisticated features like pausing/resuming recording and handling audio interruptions.
At the same time, you retain complete control over recording format.
To prepare for recording using an audio recorder:
1. Specify a sound file URL.
2. Set up the audio session.
3. Configure the audio recorder’s initial state.
Application launch is a good time to do this part of the setup, as shown in the following listing. Variables such as soundFileURL and recording are declared in the class interface. (In production code, you would include appropriate error handling.)
Setting up the audio session and the sound file URL
- (void) viewDidLoad {
[super viewDidLoad];
NSString *tempDir = NSTemporaryDirectory ();
NSString *soundFilePath =
[tempDir stringByAppendingString: @"sound.caf"];
NSURL *newURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];
self.soundFileURL = newURL;
[newURL release];
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
audioSession.delegate = self;
[audioSession setActive: YES error: nil];
recording = NO;
playing = NO;
}
Add the AVAudioSessionDelegate, AVAudioRecorderDelegate, and (if also supporting playback) AVAudioPlayerDelegate protocol names to the interface declaration for your implementation.
Then, you could implement a record/stop method as shown in the following listing. (In production code, you would include appropriate error handling.)
A record/stop method using the AVAudioRecorder class
-(IBAction) recordOrStop: (id) sender {
if (recording) {
[soundRecorder stop];
recording = NO;
self.soundRecorder = nil;
[recordOrStopButton setTitle: @"Record" forState: UIControlStateNormal];
[recordOrStopButton setTitle: @"Record" forState:
UIControlStateHighlighted];
[[AVAudioSession sharedInstance] setActive: NO error: nil];
} else {
[[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryRecord
error: nil];
NSDictionary *recordSettings =
[[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
[NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
[NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
[NSNumber numberWithInt: AVAudioQualityMax],
AVEncoderAudioQualityKey,
nil];
AVAudioRecorder *newRecorder =
[[AVAudioRecorder alloc] initWithURL: soundFileURL
settings: recordSettings
error: nil];
[recordSettings release];
self.soundRecorder = newRecorder;
[newRecorder release];
soundRecorder.delegate = self;
[soundRecorder prepareToRecord];
[soundRecorder record];
[recordOrStopButton setTitle: @"Stop" forState: UIControlStateNormal];
[recordOrStopButton setTitle: @"Stop" forState: UIControlStateHighlighted];
recording = YES;
}
}
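You might also implement a delegate callback to update the interface when recording stops. This is a sketch that mirrors the AVAudioPlayer delegate example shown earlier; it assumes your controller adopts AVAudioRecorderDelegate and has the recordOrStopButton outlet used above.
- (void) audioRecorderDidFinishRecording: (AVAudioRecorder *) recorder
                            successfully: (BOOL) completed {
    if (completed == YES) {
        [recordOrStopButton setTitle: @"Record" forState: UIControlStateNormal];
    }
}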
For more information on the AVAudioRecorder class, see AVAudioRecorder Class Reference.
Recording with Audio Queue Services
To record audio with Audio Queue Services, your application configures the audio session, instantiates a recording audio queue object, and provides a callback function. The callback stores the audio data in memory for immediate use or writes it to a file for long-term storage.
Recording takes place at a system-defined input level in iPhone OS. The system takes input from the audio source that the user has chosen—the built-in microphone or, if connected, the headset microphone or other input source.
Just as with playback, you can obtain the current recording audio level from an audio queue object by querying its kAudioQueueProperty_CurrentLevelMeter property, as described in “Indicating Playback Level”.
=> Parsing Streamed Audio
To play streamed audio content, such as from a network connection, use Audio File Stream Services in concert with Audio Queue Services. Audio File Stream Services parses audio packets and metadata from common audio file container formats in a network bitstream. You can also use it to parse packets and metadata from on-disk files.
In iPhone OS, you can parse the same audio file and bitstream formats that you can in Mac OS X, as follows:
■ MPEG-1 Audio Layer 3, used for .mp3 files
■ MPEG-2 ADTS, used for the .aac audio data format
■ AIFC
■ AIFF
■ CAF
■ MPEG-4, used for .m4a, .mp4, and .3gp files
■ NeXT
■ WAVE
Having retrieved audio packets, you can play back the recovered sound in any of the formats supported in iPhone OS, as listed in “Audio Playback and Recording Formats”.
For best performance, network streaming applications should use data from Wi-Fi connections. iPhone OS lets you determine which networks are reachable and available through its System Configuration framework and its SCNetworkReachabilityRef opaque type, described in SCNetworkReachability Reference. For sample code, see the Reachability sample in the iPhone Dev Center.
To connect to a network stream, use interfaces from Core Foundation, such as the one described in CFHTTPMessage Reference. Parse the network packets to recover audio packets using Audio File Stream
Services. Then buffer the audio packets and send them to a playback audio queue object.
Audio File Stream Services relies on interfaces from Audio File Services, such as the AudioFramePacketTranslation structure and the AudioFilePacketTableInfo structure. These are described in Audio File Services Reference.
For more information on using streams, refer to Audio File Stream Services Reference.
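In outline, the parsing side might look like this. This is a sketch only: MyPropertyListenerProc, MyPacketsProc, and myStreamState are hypothetical names for the callbacks and state structure you would supply, and the MP3 type hint is just an example.
#include <AudioToolbox/AudioToolbox.h>

AudioFileStreamID audioFileStream;
AudioFileStreamOpen (
    &myStreamState,          // your own state structure, passed back to the callbacks
    MyPropertyListenerProc,  // called when the parser discovers properties such as the data format
    MyPacketsProc,           // called as complete audio packets are recovered; enqueue them on a playback audio queue here
    kAudioFileMP3Type,       // file type hint; pass 0 if the type is unknown
    &audioFileStream
);

// Each time bytes arrive on the network connection, hand them to the parser:
AudioFileStreamParseBytes (audioFileStream, bytesReceived, networkBuffer, 0);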
=> Audio Unit Support in iPhone OS
iPhone OS provides a set of audio processing plug-ins, known as audio units, that you can use in any application. The interfaces in the Audio Unit framework let you open, connect, and use these audio units.
You can also define custom audio units. Because you must statically link custom audio unit code into your application, audio units that you develop cannot be used by other applications in iPhone OS.
The table below lists the audio units provided in iPhone OS.
Table: System-supplied audio units
Best Practices for iPhone Audio
This section lists some important tips for using audio in iPhone OS and describes the best audio data formats
for various uses.
Tips for Using Audio
The table below lists some important tips to keep in mind when using audio in iPhone OS.
Preferred Audio Formats in iPhone OS
For uncompressed (highest quality) audio, use 16-bit, little endian, linear PCM audio data packaged in a CAF file. You can convert an audio file to this format in Mac OS X using the afconvert command-line tool, as shown here:
/usr/bin/afconvert -f caff -d LEI16 {INPUT} {OUTPUT}
The afconvert tool lets you convert to a wide range of audio data formats and file types. See the afconvert man page, and enter afconvert -h at a shell prompt, for more information.
For compressed audio, when playing one sound at a time and when you don’t need to play audio simultaneously with the iPod application, use the AAC format packaged in a CAF or m4a file.
For less memory usage when you need to play multiple sounds simultaneously, use IMA4 (IMA/ADPCM) compression. This reduces file size but entails minimal CPU impact during decompression. As with linear PCM data, package IMA4 data in a CAF file.
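For example, an afconvert invocation along these lines produces an IMA4 file in a CAF container (a sketch; run afconvert -h to confirm the exact data-format strings on your system):
/usr/bin/afconvert -f caff -d ima4 {INPUT} {OUTPUT}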




