Yes, it can be done, but not with a single function call.
The functionality you want is not in fact in Core Audio, but rather in ExtendedAudioFile.h - part of the AudioToolbox framework. This is available on both iOS and Mac OS X. I can attest to this being rather hard to find.
Functions of interest in this header are ExtAudioFileOpenURL(), ExtAudioFileRead() and ExtAudioFileWrite().
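To give a flavour of what calling into this header looks like, here is a minimal sketch (the inspectFile name and the bare-bones error handling are my own) that just opens a file and reports its data format:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>

// Open an audio file and print the sample rate and channel count of its on-disk format.
static void inspectFile(CFURLRef url)
{
    ExtAudioFileRef file = NULL;
    OSStatus err = ExtAudioFileOpenURL(url, &file);
    if (err != noErr) return;   // real code should report the error properly

    AudioStreamBasicDescription fmt = {0};
    UInt32 size = sizeof(fmt);
    err = ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileDataFormat,
                                  &size, &fmt);
    if (err == noErr)
        printf("%.0f Hz, %u channel(s)\n",
               fmt.mSampleRate, (unsigned)fmt.mChannelsPerFrame);

    ExtAudioFileDispose(file);
}
```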
In outline, this is what you do (a rough sketch of the whole pipeline follows the list):
- Use ExtAudioFileOpenURL() to open the input file.
- Use ExtAudioFileGetProperty() with the property ID kExtAudioFileProperty_FileDataFormat to obtain an AudioStreamBasicDescription describing the file.
- Possibly set the client data format to the one you want to work in - AudioToolbox on Mac OS X seems rather more amenable to this than on iOS.
- Calculate and allocate a buffer large enough to hold the entire audio file.
- Read the entire file with ExtAudioFileRead() - NB: this call might not read everything in one go, operating in much the same way as POSIX read().
- Perform the normalisation.
- Use ExtAudioFileCreateWithURL() to create the output file.
- Use ExtAudioFileWrite() to write the normalised samples out.
- Dispose of both audio files.
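To make the outline concrete, here is a rough sketch of that pipeline. Treat it as a starting point rather than a finished implementation: the normaliseFile name, the interleaved Float32 client format, the CAF output and the CHECK macro are all my own choices, and error handling is minimal.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define CHECK(err) do { OSStatus e = (err); if (e != noErr) { \
    fprintf(stderr, "CoreAudio error %d at line %d\n", (int)e, __LINE__); exit(1); } } while (0)

// Hypothetical helper: read inURL, peak-normalise, write outURL as a Float32 PCM CAF.
static void normaliseFile(CFURLRef inURL, CFURLRef outURL)
{
    ExtAudioFileRef inFile = NULL, outFile = NULL;
    CHECK(ExtAudioFileOpenURL(inURL, &inFile));

    // The format the samples are stored in on disk.
    AudioStreamBasicDescription fileFormat = {0};
    UInt32 propSize = sizeof(fileFormat);
    CHECK(ExtAudioFileGetProperty(inFile, kExtAudioFileProperty_FileDataFormat,
                                  &propSize, &fileFormat));

    // Total length in frames, used to size the buffer.
    SInt64 totalFrames = 0;
    propSize = sizeof(totalFrames);
    CHECK(ExtAudioFileGetProperty(inFile, kExtAudioFileProperty_FileLengthFrames,
                                  &propSize, &totalFrames));

    // Ask ExtAudioFileRead() to deliver interleaved, packed, native-endian Float32.
    UInt32 channels = fileFormat.mChannelsPerFrame;
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = fileFormat.mSampleRate;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kAudioFormatFlagsNativeFloatPacked;
    clientFormat.mChannelsPerFrame = channels;
    clientFormat.mBitsPerChannel   = 32;
    clientFormat.mBytesPerFrame    = 4 * channels;
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = clientFormat.mBytesPerFrame;
    CHECK(ExtAudioFileSetProperty(inFile, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(clientFormat), &clientFormat));

    // One big buffer for the whole file.
    Float32 *samples = malloc((size_t)totalFrames * clientFormat.mBytesPerFrame);

    // Keep reading until ExtAudioFileRead() reports zero frames; like POSIX read(),
    // it is allowed to return fewer frames than requested.
    SInt64 framesRead = 0;
    while (framesRead < totalFrames) {
        UInt32 framesThisTime = (UInt32)(totalFrames - framesRead);
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = channels;
        bufList.mBuffers[0].mDataByteSize   = framesThisTime * clientFormat.mBytesPerFrame;
        bufList.mBuffers[0].mData           = samples + framesRead * channels;
        CHECK(ExtAudioFileRead(inFile, &framesThisTime, &bufList));
        if (framesThisTime == 0) break;           // end of file
        framesRead += framesThisTime;
    }

    // Peak normalisation: scale so the loudest sample sits just below full scale.
    Float32 peak = 0.0f;
    for (SInt64 i = 0; i < framesRead * channels; i++)
        peak = fmaxf(peak, fabsf(samples[i]));
    if (peak > 0.0f) {
        Float32 gain = 0.99f / peak;
        for (SInt64 i = 0; i < framesRead * channels; i++)
            samples[i] *= gain;
    }

    // Create the output as a Float32 PCM CAF, with a matching client format so
    // ExtAudioFileWrite() does no further conversion.
    CHECK(ExtAudioFileCreateWithURL(outURL, kAudioFileCAFType, &clientFormat, NULL,
                                    kAudioFileFlags_EraseFile, &outFile));
    CHECK(ExtAudioFileSetProperty(outFile, kExtAudioFileProperty_ClientDataFormat,
                                  sizeof(clientFormat), &clientFormat));

    AudioBufferList outList;
    outList.mNumberBuffers = 1;
    outList.mBuffers[0].mNumberChannels = channels;
    outList.mBuffers[0].mDataByteSize   = (UInt32)(framesRead * clientFormat.mBytesPerFrame);
    outList.mBuffers[0].mData           = samples;
    CHECK(ExtAudioFileWrite(outFile, (UInt32)framesRead, &outList));

    ExtAudioFileDispose(inFile);
    ExtAudioFileDispose(outFile);
    free(samples);
}
```

Writing the output as a Float32 PCM CAF avoids needing an encoder for compressed inputs; if you want the output in the input's original format, pass fileFormat to ExtAudioFileCreateWithURL() instead and keep the Float32 client format so the converter handles the encoding.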
The documentation links to several example projects that can act as donors of working code. You'll find doing the normalisation much easier with the samples as floats, but on iOS I could never get the conversion to work automatically, so you might have to do the format conversion yourself.
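If the automatic conversion to floats won't cooperate, one workaround (my own suggestion, not taken from the documentation) is to read the samples as 16-bit integers and convert them by hand, roughly like this:

```c
#include <AudioToolbox/AudioToolbox.h>   // for the SInt16 / Float32 typedefs
#include <math.h>
#include <stddef.h>

// Convert interleaved 16-bit samples to floats in [-1.0, 1.0) for processing...
static void int16ToFloat(const SInt16 *in, Float32 *out, size_t count)
{
    for (size_t i = 0; i < count; i++)
        out[i] = in[i] / 32768.0f;
}

// ...and back again before writing, clamping to avoid wrap-around on overflow.
static void floatToInt16(const Float32 *in, SInt16 *out, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        Float32 s = in[i];
        if (s >  1.0f) s =  1.0f;
        if (s < -1.0f) s = -1.0f;
        out[i] = (SInt16)lrintf(s * 32767.0f);
    }
}
```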