
Where does TDAudioFileStream get the audio format? #4

Open
Geri-Borbas opened this issue Apr 29, 2014 · 1 comment

Comments

@Geri-Borbas

Hi, great piece of work!

I'm working on a (temporary) replacement for GKVoiceChatService, so I manually feed the output stream with samples collected from the microphone whenever it has space available.

But I can't see how I can set [TDAudioFileStream basicDescription]; in the current code I just can't figure out how it gets initialized to a given format.

I have the AudioStreamBasicDescription of every CMSampleBuffer I get from AVCaptureSession, but I can't see exactly how to inject this information into the processing (otherwise the buffers traverse fine, as far as I can tell from simply NSLogging them).

If I'm right, the underlying audio queue also gets its format from TDAudioFileStream.

Can you give me some hint on this?

@Geri-Borbas
Author

Ahh, I suppose I have to mimic an audio file header and send it as the first packet. It hurts a bit that you did not answer, though. Feels like anti-social coding. :)

As the AudioFileStreamParseBytes documentation says:

Streamed audio file data is expected to be passed to the parser in the same sequence in which it appears in the audio file, from the beginning of the audio file stream, without gaps.
