Expo: expo-av play new audio file in the background?

Is it possible for Expo (managed workflow) to play a new audio file in the background using expo-av?
According to this feature request https://expo.canny.io/feature-requests/p/audio-playback-in-background it is not possible.
However, this post https://levelup.gitconnected.com/lessons-learned-building-multiple-apps-with-expo-and-react-native-28bd43b72b84 states that the issue has been fixed:
"for example at one point it didn’t allow playing audio in the background, this was resolved in an SDK upgrade"
(unless the author is only referring to the same audio file finishing playing after the screen is locked).
The question is about a single audio file finishing and the next audio file being picked up while the app is in the background, not just finishing the current audio file, which can be done with the following configuration:
const AUDIO_CONFIG = {
  interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DUCK_OTHERS,
  playsInSilentModeIOS: true,
  interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DUCK_OTHERS,
  shouldDuckAndroid: true,
  staysActiveInBackground: true,
};
await Audio.setAudioModeAsync(AUDIO_CONFIG);

Related

GStreamer: cannot play the first file produced by multifilesink

Did anybody encounter the above? I was working with a dynamic pipeline that involved attaching/removing a bin containing a queue and a multifilesink plugin. I was trying to record and split the video/audio into 60-second segments. The stream was H264 video and G711 audio, muxed by matroskamux with its streamable property set to true.
I understand that multifilesink can only take a streaming stream and that there are no timing indexes. That was fine with me, and I could open the files and play them using VLC. However, there is the problem of playing the first file produced.
It seems to be missing a header or metadata. I have used MediaInfo to query the files (as below):
first file video0.mp4 (cannot be played)
The 2nd-last file (can be played)
The properties I have set for multifilesink:
g_object_set(G_OBJECT(multifilesink),
             "aggregate-gops", TRUE,
             "location", "video%d",
             "max-file-duration", G_GUINT64_CONSTANT(60000000000), /* 60 seconds in nanoseconds */
             "next-file", 5, /* 5 = max-duration */
             "index", 0,
             "post-messages", TRUE,
             NULL);
I have tried:
adjusting "next-file" and "aggregate-gops"; it does not seem to change matters much
changing the file format (fingers crossed, though I did not believe it would help); it doesn't help
I have also tried a static pipeline, and it seems that all the files can be played properly.
I just wonder, what did I do wrong?

Adding a 10 second wav file to a GStreamer pipeline that is already playing

I have a GStreamer pipeline created from the Python GStreamer bindings, which is set up to play a headset's microphone back into the headset's speaker. This works fine and is playing in a pipeline like this:
JackAudioSrc -> GstAudioMixer -> Queue -> GstJackAudioSink
Then, many seconds later, I want to play a short 10 second .wav file into the pipeline so that the wav file is mixed with the microphone and heard on the headset. To do this, a GstFileSrc is dynamically added to the GstAudioMixer to mix the short 10 second wav file into the headset's speaker, which gives a pipeline like this:
GstJackAudioSrc ----------------> GstAudioMixer -> Queue -> GstJackAudioSink
                                 /
GstFileSrc -> GstWavParse ------/
When the GstFileSrc and GstWavParse are dynamically added to a sink pad of the mixer, at a time, say, 6 seconds after the start of the pipeline, only the last 4 seconds of the wav file are heard.
The problem seems to be that the wav file seeks to the time relative to when the pipeline started PLAYING.
I have tried changing "do-timestamp" on a multifilesrc, setting "sync"=True on a GstIdentity, and I can't find a way to set "live" on a filesrc; I have tried many other things, but to no avail.
However, the whole 10 second wav file plays nicely if the pipeline is set to Gst.State.NULL and then back to Gst.State.PLAYING when the filesrc is added at 6 seconds. This works because the pipeline time gets set back to zero, but it produces a click on the headset, which is unacceptable.
How can I ensure that the wav file starts playing from the start of the wav file, so that the whole 10 seconds is heard on the headset, if added to the pipeline at any random time?
An Update:
I can now get the timing of the wave file correct by adding a clocksync and setting its timestamp offset, before the wavparse:
nanosecs = pipeline.query_position(Gst.Format.TIME)[1]
clocksync.set_property("ts-offset", nanosecs)
Although the start/stop times are now correct, the wav audio is corrupted and heard as nothing but clicks and blips, but at least it starts playing at the correct time and finishes at the correct time. Note that without the clocksync the wav file audio is perfectly clear, it just starts and stops at the wrong time. So the ts-offset is somehow corrupting the audio.
Why is the audio being corrupted?
So I got this working and the answer is not to use the clocksync, but instead request a mixer sink pad, then call set_offset(nanosecs) on the mixer sink pad, before linking the wavparse to the mixer:
def wav_callback(pad, pad_probe_info, userdata=None):
    # Once the mixer pad is idle, link the wav branch and start it.
    wavparse.link(audio_mixer)
    wav_bin.set_state(Gst.State.PLAYING)
    return Gst.PadProbeReturn.REMOVE

sink_pad = audio_mixer.get_request_pad("sink_%u")
nanosecs = pipeline.query_position(Gst.Format.TIME)[1]
sink_pad.set_offset(nanosecs)
sink_pad.add_probe(Gst.PadProbeType.IDLE, wav_callback)
Then if the wav file needs to be rewound/replayed:
def replay_wav():
    global wav_bin
    global sink_pad
    wav_bin.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, 0)
    nanosecs = pipeline.query_position(Gst.Format.TIME)[1]
    sink_pad.set_offset(nanosecs)

Use Source Reader to get H264 samples from webcam source

Using the Source Reader I can get decoded YUV samples from an mp4 file source (example code).
How can I do the opposite with a webcam source, i.e. use the Source Reader to provide encoded H264 samples? My webcam supports the RGB24 and I420 pixel formats, and I can get H264 samples if I manually wire up the H264 MFT transform. But it seems as if the Source Reader should be able to take care of the transform for me. I get an error whenever I attempt to set an MF_MT_SUBTYPE of MFVideoFormat_H264 on the Source Reader.
A sample snippet is shown below and the full example is here.
// Get the first available webcam.
CHECK_HR(MFCreateAttributes(&videoConfig, 1), "Error creating video configuration.");

// Request video capture devices.
CHECK_HR(videoConfig->SetGUID(
    MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE,
    MF_DEVSOURCE_ATTRIBUTE_SOURCE_TYPE_VIDCAP_GUID), "Error initialising video configuration object.");
CHECK_HR(videoConfig->SetGUID(MF_MT_SUBTYPE, WMMEDIASUBTYPE_I420),
    "Failed to set video sub type to I420.");
CHECK_HR(MFEnumDeviceSources(videoConfig, &videoDevices, &videoDeviceCount), "Error enumerating video devices.");
CHECK_HR(videoDevices[WEBCAM_DEVICE_INDEX]->GetAllocatedString(MF_DEVSOURCE_ATTRIBUTE_FRIENDLY_NAME, &webcamFriendlyName, &nameLength),
    "Error retrieving video device friendly name.\n");
wprintf(L"First available webcam: %s\n", webcamFriendlyName);
CHECK_HR(videoDevices[WEBCAM_DEVICE_INDEX]->ActivateObject(IID_PPV_ARGS(&pVideoSource)),
    "Error activating video device.");
CHECK_HR(MFCreateAttributes(&pAttributes, 1),
    "Failed to create attributes.");

// Adding this attribute creates a video source reader that will handle
// colour conversion and avoid the need to manually convert between RGB24 and RGB32 etc.
CHECK_HR(pAttributes->SetUINT32(MF_SOURCE_READER_ENABLE_VIDEO_PROCESSING, 1),
    "Failed to set enable video processing attribute.");
CHECK_HR(pAttributes->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major video type.");

// Create a source reader.
CHECK_HR(MFCreateSourceReaderFromMediaSource(
    pVideoSource,
    pAttributes,
    &pVideoReader), "Error creating video source reader.");

MFCreateMediaType(&pSrcOutMediaType);
CHECK_HR(pSrcOutMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Failed to set major video type.");
CHECK_HR(pSrcOutMediaType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264), "Error setting video sub type.");
CHECK_HR(pSrcOutMediaType->SetUINT32(MF_MT_AVG_BITRATE, 240000), "Error setting average bit rate.");
CHECK_HR(pSrcOutMediaType->SetUINT32(MF_MT_INTERLACE_MODE, 2), "Error setting interlace mode."); // 2 = MFVideoInterlace_Progressive
CHECK_HR(pVideoReader->SetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, NULL, pSrcOutMediaType),
    "Failed to set media type on source reader.");
CHECK_HR(pVideoReader->GetCurrentMediaType((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, &pFirstOutputType),
    "Error retrieving current media type from first video stream.");
std::cout << "Source reader output media type: " << GetMediaTypeDescription(pFirstOutputType) << std::endl << std::endl;
Output:
bind returned success
First available webcam: Logitech QuickCam Pro 9000
Failed to set media type on source reader. Error: C00D5212.
finished.
The Source Reader does not look like a suitable API here. It is an API for implementing the first "half of the pipeline", which includes any necessary decoding but not encoding. The other half is the Sink Writer API, which is capable of handling encoding, and which can encode H.264.
Another option, unless you are developing a UWP project, is the Media Session API, which implements a pipeline end to end.
Even though technically (in theory) you could have an encoding MFT as part of the Source Reader pipeline, the Source Reader API itself is not flexible enough to add encoding-style transforms based on requested media types.
So, one solution could be to have the Source Reader read with the necessary decoding (for example, up to RGB32 or NV12 video frames), and then have the Sink Writer manage the encoding, with an appropriate media sink on its end (or a Sample Grabber as the media sink). Another solution is to put Media Foundation primitives into a Media Session pipeline, which can manage both the decoding and encoding parts, connected together.
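A minimal sketch of that first suggestion, continuing the question's snippet: it assumes the source reader's current media type has been left as an uncompressed format such as NV12 (so pVideoReader and pFirstOutputType describe raw frames), and it writes to a placeholder capture.mp4 via a Sink Writer that performs the H.264 encoding. This is an illustration only, not code from the original answer.
// Hypothetical continuation of the snippet above: raw frames in, H.264 out via the Sink Writer.
IMFSinkWriter* pWriter = NULL;
IMFMediaType* pWriterOutType = NULL;
DWORD writerStreamIndex = 0;

CHECK_HR(MFCreateSinkWriterFromURL(L"capture.mp4", NULL, NULL, &pWriter),
    "Error creating sink writer.");

// Target (encoded) type for the writer: H.264.
CHECK_HR(MFCreateMediaType(&pWriterOutType), "Error creating writer output type.");
CHECK_HR(pWriterOutType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video), "Error setting major type.");
CHECK_HR(pWriterOutType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264), "Error setting sub type.");
CHECK_HR(pWriterOutType->SetUINT32(MF_MT_AVG_BITRATE, 240000), "Error setting bitrate.");
// NOTE: a real output type also needs MF_MT_FRAME_SIZE, MF_MT_FRAME_RATE and
// MF_MT_INTERLACE_MODE, typically copied from pFirstOutputType; omitted for brevity.
CHECK_HR(pWriter->AddStream(pWriterOutType, &writerStreamIndex), "Error adding writer stream.");

// Input (uncompressed) type: whatever the source reader is actually delivering.
CHECK_HR(pWriter->SetInputMediaType(writerStreamIndex, pFirstOutputType, NULL),
    "Error setting sink writer input type.");
CHECK_HR(pWriter->BeginWriting(), "Error beginning writing.");

// Pump loop: read decoded samples from the reader and hand them to the writer to encode.
while (true) {
    DWORD streamIndex = 0, flags = 0;
    LONGLONG timestamp = 0;
    IMFSample* pSample = NULL;
    CHECK_HR(pVideoReader->ReadSample((DWORD)MF_SOURCE_READER_FIRST_VIDEO_STREAM, 0,
        &streamIndex, &flags, &timestamp, &pSample), "Error reading sample.");
    if (flags & MF_SOURCE_READERF_ENDOFSTREAM) break;
    if (pSample != NULL) {
        CHECK_HR(pWriter->WriteSample(writerStreamIndex, pSample), "Error writing sample.");
        pSample->Release();
    }
}
CHECK_HR(pWriter->Finalize(), "Error finalising sink writer.");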
Now, your use case is clearer.
For me, your MFWebCamRtp is the most optimized way of doing: webcam Source Reader -> encoding -> RTP streaming.
But you are experiencing presentation clock issues, synchronization issues, or unsynchronized audio/video issues. Am I right?
So you tried the Sample Grabber sink, and now the Source Reader, like I suggested to you. Of course, you might think that a Media Session will be able to do it better.
I think so, but extra work will be needed.
Here is what I would do in your case (a rough sketch follows the list):
Code a custom RTP Sink
Create a topology with webcam source, h264 encoder, your custom RTP Sink
Add your topology to a MediaSession
Use the MediaSession to play the process
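As a hedged illustration of steps 2 to 4 only: a rough Media Session / topology sketch, assuming an H.264 encoder MFT (pEncoderMFT) and a stream sink obtained from your custom RTP media sink (pRtpStreamSink) have already been created; these names are placeholders and error handling is omitted.
// Hypothetical sketch: wire webcam source -> H.264 encoder MFT -> custom RTP sink
// into a topology and play it with a Media Session. Error checks omitted for brevity.
IMFMediaSession* pSession = NULL;
IMFTopology* pTopology = NULL;
IMFPresentationDescriptor* pPD = NULL;
IMFStreamDescriptor* pSD = NULL;
IMFTopologyNode* pSourceNode = NULL;
IMFTopologyNode* pEncoderNode = NULL;
IMFTopologyNode* pOutputNode = NULL;
BOOL selected = FALSE;

MFCreateMediaSession(NULL, &pSession);
MFCreateTopology(&pTopology);

// Source stream node for the webcam media source activated earlier (pVideoSource).
pVideoSource->CreatePresentationDescriptor(&pPD);
pPD->GetStreamDescriptorByIndex(0, &selected, &pSD);
MFCreateTopologyNode(MF_TOPOLOGY_SOURCESTREAM_NODE, &pSourceNode);
pSourceNode->SetUnknown(MF_TOPONODE_SOURCE, pVideoSource);
pSourceNode->SetUnknown(MF_TOPONODE_PRESENTATION_DESCRIPTOR, pPD);
pSourceNode->SetUnknown(MF_TOPONODE_STREAM_DESCRIPTOR, pSD);
pTopology->AddNode(pSourceNode);

// Transform node: the H.264 encoder MFT (pEncoderMFT is an assumption, e.g. found via MFTEnumEx).
MFCreateTopologyNode(MF_TOPOLOGY_TRANSFORM_NODE, &pEncoderNode);
pEncoderNode->SetObject(pEncoderMFT);
pTopology->AddNode(pEncoderNode);

// Output node: a stream sink from the custom RTP media sink (pRtpStreamSink is an assumption).
MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNode);
pOutputNode->SetObject(pRtpStreamSink);
pTopology->AddNode(pOutputNode);

// Connect the nodes and hand the topology to the session.
pSourceNode->ConnectOutput(0, pEncoderNode, 0);
pEncoderNode->ConnectOutput(0, pOutputNode, 0);
pSession->SetTopology(0, pTopology);

PROPVARIANT varStart;
PropVariantInit(&varStart);
pSession->Start(&GUID_NULL, &varStart); // VT_EMPTY = start from the current position

// A real program would also pump pSession->GetEvent() to watch for
// MESessionTopologyStatus / MESessionEnded and call Shutdown() on cleanup.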
If you want a network-stream sink sample, see this: MFSkJpegHttpStreamer. This is old, but it's a good start. This program also uses Winsock, like yours.
You should be aware that the RTP protocol uses UDP, which is a very good way to end up with synchronization issues... Definitely your main problem, I would guess.
Here is what I think: you are trying to compensate for the weaknesses of the RTP protocol (UDP) with Media Foundation's audio/video synchronization management. I think you will just fail with this approach.
I think your main problem is the RTP protocol.
EDIT
No, I'm not having synchronisation issues. The Source Reader and Sample Grabber both provide correct timestamps, which I can use in the RTP header. Likewise, there are no problems with RTP/UDP etc.; that's the bit I do know about. My questions originate from a desire to understand the most efficient (least amount of plumbing code) and flexible solution. And yes, it does look like a custom sink writer is the optimal solution.
Again things are clearer. If you need help with a custom RTP sink, I'll be there.

Live555MediaServer restarts the stream at every new connection. Why is setting "reuseSource" to true not working as expected?

Live555MediaServer can be used to stream video files as rtsp streams. I have 2 clients (vlc) that connect to the server, A and B. I want to see the exact video stream in both the clients. Here is the problem: I connect A and after 10 seconds I connect B. When B is connected the video that I see starts over from the beginning, while A keeps streaming as it was.
I would like the 2 concurrent streams to be synchronized.
The live555 docs say that setting reuseFirstSource to True should work, so I tried setting reuseSource to true at DynamicRTSPServer:121, but it didn't work. When I connect to the server using client B, the video restarts from the beginning.
Boolean const reuseSource = True;
I expect to see the 2 concurrent streams synchronized even if one starts with a delay with respect to the other one.
I finally found a workaround and the reason for this 'bug'.
Quick answer: set the if condition at line 67 to false, i.e.
if (smsExists && isFirstLookupInSession) {
becomes
if (false) {
Explanation: every time a new session is starting, the isFirstLookupInSession variable is set to true and the session is removed and recreated.
I wrote to the live555 support list and Finlayson told me, and I quote:
“LIVE555 Media Server” code was always intended to work this way, and was intended to be a ‘stand-alone appliance’ that does not have its code modified (e.g., by changing the value of “reuseFirstSource”).
Thus the only solution for creating an RTSP server through Live555 is to create your own server, starting from the testProgs examples.
The workaround proposed here could generate unwanted behaviors, but for a simple RTSP server with multiple streams it's fine.
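For reference, a minimal sketch of such a standalone server, modelled on the testOnDemandRTSPServer example, with reuseFirstSource set to True so that a second client joins the ongoing stream instead of restarting it. The port, stream name and file name below are placeholders, and the H.264 file subsession is just one example.
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main(int argc, char** argv) {
  // Standard live555 scaffolding: task scheduler + usage environment.
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    return 1;
  }

  // reuseFirstSource == True: all clients share one input source, so client B
  // picks up the stream wherever client A currently is.
  Boolean const reuseFirstSource = True;

  ServerMediaSession* sms = ServerMediaSession::createNew(
      *env, "stream", "stream", "Session streamed by this server");
  sms->addSubsession(H264VideoFileServerMediaSubsession::createNew(
      *env, "test.264", reuseFirstSource)); // placeholder: an H.264 elementary-stream file
  rtspServer->addServerMediaSession(sms);

  *env << "Play this stream using the URL: rtsp://<host>:8554/stream\n";
  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}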

How to play an audio file from a continuously updated QBuffer with Phonon?

I'm using the Phonon player to play audio files.
Scenarios:
Files played from a local drive: plays properly.
Files played from a remote drive: as the audio files are on a USB device, I have to keep updating the buffer (QBuffer) and simultaneously play the file. But for some reason the file does not play in the Phonon player. Can anyone please tell me the right way to play the audio file while the buffer is still being updated?
//Code
Phonon::MediaObject* m_pMediaObject = new Phonon::MediaObject(this);
Phonon::AudioOutput* audioOutput = new Phonon::AudioOutput(Phonon::MusicCategory, this);
Phonon::Path path = Phonon::createPath(m_pMediaObject, audioOutput);
QBuffer* m_pBufferLoop = new QBuffer(this);
m_pBufferLoop->open(QIODevice::ReadWrite | QIODevice::Append);
functionToUpdateBuffer(); // updates the buffer dynamically
m_pMediaObject->setCurrentSource(m_pBufferLoop);
m_pMediaObject->play();
Nothing happens after I call play(). But if I give the complete buffer then the same code works fine.