I built an application using speechRecognition.
startListening() {
  let options = {
    language: 'en-US'
  };
  // Pass the options object, otherwise it is never used.
  this.speechRecognition.startListening(options).subscribe(matches => {
    this.matches = matches;
    this.cd.detectChanges();
  });
  this.isRecording = true;
}
I would like to know whether there is a way to listen to the audio I recorded again.
That's very much possible. You can use the cordova-file-transfer plugin to save the audio to the device, and then load it from the device to play it back.
Read this documentation, which says more about Cordova's file transfer plugin.
Is it possible for Expo (managed workflow) to play a new audio file in the background using expo-av?
According to this feature request (https://expo.canny.io/feature-requests/p/audio-playback-in-background), it is not possible.
However, this post (https://levelup.gitconnected.com/lessons-learned-building-multiple-apps-with-expo-and-react-native-28bd43b72b84) states that the issue has been fixed:
"for example at one point it didn’t allow playing audio in the background, this was resolved in an SDK upgrade"
(unless he is talking about the same audio file finishing playing after the screen is locked).
The question is whether a single audio file can finish and the next audio file can be picked up while the app is in the background, not just whether the current audio file can finish playing, which can be done with the following configuration:
import { Audio } from 'expo-av';

const AUDIO_CONFIG = {
  interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DUCK_OTHERS,
  playsInSilentModeIOS: true,
  interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DUCK_OTHERS,
  shouldDuckAndroid: true,
  staysActiveInBackground: true,
};

await Audio.setAudioModeAsync(AUDIO_CONFIG);
I have C++ code running on a Raspberry Pi that uses OpenCV to process the camera input (shape and color detection). Here is the thread where I capture images from my Pi camera:
(variable names are in French, sorry about that)
Mat imgOriginal;
VideoCapture camera;
int largeur = camPartage->getLargeur();   // width
int hauteur = camPartage->getHauteur();   // height

camera.open(0);
if ( !camera.isOpened() )
{
    screen->dispStr(10,1,"Cannot open the web cam");
}
else
{
    screen->dispStr(10,1,"Open the web cam");
    camera.set(CV_CAP_PROP_FRAME_WIDTH,largeur);
    camera.set(CV_CAP_PROP_FRAME_HEIGHT,hauteur);
    camera.set(CV_CAP_PROP_FPS,30);
}

while(1)
{
    // Update the capture size if it changed in the shared object
    if(largeur != camPartage->getLargeur() || hauteur != camPartage->getHauteur())
    {
        largeur = camPartage->getLargeur();
        hauteur = camPartage->getHauteur();
        camera.set(CV_CAP_PROP_FRAME_WIDTH,largeur);
        camera.set(CV_CAP_PROP_FRAME_HEIGHT,hauteur);
    }

    camera.grab();
    camera.retrieve(imgOriginal);
    camPartage->setImageCam(imgOriginal); // shared object

    if(thread.destruction == DESTRUCTION_SYNCHRONE)
    {
        pthread_testcancel();
    }
    usleep(20000);
}
Now I want to stream those images to my website, which is hosted on another Raspberry Pi. I have looked into GStreamer, FFmpeg and sockets, but I didn't find any good C++ example that worked for me. I'm trying to get the lowest latency possible.
Some people suggested using raspistill, but I can't open the camera in another program since it is already opened by OpenCV.
If you need more information, let me know; any help is appreciated.
If you need to stream your camera images from an RPi over the network, there are many approaches, depending on your needs.
One approach is to use high-level applications like MJPG streamer, RPi IP Camera, etc.
Another approach is to stream camera images over the network (via RTP, UDP, etc.) with GStreamer, FFmpeg, Raspistill, etc. With this approach, you need a receiver app to pick up the stream (e.g. FFmpeg).
There is also the approach you already mentioned in your question: directly accessing the camera, capturing images, and transferring them manually over the network. With this approach you have more freedom to modify the design (adding your own compression, encryption, etc.), but you have to handle the network protocol yourself.
In your example, you can transfer each frame over the network with a simple TCP/IP socket, or you can build a simple web server. Obviously you can't access the camera from two apps at the same time. You can use v4l2loopback to create multiple camera interfaces and access them from multiple apps, but that alone won't solve your problem.
There are good projects like rpi-webrtc-streamer and streameye, which use low-level protocols to transfer images.
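If you go with the manual capture-and-transfer approach, here is a minimal, untested sketch of the sender side, assuming the receiving Pi listens on a plain TCP socket (the address and port below are placeholders): each frame is compressed to JPEG with cv::imencode and sent with a 4-byte length prefix so the receiver knows where one frame ends and the next begins.

#include <opencv2/opencv.hpp>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <vector>
#include <cstdint>

int main()
{
    cv::VideoCapture camera(0);
    if (!camera.isOpened())
        return 1;

    // Connect to the Pi that hosts the website (placeholder address/port).
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5000);
    inet_pton(AF_INET, "192.168.1.20", &addr.sin_addr);
    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) != 0)
        return 1;

    cv::Mat frame;
    std::vector<uchar> jpeg;
    while (camera.read(frame))
    {
        // JPEG keeps the payload small; lower the quality for lower latency.
        cv::imencode(".jpg", frame, jpeg, {cv::IMWRITE_JPEG_QUALITY, 80});
        uint32_t len = htonl(static_cast<uint32_t>(jpeg.size()));
        send(sock, &len, sizeof(len), 0);        // 4-byte length prefix
        send(sock, jpeg.data(), jpeg.size(), 0); // frame payload
    }
    close(sock);
    return 0;
}

On the receiving side you would read the 4-byte length, then that many bytes, and decode with cv::imdecode; tuning the JPEG quality trades image quality against latency.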
I am trying to create a client/server application to stream and then receive video over RTSP using the FFmpeg libraries. I am done with the client part, which streams the video, and I can receive the video in ffplay using the following command:
ffplay -rtsp_flags listen rtsp://127.0.0.1:8556/live.sdp
My problem is that I want to receive the video in C code, and I need to set the rtsp_flags option in it. Can anyone please help?
P.S. I cannot use ffserver because I am working on Windows, and ffserver is not available for Windows as far as I know.
You need to add the option when opening the stream:
AVDictionary *d = NULL;                      // "create" an empty dictionary
av_dict_set(&d, "rtsp_flags", "listen", 0);  // add an entry

// open rtsp
if ( avformat_open_input( &ifcx, sFileInput, NULL, &d) != 0 ) {
    printf( "ERROR: Cannot open input file\n" );
    return EXIT_FAILURE;
}
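For context, here is a minimal, untested sketch of how that snippet could sit in a complete receiver; ifcx and sFileInput become the local format context and the rtsp://127.0.0.1:8556/live.sdp URL from your ffplay command. Decoding is omitted, and av_register_all() is only needed on older FFmpeg versions (it was removed in FFmpeg 5).

extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>
#include <cstdlib>

int main()
{
    av_register_all();        // required on older FFmpeg; removed in FFmpeg 5+
    avformat_network_init();

    // Ask the RTSP demuxer to act as the listening side, as above.
    AVDictionary *d = NULL;
    av_dict_set(&d, "rtsp_flags", "listen", 0);

    AVFormatContext *ifcx = NULL;
    const char *sFileInput = "rtsp://127.0.0.1:8556/live.sdp";

    if (avformat_open_input(&ifcx, sFileInput, NULL, &d) != 0) {
        printf("ERROR: Cannot open input file\n");
        return EXIT_FAILURE;
    }
    avformat_find_stream_info(ifcx, NULL);

    // Read demuxed packets until the sender stops; decoding would go here.
    AVPacket pkt;
    while (av_read_frame(ifcx, &pkt) >= 0) {
        av_packet_unref(&pkt);
    }

    avformat_close_input(&ifcx);
    avformat_network_deinit();
    return 0;
}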
I am trying, without success, to plot a waveform using QMediaPlayer and a QAudioProbe object to get the QAudioBuffer, but it always fails when I try:
player = new QMediaPlayer;
audio = new QAudioProbe;
QAudioRecorder *recorder = new QAudioRecorder();

if (audio->setSource(player))
{
    // Probing succeeded, audioProbe->isValid() should be true.
    std::cout << "probing succeeded" << std::endl;
    connect(audio, SIGNAL(audioBufferProbed(QAudioBuffer)), this,
            SLOT(processBuffer(QAudioBuffer)));
}
This line:
if (audio->setSource(player))
always returns false!
When I replace QMediaPlayer with QAudioRecorder, the setSource function works well.
Do you have any idea how to do that, or am I going in the wrong direction?
Otherwise, is there another way to split the audio from a video file?
Thanks a lot.
From the QMediaPlayer documentation, I would gather that, since the audioAvailable property can change, the default is that audioAvailable is false.
If there is no audio available, the documentation of setSource states that
"If the media object does not support monitoring audio, this function
will return false."
Try loading an actual piece of media that has audio available (check that first) before trying to set the source.
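A minimal sketch of that order of operations, assuming a Qt 5 multimedia backend that supports probing a QMediaPlayer; the file path and the MyWidget::processBuffer slot are placeholders for your own:

QMediaPlayer *player = new QMediaPlayer(this);
QAudioProbe *probe = new QAudioProbe(this);

// Load a file that actually contains an audio track (placeholder path).
player->setMedia(QUrl::fromLocalFile("/path/to/video.mp4"));

// Only attach the probe once the player reports that audio is available.
connect(player, &QMediaPlayer::audioAvailableChanged, this, [=](bool available) {
    if (available && probe->setSource(player)) {
        connect(probe, &QAudioProbe::audioBufferProbed,
                this, &MyWidget::processBuffer);   // your existing slot
        player->play();
    }
});

Whether QAudioProbe can monitor a QMediaPlayer at all also depends on the multimedia backend on your platform, so setSource() may still return false even when the media has audio.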
I am trying to create a webcam chat program in C++, and while I have been able to get the images captured, sent, and played, I am having trouble doing the same with the audio: the audio lags and very quickly goes out of sync with the video, even when I just play it back to myself.
I found this answer and sample code to be really useful.
Are there any modifications I can make to this code to get it nearly lag free, or is OpenAL not right for this? I am using Windows, but I plan on making a Linux version later.
From the code linked:
ALCdevice* inputDevice = alcCaptureOpenDevice(NULL,FREQ,AL_FORMAT_MONO16,FREQ/2);
Try using a larger buffer:
ALCdevice* inputDevice = alcCaptureOpenDevice(NULL,FREQ,AL_FORMAT_MONO16,FREQ*4);
The polling is very aggressive. Try sleeping in the loop:
while (!done) {
...
}
To:
int sleepMs = 100;            // poll roughly ten times a second instead of spinning
while (!done) {
    ...
    Sleep(sleepMs);           // Windows, milliseconds
    //usleep(sleepMs * 1000); // Linux, microseconds
}