Sound capture hang/freeze during v4l2 VIDIOC_QBUF calls - c++

I'm trying to capture sound with ALSA (asound) and video via V4L2 at the same time on a Raspberry Pi; each works fine on its own. But when they run together, some audio frames are lost during the VIDIOC_QBUF ioctl call:
ioctl(fd, VIDIOC_QBUF, &bufferinfo[i])
and capturing audio with
snd_pcm_readi (capture_handle, input_buffer_audio, audio_frames)
On every VIDIOC_QBUF, snd_pcm_readi loses/hangs for around ~300 audio frames. I also tried running audio capture and video capture as separate test apps on the RPi; the problem reproduces in that case too.
I don't see any CPU overload or anything else indicating the problem (the load is below 12% on my RPi 3B), and the problem does not reproduce on a Linux PC with the same ALSA + V4L2 stack and the same camera (Logitech C270).
With a 30 fps camera this is a big problem, because losing that many frames makes the sound laggy.
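For context, the combined capture loop looks roughly like this (a sketch of the setup described above; capture_handle, input_buffer_audio, audio_frames, fd and bufferinfo come from the snippets, everything else is assumed):

while (running) {
    // audio: blocks until audio_frames frames have been read
    snd_pcm_readi(capture_handle, input_buffer_audio, audio_frames);
    // video: dequeue a filled buffer, process it, re-queue it
    ioctl(fd, VIDIOC_DQBUF, &bufferinfo[i]);
    // ... process the video frame ...
    ioctl(fd, VIDIOC_QBUF, &bufferinfo[i]);  // the call during which audio frames are lost
}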

Related

Getting frame timestamp on RTSP playback

I'm making a simple camera-playback video player in Qt with GStreamer. Is it possible to get the frame's date/time (or just the time) that the OSD burns into the picture (see the example timestamp on the image)? I'm currently working with Hikvision cameras. I tried to find it in RTP packets dumped with Wireshark, but they only carry timestamps relative to the first frame.

Too high bandwidth in capturing multiple STILL images from multiple webcams with OpenCV

I'm doing a project in which many webcams are each used to capture a single still image, using OpenCV in C++.
As noted in other questions, multiple HD webcams may use too much USB bandwidth and exceed the limit.
Unlike those, what I need is only a still image (a single frame) from each webcam. Let's say I have 15 webcams connected to a PC, and every 10 seconds I would like to get still images (one image per webcam, 15 images total) within 5 seconds. The images are then analysed and a result is sent to an Arduino.
Approach 1: Open all webcams all the time and capture images every 10 seconds.
Problem: The bandwidth of USB is not enough.
Approach 2: Open only one webcam at a time, capture from it, then close it and open the next one.
Problem: Switching from one webcam to the next takes at least 5 seconds per switch.
What I need is only a single frame of an image from each webcam and not a video.
Are there any suggestions for this problem, besides load-balancing the USB bus and adding USB PCI cards?
Thank you.
In OpenCV you deal with the webcam as a stream, which means it runs as video. However, I think this kind of problem should be solved using the webcam's own API if one is available; there should be a way to take a still image and return it to your program as data. You may want to search the camera vendor's website for this.
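For what it's worth, the one-frame-per-open pattern of approach 2 looks like this in OpenCV (a sketch only; the device indexing, warm-up count and function name are my assumptions, and it still inherits the per-open switching cost described above):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Mat> grabStills(int numCams)
{
    std::vector<cv::Mat> stills;
    for (int i = 0; i < numCams; ++i) {
        cv::VideoCapture cap(i);          // opening claims the USB bandwidth
        if (!cap.isOpened())
            continue;
        for (int w = 0; w < 3; ++w)       // discard a few frames so that
            cap.grab();                   // auto-exposure can settle
        cv::Mat frame;
        if (cap.read(frame))
            stills.push_back(frame.clone());
        cap.release();                    // free the bus for the next camera
    }
    return stills;
}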

Capturing H264 with logitech C920 to OpenCV

I've been trying to capture an H264 stream from my two Logitech C920 cameras with OpenCV (on a Raspberry Pi 2). I have come to the conclusion that this is not possible because it is not yet implemented. I've looked a little at OpenCV/modules/highgui/cap_libv4l.cpp and found that the VideoCapture function always converts the pixel format to BGR24. I tried to change this to H264, but only got a black screen; I guess this is because it is not being decoded the right way.
So I made a workaround using:
V4l2loopback
h264_v4l2_rtspserver
Gstreamer-0.10
(You can find the loopback and rtspserver on github)
First I set up a virtual device using v4l2loopback. Then the rtspserver captures in H264 and streams RTSP to my localhost (127.0.0.1). Then I catch it again with GStreamer and pipe it into the virtual V4L2 video device created by the loopback module, using the "v4l2sink" element in gst-launch-0.10.
This solution works: I can actually connect to the virtual device with the OpenCV VideoCapture and get a full-HD picture without overloading the CPU. But it is nowhere near good enough; I get roughly 3 seconds of delay, which is too high for my stereo-vision application, and it uses a ton of bandwidth.
So I was wondering if anybody knows a way to use the V4L2 capture program from Derek Molloy's boneCV (which I know works) to capture in H264, then pipe that to gst-launch-0.10 and from there to the v4l2sink for my virtual device?
(You can find the capture program here: https://github.com/derekmolloy/boneCV)
The gstreamer command I use is:
gst-launch-0.10 rtspsrc location=rtsp://admin:pi@127.0.0.1:8554/unicast ! decodebin ! v4l2sink device=/dev/video4
Or maybe you know what I would need to change in the OpenCV highgui code to capture H264 directly from my device, without having to use the virtual device? That would be amazingly awesome!
Here are the links to the loopback and rtspserver that I use:
github.com/mpromonet/h264_v4l2_rtspserver
github.com/umlaeute/v4l2loopback
I don't know exactly what you would need to change in OpenCV, but I recently started coding video on the Raspberry Pi, so I'll share my findings.
This is what I have so far:
I can read the C920's H264 stream directly from the camera using the V4L2 API at 30 fps (if you try to read YUYV buffers instead, the driver limits you to 10, 5 or 2 fps over USB...)
I can decode the stream to YUV 4:2:0 buffers using the Broadcom chip on the Raspberry Pi via the OpenMAX IL API
My work-in-progress code is at: GitHub.
Sorry about the code organization, but I think the abstraction I made is more readable than plain V4L2 or OpenMAX code.
Some code examples:
Reading camera h264 using V4L2 Wrapper:
device.streamON();
v4l2_buffer bufferQueue;
while (!exit_requested) {
    // capture code
    device.dequeueBuffer(&bufferQueue);
    // use the h264 buffer inside bufferPtr[bufferQueue.index]
    ...
    device.queueBuffer(bufferQueue.index, &bufferQueue);
}
device.streamOFF();
Decoding h264 using OpenMax IL:
BroadcomVideoDecode decoder;
while (!exit_requested) {
    // capture code start
    ...
    // decoding code
    decoder.writeH264Buffer(bufferPtr[bufferQueue.index], bufferQueue.bytesused);
    // capture code end
    ...
}
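For reference, the main thing such a V4L2 wrapper does underneath is ask the driver for compressed frames before setting up the buffer queue. Here is a rough plain-V4L2 sketch (the function name and resolution are my assumptions; it presumes a device like the C920 whose driver exposes V4L2_PIX_FMT_H264):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int openH264Camera(const char *dev)
{
    int fd = open(dev, O_RDWR);
    if (fd < 0)
        return -1;
    v4l2_format fmt = {};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 1920;
    fmt.fmt.pix.height      = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_H264;   // compressed frames, not YUYV
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) {       // fails if the driver
        close(fd);                                 // does not offer H264
        return -1;
    }
    // VIDIOC_REQBUFS / mmap / VIDIOC_STREAMON follow as usual
    return fd;
}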
Check out Derek Molloy on YouTube. He's using a BeagleBone, but it presumably ticks this box:
https://www.youtube.com/watch?v=8QouvYMfmQo

Programmatically capturing video with the FFmpeg libraries (not the libav fork) with variable frame rate in C++

I am working on a simulator in C++ and OpenGL, and I wanted to add some video capture capabilities (cross-platform would be a requirement here). I decided to work with FFmpeg, since I can feed my rendered frames directly into a video. So far so good, but a 3D rendering engine is usually far from having a constant frame rate, and I don't think forcing a constant rate is a good idea. So I am trying to figure out how to capture variable-frame-rate video with FFmpeg, or how to get from the simulator's variable frame rate to a constant frame rate for the video. Can anybody help me out here? How are videos usually captured in variable-frame-rate environments?
Variable frame rate is mostly an issue at the muxing stage, since your container (e.g. good ol' AVI) might not support VFR. As long as you're muxing into a format that supports per-frame timestamps, you should be OK; good examples are MKV (Matroska) and MP4. Then, as long as the packet timestamps (AVPacket.pts/dts) are set correctly during encoding/muxing, your video will be VFR.
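A minimal sketch of the timestamping side with the libav* encoding API (the names enc, st and the 1/1000 time base are my assumptions, not a definitive recipe): give each AVFrame a pts derived from the wall clock, encode it, rescale the packet timestamps, and let the muxer do the rest.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <chrono>

// Assumes enc->time_base was set to {1, 1000} so millisecond spacing is exact.
void encodeFrame(AVFormatContext *fmt, AVStream *st, AVCodecContext *enc,
                 AVFrame *frame, std::chrono::steady_clock::time_point t0)
{
    using namespace std::chrono;
    // No fixed frame rate: the pts is simply "milliseconds since start".
    frame->pts = duration_cast<milliseconds>(steady_clock::now() - t0).count();

    avcodec_send_frame(enc, frame);
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc, pkt) == 0) {
        // Rescale from the encoder time base to the stream time base so the
        // muxer writes correct per-packet pts/dts.
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(fmt, pkt);
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
}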

WASAPI lagging playback

I'm writing a Windows Store program in C++ which plays back the microphone. I have to modify the bits before sending them to the speakers. At first I just wanted to play back the microphone without any effect, but it is lagging. The frequency and bit depth are the same on both ends (24-bit, 192000 Hz); I also tried 24-bit, 96000 Hz. I debugged it, and it seems the speaker side is faster and has to wait for data from the microphone, as if the speakers were running at a higher frequency, even though according to the settings they aren't. Does anyone have the slightest idea what the problem is here?
When you say there is some 'lag', do you mean there is a delay between when the capture device delivers data and when the playback device renders it, or do you mean that the audio stream is 'chopped', with small pauses in between each chunk of samples being rendered?
If there's a delay in playback, I would look at the latency value with which you've initialized the audio capture client.
If there are small pauses, then I would recommend double buffering the sample data, so that one buffer is being rendered while the other is being refilled from the audio capture device.
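The double-buffering idea, sketched generically (plain C++ threads, not actual WASAPI calls; fillFromCapture and renderToSpeakers are hypothetical stand-ins for the IAudioCaptureClient/IAudioRenderClient work):

#include <array>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical stand-ins for the real capture/render calls.
void fillFromCapture(std::vector<uint8_t> &buf) { buf.assign(4096, 0); }
void renderToSpeakers(const std::vector<uint8_t> &buf) { (void)buf; }

std::array<std::vector<uint8_t>, 2> bufs;   // ping-pong buffers
std::mutex m;
std::condition_variable cv;
int ready = -1;                             // index of the buffer ready to play

void captureThread()
{
    int idx = 0;
    for (;;) {
        fillFromCapture(bufs[idx]);         // fill one buffer...
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return ready == -1; });  // ...publish it once the last one played
        ready = idx;
        lk.unlock();
        cv.notify_one();
        idx = 1 - idx;                      // switch to the other buffer
    }
}

void renderThread()
{
    for (;;) {
        int idx;
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return ready >= 0; });
            idx = ready;
        }
        renderToSpeakers(bufs[idx]);        // play this buffer while the
                                            // capture thread fills the other
        {
            std::lock_guard<std::mutex> lk(m);
            ready = -1;                     // hand the buffer back
        }
        cv.notify_one();
    }
}

int main()
{
    std::thread c(captureThread), r(renderThread);
    c.join();
    r.join();
}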