OpenCV VideoCapture using H.264 Camera - C++

I've got an embedded linux device with a USB Camera attached which provides an MJPEG as well as an H.264 stream. Until now, I used the MJPEG stream (device index 0) to initialize my VideoCapture object and to read frames from the camera. The problem is that the CPU of the device isn't exactly powerful, so decoding utilizes a lot of it, which is why I want to use the H.264 stream (device index 2) that the camera already provides. However, OpenCV refuses to accept this format at all.
Aug 23 08:08:17: VIDEOIO ERROR: V4L2: Pixel format of incoming image is unsupported by OpenCV
Aug 23 08:08:17: VIDEOIO ERROR: V4L: can't open camera by index 2
After several online searches I found that you can hint OpenCV to accept H.264 streams by setting the FourCC value on the capture object with cap.set(CAP_PROP_FOURCC, VideoWriter::fourcc('H', '2', '6', '4'));, but this didn't do the trick.
I also tried "avc1", which didn't work either.
Is there any way to use the camera's native H.264 stream with OpenCV, or am I stuck with MJPEG?
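For reference, the FourCC hint described above looks roughly like this in context. This is only a sketch of what was attempted (assuming OpenCV 3.x, where set() returns false if the backend rejects the property), not a confirmed fix:

#include <opencv2/opencv.hpp>

cv::VideoCapture cap(2);   // the camera's H.264 endpoint
// Ask the backend for H.264 frames. set() returns false if the
// property is rejected, which helps narrow down where it fails.
bool accepted = cap.set(cv::CAP_PROP_FOURCC,
                        cv::VideoWriter::fourcc('H', '2', '6', '4'));

cv::Mat frame;
while (cap.read(frame)) {
    // ... process frame ...
}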

Related

Encoding uncompressed AVI using RAWVIDEO codec and RGB24

I coded an encoder using FFmpeg (C++). The requirements for this encoder are:
The output format should be uncompressed AVI,
Preferably using the RGB24/YUV444 pixel format, since we do not want chroma subsampling.
Most standard players should support the format (Windows Media Player (WMP), VLC).
Using the encoder I wrote, I can write a number of file types right now:
Lossless H.264 encoded video using the YUV420p pixel format and AVI container. (Obviously not uncompressed, and chroma subsampled; however, both WMP and VLC play it without any problem.)
MPEG4 encoded video using the YUV420p pixel format and AVI container. (Obviously not uncompressed, and chroma subsampled; however, both WMP and VLC play it without any problem.)
AYUV encoded video using the YUVA444P pixel format. (Uncompressed as far as I understand, and not chroma subsampled. However, VLC does not play this.)
FFV1 encoded video using the YUV444P pixel format. (Lossless and not chroma subsampled. However, WMP does not play this.)
The above is derived from this very useful post.
So I am now looking into the RAWVIDEO encoder from FFmpeg. I can't get this to work, nor can I find an example in the FFmpeg documentation of how to use this encoder for writing video. Can somebody point me in the right direction or supply sample code for this?
Also, if there is another direction I should follow to meet my requirements, feel free to point me to it.
Thanks in advance
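Not from the original post, but for reference: the RAWVIDEO "encoder" in libavcodec is opened like any other encoder and simply passes the pixels through, so the muxer writes them into the AVI unchanged. A minimal sketch, assuming a reasonably recent FFmpeg and BGR24 (the usual channel order for uncompressed RGB in AVI); the dimensions are placeholders:

extern "C" {
#include <libavcodec/avcodec.h>
}

const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_RAWVIDEO);
AVCodecContext *ctx = avcodec_alloc_context3(codec);
ctx->width     = 640;                 // example dimensions
ctx->height    = 480;
ctx->pix_fmt   = AV_PIX_FMT_BGR24;    // raw RGB; no chroma subsampling
ctx->time_base = AVRational{1, 25};
if (avcodec_open2(ctx, codec, nullptr) < 0) { /* handle error */ }
// Frames then go through avcodec_send_frame()/avcodec_receive_packet()
// exactly as with any other encoder; the "encoded" packet is just the
// raw pixel data, which the AVI muxer writes out unchanged.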

Capturing H264 with Logitech C920 to OpenCV

I've been trying to capture an H264 stream from my two Logitech C920 cameras with OpenCV (on a Raspberry Pi 2). I have come to the conclusion that this is not possible, because it is not yet implemented. I've looked a little at OpenCV/modules/highgui/cap_libv4l.cpp and found that the "VideoCapture" function always converts the pixel format to BGR24. I tried to change this to h264, but only got a black screen. I guess this is because it is not being decoded correctly.
So I made a workaround using:
V4l2loopback
h264_v4l2_rtspserver
Gstreamer-0.10
(You can find the loopback and rtspserver on github)
First I set up a virtual device using v4l2loopback. Then the rtspserver captures in h264 and streams RTSP to my localhost (127.0.0.1). Then I catch it again with GStreamer and pipe it to my virtual v4l2 video device (created by the loopback) using the "v4l2sink" option in gst-launch-0.10.
This solution works, and I can actually connect to the virtual device with the OpenCV VideoCapture and get a full-HD picture without overloading the CPU, but it is nowhere near good enough. I get a roughly 3-second delay, which is too high for my stereo vision application, and it uses a ton of bandwidth.
So I was wondering if anybody knew a way that I could use the v4l2 capture program from Derek Molloy's boneCV (which I know works) to capture in h264, then maybe pipe it to gst-launch-0.10 and from there pipe it to the v4l2sink for my virtual device?
(You can find the capture program here: https://github.com/derekmolloy/boneCV)
The gstreamer command I use is:
"gst-launch-0.10 rtspsrc location=rtsp://admin:pi@127.0.0.1:8554/unicast ! decodebin ! v4l2sink device=/dev/video4"
Or maybe you know what I would need to change in the OpenCV highgui code to be able to capture h264 directly from my device without having to use the virtual device? That would be amazingly awesome!
Here are the links to the loopback and the rtspserver that I use:
github.com/mpromonet/h264_v4l2_rtspserver
github.com/umlaeute/v4l2loopback
Sorry about the weird links; I don't have enough reputation yet to post more links.
I don't know exactly what you would need to change in OpenCV, but I very recently started coding with video on the Raspberry Pi.
I'll share my findings with you.
I got this so far:
can read the C920 h264 stream directly from the camera using the V4L2 API at 30 FPS (if you try to read YUYV buffers instead, the driver limits you to 10, 5, or 2 fps over USB...)
can decode the stream to YUV 4:2:0 buffers using the Broadcom chip on the Raspberry Pi through the OpenMAX IL API
My Work In Progress code is at: GitHub.
Sorry about the code organization. But I think the abstraction I made is more readable than the plain V4L2 or OpenMAX code.
Some code examples:
Reading the camera's h264 stream using the V4L2 wrapper:
device.streamON();
v4l2_buffer bufferQueue;
while (!exit_requested) {
    // capture code
    device.dequeueBuffer(&bufferQueue);
    // use the h264 buffer inside bufferPtr[bufferQueue.index]
    ...
    device.queueBuffer(bufferQueue.index, &bufferQueue);
}
device.streamOFF();
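For context, the plain V4L2 calls that this wrapper hides boil down to roughly the following (a sketch, assuming fd is the already-configured camera file descriptor and the buffers are memory-mapped):

#include <sys/ioctl.h>
#include <linux/videodev2.h>

v4l2_buffer buf = {};
buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
ioctl(fd, VIDIOC_DQBUF, &buf);  // blocks until the driver fills a buffer
// consume buf.bytesused bytes of h264 from the mmap'ed buffer buf.index
ioctl(fd, VIDIOC_QBUF, &buf);   // hand the buffer back to the driver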
Decoding h264 using OpenMAX IL:
BroadcomVideoDecode decoder;
while (!exit_requested) {
    // capture code start
    ...
    // decoding code
    decoder.writeH264Buffer(bufferPtr[bufferQueue.index], bufferQueue.bytesused);
    // capture code end
    ...
}
Check out Derek Molloy on YouTube. He's using a BeagleBone, but this presumably ticks the box:
https://www.youtube.com/watch?v=8QouvYMfmQo

Decode MJPEG to a format usable by libx264

I am compressing frames coming from a webcam with libx264. So far I have used YUY2 raw frames and swscale to transcode the frames to I420, which is usable by x264.
Anyway, I would like to add support for MJPEG webcams (usually a webcam provides both, but MJPEG allows higher frame rates and resolutions). What can I use to transcode MJPEG to some format that can be used by x264?
If you already use swscale, why not use ffmpeg/libav (libavcodec) for decoding MJPEG?
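To sketch that suggestion (my illustration, not the answerer's code; error handling omitted, assuming an FFmpeg version with the send/receive API): decode each JPEG frame with libavcodec's MJPEG decoder, then feed the result through the same swscale path already used for YUY2:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
}

const AVCodec *codec = avcodec_find_decoder(AV_CODEC_ID_MJPEG);
AVCodecContext *ctx = avcodec_alloc_context3(codec);
avcodec_open2(ctx, codec, nullptr);

AVPacket *pkt = av_packet_alloc();
// pkt->data / pkt->size = one complete JPEG frame from the webcam
AVFrame *frame = av_frame_alloc();
avcodec_send_packet(ctx, pkt);
avcodec_receive_frame(ctx, frame);  // typically AV_PIX_FMT_YUVJ422P

// convert whatever the decoder produced to I420 for libx264
SwsContext *sws = sws_getContext(frame->width, frame->height,
                                 (AVPixelFormat)frame->format,
                                 frame->width, frame->height,
                                 AV_PIX_FMT_YUV420P,
                                 SWS_BILINEAR, nullptr, nullptr, nullptr);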

How to decode mpeg motion vector using OpenCV in C++?

I want to decode the MPEG motion vectors using OpenCV in C++.
Is there any function in OpenCV through which we can get this?
Brightness may not be constant throughout the video in my case.
I am referring to the paper Efficient camera motion characterization for MPEG video indexing.
It says to use partial decoding to get motion vectors from the MPEG-compressed video sequence.
But I am unable to determine how to do this using OpenCV.
How to proceed?
OpenCV uses FFmpeg, Video4Linux, or QuickTime as its backend video encoder/decoder. It cannot access internal data or partial decoding results, because it is just a wrapper over other libraries. All it does is take frames from the backend and convert them to IplImage or cv::Mat.
If you want to access internal data, you should work with the FFmpeg code directly.
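For what it's worth, newer FFmpeg releases (which may postdate this answer) can export per-frame motion vectors as frame side data when the decoder is opened with the +export_mvs flag, so you don't have to modify the FFmpeg source itself. A minimal sketch using libavcodec directly, assuming ctx, codec, and frame are set up as usual:

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavutil/motion_vector.h>
}
#include <cstdio>

// when opening the decoder, request motion-vector export
AVDictionary *opts = nullptr;
av_dict_set(&opts, "flags2", "+export_mvs", 0);
avcodec_open2(ctx, codec, &opts);

// after avcodec_receive_frame(ctx, frame):
AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_MOTION_VECTORS);
if (sd) {
    const AVMotionVector *mvs = (const AVMotionVector *)sd->data;
    for (size_t i = 0; i < sd->size / sizeof(*mvs); ++i)
        printf("src=%d (%d,%d) -> (%d,%d)\n", mvs[i].source,
               mvs[i].src_x, mvs[i].src_y, mvs[i].dst_x, mvs[i].dst_y);
}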

How to write an OpenCV image (IplImage) to a V4L2 loopback device?

As the title says, I am not sure how to write an IplImage to a V4L2 loopback device. I know how to write to a device, as I have posted here: How to write/pipe to a virtual webcam created by V4L2loopback module?
But now I am not sure how exactly to write IplImage objects to the device. If I just write image->imageData (where image is an IplImage*), then when I view the device using "luvcview", malformed frames show up for about a second, and then it throws the following error.
luvcview 0.2.6
SDL information:
Video driver: x11
A window manager is available
Device information:
Device path: /dev/video3
Stream settings:
Frame format: YUYV (MJPG is not supported by device)
Frame size: 520x474 (requested size 640x480 is not supported by device)
Frame rate: 30 fps
libv4l2: error dequeuing buf: Invalid argument
Unable to dequeue buffer: Invalid argument
Error grabbing
Cleanup done. Exiting ...
Could it be because I have not converted the OpenCV images to a V4L2 pixel format, or because the V4L2 arguments do not match the IplImage properties? If so, how do I do that?
If anyone knows what this error means, please let me know.
I decided to post this question separately, as the earlier question is about writing to the device in general; this one is specifically about writing an IplImage to it.
Could anyone please give me a code snippet that shows how to write an IplImage to a V4L2 loopback device?
I worked out that the above issue was not actually an issue: it seems that luvcview and Skype do not fully support v4l2. If I view the loopback device using VLC, it works fine.
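For completeness, one approach that avoids pixel conversion entirely is to set the loopback device's output format to match the IplImage memory layout (BGR24) and write the raw bytes. A sketch, assuming v4l2loopback accepts write() after VIDIOC_S_FMT and a reader that supports BGR24 (as the log above shows, luvcview negotiates YUYV, so it would still need a conversion step); image is the IplImage* to publish:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int fd = open("/dev/video3", O_WRONLY);
v4l2_format fmt = {};
fmt.type                 = V4L2_BUF_TYPE_VIDEO_OUTPUT;
fmt.fmt.pix.width        = image->width;
fmt.fmt.pix.height       = image->height;
fmt.fmt.pix.pixelformat  = V4L2_PIX_FMT_BGR24;  // IplImage's default channel order
fmt.fmt.pix.field        = V4L2_FIELD_NONE;
fmt.fmt.pix.bytesperline = image->widthStep;
fmt.fmt.pix.sizeimage    = image->imageSize;
ioctl(fd, VIDIOC_S_FMT, &fmt);

// per frame: one write() of the packed pixel data
write(fd, image->imageData, image->imageSize);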