I have cross-compiled Qt 5.5.1 for my ARM board and have been trying to play video files using GStreamer and Qt. The following GStreamer pipeline works fine:
gst-launch-1.0 filesrc location=tracked.mp4 !
qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec !
nveglglessink -e
Now when I try to play the same video with the video player example that comes with Qt Multimedia, the video is shown in grayscale and replicated 4 times across the screen. I am not sure why, but my ARM board does have 4 processors. See the attached screenshot.
Has anyone come across this problem and perhaps have an idea on how to run such gstreamer pipelines with Qt successfully?
The Qt samples usually use decodebin or playbin to play video, so it is not abnormal for Qt to play the video differently from your pipeline. Try playing this video in GStreamer with decodebin or playbin and check whether the same phenomenon occurs.
One more point: you use nveglglessink in your pipeline, but Qt always uses its own sink element (qtvideorendersink or similar). There is a chance that your decoded format is not handled well by the Qt sink; the "gray and duplicated images" phenomenon usually happens because the sink element does not handle the format correctly. If that is the case, converting to another format before sending the frames to the Qt sink may solve it.
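For a quick command-line check, playbin should behave roughly like Qt's backend does (the file path is a placeholder):
gst-launch-1.0 playbin uri=file:///path/to/tracked.mp4
If playbin shows the same gray, duplicated picture, a hedged variant of your working pipeline that converts to system memory before a generic sink may help narrow down the format issue; this assumes nvvidconv is available on your board:
gst-launch-1.0 filesrc location=tracked.mp4 ! qtdemux ! h264parse ! omxh264dec ! nvvidconv ! video/x-raw ! videoconvert ! autovideosink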
I have created a Video0 device using v4l2loopback and used the sample code V4l2Loopback_cpp as an application to stream JPEG images from a folder sequentially, after altering some conditions in the code. The code reads the images as 24-bit RGB and sends them to the Video0 device, which is fine, because the images run like a proper video in VLC's video device capture. As I mentioned, checking the VLC properties of the video shows the content summarized below.
I need this Video0 device to stream RTSP H.264 video to VLC using the GStreamer library.
I used the following command on the command line for testing, but it shows an internal process error:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=590,height=332,framerate=30/1 ! x264enc ! rtph264pay name=pay0 pt=96
I don't know what the problem is here: is it the 24-bit JPEG format or the GStreamer command I use? I need a proper GStreamer command line for the Video0 device to stream H.264 RTSP video. Any help is appreciated, thank you.
Image format - JPEG (image sequence passed in)
Video0 receives - 24-bit RGB images
Output needed - H.264 RTSP stream from Video0
Not sure this is a solution, but the following may help:
You may try adjusting the resolution according to what V4L reports (width=584):
v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96
Note that this error may happen on the v4l2loopback receiver side while actually being a sender error. If you're feeding v4l2loopback with a GStreamer pipeline into v4l2sink, you could try adding an identity element such as:
... ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink
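The name=pay0 pt=96 convention suggests the pipeline is intended for gst-rtsp-server. Assuming you have built its test-launch example (the paths and defaults below are assumptions), a sketch of serving the stream would be:
./test-launch "( v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )"
VLC should then be able to open rtsp://<board-ip>:8554/test, the example's default port and mount point.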
I am planning on doing VideoCapture from OpenCV for a video file stream / live RTSP stream. However, VideoCapture has a lot of latency when used in my program, so I decided to use a GStreamer pipeline instead. For example, I used
VideoCapture capVideo("filesrc location=CarsDriving.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ", CAP_GSTREAMER);
My program is able to run, but if I do something like
capVideo.get(CAP_PROP_FRAME_COUNT)
it always returns -1 because GStreamer gives these warnings:
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=1, duration=-1
How do I get the frame count in OpenCV if I use GStreamer for the video pipeline? I need the frame count for exception handling and also for video-processing techniques.
This is a bug which @alekhin mentioned here and here, along with how to fix it. After making the change, you should rebuild OpenCV.
Also you said:
However, VideoCapture has a lot of latency when used in my program,
so I decided to use a GStreamer pipeline instead.
RTSP cameras generally stream H.264/H.265-encoded data. If you are decoding that data on the CPU rather than the GPU, it will not give you much of a speed increase. Why don't you choose the CAP_FFMPEG flag instead of CAP_GSTREAMER? CAP_FFMPEG will be faster than CAP_GSTREAMER.
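Separately, until the fix is in your build, a hedged workaround sketch for the frame count is to open the file a second time with the FFmpeg backend purely to query metadata (this assumes your OpenCV was built with FFmpeg support):
#include <opencv2/videoio.hpp>

// Decode through the GStreamer pipeline as before.
cv::VideoCapture capVideo("filesrc location=CarsDriving.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink", cv::CAP_GSTREAMER);

// Query the container metadata via FFmpeg, which can report the duration.
cv::VideoCapture meta("CarsDriving.mp4", cv::CAP_FFMPEG);
double frameCount = meta.get(cv::CAP_PROP_FRAME_COUNT); // -1 here too would mean the container has no usable index
meta.release();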
I have a Logitech QuickCam Chat camera. When I run the "v4l2-ctl" tool, I can see that it uses the spca561 driver.
I tried to use the "cheese" tool, but it said "No device was found". However, if I run the following, it works:
vlc v4l2:///dev/video0
I want to use the "gstreamer" tool, so I ran the following command:
gst-launch-1.0 v4l2src ! xvimagesink
but it's not working.
How can I capture video with GStreamer? Why does cheese not capture, while VLC does?
I was trying to use a very old Logitech USB camera, 'Logitech, Inc. QuickCam Express'... yes, the one from 1999 :-O
The kernel detects it, and it seems to be spca561: gspca_main: spca561-2.14.0 probing 046d:0928
VLC shows nice live video when capturing from /dev/video0: 352x288 at 30 fps.
guvcview works very well too.
To make it work from gst-launch, you can try this:
gst-launch-1.0 -vvv v4l2src device=/dev/video0 ! video/x-bayer,width=176,height=144 ! bayer2rgb ! videoconvert ! autovideosink
It seems that GStreamer gets the supported-format detection wrong, while VLC just works... so it may be a GStreamer problem.
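To see which pixel formats the driver actually advertises (and therefore which caps gst-launch can negotiate), a quick check is to list them with v4l2-ctl; the device path is an assumption:
v4l2-ctl --list-formats-ext -d /dev/video0
If only Bayer variants are listed, that matches the video/x-bayer caps used in the pipeline above.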
I am using a Panda Board and have installed OpenCV, and I wrote code for stitching 3 different images from 3 different cams; the stitched image is stored at a matrix location (pointer). The 3 cams' images are captured and stitched continuously, so it becomes a video. I need to stream that stitched image to an iPhone. Can anyone help me with this? I am really stuck here and need help; it is very important for me.
I would suggest you look at constructing either an MJPEG stream or, better, an RTSP stream (encapsulating MPEG-4, saving bandwidth) based on the RTP protocol.
Say you decide to go with an MJPEG stream. Then each of your OpenCV IplImage* frames can be converted to a JPEG frame using libjpeg compression; see my answer here: Compressing IplImage to JPEG using libjpeg in OpenCV. You would compress each frame and then create the MJPEG stream; see Creating my own MJPEG stream. You would need a web server to run an MJPEG CGI that streams your image stream; you could look at the lighttpd web server running on the Panda Board. GStreamer is a package that may be helpful in your situation.
On the decoding side (iPhone) you can construct a GStreamer decoding pipeline as follows, say you are streaming MJPEG:
gst-launch -v souphttpsrc location="http://<ip>:<port>/cgi_bin/<mjpegcginame>.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! ffmpegcolorspace ! autovideosink
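As a sketch of the compression step using OpenCV's C++ API instead of raw libjpeg (my substitution, not the linked answer's exact method), each stitched frame can be JPEG-encoded in memory; the resulting bytes form the body of one multipart section of the MJPEG stream:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

// Encode one stitched frame to an in-memory JPEG. The returned bytes become
// the body of one "--boundary / Content-Type: image/jpeg" part of the
// multipart MJPEG response served by the CGI.
std::vector<uchar> encodeFrame(const cv::Mat& stitched)
{
    std::vector<uchar> jpeg;
    const std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 80 };
    cv::imencode(".jpg", stitched, jpeg, params);
    return jpeg;
}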
With the Microsoft LifeCam Cinema (on Ubuntu) in guvcview, I get 30 fps at 1280x720. In my OpenCV program I only get 10 fps (only queryframe and showimage; no image processing is done). I found out that it is a problem in GStreamer. A solution is to set a capsfilter in GStreamer; in a terminal I can do it like this:
gst-launch v4l2src device=/dev/video0 !
'video/x-raw-yuv,width=1280,height=720,framerate=30/1' ! xvimagesink
This works! The question is:
How do I implement this in my C++/OpenCV program?
Or is it possible to set GStreamer to always use this capsfilter?
I already found this question (Option 3), but I can't get it working with a webcam.
Unfortunately, there is no way to set the format (YUV) of the frames retrieved from the camera, but for the rest of the settings you could try using cvSetCaptureProperty():
cvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 30);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1280);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 720);
If setting the frame size doesn't work, I strongly suggest you read this post: Increasing camera capture resolution in OpenCV
My bad, I was setting my webcam to 1280x800, which forces it to use YUYV with at most 10 fps. Setting it back to 1280x720 in my program gave me 30 fps.
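For completeness, with a modern OpenCV build that includes the GStreamer backend (an assumption about your build), the capsfilter from the terminal command can be passed directly into VideoCapture as a pipeline string, here translated to GStreamer 1.0 caps syntax:
#include <opencv2/videoio.hpp>

// The same caps as the terminal command, ending in appsink so OpenCV
// receives the frames.
cv::VideoCapture cap(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,width=1280,height=720,framerate=30/1 ! "
    "videoconvert ! appsink",
    cv::CAP_GSTREAMER);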