gstreamer playbin2 plays video-only MP4 but fails when the file also has audio

I am using the following pipeline:
gst-launch-0.10 playbin2 uri=file:///mnt/hash.mp4 video-sink="imxv4l2sink" flags=0x57
This works fine for an MP4 file that has no audio in it. But when I pass an MP4 file that has both video and audio, it fails to play.
Can you please help me reconstruct the pipeline so that it works on both kinds of files: MP4 with only video, and MP4 with both audio and video?

I was able to solve this by changing the value of the flags field to disable audio.
gst-launch-0.10 playbin2 uri=file:///mnt/hash.mp4 video-sink="imxv4l2sink" flags=0x51
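For reference, playbin2's flags property is a bitmask (GstPlayFlags); assuming the stock flag values, this change clears the audio (0x02) and text (0x04) bits while keeping the rest:
0x57 = 0x01 (video) + 0x02 (audio) + 0x04 (text) + 0x10 (soft-volume) + 0x40 (native-video)
0x51 = 0x01 (video) + 0x10 (soft-volume) + 0x40 (native-video)
You can confirm the exact flag values on your build with:
gst-inspect-0.10 playbin2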

Related

Trying to stream a virtually created camera video0 device as an RTSP stream for VLC using GStreamer in 24-bit RGB format

I created a video0 device using v4l2loopback and used the sample Git code V4l2Loopback_cpp as an application to stream JPG images from a folder sequentially, after altering some conditions in the code. The code reads the images as 24-bit RGB and sends them to the video0 device, which is fine, because the images run like a proper video in VLC's video device capture. As I mentioned earlier, checking the VLC properties of the video confirms the 24-bit RGB format (summarized below).
I need this video0 device to stream RTSP H.264 video to VLC using the GStreamer library.
I used the following command line for testing, but it shows an internal process error:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=590,height=332,framerate=30/1 ! x264enc ! rtph264pay name=pay0 pt=96
I don't know what the problem is here: is it the 24-bit RGB format or the GStreamer command I use? I need a proper GStreamer command line to make the video0 device stream H.264 RTSP video. Any help is appreciated, thank you.
Image format - JPG (sequence of images passed)
video0 receives - 24-bit RGB image
Output needed - H.264 RTSP stream from video0
Not sure this is a solution, but the following may help:
You may try adjusting the resolution according to what V4L reports (width=584):
v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96
Note that this error may be reported on the v4l2loopback receiver side even though it is actually a sender error. If you're feeding v4l2loopback with a GStreamer pipeline into v4l2sink, you could try adding an identity element, for example:
... ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink
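Pipelines ending in rtph264pay name=pay0 pt=96 (like the one above) are written in the form expected by gst-rtsp-server's test-launch example, so assuming that is how you are serving the stream, the full command would look like this (port and mount point are the test-launch defaults):
./test-launch "( v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )"
VLC can then open it as rtsp://<board-ip>:8554/test.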

GStreamer + OpenCV video processing problem

I am planning on doing VideoCapture from OpenCV for a video file stream / live RTSP stream. However, VideoCapture has a lot of latency when used in my program, so I decided to use a GStreamer pipeline instead. For example, I used:
VideoCapture capVideo("filesrc location=CarsDriving.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink ", CAP_GSTREAMER);
My program is able to run, but if I do something like
capVideo.get(CAP_PROP_FRAME_COUNT)
it always returns -1 because GStreamer prints these warnings:
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (898) open OpenCV | GStreamer warning: unable to query duration of stream
[ WARN:0] global /home/nvidia/Downloads/opencv-4.4.0/source/modules/videoio/src/cap_gstreamer.cpp (935) open OpenCV | GStreamer warning: Cannot query video position: status=1, value=1, duration=-1
How do I get the frame count in OpenCV if I use GStreamer for the video pipeline? I need the frame count for exceptions and also for video processing techniques.
This is a bug, which #alekhin mentioned here and here, along with how to fix it. After making the change you should rebuild OpenCV.
Also you said:
However, the VideoCapture has a lot of latency when used in my program, so I decided to use the gstreamer pipeline instead.
RTSP cameras generally stream H.264/H.265-encoded data. If you decode that data on the CPU rather than the GPU, it will not give you much of a speed increase. Why don't you choose the CAP_FFMPEG flag instead of CAP_GSTREAMER? CAP_FFMPEG will be faster than CAP_GSTREAMER.
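If rebuilding OpenCV is not an option, a workaround is to query the duration and frame rate outside OpenCV and derive an approximate frame count from them, for example with gst-discoverer-1.0 (file name taken from the pipeline above):
gst-discoverer-1.0 CarsDriving.mp4
This prints the container duration and the video frame rate; frame count ≈ duration × frame rate. For variable-frame-rate files this is only an estimate.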

Gstreamer: How to play video from USB camera with spca561 driver?

I have a Logitech QuickCam Chat camera. When I run the v4l2-ctl tool, I can see that it uses the spca561 driver.
I tried to use the cheese tool, but it said "No device was found". However, if I run the following, it works:
vlc v4l2:///dev/video0
I want to use the gstreamer tool. I ran the following command:
gst-launch-1.0 v4l2src ! xvimagesink
but it's not working.
How can I capture video with GStreamer? Why does cheese fail to capture while VLC works?
I was trying to use a very old Logitech USB camera, 'Logitech, Inc. QuickCam Express'... yes, the one from 1999 :-O
The kernel detects it, and it seems to be spca561: gspca_main: spca561-2.14.0 probing 046d:0928
VLC shows nice live video captured from /dev/video0: 352x288 at 30 fps.
guvcview works very well too.
To make it work from gst-launch you can try this:
gst-launch-1.0 -vvv v4l2src device=/dev/video0 ! video/x-bayer,width=176,height=144 ! bayer2rgb ! videoconvert ! autovideosink
It seems that GStreamer is wrong in detecting the supported formats, while VLC just works... so it is probably a GStreamer problem.
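To check what the driver actually advertises (and pick matching caps for the pipeline above), you can list the supported formats with v4l2-ctl; the device path is the one from the question:
v4l2-ctl --device=/dev/video0 --list-formats-ext
If the output lists a Bayer format, that matches the video/x-bayer caps used in the pipeline above.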

GStreamer and Qt with NVIDIA on ARM

I have cross-compiled Qt 5.5.1 for my ARM board and have been trying to play video files using GStreamer and Qt. I have the following GStreamer pipeline, which works fine:
gst-launch-1.0 filesrc location=tracked.mp4 !
qtdemux name=demux demux.video_0 ! queue ! h264parse ! omxh264dec !
nveglglesink -e
Now I try to play the same video with the video player example that comes with Qt Multimedia, and the video is shown in grayscale and replicated 4 times across the screen. I am not sure why, but my ARM board does have 4 processors. See the attached screenshot.
Has anyone come across this problem and perhaps have an idea on how to run such gstreamer pipelines with Qt successfully?
The Qt samples usually use decodebin or playbin to play video,
so it is not abnormal for Qt to play the video differently from your pipeline.
Try playing this video in GStreamer with decodebin or playbin, and check whether the same phenomenon occurs, for example with the command below.
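A quick test along those lines (the URI is illustrative; adjust the absolute path to where tracked.mp4 lives on your board):
gst-launch-1.0 playbin uri=file:///path/to/tracked.mp4
If this shows the same gray, four-way-tiled image, the problem is in the decode/convert path rather than in Qt.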
One more point is that you use nveglglesink in your pipeline, but Qt always uses its own sink element (qtvideorendersink or similar).
There is a chance that your decoded format is not handled well by the Qt sink
(the "gray and duplicated images" phenomenon usually happens because the sink element does not handle the format correctly).
If that is the case, converting to another format before sending to the Qt sink may solve it; see the sketch below.
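A minimal sketch of that idea, assuming a videoconvert element is usable after your decoder (on NVIDIA boards the decoder may output NVMM memory, in which case nvvidconv would be the converter to try instead):
gst-launch-1.0 filesrc location=tracked.mp4 ! qtdemux ! h264parse ! omxh264dec ! videoconvert ! video/x-raw,format=I420 ! autovideosink
If this renders correctly, inserting the same conversion in front of the Qt sink is worth trying.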

Stream video from Panda Board to iPhone

I am using a Panda Board, and I have installed OpenCV and written code for stitching 3 different images from 3 different cams. The stitched image is stored at a matrix location (pointer). The images from the 3 cams are captured and stitched continuously, so it becomes a video. I need to stream that stitched video to an iPhone. Can anyone help me with this? I am really stuck here and need help; it is very important for me.
I would suggest you look at constructing either an MJPEG stream or, better, an RTSP stream (encapsulating MPEG-4, saving bandwidth) based on the RTP protocol.
Say you decide to go with an MJPEG stream: each of your OpenCV IplImage* frames can be converted to a JPEG frame using libjpeg compression. See my answer here: Compressing IplImage to JPEG using libjpeg in OpenCV. You would compress each frame and then create the MJPEG stream; see creating my own MJPEG stream. You would need a web server to run the MJPEG CGI that streams your images; you could look at the lighttpd web server running on the Panda Board. GStreamer is a package that may be helpful in your situation.
On the decoding side (iPhone) you can construct a GStreamer decoding pipeline as follows, say you are streaming MJPEG:
gst-launch -v souphttpsrc location="http://<ip>:<port>/cgi_bin/<mjpegcginame>.cgi" do-timestamp=true is_live=true ! multipartdemux ! jpegdec ! ffmpegcolorspace ! autovideosink
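For the sender side, here is a hedged GStreamer 0.10 sketch of a multipart MJPEG stream over plain TCP; jpegenc, multipartmux, and tcpserversink are standard elements, the port is illustrative, and in your case the stitched OpenCV frames would have to enter the pipeline through appsrc rather than v4l2src:
gst-launch -v v4l2src ! ffmpegcolorspace ! jpegenc ! multipartmux ! tcpserversink port=5000
Note that a raw TCP stream like this is read on the client with tcpclientsrc ! multipartdemux ! jpegdec ...; serving it over HTTP (so that souphttpsrc works as shown above) is what the CGI/web-server route provides.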