The following code works fine to capture the data, decode it, and store the decoded frames in a file "out.h264" without performing an encoding operation.
cv::VideoCapture cap("filesrc location=720p.mp4 ! qtdemux name=demux.video_0 ! h264parse ! video/x-h264, alignment=nal ! omxh264dec ! videoconvert ! appsink sync=false async=false",cv::CAP_GSTREAMER);
VideoWriter video("appsrc ! videoconvert ! filesink location=out.h264", cv::CAP_GSTREAMER,0,60,Size(1280,720),true);
cap >> frame;
video.write(frame);
Now I want to perform an encode operation before storing the data in a file. I tried the code below to encode and then store it in a file using the "jpegenc" encoder.
VideoWriter video("appsrc ! videoconvert ! video/x-raw, width=1280,height=720, framerate=60/1, format=BGR ! jpegenc ! filesink location=out.mp4", cv::CAP_GSTREAMER,0,60,Size(1280,720),true);
cap >> frame;
video.write(frame);
Problem: it writes to the out.mp4 file, but I am not able to play it back. It seems the data is not being stored properly.
Can anyone guide me on how to perform the encoding and then store it to a file?
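For what it's worth, the usual culprit here is that jpegenc only produces a sequence of individual JPEG frames, so writing its output straight into a file named out.mp4 does not create a valid MP4 container, which is why players refuse it. A muxer is needed between the encoder and the filesink. Below is a minimal sketch of the writer side, assuming the same 1280x720 BGR frames as above; avimux, qtmux, and omxh264enc are my assumptions about what is available on your platform, not taken from the original code:

// Sketch 1 (assumed elements): keep jpegenc, but wrap the JPEG frames in an AVI container
VideoWriter video("appsrc ! videoconvert ! jpegenc ! avimux ! filesink location=out.avi", cv::CAP_GSTREAMER, 0, 60, Size(1280,720), true);
// Sketch 2 (assumed elements): H.264-encode into a real MP4 container instead
// VideoWriter video("appsrc ! videoconvert ! omxh264enc ! h264parse ! qtmux ! filesink location=out.mp4", cv::CAP_GSTREAMER, 0, 60, Size(1280,720), true);
cap >> frame;
video.write(frame);
video.release();  // closing the writer should push EOS so the muxer can finalize the file

The release() call matters for container formats like MP4 that write their index at end-of-stream.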
I have a 4K camera which supports MJPEG and YUY2 formats. Currently, I can run
$ gst-launch-1.0 v4l2src device=/dev/video1 ! "video/x-raw,format=YUY2,width=640,height=480,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
and stream the video1 image to two different devices.
Q: How can I pass the MJPEG image from video1 to both video20 and video21, which expect YUY2 format?
In the MJPEG case you need to add image/jpeg caps to v4l2src. After v4l2src you need to convert it to raw video.
GStreamer has jpegdec and avdec_mjpeg plugins. In my current version jpegdec does not support YUY2 output, so I would use avdec_mjpeg. Alternatively, you can use jpegdec with videoconvert (i.e. ... ! jpegdec ! videoconvert ! ...); see the variant after the pipeline below.
The following line should do it:
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! avdec_mjpeg ! "video/x-raw,format=YUY2,width=3840,height=2160,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
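If you prefer jpegdec, a roughly equivalent variant (my sketch, reusing the same devices and resolution) just adds videoconvert to reach YUY2:
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! jpegdec ! videoconvert ! "video/x-raw,format=YUY2,width=3840,height=2160,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21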
I'm new to GStreamer and am stuck trying to form a GStreamer pipeline to encode MP4 video from TIFF files on the NVIDIA Jetson platform. Here is the pipeline I've come up with:
gst-launch-1.0 multifilesrc location=%03d.tiff index=0 start-index=0 stop-index=899 blocksize=720000 num-buffers=900 do-timestamp=true typefind=true ! 'video/x-raw,format=(string)RGB,width=(int)1280,height=(int)720,framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw,format=(string)I420,framerate=(fraction)30/1' ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream,framerate=(fraction)30/1' ! h264parse ! filesink sync=true location=test.mp4 -e
With this, the MP4 file gets created successfully and plays, but the actual video content is all garbled. Any idea what I am doing wrong? Thank you.
You are not doing any demuxing/decoding of your TIFF data, so you are effectively throwing undecoded file bytes at the encoder.
Also, you are setting a lot of caps without having the proper elements in between that could actually convert between those formats.
You should use decodebin to let GStreamer handle most of this automatically, e.g. something like this:
multifilesrc ! decodebin ! videoconvert ! omxh264enc ! h264parse ! filesink
Depending on your encoder, you may want to force the color format to 4:2:0 so that it does not accidentally encode in 4:4:4 (which is not very common and not supported by many encoders):
multifilesrc ! decodebin ! videoconvert ! video/x-raw, format=I420 ! omxh264enc ! h264parse ! filesink
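Spelled out as a full command line (my sketch, assuming decodebin can autoplug a TIFF decoder such as gdkpixbufdec on your Jetson image), it could look like the following; the qtmux is my addition so that test.mp4 is an actual MP4 container rather than a raw H.264 byte-stream:
gst-launch-1.0 multifilesrc location=%03d.tiff start-index=0 stop-index=899 caps="image/tiff,framerate=30/1" ! decodebin ! videoconvert ! video/x-raw,format=I420 ! omxh264enc ! h264parse ! qtmux ! filesink location=test.mp4 -e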
I have a collection of RGBA PNG files, and have verified the presence of an alpha layer on each file with the following pipeline:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! videomixer background=checker ! videoconvert ! ximagesink
I want to take these files and make them into a video file (in any format that GStreamer will readily handle with a simple decodebin). What would be a good set of encoders, containers, and elements to use for this?
I've tried avimux, but no alpha data was saved. I also tried avenc_huffyuv, and that would decode fine as raw data using avdec_huffyuv, but decodebin could not detect it.
Nothing like a good night's sleep to solve an issue.
Apparently the huffyuv encoder and AVI muxer work nicely together to preserve transparency:
gst-launch-1.0 multifilesrc location="pics/%d.png" ! decodebin ! videorate ! videoconvert ! video/x-raw,format=BGRA,framerate=60/1 ! avenc_huffyuv ! avimux ! filesink location=/tmp/test.avi
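To double-check that the alpha actually survives the round trip, the same checker-background trick from above can be pointed at the new file (a sketch, assuming the output path used above):
gst-launch-1.0 filesrc location=/tmp/test.avi ! decodebin ! videoconvert ! video/x-raw,format=BGRA ! videomixer background=checker ! videoconvert ! ximagesink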
Is there any way to take an interlaced h264 stream and re-mux it into Matroska or QT container form in a way that players will correctly identify it as interlaced? My basic pipeline is:
gst-launch-1.0 -e souphttpsrc location="http://hostname/stream1.sdp" ! application/sdp ! sdpdemux ! rtpjitterbuffer ! rtph264depay ! h264parse ! matroskamux ! filesink location=test.mkv
This saves the file just fine, but in all the players I've tried, the interlaced form of the video was not detected.
I'm quite a newbie at using GStreamer. I want to stream video and audio from my C920 webcam to another PC, but I keep getting it wrong when combining things.
I can now stream h264 video from my C920 to another PC using:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink host=172.19.3.103
And view it with:
gst-launch-1.0 udpsrc port=1234 ! application/x-rtp, payload=127 ! rtph264depay ! avdec_h264 ! xvimagesink sync=false
I can also get the audio from the C920 and record it to a file together with a test-image:
gst-launch videotestsrc ! videorate ! video/x-raw-yuv,framerate=5/1 ! queue ! theoraenc ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! vorbisenc ! queue ! mux. oggmux name=mux ! filesink location=stream.ogv
But I'm trying to get something like this (below) to work. It is not working; presumably it's even a very bad combination I've made!
gst-launch v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! queue ! mux. pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! queue ! audioconvert ! queue ! x264enc ! queue ! udpsink host=127.0.0.1 port=1234
You should encode your video before linking it to the mux. Also, I do not see you declaring the type of muxer you are using, and you do not put the audio into the mux.
I am not sure it is even possible to send audio AND video over the same RTP stream in this manner in GStreamer. I know that the RTSP server implementation in GStreamer allows audio and video together, but even there I am not sure whether it is still two streams that are simply abstracted away by the implementation.
You may want to just use two separate streams and pass them through to a gstrtpbin element.
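As a concrete sketch of that two-stream idea, keeping your device names but simply putting video and audio on separate UDP ports (the port numbers are arbitrary, and I have swapped in Opus for the audio only because its RTP payloader is simple to set up; Vorbis over RTP is also possible):
Sender:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay pt=127 config-interval=4 ! udpsink host=172.19.3.103 port=5000 pulsesrc device="alsa_input.usb-046d_HD_Pro_Webcam_C920_F1894590-02-C920.analog-stereo" ! audioconvert ! audioresample ! opusenc ! rtpopuspay pt=96 ! udpsink host=172.19.3.103 port=5002
Receiver:
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=127" ! rtph264depay ! avdec_h264 ! xvimagesink sync=false udpsrc port=5002 caps="application/x-rtp,media=audio,clock-rate=48000,encoding-name=OPUS,payload=96" ! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink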