I am capturing and processing video frames with OpenCV, and I would like to write them out as an H.265 video file. I am struggling to get a working GStreamer pipeline from OpenCV.
GStreamer works fine by itself. In particular, I am able to run this command, which encodes video very quickly (thanks to GPU acceleration) and saves it to an MKV file:
gst-launch-1.0 videotestsrc num-buffers=90 ! 'video/x-raw, format=(string)I420, width=(int)640, height=(int)480' ! omxh265enc ! matroskamux ! filesink location=test.mkv
Now I would like to do the same thing from within my OpenCV application. My code is something like:
Mat img_vid = Mat(1024, 1024, CV_8UC3);
VideoWriter video;
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv", 0, (double)25, cv::Size(1024, 1024), true);
if (!video.isOpened()) {
    printf("can't create writer\n");
    return -1;
}
while ( ... ) {
    // Capture frame into img_vid => that works fine
    video.write(img_vid);
    ...
}
At first sight, this seems to work, but what it actually does is create a file named "appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv" and fill it with uncompressed video frames, completely ignoring the fact that this is a GStreamer pipeline.
I have tried other pipelines, but they result in a variety of errors:
video.open("appsrc ! autovideoconvert ! omxh264enc ! 'video/x-h264, streamformat=(string)byte-stream' ! h264parse ! qtmux ! filesink location=test.mp4 -e", 0, (double)25, cv::Size(1024, 1024), true);
Which results in:
(Test:5533): GStreamer-CRITICAL **: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
OpenCV Error: Unspecified error (GStreamer: cannot find appsrc in manual pipeline) in CvVideoWriter_GStreamer::open, file /home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp, line 1363
VIDEOIO(cvCreateVideoWriter_GStreamer(filename, fourcc, fps, frameSize, is_color)): raised OpenCV exception:
/home/ubuntu/opencv/modules/videoio/src/cap_gstreamer.cpp:1363: error: (-2) GStreamer: cannot find appsrc in manual pipeline in function CvVideoWriter_GStreamer::open
I also tried the simple:
video.open("appsrc ! autovideosink", 0, (double)25, cv::Size(1024, 1024), true);
which yields:
GStreamer Plugin: Embedded video playback halted; module appsrc0 reported: Internal data flow error.
I am using OpenCV 3.1 with GStreamer support. The hardware is a Jetson TX1 with L4T 24.2.1.
I encountered a similar problem before. Since the pipeline/file name ends with .mkv, OpenCV interprets it as a video file instead of a pipeline.
You can try ending it with a dummy space after the .mkv:
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv ", 0, (double)25, cv::Size(1024, 1024), true);
or with a dummy property like
video.open("appsrc ! autovideoconvert ! omxh265enc ! matroskamux ! filesink location=test.mkv sync=false", 0, (double)25, cv::Size(1024, 1024), true);
Related
I want to send/receive videotestsrc images encoded as JPEG over RTP with GStreamer.
I tried the following commands. The test image can be sent and received.
However, the received image is not correct: it is broken, appearing dark and misaligned.
How can I send and receive the images correctly?
send command
gst-launch-1.0 videotestsrc ! videoconvert ! video/x-raw, format=YUY2 ! jpegenc ! rtpjpegpay ! udpsink host=127.0.0.1 port=5000
receive command
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp, media=video, encoding-name=JPEG, framerate=30/1, payload=26, clock-rate=90000 ! rtpjpegdepay ! jpegdec ! videoconvert ! autovideosink
result: (screenshot of the dark, misaligned received image omitted)
environment
Windows 10
GStreamer 1.18.3
#ken
This works great under Linux (Ubuntu 20.04), so I suppose there's something about Windows and/or Direct3D. Here the received image looks correct (screenshot omitted), so it seems good.
The following code works fine to capture the data, decode it, and store the decoded data in a file "out.h264" without performing an encoding operation.
cv::VideoCapture cap("filesrc location=720p.mp4 ! qtdemux name=demux.video_0 ! h264parse ! video/x-h264, alignment=nal ! omxh264dec ! videoconvert ! appsink sync=false async=false",cv::CAP_GSTREAMER);
VideoWriter video("appsrc ! videoconvert ! filesink location=out.h264", cv::CAP_GSTREAMER,0,60,Size(1280,720),true);
cap >> frame;
video.write(frame);
Now I want to perform an encode operation before storing the data in a file. I tried the code below to encode and then store it using the jpegenc encoder.
VideoWriter video("appsrc ! videoconvert ! video/x-raw, width=1280,height=720, framerate=60/1, format=BGR ! jpegenc ! filesink location=out.mp4", cv::CAP_GSTREAMER,0,60,Size(1280,720),true);
cap >> frame;
video.write(frame);
Problem: it writes to the out.mp4 file, but I am not able to play the file back. It seems the data is not being stored properly.
Can anyone guide me on how to perform the encoding and then store the result to a file?
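For what it's worth, one likely cause is that jpegenc emits a bare sequence of JPEG images with no container, so a player cannot parse out.mp4. A minimal sketch that wraps the JPEG frames in an AVI container instead (the element names are standard GStreamer, but this exact pipeline is an untested assumption for this setup):
// Sketch (untested assumption): avimux gives the MJPEG stream a container
// structure that players can parse.
VideoWriter video("appsrc ! videoconvert ! video/x-raw, width=1280, height=720, framerate=60/1, format=BGR ! jpegenc ! avimux ! filesink location=out.avi",
                  cv::CAP_GSTREAMER, 0, 60, Size(1280, 720), true);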
I have a 4K camera which supports the MJPEG and YUY2 formats. Currently, I can run
$ gst-launch-1.0 v4l2src device=/dev/video1 ! "video/x-raw,format=YUY2,width=640,height=480,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
and stream the video1 image to two different devices.
Q: How do I pass the MJPEG image from video1 to both video20 and video21, which expect the YUY2 format?
In the MJPEG case you need to add image/jpeg caps to v4l2src. After v4l2src you need to convert it to raw video.
GStreamer has jpegdec and avdec_mjpeg plugins. In my current version jpegdec does not support YUY2 output, so I would use avdec_mjpeg. Alternatively, you can use jpegdec with videoconvert (i.e. ... ! jpegdec ! videoconvert ! ...).
The following line should do it:
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! avdec_mjpeg ! "video/x-raw,format=YUY2,width=3840,height=2160,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21
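The jpegdec variant mentioned above would be spelled out the same way; a sketch with identical caps, untested on this particular camera:
gst-launch-1.0 v4l2src device=/dev/video1 ! "image/jpeg,width=3840,height=2160,framerate=30/1" ! jpegdec ! videoconvert ! "video/x-raw,format=YUY2,width=3840,height=2160,framerate=30/1" ! tee name=t ! queue ! v4l2sink device=/dev/video20 t. ! queue ! v4l2sink device=/dev/video21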
I'm new to GStreamer and am stuck trying to form a GStreamer pipeline to encode MP4 video from TIFF files on the NVIDIA Jetson platform. Here is the pipeline I've come up with:
gst-launch-1.0 multifilesrc location=%03d.tiff index=0 start-index=0 stop-index=899 blocksize=720000 num-buffers=900 do-timestamp=true typefind=true ! 'video/x-raw,format=(string)RGB,width=(int)1280,height=(int)720,framerate=(fraction)30/1' ! videoconvert ! 'video/x-raw,format=(string)I420,framerate=(fraction)30/1' ! omxh264enc ! 'video/x-h264,stream-format=(string)byte-stream,framerate=(fraction)30/1' ! h264parse ! filesink sync=true location=test.mp4 -e
With this, the MP4 file gets created successfully and plays, but the actual video content is all garbled. Any idea what I am doing wrong? Thank you.
You are not doing any demux/decode of your TIFF data, so you throw random bytes at the encoder.
Also, you are setting a lot of caps without having the proper elements in between that could actually convert the formats.
You should use decodebin to let GStreamer handle most of this automatically, e.g. something like this:
multifilesrc ! decodebin ! videoconvert ! omxh264enc ! h264parse ! filesink
Depending on your encoder, you may want to force the color format to 4:2:0 so that it does not accidentally encode in 4:4:4 (which is not very common and not supported by many encoders):
multifilesrc ! decodebin ! videoconvert ! video/x-raw, format=I420 ! omxh264enc ! h264parse ! filesink
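Spelled out as a full command, that might look like the sketch below. Note two assumptions beyond the answer above: qtmux is added because h264parse ! filesink alone produces a raw H.264 byte stream rather than a playable MP4, and the multifilesrc properties are simply carried over, untested, from the original command:
gst-launch-1.0 multifilesrc location=%03d.tiff index=0 stop-index=899 do-timestamp=true ! decodebin ! videoconvert ! 'video/x-raw,format=I420' ! omxh264enc ! h264parse ! qtmux ! filesink location=test.mp4 -e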
I've looked through tons of threads on OpenCV and GStreamer and simply cannot resolve my error. I am trying to open a GStreamer pipeline in OpenCV. I have built OpenCV with GStreamer support, and the CMake step says YES for GStreamer, so OpenCV built successfully. The command to retrieve the stream works fine from the command line; in OpenCV, however, it just displays a frame and hangs.
My Syntax for Server:
gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw,format=I420,width=640,height=480,framerate=15/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP Address> port=5000
My Syntax in OpenCV for Client (C++):
Mat frame;
//create video capture from video camera
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! autovideosink ! appsink");
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
for(;;)
{
    cap >> frame;
    char c = (char)waitKey(1);
    //![display]
    imshow(window_name, frame);
    frame.release();
}
The error:
GStreamer Plugin: Embedded video playback halted; module autovideosink0-actual-sink-xvimage reported: Output window was closed
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in icvStartPipeline, file /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of 'cv::Exception'
  what(): /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline in function icvStartPipeline
Please provide any assistance; I've been through at least 20 Stack Overflow posts and I am no closer than when I started, with the exception of having GStreamer enabled in OpenCV. I even tried different versions of OpenCV.
Thanks
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
After a lot more digging through the GStreamer documentation today, I solved the issue. The addition of videoconvert fixed it: according to the GStreamer documentation, videoconvert automatically converts the data to a format appropriate for appsink, which allows it to be read correctly by OpenCV's VideoCapture.
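Putting the fix together, a minimal sketch of a corrected client loop (the window name, empty-frame check, and explicit cv::CAP_GSTREAMER are additions here; with OpenCV 3.0 the plain constructor from the question auto-detects the backend instead):
#include <opencv2/opencv.hpp>

int main() {
    // videoconvert (instead of autovideosink) hands appsink frames in a
    // format OpenCV can consume.
    cv::VideoCapture cap(
        "udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 "
        "! rtpjpegdepay ! jpegdec ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;        // stream ended or read failed
        cv::imshow("stream", frame);
        if (cv::waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}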