OpenCV VideoWriter with GStreamer backend produces low-quality video file - C++

I am trying to write frames (cv::Mat) into a video file using cv::VideoWriter. I need maximum speed (latency should be minimal), so I was trying to use GStreamer with x264 encoding. I wrote the following code:
std::string command = "appsrc ! videoconvert ! x264enc ! filesink location=output.mp4";
auto writer_ = cv::VideoWriter(command, cv::CAP_GSTREAMER, 0, frameRate_,
                               cv::Size(frameWidth_, frameHeight_), true);
// frameRate_ = 25
// frameWidth_ = 1920
// frameHeight_ = 1080
//...
// for example
// auto frame = cv::Mat(cv::Size(1920, 1080), CV_8UC3, 0);
writer_.write(frame);
Everything works, but the output video has very low quality (not in terms of resolution; that stays the same). The frames are pixelated. I have searched the Internet but could not find the reason why this is happening.
What can be the reason for this?
Suggestions on a faster video writing method (in OpenCV) are also appreciated!
Edit: I tried @Micka's suggestion and changed the bitrate (to >10000 to achieve the required quality), but the latency increased significantly. Is there a faster way to save videos without losing much quality?
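For what it's worth, x264enc defaults to a fairly low bitrate (2048 kbit/s, far too little for 1080p), and the pipeline above writes a raw H.264 byte stream into a file merely named .mp4; an actual MP4 container needs a muxer. A hedged sketch of a pipeline that addresses both (the bitrate of 8000 kbit/s is an assumption to tune for your content):

```
appsrc ! videoconvert ! x264enc tune=zerolatency speed-preset=ultrafast bitrate=8000 ! h264parse ! mp4mux ! filesink location=output.mp4
```

tune=zerolatency and speed-preset=ultrafast trade compression efficiency for encoding speed, which matches the low-latency requirement; the quality lost to the fast preset can be bought back with a higher bitrate.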

Related

Grey video frames when using OpenCV VideoCapture with GStreamer C++

Hey,
I am new to GStreamer and want to send video that is captured from a camera and manipulated with OpenCV over a network to a receiver, which reads it and displays it. This shall be done in real time. It basically works with the code/GStreamer settings below; however, as soon as a frame is dropped (at least I think this is the reason), the video gets corrupted in the form of grey parts (attached picture).
OpenCV Sending Part:
cv::VideoWriter videoTransmitter("appsrc ! videoconvert ! videoscale ! x264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.168.99 port=5000", cv::VideoWriter::fourcc('H', '2', '6', '4'), 10, videoTransmitter_imageSize, true);
OpenCV Receiving part:
cv::VideoCapture videoReceiver("udpsrc port=5000 ! application/x-rtp ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! videoconvert ! appsink", cv::CAP_GSTREAMER);
It basically works, but I often get grey parts in the video which then stay for a bit until the video is displayed correctly. I guess it always happens when a frame is dropped during transmission. How can I get rid of these grey/corrupted frames? Any hints? Any GStreamer parameters I need to set to tune the result? Is there a better way to stream video with OpenCV over a network?
Any help is appreciated!
No, there isn't any mechanism in GStreamer to detect corrupted frames, because that wouldn't make sense.
In most modern video codecs, frames aren't sent in full anymore but split into slices (each covering only a small part of the frame). It can take multiple intra packets (each containing multiple slices) to build a complete frame, and this is a good thing: it makes your stream more resilient to errors and allows multithreaded decoding of the slices, for example.
In order to achieve what you want, you have multiple solutions:
Use RTP/RTCP instead of RTP over UDP only. RTP at least contains a sequence number and "end of frame" markers, so it is possible to detect some packet drops. GStreamer doesn't care about those by default unless you have started an RTP/RTCP session. If you set up a session with RTCP, you can get reports when packets were dropped. I'm not sure there is a pipeline-level way to be informed when a packet is dropped, so you might still have to write an appsink in your GStreamer pipeline and add some code for detecting this event. However, this will tell you something is wrong, but not when it's OK to resume or how wrong it is. In GStreamer speak, this is called RTPSession, and you're interested in the stats::XXX_nack_count properties.
Add an additional protocol: compute a checksum of the encoder's output frame/NAL/packet and transmit it out of band. Make sure the decoder also computes the checksum of each incoming frame/NAL/packet; if they don't match, you'll know decoding will fail. Beware of packet/frame reordering (typically B-frames will be reordered after their dependencies), which could disturb your algorithm. Again, you have no way to know when to resume after an error. Using TCP instead of UDP might be enough if you only have partial packet drops, but it'll fail to recover if it's a bandwidth issue (if video bandwidth > network bandwidth, it'll collapse, since TCP can't drop packets to adapt).
Use an intra-only video codec (like APNG or JPEG). JPEG can also be partially decoded, but GStreamer's default software JPEG decoder doesn't output a partial JPEG frame.
Set a closed and shorter GOP in your encoder. Many encoders have a pseudo "gop" (group of pictures) parameter; count the frames in your decoder when decoding after an error. A GOP ensures that whatever the state of the encoding, after GOP frames the encoder will emit a non-dependent group of frames (enough intra frames/slices to rebuild the complete frame). This allows resuming after an error by dropping GOP - 1 frames (you must decode them, but you can't use them; they might be corrupted); you'll still need a way to detect the error, see points 1 and 2 above. For x264enc the parameter is called key-int-max. You might also want to try intra-refresh=true so the broken-frame effect after an error is shorter. The downside is an increase in bandwidth for the same video quality.
Use a video codec with scalable video coding (SVC instead of AVC, for example). In that case, on a decoding error you'll get lower quality instead of a corrupted frame. There isn't any free SVC encoder in GStreamer that I'm aware of.
Deal with it. Compute a saturation map of the picture with OpenCV and compute its standard deviation and mean. If they are very different from the previous picture's, stop computation until the GOP has elapsed and the saturation is back to expected levels.
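Combining points 1 and 4 above, a sketch of a sender pipeline with a short closed GOP and intra-refresh (the key-int-max value of 30, i.e. one recovery point per second at 30 fps, is an assumption to adapt to your frame rate):

```
appsrc ! videoconvert ! x264enc tune=zerolatency key-int-max=30 intra-refresh=true ! rtph264pay config-interval=1 pt=96 ! udpsink host=192.168.168.99 port=5000
```

With this, a corrupted region should repair itself within roughly one GOP instead of persisting until the next arbitrary keyframe.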

Setting bitrate of video in FFmpeg

I use FFmpeg to record videos from an RTSP stream (the codec is H.264). It works, but I face a problem with the bitrate value. First, I set the bitrate like below, but it doesn't work:
AVCodecContext *m_c;
m_c->bit_rate = bitrate_value;
Following this question I can set bitrate manually with this command:
av_opt_set(m_c->priv_data, "crf", "39", AV_OPT_SEARCH_CHILDREN);
But I had to test several times to choose the value '39', which creates acceptable video quality. It's hard to do that again if I use another camera setting (image width, height, etc.). Is there a way to set the bitrate more easily, and adaptively?
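One rough, adaptive alternative to hand-tuning CRF is to derive a target bitrate from resolution and frame rate with a bits-per-pixel factor. The helper below is hypothetical (not part of FFmpeg); around 0.1 bpp is a common starting point for H.264, to be raised for high-motion content:

```cpp
#include <cstdint>

// Hypothetical helper: estimate a target bitrate (bit/s) from resolution,
// frame rate and a bits-per-pixel factor. 0.1 bpp is a rough H.264 baseline.
int64_t estimate_bitrate(int width, int height, double fps, double bpp = 0.1)
{
    return static_cast<int64_t>(width * height * fps * bpp);
}
// 1920x1080 @ 25 fps, 0.1 bpp -> 5,184,000 bit/s (about 5.2 Mbit/s)
```

The result could then be assigned to m_c->bit_rate before the codec is opened, and it scales automatically when the camera settings change.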

Reading vector<Mat> or Video? (OpenCV & C++)

I'm currently on a project where I have several pictures taken with a camera.
My goal is to make a video out of those pictures.
The problem is that the pictures are not continuous (some pictures are missing in between).
So when I try to use the VideoWriter functions to create a video, the result is really messy and plays much too fast.
So I had an idea: create an equivalent of a video reader, but one that reads a vector instead of a video; the display speed would depend on a cooldown between every picture in my vector.
I would like to know your opinion about my solution, and what would be your solution?
Thank you.
Reduce the FPS in the VideoWriter object,
VideoWriter video(videoname, CV_FOURCC('M','J','P','G'), FPS, Size, true);
Try FPS = 5 or even less; this might work.
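If you go with the vector-reader idea instead, a minimal sketch (assuming the frames are already loaded into a std::vector<cv::Mat> and an OpenCV build with highgui; function and window names are illustrative):

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Show each picture with a fixed cooldown derived from the desired playback
// rate, instead of writing a video file.
void playFrames(const std::vector<cv::Mat>& frames, double fps)
{
    const int delayMs = static_cast<int>(1000.0 / fps);  // cooldown per picture
    for (const cv::Mat& frame : frames)
    {
        cv::imshow("playback", frame);
        if (cv::waitKey(delayMs) == 27)  // Esc aborts playback
            break;
    }
}
```

This sidesteps the "speedy video" problem entirely, since the pacing no longer depends on how many source pictures are missing.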

Synchronizing FFMPEG video frames using PTS

I'm attempting to synchronize the frames decoded from an MP4 video. I'm using the FFmpeg libraries. I've decoded and stored each frame and successfully displayed the video on an OpenGL plane.
I've started a timer just before cycling through the frames; the aim being to synchronize the Video correctly. I then compare the PTS of each frame against this timer. I stored the PTS received from the packet during decoding.
What is displayed within my application does not seem to play at the rate I expect. It plays faster than the original video file would within a media player.
I am inexperienced with FFMPEG and programming video in general. Am I tackling this the wrong way?
Here is an example of what I'm attempting to do
FrameObject frameObject = frameQueue.front();
AVFrame frame = *frameObject.pFrame;

videoClock += dt;
if (videoClock >= globalPTS)
{
    // Draw the frame to a texture
    DrawFrame(&frame, frameObject.m_pts);
    frameQueue.pop_front();
    globalPTS = frameObject.m_pts;
}
Please note I'm using C++, Windows, OpenGL, FFmpeg and the VS2010 IDE.
First off, use int64_t pts = av_frame_get_best_effort_timestamp(pFrame) to get the pts. Second, you must make sure both streams you are syncing use the same time base. The easiest way to do this is to convert everything to AV_TIME_BASE_Q: pts = av_rescale_q(pts, formatCtx->streams[videoStream]->time_base, AV_TIME_BASE_Q); In this format, pts is in microseconds (AV_TIME_BASE is 1,000,000).
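Numerically, that rescale is just pts · tb_num · 1,000,000 / tb_den. A small self-contained sketch of the arithmetic (ignoring av_rescale_q's configurable rounding modes):

```cpp
#include <cstdint>

// Convert a pts from a stream time base (tb_num/tb_den seconds per tick)
// into AV_TIME_BASE_Q units, i.e. microseconds.
int64_t pts_to_microseconds(int64_t pts, int64_t tb_num, int64_t tb_den)
{
    return pts * tb_num * 1000000 / tb_den;
}
// Typical 90 kHz video stream (time_base = 1/90000):
// pts 90000 -> 1,000,000 us = exactly 1 second
```

Comparing these microsecond values against a microsecond wall clock (rather than raw ticks against a seconds-based timer) is usually what fixes too-fast playback.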

Setting a GStreamer capsfilter with OpenCV in C++

With the Microsoft LifeCam Cinema (on Ubuntu) in guvcview I get 30 fps at 1280x720. In my OpenCV program, I only get 10 fps (only queryframe and showimage, no image processing is done). I found out that it is a problem in GStreamer. A solution is to set a capsfilter in GStreamer; in a terminal I can do it like this:
gst-launch v4l2src device=/dev/video0 ! 'video/x-raw-yuv,width=1280,height=720,framerate=30/1' ! xvimagesink
This works! The question is:
How do I implement this in my c++/OpenCV program?
Or is it possible to set gstreamer to always use this capsfilter?
I already found this question Option 3, but I can't get it working with a webcam.
Unfortunately, there is no way to set the format (YUV) of the frames retrieved from the camera, but for the rest of the settings you could try using cvSetCaptureProperty():
cvSetCaptureProperty(capture, CV_CAP_PROP_FPS, 30);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1280);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 720);
If setting the frame size doesn't work, I strongly suggest you read this post: Increasing camera capture resolution in OpenCV
My bad, I was setting my webcam to 1280x800, which forces it to use YUYV with at most 10 fps. Setting it back to 1280x720 in my program gave me 30 fps.
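For the original question: with a GStreamer-enabled OpenCV build you can also embed the capsfilter directly in a pipeline string passed to cv::VideoCapture. A sketch assuming GStreamer 1.x element and caps names (the YUY2 format name is an assumption; adjust for your camera):

```cpp
#include <opencv2/opencv.hpp>

// The caps force 1280x720 @ 30 fps from the camera, mirroring the
// gst-launch command above.
cv::VideoCapture cap(
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=1280,height=720,framerate=30/1 ! "
    "videoconvert ! appsink",
    cv::CAP_GSTREAMER);
```

This makes the capsfilter part of the program itself, so there is no need to rely on cvSetCaptureProperty() honoring the requested frame size and rate.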