I am simply trying to open a video with OpenCV, process its frames, and write the processed frames to a new video file.
My problem is that even if I don't process the frames at all (just opening a video, reading frames with VideoCapture, and writing them with VideoWriter to a new file), the output file appears more "green" than the input.
The code for this can be found in any OpenCV tutorial, nothing special.
I use OpenCV C++ 4.4.0 on Windows 10.
I use OpenCV with FFmpeg through opencv_videoio_ffmpeg440_64.dll.
The input video is an MP4.
I write the output as an .avi with the HuffYUV codec:
m_video_writer.reset(new cv::VideoWriter(m_save_video_path.toStdString(),
                                         cv::VideoWriter::fourcc('H', 'F', 'Y', 'U'), // lossless compression
                                         m_model->getFps(),
                                         cv::Size(m_frame_size.width(), m_frame_size.height())));
I tried many other codecs and the problem remains.
The difference in pixels is small and not constant in value, but it always varies the same way: the blue channel is lower, red and green are higher.
Strange fact: when I open both the input and the output video with OpenCV, the matrices are actually exactly the same. So I guess the problem is in the reading??
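Here is roughly how I compared the decoded frames (a minimal sketch, assuming both files open successfully and have the same resolution and frame count):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture a("original.MP4"), b("output.avi"); // placeholder paths
    cv::Mat frame_a, frame_b, diff;
    while (a.read(frame_a) && b.read(frame_b))
    {
        cv::absdiff(frame_a, frame_b, diff);
        std::cout << cv::mean(diff) << std::endl; // per-channel mean difference (B, G, R); all zeros here
    }
    return 0;
}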
Here are the properties of each video file, as exported with Media Player Classic (MPC-HC):
[input file properties] vs [output file properties]
What should I investigate?
Thanks!
Full code here (copying the first 100 frames of my video):
#include <opencv2/opencv.hpp>

using namespace cv;

int main()
{
    VideoCapture original("C:/Users/axelle/Videos/original.MP4");

    int frame_height = (int)original.get(CAP_PROP_FRAME_HEIGHT);
    int frame_width = (int)original.get(CAP_PROP_FRAME_WIDTH);
    double fps = original.get(CAP_PROP_FPS); // get() returns a double; storing it as int would truncate e.g. 29.97 to 29

    VideoWriter output("C:/Users/axelle/Videos/output.avi", VideoWriter::fourcc('H', 'F', 'Y', 'U'),
                       fps, cv::Size(frame_width, frame_height));

    int count = 0;
    while (count < 100)
    {
        count++;

        Mat frame;
        original >> frame;
        if (frame.empty())
        {
            break;
        }

        //imshow("test", frame);
        //waitKey(0);

        output.write(frame);
    }

    original.release();
    output.release();
    return 0;
}
Note: the difference in colors can already be seen in the imshow.
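A quick way to confirm which backend is actually decoding the file (a small sketch; VideoCapture::getBackendName() is available in OpenCV 4.x):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture original("C:/Users/axelle/Videos/original.MP4");
    // Prints "FFMPEG" if opencv_videoio_ffmpeg440_64.dll is the active backend
    std::cout << original.getBackendName() << std::endl;
    return 0;
}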
There is a bug in OpenCV's VideoCapture when reading video frames using the FFmpeg backend.
The bug results in a "color shift" when the H.264 video stream is marked as using the BT.709 color standard.
The subject is too important to leave unanswered...
The important part of this post is reproducing the problem and proving that it is real.
The solution I found is selecting the GStreamer backend instead of the FFmpeg backend.
The suggested solution has downsides (like the need to build OpenCV with GStreamer support).
Note:
The problem is reproducible using OpenCV 4.5.3 under Windows 10.
The problem is also reproducible under Ubuntu 18.04 (using OpenCV in Python).
The issue applies to both the "full range" and the "limited range" variants of the BT.709 color standard.
Building a synthetic video pattern for reproducing the problem:
We can use the FFmpeg command-line tool to create a synthetic video to be used as input.
The following command generates an MP4 video file with the H.264 codec and the BT.709 color standard:
ffmpeg -y -f lavfi -src_range 1 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -i testsrc=size=192x108:rate=1:duration=5 -vcodec libx264 -crf 17 -pix_fmt yuv444p -dst_range 1 -color_primaries bt709 -color_trc bt709 -colorspace bt709 -bsf:v h264_metadata=video_full_range_flag=1:colour_primaries=1:transfer_characteristics=1:matrix_coefficients=1 bt709_full_range.mp4
The above command uses the yuv444p pixel format (instead of yuv420p) to get purer colors.
The arguments -bsf:v h264_metadata=video_full_range_flag=1:colour_primaries=1:transfer_characteristics=1:matrix_coefficients=1 apply a bitstream filter that marks the H.264 stream as "full range" BT.709.
Using the MediaInfo tool, we can view the following color characteristics:
colour_range: Full
colour_primaries: BT.709
transfer_characteristics: BT.709
matrix_coefficients: BT.709
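If MediaInfo is not at hand, ffprobe should report the same fields (a hedged alternative; these stream entry names are valid for recent FFmpeg builds as far as I know):

ffprobe -v error -select_streams v:0 -show_entries stream=color_range,color_primaries,color_transfer,color_space -of default=noprint_wrappers=1 bt709_full_range.mp4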
Capturing the video using OpenCV:
The following C++ code grabs the first frame and saves it to the image file 1.png:
#include "opencv2/opencv.hpp"
void main()
{
cv::VideoCapture cap("bt709_full_range.mp4");
cv::Mat frame;
cap >> frame;
cv::imwrite("1.png", frame);
cap.release();
}
We may also use the following Python code:
import cv2
cap = cv2.VideoCapture('bt709_full_range.mp4')
_, frame = cap.read()
cv2.imwrite('1.png', frame)
cap.release()
Converting bt709_full_range.mp4 into an image sequence using FFmpeg:
ffmpeg -i bt709_full_range.mp4 -pix_fmt rgb24 %03d.png
The file name of the first "extracted" frame is 001.png.
Comparing the results:
The left side is 1.png (result of OpenCV)
The right side is 001.png (result of FFmpeg command line tool)
As you can see, the colors are different.
The value of the red pixels from OpenCV is RGB = [232, 0, 3].
The value of the red pixels from FFmpeg is RGB = [254, 0, 0].
The original RGB value is probably [255, 0, 0] (the value is 254 due to color conversion).
As you can see, the OpenCV colors are wrong!
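The pixel values above can be reproduced with a short check (a sketch; the sampled coordinate is assumed to land inside the red patch of testsrc):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat ocv = cv::imread("1.png");   // frame grabbed by OpenCV
    cv::Mat ff  = cv::imread("001.png"); // frame extracted by FFmpeg
    cv::Point p(10, 10);                 // assumed to be inside the red patch
    // Note: imread returns pixels in BGR order
    std::cout << "OpenCV: " << ocv.at<cv::Vec3b>(p)
              << "  FFmpeg: " << ff.at<cv::Vec3b>(p) << std::endl;
    return 0;
}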
Solution - selecting GStreamer backend instead of FFmpeg backend:
The default OpenCV release excludes GStreamer support (at least on Windows).
You may use the following instructions for building OpenCV with GStreamer.
Here is a C++ code sample that uses GStreamer backend for grabbing the first frame:
int main()
{
    cv::VideoCapture cap("filesrc location=bt709_full_range.mp4 ! decodebin ! videoconvert ! appsink", cv::CAP_GSTREAMER);
    cv::Mat frame;
    cap >> frame;
    cv::imwrite("1g.png", frame);
    cap.release();
    return 0;
}
Result:
The left side is 1g.png (result of OpenCV using GStreamer)
The right side is 001.png (result of FFmpeg command line tool)
The value of the red pixels from OpenCV using GStreamer is RGB = [254, 0, 1] (blue is 1 and not zero due to color conversion).
Conclusions:
Using the GStreamer backend (instead of the FFmpeg backend) seems to solve the "color shifting" problem.
OpenCV users need to be aware of the color shifting problem.
Let's hope that OpenCV developers (or FFmpeg plugin developers) fix the problem.
Background
I have a .webm file (pix_fmt: yuva420p) converted from a .mov video file in order to reduce the file size, and I would like to read the video data in C++, so I used this repo as a reference.
This works perfectly on the .mov video.
Problem
Using the same repo, however, there is no alpha channel data (pure zeros on that channel) for the .webm video, while I can get the alpha data from the .mov video.
Apparently many people have already noticed that after the video conversion, ffmpeg somehow detects the video as yuv420p + alpha_mode: 1, and thus the alpha channel is not used, but there is no discussion of a workaround for this.
I tried forcing the pixel format in this part to yuva420p, but that just broke the whole program.
// Set up sws scaler
if (!sws_scaler_ctx) {
    auto source_pix_fmt = correct_for_deprecated_pixel_format(av_codec_ctx->pix_fmt);
    sws_scaler_ctx = sws_getContext(width, height, source_pix_fmt,
                                    width, height, AV_PIX_FMT_RGB0,
                                    SWS_BILINEAR, NULL, NULL, NULL);
}
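For what it's worth, a variant that keeps the alpha plane would look roughly like this (a sketch only: AV_PIX_FMT_RGB0 explicitly discards alpha, and requesting RGBA only helps once the decoder actually reports yuva420p):

// Sketch: convert to RGBA instead of RGB0 so the alpha plane survives.
// Assumes av_codec_ctx->pix_fmt is really AV_PIX_FMT_YUVA420P at this point.
sws_scaler_ctx = sws_getContext(width, height, AV_PIX_FMT_YUVA420P,
                                width, height, AV_PIX_FMT_RGBA,
                                SWS_BILINEAR, NULL, NULL, NULL);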
I also verified with another tool that my video contains an alpha channel, so I am sure it is there, but I cannot fetch the data using ffmpeg.
Is there a way to fetch the alpha data? Other video formats or other libraries would work as well, as long as there is some file compression, but I need to access the data in C++.
Note: this is the command I used for converting the video to webm:
ffmpeg -i input.mov -c:v libvpx-vp9 -pix_fmt yuva420p output.webm
You have to force the decoder.
Set the following before avformat_open_input()
AVCodec *vcodec;
vcodec = avcodec_find_decoder_by_name("libvpx-vp9");
av_fmt_ctx->video_codec = vcodec;
av_fmt_ctx->video_codec_id = vcodec->id;
You don't need to set pixel format or any scaler args.
This assumes that your libavcodec is linked with libvpx.
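A slightly fuller sketch of where those lines go (hedged: the field constness varies across FFmpeg versions; since FFmpeg 5.0, avcodec_find_decoder_by_name() returns const AVCodec *):

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

static AVFormatContext *open_with_vp9_alpha(const char *path)
{
    AVFormatContext *fmt_ctx = avformat_alloc_context();

    // Force the libvpx VP9 decoder before avformat_open_input(), so the
    // alpha plane is decoded (the native vp9 decoder does not output it)
    const AVCodec *vcodec = avcodec_find_decoder_by_name("libvpx-vp9");
    fmt_ctx->video_codec = vcodec;
    fmt_ctx->video_codec_id = vcodec->id;

    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0)
        return NULL;
    return fmt_ctx;
}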
I'm trying to show a video with transparency in a Qt6 application using OpenCV + FFMPEG.
These are the tool versions:
Win 11
Qt 6.3.0
OpenCV 4.5.5 (built with CMake)
FFMPEG 2022-04-03-git-1291568c98-full_build-www.gyan.dev
I've used a base .mov video with transparency as a test (link provided below).
First of all, I converted the .mov video to a .webm video (VP9), and I see in the output text that the alpha channel remains:
ffmpeg -i '.\Retro Bars.mov' -c:v libvpx-vp9 -crf 30 -b:v 0 output.webm
Input #0, mov,mp4,m4a,3gp,3g2,mj2,
...
Stream #0:0[0x1](eng): Video: qtrle (rle / 0x20656C72), argb(progressive),
...
Output #0, webm,
...
Stream #0:0(eng): Video: vp9, yuva420p(tv, progressive),
...
But when I show the info of the output file with ffmpeg, it has lost the alpha channel:
ffmpeg -i .\output.webm
Input #0, matroska,webm,
...
Stream #0:0(eng): Video: vp9 (Profile 0), yuv420p(tv, progressive),
...
If I open output.webm with OBS, it is shown correctly without a background, as shown in the picture:
If I try to open it with OpenCV + FFMPEG, it shows a black background under the bars, as shown in the picture:
This is how I load video in Qt:
cv::VideoCapture capture;
capture.open(filename, cv::CAP_FFMPEG);
capture.set(cv::CAP_PROP_CONVERT_RGB, false); // try forcing load alpha channel
... //in a thread
while (capture.read(frame)) {
    qDebug() << "c" << frame.channels() << "t" << frame.type() << "d" << frame.depth(); // output: c 3 t 16 d 0
    cv::cvtColor(frame, frame, cv::COLOR_BGR2RGBA); // useless since no alpha channel is detected
    img = QImage(frame.data, frame.cols, frame.rows, QImage::Format_RGBA8888);
    emit processedImage(img); // to show the image in a QLabel with QPixmap::fromImage(img)
}
I think the problem is that when I load the video with OpenCV, it doesn't detect the alpha channel, since I can load it correctly in other players (OBS, HTML5, etc.).
What am I doing wrong in this process of showing the video in Qt with transparency?
EDIT: Added dropbox link with test video + ffmpeg outputs:
sample items
So. I have been trying to get my Raspberry Pi 2 to capture an H264 stream with OpenCV from my Logitech C920 for quite some time now. I have been scavenging the internet for info, but with no luck.
A short system description:
Raspberry Pi 2, running Raspbian, Kernel 3.18
Logitech HD Pro Webcam c920
OpenCV 2.4.11
boneCV - Credits to Derek Molloy (https://github.com/derekmolloy/boneCV)
libx264 and FFMPEG (built with x264 support)
libv4l-dev, v4l-utils, qv4l2, v4l2ucp
I know OpenCV forces the format to BGR24 (MJPG). This is specified in cap_libv4l.cpp; it looks like this (line 692 onward):
/* libv4l will convert from any format to V4L2_PIX_FMT_BGR24 */
CLEAR (capture->form);
capture->form.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
capture->form.fmt.pix.pixelformat = V4L2_PIX_FMT_BGR24;
capture->form.fmt.pix.field = V4L2_FIELD_ANY;
capture->form.fmt.pix.width = capture->width;
capture->form.fmt.pix.height = capture->height;
I can set the pixel format manually with v4l2-ctl --set-fmt-video:
pi#raspberrypi ~/boneCV$ v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=H264
pi#raspberrypi ~/boneCV$ v4l2-ctl --get-fmt-video
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'H264'
Field : None
Bytes per Line: 3840
Size Image : 4147200
Colorspace : SRGB
And if I now run ./boneCV, a very simple capture program that captures a picture and does Canny edge detection (I'll add the code at the end), I get this:
pi#raspberrypi ~/boneCV$ ./boneCV
pi#raspberrypi ~/boneCV$ v4l2-ctl --get-fmt-video
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'MJPG'
Field : None
Bytes per Line: 0
Size Image : 4147200
Colorspace : SRGB
As you can see, the "Pixel Format" and the "Bytes per Line" change. The "Field" stays at None and the "Colorspace" stays at SRGB.
Then I tried to replace every "V4L2_PIX_FMT_BGR24" with "V4L2_PIX_FMT_H264" in cap_libv4l.cpp and rebuilt OpenCV. When I then ran ./boneCV, my two .png images were only black with one or two stripes of white color.
To find out whether it is libv4l or OpenCV, I ran the ./capture script that comes with Derek Molloy's boneCV. It uses libv4l directly and captures an H264 video stream with no problems at all. I then have to use ./raw2mpg4 to be able to watch it. The .mp4 file is 1920x1080 at 30 fps with no glitches. After this I checked "v4l2-ctl --get-fmt-video" again and got this:
pi#raspberrypi ~/boneCV$ v4l2-ctl --get-fmt-video
Format Video Capture:
Width/Height : 1920/1080
Pixel Format : 'H264'
Field : None
Bytes per Line: 3840
Size Image : 4147200
Colorspace : SRGB
Exactly the same as when I set everything manually.
I have come to the conclusion that if I want OpenCV to capture raw H264 streams, I'll have to change cap_libv4l.cpp, but I have no idea how. I think it may be because of the difference in bits per frame and/or colorspace.
Does anybody know how to do this, or how to make a workaround so that I can still use OpenCV's VideoCapture function?
I know a lot of Raspberry Pi and BeagleBone Black users would be ever so grateful if there were a solution to this problem.
I have tried to cover everything that I think is relevant; if there is anything more I could provide to paint the picture better, please say so.
Here are some links to the mentioned scripts and programs:
(Edit: I tried to post the links to each of the programs, but I didn't have enough reputation. Go to Derek Molloy's GitHub page and you'll find boneCV there.)
And no, I cannot use CV_FOURCC('H','2','6','4'); because this function is not implemented for Linux yet.
I'm trying to capture the raw data of the Logitech Pro 9000 (i.e. the so-called Bayer pattern). This can be achieved by using the so-called Bayer application that can be found floating around the internet. It should return an 8-bit Bayer pattern, but the results are quite obviously not such a pattern.
However, the image that is being streamed seems to be quite off. As can be seen in the image below, I get 2 images of the scene in a 3-channel image (meaning 6 channels in total). Each image is 1/4th of the total capture area, so it would seem that some kind of YUV data is being streamed.
I was unable to convert this data into anything meaningful using the conversions provided by OpenCV. Any ideas what kind of data is being sent and (more importantly) how to convert it into RGB?
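Just to illustrate the goal: if the stream really were an 8-bit Bayer mosaic, the demosaic itself would be a single cvtColor call in OpenCV (a sketch; the file name is hypothetical and the BayerBG layout is a guess that may need to be one of the other Bayer variants):

#include <opencv2/opencv.hpp>

int main()
{
    // Hypothetical single-channel 8-bit dump of one raw frame
    cv::Mat raw = cv::imread("raw_frame.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bgr;
    cv::cvtColor(raw, bgr, cv::COLOR_BayerBG2BGR); // pattern depends on the sensor layout
    cv::imwrite("demosaiced.png", bgr);
    return 0;
}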
EDIT
As requested, the code snippet that is used to generate the image:
system("Bayer.exe 1 8"); //Sets the camera to raw mode
// set up camera
VideoCapture capture(0);
if(!capture.isOpened()){
waitKey();
exit(0);
}
Mat capturedFrame;
while(true){
capture>>capturedFrame;
imshow("Raw",capturedFrame);
waitKey(25);
}
How did you get frames from the stream using OpenCV? Can you share some code snippets? OpenCV handles many video formats, and getting the correct color channels and compressed data depends on which one you use.
I think you should be able to obtain correct image frames as mentioned here:
http://forum.openrobotino.org/archive/index.php/t-295.html?s=c33acb1fb91f5916080f8dfd687598ec
This is most likely to happen if the output data format (width, height, bit depth, number of channels...) of the camera differs from the data format your program expects.
However, I could capture from the Logitech Pro cam simply by using:
Mat img;
VideoCapture cap(0);
cap >> img;
I want to read and show a video using OpenCV. I recorded it with DirectShow, and the video has the UYVY (4:2:2) codec. Since OpenCV can't read that format, I want to convert the codec to an RGB color model. I read about FFmpeg and want to know whether it's possible to get this done with it; if not, I'd be thankful for any suggestion.
As I explained to you before, OpenCV can read some formats of YUV, including UYVY (thanks to FFmpeg/GStreamer). So I believe the cv::Mat you get from the camera is already converted to the BGR color space, which is what OpenCV uses by default.
I modified my previous program to store the first frame of the video as PNG:
cv::Mat frame;
if (!cap.read(frame))
{
    return -1;
}
cv::imwrite("mat.png", frame);

for (;;)
{
    // ...
And the image is perfect. Executing the file command on mat.png reveals:
mat.png: PNG image data, 1920 x 1080, 8-bit/color RGB, non-interlaced
A more accurate test would be to dump the entire frame.data() to disk and open it with an image editor. If you do that, keep in mind that the R and B channels will be switched.
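Such a dump could be done like this (a minimal sketch; the output file name is arbitrary):

#include <opencv2/opencv.hpp>
#include <fstream>

int main()
{
    cv::VideoCapture cap(0);
    cv::Mat frame;
    if (!cap.read(frame))
        return -1;

    // Raw interleaved bytes, in OpenCV's BGR channel order
    std::ofstream out("frame.raw", std::ios::binary);
    out.write(reinterpret_cast<const char *>(frame.data),
              static_cast<std::streamsize>(frame.total() * frame.elemSize()));
    return 0;
}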