OpenCV has different RGB values from Matlab? - c++

I am trying to port a Matlab project to C++. However, when I read an mp4 video frame by frame, the RGB values of each pixel are very different from Matlab's. Does that mean OpenCV uses a different RGB representation? If so, how can I convert the OpenCV values to match Matlab? Otherwise I cannot verify that my implementation is correct by comparing values.
For example:
I am trying to check the point(0,0) values in OpenCV and Matlab. OpenCV gives the following result: blue=106 green=105 red=102
However, in Matlab, the result is: blue=85 green=86 red=83
I read the RGB value at point (0,0), which is point (1,1) in Matlab, every 200 frames.
The C++ code to get the RGB value in OpenCV is:
Mat img;
int number = 0;
VideoCapture cap(filename_input_video);
if (!cap.isOpened()) {
    printf("No video to Read!\n");
    return -1;
}
for ( ; ; ) {
    cap >> img;
    if (img.empty())
        break;
    number++;
    for (int i = 0; i < img.rows; i++) {
        for (int j = 0; j < img.cols; j++) {
            int blue  = img.at<Vec3b>(i, j)[0];
            int green = img.at<Vec3b>(i, j)[1];
            int red   = img.at<Vec3b>(i, j)[2];
            if (number == 200 && i == 0 && j == 0) {
                printf("blue=%d green=%d red=%d\n", blue, green, red);
            }
        }
    }
    if (number == 200) {
        number = 0;
    }
}
The Matlab code is:
OBJ = VideoReader(filename_source);
fBlock = 200;
nFrame = get(OBJ, 'NumberOfFrames');
nBlock = ceil(nFrame / fBlock);
for iBlock = 1:nBlock
    display(['Processing video 1 block #' num2str(iBlock) '...']);
    start_index = (iBlock-1)*fBlock+1;
    end_index = min(iBlock*fBlock, nFrame);
    vSource = read(OBJ, [start_index end_index]);
    display(['red ' num2str(vSource(1,1,1,200))]);
    display(['green ' num2str(vSource(1,1,2,200))]);
    display(['blue ' num2str(vSource(1,1,3,200))]);
end
How should I fix this problem?

To verify the difference, you should compare the RGB values of a single still image read from disk. If the values are identical there, your code is probably fine and the difference comes from video decoding.
What is probably happening: if you read a frame captured from a video, there can be a difference because the video decoder may be different for OpenCV (the default is ffmpeg) and MATLAB. Different decoders can handle some events/errors differently, and there is no guarantee of identical decoding.
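For example, a minimal comparison sketch (frame.png below is just a placeholder for a still image extracted from the video, e.g. with ffmpeg):

cv::Mat img = cv::imread("frame.png");     // read the same still image in both tools
cv::Vec3b px = img.at<cv::Vec3b>(0, 0);    // OpenCV stores channels in B, G, R order
printf("blue=%d green=%d red=%d\n", px[0], px[1], px[2]);
// MATLAB equivalent for reference: img = imread('frame.png'); img(1,1,:)

If the still image gives identical values in both tools, the mismatch on the video frames almost certainly comes from the decoders.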
Suggested Solution :
1) Same decoder - If you need both tools to produce identical results, use the same decoder for both: either change the decoder for OpenCV or for MATLAB. If you google, you will find a few articles on how to do this. This and this can be helpful.
2) Same video - Use any decoder (I prefer ffmpeg) to convert the video to a raw format first. Then you can use it in both tools without fear of a diff ;). Here is a command to get raw video from a compressed file:
`c:/> ffmpeg -i compressed_or_original_video.avi -vcodec rawvideo raw_converted_video.avi`

No, it does not! You see different results because C++ array indexing starts from zero while Matlab/Octave indexing starts from 1.

Related

the pixel values changed while using imwrite to jpg files in c++ [duplicate]

I'm writing code like this in C++:
I want to end up with a 100% identical copy of test1.jpg.
Unfortunately, I find that lots of pixel values change after cv::imwrite.
int main()
{
    cv::Mat img1 = cv::imread("./test1.jpg");
    cv::imwrite("test2.jpg", img1);
    cv::Mat img2 = cv::imread("./test2.jpg");
    int count = 0;
    for (int i = 0; i < 250; i++) {
        for (int j = 0; j < 250; j++) {
            if (img1.at<uchar>(i, j) != img2.at<uchar>(i, j)) {
                count++;
            }
        }
    }
    std::cout << count << std::endl;
    return 0;
}
I use count in this program to see how many differences there are between the two images. Although both images (test1.jpg and test2.jpg) have the same file size of 46 kB, count is as high as 16768!
Is there any way to avoid the pixel changes? I'm only going to use jpg files in the program.
Thanks a lot!
If you want lossless compression, you can't use jpgs and have to use .png (there's .bmp as well, but it's uncompressed):
import cv2 as cv
import numpy as np

jpg = cv.imread("../resources/fisheye/1_1.jpg")
cv.imwrite("1_1.png", jpg)
png = cv.imread("1_1.png")
np.sum(np.where(jpg != png, 1, 0))  # number of differing values between the two images
Output: 0
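Since the question is in C++, here is a rough, untested C++ equivalent of the same round-trip check (file names are just placeholders):

cv::Mat img1 = cv::imread("./test1.jpg");
cv::imwrite("test1_copy.png", img1);              // PNG is lossless for 8-bit images
cv::Mat img2 = cv::imread("./test1_copy.png");
cv::Mat diff;
cv::absdiff(img1, img2, diff);
// reshape(1) flattens the 3 channels so countNonZero accepts it
std::cout << cv::countNonZero(diff.reshape(1)) << std::endl;   // should print 0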

OpenCV masking/split function

I have a question for you. I'm a newbie to OpenCV and I need to understand whether the library can help me reach my goal.
I need to use OpenCV to open a (big) TIFF file and split it into two different files using a black-and-white mask like the one linked (Mask): in the end, file 1 should keep the pixels of the original image where the mask is black, and file 2 should keep the negative, i.e. the pixels where the mask is white.
Any ideas or examples for me?
Thank you all!
To read the file, you can use the function imread. This stores it in a cv::Mat object. Since your mask is black and white, I would read the mask image as grayscale using IMREAD_GRAYSCALE. This gives you each pixel as a value from 0-255. That should cover the first part of your question.
I have to admit I am having trouble understanding your question, but I expect you want to create two images: the first contains all the pixels where your mask has a black pixel, and the second contains all the pixels where the mask is white.
You could look at this thread. Additionally, I would like to show you the way I would do it.
The problem you would run into is that your .tiff image has a different type than your chessboard (mask). The TIFF is probably CV_8UC3 and the chessboard is probably CV_8UC1, but this should be easily solvable.
I think you would want to look at each individual pixel and leave it be if, at the same position, your chessboard is white. If it is not, make that pixel in your original image black. I have not tested this, but it would look something like this:
for (int i = 0; i < originalImage.rows; i++) {
    for (int j = 0; j < originalImage.cols; j++) {
        if (chessboard.at<uchar>(Point(j, i)) != 255) {
            originalImage.at<Vec3b>(Point(j, i)) = Scalar(0, 0, 0);
        }
        else {
            // Do nothing.
        }
    }
}
Scalar is used, since the originalImage has three channels instead of one. I hope this helps!
Try this to create the masks:
cv::Mat tiff;                         // the grayscale mask image loaded earlier
cv::Mat maskDark  = tiff == 0;        // comparison like '< 10' also works
cv::Mat maskLight = tiff == 255;
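Putting the two answers together, an untested sketch of the whole split (the file names and the 0/255 mask values are assumptions about your data):

cv::Mat original = cv::imread("big.tif");                          // the big TIFF
cv::Mat mask     = cv::imread("mask.tif", cv::IMREAD_GRAYSCALE);   // black-and-white mask

cv::Mat file1 = cv::Mat::zeros(original.size(), original.type());
cv::Mat file2 = cv::Mat::zeros(original.size(), original.type());
original.copyTo(file1, mask == 0);     // keep pixels where the mask is black
original.copyTo(file2, mask == 255);   // keep pixels where the mask is white
cv::imwrite("file1.tif", file1);
cv::imwrite("file2.tif", file2);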

Image is multiplied three times in OpenCV, what causes this?

I have one grayscale image, which is just the R channel of a photo. Now I'm trying to write that R channel into a new RGB image. Ideally, the new image would look just like the old one, but red.
What happens instead is that in the new image, the old image appears three times, squished next to each other.
Here you can see the gray scale image and the output image.
Here is my code, I think it's pretty straightforward:
Mat img_in = imread("in.png", CV_LOAD_IMAGE_GRAYSCALE);
Mat img_out = Mat::zeros(img_in.size(), CV_8UC3);
for (int i = 0; i < img_in.rows; i++)
{
    for (int j = 0; j < img_in.cols; j++)
    {
        img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
    }
}
imwrite("test_img_in.png", img_in);
imwrite("test_img_out.png", img_out);
At first I thought it was some kind of indices mixup, but I've tried a lot of combinations, and it always multiplies the output image three times horizontally, never vertically.
Now my thought is that it comes from some OpenCV specification, like the CV_8UC3 type (I've tried others too), which I've chosen because I think it supports RGB images. Unfortunately, I don't know too much about OpenCV itself, which is why I'm seeking help here.
PS: This is part of a bigger program that generates a color image from three grayscale channel images, but I'm currently stuck on combining the aligned grayscale images because of this. The code I posted is isolated from the rest of the program and behaves like this on its own.
My OpenCV version is 2.4.11.
The problem is here:
img_out.at<Vec3b>(i,j)[2] = img_in.at<Vec3b>(i,j)[2];
As you said the input image is gray. So, just use:
img_out.at<Vec3b>(i,j)[2] = img_in.at<unsigned char>(i,j);
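As a side note, a loop-free sketch of the same idea (assuming img_in is the single-channel CV_8UC1 image from the question) could build the output with merge:

Mat zeros = Mat::zeros(img_in.size(), CV_8UC1);
std::vector<Mat> channels;
channels.push_back(zeros);    // blue
channels.push_back(zeros);    // green
channels.push_back(img_in);   // red
Mat img_out;
merge(channels, img_out);     // 3-channel image whose red channel is the grayscale input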
You will get the same result by loading your image as a 3-channel image and subtracting Scalar(255,255,0):
#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char **argv)
{
    Mat src = imread(argv[1]);
    imshow("src", src);
    src -= Scalar(255, 255, 0);
    imshow("Red channel", src);
    waitKey();
    return 0;
}

OpenCV Sobel Filters resulting in almost completely black images

I am having some issues with my sobel_y (and sobel_x, but I figure they have the same issue) filter: it keeps giving me an image that is basically only black and white. I am rewriting this function for a class, so no, I cannot use the built-in one. I had it working, minus some minor tweaks, because the output image looked a little strange, still being black and white even though it was supposed to be converted back. I figured out how to fix that, but in the process I messed with something, broke it, and cannot get it back to working even with the black-and-white output only. I keep getting a black image, with some white lines here and there near the top. I have tried changing the Mat grayscale type (third parameter) to all different values, as my professor mentioned in class that we are using 32-bit floating point images, but that did not help either.
Even though the issue occurs after running Studentfilter2D, I think it is a problem with the grayscaling of the image, although whenever I debug it seems to work just fine. I say this because I have 2 other filtering functions that use Studentfilter2D, and they both give me the expected results. My sobel_y function is shown below:
// Convert the image in bgr to grayscale OK to use the OpenCV function.
// Find the coefficients used by the OpenCV function, and give a link where you found it.
// Note: This student function expects the matrix gray to be preallocated with the same width and
// height, but with 1 channel.
void BGR2Gray(Mat& bgr, Mat& gray)
{
    // Y = .299 * R + .587 * G + .114 * B, from http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#cvtcolor
    // Some extra assistance, for the third parameter for the InputArray, from http://docs.opencv.org/trunk/modules/core/doc/basic_structures.html#inputarray
    // Not sure about the fourth parameter, but was just trying it to see if that may be the issue as well
    cvtColor(bgr, gray, CV_BGR2GRAY, 1);
    return;
}
// Convolve image with kernel - this routine will be called from the other
// subroutines! (gaussian, sobel_x and sobel_y)
// image is single channel. Do not use the OpenCV filter2D!!
// Implementation can be with the .at or similar to the
// basic method found in the Chapter 2 of the OpenCV tutorial in CANVAS,
// or online at the OpenCV documentation here:
// http://docs.opencv.org/doc/tutorials/core/mat-mask-operations/mat-mask operations.html
// In our code the image and the kernel are both floats (so the sample code will need to change)
void Studentfilter2D (Mat& image, Mat& kernel)
{
    int kCenterX = kernel.cols / 2;
    int kCenterY = kernel.rows / 2;
    // Algorithm help from http://www.songho.ca/dsp/convolution/convolution.html
    for (int iRows = 0; iRows < image.rows; iRows++)
    {
        for (int iCols = 0; iCols < image.cols; iCols++)
        {
            float result = 0.0;
            for (int kRows = 0; kRows < kernel.rows; kRows++)
            {
                // Flip the rows for the convolution
                int kRowsFlipped = kernel.rows - 1 - kRows;
                for (int kCols = 0; kCols < kernel.cols; kCols++)
                {
                    // Flip the columns for the convolution
                    int kColsFlipped = kernel.cols - 1 - kCols;
                    // Indices of shifting around the convolution
                    int iRowsIndex = iRows + kRows - kCenterY;
                    int iColsIndex = iCols + kCols - kCenterX;
                    // Check bounds using the indices
                    if (iRowsIndex >= 0 && iRowsIndex < image.rows && iColsIndex >= 0 && iColsIndex < image.cols)
                    {
                        result += image.at<float>(iRowsIndex, iColsIndex) * kernel.at<float>(kRowsFlipped, kColsFlipped);
                    }
                }
            }
            image.at<float>(iRows, iCols) = result;
        }
    }
    return;
}
void sobel_y (Mat& image, int)
{
    // Note, the filter parameter int is unused.
    Mat mask = (Mat_<float>(3, 3) << 1, 2, 1,
                                     0, 0, 0,
                                    -1, -2, -1) / 3;
    //Mat grayscale(image.rows, image.cols, CV_32FC1);
    BGR2Gray(image, image);
    Studentfilter2D(image, mask);
    // Here is the documentation on normalize http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#normalize
    normalize(image, image, 0, 1, NORM_MINMAX);
    cvtColor(image, image, CV_GRAY2BGR);
    return;
}
Like I said, I had this working before; I'm just looking for some fresh eyes to look at it and see what I may be missing. I have been looking at this same code so much for the past 4 days that I think I am just missing things. In case anyone is wondering, I have also tried changing the mask values of the filter, but to no avail.
There are two things that are worth mentioning.
The first is that you are not taking proper care of the type of your matrices/images.
The input to Studentfilter2D in sobel_y is an 8-bit grayscale image of type CV_8UC1 meaning that the data is an array of unsigned char.
Your Studentfilter2D function, however, is indexing this input image as though it was of type float. This means it is picking the wrong pixels to work with.
If the above does not immediately solve your problem, you should consider the range of your final derivative image. Since it is a derivative it will no longer be in the range [0, 255]. Instead, it might even contain negative numbers. When you try to visualize this, you will run into problems unless you first normalize your image.
There are built-in functions to do this in OpenCV if you look around in the documentation.
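For example, following that advice, a sketch of sobel_y that converts to float before filtering (reusing the question's BGR2Gray and Studentfilter2D, and only meant to illustrate the type handling, not the assignment's exact output format) could look like this:

void sobel_y(Mat& image, int)
{
    Mat mask = (Mat_<float>(3, 3) << 1, 2, 1,
                                     0, 0, 0,
                                    -1, -2, -1) / 3;
    Mat gray(image.rows, image.cols, CV_8UC1);
    BGR2Gray(image, gray);                        // gray is CV_8UC1 here
    gray.convertTo(gray, CV_32F);                 // now .at<float> indexing in Studentfilter2D is valid
    Studentfilter2D(gray, mask);
    normalize(gray, gray, 0, 255, NORM_MINMAX);   // derivative values may be negative
    gray.convertTo(gray, CV_8U);                  // back to 8-bit for display
    cvtColor(gray, image, CV_GRAY2BGR);
}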

How do I update this Neural Net to use image pixel data

I'm learning Neural Networks from this bytefish machine learning guide and code. I understand it well but I would like to update the code at the previous link to use image pixel data instead of random values as the input data. In this section of the aforementioned code:
cv::randu(trainingData,0,1);
cv::randu(testData,0,1);
the training and test matrices are filled with random data. Then label data is added to the classes matrices here:
cv::Mat trainingClasses = labelData(trainingData, eq);
cv::Mat testClasses = labelData(testData, eq);
using this function:
// label data with equation
cv::Mat labelData(cv::Mat points, int equation) {
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for (int i = 0; i < points.rows; i++) {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
        // The f() function used above is just a switch statement
        // with 5 cases in it, e.g. one of the cases is:
        // case 0:
        //     return y > sin(x*10) ? -1 : 1;
        //     break;
    }
    return labels;
}
Then points are plotted in a window here:
plot_binary(trainingData, trainingClasses, "Training Data");
plot_binary(testData, testClasses, "Test Data");
with this function:
// Plot Data and Class function
void plot_binary(cv::Mat& data, cv::Mat& classes, string name) {
    cv::Mat plot(size, size, CV_8UC3);
    plot.setTo(cv::Scalar(255.0, 255.0, 255.0));
    for (int i = 0; i < data.rows; i++) {
        float x = data.at<float>(i,0) * size;
        float y = data.at<float>(i,1) * size;
        if (classes.at<float>(i, 0) > 0) {
            cv::circle(plot, Point(x,y), 2, CV_RGB(255,0,0), 1);
        } else {
            cv::circle(plot, Point(x,y), 2, CV_RGB(0,255,0), 1);
        }
    }
    imshow(name, plot);
}
The plotted points, as I understand it, represent the input data labeled by the equations in the f() function and are used by the predict functions to predict which point to plot in the mlp, knn, svm, etc. functions. How do I update what is going on here to work with image pixel data? Any advice to get me farther would be appreciated.
"How do I update what is going on here to do something with Image pixel data" is a broad and generic question. May I ask in exchange: what do you want to do with "Image pixel data"?
Do you want an answer to: what can be done with "Image pixel data" on machine learning algorithms like ANN, SVM etc. ?
The answer is a loooong list of things encompassing thousands of research papers and hundreds of PhD theses. Some examples include supervised and/or unsupervised classification of images into labels/tags/categories based on features like image content, objects in the image, patterns in the image, etc. The possibilities are endless. You may perhaps want to take a look at this: http://stuff.mit.edu/afs/athena/course/urop/profit/PDFS/EdwardTolson.pdf
Now, coming back to your original objective: "I would like to update the code at the previous link to use image pixel data instead of random values as the input data"...
The implementation technique would depend largely on what you want to do. I can cite one/two easy techniques for extracting feature vectors from image, which can be fed into any machine learning algorithm of your choice...
Example 1:
You may start with using pixel intensity data as a feature vector. Here's how you may go ahead with it:
Load image using
Mat image = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
Resize image into a smaller area using resize. You may want to begin with small image sizes, like 8x8 or 10x10 pixels.
Loop through the image matrix and collect the pixel values, somewhat like this:
std::vector<float> featureVector;   // the pixel intensities become the features
for (int row = 0; row < img.rows; ++row)
{
    uchar* p = img.ptr(row);
    for (int col = 0; col < img.cols; ++col)
    {
        // p points to each pixel value in turn, assuming a CV_8UC1 greyscale image
        featureVector.push_back(static_cast<float>(*p++));
    }
}
A collection of all the pixel values will give you a feature vector for that image.
Now suppose you have two classes of images. For each feature Mat you generate, you'll have to prepare (for supervised classification) a corresponding label Mat (somewhat like the example you've mentioned). It needs to contain the class label (say, 0 and 1) for each of the feature vectors in your feature Mat.
Now feed the feature vectors and label Mat to your machine learning code and see what happens.
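As a rough illustration (the file list, the 10x10 size, and the label values below are all made-up placeholders), the feature Mat and label Mat could be assembled like this:

std::vector<std::string> files;   // paths to your training images
std::vector<int> classIds;        // class label (0 or 1) per image, same order as files

cv::Mat features((int)files.size(), 10 * 10, CV_32FC1);   // one row per image
cv::Mat labels((int)files.size(), 1, CV_32FC1);
for (int k = 0; k < (int)files.size(); ++k)
{
    cv::Mat img = cv::imread(files[k], CV_LOAD_IMAGE_GRAYSCALE);
    cv::resize(img, img, cv::Size(10, 10));        // small fixed size, as suggested above
    img.convertTo(img, CV_32F, 1.0 / 255.0);       // scale intensities to [0, 1]
    img.reshape(1, 1).copyTo(features.row(k));     // flatten the 10x10 image into one row
    labels.at<float>(k, 0) = (float)classIds[k];
}
// features and labels can now take the place of the random trainingData/trainingClasses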
However, image classification based on raw pixel data alone is quite limited. There are thousands of techniques for extracting image features, most of which are dependent on the application area.
Example 2:
I'll finish off with one more example of extracting feature vectors which, in some cases, will prove to be more effective than simple pixel values.
You may use the Histogram of Oriented Gradients descriptor for slightly better results:
cv::HOGDescriptor hog;
vector<float> descriptors;
hog.compute(mat, descriptors);
The vector descriptors is your feature vector.
The HOG descriptor, when used with an SVM, provides a decent classification mechanism.
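For instance, a small sketch of turning one image's HOG descriptor into a training row (the default HOGDescriptor uses a 64x128 detection window, so the image is resized to hog.winSize here; the file path is a placeholder):

cv::HOGDescriptor hog;                        // default 64x128 window
cv::Mat gray = cv::imread("sample.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::resize(gray, gray, hog.winSize);          // HOG expects the window size
std::vector<float> descriptors;
hog.compute(gray, descriptors);
cv::Mat featureRow = cv::Mat(descriptors).t();   // copied into a 1 x N CV_32F row for training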
You can put the pixel data of an image into a Mat called trainingData using something similar to this:
cv::Mat labelData(cv::Mat points, int equation)
{
    cv::Mat labels(points.rows, 1, CV_32FC1);
    for (int i = 0; i < points.rows; i++)
    {
        float x = points.at<float>(i,0);
        float y = points.at<float>(i,1);
        labels.at<float>(i, 0) = f(x, y, equation);
    }
    return labels;
}
Now, instead of labelData, we're going to return a Mat of pixel data. One obvious way is to use the image itself as a feature vector. However, some machine learning algorithms in OpenCV, including ANN, SVM, etc., require special formatting of input data.
You may try something like this:
cv::Mat trainingData(cv::Mat image)
{
    // Assumes image is CV_32FC1; if it was loaded as 8-bit grayscale,
    // convert it first, e.g. image.convertTo(image, CV_32F, 1.0/255.0);
    cv::Mat trainingVector(image.rows*image.cols, 1, CV_32FC1);
    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            float valueOfPixel = image.at<float>(i, j);
            trainingVector.at<float>((i*image.cols)+j, 0) = valueOfPixel;
        }
    }
    return trainingVector;
}
(Please recheck the syntax of the code before using, I just typed it out here)
So, what the above block effectively does is change the 2D matrix of the image into a 1D array. Now, how and where you use it depends on your requirements.
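A hedged usage sketch (the image path is a placeholder) of how this could replace the random data in the bytefish example:

cv::Mat img = cv::imread("some_training_image.png", CV_LOAD_IMAGE_GRAYSCALE);
img.convertTo(img, CV_32FC1, 1.0 / 255.0);        // trainingData() above expects float pixels
cv::Mat oneSample = trainingData(img);            // (rows*cols) x 1 column of pixel values
// use one such vector per image in place of cv::randu(trainingData, 0, 1);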
Please make necessary modifications before invoking the machine learning modules.
Hope this answers your question.
Thanks.