By default cv::imread reads data into a cv::Mat in BGR order. I would prefer it in RGB order. Every time I read an image I do a conversion:
cv::Mat image;
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
if(!image.data )
...
cvtColor(image, image, CV_BGR2RGB);
Is there a way to tell Mat or imread that the color order should be different?
Something like:
cv::Mat image;
image.setOrder(CV_RGB) // ???
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
No, as a matter of fact, imread() has no such configuration option, and there is no way to specify the channel order.
I suggest you wrap your image reading and channel mixing in a small utility function.
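For example, a small wrapper could look like this (a minimal sketch; the name imreadRGB is just an illustration, and cv::IMREAD_COLOR / cv::COLOR_BGR2RGB are the current names of the constants used above):
#include <string>
#include <opencv2/opencv.hpp>

// Hypothetical helper: loads an image and returns it in RGB channel order.
cv::Mat imreadRGB(const std::string& path)
{
    cv::Mat image = cv::imread(path, cv::IMREAD_COLOR); // imread always produces BGR
    if (!image.empty())
        cv::cvtColor(image, image, cv::COLOR_BGR2RGB);  // swap to RGB once, here
    return image;
}
That way the conversion lives in one place and the rest of your code can simply assume RGB.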
I'm currently using OpenCV (in C++) to normalise some data in the form of images. Since I'm planning to train an autoencoder, I found some articles and papers suggesting that it's better if the data is normalised to the range from -1 to 1 (tanh vs sigmoid). I managed to do this with the following code:
cv::VideoCapture cap(video);
cv::Mat frame;
cap.read(frame);
//convert to grayscale
cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
//normalise
frame.convertTo(frame, CV_32FC1); //change data type from CV_8UC1 to CV_32FC1
cv::normalize(frame, frame, -1, 1, cv::NORM_MINMAX);
//downscale
cv::resize(frame, frame, cv::Size(256, 256));
The next thing I want to do is save the normalised image, and for this I'm using cv::imwrite(), so I have the following line: cv::imwrite("normImage.tiff", frame);.
I'm wondering, however, if writing the image to .TIFF format actually reverses the normalisation and I haven't been able to verify whether that's the case or not. I'd also like to ask if there's a better way/format to write the image using OpenCV?
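A quick way to check would be a round trip: write the frame, read it back unchanged, and compare. A minimal, untested sketch, continuing from the snippet above and assuming the OpenCV build's TIFF encoder supports 32-bit float images:
cv::imwrite("normImage.tiff", frame); // frame is CV_32FC1 at this point
cv::Mat reloaded = cv::imread("normImage.tiff", cv::IMREAD_UNCHANGED); // keep depth as stored
if (reloaded.type() == frame.type())
    std::cout << "max abs difference: " << cv::norm(frame, reloaded, cv::NORM_INF) << std::endl;
else
    std::cout << "TIFF did not preserve the float depth" << std::endl;
If the reported difference is (close to) zero, the normalisation survived the write.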
Cheers
I am trying to make an object tracker using OpenCV 3.1.0 and C++ following this Python example: http://docs.opencv.org/3.1.0/df/d9d/tutorial_py_colorspaces.html#gsc.tab=0.
I have some problems with the cvtColor() function, because it changes the colors of my images rather than their colorspace. I have this code:
Mat original_image;
original_image = imread(argv[1], CV_LOAD_IMAGE_COLOR); // The image is passed as arg
if (!original_image.data)
{
printf("Problem!\n");
return -1;
}
// From BGR to HSV
Mat hsv_image(original_image.rows, original_image.cols, original_image.type());
cvtColor(original_image, hsv_image, CV_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
original_image is CV_8UC3, which is compatible with cvtColor(), and it should originally be in the BGR colorspace.
I made the test image below with GIMP:
And I get this image as a result:
I decided to try the conversion from BGR to RGB, changing BGR2HSV to BGR2RGB, and with the same test image I get this result:
Here, it's clearer that the channels of the image are being changed directly...
Has anybody any idea about what's happening here?
The imwrite function doesn't care what color space the Mat is in, and that information isn't stored in the file. According to the documentation, the data is assumed to be in BGR order.
So before saving an image you should make sure it is in BGR.
If you really want to save the image as HSV, use a cv::FileStorage instead (see the sketch after the code below).
Try this:
// From BGR to HSV
Mat hsv_image;
cvtColor(original_image, hsv_image, COLOR_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I want to print a pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, even though that pixel is not actually black. I thought the image had been read as a 2D matrix.
Actually I want to do some calculations on each pixel, so I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j,i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar. The return value is therefore a uchar, which you typecast into an int to operate on it as ints.
Also, this question is similar to yours:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
if you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
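Building on that, a minimal sketch for running a calculation on every pixel of such a CV_8U image (the inversion is only a placeholder for whatever calculation you actually need):
for (int row = 0; row < img1.rows; ++row)
{
    for (int col = 0; col < img1.cols; ++col)
    {
        uchar value = img1.at<uchar>(row, col); // read the 8-bit grayscale value
        int result = 255 - (int)value;          // placeholder per-pixel calculation
        img1.at<uchar>(row, col) = (uchar)result;
    }
}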
I have an image which has 4 channels and is in 4 * UINT8 format.
I am trying to convert it to 3 channel float and I am using this code:
images.convertTo(images,CV_32FC3,1/255.0);
After the conversion, the image is in a float format but still has 4 channels. How can I get rid of the 4th (alpha) channel in OpenCV?
As @AldurDisciple said, Mat::convertTo() is intended for changing the data type of a Mat, not for changing the number of channels.
To make it work, you should split it into two steps:
cvtColor(image, image, CV_BGRA2BGR); // 1. change the number of channels
image.convertTo(image, CV_32FC3, 1/255.0); // 2. change type to float and scale
The convertTo function is intended to be used to change the data type of a Mat, exclusively. As mentioned in the documentation (link), the number of channels of the output image is always the same as that of the input image.
If you want to change the datatype and reduce the number of channels, you should use a combination of split, merge, and convertTo:
cv::Mat img_8UC4;
cv::Mat chans[4];
cv::split(img_8UC4, chans);              // separate the four channels
cv::Mat img_8UC3;
cv::merge(chans, 3, img_8UC3);           // merge only the first three (drop alpha)
cv::Mat img_32FC3;
img_8UC3.convertTo(img_32FC3, CV_32FC3); // then change the depth to float
Another approach may be to recode the algorithm yourself, which is quite easy and probably more efficient.
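A hand-rolled version might look like this (a minimal sketch, not the original answer's code; the function name bgraToBgrFloat is hypothetical):
#include <opencv2/opencv.hpp>

// Drop the alpha channel and scale 8-bit values to [0,1] floats in one pass.
cv::Mat bgraToBgrFloat(const cv::Mat& img_8UC4)
{
    CV_Assert(img_8UC4.type() == CV_8UC4);
    cv::Mat img_32FC3(img_8UC4.size(), CV_32FC3);
    for (int row = 0; row < img_8UC4.rows; ++row)
    {
        const cv::Vec4b* src = img_8UC4.ptr<cv::Vec4b>(row);
        cv::Vec3f* dst = img_32FC3.ptr<cv::Vec3f>(row);
        for (int col = 0; col < img_8UC4.cols; ++col)
        {
            // keep B, G and R, skip alpha, and scale from [0,255] to [0,1]
            dst[col] = cv::Vec3f(src[col][0], src[col][1], src[col][2]) * (1.0f / 255.0f);
        }
    }
    return img_32FC3;
}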
OpenCV's cvtColor function allows you to convert the color space and the number of channels of a Mat.
void cvtColor(InputArray src, OutputArray dst, int code, int dstCn=0 )
So something like this would convert a 4-channel color image to a 3-channel one:
cvtColor(image, image, CV_BGRA2BGR, 3);
Alternatively, it is probably more efficient to use the mixChannels function; if you check the documentation, its example shows how to split a channel out.
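For example, a mixChannels sketch that copies only the first three channels into a new Mat (a minimal illustration, assuming image is the CV_8UC4 input from the question):
cv::Mat bgr(image.size(), CV_8UC3); // destination must be allocated beforehand
int from_to[] = { 0,0, 1,1, 2,2 };  // source->destination channel pairs: keep B, G, R, drop alpha
cv::mixChannels(&image, 1, &bgr, 1, from_to, 3);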
Then if you really want to change it to a specific type:
image.convertTo(image,CV_32F);
I want to implement something similar to the function shown below using OpenCV.
image=double(imread('mask.jpg'));
I have implemented something like this. How do I convert it to double?
cv::Mat image= imread(arg[1]);
where arg[1] contains the path to my image, which is to be stored in the Mat as double. How can I implement this?
You're looking for Mat::convertTo().
For a grayscale image:
image.convertTo(image, CV_64FC1);
For a color image:
image.convertTo(image, CV_64FC3); // or CV_64FC4 for 4-channel image
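Putting it together for your case, a short sketch (note that Matlab's double(imread(...)) keeps the 0-255 range, so no scaling is needed; use the commented variant only if you want im2double-style [0,1] values):
cv::Mat image = cv::imread(arg[1]);      // 8-bit BGR, as in the question
cv::Mat image_double;
image.convertTo(image_double, CV_64FC3); // same 0..255 values, stored as double
// image.convertTo(image_double, CV_64FC3, 1.0 / 255.0); // optional: scale to [0,1] instead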