I want to implement the equivalent of the following MATLAB line using OpenCV:
image=double(imread('mask.jpg'));
So far I have implemented something like this. How can I convert it to double?
cv::Mat image = cv::imread(argv[1]);
where argv[1] is the path to my image, which should end up stored in the Mat as double. How can I implement this?
You're looking for Mat::convertTo().
For a grayscale image:
image.convertTo(image, CV_64FC1);
For a color image:
image.convertTo(image, CV_64FC3); // or CV_64FC4 for 4-channel image
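For example, a minimal sketch tying this back to your MATLAB line could look like the following (assuming the path comes in as argv[1]; if you also want the values scaled to [0, 1], pass a scale factor such as 1.0/255.0 as the third argument of convertTo()):
#include <opencv2/opencv.hpp>

int main(int argc, char** argv)
{
    // Read the image as 8-bit BGR (imread's default behaviour)
    cv::Mat image = cv::imread(argv[1]);
    if (image.empty())
        return -1;

    // Convert to double precision; the channel count is preserved,
    // so CV_64F works for both grayscale and color images
    cv::Mat image64;
    image.convertTo(image64, CV_64F);

    return 0;
}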
I'm currently using OpenCV (in C++) to normalize some data in the form of images. Since I'm planning to train an autoencoder, I found some articles and papers suggesting that it's better if the data is normalised to the range -1 to 1 (tanh vs. sigmoid). I managed to do this with the following code:
cv::VideoCapture cap(video);
cv::Mat frame;
cap.read(frame);
//convert to grayscale
cv::cvtColor(frame, frame, cv::COLOR_BGR2GRAY);
//normalise
frame.convertTo(frame, CV_32FC1); //change data type from CV_8UC1 to CV_32FC1
cv::normalize(frame, frame, -1, 1, cv::NORM_MINMAX);
//downscale
cv::resize(frame, frame, cv::Size(256, 256));
The next thing I want to do is to save the normalised image, and for this I'm using cv::imwrite(), so I have the following line: cv::imwrite("normImage.tiff", frame);.
I'm wondering, however, if writing the image to .TIFF format actually reverses the normalisation and I haven't been able to verify whether that's the case or not. I'd also like to ask if there's a better way/format to write the image using OpenCV?
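For what it's worth, the only check I could think of is to read the file back with IMREAD_UNCHANGED and inspect its type and value range, roughly like this (not sure if that's the right way to verify it):
// Read the written file back without any conversion and inspect what came out
cv::Mat readBack = cv::imread("normImage.tiff", cv::IMREAD_UNCHANGED);
double minVal, maxVal;
cv::minMaxLoc(readBack, &minVal, &maxVal);
std::cout << readBack.type() << " " << minVal << " " << maxVal << std::endl; // needs <iostream>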
Cheers
By default cv::imread reads data into a cv::Mat in BGR order. I would prefer it in RGB order, so every time I read an image I do a conversion:
cv::Mat image;
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
if (!image.data)
...
cvtColor(image, image, CV_BGR2RGB);
is there a way to tell Mat or imread that colors order should be different?
Something like:
cv::Mat image;
image.setOrder(CV_RGB) // ???
image = cv::imread("...",CV_LOAD_IMAGE_COLOR);
No; as a matter of fact, imread() has no such option and there is no way to define the channel order.
I suggest you wrap your image reading and channel mixing in a small utility function.
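For example, a small wrapper could look like this (imreadRGB is just an illustrative name, not an OpenCV function):
#include <opencv2/opencv.hpp>
#include <string>

// Read an image from disk and return it with the channels in RGB order
cv::Mat imreadRGB(const std::string& path)
{
    cv::Mat image = cv::imread(path, cv::IMREAD_COLOR); // loaded as BGR
    if (!image.empty())
        cv::cvtColor(image, image, cv::COLOR_BGR2RGB);  // swap to RGB
    return image;
}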
I am trying to make an object tracker using OpenCV 3.1.0 and C++ following this Python example: http://docs.opencv.org/3.1.0/df/d9d/tutorial_py_colorspaces.html#gsc.tab=0.
I am having problems with the cvtColor() function, because it changes the colors of my image and not its colorspace. I have this code:
Mat original_image;
original_image = imread(argv[1], CV_LOAD_IMAGE_COLOR); // The image is passed as arg
if (!original_image.data)
{
printf("Problem!\n");
return -1;
}
// From BGR to HSV
Mat hsv_image(original_image.rows, original_image.cols, original_image.type());
cvtColor(original_image, hsv_image, CV_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
original_image is CV_8UC3, which is compatible with cvtColor(), and it should originally be in the BGR colorspace.
I made the test image below with GIMP:
And I get this image as result:
I decided to try a conversion from BGR to RGB instead, changing BGR2HSV to BGR2RGB, and with the same test image I get this result:
Here it's even clearer that the channels of the image are simply being swapped directly...
Does anybody have any idea what's happening here?
The imwrite function doesn't care what colorspace the Mat is in, and that information isn't stored in the file; according to the documentation, the data is expected to be in BGR order.
So before saving the image you should make sure it is in BGR.
If you really want to save the image as HSV, use cv::FileStorage instead (see the sketch after the code below).
Try this:
// From BGR to HSV
Mat hsv_image;
cvtColor(original_image, hsv_image, COLOR_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I print a pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, even though the pixel is not actually black. I thought the image had been read as a 2D matrix.
Actually I want to do some calculations on each pixel, so I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img.at<uchar>(j, i));
This is because of the way the image is stored and the way img.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar values. The return value is therefore a uchar, which you cast to an int in order to operate on it as an int.
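For example, a per-pixel loop could look like this (a sketch, assuming img1 is the grayscale Mat from your question):
for (int row = 0; row < img1.rows; ++row)
{
    for (int col = 0; col < img1.cols; ++col)
    {
        int value = (int)img1.at<uchar>(row, col); // 0..255
        // ... do your per-pixel calculation with 'value' here ...
    }
}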
Also, this is similar to your question:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
If you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
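If you really do want to work with float values, convert the image first; then at<float> is valid (a sketch; the 1.0/255.0 scale factor is optional and just maps the values into [0, 1]):
cv::Mat img32f;
img1.convertTo(img32f, CV_32F, 1.0 / 255.0); // now a single-channel float image
float pixel = img32f.at<float>(255, 255);    // valid, value in [0, 1]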
Hi,
I am using OpenCV 2.4.4 to develop the following application:
read an image in Mat format;
transform it to grayscale;
detect the face, if exists;
crop the face, extracting the Region Of Interest;
save the cropped image to file.
My question is: could I possibly set a standard size (width and height) for the saved image?
I tried to use the resize function, but it doesn't do what I want because it saves just a part of the face.
cv::cvtColor(croppedImage, greyMat, CV_BGR2GRAY);
greyMat.resize(100);
imwrite("result.jpg", greyMat);
Try cv::resize() instead: greyMat.resize(100) calls cv::Mat::resize(), which only changes the number of rows (so it keeps just a part of the image), whereas cv::resize() actually rescales it:
cv::cvtColor(croppedImage, greyMat, CV_BGR2GRAY);
cv::Mat result;
cv::resize(greyMat, result, cv::Size(100,100));
imwrite("result.jpg", result);
See the documentation for details.
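If the cropped face is not square and you want to avoid distorting it, one possible variation (just a sketch, using std::max from <algorithm> and padding with black) is to scale by a single factor and paste the result onto a 100x100 canvas:
cv::Mat greyMat, scaled;
cv::cvtColor(croppedImage, greyMat, cv::COLOR_BGR2GRAY);
double scale = 100.0 / std::max(greyMat.cols, greyMat.rows);
cv::resize(greyMat, scaled, cv::Size(), scale, scale);   // keep aspect ratio
cv::Mat result(100, 100, scaled.type(), cv::Scalar(0));  // black 100x100 canvas
scaled.copyTo(result(cv::Rect(0, 0, scaled.cols, scaled.rows))); // paste top-left
imwrite("result.jpg", result);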