I would like to apply a color map to a grayscale image with applyColorMap, but it seems that it doesn't accept IplImage pointers, according to:
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/colormaps.html
What would be the right syntax for that?
Something else: is there a link between camera calibration and color mapping? Are they about the same thing?
You can pass a cv::Mat to applyColorMap according to the ColorMaps page in the OpenCV documentation.
Looking at this OpenCV C++ cheat sheet, we see that an IplImage can be converted to a Mat by simply passing the IplImage pointer to the Mat constructor: Mat imgMat(iplimg);
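For example, a minimal sketch assuming OpenCV 2.4 (where applyColorMap lives in the contrib module) and an already-loaded 8-bit grayscale IplImage* named iplimg:

#include <opencv2/core/core.hpp>
#include <opencv2/contrib/contrib.hpp>   // applyColorMap (OpenCV 2.4 contrib)
#include <opencv2/highgui/highgui.hpp>   // imwrite

// iplimg is assumed to be an existing 8-bit, single-channel IplImage*
cv::Mat gray(iplimg);                    // header conversion, no data copy
cv::Mat colored;
cv::applyColorMap(gray, colored, cv::COLORMAP_JET);
cv::imwrite("colored.png", colored);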
I am trying to make an object tracker using OpenCV 3.1.0 and C++ following this Python example: http://docs.opencv.org/3.1.0/df/d9d/tutorial_py_colorspaces.html#gsc.tab=0.
I am having problems with the cvtColor() function, because it seems to change the colors of my image rather than its color space. I have this code:
Mat original_image;
original_image = imread(argv[1], CV_LOAD_IMAGE_COLOR); // The image is passed as arg
if (!original_image.data)
{
printf("Problem!\n");
return -1;
}
// From BGR to HSV
Mat hsv_image(original_image.rows, original_image.cols, original_image.type());
cvtColor(original_image, hsv_image, CV_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
original_image is CV_8UC3, which is compatible with cvtColor(), and should originally be in the BGR color space.
I made the test image below with GIMP:
And I get this image as a result:
I decided to try the conversion from BGR to RGB, changing BGR2HSV to BGR2RGB, and with the same test image I get this result:
Here it's clearer that the channels of the image are being changed directly...
Has anybody any idea about what's happening here?
The imwrite function doesn't care which color space the Mat is in, and that information isn't stored in the file. According to the documentation, it expects BGR order.
So before saving the image you should be sure it is BGR.
If you really want to save the image as HSV data, use cv::FileStorage instead.
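A minimal sketch of that FileStorage route ("hsv_image.yml" and the node name "hsv" are placeholder names I chose):

#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

cv::Mat hsv_image;
cv::cvtColor(original_image, hsv_image, CV_BGR2HSV);

cv::FileStorage fs("hsv_image.yml", cv::FileStorage::WRITE);
fs << "hsv" << hsv_image;   // the matrix is stored as raw data, no color conversion applied
fs.release();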
Try this:
// From BGR to HSV
Mat hsv_image;
cvtColor(original_image, hsv_image, COLOR_BGR2HSV);
imwrite("hsv_image.png", hsv_image);
I know this question may seem weird to experts, but I want to access the pixels of a grayscale image in OpenCV. I am using the following code:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Now if I try to print a pixel (say at position 255,255) using img1.at<float>(255,255), I get 0, which is not actually a black pixel. I thought the image had been read as a 2D matrix.
Actually I want to do some calculation on each pixel, and for that I need to access each pixel explicitly.
Any help or suggestion will be appreciated.
You will have to do this:
int x = (int)(img1.at<uchar>(j, i));
This is because of the way the image is stored and the way img1.at works. A grayscale image is stored in CV_8UC1 format, so it is basically an array of uchar values. The return value is therefore a uchar, which you cast to an int to operate on it as an integer.
Also, this question is similar to yours:
accessing pixel value of gray scale image in OpenCV
float is the wrong type here.
If you read an image like this:
cv::Mat img1 = cv::imread("bauckhage.jpg", CV_LOAD_IMAGE_GRAYSCALE);
then its type will be CV_8U (uchar, a single grayscale byte per pixel). So, to access it:
uchar &pixel = img1.at<uchar>(row,col);
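If the goal is to run some calculation over every pixel, a rough sketch using the same uchar access could look like this (the inversion is just a placeholder for your own operation):

for (int row = 0; row < img1.rows; ++row)
{
    for (int col = 0; col < img1.cols; ++col)
    {
        uchar value = img1.at<uchar>(row, col);
        img1.at<uchar>(row, col) = 255 - value;   // replace with your own calculation
    }
}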
I am implementing an algorithm in which I have to use multi-resolution analysis. The paper says that I have to perform some processing at a lower scale, find some pixel locations, and then remap the pixels back to the original scale. I really don't understand the remapping function in OpenCV. If anyone could help, that would be great.
Thanks.
If you want to resize a picture in OpenCV, you can do this:
Mat img = imread(picturePath, CV_LOAD_IMAGE_GRAYSCALE);   // load as 8-bit grayscale
Mat resized(NEW_HEIGHT, NEW_WIDTH, CV_32FC1);
img.convertTo(img, CV_32FC1);                              // convert to float for processing
resize(img, resized, resized.size());
If you want to access a specific pixel, use:
img.at<var>(row, col)
In CV_32FC1 format, replace "var" with "float"; for CV_8UC1, replace it with "uchar".
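If the remapping in the paper just means carrying pixel locations found at the lower scale back to the original resolution, one rough sketch (reusing the img and resized variables above) is to scale the coordinates by the resize factors:

// Scale factors between the original and the resized image
double scale_x = (double)img.cols / resized.cols;
double scale_y = (double)img.rows / resized.rows;

Point low_res_pt(40, 25);   // example location found at the lower scale
Point full_res_pt(cvRound(low_res_pt.x * scale_x),
                  cvRound(low_res_pt.y * scale_y));   // same location at the original scale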
Hope this will help.
I have written an Android app which uses OpenCV to manipulate images. I'm using the code below to write a cv::Mat object to a JPG file.
cv::imwrite("<sd card path>/img.jpg", <some mat object>);
I do see the image being saved on my sd card, however, the colors are not right. It has some bluish color all over the image.
Does anyone know what I'm missing here?
The above comment by Haris resolved this issue. I modified the code to change the color space as below:
cv::Mat mat;
// Initialize mat (presumably holding RGB-ordered data, e.g. from an Android bitmap)
cv::cvtColor(mat, mat, CV_BGR2RGB);   // swap the R and B channels so imwrite gets the BGR order it expects
cv::imwrite("<sd card path>/img.jpg", mat);
I was able to see the right image being saved with this.
I'm very new to OpenCV (I started using it two days ago). I'm trying to cut a hand image out of a depth image obtained from a Kinect; I need the hand image for gesture recognition. I have the image as a cv::Mat type. My questions are:
Is there a way to convert cv::Mat to CvMat so that I can use the cvGetSubRect method to get the region of interest?
Are there any methods in cv::Mat that I can use to get part of the image?
I wanted to use IplImage, but I read somewhere that cv::Mat is the preferred way now.
You can use the overloaded function call operator on the cv::Mat:
cv::Mat img = ...;
cv::Mat subImg = img(cv::Range(0, 100), cv::Range(0, 100));
Check the OpenCV documentation for more information and for the overloaded function that takes a cv::Rect. Note that using this form of slicing creates a new matrix header, but does not copy the data.
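If you need the sub-image to own its own pixel data (for example because img will be modified or released later), a hedged sketch is to append clone():

// clone() forces a deep copy, so subCopy no longer shares data with img
cv::Mat subCopy = img(cv::Range(0, 100), cv::Range(0, 100)).clone();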
Maybe another approach could be:
//Create the rectangle
cv::Rect roi(10, 20, 100, 50);
//Create the cv::Mat with the ROI you need, where "image" is the cv::Mat you want to extract the ROI from
cv::Mat image_roi = image(roi);
I hope this can help.