Trouble creating grayscale image from array of pixel intensities with ImageMagick - C++

I'm trying to output a grayscale PNG image from an array of intensity values using ImageMagick.
I've used the Image constructor to do this, but the image it creates does not match the given array.
Image grayscaleImage(256, 256, "I", DoublePixel, inputPtr);
grayscaleImage.write("test.png");
The image that's being created has the correct values for all of the black pixels (intensity of 0) but for the non-zero pixels, I'm getting only white; no gray. How can I correct this issue? Or am I using the constructor incorrectly? Thank you!

As emcconville stated, the numbers in the array of integers need to be scaled to be between 0.0 and 1.0 for Magick::DoublePixel. I achieved this by using the following function:
f(x) = (b - a) * (x - min) / (max - min) + a
Where a == 0, b == 1, x == inputPtr[index], min == 0, and max == 255.
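For illustration, here is a minimal sketch of that fix. The input buffer is hypothetical (256 x 256 intensities in 0-255); with a = 0, b = 1, min = 0, max = 255, the rescaling reduces to x / 255. The constructor call mirrors the one in the question.

#include <Magick++.h>
#include <vector>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(*argv);

    // Hypothetical input: 256 x 256 intensities in [0, 255]
    std::vector<unsigned char> input(256 * 256, 128);

    // Rescale each value into [0.0, 1.0], as DoublePixel expects:
    // f(x) = (b - a) * (x - min) / (max - min) + a, here just x / 255
    std::vector<double> scaled(input.size());
    for (std::size_t i = 0; i < input.size(); ++i)
        scaled[i] = static_cast<double>(input[i]) / 255.0;

    Magick::Image grayscaleImage(256, 256, "I", Magick::DoublePixel, scaled.data());
    grayscaleImage.write("test.png");
    return 0;
}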

Related

Getting mean of an image using a mask

I have a series of concentric rectangles and wish to obtain the mean of the outer rectangle excluding the inner rectangle. See the attached diagram: I need to get the mean for the shaded area.
So I am using a mask of the inner rectangle to pass into the cv2.mean method, but I am not sure how to set the mask. I have the following code:
for i in xrange(0, len(wins) - 2, 1):
    means_1 = cv2.mean(wins[i])[0]
    msk = cv2.bitwise_and(np.ones_like((wins[i+1]), np.uint8), np.zeros_like((wins[i]), np.uint8))
    means_2 = cv2.mean(wins[i+1], mask=msk)
    means_3 = cv2.mean(wins[i+1])[0]
    print means_1, means_2, means_3
I get this error for means_2 (means_3 works fine):
error: /Users/jenkins/miniconda/0/2.7/conda-bld/work/opencv-2.4.11/modules/core/src/arithm.cpp:1021: error: (-209) The operation is neither 'array op array' (where arrays have the same size and type), nor 'array op scalar', nor 'scalar op array' in function binary_op
The mask here refers to a binary mask which has 0 as background and 255 as foreground. So you need to create an empty mask with default color 0 and then paint the region of interest where you want to find the mean with 255. Suppose I have a 512 x 512 input image.
Let's assume two concentric rectangles:
outer_rect = [100, 100, 400, 400] # x1, y1, x2, y2
inner_rect = [200, 200, 300, 300]
Now create the binary mask using these rectangles as:
mask = np.zeros(image.shape[:2], dtype=np.uint8)
cv2.rectangle(mask, (outer_rect[0], outer_rect[1]), (outer_rect[2], outer_rect[3]), 255, -1)
cv2.rectangle(mask, (inner_rect[0], inner_rect[1]), (inner_rect[2], inner_rect[3]), 0, -1)
Now you may call cv2.mean() to get the mean of the foreground area (labelled 255):
lena_mean = cv2.mean(image, mask)
>>> (109.98813432835821, 96.60768656716418, 173.57567164179105, 0.0)
In Python/OpenCV or any software, if you have a masked image and the binary mask, then the mean of the non-black pixels in the image (i.e. the ROI) is the mean of the masked image divided by the mean of the mask. Since the mask's foreground value is 255 rather than 1, the ratio gets multiplied by 255.
Input:
Mask:
import cv2
import numpy as np
# load image
img = cv2.imread('lena_g.png', cv2.IMREAD_GRAYSCALE)
# load mask
mask = cv2.imread('lena_mask.png', cv2.IMREAD_GRAYSCALE)
# compute means
mean_img = np.mean(img)
mean_mask = np.mean(mask)
# compute 255*mean_img/mean_mask
mean_roi = 255 * mean_img / mean_mask
# print mean of each
print("mean of image:", mean_img)
print("mean of mask:", mean_mask)
print("mean of roi:", mean_roi)
mean of image: 98.50196838378906
mean of mask: 216.090087890625
mean of roi: 116.23856597522328

How can I take the average of 100 images using OpenCV?

I have 100 images, each 598 x 598 pixels, and I want to remove the noise by taking the average of the pixels. But if I add them pixel by pixel and then divide, I'd need a loop of 598 * 598 iterations for one image, and 598 * 598 * 100 for the hundred images.
Is there a method to help me with this operation?
You need to loop over each image and accumulate the results. Since this is likely to cause overflow, you can convert each image to a CV_64FC3 image and accumulate on a CV_64FC3 image. You can also use CV_32FC3 or CV_32SC3 for this, i.e. float or integer instead of double.
Once you have accumulated all values, you can use convertTo to both:
make the image a CV_8UC3
divide each value by the number of images, to get the actual mean.
This is a sample code that creates 100 random images, then computes and shows the mean:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;
using std::vector;

Mat3b getMean(const vector<Mat3b>& images)
{
    if (images.empty()) return Mat3b();

    // Create a 0-initialized image to use as accumulator
    Mat m(images[0].rows, images[0].cols, CV_64FC3);
    m.setTo(Scalar(0, 0, 0, 0));

    // Use a temp image to hold the conversion of each input image to CV_64FC3.
    // This will be allocated just the first time, since all your images have
    // the same size.
    Mat temp;
    for (size_t i = 0; i < images.size(); ++i)
    {
        // Convert the input images to CV_64FC3 ...
        images[i].convertTo(temp, CV_64FC3);

        // ... so you can accumulate
        m += temp;
    }

    // Convert back to CV_8UC3 type, applying the division to get the actual mean
    m.convertTo(m, CV_8U, 1. / images.size());
    return m;
}

int main()
{
    // Create a vector of 100 random images
    vector<Mat3b> images;
    for (int i = 0; i < 100; ++i)
    {
        Mat3b img(598, 598);
        randu(img, Scalar(0), Scalar(256));
        images.push_back(img);
    }

    // Compute the mean
    Mat3b meanImage = getMean(images);

    // Show result
    imshow("Mean image", meanImage);
    waitKey();
    return 0;
}
Suppose that the images will not need to undergo transformations (gamma, color space, or alignment). The numpy package lets you do this quickly and succinctly.
# List of images, all must be the same size and data type.
images=[img0, img1, ...]
avg_img = np.mean(images, axis=0)
This will auto-promote the elements to float. If you want the result as BGR888, then:
avg_img = avg_img.astype(np.uint8)
You could also use uint16 for 16 bits per channel. If you are dealing with 8 bits per channel, you almost certainly won't need 100 images.
First, convert the images to floats. You have N = 100 images; a single image is itself the array of average pixel values of one image, and you need to calculate the array of average pixel values of all N images.
Let A be the array of average pixel values of X images and B the array of average pixel values of Y images. Then C = (A * X + B * Y) / (X + Y) is the array of average pixel values of X + Y images. To get better accuracy in floating-point operations, X and Y should be approximately equal.
You can merge all your images like subarrays in merge sort, where the merge operation is C = (A * X + B * Y) / (X + Y); see the sketch below.
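For illustration, a minimal sketch of this merge-sort-style averaging in OpenCV C++. The function names and the random test images are hypothetical; it assumes all images share the same size and type, and the recursive halving keeps X and Y roughly equal at each merge.

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// C = (A * X + B * Y) / (X + Y): merge two running averages A (of X images)
// and B (of Y images) into the average of all X + Y images.
static Mat mergeAverages(const Mat& A, int X, const Mat& B, int Y)
{
    return (A * X + B * Y) / (X + Y);
}

// Average images[lo, hi) by splitting in half, merge-sort style, so both
// sides of each merge cover roughly the same number of images.
static Mat averageRange(const std::vector<Mat>& images, int lo, int hi)
{
    if (hi - lo == 1)
    {
        Mat f;
        images[lo].convertTo(f, CV_64FC3);  // convert to float first
        return f;
    }
    int mid = lo + (hi - lo) / 2;
    return mergeAverages(averageRange(images, lo, mid), mid - lo,
                         averageRange(images, mid, hi), hi - mid);
}

int main()
{
    // Hypothetical input: a few random images of the same size
    std::vector<Mat> images;
    for (int i = 0; i < 4; ++i)
    {
        Mat img(598, 598, CV_8UC3);
        randu(img, Scalar::all(0), Scalar::all(256));
        images.push_back(img);
    }

    Mat mean64f = averageRange(images, 0, static_cast<int>(images.size()));
    Mat mean8u;
    mean64f.convertTo(mean8u, CV_8UC3);  // back to 8-bit for display/saving
    imshow("mean", mean8u);
    waitKey();
    return 0;
}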

OpenCV double mat shows up as all white

I have a type 6 (double-valued, single channel) mat with data ranging from 0 to 255. I can print out the data using the following code:
double* data = result.ptr<double>();
for (int i = 0; i < rows; i++)
    for (int j = 0; j < cols; j++)
        std::cout << data[i * step + j] << "\t";
This output looks perfectly normal: the values are in the range from 0 to 255 and the matrix is the size I'd expect. However, when I try to show the image:
imshow(window_name, result);
waitKey();
I just get a white image. Just white pixels. Nothing else.
Loading other images from files and displaying in the window works fine.
Using Windows 7, OpenCV 2.3.3.
cv::imshow works in the following ways:
If the image is 8-bit unsigned, it is displayed as is.
If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0, 255*256] is mapped to [0, 255].
If the image is 32-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0, 1] is mapped to [0, 255].
Your matrix lies in the third category, where imshow expects the values to be between 0 and 1 and so multiplies them by 255. Since your values are already between 0 and 255, you get an unwanted result. So normalizing the pixels to between 0 and 1 will work.
You need to normalize your floating-point image so that the values are between 0.0 and 1.0 if you're using imshow. I bet your values are over 1.0, and thus those pixels are all set to 255, giving you the white image.
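A minimal sketch of that normalization, assuming a CV_64FC1 matrix with values in [0, 255]; the random fill just stands in for the asker's data.

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // Stand-in for the asker's double-valued matrix with values in [0, 255]
    Mat result(256, 256, CV_64FC1);
    randu(result, Scalar(0), Scalar(256));

    // imshow maps floating-point [0, 1] to [0, 255], so rescale first
    Mat display;
    result.convertTo(display, CV_64F, 1.0 / 255.0);

    // Alternatively, stretch whatever range is actually present to [0, 1]:
    // normalize(result, display, 0.0, 1.0, NORM_MINMAX);

    imshow("result", display);
    waitKey();
    return 0;
}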

Taking the integral image of a Color Distance result (logical error)

I'm debugging a robotics project and have found an error which I'm not quite sure how to fix theoretically.
I must calculate a color distance map, and following this I must take the integral image of the result and do some calculation with it.
Using the A and B channels from a Lab colorspace image, I obtain the color distance to an example color, red (pA = 255, pB = 127), using the formula sqrt((A - pA)^2 + (B - pB)^2):
subtract(mA, Scalar(pA), tA);
subtract(mB, Scalar(pB), tB);
tA.convertTo(t32A, CV_32SC1);
tB.convertTo(t32B, CV_32SC1);
pow(t32A, 2.0, powA);
pow(t32B, 2.0, powB);
add(powA, powB, sq);
pow(sq, 0.5, res);
//res.convertTo(result, CV_8UC1);
I needed the conversion to CV_32S because of the limitation of CV_8U: it can't hold values above 255.
Now I must feed the result into the integral image computation, which expects an image of CV_8UC1.
The problem I'm facing is that the aforementioned color distance function might produce pixels with values above 255.
For example, the distance from (0, 0) to red (255, 127):
sqrt((0 - 255)^2 + (0 - 127)^2) ≈ 285
or from (0, 255) to red (255, 127):
sqrt((0 - 255)^2 + (255 - 127)^2) ≈ 285
Does anybody have any suggestions for how I can feed the result into the integral image without any loss of information?
Thank you
How about using sqrt(2) as a normalization factor? Each channel difference is at most 255, so the distance is at most 255 * sqrt(2); dividing by sqrt(2) keeps every value within the 0-255 range of CV_8UC1.
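A minimal sketch of that normalization, under the assumption that dividing by sqrt(2) before the CV_8UC1 conversion is acceptable for the application; the random channels stand in for the real Lab data.

#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

int main()
{
    // Stand-ins for the A and B channels of the Lab image
    Mat mA(256, 256, CV_8UC1), mB(256, 256, CV_8UC1);
    randu(mA, Scalar(0), Scalar(256));
    randu(mB, Scalar(0), Scalar(256));

    const double pA = 255.0, pB = 127.0;  // reference color (red)

    // Color distance sqrt((A - pA)^2 + (B - pB)^2) in floating point
    Mat fA, fB, dist;
    mA.convertTo(fA, CV_32F);
    mB.convertTo(fB, CV_32F);
    sqrt((fA - pA).mul(fA - pA) + (fB - pB).mul(fB - pB), dist);

    // Each channel difference is at most 255, so the distance is at most
    // 255 * sqrt(2); dividing by sqrt(2) keeps every value within 0-255.
    Mat dist8u;
    dist.convertTo(dist8u, CV_8UC1, 1.0 / std::sqrt(2.0));

    Mat integralImg;
    integral(dist8u, integralImg);  // sums accumulate in CV_32S by default
    return 0;
}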

Converting byte number to grayscale and vica versa

I have a number between 0 and 255 and need to convert it to an RGB grayscale color. And how do I convert an RGB color back to a grayscale value between 0 and 255?
The common formula is luminosity = 0.30 * red + 0.59 * green + 0.11 * blue. It matches the human eye's color perception but doesn't otherwise correct for display device gamma.
If you have a number 0 <= x <= 255 representing a grayscale value, the corresponding RGB tuple is simply (x,x,x).
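A minimal sketch of both directions, assuming 8-bit channels; the struct and function names are just for illustration.

#include <cstdint>
#include <cstdio>

struct RGB { std::uint8_t r, g, b; };

// Grayscale -> RGB: replicate the value across all three channels
static RGB grayToRgb(std::uint8_t x) { return { x, x, x }; }

// RGB -> grayscale using the luminosity weights given above
static std::uint8_t rgbToGray(RGB c)
{
    double y = 0.30 * c.r + 0.59 * c.g + 0.11 * c.b;
    return static_cast<std::uint8_t>(y + 0.5);  // round to nearest
}

int main()
{
    RGB g = grayToRgb(100);
    std::printf("(%u, %u, %u) -> %u\n",
                (unsigned)g.r, (unsigned)g.g, (unsigned)g.b,
                (unsigned)rgbToGray(g));  // prints (100, 100, 100) -> 100
    return 0;
}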