OpenCV Histogram Data from Mat - C++

So I'm trying to get the actual data from the histogram I generated in OpenCV. I'm using the code located here, which can be seen below. However, I don't exactly know how to get the data from this Mat. I saw this post, but that post uses hist.get(i, 0) to get the histogram data, whereas my histogram Mat contains only 1 row... and 16384 cols. The pertinent code is below.
static Mat spatial_histogram(InputArray _src, int numPatterns,
                             int grid_x, int grid_y, bool /*normed*/)
{
    Mat src = _src.getMat();
    // calculate LBP patch size
    int width = src.cols/grid_x;
    int height = src.rows/grid_y;
    // allocate memory for the spatial histogram
    Mat result = Mat::zeros(grid_x * grid_y, numPatterns, CV_32FC1);
    // return matrix with zeros if no data was given
    if(src.empty())
        return result.reshape(1,1);
    // initial result_row
    int resultRowIdx = 0;
    // iterate through grid
    for(int i = 0; i < grid_y; i++) {
        for(int j = 0; j < grid_x; j++) {
            Mat src_cell = Mat(src, Range(i*height,(i+1)*height), Range(j*width,(j+1)*width));
            Mat cell_hist = histc(src_cell, 0, (numPatterns-1), true);
            // copy to the result matrix
            Mat result_row = result.row(resultRowIdx);
            cell_hist.reshape(1,1).convertTo(result_row, CV_32FC1);
            // increase row count in result matrix
            resultRowIdx++;
        }
    }
    // return result as reshaped feature vector
    return result.reshape(1,1);
}
result becomes a Mat of size 1 x 16384 and the values in it are sparse... So how would I get the proper histogram data?
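Since the spatial histogram comes back as a single row, the per-cell histograms are simply laid out end to end. Below is a minimal sketch of reading the bins back out; the grid and pattern counts are assumptions (the common LBPH defaults of an 8x8 grid and 256 patterns give 8 * 8 * 256 = 16384 columns):

// hist is the 1 x 16384 CV_32FC1 Mat returned by spatial_histogram
int gridX = 8, gridY = 8, numPatterns = 256; // assumed values
for (int c = 0; c < hist.cols; c++) {
    float binValue = hist.at<float>(0, c); // row 0, since there is only one row
    if (binValue > 0) {
        int cell    = c / numPatterns; // which grid cell this bin belongs to
        int pattern = c % numPatterns; // which pattern within that cell
        // ... use binValue ...
    }
}
// Alternatively, undo the final reshape to get one row per grid cell:
Mat perCell = hist.reshape(1, gridX * gridY); // (grid_x*grid_y) x numPatterns

Many zero bins are expected: each grid cell only contains a limited set of patterns, which is why the row looks sparse.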


Convolution using FFT gives a bad result

I'm trying to convolve an image using the FFT. I use OpenCV, so images are in Mat containers. I convert the color image to a gray image, then add a second channel for the imaginary numbers that is all zeros. Then I take this 2-channel Mat and convolve it with Prewitt's kernel. I get a result very different from the one I get with the normal convolution algorithm. The left image is the output I get using the FFT and the right image is the output of the normal convolution.
Below is the pseudo-algorithm of how I do the operation:
1. Convert the image Mat and kernel Mat to complex Mats by adding a second channel (resulting Mat type is CV_32FC2)
2. Assign all Mat elements to complex vectors
3. Zero-pad the vectors to the same next power of 2
4. FFT the vectors
5. Signal-multiply both vectors element-wise and assign the result to the result vector
6. Inverse FFT the result vector
7. Convert the result vector to a Mat
I think the FFT algorithm is not the problem, because when I take an image, FFT it, then inverse FFT it, I get the original image back just fine. But I could be wrong. So here is the FFT algorithm; notice that there are two of them. I use the second one. I also tried other FFT algorithms and they all produce the same output. FFT'ing and IFFT'ing the same image only skips the signal multiplication step above, so I think that's where the problem is. Here is the code of the operation:
std::vector<cf> signalMultiplication(std::vector<cf> lh, std::vector<cf> rh) {
    std::vector<cf> imVec = lh, kerVec = rh, resultVec;
    resultVec.resize(imVec.size());
    std::transform(imVec.begin(), imVec.end(), kerVec.begin(), resultVec.begin(), std::multiplies<cf>());
    return resultVec;
}
I tried multiplying them using a for loop, but the result was the same. I don't know where the problem is, and I can't post the whole code here since it is too long, so tell me where you think the problem is and I'll post the code of that part.
@Paul, below is the main body of the code:
cv::Mat convolution2D(cv::Mat image, cv::Mat kernel) {
    cv::Mat imMat, kerMat;
    imMat = convertToComplexMat(image);
    kerMat = convertToComplexMat(kernel);
    std::vector<cf> imVec, kerVec, resultVec;
    imVec = matElementsToVector<cf>(imMat);
    kerVec = matElementsToVector<cf>(kerMat);
    float power = log2f(imVec.size());
    if (abs(power - (int)power) == 0)
        power++;
    else
        power = ceil(power);
    zeroPadding(imVec, power);
    zeroPadding(kerVec, power);
    // FFT code I linked takes valarray as argument so I convert vectors to valarray and back
    std::valarray<cf> imCArr(imVec.data(), imVec.size());
    std::valarray<cf> kerCArr(kerVec.data(), kerVec.size());
    fftRosetta(imCArr);
    fftRosetta(kerCArr);
    imVec.assign(std::begin(imCArr), std::end(imCArr));
    kerVec.assign(std::begin(kerCArr), std::end(kerCArr));
    resultVec = signalMultiplication(imVec, kerVec);
    std::valarray<cf> resCArr(resultVec.data(), resultVec.size());
    ifftRosetta(resCArr);
    resultVec.assign(std::begin(resCArr), std::end(resCArr));
    cv::Mat resultMat;
    resultMat = vectorToMatElementsRowMajor(resultVec, imMat.rows, imMat.cols, imMat.type());
    std::vector<cv::Mat> matVec;
    cv::split(resultMat, matVec);
    return matVec[0];
}
These are the custom functions: convertToComplexMat, matElementsToVector, zeroPadding, fftRosetta, ifftRosetta, signalMultiplication, vectorToMatElementsRowMajor.
signalMultiplication is posted above, and fftRosetta and ifftRosetta are linked, so here are the rest of the functions:
using cf = std::complex<float>;

cv::Mat convertToComplexMat(cv::Mat imageMat) {
    cv::Mat matOper;
    if (imageMat.channels() == 3)
        cv::cvtColor(imageMat, matOper, cv::COLOR_BGR2GRAY);
    else
        matOper = imageMat.clone();
    matOper.convertTo(matOper, CV_32FC1);
    cv::Mat compChannel = cv::Mat::zeros(matOper.rows, matOper.cols, CV_32FC1);
    std::vector<cv::Mat> channels;
    channels.push_back(matOper);
    channels.push_back(compChannel);
    cv::merge(channels, matOper);
    return matOper;
}
template <typename T>
std::vector<T> matElementsToVector(cv::Mat operand) {
    std::vector<T> vecOper;
    int cn = operand.channels();
    for (int i = 0; i < operand.total(); i++) {
        if (cn == 1)
            vecOper.push_back(operand.at<cv::Vec<T, 1>>(i)[0]);
        else if (cn == 2) {
            if (typeid(T) == typeid(cf)) {
                T xd = operand.at<T>(i);
                vecOper.push_back(xd);
            }
            else
                for (int k = 0; k < cn; k++)
                    vecOper.push_back(operand.at<cv::Vec<T, 2>>(i)[k]);
        }
        else if (cn == 3)
            for (int k = 0; k < cn; k++)
                vecOper.push_back(operand.at<cv::Vec<T, 3>>(i)[k]);
    }
    return vecOper;
}
void zeroPadding(std::vector<cf>& a, int power) {
    int p, ioper;
    if (power == -1)
        p = ceil(log2f(a.size()));
    else
        p = power;
    ioper = pow(2, p);
    int size = a.size();
    for (int i = 0; i < ioper - size; i++) {
        a.push_back(0);
    }
}
template <typename T>
cv::Mat vectorToMatElementsRowMajor(std::vector<T> operand, int mrows, int mcols, int mtype) {
    cv::Mat matoper(mrows, mcols, mtype);
    for (int j = 0; j < matoper.total(); j++) {
        matoper.at<T>(j) = operand[j];
    }
    return matoper;
}
@Cris, I tried it again with the OpenCV DFT like you said, following the directions here. I applied the DFT to the image and the kernel, multiplied them element-wise, then applied the IDFT. But the result is something very different now. I can see a resemblance of the original image in there, but there are multiple shadows of it at different angles. I think the problem is how I do the signal multiplication, but I can't find any answers on how to multiply 2D signals. Here is the code; the output image is below it:
cv::Mat convolution2DopenCV(cv::Mat image, cv::Mat kernel) {
    cv::Mat paddedImage, paddedKernel, imgOper, kerOper;
    if (image.channels() == 3)
        cv::cvtColor(image, imgOper, cv::COLOR_BGR2GRAY);
    else
        imgOper = image.clone();
    kerOper = kernel;
    int m = cv::getOptimalDFTSize(imgOper.rows);
    int n = cv::getOptimalDFTSize(imgOper.cols);
    cv::copyMakeBorder(imgOper, paddedImage, 0, m - imgOper.rows, 0, n - imgOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::copyMakeBorder(kerOper, paddedKernel, 0, m - kerOper.rows, 0, n - kerOper.cols, cv::BORDER_CONSTANT, cv::Scalar::all(0));
    cv::Mat planesImage[] = { cv::Mat_<float>(paddedImage), cv::Mat::zeros(paddedImage.size(), CV_32F) };
    cv::Mat cmpImgMat;
    cv::merge(planesImage, 2, cmpImgMat);
    cv::dft(cmpImgMat, cmpImgMat);
    cv::Mat planesKernel[] = { cv::Mat_<float>(paddedKernel), cv::Mat::zeros(paddedKernel.size(), CV_32F) };
    cv::Mat cmpKerMat;
    cv::merge(planesKernel, 2, cmpKerMat);
    cv::dft(cmpKerMat, cmpKerMat);
    cv::Mat resultMat = cmpImgMat.mul(cmpKerMat);
    cv::Mat planes[2];
    cv::idft(resultMat, resultMat);
    cv::split(resultMat, planes);
    cv::normalize(planes[0], planes[0], 0, 255, cv::NORM_MINMAX);
    return planes[0];
}
That's everything; if there is something I'm missing, let me know.
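For reference on the element-wise step: Mat::mul multiplies the two channels independently (real*real and imag*imag), which is not a complex product. OpenCV provides cv::mulSpectrums for per-element complex multiplication of two Fourier spectra; here is a minimal sketch of swapping it into the function above (variable names taken from that function):

// Per-element complex multiplication of the two DFT results.
// flags = 0 means the inputs are plain 2-channel complex Mats.
cv::Mat spectrumProduct;
cv::mulSpectrums(cmpImgMat, cmpKerMat, spectrumProduct, 0);
// Scale during the inverse transform so values come back in the input range.
cv::idft(spectrumProduct, spectrumProduct, cv::DFT_SCALE);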

How to pass an image buffer to an OpenCV Mat object?

I am currently programming with a PixeLINK USB3 machine vision camera along with OpenCV in C++. I have had some success passing camera images in Mat format with the following code:
PXL_RETURN_CODE rc = PxLInitialize(0, &hCamera);
if (!API_SUCCESS(rc))
{
    printf("Error: Unable to initialize a camera. \n");
    return EXIT_FAILURE;
}

vector<U8> frameBuffer(3000 * 3000 * 2);
FRAME_DESC frameDesc;

if (API_SUCCESS(PxLSetStreamState(hCamera, START_STREAM)))
{
    while (true)
    {
        frameDesc.uSize = sizeof(frameDesc);
        rc = GetNextFrame(hCamera, (U32)frameBuffer.size(), &frameBuffer[0],
                          &frameDesc, 5);
        Mat image(2592, 2048, CV_8UC1);
        Mat imageCopy;
        // Where passing of image data occurs
        int k = 0;
        for (int row = 0; row < 2048; row++)
        {
            for (int col = 0; col < 2592; col++)
            {
                image.at<uchar>(row, col) = frameBuffer[k];
                k++;
            }
        }
        ...
As I mentioned, this works, but it seems very sloppy. I have looked online but haven't found much detail.
I have tried:
Mat image(2592, 2048, CV_8UC1, &frameBuffer, size_t step=AUTO_STEP);
as well as:
Mat image(2592, 2048, CV_8UC1, frameBuffer, size_t step=AUTO_STEP);
The former is the only one that compiles successfully, but it displays gibberish - I mean, it doesn't form an image.
Have you tried switching the rows and cols of your Mat?
You initialized your Mat with rows = 2592 and cols = 2048,
but you use row and col the other way around in your for() loop.
I think this code should work properly:
Mat image(2048, 2592, CV_8UC1, &frameBuffer[0]);
Or, if you're using C++11,
Mat image(2048, 2592, CV_8UC1, frameBuffer.data());
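One caveat worth noting: this constructor wraps the existing buffer without copying it, so the resulting Mat is only valid while frameBuffer stays alive and unmodified. A minimal sketch of taking an owned copy before the buffer is overwritten by the next GetNextFrame call:

Mat view(2048, 2592, CV_8UC1, frameBuffer.data()); // wraps the buffer, no copy
Mat image = view.clone(); // deep copy that owns its pixels and outlives the buffer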

Make 32x32 sections on an image in C++ OpenCV?

I want to take a grayscale image and divide it into 32x32 sections. Each section will contain pixels, and based on their intensity and volume, the section would be considered a 1 or a 0.
My thought is that I would name the sections like "(x,y)". For example:
Section (1,1) contains this many pixels within this range of intensity, so this is a 1.
Does that make sense? I tried looking for the answer to this question, but dividing up an image into sections doesn't seem to yield any results in the OpenCV community. Keep in mind I don't want to change the way the image looks, just divide it up into a 32x32 table with (x,y) being a "section" of the picture.
Yes you can do that. Here is the code. It is rough around the edges, but it does what you request. See comments in the code for explanations.
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

struct BradleysImage
{
    int rows;
    int cols;
    cv::Mat data;
    int intensity_threshold;
    int count_threshold;
    cv::Mat buff = cv::Mat(32, 32, CV_8UC1);

    // When we call the operator with arguments y and x, we check
    // the region (y, x). We then count the number of pixels within
    // that region that are greater than some threshold. If the
    // count is greater than the desired number, we return 255, else 0.
    int operator()(int y, int x) const
    {
        int j = y * 32;
        int i = x * 32;
        auto window = cv::Rect(i, j, 32, 32);
        // threshold window contents
        cv::threshold(data(window), buff, intensity_threshold, 1, cv::THRESH_BINARY);
        int num_over_threshold = cv::countNonZero(buff);
        return num_over_threshold > count_threshold ? 255 : 0;
    }
};
int main() {
    // Input image, loaded as grayscale
    cv::Mat img = cv::imread("walken.jpg", cv::IMREAD_GRAYSCALE);
    // I resize it so that I get dimensions divisible
    // by 32 and get a better looking result
    cv::Mat resized;
    cv::resize(img, resized, cv::Size(3200, 3200));

    BradleysImage b; // I had no idea how to name this so I used your nick
    b.rows = resized.rows / 32;
    b.cols = resized.cols / 32;
    b.data = resized;
    b.intensity_threshold = 128; // just some threshold
    b.count_threshold = 512;     // half of the 1024 pixels in a 32x32 cell

    cv::Mat result(b.rows - 1, b.cols - 1, CV_8UC1);
    for(int y = 0; y < result.rows; ++y)
        for(int x = 0; x < result.cols; ++x)
            result.at<uint8_t>(y, x) = b(y, x);

    cv::imwrite("walken.png", result);
    return 0;
}
I used Christopher Walken's image from Wikipedia and obtained this result:
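A side note, not from the answer above: the same block counting can be sketched with plain OpenCV primitives, since thresholding first and then box-averaging each 32x32 block with an INTER_AREA resize gives the fraction of pixels over the intensity threshold (the values below mirror the thresholds used above, up to rounding at the boundary):

// Threshold the full grayscale image first, so every pixel becomes 0 or 255.
cv::Mat bin;
cv::threshold(resized, bin, 128, 255, cv::THRESH_BINARY);
// INTER_AREA averages each 32x32 block: each output pixel is
// 255 * (fraction of block pixels over the intensity threshold).
cv::Mat blocks;
cv::resize(bin, blocks, cv::Size(resized.cols / 32, resized.rows / 32), 0, 0, cv::INTER_AREA);
// Keep blocks where at least half the pixels were over the threshold
// (the equivalent of count_threshold = 512 out of 1024).
cv::threshold(blocks, blocks, 127, 255, cv::THRESH_BINARY);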

OpenCV-2.4.8.2: imshow differs from imwrite

I'm using OpenCV 2.4.8.2 on Mac OS 10.9.5.
I have the following snippet of code:
static void compute_weights(const vector<Mat>& images, vector<Mat>& weights)
{
    weights.clear();
    for (int i = 0; i < images.size(); i++) {
        Mat image = images[i];
        Mat mask = Mat::zeros(image.size(), CV_32F);
        int x_start = (i == 0) ? 0 : image.cols/2;
        int y_start = 0;
        int width = image.cols/2;
        int height = image.rows;
        Mat roi = mask(Rect(x_start, y_start, width, height)); // Set ROI
        roi.setTo(1);
        weights.push_back(mask);
    }
}

static void blend(const vector<Mat>& inputImages, Mat& outputImage)
{
    int maxPyrIndex = 6;
    vector<Mat> weights;
    compute_weights(inputImages, weights);

    // Find the fused pyramid:
    vector<Mat> fused_pyramid;
    for (int i = 0; i < inputImages.size(); i++) {
        Mat image = inputImages[i];
        // Build Gaussian pyramid for weights
        vector<Mat> weight_gaussian_pyramid;
        buildPyramid(weights[i], weight_gaussian_pyramid, maxPyrIndex);
        // Build Laplacian pyramid for the original image
        Mat float_image;
        inputImages[i].convertTo(float_image, CV_32FC3, 1.0/255.0);
        vector<Mat> orig_guassian_pyramid;
        vector<Mat> orig_laplacian_pyramid;
        buildPyramid(float_image, orig_guassian_pyramid, maxPyrIndex);
        for (int j = 0; j < orig_guassian_pyramid.size() - 1; j++) {
            Mat sized_up;
            pyrUp(orig_guassian_pyramid[j+1], sized_up, Size(orig_guassian_pyramid[j].cols, orig_guassian_pyramid[j].rows));
            orig_laplacian_pyramid.push_back(orig_guassian_pyramid[j] - sized_up);
        }
        // Last Laplacian layer is the same as the Gaussian layer
        orig_laplacian_pyramid.push_back(orig_guassian_pyramid[orig_guassian_pyramid.size()-1]);

        // Convolve Laplacian original with Gaussian weights
        vector<Mat> convolved;
        for (int j = 0; j < maxPyrIndex + 1; j++) {
            // Create 3 channels for weight Gaussian pyramid as well
            vector<Mat> gaussian_3d_vec;
            for (int k = 0; k < 3; k++) {
                gaussian_3d_vec.push_back(weight_gaussian_pyramid[j]);
            }
            Mat gaussian_3d;
            merge(gaussian_3d_vec, gaussian_3d);
            //Mat convolved_result = weight_gaussian_pyramid[j].clone();
            Mat convolved_result = gaussian_3d.clone();
            multiply(gaussian_3d, orig_laplacian_pyramid[j], convolved_result);
            convolved.push_back(convolved_result);
        }
        if (i == 0) {
            fused_pyramid = convolved;
        } else {
            for (int j = 0; j < maxPyrIndex + 1; j++) {
                fused_pyramid[j] += convolved[j];
            }
        }
    }

    // Blending
    for (int i = (int)fused_pyramid.size()-1; i > 0; i--) {
        Mat sized_up;
        pyrUp(fused_pyramid[i], sized_up, Size(fused_pyramid[i-1].cols, fused_pyramid[i-1].rows));
        fused_pyramid[i-1] += sized_up;
    }
    Mat final_color_bgr;
    fused_pyramid[0].convertTo(final_color_bgr, CV_32F, 255);
    final_color_bgr.copyTo(outputImage);
    imshow("final", outputImage);
    waitKey(0);
    imwrite(outputImagePath, outputImage);
}
This code is doing some basic pyramid blending for 2 images. The key issue is that the imshow and imwrite calls at the end give me drastically different results. I apologize for posting such long/messy code, but I am afraid the difference is coming from some other part of the code that subsequently affects the imshow and imwrite.
The first image shows the result from imwrite and the second image shows the result from imshow, based on the code given. I'm quite confused about why this is the case.
I also noticed that when I do these:
Mat float_image;
inputImages[i].convertTo(float_image, CV_32FC3, 1.0/255.0);
imshow("float image", float_image);
imshow("orig image", image);
They show exactly the same thing; that is, they both display the same picture as the original RGB image (in image).
IMWRITE functionality
By default, only 8-bit (or 16-bit unsigned, CV_16U, in the case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel (with 'BGR' channel order) images can be saved by imwrite.
So whatever format you feed to imwrite, it blindly converts it into CV_8U with a range of 0 (black) to 255 (white), in BGR order.
IMSHOW - problem
Looking at your function: in fused_pyramid[0].convertTo(final_color_bgr, CV_32F, 255);, fused_pyramid is already of Mat type 21 (floating point, CV_32FC3), and you convert it to floating point with a scale factor of 255. This scaling factor of 255 is what causes the problem in imshow. Instead, to visualize you can feed in fused_pyramid directly without conversion, since it is already scaled as floating point between 0.0 (black) and 1.0 (white).
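Put differently, imshow interprets a float image as 0.0-1.0 while imwrite wants 8-bit 0-255, so converting explicitly before saving keeps the two consistent. A minimal sketch, assuming the blended result is a CV_32FC3 image in the 0.0-1.0 range:

// For display: float images are interpreted as 0.0 (black) to 1.0 (white).
imshow("final", fused_pyramid[0]);
// For saving: convert explicitly to 8-bit, scaling 0.0-1.0 up to 0-255.
Mat out8u;
fused_pyramid[0].convertTo(out8u, CV_8UC3, 255);
imwrite(outputImagePath, out8u);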
Hope it helps.

How to run findContours() on meanShiftSegmentation() output?

I'm trying to rewrite my very slow naive segmentation using floodFill to something faster. I ruled out meanShiftFiltering a year ago because of the difficulty in labelling the colours and then finding their contours.
The current version of OpenCV seems to have a fast new function that labels segments using mean shift: gpu::meanShiftSegmentation(). It produces images like the following:
(example segmentation output; source: ekran.org)
So this looks to me pretty close to being able to generate contours. How can I run findContours to generate segments?
Seems to me this would be done by extracting the labelled colours from the image, and then testing which pixel values in the image match each label colour, to make a boolean image suitable for findContours. This is what I have done in the following (but it's a bit slow, and it strikes me that there should be a better way):
Mat image = imread("test.png");
...
// gpu operations on image resulting in gpuOpen
...
// Mean shift
TermCriteria iterations = TermCriteria(CV_TERMCRIT_ITER, 2, 0);
gpu::meanShiftSegmentation(gpuOpen, segments, 10, 20, 300, iterations);

// convert to greyscale (HSV image)
vector<Mat> channels;
split(segments, channels);

// get labels from histogram of image.
int size = 256;
labels = Mat(256, 1, CV_32SC1);
calcHist(&channels.at(2), 1, 0, Mat(), labels, 1, &size, 0);

// Loop through hist bins
for (int i = 0; i < 256; i++) {
    float count = labels.at<float>(i);
    // Does this bin represent a label in the image?
    if (count > 0) {
        // find areas of the image that match this label and findContours on the result.
        Mat label = Mat(channels.at(2).rows, channels.at(2).cols, CV_8UC1, Scalar::all(i)); // image filled with label colour.
        Mat boolImage = (channels.at(2) == label); // which pixels in labeled image are identical to this label?
        vector<vector<Point>> labelContours;
        findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
        // Loop through contours.
        for (int idx = 0; idx < labelContours.size(); idx++) {
            // get bounds for this contour.
            bounds = boundingRect(labelContours[idx]);
            // create ROI for bounds to extract this region
            Mat patchROI = image(bounds);
            Mat maskROI = boolImage(bounds);
        }
    }
}
Is this the best approach, or is there a better way to get the label colours? It seems like it would be logical for meanShiftSegmentation to provide this information (a vector of colour values, or a vector of masks for each label, etc.).
Thank you.
Following is another way of doing this without throwing away the colour information in the meanShiftSegmentation results. I did not compare the two for performance.
// Loop through the whole image, pixel by pixel, and then use the colour to index an array of bools indicating presence.
vector<Scalar> colours;
vector<Scalar>::iterator colourIter;
vector< vector< vector<bool> > > colourSpace;
vector< vector< vector<bool> > >::iterator colourSpaceBIter;
vector< vector<bool> >::iterator colourSpaceGIter;
vector<bool>::iterator colourSpaceRIter;

// Initialize 3D vector
colourSpace.resize(256);
for (int i = 0; i < 256; i++) {
    colourSpace[i].resize(256);
    for (int j = 0; j < 256; j++) {
        colourSpace[i][j].resize(256);
    }
}

// Loop through pixels in the image (should be fastish, look into LUT for faster)
uchar r, g, b;
for (int i = 0; i < segments.rows; i++)
{
    Vec3b* pixel = segments.ptr<Vec3b>(i); // point to first pixel in row
    for (int j = 0; j < segments.cols; j++)
    {
        b = pixel[j][0];
        g = pixel[j][1];
        r = pixel[j][2];
        colourSpace[b][g][r] = true; // this colour is in the image.
        //cout << "BGR: " << int(b) << " " << int(g) << " " << int(r) << endl;
    }
}

// Get all the unique colours from colourSpace
// loop through colourSpace
int bi = 0;
for (colourSpaceBIter = colourSpace.begin(); colourSpaceBIter != colourSpace.end(); colourSpaceBIter++) {
    int gi = 0;
    for (colourSpaceGIter = colourSpaceBIter->begin(); colourSpaceGIter != colourSpaceBIter->end(); colourSpaceGIter++) {
        int ri = 0;
        for (colourSpaceRIter = colourSpaceGIter->begin(); colourSpaceRIter != colourSpaceGIter->end(); colourSpaceRIter++) {
            if (*colourSpaceRIter)
                colours.push_back( Scalar(bi, gi, ri) );
            ri++;
        }
        gi++;
    }
    bi++;
}
// For each colour
int segmentCount = 0;
for (colourIter = colours.begin(); colourIter != colours.end(); colourIter++) {
    Mat label = Mat(segments.rows, segments.cols, CV_8UC3, *colourIter); // image filled with label colour.
    Mat boolImage = Mat(segments.rows, segments.cols, CV_8UC3);
    inRange(segments, *colourIter, *colourIter, boolImage); // which pixels in labeled image are identical to this label?
    vector<vector<Point> > labelContours;
    findContours(boolImage, labelContours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    // Loop through contours.
    for (int idx = 0; idx < labelContours.size(); idx++) {
        // get bounds for this contour.
        Rect bounds = boundingRect(labelContours[idx]);
        float area = contourArea(labelContours[idx]);
        // Draw this contour on a new blank image
        Mat maskImage = Mat::zeros(boolImage.rows, boolImage.cols, boolImage.type());
        drawContours(maskImage, labelContours, idx, Scalar(255,255,255), CV_FILLED);
        Mat patchROI = frame(bounds); // 'frame' is the original input image ('image' in the first snippet)
        Mat maskROI = maskImage(bounds);
    }
    segmentCount++;
}
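As a hedged alternative, not benchmarked against the version above: the 256^3 boolean cube exists only to collect the unique colours, and a std::set of packed BGR values does the same job in a few lines (requires #include <set>):

// Collect the unique label colours by packing each BGR pixel into one int.
std::set<int> uniqueColours;
for (int i = 0; i < segments.rows; i++) {
    const Vec3b* pixel = segments.ptr<Vec3b>(i);
    for (int j = 0; j < segments.cols; j++)
        uniqueColours.insert((pixel[j][0] << 16) | (pixel[j][1] << 8) | pixel[j][2]);
}
// Unpack back into Scalars for the per-colour loop above.
vector<Scalar> colours;
for (int packed : uniqueColours)
    colours.push_back(Scalar((packed >> 16) & 0xFF,  // b
                             (packed >> 8) & 0xFF,   // g
                             packed & 0xFF));        // r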