OpenCV: Am I declaring the matrix correctly? (C++)

I want to change:
int q[10];
double weight[10];
for ( int i = 0; i < 10; i++ )
{
    q[i]++;
    weight[i] = 10;
}
into cv::Mat form, so I did it like this:
cv::Mat q = cv::Mat ( 1, 10, CV_8UC3 );
cv::Mat w = cv::Mat ( 1, 10, CV_8UC3 );
for ( int i = 0; i < 10; i++ )
{
    uchar* p = q.ptr ( i );
    *p += 1;
}
w.setTo ( 10 );
The code compiles without error, but since I don't have any reference output to judge the result against, I suspect there may be mistakes in my changes. Or am I doing everything right here? Thank you.

Your CV_8UC3 type describes 3-channel 8-bit data, which matches neither int nor double.
int q[10] will be changed to cv::Mat q = cv::Mat(1, 10, CV_32SC1);
double weight[10] will be changed to cv::Mat w = cv::Mat(1, 10, CV_64FC1);
You can access the raw pointers as:
int* qPtr = q.ptr<int>();
double* wPtr = w.ptr<double>();
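Putting it together, a minimal sketch of the full conversion (assuming the intent of the original loop is to increment every element of q and set every element of weight to 10):
cv::Mat q = cv::Mat::zeros( 1, 10, CV_32SC1 );  // int q[10], zero-initialised
cv::Mat w( 1, 10, CV_64FC1 );                   // double weight[10]
int* qPtr = q.ptr<int>();                       // row 0; the Mat has a single row
for ( int i = 0; i < 10; i++ )
    qPtr[i]++;                                  // q[i]++ in the array version
w.setTo( 10 );                                  // weight[i] = 10 for every i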

Related

From float array to Mat, concatenate blocks of image

I have an image 800x800 which is broken down into 16 blocks of 200x200.
(you can see the previous post here)
These blocks are: vector<Mat> subImages;
I want to use float pointers on them, so I am doing:
float *pdata = (float*)( subImages[ idxSubImage ].data );
1) Now, I want to be able to get the same images/blocks again, going from the float array back to Mat data.
int Idx = 0;
pdata = (float*)( subImages[ Idx ].data );
namedWindow( "Display window", WINDOW_AUTOSIZE );
for( int i = 0; i < OriginalImgSize.height - 4; i += 200 )
{
    for( int j = 0; j < OriginalImgSize.width - 4; j += 200, Idx++ )
    {
        Mat mf( i, j, CV_32F, pdata + 200 );
        imshow( "Display window", mf );
        waitKey(0);
    }
}
The problem is that I am receiving an
OpenCV Error: Assertion failed
in imshow.
2) How can I recombine all the blocks to obtain the original 800x800 image?
I tried something like:
int Idx = 0;
pdata = (float*)( subImages[ Idx ].data );
Mat big( 800, 800, CV_32F );
for( int i = 0; i < OriginalImgSize.height - 4; i += 200 )
{
    for( int j = 0; j < OriginalImgSize.width - 4; j += 200, Idx++ )
    {
        Mat mf( i, j, CV_32F, pdata + 200 );
        Rect roi( j, i, 200, 200 );
        mf.copyTo( big(roi) );
    }
}
imwrite( "testing", big );
This gives me:
OpenCV Error: Assertion failed (!fixedSize()) in release
in mf.copyTo( big(roi) );
First, you need to know where your subimages are located within the big image. To do this, you can save the rect of each subimage in a vector<Rect> smallImageRois;
Then you can use pointers (keep in mind that the subimages are not stored contiguously in memory), or simply use copyTo to the correct place:
Have a look:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace std;
using namespace cv;

int main()
{
    Mat3b img = imread("path_to_image");
    resize(img, img, Size(800, 800));

    Mat grayImg;
    cvtColor(img, grayImg, COLOR_BGR2GRAY);
    grayImg.convertTo(grayImg, CV_32F);

    int N = 4;
    if (((grayImg.rows % N) != 0) || ((grayImg.cols % N) != 0))
    {
        // Error: image dimensions must be divisible by N
        return -1;
    }

    Size graySize = grayImg.size();
    Size smallSize(grayImg.cols / N, grayImg.rows / N);

    // Split the image into N x N blocks, remembering where each one came from
    vector<Mat> smallImages;
    vector<Rect> smallImageRois;
    for (int i = 0; i < graySize.height; i += smallSize.height)
    {
        for (int j = 0; j < graySize.width; j += smallSize.width)
        {
            Rect rect = Rect(j, i, smallSize.width, smallSize.height);
            smallImages.push_back(grayImg(rect));
            smallImageRois.push_back(rect);
        }
    }

    // Option 1. Using pointers to the subimage data (honouring each step)
    Mat big1(800, 800, CV_32F);
    size_t big1step = big1.step1();
    float* pbig1 = big1.ptr<float>(0);
    for (size_t idx = 0; idx < smallImages.size(); ++idx)
    {
        float* pdata = (float*)smallImages[idx].data;
        size_t step = smallImages[idx].step1();
        Rect roi = smallImageRois[idx];
        for (int i = 0; i < smallSize.height; ++i)
        {
            for (int j = 0; j < smallSize.width; ++j)
            {
                pbig1[(roi.y + i) * big1step + (roi.x + j)] = pdata[i * step + j];
            }
        }
    }

    // Option 2. Using copyTo
    Mat big2(800, 800, CV_32F);
    for (size_t idx = 0; idx < smallImages.size(); ++idx)
    {
        smallImages[idx].copyTo(big2(smallImageRois[idx]));
    }

    return 0;
}
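Of the two options, copyTo (Option 2) is usually the safer choice, since it handles the non-contiguous step bookkeeping for you; the raw-pointer version (Option 1) is only worth the extra care if this copy turns out to be a measured hotspot.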
For concatenating the sub-images into a single square image, you can use the following function:
// Important: all patches should have exactly the same size
Mat concatPatches(vector<Mat> &patches) {
    assert(patches.size() > 0);

    // make it square
    const int patch_width = patches[0].cols;
    const int patch_height = patches[0].rows;
    const int patch_stride = (int)ceil(sqrt((double)patches.size()));

    Mat image = Mat::zeros(patch_stride * patch_height, patch_stride * patch_width, patches[0].type());
    for (size_t i = 0, iend = patches.size(); i < iend; i++) {
        Mat &patch = patches[i];
        const int offset_x = (i % patch_stride) * patch_width;
        const int offset_y = (i / patch_stride) * patch_height;
        // copy the patch to the output image
        patch.copyTo(image(Rect(offset_x, offset_y, patch_width, patch_height)));
    }
    return image;
}
It takes a vector of sub-images (or patches, as I refer to them) and concatenates them into a square image. Example usage:
vector<Mat> patches;
vector<Scalar> colours = { Scalar(255, 0, 0), Scalar(0, 255, 0), Scalar(0, 0, 255) };
// fill the vector with circles of different colours
for (int i = 0; i < 16; i++) {
    Mat patch = Mat::zeros(100, 100, CV_32FC3);
    circle(patch, Point(50, 50), 40, colours[i % 3], -1);
    patches.push_back(patch);
}
Mat img = concatPatches(patches);
imshow("img", img);
waitKey();
This will produce the following image:
Print the values of i and j before creating Mat mf and I believe you will soon find the error.
Hint 1: i and j will be 0 the first time through the loops, so Mat mf( i, j, ... ) creates an empty matrix.
Hint 2: Use copyTo() with a ROI, like:
cv::Rect roi(0,0,200,200);
src.copyTo(dst(roi));
Edit:
Hint 3: Try to avoid such pointer fiddling, or you will get into trouble. Especially if you're ignoring the step (as you seem to do).
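Building on these hints, here is a minimal corrected sketch of the display loop (my sketch, not the original answer): build each Mat header with the proper rows, cols and step, since the blocks are views into the original image and therefore not continuous:
for( size_t idx = 0; idx < subImages.size(); ++idx )
{
    float* pdata = (float*)( subImages[ idx ].data );
    size_t step = subImages[ idx ].step;          // row step in bytes
    Mat mf( 200, 200, CV_32F, pdata, step );      // rows, cols, type, data, step
    imshow( "Display window", mf );
    waitKey(0);
}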

Three images are displayed in the output window instead of one

Hi everyone, I tried using k-means clustering to group the objects, so that I can use this clustering method to detect objects. I get output, but there are two problems: it is too slow (how can I solve this?), and the output window shows three images instead of one, as shown in the link below. I don't know where exactly the error lies.
http://tinypic.com/view.php?pic=30bd7dc&s=8#.VgkSIPmqqko
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
Mat src = imread( "Light.jpg", 0 );
// imshow("fff",src);
// cvtColor(src,src,COLOR_BGR2GRAY);
Mat dst;
// pyrDown(src,src,Size( src.cols/2, src.rows/2 ),4);
// src=dst;
resize(src,src,Size(128,128),0,0,1);
Mat samples(src.rows * src.cols, 3, CV_32F);
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
// for( int z = 0; z < 3; z++)
samples.at<float>(y + x*src.rows) = src.at<uchar>(y,x);
cout<<"aaa"<<endl;
int clusterCount = 15;
Mat labels;
int attempts = 2;
Mat centers;
cout<<"aaa"<<endl;
kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );
Mat new_image( src.size(), src.type() );
cout<<"aaa"<<endl;
for( int y = 0; y < src.rows; y++ )
for( int x = 0; x < src.cols; x++ )
{
int cluster_idx = labels.at<int>(y + x*src.rows,0);
new_image.at<uchar>(y,x) = centers.at<float>(cluster_idx,0);
//new_image.at<Vec3b>(y,x)[1] = centers.at<float>(cluster_idx, 1);
// new_image.at<Vec3b>(y,x)[2] = centers.at<float>(cluster_idx, 2);
}
imshow( "clustered image", new_image );
waitKey( 0 );
}
In your initial code you have to change the intermediate Mat samples from 3 columns to 1 column, since you are working with grayscale images.
In addition, changing the indexing to match the row-major memory layout might make it faster (changed to (y*src.cols + x, 0) in both places):
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <ctime>
#include <iostream>
using namespace cv;
using namespace std;

int main( )
{
    clock_t start = clock();
    Mat src = imread( "Light.jpg", 0 );
    Mat dst;
    resize(src, src, Size(128,128), 0, 0, 1);

    // one row per pixel, one column (grayscale), float as required by kmeans
    Mat samples(src.rows * src.cols, 1, CV_32F);
    for( int y = 0; y < src.rows; y++ )
        for( int x = 0; x < src.cols; x++ )
            samples.at<float>(y*src.cols + x, 0) = src.at<uchar>(y,x);

    int clusterCount = 15;
    Mat labels;
    int attempts = 2;
    Mat centers;
    kmeans(samples, clusterCount, labels, TermCriteria(CV_TERMCRIT_ITER|CV_TERMCRIT_EPS, 10000, 0.0001), attempts, KMEANS_PP_CENTERS, centers );

    // paint each pixel with the center value of its cluster
    Mat new_image( src.size(), src.type() );
    for( int y = 0; y < src.rows; y++ )
        for( int x = 0; x < src.cols; x++ )
        {
            int cluster_idx = labels.at<int>(y*src.cols + x, 0);
            new_image.at<uchar>(y,x) = centers.at<float>(cluster_idx, 0);
        }

    imshow( "clustered image", new_image );
    clock_t end = clock();
    std::cout << "time: " << (end - start)/(float)CLOCKS_PER_SEC << std::endl;
    waitKey( 0 );
}
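As a further speed-oriented sketch (my addition, not part of the original answer): the per-pixel copy loop can be replaced by reshape plus convertTo, which builds the same (rows*cols) x 1 sample matrix in a single pass:
Mat samples;
src.reshape( 1, src.rows * src.cols ).convertTo( samples, CV_32F );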

How can I get a back projection matrix containing values over 255, or decimals, from calcBackProject?

I need to use calcBackProject and then display the exact numbers.
for ( int i = 0; i < backProj.rows; ++i )
{
    for ( int j = 0; j < backProj.cols; ++j )
    {
        cout << int(backProj.at< uchar >( i, j )) << " ";
    }
    cout << endl;
}
But its maximum value is 255 because of the "uchar" type.
I tried to use
Mat backProj( slid_.rows, slid_.cols, CV_64FC1 );
and, after calling calcBackProject, to display it with
cout << backProj.at< double >( i, j );
but it does not work.
I really need the exact numbers, which can be bigger than 255, and I don't want to normalize first. Can I get them from calcBackProject?
If I scale it down instead, can the back projection matrix contain decimals? I don't want any value in the matrix to end up as 0.
Thank you.
In the end, I wrote my own function to compute the back projection. I hope it helps anyone with the same problem.
// mat, bins, hist and backProj are class members: mat is a CV_8UC4 image
// and hist is a (bins x 1) CV_32F histogram built over mat's alpha channel.
float ReadValueFromHist( const Mat& hist, const int x, const int y ) const
{
    // map the 8-bit alpha value at (x, y) to its histogram bin
    int indexAlpha = int( mat.at< Vec4b >( x, y )[ 3 ] ) * bins / 256;
    return hist.at< float >( indexAlpha, 0 );
}

void CalcBackProj()
{
    // float back projection, so values are not capped at 255
    backProj = Mat( mat.rows, mat.cols, CV_32FC1 );
    for ( int i = 0; i < mat.rows; ++i )
    {
        for ( int j = 0; j < mat.cols; ++j )
        {
            backProj.at< float >( i, j ) = ReadValueFromHist( hist, i, j );
        }
    }
}
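A short usage sketch (hypothetical, mirroring the printing loop from the question): after calling CalcBackProj(), read the values back as float instead of uchar:
CalcBackProj();   // fills the CV_32FC1 member backProj
for ( int i = 0; i < backProj.rows; ++i )
{
    for ( int j = 0; j < backProj.cols; ++j )
    {
        cout << backProj.at< float >( i, j ) << " ";   // exact values, no 255 cap
    }
    cout << endl;
}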

OpenCV: am I exceeding the matrix size?

I have the following code:
cv::Mat data ( HEIGHT, WIDTH, CV_32SC1 );
cv::Mat means = cv::Mat::zeros ( HEIGHT, WIDTH, CV_64FC1 );
int *dPtr = new int [HEIGHT*WIDTH];
dPtr = data.ptr<int>();
double *mPtr = new double [HEIGHT*WIDTH];
mPtr = means.ptr<double>();
for ( int i = 0; i < N; i++ )
{
    for ( int j = 0; j < M; j++ )
    {
        mPtr[ WIDTH * (i-1) + j ] += dPtr[ WIDTH * (i-1) + j ];
    }
}
But the program crashes inside the for loop, and I suspect I am somehow exceeding the matrix size, but I cannot figure it out. Could someone help me? Thank you in advance.
Since your indices i, j start at 0, the first iteration accesses mPtr[ WIDTH * (0-1) + j ], i.e. memory before the start of the buffer, so you should omit the -1 in the (i-1) array expressions. As a side note, the new int[...] and new double[...] allocations are leaked the moment you overwrite the pointers with data.ptr<int>() and means.ptr<double>(); the Mats already own their memory, so the new calls can be dropped.
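A minimal corrected sketch, assuming N <= HEIGHT and M <= WIDTH as the loop bounds suggest:
cv::Mat data ( HEIGHT, WIDTH, CV_32SC1 );
cv::Mat means = cv::Mat::zeros ( HEIGHT, WIDTH, CV_64FC1 );
int* dPtr = data.ptr<int>();        // no new[] needed; the Mats own their memory
double* mPtr = means.ptr<double>();
for ( int i = 0; i < N; i++ )
{
    for ( int j = 0; j < M; j++ )
    {
        mPtr[ WIDTH * i + j ] += dPtr[ WIDTH * i + j ];  // indices start at 0, so no -1
    }
}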

Opencv color mapping with direct pixel access

I have a grayscale image that I want to display in color by mapping the grayscale values through a color palette (like colormap in Matlab).
I managed to do it using the OpenCV cvSet2D function, but I would like to access the pixels directly for performance reasons.
But when I do that, the image has strange colors. I tried setting the colors in different orders (RGB, BGR, …) but can’t seem to get around it.
Here is my code:
IplImage* temp = cvCreateImage( cvSize(img->width/scale, img->height/scale), IPL_DEPTH_8U, 3 );
for (int y=0; y<temp->height; y++)
{
    uchar* ptr1 = (uchar*) ( temp->imageData + y * temp->widthStep );
    uchar* ptr2 = (uchar*) ( img->imageData + y * img->widthStep );
    for (int x=0; x<temp->width; x++)
    {
        CvScalar v1;
        int intensity = (int)ptr2[x];
        int b=0, g=0, r=0;
        r = colormap[intensity][0];
        g = colormap[intensity][1];
        b = colormap[intensity][2];
        if (true)
        {
            ptr1[3*x] = b;
            ptr1[3*x+1] = g;
            ptr1[3*x+2] = r;
        }
        else
        {
            v1.val[0] = r;
            v1.val[1] = g;
            v1.val[2] = b;
            cvSet2D(temp, y, x, v1);
        }
    }
}
Change the if (true) to if (false) to switch between the two pixel-access methods.
The correct result, with cvSet2D:
The wrong result, with direct memory access:
Thank you for your help
I have done something similar for coloring depth maps from the Microsoft Kinect sensor. The code I used for converting a grayscale depth map into a color image will work for what you are trying to do. You may need slight modifications, as in my case the depth values were in the range 500 to 2000 and I had to rescale them.
The function for converting a grayscale image into a colour image is:
// Maps each gray value to a hue, then converts HSV (with S = V = 1) to BGR.
void colorizeDepth( const Mat& gray, Mat& rgb )
{
    double maxDisp = 255;
    float S = 1.f;
    float V = 1.f;

    rgb.create( gray.size(), CV_8UC3 );
    rgb = Scalar::all(0);
    if( maxDisp < 1 )
        return;

    for( int y = 0; y < gray.rows; y++ )
    {
        for( int x = 0; x < gray.cols; x++ )
        {
            uchar d = gray.at<uchar>(y,x);
            unsigned int H = 255 - ((uchar)maxDisp - d) * 280 / (uchar)maxDisp;
            unsigned int hi = (H/60) % 6;

            float f = H/60.f - H/60;
            float p = V * (1 - S);
            float q = V * (1 - f * S);
            float t = V * (1 - (1 - f) * S);

            Point3f res;
            if( hi == 0 ) // R = V, G = t, B = p
                res = Point3f( p, t, V );
            if( hi == 1 ) // R = q, G = V, B = p
                res = Point3f( p, V, q );
            if( hi == 2 ) // R = p, G = V, B = t
                res = Point3f( t, V, p );
            if( hi == 3 ) // R = p, G = q, B = V
                res = Point3f( V, q, p );
            if( hi == 4 ) // R = t, G = p, B = V
                res = Point3f( V, p, t );
            if( hi == 5 ) // R = V, G = p, B = q
                res = Point3f( q, p, V );

            uchar b = (uchar)(std::max(0.f, std::min(res.x, 1.f)) * 255.f);
            uchar g = (uchar)(std::max(0.f, std::min(res.y, 1.f)) * 255.f);
            uchar r = (uchar)(std::max(0.f, std::min(res.z, 1.f)) * 255.f);
            rgb.at<Point3_<uchar> >(y,x) = Point3_<uchar>(b, g, r);
        }
    }
}
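A short usage sketch (the file name is a placeholder of mine):
Mat gray = imread( "depth.png", 0 );   // load as 8-bit grayscale
Mat rgb;
colorizeDepth( gray, rgb );
imshow( "colorized", rgb );
waitKey( 0 );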
For an input image that looks like this:
The output of this code is:
I'm posting this to close the question, as asked in the comments on my question.
The answer was:
I found my error... In fact it's RGB, but that wasn't the problem: I had color values at 256 instead of 255... Really sorry... I guess asking the question helped me find the answer.
Thank you