I have the following code:
cv::Mat data(HEIGHT, WIDTH, CV_32SC1);
cv::Mat means = cv::Mat::zeros(HEIGHT, WIDTH, CV_64FC1);
int *dPtr = new int[HEIGHT * WIDTH];
dPtr = data.ptr<int>();
double *mPtr = new double[HEIGHT * WIDTH];
mPtr = means.ptr<double>();
for ( int i = 0; i < N; i ++)
{
for ( int j = 0; j < M; j ++ )
{
mPtr[ WIDTH * (i-1) + j ] += dPtr[ WIDTH * (i-1) + j ];
}
}
But the program crashes inside the for loop. I suspect I am somehow exceeding the matrix bounds, but I cannot figure out where. Could someone help me? Thank you in advance.
Since your indices i and j start at 0, you should omit the -1 in the array index expressions (i-1): when i is 0, WIDTH * (i-1) + j is a negative index, which is what crashes the loop.
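A minimal sketch of the corrected loop, assuming N <= HEIGHT and M <= WIDTH. (As an aside, the two new[] buffers in the question are overwritten immediately by the ptr<>() calls and thus leak; the ptr<>() calls alone are enough.)
int *dPtr = data.ptr<int>();        // no new[] needed; the Mat owns its memory
double *mPtr = means.ptr<double>();
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < M; j++)
    {
        mPtr[WIDTH * i + j] += dPtr[WIDTH * i + j];  // index from 0, no -1
    }
}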
I recently asked a question about how to work with the Edit1 element dynamically; now I want to ask about the values I get from dynamic arrays. First I try to divide the image into sectors:
const int n = 20;
unsigned short i, j, line_length, w = Image1->Width, h = Image1->Height, l = Left + Image1->Left, t = Top + Image1->Top;
unsigned short border = (Width-ClientWidth)/2, topborder = Height-ClientHeight-border;
Image1->Canvas->Pen->Color = clRed;
for (i = 0; i <= n; i++)
{
Image1->Canvas->MoveTo(0, 0);
line_length = w * tan(M_PI/2*i/n);
if (line_length <= h)
Image1->Canvas->LineTo(w, line_length);
else
{
line_length = h * tan(M_PI/2*(1-1.*i/n));
Image1->Canvas->LineTo(line_length, h);
}
}
Then I use regions to count the black dots in each sector, and I want to add the values to the Memo element:
HRGN region[n];
TPoint points[3];
points[0] = Point(l + border, t + topborder);
for (i = 0; i < n; i++)
{
for (j = 0; j <= 1; j++)
{
line_length = w * tan(M_PI/2*(i+j)/n);
if (line_length <= h)
points[j+1] = Point(l + border + w, t + topborder + line_length);
else
{
line_length = h * tan(M_PI/2*(1-1.*(i+j)/n));
points[j+1] = Point(l + border + line_length, t + topborder + h);
}
}
region[i] = CreatePolygonRgn(points, 3, ALTERNATE); // or WINDING, whichever you prefer
}
Byte k;
unsigned __int64 point_count[n] = {0}, points_count = 0;
for(j = 0; j < h; j++)
for (i = 0; i < w; i++)
if (Image1->Canvas->Pixels[i][j] == clBlack)
{
points_count++;
for (k = 0; k < n; k++)
if (PtInRegion(region[k], l + border + i, t + topborder + j))
point_count[k]++;
}
unsigned __int64 sum = 0;
for (i = 0; i < n; i++)
{
sum += point_count[i];
Memo1->Lines->Add(point_count[i]);
}
Following advice I received from someone, in order to allocate an array whose length is specified through a TEdit I should use, for example, DynamicArray:
#include <sysdyn.h>
DynamicArray<HRGN> region;
...
int n = Edit1->Text.ToInt();
region.Length = n;
I have made the same changes to the point_count array:
Byte k;
DynamicArray<unsigned __int64> point_count;
point_count.Length = n;
unsigned __int64 /*point_count[n] = {0},*/ points_count = 0;
...
The problem is that I get different values depending on whether I do it dynamically or statically (n = 20).
There is no difference whatsoever in accessing elements of a static array vs a dynamic array. Your problem has to be elsewhere.
For instance, your static code initializes all of the array elements to 0, but your dynamic code does not, so they hold indeterminate values before your loop increments them.
Try this:
DynamicArray<unsigned __int64> point_count;
point_count.Length = n;
for(int i = 0; i < n; ++i) {
point_count[i] = 0;
}
...
Alternatively:
DynamicArray<unsigned __int64> point_count;
point_count.Length = n;
ZeroMemory(&point_count[0], sizeof(unsigned __int64) * n);
...
Also, using the Image1->Canvas->Pixels[][] property is very slow. Consider using the Image1->Picture->Bitmap->ScanLine[] property instead for faster access to the raw pixels.
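A rough sketch of the ScanLine approach, assuming the bitmap is first forced to pf24bit, where each pixel occupies 3 bytes in blue, green, red order:
Graphics::TBitmap *bmp = Image1->Picture->Bitmap;
bmp->PixelFormat = pf24bit;              // 3 bytes per pixel: blue, green, red
for (j = 0; j < h; j++)
{
    Byte *row = (Byte*)bmp->ScanLine[j]; // pointer to the first byte of row j
    for (i = 0; i < w; i++)
    {
        // a pixel is black when all three channels are 0
        if (row[i * 3] == 0 && row[i * 3 + 1] == 0 && row[i * 3 + 2] == 0)
        {
            // same counting logic as with Pixels[i][j]
        }
    }
}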
I am trying to implement Laplace sharpening using C++. Here's my code so far:
img = imread("cow.png", 0);
Mat convoSharp() {
//creating new image
Mat res = img.clone();
for (int y = 0; y < res.rows; y++) {
for (int x = 0; x < res.cols; x++) {
res.at<uchar>(y, x) = 0.0;
}
}
//variable declaration
int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
//int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
int height = img.rows;
int width = img.cols;
int filterHeight = 3;
int filterWidth = 3;
int newImageHeight = height - filterHeight + 1;
int newImageWidth = width - filterWidth + 1;
int i, j, h, w;
//convolution
for (i = 0; i < newImageHeight; i++) {
for (j = 0; j < newImageWidth; j++) {
for (h = i; h < i + filterHeight; h++) {
for (w = j; w < j + filterWidth; w++) {
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
}
}
}
}
//img - laplace
for (int y = 0; y < res.rows; y++) {
for (int x = 0; x < res.cols; x++) {
res.at<uchar>(y, x) = img.at<uchar>(y, x) - res.at<uchar>(y, x);
}
}
return res;
}
I don't really know what went wrong. I also tried a different filter, (1,1,1),(1,-8,1),(1,1,1), and the result is much the same. I don't think I need to normalize the result, because it is in the range 0-255. Can anyone explain what went wrong in my code?
Problem: uchar is too small to hold the partial results of the filtering operation.
You should create a temporary variable and add all the filtered contributions to it, then check whether the value of temp is in the range [0, 255]; if not, you need to clamp the end result to fit [0, 255].
By executing the line below,
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
the partial result may be greater than 255 (the maximum value of uchar) or negative (your filter contains -4 or -8). temp has to be a signed integer type to handle the case where the partial result is negative.
Fix:
for (i = 0; i < newImageHeight; i++) {
for (j = 0; j < newImageWidth; j++) {
int temp = res.at<uchar>(i,j); // added
for (h = i; h < i + filterHeight; h++) {
for (w = j; w < j + filterWidth; w++) {
temp += filter[h - i][w - j] * img.at<uchar>(h,w); // add to temp
}
}
// clamp temp to [0, 255]; needs <algorithm> for std::min/std::max
res.at<uchar>(i,j) = static_cast<uchar>(std::max(0, std::min(255, temp)));
}
}
You should also clamp values to the [0, 255] range when you do the subtraction of the images.
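A minimal sketch of that clamped subtraction (cv::saturate_cast<uchar> would do the same job):
for (int y = 0; y < res.rows; y++) {
    for (int x = 0; x < res.cols; x++) {
        // both operands are uchar, so the difference fits in an int
        int diff = img.at<uchar>(y, x) - res.at<uchar>(y, x);
        res.at<uchar>(y, x) = static_cast<uchar>(std::max(0, std::min(255, diff)));
    }
}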
The problem is partly that you're overflowing your uchar, as rafix07 suggested, but that is not the full problem.
The Laplacian of an image contains negative values. It has to. And you can't clamp those to 0; you need to preserve the negative values. It can also reach values up to 4*255 given your version of the filter. This means you need a signed 16-bit type to store this output.
But there is a simpler and more efficient approach!
You are computing img - laplace(img). In terms of convolutions (*), this is 1 * img - laplace_kernel * img = (1 - laplace_kernel) * img. That is to say, you can combine both operations into a single convolution. The identity kernel that leaves the image unchanged is [(0,0,0),(0,1,0),(0,0,0)]. Subtract your Laplace kernel from that and you obtain [(0,-1,0),(-1,5,-1),(0,-1,0)].
So, simply compute the convolution with that kernel, and do it using int as intermediate type, which you then clamp to the uchar output range as shown by rafix07.
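A minimal sketch of this single-pass approach, assuming img is a CV_8UC1 grayscale Mat (and <algorithm> is included for std::min/std::max); border pixels are left at 0 for brevity:
Mat sharpen(const Mat& img) {
    // combined kernel: identity minus the Laplace kernel
    static const int kernel[3][3] = { {0,-1,0}, {-1,5,-1}, {0,-1,0} };
    Mat res = Mat::zeros(img.size(), CV_8UC1);
    for (int y = 1; y < img.rows - 1; y++) {
        for (int x = 1; x < img.cols - 1; x++) {
            int temp = 0;  // signed intermediate, as discussed above
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    temp += kernel[ky + 1][kx + 1] * img.at<uchar>(y + ky, x + kx);
            // clamp once, at the very end
            res.at<uchar>(y, x) = static_cast<uchar>(std::max(0, std::min(255, temp)));
        }
    }
    return res;
}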
Hi, I just started C++/OpenCV and am trying to write a median filter. I'm kind of confused.
EDIT 2:
OK, thanks to you, dear friends, my first error is corrected; now this is my new error.
I want to sort a 9-element window. Could I use another type for window instead of Mat? How can I sort it correctly?
the error refers to this line:
std::sort(window.begin(), window.end());
error: request for member 'begin' in 'window', which is of non-class type 'cv::Mat [9]'|
|36|error: request for member 'end' in 'window', which is of non-class type 'cv::Mat [9]'|
I'm experienced with MATLAB but a complete noob at C++. This is my code:
using namespace std;
using namespace cv;
Mat img_gray,img;
int main ()
{
img = imread( "6.jpg", IMREAD_COLOR ); // Load an image
if( img.empty() )
{ return -1; }
cvtColor( img, img_gray, COLOR_BGR2GRAY );
int M = img.rows;
int N = img.cols;
cvNamedWindow("windows",WINDOW_AUTOSIZE);
imshow("windows",img);
for (int m = 2; m < M - 1; ++m)
for (int n = 2; n < N - 1; ++n)
{
int k = 0;
int tmpmedian = 0;
//int window[9]={0};
Mat window[9];
for (int i = m - 1; i < m + 2; ++i){
for (int j = n - 1; j < n + 2; ++j)
{
window[k++] = img_gray.at<uchar>(i, j);
}
std::sort(window.begin(), window.end());
tmpmedian = window[5];
fimg[m][n] = tmpmedian;
}
}
}
I'm a student and need this for my class project. I appreciate your responses; thanks a lot.
In your double for loop, try this.
int k = 0;
int tmpmedian = 0;
int window[9]={0};
for (int i = m - 1; i < m + 2; ++i)
for (int j = n - 1; j < n + 2; ++j)
window[k++] = img_gray.at<uchar>(i, j);
std::sort(std::begin(window), std::end(window));
tmpmedian = window[4];
fimg[m][n] = tmpmedian;
Mat window[9] declares an array of 9 Mat objects. I don't think you want that; you just need an array of 9 int values, so what you need is int window[9].
You cannot call begin()/end() as members of a raw array; use the free functions instead: std::sort(std::begin(window), std::end(window)).
Array indices are zero-based, so the median of the 9 sorted values is stored at window[4], not window[5].
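For completeness, a minimal sketch of the whole filtering loop under the same assumptions (img_gray is CV_8UC1; fimg in the question is never declared, so a result Mat is used here instead; <algorithm> and <iterator> are needed for std::sort/std::begin/std::end):
Mat result = img_gray.clone();           // border pixels keep their original values
for (int m = 1; m < M - 1; ++m)
    for (int n = 1; n < N - 1; ++n)
    {
        int k = 0;
        int window[9];
        // gather the 3x3 neighbourhood around (m, n)
        for (int i = m - 1; i < m + 2; ++i)
            for (int j = n - 1; j < n + 2; ++j)
                window[k++] = img_gray.at<uchar>(i, j);
        std::sort(std::begin(window), std::end(window));
        result.at<uchar>(m, n) = static_cast<uchar>(window[4]); // the median
    }
imshow("median", result);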
I have points in an image and need to detect the most collinear ones. The fastest method is the Hough transform, but I have to modify the OpenCV implementation: I need the semi-collinear points to be returned together with the detected line, which is why I modified the polar line struct. A tolerance threshold is also needed to pick up nearly collinear points, as shown in the image. Can someone help with how to tune this threshold?
I need at least four semi-collinear points to detect the line to which they belong.
The points of the first image were detected by 6 overlapping lines; the points of the middle image were detected by nothing; the third image's points were detected by three lines.
What is the best way to get rid of the overlapping lines? Or how do I tune the tolerance threshold so that the semi-collinear points are detected by only one line?
This is my function call:
vector<CvLinePolar2> lines;
CvMat c_image = source1; // loaded image
HoughLinesStandard(&c_image,1,CV_PI/180,4,&lines,INT_MAX);
struct CvLinePolar2
{
    float rho;
    float angle;
    vector<CvPoint> points;
};
void HoughLinesStandard( const CvMat* img, float rho, float theta,
int threshold, vector<CvLinePolar2> *lines, int linesMax= INT_MAX )
{
cv::AutoBuffer<int> _accum, _sort_buf;
cv::AutoBuffer<float> _tabSin, _tabCos;
const uchar* image;
int step, width, height;
int numangle, numrho;
int total = 0;
int i, j;
float irho = 1 / rho;
double scale;
vector<vector<CvPoint>> lpoints;
CV_Assert( CV_IS_MAT(img) && CV_MAT_TYPE(img->type) == CV_8UC1 );
image = img->data.ptr;
step = img->step;
width = img->cols;
height = img->rows;
numangle = cvRound(CV_PI / theta);
numrho = cvRound(((width + height) * 2 + 1) / rho);
_accum.allocate((numangle+2) * (numrho+2));
_sort_buf.allocate(numangle * numrho);
_tabSin.allocate(numangle);
_tabCos.allocate(numangle);
int *accum = _accum, *sort_buf = _sort_buf;
float *tabSin = _tabSin, *tabCos = _tabCos;
memset( accum, 0, sizeof(accum[0]) * (numangle+2) * (numrho+2) );
//memset( lpoints, 0, sizeof(lpoints) );
lpoints.resize((numangle+2) * (numrho+2)); // one point bucket per accumulator cell
float ang = 0;
for(int n = 0; n < numangle; ang += theta, n++ )
{
tabSin[n] = (float)(sin(ang) * irho);
tabCos[n] = (float)(cos(ang) * irho);
}
// stage 1. fill accumulator
for( i = 0; i < height; i++ )
for( j = 0; j < width; j++ )
{
if( image[i * step + j] != 0 )
{
CvPoint pt;
pt.x = j; pt.y = i;
for(int n = 0; n < numangle; n++ )
{
int r = cvRound( j * tabCos[n] + i * tabSin[n] );
r += (numrho - 1) / 2;
int ind = (n+1) * (numrho+2) + r+1;
int s = accum[ind];
accum[ind]++;
lpoints[ind].push_back(pt);
}
}
}
// stage 2. find local maximums
for(int r = 0; r < numrho; r++ )
for(int n = 0; n < numangle; n++ )
{
int base = (n+1) * (numrho+2) + r+1;
if( accum[base] > threshold &&
accum[base] > accum[base - 1] && accum[base] >= accum[base + 1] &&
accum[base] > accum[base - numrho - 2] && accum[base] >= accum[base + numrho + 2] )
sort_buf[total++] = base;
}
// stage 3. sort the detected lines by accumulator value
icvHoughSortDescent32s( sort_buf, total, accum );
// stage 4. store the first min(total,linesMax) lines to the output buffer
linesMax = MIN(linesMax, total);
scale = 1./(numrho+2);
for( i = 0; i < linesMax; i++ )
{
CvLinePolar2 line;
int idx = sort_buf[i];
int n = cvFloor(idx*scale) - 1;
int r = idx - (n+1)*(numrho+2) - 1;
line.rho = (r - (numrho - 1)*0.5f) * rho;
line.angle = n * theta;
line.points = lpoints[idx];
lines->push_back(line);
}
}
One approach is non-maximum suppression to thin out the candidate set of potential lines. Once you have identified the thinned potential lines, you can compute an average of the remaining lines that fall within some angular or spatial difference threshold.
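A hypothetical sketch of such a merge step (mergeSimilarLines is not an OpenCV function; it reuses the CvLinePolar2 struct from the question, and rhoTol/angleTol are the tuning knobs, e.g. a few pixels and a few hundredths of a radian; fabs needs <cmath>):
vector<CvLinePolar2> mergeSimilarLines(const vector<CvLinePolar2> &lines,
                                       float rhoTol, float angleTol)
{
    vector<CvLinePolar2> merged;
    for (size_t i = 0; i < lines.size(); i++)
    {
        bool absorbed = false;
        for (size_t k = 0; k < merged.size(); k++)
        {
            if (fabs(lines[i].rho - merged[k].rho) < rhoTol &&
                fabs(lines[i].angle - merged[k].angle) < angleTol)
            {
                // average the parameters and pool the supporting points
                merged[k].rho = (merged[k].rho + lines[i].rho) * 0.5f;
                merged[k].angle = (merged[k].angle + lines[i].angle) * 0.5f;
                merged[k].points.insert(merged[k].points.end(),
                                        lines[i].points.begin(),
                                        lines[i].points.end());
                absorbed = true;
                break;
            }
        }
        if (!absorbed)
            merged.push_back(lines[i]);
    }
    return merged;
}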
Try HoughLinesP (see the OpenCV reference).
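A minimal usage sketch, assuming the input is an 8-bit single-channel binary image; the numeric parameters are illustrative, not tuned:
// Probabilistic Hough transform: returns finite segments instead of infinite lines
vector<Vec4i> segments;
HoughLinesP(source1, segments,
            1,            // rho resolution in pixels
            CV_PI / 180,  // theta resolution in radians
            4,            // accumulator threshold: at least 4 votes
            10,           // minLineLength: discard shorter segments
            5);           // maxLineGap: bridges gaps between nearly collinear points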
I need to convert a QImage to an array where each pixel is represented by three integers (channels). I'm trying to achieve this with the following code:
void Processor::init(QImage *image){
QColor* c = new QColor();
int i,j;
int local_ind;
x = image->width();
y = image->height();
if(this->img!=NULL)
delete [] this->img;
this->img = new int[ x * y * 3];
for( i = 0 ; i < y ; i++ )
for( j = 0 ; j < x ; j++ ){
c->setRgb(image->pixel(j, i));
local_ind = i * x + j * 3;
this->img[local_ind + 0] = c->red();
this->img[local_ind + 1] = c->green();
this->img[local_ind + 2] = c->blue();
}
delete c;
}
void Processor::flush(QImage *image){
QColor* c = new QColor();
QRgb color;
int i, j;
int local_ind;
for( i = 0 ; i < y ; i++ )
for( j = 0 ; j < x ; j++ ){
local_ind = i * x + j * 3;
color = qRgb(this->img[local_ind + 0],
this->img[local_ind + 1],
this->img[local_ind + 2]);
image->setPixel(j, i, color);
}
delete c;
}
Both functions seem to work fine, as far as I can see in the debugger, but when I call them one after the other (just copying the data from the QImage to the array and back), the result is a bit weird. The whole image consists of three repeated images: a third of the original image in the blue, green, and red channels of the source image. I guess I just used setPixel in the wrong way, so the format of the QImage is not respected, or something like that.
I use the QImage RGB32 format, if that matters here.
PS: Sorry for my English; corrections are welcome.
The issue is that you are using
local_ind = i * x + j * 3;
But in your buffer each pixel takes 3 array elements (one int per channel). So you need to use instead
( i * x + j ) * 3
btw: naming the dimensions x and y instead of width and height is not intuitive and invites exactly this kind of mix-up.
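Applied to init(), a minimal sketch (the same change goes into flush()):
local_ind = (i * x + j) * 3;           // row i, column j, 3 ints per pixel
this->img[local_ind + 0] = c->red();
this->img[local_ind + 1] = c->green();
this->img[local_ind + 2] = c->blue();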