Assertion error when accessing pointer to pixels, OpenCV - C++

I am struggling to understand why I am getting this assertion error from OpenCV when accessing a pointer to the next col/row of an image. Let me describe what is happening and provide some code.
I am taking an ROI from the image, which is a cv::Mat, i.e. a header pointing into a section of a bigger cv::Mat.
I constructed some pointers to access the values of my ROI. So let's say my ROI is filled with pixel values and is a 3x3 Mat
with the following layout (indices starting at 0,0):

    | 1 | 2 | 3 |
    | 4 | 5 | 6 |
    | 7 | 8 | 9 |
First of all I need to initialize my pointers to their respective positions. I use the ptr function of cv::Mat, with the location in the grid given by a cv::Point.
Problem faced:
When I try to access the pixel of the next neighbor, I get an assertion error.
Diagnostics by me:
I thought it might be the range, but I made sure that wouldn't be the case by defining the for loop conditions according to my dimensions.
The item I am trying to access might not exist, but as I understand it, once I go through the ROI I already have the values in a new matrix, and I should be able to access all values around my desired pixel.
PART OF THE CODE:
    cv::Mat ROI = disTrafo(cv::Rect(cv::Point(x,y), cv::Size(3,3)));
    cv::minMaxLoc(ROI, &minVal, &maxVal, &minCoord, &maxCoord);

    auto* maxPtr_x = &maxCoord.x;
    auto* maxPtr_y = &maxCoord.y;
    auto* maxPtr_value = &maxVal;

    uchar diff1 = 0;
    uchar diff2 = 0;
    uchar diff3 = 0;
    uchar diff4 = 0;
    uchar max_diff = 0;

    for(int j = 1; j < ROI.rows; j++){
        auto current = ROI.ptr<uchar>(maxCoord.y);
        auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); //THE PROB IS HERE according to debugging
        auto neighbor_up = ROI.ptr<uchar>(maxCoord.y-1);
        cv::Point poi; //point of interest

        for(int i = 0; i < ROI.cols; i++){
            switch(maxCoord.x){ //PROOF FOR LOGIC
            case 0:
                if(maxCoord.y == 0){ //another switch statement maybe ??
                    diff1 = std::abs(current[maxCoord.x+1] - current[maxCoord.x]);
                    diff2 = std::abs(neighbor_down[maxCoord.x] - current[maxCoord.x]);
                    if(diff2 > diff1){
                        cv::Point(maxCoord.x, maxCoord.y+1) = poi;
                    } else {
                        cv::Point(maxCoord.x+1, maxCoord.y) = poi;
                    }
                };
ASSERTION FAILED when running it: OpenCV Error: Assertion failed (y == 0 || (data && dims >= 1 && (unsigned)y < (unsigned)size.p[0])) in cv::Mat::ptr, file .../mat.hpp (path to the header file), line 428
I can't put my finger on the problem; could you please assist? And please share some knowledge about working with pointers and pixels in case I have misunderstood something.
Thank you

Try this:

    for(int j = 1; j < ROI.rows; j++){
        auto current = ROI.ptr<uchar>(maxCoord.y);
        auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); //THE PROB IS HERE according to debugging
        auto neighbor_up = ROI.ptr<uchar>(maxCoord.y-1);
        cv::Point poi; //point of interest
        cv::Point bordess(Point(0,0));

        for(int i = 0; i < ROI.cols; i++){
            switch(maxCoord.x){ //PROOF FOR LOGIC
            case 0:
                if(maxCoord.y == 0){ //another switch statement maybe ??
                    diff1 = std::abs(current[maxCoord.x+1] - current[maxCoord.x]);
                    diff2 = std::abs(neighbor_down[maxCoord.x] - current[maxCoord.x]);
                    if(diff2 > diff1){
                        cv::Point(maxCoord.x, maxCoord.y+1) = poi & bordess;
                    } else {
                        cv::Point(maxCoord.x+1, maxCoord.y) = poi & bordess;
                    }
                };

OK, so basically I figured out that my pointer definition was wrong due to the nature of the input image. I had done some preprocessing on the image, and the range of the values inside changed from uchar to some other type. When I changed, for example, auto neighbor_down = ROI.ptr<uchar>(maxCoord.y+1); to auto neighbor_down = ROI.ptr<float>(maxCoord.y+1);, everything ran normally.
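In general, the template argument of cv::Mat::ptr<T>() must match the element type actually stored in the Mat; ptr<uchar> on a CV_32F image reinterprets the raw bytes wrongly. A minimal sketch of guarding against this (my own addition, assuming a single-channel float image):

    // Fail fast if the Mat does not hold the element type we are about to read.
    CV_Assert(ROI.type() == CV_32FC1);
    const float* row = ROI.ptr<float>(0);   // safe: float matches CV_32F
    // For CV_8UC1 images use ptr<uchar> instead; ROI.depth() reports the
    // per-channel type at runtime if you need to branch on it.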


Calculate 1D plot, determine the maxima and their distances between each other

I want to create a 1D plot from an image, then determine the maxima and their distances to each other, in C++.
I am looking for some tips on how I could approach this.
I load the image as a cv::Mat. I have searched in OpenCV, but only found the histogram function, which is not what I want. I want to get a cross-section of the image - from left to right.
Does anyone have an idea?
Well I have the following picture:
From this I want to create a 1D plot like in the following picture (I created the plot in ImageJ).
Here you can see the maxima (I could refine it with "smooth").
I want to determine the positions of these maxima and then the distances between them.
I have to get to the 1D plot somehow. I suppose I can then find the maxima with a derivative?
++++++++++ UPDATE ++++++++++
Now I wrote this to get a 1D plot:
    cv::Mat img = cv::imread(imgFile.toStdString(), cv::IMREAD_ANYDEPTH | cv::IMREAD_COLOR);
    cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);

    uint8_t* data = img.data;
    int width = img.cols;
    int height = img.rows;
    int stride = img.step;

    std::vector<double> vPlot(width, 0);
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            uint8_t val = data[i * stride + j];
            vPlot[j] = vPlot[j] + val;
        }
    }

    std::ofstream file;
    file.open("path\\plot.csv");
    for (int i = 0; i < vPlot.size(); i++) {
        file << vPlot[i];
        file << ";";
    }
    file.close();
When I plot this in Excel I get this:
That does not look as smooth as in ImageJ. Did I do something wrong?
I need it like the plot in ImageJ - smoother.
OK, I got it - the column sums need to be divided by the image height to get averages:

    for (int i = 0; i < vPlot.size(); i++) {
        vPlot[i] = vPlot[i] / height;
    }
OK, but I don't know how to get the maxima and the distances. Once I have the local maxima (I don't know how to find them), I can calculate the distances between them from the indices of the vector elements.
Does anybody have an idea how to get the local maxima out of the vector that I plotted above?
Now I wrote this to find the maxima:

    // find maxima
    std::vector<int> idxMax;
    int flag = 0;
    for (int i = 1; i < avg.size(); i++) {
        double diff = avg[i] - avg[i-1];
        if (diff < 0) {
            if (flag > 0) {
                idxMax.push_back(i);
                flag = -1;
            }
        }
        if (diff >= 0) {
            if (flag <= 0) {
                flag = 1;
            }
        }
    }
But more maxima are found than wanted. The length of the vector varies, and so does the number of peaks. The peaks can be close together or far apart, and they are not always the same height, as can be seen in the picture.
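Smoothing the profile before the rising/falling scan suppresses most of these spurious peaks. A minimal sketch using a centered moving average (my own addition; the window size of 5 is an assumption you would tune):

    // Smooth a 1D profile with a centered moving average.
    std::vector<double> smoothProfile(const std::vector<double>& v, int window) {
        std::vector<double> out(v.size(), 0.0);
        int half = window / 2;
        for (int i = 0; i < (int)v.size(); i++) {
            double sum = 0.0;
            int count = 0;
            for (int k = -half; k <= half; k++) {
                int idx = i + k;
                if (idx >= 0 && idx < (int)v.size()) { sum += v[idx]; count++; }
            }
            out[i] = sum / count; // average over the in-range neighbors only
        }
        return out;
    }

    std::vector<double> avg = smoothProfile(vPlot, 5); // then run the maxima scan on avg

Discarding maxima whose height is below a minimum threshold afterwards also helps when the remaining peaks differ in height.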

Adding floats in OpenCV 3.0

I have a problem with OpenCV 3.0.
I applied 12 Gabor filters (12 different orientations) to one image and stored the results.
Now I want to add all those images together and then divide each value by 12 to obtain the mean of the 12 filters.
Because those images are RGB, I have to work on each channel separately.
The problem is: when I add all the values, I obtain values > 12, even though all the individual values are between 0 and 1.
The buggy part of the code:
    for (i = 0; i < gaborV.size(); ++i) { //gaborV contains the 12 gabor filters
        std::vector<cv::Mat> vec_split; //I split because of the 3 channels
        cv::split(gaborV[i], vec_split);
        for (int k = 0; k < imgCol.rows; ++k) {
            for (int j = 0; j < imgCol.cols; ++j) {
                if (k == 1 && j == 1)
                    std::cout << mat_X.at<float>(k, j) << " " << vec_split[0].at<float>(k, j) << std::endl;
                mat_X.at<float>(k, j) += vec_split[0].at<float>(k, j);
                mat_Y.at<float>(k, j) += vec_split[1].at<float>(k, j);
                mat_Z.at<float>(k, j) += vec_split[2].at<float>(k, j);
            }
        }
    }
and mat_X, mat_Y and mat_Z are created as follows:

    mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
As I said, all values in vec_split are between 0 and 1, but when I'm out of the loop, mat_X, mat_Y and mat_Z contain values > 12.
The output of the cout I used:

    0 0.507358
    1.54751 0.496143
    3.00963 0.528832
    4.53887 0.465426
    ...

and at the end I have 15.9459.
And I don't understand, since 0 + 0.507358 != 1.54751 and 1.54751 + 0.496143 != 3.00963...
Does someone understand the problem?
Thanks for all!
I think the problem is here:
    mat_X = mat_Y = mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
The way you initialise these arrays results in all three cv::Mat objects referencing the same data. Only one Mat is created and so your code increments the values in this array three times.
For info, OpenCV uses a reference counting mechanism with cv::Mat and the assignment operator simply creates a new reference to existing data. If you wanted to create a genuine deep-copy of a cv::Mat, you would need to use cv::Mat::clone().
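A quick way to see the aliasing (a minimal sketch of my own, not from the original code):

    cv::Mat a, b;
    a = b = cv::Mat::zeros(2, 2, CV_32FC1);       // a and b now share the same data
    a.at<float>(0, 0) = 1.0f;
    std::cout << b.at<float>(0, 0) << std::endl;  // prints 1, not 0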
So, instead, initialise like so:
    mat_X = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
    mat_Y = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
    mat_Z = cv::Mat(cvSize(imgColNormalize.cols, imgColNormalize.rows), CV_32FC1, cvScalar(0.));
An excerpt from the documentation copied below for posterity:

    "Matrix assignment is an O(1) operation. This means that no data is copied but the data is shared and the reference counter, if any, is incremented. Before assigning new data, the old data is de-referenced via Mat::release()."

OpenCV VLFeat Slic function call

I am trying to use the vl_slic_segment function of the VLFeat library with an input image stored in an OpenCV Mat. My code compiles and runs, but the output superpixel values do not make sense. Here is my code so far:
    Mat bgrUChar = imread("/pathtowherever/image.jpg");

    Mat bgrFloat;
    bgrUChar.convertTo(bgrFloat, CV_32FC3, 1.0/255);

    cv::Mat labFloat;
    cvtColor(bgrFloat, labFloat, CV_BGR2Lab);

    Mat labels(labFloat.size(), CV_32SC1);
    vl_slic_segment(labels.ptr<vl_uint32>(), labFloat.ptr<const float>(), labFloat.cols, labFloat.rows, labFloat.channels(), 30, 0.1, 25);
I have tried not converting to the Lab colorspace and setting different regionSize/regularization values, but the output is always very glitchy. I am able to retrieve the label values correctly; the thing is that every label is usually scattered over a small non-contiguous area.
I think the format of my input data is wrong, but I can't figure out how to pass it properly to the vl_slic_segment function.
Thank you in advance!
EDIT
Thank you David. As you helped me understand, vl_slic_segment wants the data ordered as [LLLLLAAAAABBBBB] (planar), whereas OpenCV stores its data as [LABLABLABLABLAB] (interleaved) for the Lab color space.
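For reference, a minimal sketch of that interleaved-to-planar conversion using cv::split (my own addition; it assumes labFloat is the CV_32FC3 image from the code above and needs <vector> and <cstring>):

    std::vector<cv::Mat> planes;
    cv::split(labFloat, planes); // planes[0]=L, planes[1]=a, planes[2]=b, each CV_32FC1

    std::vector<float> planar(labFloat.total() * 3);
    for (int c = 0; c < 3; ++c) {
        // Mats freshly allocated by split are continuous, so copy each plane wholesale.
        std::memcpy(planar.data() + c * labFloat.total(),
                    planes[c].ptr<float>(0),
                    labFloat.total() * sizeof(float));
    }
    // planar.data() can now be passed as the image pointer to vl_slic_segment.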
In the course of my bachelor thesis I have to use VLFeat's SLIC implementation as well. You can find a short example applying VLFeat's SLIC on Lenna.png on GitHub: https://github.com/davidstutz/vlfeat-slic-example.
Maybe a look at main.cpp will help you figure out how to convert the images obtained by OpenCV to the right format:
    // OpenCV can be used to read images.
    #include <opencv2/opencv.hpp>

    // The VLFeat header files need to be declared external.
    extern "C" {
        #include "vl/generic.h"
        #include "vl/slic.h"
    }

    int main() {
        // Read the Lenna image. The matrix 'mat' will have 3 8 bit channels
        // corresponding to BGR color space.
        cv::Mat mat = cv::imread("Lenna.png", CV_LOAD_IMAGE_COLOR);

        // Convert image to one-dimensional array.
        float* image = new float[mat.rows*mat.cols*mat.channels()];
        for (int i = 0; i < mat.rows; ++i) {
            for (int j = 0; j < mat.cols; ++j) {
                // Assuming three channels ...
                image[j + mat.cols*i + mat.cols*mat.rows*0] = mat.at<cv::Vec3b>(i, j)[0];
                image[j + mat.cols*i + mat.cols*mat.rows*1] = mat.at<cv::Vec3b>(i, j)[1];
                image[j + mat.cols*i + mat.cols*mat.rows*2] = mat.at<cv::Vec3b>(i, j)[2];
            }
        }

        // The algorithm will store the final segmentation in a one-dimensional array.
        vl_uint32* segmentation = new vl_uint32[mat.rows*mat.cols];
        vl_size height = mat.rows;
        vl_size width = mat.cols;
        vl_size channels = mat.channels();

        // The region size defines the number of superpixels obtained.
        // Regularization describes a trade-off between the color term and the
        // spatial term.
        vl_size region = 30;
        float regularization = 1000.;
        vl_size minRegion = 10;
        vl_slic_segment(segmentation, image, width, height, channels, region, regularization, minRegion);

        // Convert segmentation.
        int** labels = new int*[mat.rows];
        for (int i = 0; i < mat.rows; ++i) {
            labels[i] = new int[mat.cols];
            for (int j = 0; j < mat.cols; ++j) {
                labels[i][j] = (int) segmentation[j + mat.cols*i];
            }
        }

        // Compute a contour image: this actually colors every border pixel
        // red such that we get relatively thick contours.
        int label = 0;
        int labelTop = -1;
        int labelBottom = -1;
        int labelLeft = -1;
        int labelRight = -1;

        for (int i = 0; i < mat.rows; i++) {
            for (int j = 0; j < mat.cols; j++) {
                label = labels[i][j];

                labelTop = label;
                if (i > 0) {
                    labelTop = labels[i - 1][j];
                }

                labelBottom = label;
                if (i < mat.rows - 1) {
                    labelBottom = labels[i + 1][j];
                }

                labelLeft = label;
                if (j > 0) {
                    labelLeft = labels[i][j - 1];
                }

                labelRight = label;
                if (j < mat.cols - 1) {
                    labelRight = labels[i][j + 1];
                }

                if (label != labelTop || label != labelBottom || label != labelLeft || label != labelRight) {
                    mat.at<cv::Vec3b>(i, j)[0] = 0;
                    mat.at<cv::Vec3b>(i, j)[1] = 0;
                    mat.at<cv::Vec3b>(i, j)[2] = 255;
                }
            }
        }

        // Save the contour image.
        cv::imwrite("Lenna_contours.png", mat);

        return 0;
    }
In addition, have a look at README.md within the GitHub repository. The following figures show some example outputs, obtained by setting the regularization to 1, 100, or 1000, and the region size to 20, 30, or 40.
Figure 1: Superpixel segmentation with region size set to 30 and regularization set to 1.
Figure 2: Superpixel segmentation with region size set to 30 and regularization set to 100.
Figure 3: Superpixel segmentation with region size set to 30 and regularization set to 1000.
Figure 4: Superpixel segmentation with region size set to 20 and regularization set to 1000.
Figure 5: Superpixel segmentation with region size set to 20 and regularization set to 1000.

Problems writing to a subsection of a Mat-Object

I'm new to OpenCV and have some trouble writing to a subrange of a Mat object.
The code below iterates over a given image. For each pixel, it takes the pixels within a 5x5 range around it, finds the brightest one, and sets all other pixels to 0.
I call the function multiple times. After a random number of calls the function gives me a segmentation fault or "malloc memory corruption". Sometimes I can call the function 10 times with no problems, sometimes only twice, then the program stops.
I tracked the problem down to the line where I write to the original image through the subimage:

    subimage.at<uchar>(rowSubimage, colSubimage) = 0;
Here is the function that drives me crazy:
    void findMaxAndBlackout(Mat& image, int size){
        Point centralPoint;
        Size rangeSize = Size(size, size);
        Mat subimage;
        Rect range;

        // iterate the image
        for(int row = 0; row <= image.rows - size; row++){
            for(int col = 0; col <= image.cols - size; col++){
                centralPoint = Point(col, row);
                range = Rect(centralPoint, rangeSize);

                // slice submatrix and find max
                subimage = image(range);
                double max;
                minMaxLoc(subimage, NULL, &max, NULL, NULL);

                // iterate the surrounding
                for(int rowSubimage = 0; rowSubimage <= subimage.rows; rowSubimage++){
                    for(int colSubimage = 0; colSubimage <= subimage.cols; colSubimage++){
                        if(subimage.at<uchar>(rowSubimage, colSubimage) < max){
                            // this line causes the trouble
                            subimage.at<uchar>(rowSubimage, colSubimage) = 0;
                        }
                    }
                }
            }
        }
    }
The Mat object is generated using:

    Mat houghImage = imread("small_schachbrett1_cam.png", CV_LOAD_IMAGE_GRAYSCALE);
Please help me understand the problem.
If you know a better or more efficient way to achieve the same result, please let me know. I am open to any improvements.
Regards
benniz
You are out of range in the inner loops:

    rowSubimage <= subimage.rows
    colSubimage <= subimage.cols

should be

    rowSubimage < subimage.rows
    colSubimage < subimage.cols

Valid indices run from 0 to rows-1 (or cols-1), so with <= the last iteration reads and writes one row and one column past the end of the 5x5 subimage. Because subimage is a view into the parent image, this silently touches neighboring memory of the parent (or memory beyond it, when the window sits at the image border), which is what eventually produces the segmentation fault or malloc corruption. The outer loops are fine as written: row <= image.rows - size still keeps the Rect fully inside the image.
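As for a more efficient way: since minMaxLoc can already report where the maximum is, you can zero the whole window and restore that one pixel instead of comparing every element. A sketch of the inner step (my own suggestion; note it keeps a single brightest pixel even when several pixels tie for the maximum):

    double maxVal;
    cv::Point maxPos;
    cv::minMaxLoc(subimage, NULL, &maxVal, NULL, &maxPos);
    subimage.setTo(0);                                      // zero the whole window...
    subimage.at<uchar>(maxPos.y, maxPos.x) = (uchar)maxVal; // ...then restore the maximum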

Finding Local Maxima Grayscale Image opencv

I am trying to create my personal blob detection algorithm.
As far as I know, I first must create different Gaussian kernels with different sigmas (which I am doing using Mat kernel = getGaussianKernel(x, y);), then take the Laplacian of each kernel and filter the image with it, so that I build my scale space. Now I need to find the local maxima in each resulting image of the scale space, but I cannot seem to find a proper way to do so. My code so far is:
    vector<Point> GetLocalMaxima(const cv::Mat Src, int MatchingSize, int Threshold)
    {
        vector<Point> vMaxLoc(0);
        if ((MatchingSize % 2 == 0)) // MatchingSize has to be "odd" and > 0
        {
            return vMaxLoc;
        }
        vMaxLoc.reserve(100); // Reserve place for fast access

        Mat ProcessImg = Src.clone();
        int W = Src.cols;
        int H = Src.rows;
        int SearchWidth = W - MatchingSize;
        int SearchHeight = H - MatchingSize;
        int MatchingSquareCenter = MatchingSize / 2;

        uchar* pProcess = (uchar*) ProcessImg.data; // The pointer to image Data

        int Shift = MatchingSquareCenter * (W + 1);
        int k = 0;

        for(int y = 0; y < SearchHeight; ++y)
        {
            int m = k + Shift;
            for(int x = 0; x < SearchWidth; ++x)
            {
                if (pProcess[m++] >= Threshold)
                {
                    Point LocMax;
                    Mat mROI(ProcessImg, Rect(x, y, MatchingSize, MatchingSize));
                    minMaxLoc(mROI, NULL, NULL, NULL, &LocMax);
                    if (LocMax.x == MatchingSquareCenter && LocMax.y == MatchingSquareCenter)
                    {
                        vMaxLoc.push_back(Point(x + LocMax.x, y + LocMax.y));
                        // imshow("W1", mROI); cvWaitKey(0); // For debug
                    }
                }
            }
            k += W;
        }
        return vMaxLoc;
    }
which I found in this thread here; it supposedly returns a vector of points where the maxima are. It does return a vector of points, but all the x and y coordinates of each point are always -17891602... What should I do?
Please, if you are going to point me to something other than correcting my code, be informative, because I know nothing about OpenCV. I am just learning.
The problem here is that your LocMax point is declared inside the inner loop and never initialized, so it's returning garbage data every time. If you look back at the StackOverflow question you linked, you'll see that their similar variable Point maxLoc(0,0) is declared at the top and constructed to point at the middle of the search window. It only needs to be initialized once. Subsequent loop iterations will replace the value with the minMaxLoc function result.
In summary, remove this line in your inner loop:
    Point LocMax; // delete this
And add a slightly altered version near the top:
    vector<Point> vMaxLoc(0); // This was your original first line
    Point LocMax(0,0);        // your new second line
That should get you started anyway.
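As a quick sanity check, here is a minimal usage sketch (the file name, window size, and threshold are assumptions to adapt):

    cv::Mat img = cv::imread("blobs.png", cv::IMREAD_GRAYSCALE);
    std::vector<cv::Point> maxima = GetLocalMaxima(img, 5, 100); // 5x5 window, threshold 100
    for (const cv::Point& p : maxima) {
        cv::circle(img, p, 3, cv::Scalar(255), 1); // mark each local maximum
    }
    cv::imwrite("blobs_maxima.png", img);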
I found it, guys. The problem was that my threshold was too high. I do not understand why it gave me negative points instead of zero points, but lowering the threshold worked.