Calculate a 1D plot, determine the maxima and their distances to each other - C++

I want to create a 1D plot from an image and then determine the maxima and their distances to each other in C++.
I am looking for some tips on how I could approach this.
I load the image as a cv::Mat. I have searched in OpenCV, but only found the histogram function, which is not what I need; I want a cross section of the image - from left to right.
Does anyone have an idea?
Well, I have the following picture:
From this I want to create a 1D plot like in the following picture (I created that plot in ImageJ).
Here you can see the maxima (I could refine them with "smooth").
I want to determine the positions of these maxima and then the distances between them.
I have to get to the 1D plot somehow. I suppose I can get to the maxima with a derivative?
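For reference, OpenCV can produce such a cross section directly with cv::reduce, which collapses all rows into a single row - a minimal sketch (the helper name columnProfile is just for illustration):

#include <opencv2/opencv.hpp>
#include <vector>

// Collapse all rows into one row: one mean value per column,
// i.e. a left-to-right cross section of the image.
std::vector<double> columnProfile(const cv::Mat& gray) {
    cv::Mat profile;
    cv::reduce(gray, profile, 0 /* 0 = collapse rows */, cv::REDUCE_AVG, CV_64F);
    return std::vector<double>(profile.begin<double>(), profile.end<double>());
}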
++++++++++ UPDATE ++++++++++
Now I wrote this to get a 1D plot:
cv::Mat img = cv::imread(imgFile.toStdString(), cv::IMREAD_ANYDEPTH | cv::IMREAD_COLOR);
cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);

uint8_t* data = img.data;
int width  = img.cols;
int height = img.rows;
int stride = img.step;

// Sum each column over all rows.
std::vector<double> vPlot(width, 0);
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        uint8_t val = data[i * stride + j];
        vPlot[j] = vPlot[j] + val;
    }
}
std::ofstream file;
file.open("path\\plot.csv");
for (int i = 0; i < vPlot.size(); i++) {
    file << vPlot[i];
    file << ";";
}
file.close();
When I plot this in Excel I get this:
That doesn't look as smooth as in ImageJ. Did I do something wrong?
I need it like in the ImageJ plot - smoother.
OK, I got it - ImageJ plots the column average, so the sums have to be divided by the number of rows:

for (int i = 0; i < vPlot.size(); i++) {
    vPlot[i] = vPlot[i] / height;
}
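If the curve still needs smoothing, ImageJ's "smooth" is, as far as I know, just a small mean filter; its 1D equivalent is a moving average over the vector - a sketch, with the window radius as a tuning parameter:

// Smooth a 1D profile with a centered moving average.
// windowRadius = 1 gives a 3-point mean, similar to ImageJ's "smooth".
std::vector<double> movingAverage(const std::vector<double>& v, int windowRadius) {
    std::vector<double> out(v.size(), 0.0);
    for (int i = 0; i < (int)v.size(); i++) {
        double sum = 0.0;
        int count = 0;
        for (int k = -windowRadius; k <= windowRadius; k++) {
            int idx = i + k;
            if (idx >= 0 && idx < (int)v.size()) {
                sum += v[idx];
                count++;
            }
        }
        out[i] = sum / count;
    }
    return out;
}

Applying this once or twice to vPlot should roughly reproduce the look of the ImageJ plot.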
OK, but I don't know how to get the maxima and the distances.
Once I have the local maxima (I don't know how to find them), I can calculate the distance between them from the indices of the vector elements.
Does anybody have an idea how to get the local maxima out of the vector that I plotted above?
Now I wrote this to find the maxima:
// find maxima
std::vector<int> idxMax;
int flag = 0; // 1 = currently rising, -1 = just passed a maximum
for (int i = 1; i < avg.size(); i++) {
    double diff = avg[i] - avg[i-1];
    if (diff < 0) {
        if (flag > 0) {
            idxMax.push_back(i - 1); // the last rising sample is the maximum
            flag = -1;
        }
    }
    if (diff >= 0) {
        if (flag <= 0) {
            flag = 1;
        }
    }
}
But more maxima are found than wanted. The length of the vector varies, and so does the number of peaks. The peaks can be close together or far apart, and they are not always the same height, as can be seen in the picture.
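One way to suppress the spurious detections (a sketch, not from the original post) is to smooth the profile first and then accept a local maximum only if it exceeds a minimum height and lies at least a minimum distance from the previously accepted peak; both thresholds have to be tuned to the data:

// Find indices of local maxima, keeping only peaks above minHeight
// that are at least minDistance samples apart.
std::vector<int> findPeaks(const std::vector<double>& v,
                           double minHeight, int minDistance) {
    std::vector<int> peaks;
    for (int i = 1; i + 1 < (int)v.size(); i++) {
        if (v[i] < minHeight) continue;
        if (v[i] >= v[i - 1] && v[i] > v[i + 1]) { // local maximum
            if (!peaks.empty() && i - peaks.back() < minDistance) {
                // Too close to the previous peak: keep the higher one.
                if (v[i] > v[peaks.back()]) peaks.back() = i;
            } else {
                peaks.push_back(i);
            }
        }
    }
    return peaks;
}

The distances between the maxima are then simply the differences of consecutive indices, peaks[k+1] - peaks[k].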

Related

How does the remap function in OpenCV for undistorting images work?

For debugging purposes I tried to reimplement the remap function of OpenCV. Without considering interpolation, it should look something like this:
for( int j = 0; j < height; j++ )
{
    for( int i = 0; i < width; i++ )
    {
        undistortedImage.at<double>(mapy.at<float>(j,i), mapx.at<float>(j,i)) = distortedImage.at<double>(j,i);
    }
}
To test this, I used following maps to mirror the image around the y-axis:
int width = distortedImage.cols;
int height = distortedImage.rows;
cv::Mat mapx = cv::Mat(height, width, CV_32FC1);
cv::Mat mapy = cv::Mat(height, width, CV_32FC1);
for( int j = 0; j < height; j++ )
{
    for( int i = 0; i < width; i++ )
    {
        mapx.at<float>(j,i) = width - i - 1;
        mapy.at<float>(j,i) = j;
    }
}
But apart from the interpolation, it works exactly like
cv::remap( distortedImage, undistortedImage, mapx, mapy, CV_INTER_LINEAR);
Now I tried to apply this function to maps created by the OCamCalib Toolbox for undistorting images. This is basically the same as what the OpenCV undistortion does as well.
My implementation obviously does not consider that several pixels from the source image may be mapped to the same pixel in the destination image. But it is worse than that: it actually looks like my source image appears three times, in smaller versions, in the destination image. Otherwise the remap command works perfectly.
After exhaustive debugging I decided to ask you guys for some help. Can anyone explain to me what I am doing wrong, or provide a link to the implementation of remap in OpenCV?
I figured it out myself. My original implementation has two fundamental mistakes:
Misunderstanding on how the maps are used.
Misunderstanding on how to extract intensity values.
How to do it correctly:
for( int j = 0; j < height; j++ )
{
    for( int i = 0; i < width; i++ )
    {
        // For every pixel of the undistorted image, look up where it
        // comes from in the distorted image (inverse mapping).
        undistortedImage.at<uchar>(j,i) = distortedImage.at<uchar>(mapy.at<float>(j,i), mapx.at<float>(j,i));
    }
}
I want to highlight that the intensity values from the images are now extracted using .at<uchar> instead of .at<double>. Furthermore, the maps are now used the other way around: for each pixel of the destination image they give the coordinates to read from in the source image (inverse mapping), which is how cv::remap interprets them.
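For completeness, a bounds-checked nearest-neighbour version of this inverse mapping (without the bilinear interpolation that cv::remap performs on top) might look like this sketch:

for( int j = 0; j < height; j++ )
{
    for( int i = 0; i < width; i++ )
    {
        // Round the float map entries to the nearest source pixel and
        // skip destination pixels whose source falls outside the image.
        int srcX = cvRound(mapx.at<float>(j,i));
        int srcY = cvRound(mapy.at<float>(j,i));
        if( srcX >= 0 && srcX < distortedImage.cols && srcY >= 0 && srcY < distortedImage.rows )
        {
            undistortedImage.at<uchar>(j,i) = distortedImage.at<uchar>(srcY, srcX);
        }
    }
}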

How do I compute the brightness histogram aggregated by column in OpenCV C++

I want to segment a car plate to get the separate characters.
I found an article where such a segmentation is performed using brightness histograms (as I understand it - the sum of all non-zero pixels per column).
How can I calculate such a histogram? I would really appreciate any help!
std::vector<int> computeColumnHistogram(const cv::Mat& in) {
    std::vector<int> histogram(in.cols, 0); // Create a zeroed histogram of the necessary size
    for (int y = 0; y < in.rows; y++) {
        const uchar* p_row = in.ptr(y);     // Get a pointer to the y-th row of the image
        for (int x = 0; x < in.cols; x++)
            histogram[x] += p_row[x];       // Update the value for this image column
    }
    // Normalize if you want (you'll get the average value per column):
    // for (int x = 0; x < in.cols; x++)
    //     histogram[x] /= in.rows;
    return histogram;
}
Or use reduce as suggested by Berak, either calling
cv::reduce(in, out, 0, CV_REDUCE_AVG);
or
cv::reduce(in, out, 0, CV_REDUCE_SUM, CV_32S);
out is a cv::Mat, and it will have a single row.
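The per-column values can then be read straight out of that single row, e.g. for the CV_32S sum variant:

for (int x = 0; x < out.cols; x++) {
    int columnSum = out.at<int>(0, x); // one sum per image column
    // ... use columnSum, e.g. to locate the gaps between characters ...
}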

OpenCV Mat vector assignment to a row of a matrix, fastest way?

What is the fastest way of assigning a vector to a matrix row in a loop? I want to fill a data matrix along its rows with vectors that are computed in a loop. The loop runs until all entries of the data matrix are filled with those vectors.
Currently I am using the cv::Mat::at<>() method to access the elements of the matrix and fill them from the vector; however, this seems to be quite slow. I have tried another way, using cv::Mat::X.row(index) = data_vector; it is fast, but it fills my matrix X with garbage values, and I cannot understand why.
I read that there is another way, using pointers (the fastest way), but I am not able to understand it. Can somebody explain how to use them, or another method?
Here is a part of my code:
#define OFFSET 2
cv::Mat im = cv::imread("001.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat X = cv::Mat((im.rows - 2*OFFSET)*(im.cols - 2*OFFSET), 25, CV_64FC1); // Holds the training data. Data contains image patches
cv::Mat patch = cv::Mat(5, 5, im.type()); // Holds a cropped image patch
typedef cv::Vec<double, 25> Vec25d; // X is CV_64FC1, so it must be accessed as double
int ind = 0;
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
        patch = temp_patch.clone(); // Needed because temp_patch is not continuous in memory
        patch.convertTo(patch, CV_64FC1);
        Vec25d data_vector = patch.reshape(0, 1); // make it a row vector (1x25)
        for (int i = 0; i < 25; i++)
        {
            X.at<double>(ind, i) = data_vector[i]; // Currently I am using this way (quite slow).
        }
        //X_train.row(ind) = patch.reshape(0, 1); // Tried this but it assigns some garbage values to the data matrix!
        ind += 1;
    }
}
To do it the regular OpenCV way you could do:
ImageMat.row(RowIndex) = RowMat.clone();
or
RowMat.copyTo(ImageMat.row(RowIndex));
Haven't tested for correctness or speed.
Just a couple of edits in your code
double* xBuffer = X.ptr<double>(0);
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        cv::Mat temp_patch = im(cv::Rect(col, row, 5, 5)); // crop an image patch (5x5) at each pixel
        patch = temp_patch.clone(); // Needed because temp_patch is not continuous in memory
        patch.convertTo(patch, CV_64FC1);
        memcpy(xBuffer, patch.data, 25*sizeof(double));
        xBuffer += 25;
    }
}
Also, you don't seem to do any computation in patch, just extract grey-level values, so you can create X with the same type as im and convert it to double at the end. That way you could memcpy each row of your patch directly, the source address in memory being unsigned char* buffer = im.ptr(row) + col.
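A sketch of that suggestion (assuming im is a continuous CV_8UC1 image; the temporary name X8u is just for illustration):

// Build the data matrix as 8-bit first: copy each of the five 5-byte
// patch rows straight out of the source image, then convert once.
cv::Mat X8u((im.rows - 2*OFFSET)*(im.cols - 2*OFFSET), 25, CV_8UC1);
int ind = 0;
for (int row = 0; row < (im.rows - 2*OFFSET); row++){
    for (int col = 0; col < (im.cols - 2*OFFSET); col++){
        uchar* dst = X8u.ptr<uchar>(ind);
        for (int r = 0; r < 5; r++){
            memcpy(dst + 5*r, im.ptr<uchar>(row + r) + col, 5);
        }
        ind += 1;
    }
}
cv::Mat X;
X8u.convertTo(X, CV_64FC1); // one conversion at the end instead of one per patch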
According to the docs:
if you need to process a whole row of a matrix, the most efficient way is to get the pointer to the row first, and then just use the plain C operator []:
// compute sum of positive matrix elements
// (assuming that M is a double-precision matrix)
double sum = 0;
for(int i = 0; i < M.rows; i++)
{
    const double* Mi = M.ptr<double>(i);
    for(int j = 0; j < M.cols; j++)
        sum += std::max(Mi[j], 0.);
}

OpenCV VLFeat SLIC function call

I am trying to use the vl_slic_segment function of the VLFeat library with an input image stored in an OpenCV Mat. My code compiles and runs, but the output superpixel values do not make sense. Here is my code so far:
Mat bgrUChar = imread("/pathtowherever/image.jpg");
Mat bgrFloat;
bgrUChar.convertTo(bgrFloat, CV_32FC3, 1.0/255);
cv::Mat labFloat;
cvtColor(bgrFloat, labFloat, CV_BGR2Lab);
Mat labels(labFloat.size(), CV_32SC1);
vl_slic_segment(labels.ptr<vl_uint32>(),labFloat.ptr<const float>(),labFloat.cols,labFloat.rows,labFloat.channels(),30,0.1,25);
I have tried not converting to the Lab color space and setting different regionSize/regularization values, but the output is always very glitchy. I am able to retrieve the label values correctly; the problem is that each label is usually scattered over a small non-contiguous area.
I think the format of my input data is wrong, but I can't figure out how to pass it properly to the vl_slic_segment function.
Thank you in advance!
EDIT
Thank you David; as you helped me understand, vl_slic_segment wants the data ordered as [LLLLLAAAAABBBBB], whereas OpenCV orders its data as [LABLABLABLABLAB] for the Lab color space.
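So the fix is to rearrange the interleaved OpenCV data into one plane per channel before calling vl_slic_segment; for the float Lab image from the question, roughly like this sketch:

// Rearrange interleaved Lab float data (LABLAB...) into planar order
// (LLL...AAA...BBB...) as expected by vl_slic_segment.
std::vector<float> planar(labFloat.rows * labFloat.cols * 3);
for (int y = 0; y < labFloat.rows; ++y) {
    for (int x = 0; x < labFloat.cols; ++x) {
        cv::Vec3f px = labFloat.at<cv::Vec3f>(y, x);
        for (int c = 0; c < 3; ++c) {
            planar[x + labFloat.cols*y + labFloat.cols*labFloat.rows*c] = px[c];
        }
    }
}
vl_slic_segment(labels.ptr<vl_uint32>(), planar.data(),
                labFloat.cols, labFloat.rows, 3, 30, 0.1, 25);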
In the course of my bachelor thesis I have to use VLFeat's SLIC implementation as well. You can find a short example applying VLFeat's SLIC on Lenna.png on GitHub: https://github.com/davidstutz/vlfeat-slic-example.
Maybe a look at main.cpp will help you figure out how to convert the images obtained by OpenCV to the right format:
// OpenCV can be used to read images.
#include <opencv2/opencv.hpp>

// The VLFeat header files need to be declared external.
extern "C" {
    #include "vl/generic.h"
    #include "vl/slic.h"
}

int main() {
    // Read the Lenna image. The matrix 'mat' will have 3 8 bit channels
    // corresponding to BGR color space.
    cv::Mat mat = cv::imread("Lenna.png", CV_LOAD_IMAGE_COLOR);

    // Convert image to one-dimensional array.
    float* image = new float[mat.rows*mat.cols*mat.channels()];
    for (int i = 0; i < mat.rows; ++i) {
        for (int j = 0; j < mat.cols; ++j) {
            // Assuming three channels ...
            image[j + mat.cols*i + mat.cols*mat.rows*0] = mat.at<cv::Vec3b>(i, j)[0];
            image[j + mat.cols*i + mat.cols*mat.rows*1] = mat.at<cv::Vec3b>(i, j)[1];
            image[j + mat.cols*i + mat.cols*mat.rows*2] = mat.at<cv::Vec3b>(i, j)[2];
        }
    }

    // The algorithm will store the final segmentation in a one-dimensional array.
    vl_uint32* segmentation = new vl_uint32[mat.rows*mat.cols];
    vl_size height = mat.rows;
    vl_size width = mat.cols;
    vl_size channels = mat.channels();

    // The region size defines the number of superpixels obtained.
    // Regularization describes a trade-off between the color term and the
    // spatial term.
    vl_size region = 30;
    float regularization = 1000.;
    vl_size minRegion = 10;

    vl_slic_segment(segmentation, image, width, height, channels, region, regularization, minRegion);

    // Convert segmentation.
    int** labels = new int*[mat.rows];
    for (int i = 0; i < mat.rows; ++i) {
        labels[i] = new int[mat.cols];
        for (int j = 0; j < mat.cols; ++j) {
            labels[i][j] = (int) segmentation[j + mat.cols*i];
        }
    }

    // Compute a contour image: this actually colors every border pixel
    // red such that we get relatively thick contours.
    int label = 0;
    int labelTop = -1;
    int labelBottom = -1;
    int labelLeft = -1;
    int labelRight = -1;
    for (int i = 0; i < mat.rows; i++) {
        for (int j = 0; j < mat.cols; j++) {
            label = labels[i][j];

            labelTop = label;
            if (i > 0) {
                labelTop = labels[i - 1][j];
            }

            labelBottom = label;
            if (i < mat.rows - 1) {
                labelBottom = labels[i + 1][j];
            }

            labelLeft = label;
            if (j > 0) {
                labelLeft = labels[i][j - 1];
            }

            labelRight = label;
            if (j < mat.cols - 1) {
                labelRight = labels[i][j + 1];
            }

            if (label != labelTop || label != labelBottom || label != labelLeft || label != labelRight) {
                mat.at<cv::Vec3b>(i, j)[0] = 0;
                mat.at<cv::Vec3b>(i, j)[1] = 0;
                mat.at<cv::Vec3b>(i, j)[2] = 255;
            }
        }
    }

    // Save the contour image.
    cv::imwrite("Lenna_contours.png", mat);

    return 0;
}
In addition, have a look at README.md in the GitHub repository. The following figures show some example outputs: the regularization set to 1, 100 and 1000 (with region size 30), and the region size set to 20 and 40 (with regularization 1000).
Figure 1: Superpixel segmentation with region size set to 30 and regularization set to 1.
Figure 2: Superpixel segmentation with region size set to 30 and regularization set to 100.
Figure 3: Superpixel segmentation with region size set to 30 and regularization set to 1000.
Figure 4: Superpixel segmentation with region size set to 20 and regularization set to 1000.
Figure 5: Superpixel segmentation with region size set to 40 and regularization set to 1000.

Problems writing to a subsection of a Mat object

I'm new to OpenCV and have some trouble writing to a subrange of a Mat object.
The code below iterates over a given image. For each pixel it takes the pixels within a 5x5 range, finds the brightest one, and sets all other pixels to 0.
I call the function multiple times. After a random number of calls it gives me a segmentation fault or "malloc memory corruption". Sometimes I can call the function ten times with no problems, sometimes only twice, then the program stops.
I tracked the problem down to the line where I write to the original image through the subimage:
subimage.at<uchar>(rowSubimage,colSubimage) = 0;
Here is the function that drives me crazy:
void findMaxAndBlackout(Mat& image, int size){
    Point centralPoint;
    Size rangeSize = Size(size, size);
    Mat subimage;
    Rect range;

    // iterate the image
    for(int row = 0; row <= image.rows - size; row++){
        for(int col = 0; col <= image.cols - size; col++){
            centralPoint = Point(col, row);
            range = Rect(centralPoint, rangeSize);

            // slice submatrix and find max
            subimage = image(range);
            double max;
            minMaxLoc(subimage, NULL, &max, NULL, NULL);

            // iterate the surrounding
            for(int rowSubimage = 0; rowSubimage <= subimage.rows; rowSubimage++){
                for(int colSubimage = 0; colSubimage <= subimage.cols; colSubimage++){
                    if(subimage.at<uchar>(rowSubimage, colSubimage) < max){
                        // this line causes the trouble
                        subimage.at<uchar>(rowSubimage, colSubimage) = 0;
                    }
                }
            }
        }
    }
}
The Mat object is generated using:
Mat houghImage = imread("small_schachbrett1_cam.png", CV_LOAD_IMAGE_GRAYSCALE);
Please help me understand the problem.
If you know a better or more efficient way to achieve the same result, please let me know. I am open to any improvements.
Regards
benniz
You are out of range: the inner loops use <=, so they touch row subimage.rows and column subimage.cols, one past the end of the subimage - that is what corrupts memory:

rowSubimage <= subimage.rows
colSubimage <= subimage.cols

should be

rowSubimage < subimage.rows
colSubimage < subimage.cols

(The outer loops are actually fine: row <= image.rows - size still keeps the Rect inside the image.)
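As for a faster way to achieve a similar effect: a common trick for this kind of non-maximum suppression is to compare the image against its own dilation, so that every pixel which is not the maximum of its neighbourhood is zeroed in one vectorized step. This is not byte-for-byte identical to the sliding-window loop above (each pixel is compared against the window centred on it rather than against every window containing it), but it is usually what is wanted and avoids the per-pixel submatrix slicing entirely - a sketch:

void blackoutNonMaxima(cv::Mat& image, int size) {
    cv::Mat dilated;
    cv::dilate(image, dilated,
               cv::getStructuringElement(cv::MORPH_RECT, cv::Size(size, size)));
    // Every pixel that is smaller than the maximum of its size x size
    // neighbourhood is not a local maximum - set it to 0.
    image.setTo(0, image < dilated);
}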