ITK Fast Marching output - C++

I'm using ITK to do some preprocessing and I wanted to test something with the Fast Marching filter and the Geodesic Active Contour filter.
I'm following the algorithm described in the ITK software guide, section 9.3.3.
However, I'm not getting the expected results. I'm working with a 3D image.
Here is my code:
AnisotropicDiffusionFilter::Pointer anisotropic_filter = AnisotropicDiffusionFilter::New();
anisotropic_filter->SetInput(itk_image_in);
anisotropic_filter->SetTimeStep(0.0625);
anisotropic_filter->SetNumberOfIterations(5);
anisotropic_filter->SetConductanceParameter(3.0);
anisotropic_filter->Update();
GradientFilter::Pointer gradient_filter = GradientFilter::New();
gradient_filter->SetInput(anisotropic_filter->GetOutput());
gradient_filter->SetSigma(0.5);
gradient_filter->Update();
SigmoidFilter::Pointer sigmoid_filter = SigmoidFilter::New();
sigmoid_filter->SetInput(gradient_filter->GetOutput());
sigmoid_filter->SetOutputMinimum(0.0);
sigmoid_filter->SetOutputMaximum(1.0);
sigmoid_filter->SetAlpha(-1.5);
sigmoid_filter->SetBeta(4.0);
sigmoid_filter->Update();
FastMarchingFilter::Pointer fast_marching = FastMarchingFilter::New();
NodeContainer::Pointer seeds = NodeContainer::New();
Node node;
const double seedValue = -50.0;
node.SetValue(seedValue);
seeds->Initialize();
vector<GeoVec3s>::iterator it = m_clicks_.begin();
int i=0;
for(; it != m_clicks_.end(); it++)
{
    itkIndex index;
    index[0] = (*it)[0];
    index[1] = (*it)[1];
    index[2] = (*it)[2];
    node.SetIndex(index);
    seeds->InsertElement(i++, node);
}
fast_marching->SetTrialPoints(seeds);
fast_marching->SetSpeedConstant(1.0);
fast_marching->SetStoppingValue(100);
//fast_marching->SetInput(sigmoid_filter->GetOutput());
fast_marching->SetOutputSize(sigmoid_filter->GetOutput()->GetBufferedRegion().GetSize());
fast_marching->Update();
GeodesicFilter::Pointer geodesic_filter = GeodesicFilter::New();
geodesic_filter->SetInput(fast_marching->GetOutput());
geodesic_filter->SetFeatureImage(sigmoid_filter->GetOutput());
geodesic_filter->SetPropagationScaling(0.5);
geodesic_filter->SetCurvatureScaling(5.0);
geodesic_filter->SetAdvectionScaling(1.0);
geodesic_filter->SetMaximumRMSError( 0.02 );
geodesic_filter->Update();
BinaryThresholdFilter::Pointer thresholder = BinaryThresholdFilter::New();
thresholder->SetLowerThreshold(-1000);
thresholder->SetUpperThreshold(0);
thresholder->SetOutsideValue(0);
thresholder->SetInsideValue(255);
thresholder->SetInput( geodesic_filter->GetOutput() );
I'm using the metrics described in this paper, whose goal is the same as mine.
I have a few questions:
The Fast Marching filter should output a distance map, right? Instead, when I write my volume out as a series of PNGs (with values between 0 and 4095), I get a binary image (pixels are either 0 or 4095). I would expect a greyscale volume indicating the time needed for each pixel to be reached from the seeds.
Following the procedure described by Suzuki, I managed to make the algorithm work more or less, but only after changing the values of the geodesic filter's parameters. I don't remember the exact values, but they weren't close to those described in the paper. Since we are working with the sigmoid input, which is normalized between 0 and 1, what is happening?
Should I rather use a constant speed function for the fast marching filter or the sigmoid image? When should either method be preferred?
I'm using a re-scaler to output my float images (output from the filters). Could this be the reason for the inconsistencies I'm seeing?
Any advice on what I could be doing wrong?
Thanks.

OK, so I found my problem. The Fast Marching filter does output a time-crossing map (distance map), but since I specified a stopping value, all pixels that weren't visited were set to a very large value (1.7e+38, which is half the maximum of the output pixel type; my output was float, whose maximum is about 3.4e+38). This squeezed the whole dynamic range when I applied the rescale filter, so the result looked like a binary image.
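For anyone hitting the same issue, here is a hedged sketch (not from my original pipeline) of clamping the time-crossing map at the stopping value before rescaling, so the unvisited pixels no longer squeeze the dynamic range; the pixel types and the 3D dimension are assumptions:
// Assumed types for illustration; requires itkThresholdImageFilter.h and
// itkRescaleIntensityImageFilter.h.
typedef itk::Image<float, 3> FloatImageType;
typedef itk::Image<unsigned short, 3> OutputImageType;
typedef itk::ThresholdImageFilter<FloatImageType> ClampFilterType;
typedef itk::RescaleIntensityImageFilter<FloatImageType, OutputImageType> RescaleFilterType;

ClampFilterType::Pointer clamp = ClampFilterType::New();
clamp->SetInput(fast_marching->GetOutput());
clamp->ThresholdAbove(100.0);   // same value as SetStoppingValue
clamp->SetOutsideValue(100.0);  // unvisited pixels collapse to the stopping value

RescaleFilterType::Pointer rescale = RescaleFilterType::New();
rescale->SetInput(clamp->GetOutput());
rescale->SetOutputMinimum(0);
rescale->SetOutputMaximum(4095);
rescale->Update();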
I think better results are achieved with a sigmoid image as input for the fast marching filter.
Thanks #nav for the advice.


Fast, good quality pixel interpolation for extreme image downscaling

In my program, I am downscaling an image of 500px or larger to an extreme level of approx 16px-32px. The source image is user-specified so I do not have control over its size. As you can imagine, few pixel interpolations hold up and inevitably the result is heavily aliased.
I've tried bilinear, bicubic and square average sampling. The square average sampling actually provides the most decent results but the smaller it gets, the larger the sampling radius has to be. As a result, it gets quite slow - slower than the other interpolation methods.
I have also tried an adaptive square average sampling so that the smaller it gets the greater the sampling radius, while the closer it is to its original size, the smaller the sampling radius. However, it produces problems and I am not convinced this is the best approach.
So the question is: What is the recommended type of pixel interpolation that is fast and works well on such extreme levels of downscaling?
I do not wish to use a library so I will need something that I can code by hand and isn't too complex. I am working in C++ with VS 2012.
Here's some example code I've tried, as requested (hopefully without errors from my pseudo-code cut and paste). This performs a 7x7 average downscale, and although it gives a better result than bilinear or bicubic interpolation, it also takes quite a performance hit:
// Sizing control
ctl(0): "Resize",Range=(0,800),Val=100

// Variables
float fracx,fracy;
int Xnew,Ynew,p,q,Calc;
int x,y,z,p1,q1,i,j;

// New image dimensions
Xnew = image->width*ctl(0)/100;
Ynew = image->height*ctl(0)/100;

for (y=0; y<image->height; y++){ // rows
    for (x=0; x<image->width; x++){ // columns
        p1 = x*image->width/Xnew;
        q1 = y*image->height/Ynew;
        for (z=0; z<3; z++){ // channels
            Calc = 0; // reset the accumulator for each channel
            for (i=-3; i<=3; i++) {
                for (j=-3; j<=3; j++) {
                    Calc += (int)(src(p1-i, q1-j, z));
                } // j
            } // i
            Calc /= 49; // 7x7 box average
            pset(x, y, z, Calc);
        } // channels
    } // columns
} // rows
Thanks!
The first point is to use pointers to your data. Never use indexes at every pixel. When you write src(p1-i,q1-j,z) or pset(x, y, z, Calc), how much computation is being done? Use pointers into the data and manipulate those directly.
Second: your algorithm is wrong. You don't want an average filter; you want to overlay a grid on your source image and, for every grid cell, compute the average and write it to the corresponding pixel of the output image.
The specific solution should be tailored to your data representation, but it could be something like this:
#include <vector>
#include <cstdint>
#include <cstring>
#include <algorithm>
#include <functional>

std::vector<uint32_t> accum(Xnew);
std::vector<uint32_t> count(Xnew);
uint32_t *paccum, *pcount;
uint8_t* pin = /*pointer to input data*/;
uint8_t* pout = /*pointer to output data*/;
for (int dr = 0, sr = 0, w = image->width, h = image->height; sr < h; ++dr) {
    memset(paccum = accum.data(), 0, Xnew * sizeof(uint32_t));
    memset(pcount = count.data(), 0, Xnew * sizeof(uint32_t));
    while (sr * Ynew / h == dr) {
        paccum = accum.data();
        pcount = count.data();
        for (int dc = 0, sc = 0; sc < w; ++sc) {
            *paccum += *pin;   // accumulate the source pixel into the current output column
            *pcount += 1;
            ++pin;
            if (sc * Xnew / w > dc) {
                ++dc;
                ++paccum;
                ++pcount;
            }
        }
        sr++;
    }
    std::transform(begin(accum), end(accum), begin(count), pout, std::divides<uint32_t>());
    pout += Xnew;
}
This was written using my own library (still in development) and it seems to work, but I later changed the variable names to make it simpler here, so I don't guarantee anything!
The idea is to have a local buffer of 32 bit ints which can hold the partial sum of all pixels in the rows which fall in a row of the output image. Then you divide by the cell count and save the output to the final image.
The first thing you should do is set up a performance evaluation system to measure how much any change impacts performance.
As said previously, you should use pointers rather than indexes, for a (probably) substantial speed-up, and you should not simply average: a basic averaging of pixels is essentially a blur filter.
I would highly advise you to rework your code to use "kernels". A kernel is a matrix representing the weight of each source pixel in the result. That way, you will be able to test different strategies and optimize quality.
Example of kernels:
https://en.wikipedia.org/wiki/Kernel_(image_processing)
Upsampling/downsampling kernel:
http://www.johncostella.com/magic/
Note: from the code it seems you apply a 7x7 box kernel (49 equal weights, scaled by 1/49). The 3x3 analogue of that kernel, written out, would be:
[1 1 1]
[1 1 1] * 1/9
[1 1 1]
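For illustration, here is a minimal, self-contained sketch of applying such an NxN box kernel to a single 8-bit channel in plain C++ (the row-major buffer layout and the clamped borders are assumptions, not taken from the question's code):
#include <algorithm>
#include <cstdint>
#include <vector>

// Apply an NxN box kernel (all weights equal, N = 2*radius + 1) to one 8-bit
// channel stored row-major in src (width*height). Border pixels are clamped.
std::vector<uint8_t> boxFilter(const std::vector<uint8_t>& src,
                               int width, int height, int radius)
{
    std::vector<uint8_t> dst(src.size());
    const int n = 2 * radius + 1;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int sum = 0;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    // clamp the sampling coordinates at the image border
                    int sx = std::min(std::max(x + dx, 0), width - 1);
                    int sy = std::min(std::max(y + dy, 0), height - 1);
                    sum += src[sy * width + sx];
                }
            }
            dst[y * width + x] = static_cast<uint8_t>(sum / (n * n));
        }
    }
    return dst;
}
Once the weights live in a kernel like this, swapping in a Gaussian or the magic-kernel weights from the links above only means changing the matrix, not the loop structure.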

Logistic regression for fault detection in an image

Basically, I want to detect a fault in an image using logistic regression. I'm hoping to get some feedback on my approach, which is as follows:
For training:
Take a small section of the image marked "bad" and "good"
Greyscale them, then break them up into a series of 5*5 pixel segments
Calculate the histogram of pixel intensities for each of these segments (a sketch of this step is given after the list)
Pass the histograms along with the labels to the Logistic Regression class for training
Break the whole image into 5*5 segments and predict "good"/"bad" for each segment.
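As an illustration of the patch-histogram step, here is a hypothetical helper (not taken from my code; the 256-bin size and the row-major greyscale layout are assumptions):
#include <array>
#include <cstdint>

// Hypothetical helper: build a normalized 256-bin intensity histogram for one
// 5x5 greyscale patch whose top-left corner is (px, py). `image` is assumed to
// be row-major 8-bit greyscale data with the given width.
std::array<float, 256> PatchHistogram(const uint8_t* image, int width,
                                      int px, int py)
{
    std::array<float, 256> hist{};  // zero-initialised
    for (int y = py; y < py + 5; ++y)
        for (int x = px; x < px + 5; ++x)
            hist[image[y * width + x]] += 1.0f;
    for (float& bin : hist)
        bin /= 25.0f;               // normalise by the patch size (5*5)
    return hist;
}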
Using the sigmoid function, the logistic regression hypothesis is:
1 / (1 + e^(-xθ))
where x is the input vector and theta (θ) is the weight vector. I use gradient descent to train the model. My code for this is:
void LogisticRegression::Train(float **trainingSet, float *labels, int m)
{
    float tempThetaValues[m_NumberOfWeights];
    for (int iteration = 0; iteration < 10000; ++iteration)
    {
        // Reset the temp values for theta.
        memset(tempThetaValues, 0, m_NumberOfWeights*sizeof(float));
        float error = 0.0f;
        // For each training example in the set
        for (int trainingExample = 0; trainingExample < m; ++trainingExample)
        {
            float *x = trainingSet[trainingExample];
            float y = labels[trainingExample];
            // Partial derivative of the cost function.
            float h = Hypothesis(x) - y;
            for (int i = 0; i < m_NumberOfWeights; ++i)
            {
                tempThetaValues[i] += h*x[i];
            }
            float cost = h; // h is already Hypothesis(x) - y; the actual J(theta) kept giving NaN, so use MSE for now
            error += cost*cost;
        }
        // Update the weights using batch gradient descent.
        for (int theta = 0; theta < m_NumberOfWeights; ++theta)
        {
            m_pWeights[theta] = m_pWeights[theta] - 0.1f*tempThetaValues[theta];
        }
        printf("Cost on iteration[%d] = %f\n", iteration, error);
    }
}
Where sigmoid and the hypothesis are calculated using:
float LogisticRegression::Sigmoid(float z) const
{
    return 1.0f/(1.0f + exp(-z));
}

float LogisticRegression::Hypothesis(float *x) const
{
    float z = 0.0f;
    for (int index = 0; index < m_NumberOfWeights; ++index)
    {
        z += m_pWeights[index]*x[index];
    }
    return Sigmoid(z);
}
And the final prediction is given by:
int LogisticRegression::Predict(float *x)
{
    return Hypothesis(x) > 0.5f;
}
Since we are using a histogram of intensities, the input and weight arrays are 255 elements each. My hope is to use it on something like a picture of an apple with a bruise and use it to identify the bruised parts. The (normalized) histograms for the whole bruised and good apple training sets look something like this:
For the "good" sections of the apple (y=0):
For the "bad" sections of the apple (y=1):
I'm not 100% convinced that using the intensities alone will produce the results I want, but even so, using it on a clearly separable data set isn't working either. To test it I passed it a labeled, completely white image and a completely black image. I then ran it on the small image below:
Even on this image it fails to identify any segments as being black.
Using MSE I see that the cost converges downwards to a point where it remains; for the black and white test it starts at about 250 and settles on 100. The apple chunks start at about 4000 and settle on 1600.
What I can't tell is where the issues are.
Is the approach sound but the implementation broken? Is logistic regression the wrong algorithm to use for this task? Is gradient descent not robust enough?
I forgot to answer this... Basically, the problem was in my histograms, which weren't being memset to 0 when they were generated. As to the overall question of whether logistic regression with greyscale images was a good solution, the answer is no. Greyscale just didn't provide enough information for good classification. Using all colour channels was a bit better, but I think the complexity of the problem I was trying to solve (bruises in apples) was a bit much for simple logistic regression on its own. You can see the results on my blog here.

Color balance in an image using C++ and OpenCV

I'm trying to score the color balance of an image using C++ and OpenCV.
To do this the easiest way is to count the number of pixels in each color and then see if one of the colors is more prevalent.
I figured I should probably use calcHist, and with the split function I can split an image into R, G, and B histograms. However, I am unsure about what to do next. I could probably walk through all the bins and just see how many pixels are in them, but this seems like a lot of work (I currently use 256 bins).
Is there a faster way to count the pixels in a color range? Also, I am not sure how it would work if white or black is the most prevalent color.
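For reference, here is a minimal OpenCV sketch of the counting step described above (split plus calcHist, then summing a bin range); the function name and the particular intensity range are illustrative only:
#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// Count how many pixels of one channel of a BGR image fall in [lo, hi)
// using a 256-bin histogram.
double countInRange(const Mat& bgr, int channel, int lo, int hi)
{
    std::vector<Mat> planes;
    split(bgr, planes);  // planes[0] = B, planes[1] = G, planes[2] = R

    int histSize = 256;
    float range[] = {0, 256};
    const float* histRange = {range};
    Mat hist;
    calcHist(&planes[channel], 1, 0, Mat(), hist, 1, &histSize, &histRange);

    // Sum the bins covering the requested intensity range.
    return sum(hist.rowRange(lo, hi))[0];
}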
An automatic color balance algorithm is described in this link: http://web.stanford.edu/~sujason/ColorBalancing/simplestcb.html
For C++ code you can refer to this link: https://www.morethantechnical.com/2015/01/14/simplest-color-balance-with-opencv-wcode/
/// perform the Simplest Color Balancing algorithm
void SimplestCB(Mat& in, Mat& out, float percent) {
    assert(in.channels() == 3);
    assert(percent > 0 && percent < 100);
    float half_percent = percent / 200.0f;

    vector<Mat> tmpsplit; split(in, tmpsplit);
    for (int i = 0; i < 3; i++) {
        //find the low and high percentile values (based on the input percentile)
        Mat flat; tmpsplit[i].reshape(1, 1).copyTo(flat);
        cv::sort(flat, flat, CV_SORT_EVERY_ROW + CV_SORT_ASCENDING);
        int lowval = flat.at<uchar>(cvFloor(((float)flat.cols) * half_percent));
        int highval = flat.at<uchar>(cvCeil(((float)flat.cols) * (1.0 - half_percent)));
        cout << lowval << " " << highval << endl;

        //saturate below the low percentile and above the high percentile
        tmpsplit[i].setTo(lowval, tmpsplit[i] < lowval);
        tmpsplit[i].setTo(highval, tmpsplit[i] > highval);

        //scale the channel
        normalize(tmpsplit[i], tmpsplit[i], 0, 255, NORM_MINMAX);
    }
    merge(tmpsplit, out);
}

// Usage example
int main() {
    Mat tmp, im = imread("lily.png");
    SimplestCB(im, tmp, 1);
    imshow("orig", im);
    imshow("balanced", tmp);
    waitKey(0);
    return 0;
}
Colour balance normally means looking at a white (or grey) surface and checking the ratios of red and blue to green. A perfectly balanced system would have equal signal levels in all three.
You can then simply work out the average red and blue gains from the test grey-card image and apply the same scaling to your real image.
Doing it on a live image with no reference is trickier; you have to find areas that are probably white (i.e. bright and nearly r=g=b) and use them as the reference.
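A minimal OpenCV sketch of that grey-card idea (the function name and the choice of per-channel means as the gains are assumptions for illustration):
#include <opencv2/opencv.hpp>
#include <vector>

// Compute gains from a grey-card shot so the red and blue means match the
// green mean, then apply the same gains to the real image.
cv::Mat applyGreyCardBalance(const cv::Mat& greyCard, const cv::Mat& image)
{
    cv::Scalar m = cv::mean(greyCard);   // per-channel averages: B, G, R
    double gainB = m[1] / m[0];
    double gainR = m[1] / m[2];

    std::vector<cv::Mat> ch;
    cv::split(image, ch);
    ch[0] = ch[0] * gainB;               // scale blue towards green
    ch[2] = ch[2] * gainR;               // scale red towards green

    cv::Mat balanced;
    cv::merge(ch, balanced);
    return balanced;
}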
There's no definitive algorithm for colour balance, so anything you might implement, however good it is, will probably fail in some conditions.
One of the simplest algorithms is called Grey World, and it assumes that, statistically, the average colour of a scene should be grey. If it isn't, it means the image needs to be corrected towards grey. So, very simply (in pseudo-Python), if you have an image RGB:
cc[0] = np.mean(RGB[:,0]) # calculating channel-wise average
cc[1] = np.mean(RGB[:,1])
cc[2] = np.mean(RGB[:,2])
cc = cc / np.sqrt((cc**2).sum()) # normalise the light (you might want to
                                 # play with this a bit)
RGB /= cc # divide every pixel by the estimated light
Note that here I'm assuming RGB is an array of floats with values between 0 and 1. Something else that helps is to exclude from the average any pixels with values below or above certain thresholds (e.g., below 0.05 and above 0.95). This way you ignore pixels whose values are heavily influenced by noise (small values) and pixels that saturated the camera sensor and whose colour may not be reliable (large values).
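Since the rest of this thread is C++/OpenCV, here is roughly the same Grey World idea as a hedged C++ sketch, under the same assumption of a float image with values in [0, 1]:
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Grey World sketch: divide every pixel by the (normalised) mean scene colour.
// Expects a CV_32FC3 image with values in [0, 1].
cv::Mat greyWorld(const cv::Mat& rgb)
{
    cv::Scalar cc = cv::mean(rgb);  // per-channel averages
    double norm = std::sqrt(cc[0]*cc[0] + cc[1]*cc[1] + cc[2]*cc[2]);

    std::vector<cv::Mat> ch;
    cv::split(rgb, ch);
    for (int i = 0; i < 3; ++i)
        ch[i] = ch[i] / (cc[i] / norm);  // divide the channel by the estimated light

    cv::Mat out;
    cv::merge(ch, out);
    return out;
}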

Template Matching with Mask

I want to perform template matching with a mask. In general, template matching can be made faster by converting the image from the spatial domain into the frequency domain. But is there any method I can apply if I want to do the same with a mask? I'm using OpenCV with C++. Is there already a matching function in OpenCV for this task?
My current Approach:
Bitwise XOR image A and image B under the mask.
Count the non-zero pixels.
Fill the resulting matrix with this count.
Search for maxima.
A few parameters I'm guessing at now are:
Skip the tile position if the matches are less than 25%.
Skip the tile position if the previous tile's matches are less than 50%.
My question: is there any algorithm to do this matching already? Is there any mathematical operation which can speed up this process?
With binary images, you can directly use Hu moments and the Mahalanobis distance to find whether image A is similar to image B. If the distance tends to 0, the images are the same.
Of course you can also use feature detectors to see what matches, but for pictures like these, Hu moments and feature detectors will give approximately the same results, and Hu moments are more efficient.
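A minimal sketch of the Hu-moment part (computing the seven invariants per binary image; how you then compare the two vectors, e.g. with a Mahalanobis or Euclidean distance, is up to you):
#include <opencv2/opencv.hpp>

// Compute the seven Hu moment invariants of a binary image.
// Two images with very similar shapes will give very similar vectors.
void huOfBinaryImage(const cv::Mat& binaryImage, double hu[7])
{
    cv::Moments m = cv::moments(binaryImage, /*binaryImage=*/true);
    cv::HuMoments(m, hu);
}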
Using findContours, you can extract the black regions inside the white star and fill them, in order to have image A = image B.
Another approach: using findContours on your mask and applying the result to image A (extracting the region of interest), you can extract what's inside the star and count how many black pixels you have (the mismatching ones).
I have the same requirement and have tried almost the same approach. As in the image, I want to match the castle. The castle has a different shield image and a variable-length clan name, as well as a grass background (this image comes from the game Clash of Clans). The normal OpenCV matchTemplate does not work, so I wrote my own.
I follow the approach of matchTemplate to create a result image, but with a different algorithm.
The core idea is to count the matched pixels under the mask. The code follows; it is simple.
This works fine, but the time cost is high. As you can see, it costs 457ms.
Now I am working on the optimization.
The source and template images are both CV_8UC3, and the mask image is CV_8U. Matching one channel is OK; it is faster, but the cost is still high.
Mat tmp(matTempl.rows, matTempl.cols, matTempl.type());
int matchCount = 0;
float maxVal = 0;
double areaInvert = 1.0 / countNonZero(matMask);
for (int j = 0; j < resultRows; j++)
{
    float* data = imgResult.ptr<float>(j);
    for (int i = 0; i < resultCols; i++)
    {
        Mat matROI(matSource, Rect(i, j, matTempl.cols, matTempl.rows));
        tmp.setTo(Scalar(0));
        bitwise_xor(matROI, matTempl, tmp);
        bitwise_and(tmp, matMask, tmp);
        // fraction of pixels under the mask that match
        data[i] = 1.0f - float(countNonZero(tmp) * areaInvert);
        if (data[i] > matchingDegree)
        {
            SRect rc;
            rc.left = i;
            rc.top = j;
            rc.right = i + imgTemplate.cols;
            rc.bottom = j + imgTemplate.rows;
            rcOuts.push_back(rc);
            if (data[i] > maxVal)
            {
                maxVal = data[i];
                maxIndex = rcOuts.size() - 1;
            }
            if (++matchCount == maxMatchs)
            {
                Log_Warn("Too many matches, stopped at: " << matchCount);
                return true;
            }
        }
    }
}
Result screenshot: http://i.stack.imgur.com/mJrqU.png
New addition:
I succeeded in optimizing the algorithm by using key points. Calculating every point is costly, but calculating only several key points is much faster. See the picture: the cost decreases greatly and is now about 7ms.
Result screenshot: http://i.stack.imgur.com/ePcD9.png
There is a technical formulation for template matching with mask in OpenCV Documentation, which works well. It can be used by calling cv::matchTemplate and its source code is also available under the Intel License.
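For completeness, a minimal sketch of calling it with a mask; newer OpenCV versions (3.x and later) accept an optional mask argument to cv::matchTemplate for certain methods such as TM_CCORR_NORMED, and the file names here are placeholders:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat image = imread("source.png");
    Mat templ = imread("template.png");
    Mat mask  = imread("mask.png", IMREAD_GRAYSCALE);  // non-zero = pixels to compare

    Mat result;
    matchTemplate(image, templ, result, TM_CCORR_NORMED, mask);

    double maxVal;
    Point maxLoc;
    minMaxLoc(result, 0, &maxVal, 0, &maxLoc);  // best match position and score
    return 0;
}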

OpenCV: Output of the predict function of Expectation Maximization

Background:
I have 2 sets of color pixels from an image, one corresponding to the background, the other to the foreground. Next, I train a Gaussian Mixture Model for each set using EM from OpenCV. My aim is to find the probability of a random pixel belonging to the foreground and to the background, so I call the "predict" function of each EM on my pixel.
Question:
I don't understand the values returned by this function. In the documentation of OpenCV, it is written:
The method returns a two-element double vector. Zero element is a likelihood logarithm value for the sample. First element is an index of the most probable mixture component for the given sample.
http://docs.opencv.org/modules/ml/doc/expectation_maximization.html?highlight=predict#Vec2d%20EM::predict%28InputArray%20sample,%20OutputArray%20probs%29%20const
I don't understand what "likelihood logarithm" means. In my results, I sometimes have negative values and values > 1. Has anyone who used the same function had this kind of result, or results between 0 and 1? What can I conclude from my results?
How can I get the probability of a pixel belonging to the whole GMM (not the probability of belonging to each cluster of the GMM)?
Here is my code:
Mat mask = imread("mask.tif", 0);
Mat formerImage = imread("ImageFormer.tif");
Mat currentImage = imread("ImageCurrent.tif");
// number of cluster in the GMM
int nClusters = 5;
int countB=0, countF=0;
Vec3b color;
Vec2d probFg, probBg; // probabilities to belong to the foreground or background from GMMs
//count the number of pixels for each training data
for(int c=0; c<=40; c++) {
    for(int l=0; l<=40; l++) {
        if(mask.at<BYTE>(l, c)==255) {
            countF++;
        } else if(mask.at<BYTE>(l, c)==0) {
            countB++;
        }
    }
}
printf("countB %d countF %d \n", countB, countF);
Mat samplesForeground = Mat(countF,3, CV_64F);
Mat samplesBackground = Mat(countB,3, CV_64F);
// Expectation-Maximisation able to resolve the GMM and to predict the probability for a pixel to belong to the GMM.
EM em_foreground= EM(nClusters);
EM em_background= EM(nClusters);
countB=0;
countF=0;
// fill the training data from the former image depending of the mask
for(int c=0; c<=40; c++) {
    for(int l=0; l<=40; l++) {
        if(mask.at<BYTE>(l, c)==255) {
            color = formerImage.at<Vec3b>(l, c);
            samplesForeground.at<double>(countF,0)=color[0];
            samplesForeground.at<double>(countF,1)=color[1];
            samplesForeground.at<double>(countF,2)=color[2];
            countF++;
        } else if(mask.at<BYTE>(l, c)==0) {
            color = formerImage.at<Vec3b>(l, c);
            samplesBackground.at<double>(countB, 0)=color[0];
            samplesBackground.at<double>(countB, 1)=color[1];
            samplesBackground.at<double>(countB, 2)=color[2];
            countB++;
        }
    }
}
printf("countB %d countF %d \n", countB, countF);
em_foreground.train(samplesForeground);
em_background.train(samplesBackground);
Mat sample(1, 3, CV_64F);
// try every pixel of the current image and get the log likelihood
for(int c=0; c<=40; c++) {
    for(int l=0; l<=40; l++) {
        color = currentImage.at<Vec3b>(l,c);
        sample.at<double>(0)=color[0];
        sample.at<double>(1)=color[1];
        sample.at<double>(2)=color[2];
        probFg=em_foreground.predict(sample);
        probBg=em_background.predict(sample);
        if(probFg[0]>0 || probBg[0]>0)
            printf("probFg[0] %f probBg[0] %f \n", probFg[0], probBg[0]);
    }
}
EDIT
After #BrianL explained, I now understand the log likelihood.
My problem is that the log probability returned by the predict function is sometimes > 0, but it should be <= 0. Has anyone met this problem before?
I have edited the code above to show the problem. I have tried the program with images below:
The first image is the ImageCurrent.tif, the second is the ImageFormer.tif and the last one is mask.tif.
Can this be considered a bug in OpenCV? Should I open a ticket on the OpenCV bug tracker?
The "likelihood logarithm" means the log of the probability. Since for a probability p we expect 0 ≤ p ≤ 1, I would expect the values to be negative: log(p) ≤ 0. Larger negative numbers imply smaller probabilities.
This form is helpful when you are dealing with products of very small probabilities: if you multiplied the normal way, you could easily get underflow and lose precision because the probability becomes very small. But in log space the multiplication turns into an addition, which improves the accuracy and also potentially the speed of the calculation.
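A tiny self-contained illustration of that point (the per-factor probability of 0.01 and the count of 1000 are arbitrary assumptions):
#include <cmath>
#include <cstdio>

int main()
{
    double p = 1.0;      // direct product of probabilities
    double logp = 0.0;   // sum of log-probabilities
    for (int i = 0; i < 1000; ++i) {
        p *= 0.01;               // underflows to exactly 0 long before the loop ends
        logp += std::log(0.01);  // stays perfectly representable
    }
    std::printf("product = %g, log-sum = %g\n", p, logp);  // product = 0, log-sum ~ -4605.2
    return 0;
}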
The predict function is for classifying a data point. If you want to give a point a score for how likely it is to belong to any component in the model, you can use the model parameters (see the get documentation) to calculate it yourself.
As I understand it, you have two separate GMMs for the foreground and background parts of the image. The total probability of a sample pixel x in the test image, when evaluated with the foreground GMM, is
P_fg(x) = sum_{j=1..k} ( Wj_fg * Pj_fg(x) )
where
k = number of clusters in the foreground GMM
x = test sample
Pj_fg(x) = probability that sample x is in the j-th cluster according to the foreground GMM
Wj_fg = weight of the j-th cluster in the foreground GMM
Also, the sum of all weights should be 1 for each GMM.
We can do a similar calculation for the background GMM.
From looking at the EM code in OpenCV, it looks like the first of the two values that EM returns is the log likelihood. For the foreground GMM this is
log(P_fg(x_i))
I implemented your algorithm and, for each pixel in the test image, compared the log-likelihoods returned by the two GMMs and assigned the pixel to the GMM with the higher value. I got decent results.
In that respect, yes, this value is an indication of the pixel belonging to the entire GMM.
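As a sketch of that comparison, reusing the variables from the question's code (and assuming, as above, that the first element of the Vec2d returned by predict is the log likelihood of the sample under that model):
// Pick the model under which the sample pixel is more likely.
Vec2d fg = em_foreground.predict(sample);  // fg[0] = log P_fg(x)
Vec2d bg = em_background.predict(sample);  // bg[0] = log P_bg(x)
bool isForeground = fg[0] > bg[0];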
2) In my implementation of your problem, the log likelihoods of all GMMs for all test-sample pixels were always under 0.
I notice that you are doing graph-cut-based image segmentation.
You might want to take a look at the following blog post, which uses OpenCV and its GMM class in a very similar way to what you are doing, to perform graph-cut-based image segmentation. Code is given in C++ with detailed explanations. Here is the link: link
Basically, I can only say that the log probability, whether it is correct or not, is not what you are looking for. Check out the above link for details.