Threshold image data using VTK (vtkImageThreshold) - c++

I have a 3D vector field that I am storing in a vtkImageData object. The vtkImageData object contains two arrays:
a 3-component vtkDoubleArray (the vector x, y and z components)
a 1-component vtkDoubleArray containing a separate quantity
I would like to extract the corresponding elements of the two arrays for which the values of the 1-component array lie within a certain range. Here's what I've tried:
vtkSmartPointer<vtkImageThreshold> threshold =
    vtkSmartPointer<vtkImageThreshold>::New();
threshold->SetInputData(image);
threshold->SetInputArrayToProcess(1, image->GetInformation()); // 1 is the Energy array index
threshold->ThresholdBetween(1e-22, 2e-22);
threshold->Update();
vtkSmartPointer<vtkImageData> thresholdedImage = threshold->GetOutput();
I've also tried using vtkThresholdPoints but to no avail. Any suggestions would be much appreciated.

Looks like I can use this example:
vtkSmartPointer<vtkThresholdPoints> threshold =
    vtkSmartPointer<vtkThresholdPoints>::New();
threshold->SetInputData(image);
threshold->ThresholdBetween(1e-21, 2e-21);
threshold->SetInputArrayToProcess(0, 0, 0,
    vtkDataObject::FIELD_ASSOCIATION_POINTS, "Energy");
threshold->Update();
vtkSmartPointer<vtkPolyData> thresholded = threshold->GetOutput();
I didn't realise that this approach was applicable but it would seem so. This does change the type of my data from vtkImageData to vtkPolyData and I have very little idea what the arguments to vtkThresholdPoints::SetInputArrayToProcess() mean. However, it seems to do the job. I'd be happy to hear any alternative suggestions!
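For what it's worth, vtkThresholdPoints inherits SetInputArrayToProcess() from vtkAlgorithm, and my reading of that method's documentation is that the five arguments mean roughly the following (an annotated copy of the call above, not an authoritative reference):

threshold->SetInputArrayToProcess(
    0,                                       // idx: which of the filter's requested input arrays to set (this filter only uses array 0)
    0,                                       // port: the input port the array comes from
    0,                                       // connection: the connection on that input port
    vtkDataObject::FIELD_ASSOCIATION_POINTS, // the array lives in the point data rather than the cell data
    "Energy");                               // the name of the array to threshold on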

Related

How to do this specific tensor transformation in Eigen?

I am looking for an idiomatic and efficient solution for this problem:
Let's say I have a 3D tensor representing an image with 100*100 pixels on 3 color channels,
Eigen::Tensor<int, 3> input(3,100,100);
The output I would like to get could be stored in
Eigen::Tensor<int, 4> output(3,3,100,100);
I would like to project the 3D input into the 4D output in such a way that each color channel of the original tensor gets its own individual 3D tensor in the output, and each of those output channels contains the same values, that is
output(0,0,42,42) = output(0,1,42,42) = output(0,2,42,42)
output(0,0,12,12) = output(0,1,12,12) = output(0,2,12,12)
Originally I wanted to solve it with this method:
1. Chip out the individual color channels.
2. Broadcast each individual color channel to the size I need.
3. Reshape the broadcast result into the desired format (this is still just a 3D tensor at this point).
4. Concatenate the individual 3D tensors into one big 4D tensor.
I have two problems with this approach.
Firstly, I just cannot get the reshaping right: it always gives back a reshaped tensor with the dimensionality I want, but the coefficients get shuffled. I started to experiment with the layout of the tensors, but it did not seem to help.
Secondly, this seems very tedious; I feel like there should be a more convenient way to achieve this, but I could not find any hint about it in the documentation.
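No answer is recorded here, but for what it's worth, the projection described above can apparently be written without chipping and concatenating at all, by inserting a singleton dimension and broadcasting it. A minimal sketch, assuming the default column-major Tensor layout and the 3x100x100 sizes from the question:

#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main()
{
    Eigen::Tensor<int, 3> input(3, 100, 100);
    input.setRandom();

    // Insert a singleton dimension after the channel axis, then broadcast it 3 times.
    Eigen::array<Eigen::Index, 4> reshaped_dims{3, 1, 100, 100};
    Eigen::array<Eigen::Index, 4> broadcasts{1, 3, 1, 1};
    Eigen::Tensor<int, 4> output = input.reshape(reshaped_dims).broadcast(broadcasts);

    // Every copy along the second axis holds the same channel values.
    std::cout << (output(0, 0, 42, 42) == output(0, 2, 42, 42)) << std::endl; // prints 1
    return 0;
}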

Append multiple rows to an openCV matrix [duplicate]

Can anyone tell me how to append a couple of rows (as a cv::Mat) at the end of an existing cv::Mat? Since it is a lot of data, I don't want to go through the rows with a for-loop and add them one by one. So here is what I want to do:
cv::Mat existing; //This is a Matrix, say of size 700x16
cv::Mat appendNew; //This is the new Matrix with additional data, say of size 200x16.
existing.push_back(appendNew);
If I try to push back the smaller matrix, I get an error of non-matching sizes:
OpenCV Error: Sizes of input arguments do not match
(Pushed vector length is not equal to matrix row length)
So I guess .push_back() tries to append the whole matrix as a kind of new channel, which won't work because it is much smaller than the existing matrix. Does someone know whether appending the rows to the end of the existing matrix is possible as a whole, without going through them with a for-loop?
It seems like an easy question to me, nevertheless I was not able to find a simple solution online... So thanks in advance!
Cheers:)
You can use cv::vconcat() to append rows, either at the top or the bottom of a given matrix, as:
import cv2
import numpy as np
box = np.ones((50, 50, 3), dtype=np.uint8)
box[:] = np.array([0, 0, 255])
sample_row = np.ones((1, 50, 3), dtype=np.uint8)
sample_row[:] = np.array([255, 0, 0])
for i in range(5):
    box = cv2.vconcat([box, sample_row])
For visualization purposes I have created an RGB matrix filled with red and appended blue rows to the bottom. You may replace this with your original data; just make sure that both matrices to be concatenated have the same number of columns and the same data type. I have explicitly defined the dtype while creating the matrices.
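Since the question itself is about C++, the equivalent call there is cv::vconcat(); a minimal sketch with made-up data standing in for the 700x16 and 200x16 matrices:

#include <opencv2/core.hpp>
#include <iostream>

int main()
{
    cv::Mat existing = cv::Mat::zeros(700, 16, CV_32F);
    cv::Mat appendNew = cv::Mat::ones(200, 16, CV_32F);

    // vconcat stacks the second matrix below the first; both must have the
    // same number of columns and the same type.
    cv::Mat combined;
    cv::vconcat(existing, appendNew, combined);

    std::cout << combined.rows << " x " << combined.cols << std::endl; // 900 x 16
    return 0;
}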

Sum elements in a channel in caffe

If I have a 4-D blob, say of size (40,1024,300,1), and I want to average pool across the second axis (the channel axis) to generate an output of size (40,1,300,1), how would I do it? I think the Reduction layer collapses the whole blob and generates a blob of size (40), since it sums the elements over all the other axes (after axis 1) as well. Is there any workaround for this without implementing a new layer?
The only easy workaround I found is as follows. Permute your blob to the shape (40,300,1,1024). Then use a Reduction layer to compute the mean with axis = -1 and operation = MEAN. I think the resulting blob will have shape (40,300,1). You may need to use a Reshape layer to append an extra dimension at the end (check whether this is needed) and then permute back to the shape (40,1,300,1).
You can find an implementation of a Permute layer here or here. I hope this helps.
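In prototxt the chain described above could look roughly like the sketch below. This is only a sketch: the Permute layer is not part of stock Caffe (it comes from the SSD fork referenced above), the layer and blob names are made up, and the final reshape-plus-permute is folded into a single Reshape here.

layer {
  name: "permute_in"                                        # made-up names throughout
  type: "Permute"                                           # from the SSD fork, not stock Caffe
  bottom: "data"                                            # 40 x 1024 x 300 x 1
  top: "data_perm"                                          # 40 x 300 x 1 x 1024
  permute_param { order: 0 order: 2 order: 3 order: 1 }
}
layer {
  name: "channel_mean"
  type: "Reduction"
  bottom: "data_perm"
  top: "data_mean"                                          # 40 x 300 x 1
  reduction_param { operation: MEAN axis: -1 }
}
layer {
  name: "restore_shape"
  type: "Reshape"
  bottom: "data_mean"
  top: "data_out"                                           # 40 x 1 x 300 x 1
  reshape_param { shape { dim: 0 dim: 1 dim: 300 dim: 1 } }
}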

Pixel-indexing in OpenCV's distance transform

I want to use the function distanceTransform() to find the minimum distance of non-zero pixels to zero pixels, but also the position of that closest zero pixel. I call the second version of the function with the labelType flag set to DIST_LABEL_PIXEL. Everything works fine and I get the distances to and indices of the closest zero pixels.
Now I want to convert the indices back to pixel locations and I thought the indexing would be something like idx = (row*cols + col), but I had to find out that OpenCV just counts the zero pixels and uses this count as the index. So if I get 123 as the index of the closest pixel, this means that the 123rd zero pixel is the closest one.
How is OpenCV counting them? Probably in a row-wise manner?
Is there an efficient way of mapping the indices back to the locations? Obviously I could recount them and keep track of the counts and positions, if I know how OpenCV counts them, but this seems stupid and not very efficient.
Is there a good reason to use the indexing they used? I mean, are there any advantages over using an absolute indexing?
Thanks in advance.
EDIT:
If you want to see what I mean, you can run this:
Mat mask = Mat::ones(100, 100, CV_8U);
mask.at<uchar>(50, 50) = 0;
Mat dist, labels;
distanceTransform(mask, dist, labels, CV_DIST_L2, CV_DIST_MASK_PRECISE, DIST_LABEL_PIXEL);
cout << labels.at<int>(0,0) << endl;
You will see that all the labels are 1 because there is only one zero pixel, but how am I supposed to find the location (50,50) with that information?
The zero pixels also get labelled - they will have the same label as the non-zero pixels to which they are closest.
So you will have a 2D array of labels, the same size as your source image. If you examine all of the zero pixels in the source image, you can then find the associated label from the 2D array returned. This can then allow you to find which non-zero pixels are associated with each zero pixel by matching the labels.
If you see what I mean.
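A minimal C++ sketch of that label-to-coordinate lookup, reusing the toy mask from the question (written with the current constant names):

#include <opencv2/imgproc.hpp>
#include <iostream>
#include <map>

int main()
{
    // Same toy mask as in the question: a single zero pixel at (50, 50).
    cv::Mat mask = cv::Mat::ones(100, 100, CV_8U);
    mask.at<uchar>(50, 50) = 0;

    cv::Mat dist, labels;
    cv::distanceTransform(mask, dist, labels, cv::DIST_L2,
                          cv::DIST_MASK_PRECISE, cv::DIST_LABEL_PIXEL);

    // Every zero pixel carries its own label, so one pass over the zero
    // pixels builds a label -> coordinate lookup table.
    std::map<int, cv::Point> labelToPixel;
    for (int r = 0; r < mask.rows; ++r)
        for (int c = 0; c < mask.cols; ++c)
            if (mask.at<uchar>(r, c) == 0)
                labelToPixel[labels.at<int>(r, c)] = cv::Point(c, r);

    // The closest zero pixel to any location can now be looked up via its label.
    std::cout << labelToPixel[labels.at<int>(0, 0)] << std::endl; // [50, 50]
    return 0;
}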
In Python you can use numpy to associate the labels with the coordinates:
import cv2
import numpy as np
# create an image with two 0-lines
a = np.ones((100,100), dtype=np.uint8)
a[50,:] = 0
a[:,70] = 0
dt,lbl = cv2.distanceTransformWithLabels(a, cv2.DIST_L2, 3, labelType=cv2.DIST_LABEL_PIXEL)
# coordinates of 0-value pixels
xy = np.where(a==0)
# print label id and corresponding coordinate (label ids start at 1)
for i in range(len(np.unique(lbl))):
    print(i + 1, xy[0][i], xy[1][i])

How to convert bitmaps to 'matrices' that can be processed in C++ ( ANN )?

I want to feed my extracted character bitmaps (.bmp files) into some kind of matrices that can be processed in C++ and then fed into an artificial neural network, e.g. the network will take 72 inputs, each one being a pixel of the binarized picture of dimensions 6 x 12.
For instance: I have a binarized bitmap of size, let's say, 40 x 80. I want to turn it into a structure that has dimensions 6 x 12 and consists of my scaled bitmap. So I need a bitmap library that allows me to scale the bmps and then feed them into the ANN. (As some of you have already stated, the bitmaps are stored as a matrix of sorts anyway, so no transformation will be necessary.)
What can I use here?
It seems that any image processing library could suit your needs. So my advice would be to use a library that is as simple as possible to integrate into your build process.
In this context, the CImg library is extremely easy to use, as it consists of a single .h file.
Concerning your need, a possible implementation would be:
#include "CImg.h"
using namespace cimg_library;

int main(int argc, char **argv)
{
    CImg<unsigned char> image("img/logo.bmp");
    // Simple resize with nearest-neighbour interpolation
    //image = image.resize(64, 64);
    // If you want to specify the interpolation type
    image = image.resize(64, 64, -100, -100, 4); // the last param specifies the interpolation type
    //\param interpolation_type Method of interpolation:
    // -1 = no interpolation : raw memory resizing.
    //  0 = no interpolation : additional space is filled according to \p border_condition.
    //  1 = nearest-neighbor interpolation.
    //  2 = moving average interpolation.
    //  3 = linear interpolation.
    //  4 = grid interpolation.
    //  5 = bicubic interpolation.
    //  6 = lanczos interpolation.
    CImgDisplay main_disp(image, "Image resized");
    // This last part of the code is not useful for you, it is only used to display the resized image
    while (!main_disp.is_closed())
        main_disp.wait();
    return 0;
}
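If the end goal is the 6 x 12 input layer mentioned in the question, the resized image can be flattened directly into the 72 network inputs. A small sketch of that step (it would go inside main() above, needs #include <vector>, and the 127 threshold is an arbitrary choice):

// Scale down to the 6 x 12 target and flatten to 72 inputs for the network.
CImg<unsigned char> small = image.get_resize(6, 12, -100, -100, 3); // 3 = linear interpolation
std::vector<float> inputs;
inputs.reserve(small.width() * small.height());
for (int y = 0; y < small.height(); ++y)
    for (int x = 0; x < small.width(); ++x)
        inputs.push_back(small(x, y) > 127 ? 1.0f : 0.0f); // binarize each pixel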
The bitmap file format (see the specs) already stores the bitmap as a matrix, or more precisely as an array of pixels, which can be divided into a 2D array by rows (or columns, but that doesn't matter).
Thus you just have to read the header to get the image size, then read the data into arrays of packed structs (with no padding, as explained here).
This way you will get your matrix; you can then wrap it in a class that stores width and height attributes, or even pass the arrays to the constructor of your own matrix class.
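To make the manual route concrete, here is a minimal sketch of that idea. It assumes an uncompressed, bottom-up, 24-bit BMP and collapses each pixel to a grey value; the function name and the channel averaging are illustrative choices, not part of the format spec.

#include <cstdint>
#include <cstring>
#include <fstream>
#include <string>
#include <vector>

// Read an uncompressed, bottom-up, 24-bit BMP into a row-major grey matrix.
std::vector<std::vector<uint8_t>> readBmp(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<char> header(54);                    // 14-byte file header + 40-byte info header
    in.read(header.data(), header.size());

    auto u32 = [&](int off) { uint32_t v; std::memcpy(&v, &header[off], 4); return v; };
    const uint32_t dataOffset = u32(10);             // where the pixel array starts
    const int width  = static_cast<int>(u32(18));
    const int height = static_cast<int>(u32(22));

    const int rowSize = ((width * 3 + 3) / 4) * 4;   // each row is padded to a multiple of 4 bytes
    std::vector<uint8_t> row(rowSize);
    std::vector<std::vector<uint8_t>> pixels(height, std::vector<uint8_t>(width));

    in.seekg(dataOffset, std::ios::beg);
    for (int y = 0; y < height; ++y) {               // rows are stored bottom-up
        in.read(reinterpret_cast<char*>(row.data()), rowSize);
        for (int x = 0; x < width; ++x) {
            const int b = row[3 * x], g = row[3 * x + 1], r = row[3 * x + 2];
            pixels[height - 1 - y][x] = static_cast<uint8_t>((r + g + b) / 3);
        }
    }
    return pixels;
}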
Use some sort of bmp lib to access the data (platform dependent). This will usually give you the bmp as a flat array. Take that flat array and plug each value into your matrix structure, or pass it directly into your NN code. Can't offer you much more than this without more info.