Morphological Watershed From Markers filter on ITK - c++

I'm trying to create a pipeline for image segmentation with the libraries from ITK. But when I apply itk::MorphologicalWatershedFromMarkersImageFilter, the result is a blank image (a binary image containing only 1's).
Does anyone know how to apply this filter correctly?
My input image should be the gradient of an image, and the marker image should be the result of applying a watershed filter to the same image.
[input image]
[marker image]
And this is the declaration and the application of the filter:
typedef itk::MorphologicalWatershedFromMarkersImageFilter<OutputImageType, OutputImageType>
    MorphologicalWatershedFromMarkersImageFilterType;
MorphologicalWatershedFromMarkersImageFilterType::Pointer CwatershedFilter
    = MorphologicalWatershedFromMarkersImageFilterType::New();
CwatershedFilter->SetInput1(reader1->GetOutput());      // gradient image
CwatershedFilter->SetMarkerImage(reader2->GetOutput()); // marker image
CwatershedFilter->SetMarkWatershedLine(true);
try
{
    CwatershedFilter->Update();
}
catch (itk::ExceptionObject & error)
{
    std::cerr << "Error: " << error << std::endl;
    getchar();
    return EXIT_FAILURE;
}
Also, this is the link to the documentation of this filter, from itk.org:
http://www.itk.org/Doxygen48/html/classitk_1_1MorphologicalWatershedFromMarkersImageFilter.html#a20e3b8de42219606ba759e822be0aaa2
Thank you so much!!

While not C++ ITK, there is a SimpleITK notebook which demonstrates its usage:
http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/32_Watersheds_Segmentation.html
Your marker image is just binary, not a label image. By that I mean your image contains only 0 and 1 (or 255). In the linked example, note the following:
min_img = sitk.RegionalMinima(feature_img, backgroundValue=0, foregroundValue=1.0, fullyConnected=False, flatIsMinima=True)
marker_img = sitk.ConnectedComponent(min_img, fullyConnected=False)
The "min_img" is a binary image, but it is then processed with the "ConnectedComponent" image filter, which gives each "island" a unique number. That is what the WatershedFromMarker filter expects for the marker (or label) image.
I will also note that your input image has some boundary lines that you may not want as input.
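A minimal C++ sketch of that fix, assuming the question's reader1/reader2 pipeline and 2-D unsigned char images (the exact image types and variable names are assumptions mirroring the question's code, not a tested pipeline): run the binary marker through itk::ConnectedComponentImageFilter so each island gets a unique label, then feed the resulting label image to the watershed filter.

```cpp
#include "itkImage.h"
#include "itkConnectedComponentImageFilter.h"
#include "itkMorphologicalWatershedFromMarkersImageFilter.h"

// Assumed types; adjust to match your pipeline.
using ImageType = itk::Image<unsigned char, 2>;
using LabelImageType = itk::Image<unsigned int, 2>;

// Give every binary marker "island" a unique label, since the
// watershed-from-markers filter expects a label image, not a mask.
using ConnectedComponentType =
    itk::ConnectedComponentImageFilter<ImageType, LabelImageType>;
ConnectedComponentType::Pointer labeler = ConnectedComponentType::New();
labeler->SetInput(reader2->GetOutput()); // the binary marker image

using WatershedType =
    itk::MorphologicalWatershedFromMarkersImageFilter<ImageType, LabelImageType>;
WatershedType::Pointer watershed = WatershedType::New();
watershed->SetInput1(reader1->GetOutput()); // the gradient image
watershed->SetMarkerImage(labeler->GetOutput());
watershed->SetMarkWatershedLine(true);
watershed->Update();
```

The key change is that the marker image now carries a distinct integer per region, so the filter can grow one basin per marker instead of treating the whole mask as a single object.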

error: (-215:Assertion failed) m.dims <= 2 in function 'FormattedImpl' in cv::dnn

I am loading a pre-trained TensorFlow model in the OpenCV dnn module using the following code:
cv::dnn::Net net = cv::dnn::readNetFromTensorflow("frozen_inference_graph.pb",
                                                  "graph.pbtxt");
net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA); // run model on GPU
net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

Mat image = imread("img.jpeg");
Mat resized;
cv::resize(image, resized, cv::Size(300, 300));
cout << resized.size() << endl;
cout << "Resized" << endl;

auto input_image = dnn::blobFromImage(image, 1.0, cv::Size(300, 300),
                                      cv::Scalar(127.5, 127.5, 127.5),
                                      false, false, CV_32F);
cout << "Now setting Input";
net.setInput(input_image);
auto detections = net.forward();
cout << detections;
return 0;
However, I get the following error:
what(): OpenCV(4.4.0) /home/atharva/opencv-4.4.0/modules/core/src/out.cpp:87: error: (-215:Assertion failed) m.dims <= 2 in function 'FormattedImpl'
Could someone please point out what the mistake is? I believe there is some problem with blobFromImage, as nothing after it gets printed.
TIA
This error occurs because you are trying to print a cv::Mat that has more than 2 dimensions to standard output. With cv::dnn, the output of net.forward() is 4-dimensional. However, I have no idea what model you are using, because the output structure of the blob differs depending on the task. If I had to guess from your variable names, you are doing some sort of object detection. In that case, the first dimension is usually the batch size, and since you are using only one image, the batch size is 1. The second dimension is the number of channels in the output; for object detection on a single image this is also 1. The third and fourth dimensions are the number of rows and columns of the final output layer.
Going on faith, you can extract a 2D version of this cv::Mat to print out to standard output by doing:
cv::Mat output(detections.size[2], detections.size[3], CV_32F, detections.ptr<float>());
Now that this is a 2D matrix, you can print out this instead by doing std::cout << output << std::endl;.

OpenCV load CIE L*a*b* image

I'm trying to load a CIE L*a*b* image using OpenCV in C++.
Online I can find only examples that load an RGB image and convert it into a Lab image, but I already have the Lab image, so how can I load it and then access the values of L, a and b?
The only way I have found is to load the Lab image as if it were an RGB image and convert it into a Lab image using:
cvtColor(source, destination, CV_BGR2Lab);
But I think this is not a good way to solve the problem, because if I do this, the converted image looks very different from the original.
With a test image and the following code:
originalImage = imread(originalImagePath, CV_LOAD_IMAGE_UNCHANGED);
cout << originalImage.type() << endl;
Mat originalImageSplitted[3];
split(originalImage, originalImageSplitted);
cout << originalImageSplitted[0] << endl;
cout << originalImageSplitted[1] << endl;
cout << originalImageSplitted[2] << endl;
I get the result:
0
[]
[]
[]
Not really an answer, but too much for a comment.
You can make a Lab colourspace TIF file for testing like this with ImageMagick from the Terminal in Linux, macOS or Windows:
convert -depth 8 xc:black xc:white xc:red xc:lime xc:blue +append -colorspace Lab result.tif
That will look like this if I scale it up, as it is currently only 5 pixels wide and 1 pixel tall:
You can then dump the pixels to see their values and hopefully work out what OpenCV is doing:
convert result.tif txt:
Sample Output
# ImageMagick pixel enumeration: 5,1,65535,cielab
0,0: (0,-0.5,-0.5) #000000 cielab(0%,-0.000762951%,-0.000762951%) <--- black pixel
1,0: (65535,-0.5,-0.5) #FF0000 cielab(100%,-0.000762951%,-0.000762951%) <--- white pixel
2,0: (34952,20559.5,17218.5) #885043 cielab(53.3333%,31.3718%,26.2737%) <--- red pixel
3,0: (57568,-22102.5,21330.5) #E00053 cielab(87.8431%,-33.7263%,32.5483%) <--- green pixel
4,0: (21074,20302.5,-27756.5) #524F00 cielab(32.1569%,30.9796%,-42.3537%) <--- blue pixel
Looking at the red pixel, you get:
L=53.33%
a=31.37% of 256, i.e. 80.3
b=26.27% of 256, i.e. 67.2
To keep the image unchanged, you should read it into a Mat like this:
Mat image;
image = imread(<path_of_image>, CV_LOAD_IMAGE_UNCHANGED);
In this case the second argument should preserve your image color channels as is.
With @DanMašek, using @MarkSetchell's image, we solved the problem.
Using the imread function, the image is automatically loaded as an RGB image, so it needs to be converted into a Lab image again.
Another problem is related to 8-bit images. The resulting image has modified values of L, a and b, following this rule:
L stored as L * 255/100
a stored as a + 128
b stored as b + 128
So I solved doing the following:
originalImage = imread(originalImagePath, CV_LOAD_IMAGE_UNCHANGED);
Mat originalImageLab;
cvtColor(originalImage, originalImageLab, COLOR_RGB2Lab);
Mat originalImageSplitted[3];
split(originalImageLab, originalImageSplitted);
Thank you all!

Unable to access pixel values from greyscale image

I am reading an image of size 1600x1200 as greyscale and then trying to access the pixel value at location (1201,0). I get a segfault at the last line, as shown in the comments:
Mat gray_image = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // Read the file
if(! gray_image.data ) // Check for invalid input
{
cout << "Could not open or find the image" << std::endl ;
return -1;
}
const int width = gray_image.cols;
const int height = gray_image.rows;
cout<<gray_image.cols<<endl; // 1600
cout<<gray_image.rows<<endl; // 1200
cout<<(int)gray_image.at<uchar>(1201,0)<<endl; //SEGFAULT
You already stated that you are reading an image of size 1600x1200, so how can you access element 1201 when there are only 1200 rows? You have misunderstood the convention of mat.at<>(): it takes (row, col). I would recommend using mat.at<>(cv::Point(p_x, p_y)); that way you will never confuse the order of rows and cols passed to mat.at<>().
EDIT: However, creating a new cv::Point is not a recommended way of accessing pixels, as noted by @Micka, so consider it a workaround and don't rely completely on cv::Point() to access a pixel. The best ways to iterate over pixels are described in the OpenCV documentation.
It should be .at(row, col), not (col, row), and you don't have a 1201st row; that's why it's a segfault.

Why Sift Feature Detector cannot detect key points for some images?

I am reading images from a set and extracting their features. However, for some images (very few, around 4 per mille), SiftFeatureDetector::detect(image, keypoints) cannot detect key points and returns an empty set of key points. When I try SurfFeatureDetector::detect(image, keypoints), it does detect key points.
Here is the code:
query = imread(it->path().string());
/* Here, I resize the image in proportion so that its longest side will be 400 */
cvtColor(query, query_gray, CV_RGB2GRAY);
SiftFeatureDetector feature_detector;
vector<KeyPoint> query_kp;
feature_detector.detect(query_gray, query_kp);

// check whether KeyPoints were detected
if (query_kp.empty())
{
    cerr << "KeyPoints couldn't be detected. Image " << it->path() << " is skipped." << endl;
    ++cantDetecteds;
    waitKey(0);
    continue;
}
What is the reason behind this? Can someone explain please?
Thanks.
EDIT: SURF also fails to detect key points for some images, around 2 per mille.

CImg: How to save a grayscale?

When I use CImg to load a .BMP, how can I know whether it is a gray-scale or color image?
I have tried as follows, but failed:
cimg_library::CImg<unsigned char> img("lena_gray.bmp");
const int spectrum = img.spectrum();
img.save("lenaNew.bmp");
Contrary to my expectation, no matter what kind of .BMP I load, spectrum is always 3. As a result, when I load a gray-scale image and save it, the result is three times bigger than it should be.
I just want to save the image exactly as it was loaded. How do I save it as gray-scale?
I guess the BMP format always stores images as RGB-coded data, so reading a BMP will always result in a color image.
If you know your image is scalar, all channels will be the same, so you can discard two of them (here, keeping the first one):
img.channel(0);
If you want to check that it is a scalar image, you can test the channels for equality:
const CImg<unsigned char> R = img.get_shared_channel(0),
                          G = img.get_shared_channel(1),
                          B = img.get_shared_channel(2);
if (R==G && R==B) {
    // ... your image is scalar!
} else {
    // ... your image is in color.
}