Image renderer with X Toolkit - xtk

I'm using CT scanner (not MRI) images to load 2D volumes, and the contrast seems different, more saturated, compared to ITK-SNAP. Here is a screenshot of what I get:
http://i.stack.imgur.com/IHX5M.jpg
And with ITK-SNAP:
http://i.stack.imgur.com/8tIv7.jpg
Any idea why these differences occur?
Thank you

Right now, XTK doesn't read this DICOM tag and clips all the negative values.
https://github.com/xtk/X/blob/master/io/parserDCM.js
Do you have an image you can share for testing?
I would add this tag to parserDCM.js, and if the flag is set, offset all the pixel values to make sure they are all positive.
Thanks

Related

Correct display of DICOM images ITK-VTK (images too dark)

I read DICOM images with ITK using itk::ImageSeriesReader and itk::GDCMImageIO. After reading, I flip the images with itk::FlipImageFilter (to get the right orientation) and convert the ITK image data to vtkImageData using itk::ImageToVTKImageFilter. I visualize the images with VTK using vtkResliceImageViewer in a QVTKWidget2.
I set:
// m_imageViewer[i] is a vtkResliceImageViewer; the values come from the DICOM tags.
m_imageViewer[i]->SetColorWindow(windowWidthTAGvalue);   // Window Width  (0028,1051)
m_imageViewer[i]->SetColorLevel(windowCenterTAGvalue);   // Window Center (0028,1050)
and I set the following black & white lookup table:
vtkLookupTable* lutbw = vtkLookupTable::New();
lutbw->SetTableRange(0,1000);
lutbw->SetSaturationRange(0,0);
lutbw->SetHueRange(0,0);
lutbw->SetValueRange(0,1);
lutbw->Build();
The images shown in my software are much darker than the same images shown in other software; I cannot get the same effect as other DICOM viewers.
My software's image is on the right, the other software's image is on the left. Also, when I use some other lookup table, in this example Flow, I cannot get the same effect (2nd row of images); my image on the right is much darker than the other one.
What am I missing, and why are my images darker? What can I do? I have researched DICOM and ITK/VTK a lot but cannot find a good solution. Any help is appreciated.
Please check the values of Rescale Slope (0028,1053) and Rescale Intercept (0028,1052) and apply the Modality LUT transformation before applying the window level.
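For illustration, here is a minimal sketch of that order of operations in VTK (assuming VTK 6-style SetInputData; rescaleSlope/rescaleIntercept are hypothetical values that would come from the GDCM metadata, and vtkImage stands for the converted ITK-to-VTK image from your pipeline):
#include <vtkImageShiftScale.h>
#include <vtkSmartPointer.h>

// Modality LUT: output = RescaleSlope * storedValue + RescaleIntercept.
// vtkImageShiftScale computes (input + Shift) * Scale, so use
// Shift = intercept / slope and Scale = slope.
double rescaleSlope = 1.0;         // Rescale Slope (0028,1053), hypothetical value
double rescaleIntercept = -1024.0; // Rescale Intercept (0028,1052), hypothetical value

vtkSmartPointer<vtkImageShiftScale> modalityLut =
    vtkSmartPointer<vtkImageShiftScale>::New();
modalityLut->SetInputData(vtkImage);
modalityLut->SetShift(rescaleIntercept / rescaleSlope);
modalityLut->SetScale(rescaleSlope);
modalityLut->SetOutputScalarTypeToFloat();   // keep negative values instead of clipping them
modalityLut->Update();

// Only after this step apply the window/level from (0028,1051)/(0028,1050).
m_imageViewer[i]->SetInputConnection(modalityLut->GetOutputPort());
m_imageViewer[i]->SetColorWindow(windowWidthTAGvalue);
m_imageViewer[i]->SetColorLevel(windowCenterTAGvalue);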
Your dataset may have a VOI LUT Function (0028,1056) attribute value of "SIGMOID" instead of "LINEAR".
I extracted the image data from one of your DICOM files (brain_009.dcm) and looked at its histogram. It looks like the minimum value stored in the image is 0 and the maximum value is 960, regardless of whether the data is interpreted as signed or unsigned. Also, the Window Width (0028,1051) has an invalid value of "0", which you cannot use for displaying the image.
So your default display could set the Window Width to 960 and the Window Center to half the window width plus the minimum value (here 0 + 960/2 = 480).
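As a rough sketch (variable names are placeholders), such a fallback could be derived from the scalar range of the loaded image:
// Fallback window/level when the stored Window Width (0028,1051) is invalid (0).
double range[2];
vtkImage->GetScalarRange(range);                     // here: 0 .. 960
double windowWidth  = range[1] - range[0];           // 960
double windowCenter = range[0] + windowWidth / 2.0;  // 480
m_imageViewer[i]->SetColorWindow(windowWidth);
m_imageViewer[i]->SetColorLevel(windowCenter);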

Photoshop-like image difference with Python

I want to compare two images of the same size with some text on them.
Let's say the two words are: 'google' and 'gooogle'.
Before measuring the image difference in PS, I blur the images with a Gaussian blur.
The neat thing in PS is that no matter how you arrange the layers - 'gooogle' on top or 'google' on top - the difference of the layers stays the same.
You get a black background and the difference as (more or less) white pixels.
I am unable to reproduce this functionality in Python.
How did PS manage to get commutativity in there?
I was able to find the solution:
You need to take the absolute value of the difference. The problem is that you first need to convert the RGB images (uint8) to a bigger data type, because the subtraction would otherwise wrap around in uint8.
After that you can subtract the images and take the absolute value. In the end you need to convert the result back to RGB, i.e. uint8.
import numpy as np

def ps_like_diff(img1, img2):
    # Promote to a signed type so the subtraction cannot wrap around.
    img1_ = img1.astype(int)
    img2_ = img2.astype(int)
    diff = img1_ - img2_
    # The absolute difference is symmetric, so the layer order does not matter.
    return np.abs(diff).astype('uint8')

How to create a depth map from PointGrey BumbleBee2 stereo camera using Triclops and FlyCapture SDKs?

I've got the BumbleBee 2 stereo camera and the two SDKs mentioned above.
I've managed to capture video from it in my program, rectify the stereo images and get a disparity map. The next thing I'd like to have is a depth map similar to the one the Kinect gives.
The Triclops documentation is rather short; it is essentially a function reference without a typical workflow description. The workflow is only shown in the examples.
So far I've found two relevant functions: the triclopsRCDxxToXYZ() family of functions and the triclopsExtractImage3d() function.
Functions from the first family calculate the x, y and z coordinates for a single pixel. The z coordinate perfectly corresponds to the depth in meters. However, to use these functions I have to write two nested loops, as shown in the stereo3dpoints example. That gives too much overhead, because each call also returns two coordinates I don't need.
The second function, triclopsExtractImage3d(), always returns the error TriclopsErrorInvalidParameter. The documentation only says that "there is a geometry mismatch between the context and the TriclopsImage3d", which is not clear to me.
The examples in the Triclops 3.3.1 SDK do not show how to use it. Google brings up an example from Triclops SDK 3.2, which is absent in 3.3.1.
I've tried adding lines 253-273 from the link above to the current stereo3dpoints example and got that error.
Does anyone have an experience with it?
Is it valid to use triclopsExtractImage3d() or is it obsolete?
I also tried plotting the values of disparity vs. z obtained from triclopsRCDxxToXYZ().
The plot shows an almost exact inverse proportionality.
That is, z = k / disparity. But k is not constant across the image; it varies from approximately 2.5e-5 to 1.4e-3, that is roughly two orders of magnitude. Therefore, it is incorrect to calculate this value once and use it forever.
Maybe it is a bit too late and you have already figured it out yourself, but:
To use triclopsExtractImage3d() you have to create a TriclopsImage3d first.
TriclopsImage3d* depthImage;
// Each call returns a TriclopsError that should be checked.
triclopsCreateImage3d(triclopsContext, &depthImage);   // allocate the 3D image for this context
triclopsExtractImage3d(triclopsContext, depthImage);   // fill it with XYZ data after stereo processing
triclopsDestroyImage3d(&depthImage);                   // free it when done

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The two images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, and therefore I think the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also, I used the following well-known example to stitch them together:
Mat WarpedImage;
cv::warpPerspective(img_2,WarpedImage,homography,cv::Size(2*img_2.cols,2*img_2.rows));
Mat half(WarpedImage,Rect(0,0,img_1.cols,img_1.rows));
img_1.copyTo(half);
I sort of made it fit, but my problem is that in my case the two images could be aligned vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image is to the left, right, above or below the first image, and how do I create a properly sized new image?
Secondly:
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I have a lot of black/empty space in the new image.
What would be the best C++ code to remove those black areas?
(I see a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
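A rough sketch of that approach (OpenCV C++; it assumes, as in your warp above, that homography maps img_2 into img_1's coordinate frame, and the variable names are placeholders):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>

// Reproject the corners of img_2 with the homography to find where it lands
// relative to img_1, then size the output canvas to the union of both images.
std::vector<cv::Point2f> corners(4), warpedCorners(4);
corners[0] = cv::Point2f(0.f, 0.f);
corners[1] = cv::Point2f((float)img_2.cols, 0.f);
corners[2] = cv::Point2f((float)img_2.cols, (float)img_2.rows);
corners[3] = cv::Point2f(0.f, (float)img_2.rows);
cv::perspectiveTransform(corners, warpedCorners, homography);

// Bounding box of the warped img_2 together with img_1 (which stays at (0,0)).
float minX = 0.f, minY = 0.f, maxX = (float)img_1.cols, maxY = (float)img_1.rows;
for (size_t i = 0; i < warpedCorners.size(); ++i) {
    minX = std::min(minX, warpedCorners[i].x);
    minY = std::min(minY, warpedCorners[i].y);
    maxX = std::max(maxX, warpedCorners[i].x);
    maxY = std::max(maxY, warpedCorners[i].y);
}

// If img_2 ends up left of or above img_1, minX/minY become negative; shift
// everything by a translation so no pixel gets a negative coordinate and the
// canvas is exactly as large as needed (no excess black border).
cv::Mat shift = (cv::Mat_<double>(3, 3) << 1, 0, -minX,
                                           0, 1, -minY,
                                           0, 0, 1);
cv::Size canvas((int)std::ceil(maxX - minX), (int)std::ceil(maxY - minY));

cv::Mat result;
cv::warpPerspective(img_2, result, shift * homography, canvas);
img_1.copyTo(result(cv::Rect((int)-minX, (int)-minY, img_1.cols, img_1.rows)));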

Comparing Sprites?

For a game I'm working on, I'd like to compare two sprites in SFML2, for example in an if() statement. For instance, I could have a large 1280x1024 image with one gray pixel among all black pixels. I would then have two separate sprites: one is the gray pixel alone, and the other is the map. I would crop only the gray pixel from the map and compare the two; if they match, do other things.
Do you see what I'm getting at here? Is this possible? If so, how?
I'm with Alex in saying there are smarter ways to check sprites.
1. Compare the file names instead; don't reference a single pixel within an image, because you have to load the entire image into memory to do that. At the moment you are loading 1.3 MBytes into memory just to check a single pixel.
2. Store all of your resources in a resource manager and reference them via a UID; if a resource already exists for that UID, reuse it (a rough sketch follows below).
Number 2 is preferable above all else, but there are many other ways.
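Here is a rough sketch of what such a resource manager could look like (this is not an SFML facility, just a std::map keyed by a UID string; error handling omitted):
#include <SFML/Graphics.hpp>
#include <map>
#include <string>

// Each texture is loaded from disk once and looked up by its UID afterwards.
class ResourceManager
{
public:
    const sf::Texture& get(const std::string& uid, const std::string& file)
    {
        std::map<std::string, sf::Texture>::iterator it = m_textures.find(uid);
        if (it == m_textures.end())
        {
            sf::Texture tex;
            tex.loadFromFile(file);   // error handling omitted
            it = m_textures.insert(std::make_pair(uid, tex)).first;
        }
        return it->second;
    }

private:
    std::map<std::string, sf::Texture> m_textures;
};

// Two sprites built from the same UID are known to show the same image,
// so comparing them is just a UID comparison instead of a pixel check.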
Edit: As per the comments, you wouldn't "crop" out the pixel; you would just load the image into memory and use the sf::Image class to get the colour of a pixel at a location. The following would be an example:
sf::Image map = MapSprite->getTexture()->copyToImage();   // copyToImage() returns the texture's pixels
if (map.getPixel(666, 666) == sf::Color::Black)
{
    //Funky stuff here
}
NOTE: You mentioned SFML2, so this is from that set of documentation; it may be different for 1.6.
Edit 2: It's been a while since I've used SFML, so hopefully the code snippet will at least give you direction.