windowing for DICOM should use rescaleSlope and rescaleIntercept - xtk

How do I do DICOM windowing in XTK? Just converting window width and center to WindowHigh and WindowLow doesn't produce the correct image. Shouldn't the code use the rescale slope and rescale intercept tags from the DICOM header to calculate the pixel values?

As per the OP's comment, add rescale slope and intercept to the X.renderer2D.prototype.render_ function. The calculation is:
var _intensityTransformed = (_intensity * _volume._rescaleSlope + _volume._rescaleIntercept);
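For reference, the same arithmetic can be written out end to end: apply the modality rescale first, then derive the display range from Window Center/Width. A minimal standalone C++ sketch of the math (simplified linear windowing; the DICOM standard's exact linear formula offsets center by 0.5 and width by 1; names here are illustrative, not XTK internals):

#include <algorithm>

// Map a stored pixel value to an 8-bit display value:
// Modality LUT (rescale) first, then a linear VOI window.
unsigned char windowPixel(double storedValue,
                          double rescaleSlope, double rescaleIntercept,
                          double windowCenter, double windowWidth)
{
    // Modality LUT: convert stored values to real-world units (e.g. Hounsfield).
    double value = storedValue * rescaleSlope + rescaleIntercept;

    // VOI window: values below windowLow map to 0, above windowHigh to 255.
    double windowLow  = windowCenter - windowWidth / 2.0;
    double windowHigh = windowCenter + windowWidth / 2.0;

    double normalized = (value - windowLow) / (windowHigh - windowLow);
    normalized = std::min(1.0, std::max(0.0, normalized));
    return static_cast<unsigned char>(normalized * 255.0 + 0.5);
}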

Related

How can I tile an image within a given area of a page in my document, using the Mako SDK?

I'm looking to use an image for tiling, to fill an area of a page in my document.
I've seen that there are IDOMImage and IDOMImageBrush classes, but I'm not sure how to use them to scale and tile my source image.
How can I do this with the Mako SDK?
Mako can tile an image into a given area, and also flip alternate tiles to create a pattern. Use a scaling transform to control the tile size. The code below shows how.
// Declare an output pointer
IOutputPtr output;
// Create new assembly, document and page
IDocumentAssemblyPtr assembly = IDocumentAssembly::create(jawsMako);
IDocumentPtr document = IDocument::create(jawsMako);
IPagePtr page = IPage::create(jawsMako);
// Add the page to the document, and the document to the assembly
document->appendPage(page);
assembly->appendDocument(document);
// Create a fixed page to work with
double pageWidth = 10 * 96.0;
double pageHeight = 20 * 96.0;
IDOMFixedPagePtr fixedPage = IDOMFixedPage::create(jawsMako, pageWidth, pageHeight);
// Load the image file into an image
IDOMImagePtr image = IDOMJPEGImage::create(jawsMako, IInputStream::createFromFile(jawsMako, imageFilePath));
// Find its dimensions
IImageFramePtr frame;
image->getFirstImageFrame(jawsMako, frame);
double imageWidth = frame->getWidth();
double imageHeight = frame->getHeight();
// Create a rect to hold the image
FRect printBounds(0.0, 0.0, pageWidth, pageHeight);
// Create a transformation matrix to scale the image, taking into account the page proportions
// Scaling factor is a float ranging from 0.0 to 1.0
double pageWidthHeightRatio = pageWidth / pageHeight;
FMatrix transform;
transform.scale(scalingFactor, scalingFactor * pageWidthHeightRatio);
// Stick the image in a brush
IDOMBrushPtr imageBrush = IDOMImageBrush::create(jawsMako, image, FRect(), printBounds, transform, 1.0, eFlipXY);
// And now create a path using the image brush
IDOMPathNodePtr path = IDOMPathNode::createFilled(jawsMako, IDOMPathGeometry::create(jawsMako, printBounds), imageBrush);
// Add the path to the fixed page
fixedPage->appendChild(path);
// This becomes the page contents
page->setContent(fixedPage);
// Write to the output
output = IPDFOutput::create(jawsMako);
output->writeAssembly(assembly, outputFilePath);
Running this code with a sample source image produced a tiled image covering the page.
The code uses the eFlipXY value of the eTilingMode enum. These are the available tiling options:
eTilingMode
Tiling mode type enumeration.
eNoTile
No tiling. If the area to be painted is larger than the image, just paint the image once (in the location specified by the brush's viewport), and leave the remaining area transparent.
eTile
Tile image without any flipping or rotating of the image. A square image consisting of a single diagonal line between opposite corners would produce diagonal lines when tiled in this mode.
eFlipX
Tile image such that alternate columns of tiles are flipped horizontally. A square image consisting of a single diagonal line between opposite corners would produce chevrons running horizontally across the area when tiled in this mode.
eFlipY
Tile image such that alternate rows of tiles are flipped vertically. A square image consisting of a single diagonal line between opposite corners would produce chevrons running vertically across the area when tiled in this mode.
eFlipXY
Tile image such that alternate columns of tiles are flipped horizontally AND alternate rows of tiles are flipped vertically. A square image consisting of a single diagonal line between opposite corners would produce a grid of squares balanced on their points when tiled in this mode.
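To use a different mode, pass the corresponding value as the last argument of IDOMImageBrush::create. For example, reusing the variables from the listing above (a sketch with only the tiling mode changed):

// Tile without flipping; swap in eFlipX, eFlipY or eNoTile as needed
IDOMBrushPtr plainTileBrush = IDOMImageBrush::create(jawsMako, image, FRect(), printBounds, transform, 1.0, eTile);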

VTK - how to flip/mirror image

I'm using vtkResliceImageViewer to display an image (multi-planar reconstruction). How can I flip/mirror that image vertically and horizontally? Manipulating the camera does not work as expected, since the flip would also have to take the camera rotation angle into account, so it gets very complicated. It would be great if there were a way to change the image's texture coordinates. Is this possible?
// Create an image
vtkSmartPointer<vtkImageMandelbrotSource> source =
vtkSmartPointer<vtkImageMandelbrotSource>::New();
source->Update();
// Flip the image
vtkSmartPointer<vtkImageFlip> flipYFilter =
vtkSmartPointer<vtkImageFlip>::New();
flipYFilter->SetFilteredAxis(1); // flip y axis
flipYFilter->SetInputConnection(source->GetOutputPort());
flipYFilter->Update();
// Create the Viewer
vtkSmartPointer<vtkResliceImageViewer> viewer =
vtkSmartPointer<vtkResliceImageViewer>::New();
viewer->SetInputData(flipYFilter->GetOutput());
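If both a vertical and a horizontal mirror are needed, a second vtkImageFlip can be chained in before the viewer, along the same lines:

// Flip the x axis as well by chaining a second filter
vtkSmartPointer<vtkImageFlip> flipXFilter =
vtkSmartPointer<vtkImageFlip>::New();
flipXFilter->SetFilteredAxis(0); // flip x axis
flipXFilter->SetInputConnection(flipYFilter->GetOutputPort());
flipXFilter->Update();
viewer->SetInputData(flipXFilter->GetOutput());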

Dicom Toolkit (DCMTK) - How to get Window Centre and Width

I am currently using DCMTK in C++. I am quite new to this toolkit but, as I understand it, I should be able to read the window centre and width for normalisation purposes.
I have a DicomImage DCM_image object with my Dicom data.
I read the values into an OpenCV Mat object. However, I now would like to normalise them.
The following shows how I am reading the data and transferring it to an OpenCV Mat.
DicomImage DCM_image("test.dcm");
uchar *pixelData = (uchar *)(DCM_image.getOutputData(8));
cv::Mat image(int(DCM_image.getHeight()), int(DCM_image.getWidth()), CV_8U, pixelData);
Any help is appreciated. Thanks
Reading window center and width is not difficult, however you need to use a different constructor and pass a DcmDataset to the image.
DcmFileFormat file;
file.loadFile("test.dcm");
DcmDataset* dataset = file.getDataset();
DicomImage image(dataset, dataset->getOriginalXfer());
double windowCenter, windowWidth;
dataset->findAndGetFloat64(DcmTagKey(0x0028, 0x1050), windowCenter); // Window Center
dataset->findAndGetFloat64(DcmTagKey(0x0028, 0x1051), windowWidth);  // Window Width
But actually I do not think it is a good idea to apply the windowing to the image upon loading. Windowing is something which should be adjustable by the user. The attributes Window Center and Window Width allow multiple values which can be applied to adjust the window to the grayscale range of interest ("VOI", Values of Interest).
If you really just want to create a windowed image, you can use your code to construct the image from the file contents and use one of the createXXXImage methods that the DicomImage provides.
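For example, applying the window values read above before handing the pixels to OpenCV could look roughly like this (a sketch built on the question's own code; error handling omitted):

// Apply the VOI window, then render 8-bit pixels for OpenCV
image.setWindow(windowCenter, windowWidth);
uchar *pixelData = (uchar *)(image.getOutputData(8));
cv::Mat mat(int(image.getHeight()), int(image.getWidth()), CV_8U, pixelData);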
HTH

Correct display of DICOM images ITK-VTK (images too dark)

I read DICOM images with ITK using itk::ImageSeriesReader and itk::GDCMImageIO. After reading, I flip the images with itk::FlipImageFilter (to get the right orientation) and convert the ITK image data to vtkImageData using itk::ImageToVTKImageFilter. I visualize the images with VTK using vtkResliceImageViewer in a QVTKWidget2.
I set:
m_imageViewer[i]->SetColorWindow(windowWidthTagValue);   // vtkResliceImageViewer, value from tag (0028,1051)
m_imageViewer[i]->SetColorLevel(windowCenterTagValue);   // value from tag (0028,1050)
and I set the following black & white lookup table:
vtkLookupTable* lutbw = vtkLookupTable::New();
lutbw->SetTableRange(0,1000);
lutbw->SetSaturationRange(0,0);
lutbw->SetHueRange(0,0);
lutbw->SetValueRange(0,1);
lutbw->Build();
The images shown in my software are much darker than the same images shown in other software; I cannot get the same effect as other DICOM viewers.
My software's images are on the right and the other software's are on the left; even when I use some other lookup table (for example "Flow"), I cannot get the same effect (second row of images): my image on the right is much darker than the other.
What am I missing? Why are my images darker, and what can I do? I have researched DICOM and ITK/VTK a lot and cannot find a good solution; any help is appreciated.
Please check the values of Rescale Slope (0028,1053) and Rescale Intercept (0028,1052), and apply the Modality LUT transformation before applying the window level.
Your dataset may have VOI LUT Function (0028,1056) attribute value of "SIGMOID" instead of "LINEAR".
I extracted the image data from one of your DICOM files (brain_009.dcm) and looked at its histogram. It looks like the minimum value stored in the image is 0 and the maximum value is 960, regardless of whether the data is interpreted as signed or unsigned. Also, the Window Width (0028,1051) has an invalid value of "0", so you cannot use it for displaying the image.
So your default display could set the Window Width to 960 and Window Center to half the window width plus the minimum value.
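On the VTK side, one way to apply the Modality LUT before setting the window is vtkImageShiftScale, which computes (input + shift) * scale. A sketch, assuming rescaleSlope (non-zero) and rescaleIntercept were read from tags (0028,1053) and (0028,1052), and that convertedImage is the vtkImageData produced by itk::ImageToVTKImageFilter:

// (input + intercept/slope) * slope  ==  slope * input + intercept
vtkSmartPointer<vtkImageShiftScale> rescale =
vtkSmartPointer<vtkImageShiftScale>::New();
rescale->SetInputData(convertedImage);
rescale->SetShift(rescaleIntercept / rescaleSlope);
rescale->SetScale(rescaleSlope);
rescale->SetOutputScalarTypeToDouble();
rescale->Update();
m_imageViewer[i]->SetInputData(rescale->GetOutput());
m_imageViewer[i]->SetColorWindow(windowWidthTagValue);   // (0028,1051)
m_imageViewer[i]->SetColorLevel(windowCenterTagValue);   // (0028,1050)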

OpenCV measure rectangular image size

I have an app that finds an object in a frame and uses warpPerspective to correct the image to be square. In the course of doing so you specify an output image size. However, I want to know how to do so without harming its apparent size. How can I unwarp the 4-corners of the image without changing the size of the image? I don't need the image itself, I just want to measure its height and width in pixels within the original image.
Get a transform matrix that will square up the corners.
std::vector<cv::Point2f> transformedPoints;
cv::Mat M = cv::getPerspectiveTransform(points, objectCorners);
cv::perspectiveTransform(points, transformedPoints, M);
This squares up the image, but in the objectCorners coordinate system, which runs from -0.5f to 0.5f, not in the original image plane.
BoundingRect almost does what I want.
cv::Rect boundingRectangle = cv::boundingRect(points);
But as the documentation states
The function calculates and returns the minimal up-right bounding rectangle for the specified point set.
And what I want is the bounding rectangle after it has been squared-up, not without squaring it up.
Based on my understanding of your post, here is something that should help you.
OpenCV perspective transform example.
Update: if that still doesn't help you find the height and width within the image, take the minimum bounding rect of the points:
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
As the minAreaRect reference on OpenCV's website states
Finds a rotated rectangle of the minimum area enclosing the input 2D point set.
You can call box.size and get the width and height.
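Putting that together, a minimal sketch (points is assumed to be the std::vector<cv::Point2f> holding the four detected corners):

// Rotated rectangle of minimum area around the four corners
cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
float widthInPixels  = box.size.width;
float heightInPixels = box.size.height;
std::cout << "width: "  << widthInPixels
          << ", height: " << heightInPixels << std::endl;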