Distorted Image in Secondary Capture DICOM file - c++

I want to create a secondary capture DICOM file as per the requirements.
I created one, but the image (pixel data in the tag (7FE0,0010)) looks distorted. I am reading a JPEG image using Gdiplus::Bitmap, calling ::LockBits and using 'btmpData.Scan0' to get the pixel data, which I then insert into the pixel data tag (7FE0,0010). But when viewing the file in a DICOM viewer, the image comes out distorted. The DICOM tags Rows, Columns and PlanarConfiguration are updated properly, and BitsAllocated, BitsStored and HighBit are given the values 8, 8 and 7 respectively.
While googling I came to know that the pixel data might be stored in BGR order instead of RGB, so I tried swapping the 'B' and 'R' components.
But the issue still exists. Could anybody help me?

Apparently you forgot to take into account Stride support from GDI+: each scanline returned by LockBits is padded so that its length is a multiple of 4 bytes, so you cannot treat the locked buffer as one tightly packed RGB block and copy it straight into the pixel data element.
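Concretely, copy the locked bits one scanline at a time rather than with a single memcpy. A rough sketch (untested; it assumes a 24-bit source bitmap, that GDI+ has already been initialised with GdiplusStartup, and the function name is illustrative):

#include <windows.h>
#include <gdiplus.h>
#include <vector>

// Copy GDI+ pixel data row by row, honouring Stride (each scanline is padded
// to a multiple of 4 bytes), and swap B/R so the result is tightly packed RGB
// suitable for the (7FE0,0010) pixel data element.
std::vector<BYTE> ExtractRgbPixelData(Gdiplus::Bitmap& bitmap)
{
    const UINT cols = bitmap.GetWidth();
    const UINT rows = bitmap.GetHeight();

    Gdiplus::BitmapData btmpData;
    Gdiplus::Rect rect(0, 0, cols, rows);
    bitmap.LockBits(&rect, Gdiplus::ImageLockModeRead,
                    PixelFormat24bppRGB, &btmpData);

    std::vector<BYTE> pixelData(rows * cols * 3);       // no row padding in DICOM
    const BYTE* src = static_cast<const BYTE*>(btmpData.Scan0);

    for (UINT y = 0; y < rows; ++y)
    {
        const BYTE* row = src + y * btmpData.Stride;    // advance by Stride, not cols * 3
        BYTE* dst = &pixelData[y * cols * 3];
        for (UINT x = 0; x < cols; ++x)
        {
            dst[x * 3 + 0] = row[x * 3 + 2];            // R (GDI+ stores BGR)
            dst[x * 3 + 1] = row[x * 3 + 1];            // G
            dst[x * 3 + 2] = row[x * 3 + 0];            // B
        }
    }

    bitmap.UnlockBits(&btmpData);
    return pixelData;
}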

Related

Correct display of DICOM images ITK-VTK (images too dark)

I read DICOM images with ITK using itk::ImageSeriesReader and itk::GDCMImageIO. After reading, I flip the images with itk::FlipImageFilter (to get the right orientation) and convert the ITK image data to vtkImageData using itk::ImageToVTKImageFilter. I visualize the images with VTK using vtkResliceImageViewer in a QVTKWidget2.
I set:
// m_imageViewer[i] is a vtkResliceImageViewer
m_imageViewer[i]->SetColorWindow(windowWidthTAGvalue);   // value of tag (0028,1051)
m_imageViewer[i]->SetColorLevel(windowCenterTAGvalue);   // value of tag (0028,1050)
and I set the following black & white lookup table:
vtkLookupTable* lutbw = vtkLookupTable::New();
lutbw->SetTableRange(0,1000);
lutbw->SetSaturationRange(0,0);
lutbw->SetHueRange(0,0);
lutbw->SetValueRange(0,1);
lutbw->Build();
The images shown in my software are much darker than the same images shown in other software; I cannot get the same result as other DICOM viewers.
My software's image is on the right and the other software's image is on the left. Also, when I use some other lookup table (in this example, Flow) I cannot get the same result (second row of images): my image on the right is much darker than the other one.
What am I missing? Why are my images darker, and what can I do? I have researched DICOM and ITK/VTK a lot and cannot find a good solution; any help is appreciated.
Please check the values of Rescale Slope (0028,1053) and Rescale Intercept (0028,1052) and apply the Modality LUT transformation before applying the window level.
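A minimal sketch of that order of operations (assuming a purely linear Modality LUT, i.e. no Modality LUT Sequence; the function name is illustrative):

// Apply the linear Modality LUT before any window/level handling.
// storedValue comes from the pixel data, slope and intercept from
// (0028,1053) and (0028,1052); the window in (0028,1050)/(0028,1051)
// refers to this rescaled value (e.g. Hounsfield units for CT), not
// to the raw stored value.
double ToModalityValue(double storedValue, double slope, double intercept)
{
    return slope * storedValue + intercept;
}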
Your dataset may have VOI LUT Function (0028,1056) attribute value of "SIGMOID" instead of "LINEAR".
I extracted the image data from one of your DICOM files (brain_009.dcm) and looked at the histogram of the image data. It looks like the minimum value stored in the image is 0 and the maximum value is 960, regardless of whether the data is interpreted as signed or unsigned. Also, the Window Width (0028,1051) has an invalid value of "0", so you cannot use it for displaying the image.
So your default display could set the Window Width to 960 and the Window Center to half the window width plus the minimum value.
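For example (a sketch; minValue and maxValue would come from scanning the pixel data, and m_imageViewer is the viewer from the question's code):

// Derive a usable default window when (0028,1051) holds an invalid value.
double minValue = 0.0, maxValue = 960.0;              // from the histogram above
double windowWidth  = maxValue - minValue;            // 960
double windowCenter = minValue + windowWidth / 2.0;   // 480
m_imageViewer[i]->SetColorWindow(windowWidth);
m_imageViewer[i]->SetColorLevel(windowCenter);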

How to detect image location before stitching with OpenCV / C++

I'm trying to merge/stitch 2 images together but found that the default stitcher class in OpenCV could not handle my images.
So I started to write my own.
Unfortunately the images are too large to attach to this message (they are both 12600x9000 pixels in size), so I'll try to explain as well as possible.
The 2 images are not pictures taken by a camera but TIFF files extracted from a PDF file.
The images themselves are actually CAD drawings, so there are not many gradients in them, and therefore I think the default stitcher class could not handle them.
So far, I managed to extract the features and match them.
Also, I used the following well-known example to stitch them together:
Mat WarpedImage;
cv::warpPerspective(img_2, WarpedImage, homography, cv::Size(2 * img_2.cols, 2 * img_2.rows));  // warp img_2 into img_1's frame
Mat half(WarpedImage, Rect(0, 0, img_1.cols, img_1.rows));   // ROI over the top-left corner
img_1.copyTo(half);                                          // paste img_1 into that ROI
I sort of made it fit, but my problem is that in my case the 2 images could be aligned vertically or horizontally.
By default, all stitching examples on the internet assume the first image is the left image and the second image is the right image.
So my first question would be:
How can I detect whether the second image lies to the left, right, above or below the first image, and create a properly sized new image?
Secondly..
Currently I'm getting the proper image; however, because I don't have decent code to determine the ideal width and height of the new image, I end up with a lot of black/empty space in it.
What would be the best C++ code to remove those black areas?
(I'm seeing a lot of Python scripts on the net, but no C++ examples of this, and I have zero Python skills.)
Thank you very much in advance for your help.
Greetings,
Floris.
You can reproject the corners of the second image with perspectiveTransform. With the transformed points you can find the relative position of your image and calculate the new image size that will fit both images. This will also let you deal with the black areas, since you have the boundaries of the two images.
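A minimal sketch of that approach (untested; it reuses img_1, img_2 and homography from your snippet and assumes homography maps img_2 into img_1's coordinate frame):

#include <opencv2/opencv.hpp>
using namespace cv;

// Reproject the corners of img_2 to find where it lands relative to img_1.
std::vector<Point2f> corners = {
    {0.f, 0.f}, {(float)img_2.cols, 0.f},
    {(float)img_2.cols, (float)img_2.rows}, {0.f, (float)img_2.rows}
};
std::vector<Point2f> warpedCorners;
perspectiveTransform(corners, warpedCorners, homography);

// The bounding box containing both img_1 and the warped img_2 tells you the
// relative position (left/right/above/below) and the required output size.
Rect bbox = boundingRect(warpedCorners) | Rect(0, 0, img_1.cols, img_1.rows);

// Shift everything so the bounding box starts at (0,0), then warp into a
// canvas that is exactly large enough -- no black borders left to crop.
Mat shift = (Mat_<double>(3, 3) << 1, 0, -bbox.x,
                                   0, 1, -bbox.y,
                                   0, 0, 1);
Mat canvas;
warpPerspective(img_2, canvas, shift * homography, bbox.size());
img_1.copyTo(canvas(Rect(-bbox.x, -bbox.y, img_1.cols, img_1.rows)));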

WebP lossless format overview

I am reading the official WebP lossless bitstream specification and I have a feeling that the document is missing some explanation.
Let me describe some fragments of the specification:
1. Introduction - clear
2. Riff header - clear
3. Transformations
The transformations are used only for the main level ARGB image: the
subresolution images have no transforms, not even the 0 bit indicating
the end-of-transforms.
Nowhere earlier was it mentioned that the container holds some sub-resolution images. What are they? Where are they described, if not in the specification? How do they contribute to the final image?
Then, in the Predictor transform paragraph:
We divide the image into squares...
...what image? The main image or a sub-resolution image? What if the image cannot be divided evenly into squares (apart from pixel-sized squares)?
The first 4 bits of prediction data define the block width and height
in number of bits. The number of block columns, block_xsize, is used
in indexing two-dimensionally.
Does this mean that the image width is block_xsize * block_width ?
The transform data contains the prediction mode for each block of the image.
In what way, what format?
I don't know why I am having a hard time understanding this. Maybe it is because I am not a native English speaker or because the description is too laconic.
I'd appreciate any help in decoding this specification :)
It was mentioned earlier. Right at the top of the document it says:
The format uses subresolution images, recursively embedded into the
format itself, for storing statistical data about the images, such as
the used entropy codes, spatial predictors, color space conversion,
and color table.
These are arrays (or a vector in the case of the color table) of data where each element applies to a block of pixels in the actual image, e.g. a 16x16 block. These "subresolution images" are not themselves subsamples of the image being compressed.
The format description calls them images because they are stored exactly like the main image is in the format. The transforms are instructions to the decoder to apply to the decompressed main image data. The entropy image is used to decompress the main image, by virtue of providing the Huffman codes for each block.
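To the block-size question: going by the quoted text, the block side is a power of two, the blocks tile the image, and block_xsize is the rounded-up number of block columns, so block_xsize * block_width is greater than or equal to the image width (equal only when the width is an exact multiple of the block size). A sketch of the indexing (illustrative only; the spec's own pseudo-code is authoritative):

#include <cstdint>
#include <vector>

// One prediction mode is stored per block in the "subresolution image";
// the pixel at (x, y) belongs to block (x >> size_bits, y >> size_bits).
int PredictionModeAt(int x, int y, int image_width, int size_bits,
                     const std::vector<uint8_t>& prediction_modes)
{
    int block_side  = 1 << size_bits;                              // block width == height
    int block_xsize = (image_width + block_side - 1) >> size_bits; // ceil(width / side)
    int block_index = (y >> size_bits) * block_xsize + (x >> size_bits);
    return prediction_modes[block_index];
}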

C++: How to interpret a byte array representation of an image?

I'm trying to work with this camera SDK, and let's say the camera has this function called CameraGetImageData(BYTE* data), which I assume takes in a byte array, modifies it with the image data, and then returns a status code based on success/failure. The SDK provides no documentation whatsoever (not even code comments), so I'm just guesstimating here. Here's a code snippet of what I think works:
BYTE* data = new BYTE[10000000]; // an array of an arbitrary large size, I'm not
// sure what the exact size needs to be so I
// made it large
CameraGetImageData(data);
// Do stuff here to process/output image data
I've run the code w/ breakpoints in Visual Studio and can confirm that the CameraGetImageData function does indeed modify the array. Now my question is, is there a standard way for cameras to output data? How should I start using this data and what does each byte represent? The camera captures in 8-bit color.
Take pictures of pure red, pure green and pure blue. See what comes out.
Also, I'd make the array 100 million, not 10 million if you've got the memory, at least initially. A 10 megapixel camera using 24 bits per pixel is going to use 30 million bytes, bigger than your array. If it does something crazy like store 16 bits per colour it could take up to 60 million or 80 million bytes.
You could fill this big array with data before passing it. For example fill it with '01234567' repeated. Then it's really obvious what bytes have been written and what bytes haven't, so you can work out the real size of what's returned.
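A rough sketch of that trick (BYTE and CameraGetImageData come from your snippet's SDK/Windows headers; the pattern and buffer size are arbitrary, and a std::vector stands in for the raw new[]):

#include <vector>
#include <cstddef>

// Fill the buffer with a recognisable repeating pattern before the SDK call,
// then see how far the camera overwrote it to estimate the real frame size.
std::vector<BYTE> data(100u * 1000u * 1000u);                   // ~100 MB, generously sized
for (std::size_t i = 0; i < data.size(); ++i)
    data[i] = static_cast<BYTE>('0' + i % 8);                   // "01234567" repeated

CameraGetImageData(data.data());

std::size_t written = data.size();
while (written > 0 && data[written - 1] == static_cast<BYTE>('0' + (written - 1) % 8))
    --written;
// 'written' now approximates how many bytes the SDK filled in (exact unless
// the last image bytes happen to coincide with the pattern).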
I don't think there is a standard, but you can try to identify which values are what by putting solid-color images in front of the camera, so that all pixels are approximately the same color. Knowing what color should be stored in each pixel, you may be able to work out how color is represented in your array. I would go with black, white, red, green and blue images.
But also consider finding a better SDK that has documentation, because just allocating a huge array is really bad design.
You should check the documentation on your camera SDK, since there's no "standard" or "common" way for data output. It can be raw data, it can be RGB data, it can even be already compressed. If the camera vendor doesn't provide any information, you could try to find some libraries that handle most common formats, and try to pass the data you have to see what happens.
Without even knowing the type of the camera, this question is nearly impossible to answer.
If it is a scientific camera, chances are good that it adheres to the IEEE 1394 (aka IIDC or DCAM) standard. I have personally worked with such a camera made by Hamamatsu using this library to interface with the camera.
In my case the camera output was just raw data. The camera itself was monochrome and each pixel had a depth resolution of 12 bits. Therefore, each pixel intensity was stored as a 16-bit unsigned value in the result array. The size of the array was simply width * height * 2 bytes, where width and height are the image dimensions in pixels and the factor 2 accounts for the 16 bits per pixel. The width and height were known a priori from the chosen camera mode.
If you have the dimensions of the result image, try to dump your byte array into a file, load the result in either Python or Matlab, and just try to visualize the content. Another possibility is to load this raw file with an image editor such as ImageJ and hope to get something out of it.
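A small sketch of that dump (width, height and the 2 bytes per pixel are assumptions matching the monochrome 16-bit case above; 'data' is the buffer the SDK filled, and the filename is arbitrary):

#include <cstdio>

// Write the raw buffer to disk so it can be opened as a raw image in
// ImageJ/Python/Matlab, supplying width, height and bit depth on import.
std::FILE* f = std::fopen("frame.raw", "wb");
if (f) {
    std::fwrite(data, 1, static_cast<std::size_t>(width) * height * 2, f);
    std::fclose(f);
}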
Good luck!
I hope this question's solution will help you: https://stackoverflow.com/a/3340944/291372
Actually you've got an array of pixels (assume 1 byte per pixel if your camera captures in 8-bit). What you need is just to determine the width and height. After that you can try to restore a bitmap image from your byte array.

1bpp Monochromatic BMP

I ran a demo BMP file format helper program "DDDemo.exe" to help me visualize the format of a 32x1 pixel BMP file (monochromatic). I'm okay with the two header sections but don't seem to understand the color table and pixel bits portions. I made two 32x1 pixel BMP files to help me compare (please see attached).
Can someone assist me in understanding how the "pixel bits" relate to the color map?
UPDATE: After some trial and error I finally was able to write a 32x1 pixel monochromatic BMP. Although it has different pixel bits than the attached images, this tool helped with the header and color mapping concept. Thank you for everyone's input.
An unset bit in the PIXEL BITS refers to the first color table entry (0,0,0), black, and a set bit refers to the second color table entry (ff,ff,ff), white.
"The 1-bit per pixel (1bpp) format supports 2 distinct colors, (for example: black and white, or yellow and pink). The pixel values are stored in each bit, with the first (left-most) pixel in the most-significant bit of the first byte. Each bit is an index into a table of 2 colors. This Color Table is in 32bpp 8.8.8.0.8 RGBAX format. An unset bit will refer to the first color table entry, and a set bit will refer to the last (second) color table entry." - BMP file format
The color table for these images is simply indicating that there are two colors in the image:
Color 0 is (00, 00, 00) -- pure black
Color 1 is (FF, FF, FF) -- pure white
The image compression method shown (BI_RGB -- uncompressed) doesn't make sense with the given pixel data and images, though.
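For completeness, a small sketch of how those pixel bits map onto the two color-table entries (illustrative; it assumes the scanline bytes and the two-entry palette have already been read from the file, and it ignores the 4-byte row padding and bottom-up row order of BMP):

#include <cstdint>
#include <vector>

struct Rgb { uint8_t b, g, r; };                 // BMP palette entries are stored BGR(A)

// Decode one 1bpp scanline: the left-most pixel sits in the most significant bit;
// an unset bit selects palette[0] (black here), a set bit selects palette[1] (white).
std::vector<Rgb> DecodeRow1bpp(const uint8_t* rowBytes, int width, const Rgb palette[2])
{
    std::vector<Rgb> pixels(width);
    for (int x = 0; x < width; ++x) {
        int bit = (rowBytes[x / 8] >> (7 - x % 8)) & 1;
        pixels[x] = palette[bit];
    }
    return pixels;
}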