Get pixel data from DICOM file .dcm - c++

How can I get the pixel data from a .dcm file as an array variable using the DCMTK library?
I'm using this site as a reference, but it didn't work; the resulting data is very different from the original picture.

The code you referenced just extracts the pixel data from the corresponding attribute. But there is much more to this. Different header elements determine how the pixel data is to be interpreted. For this, the class DicomImage can be used. You can either use it to normalize the data to an array of (signed|unsigned) (char|short|int) using getInterData() or for rendering purposes using getOutputData().
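For illustration, here is a minimal sketch (not the code from the page you linked) of reading a rendered 8-bit frame through DicomImage; the file name image.dcm is a placeholder, and for color images the dcmimage module would additionally be needed:

#include "dcmtk/config/osconfig.h"
#include "dcmtk/dcmimgle/dcmimage.h"
#include <iostream>

int main()
{
    DicomImage image("image.dcm");                       // placeholder path
    if (image.getStatus() != EIS_Normal)
    {
        std::cerr << "cannot load image: "
                  << DicomImage::getString(image.getStatus()) << std::endl;
        return 1;
    }
    // Render the first frame to 8 bits per sample (modality/VOI transforms applied).
    const Uint8 *pixels = static_cast<const Uint8 *>(image.getOutputData(8));
    if (pixels != NULL)
    {
        const unsigned long width  = image.getWidth();
        const unsigned long height = image.getHeight();
        std::cout << width << "x" << height << " pixels rendered" << std::endl;
        // For a monochrome image, pixels now points to width * height samples.
    }
    return 0;
}

getInterData() would give you the internal, normalized representation instead of data prepared for display.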

Related

Create raster from XYZ

I have a data set consisting of XYZ data. The dimensions are 5587 rows by 3 columns.
I try to use rasterFromXYZ from the raster package but I get the following error:
Error in rasterFromXYZ(DATA) : x cell sizes are not regular
Any help would be appreciated.
You are not providing example data, which makes it hard to help you out. What the message means is that your data does not appear to be regularly spaced.
Instead of rasterFromXYZ you can use rasterize, in which case you specify the required geometry and then transfer the values to it.
Depending on your goals, you may also use interpolate.

Save raw RGB values to JPG using libjpeg

I have a canvas that is represented by a 2D array of the type colorData.
The class colorData simply holds the RGB value of each pixel.
I have been looking at examples of people using libjpeg to write a jpg but none of them seem to use the RGB values.
Is there a way to save raw RGB values to a jpeg using libjpeg? Or better yet, is there an example of code using the raw RGB data for the jpeg data?
Look in example.c in the libjpeg source. It gives a complete example of how to write a JPEG file using RGB data.
The example uses a buffer variable image_buffer and height and width variables image_width and image_height. You will need to adapt it to copy the RGB values from your ColorData class and place them into the image buffer (this can be done one row at a time).
Fill an array of bytes with the RGB data (3 bytes for each pixel) and then set row_pointer[0] to point to that array before calling jpeg_write_scanlines().
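Something along these lines (a hedged sketch rather than example.c itself; get_rgb() is a hypothetical accessor standing in for however you read a pixel out of your ColorData canvas, and error handling is kept minimal):

#include <stdio.h>
#include <vector>
extern "C" {
#include <jpeglib.h>
}

// Hypothetical accessor: channel c (0=R, 1=G, 2=B) of the canvas pixel at (x, y).
extern unsigned char get_rgb(int x, int y, int c);

void write_jpeg(const char *filename, int image_width, int image_height)
{
    jpeg_compress_struct cinfo;
    jpeg_error_mgr jerr;
    cinfo.err = jpeg_std_error(&jerr);
    jpeg_create_compress(&cinfo);

    FILE *outfile = fopen(filename, "wb");
    if (outfile == NULL)
        return;
    jpeg_stdio_dest(&cinfo, outfile);

    cinfo.image_width = image_width;
    cinfo.image_height = image_height;
    cinfo.input_components = 3;          // 3 bytes per pixel: R, G, B
    cinfo.in_color_space = JCS_RGB;
    jpeg_set_defaults(&cinfo);
    jpeg_set_quality(&cinfo, 90, TRUE);

    jpeg_start_compress(&cinfo, TRUE);

    std::vector<JSAMPLE> row(image_width * 3);
    JSAMPROW row_pointer[1];
    while (cinfo.next_scanline < cinfo.image_height)
    {
        // Copy one row of RGB values from the canvas into the packed row buffer.
        for (int x = 0; x < image_width; ++x)
        {
            row[3 * x + 0] = get_rgb(x, cinfo.next_scanline, 0);
            row[3 * x + 1] = get_rgb(x, cinfo.next_scanline, 1);
            row[3 * x + 2] = get_rgb(x, cinfo.next_scanline, 2);
        }
        row_pointer[0] = row.data();
        jpeg_write_scanlines(&cinfo, row_pointer, 1);
    }

    jpeg_finish_compress(&cinfo);
    fclose(outfile);
    jpeg_destroy_compress(&cinfo);
}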

Distorted Image in Secondary Capture DICOM file

I want to create a secondary capture DICOM file as per the requirements.
I created one, but the image (the pixel data in tag (7FE0,0010)) looks distorted. I am reading a JPEG image using Gdiplus::Bitmap, calling ::LockBits and using 'btmpData.Scan0' to get the pixel data, which is then inserted into the pixel data tag (7FE0,0010). But when viewing it in a DICOM viewer, it appears distorted. The DICOM tags Rows, Columns and PlanarConfiguration are set properly. BitsAllocated, BitsStored and HighBit are given the values 8, 8 and 7 respectively.
While googling I learned that the bytes might be in BGR order instead of RGB, so I tried swapping the 'B' and 'R' values.
But the issue still exists. Could anybody help me?
Apparently you forgot to take the GDI+ Stride into account. When you call LockBits, each row of pixel data is aligned to a 4-byte boundary, so BitmapData::Stride (the byte offset from one row to the next) can be larger than width * 3 bytes. If you copy from Scan0 as if it were a tightly packed RGB buffer, every row after the first ends up shifted, which produces exactly this kind of distortion.
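As a hedged sketch (it assumes PixelFormat24bppRGB data, a GDI+ session already started with GdiplusStartup, and illustrative names), copying row by row while honouring Stride looks roughly like this:

#include <windows.h>
#include <gdiplus.h>
#include <vector>

// Copy a 24-bpp bitmap into a tightly packed RGB buffer, honouring BitmapData::Stride.
std::vector<BYTE> CopyPackedRGB(Gdiplus::Bitmap &bmp)
{
    const INT width  = static_cast<INT>(bmp.GetWidth());
    const INT height = static_cast<INT>(bmp.GetHeight());

    Gdiplus::BitmapData data;
    Gdiplus::Rect rect(0, 0, width, height);
    bmp.LockBits(&rect, Gdiplus::ImageLockModeRead, PixelFormat24bppRGB, &data);

    std::vector<BYTE> packed(static_cast<size_t>(width) * height * 3);
    const BYTE *src = static_cast<const BYTE *>(data.Scan0);
    for (INT y = 0; y < height; ++y)
    {
        const BYTE *row = src + y * data.Stride;       // Stride includes the row padding
        for (INT x = 0; x < width; ++x)
        {
            // GDI+ stores 24-bpp pixels as B, G, R; DICOM RGB expects R, G, B.
            packed[(y * width + x) * 3 + 0] = row[x * 3 + 2];
            packed[(y * width + x) * 3 + 1] = row[x * 3 + 1];
            packed[(y * width + x) * 3 + 2] = row[x * 3 + 0];
        }
    }
    bmp.UnlockBits(&data);
    return packed;   // suitable for (7FE0,0010) with PlanarConfiguration = 0
}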

How do images work in opencl kernel?

I'm trying to find ways to copy multidimensional arrays from host to device in OpenCL, and thought an approach was to use an image... which can be a 1-, 2-, or 3-dimensional object. However, I'm confused because when reading a pixel from an array, the examples use vector datatypes. Normally I would think double pointer, but it doesn't sound like that is what is meant by vector datatypes. Anyway, here are my questions:
1) What is actually meant by vector datatype? Why wouldn't we just specify 2 or 3 indices when denoting pixel coordinates? It looks like a single value such as float2 is being used to denote coordinates, but that makes no sense to me. I'm looking at the functions read_imageui and read_image.
2) Can the input image just be a subset of the entire image, and the sampler be a subset of the input image? I don't understand how the coordinates are actually specified here either, since read_image() only seems to take a single value for the input and a single value for the sampler.
3) If doing linear algebra, should I just bite the bullet and translate 1-D array data from the buffer into multi-dim arrays in OpenCL?
4) I'm still interested in images, so even if what I want to do is not best for images, could you still explain questions 1 and 2?
Thanks!
EDIT
I wanted to refine my question and ask: in the following Khronos documentation they define...
int4 read_imagei (
image2d_t image,
sampler_t sampler,
int2 coord)
But nowhere can I find what image2d_t's definition or structure is supposed to be. The same thing goes for sampler_t and int2 coord. They seem like structs to me, or pointers to structs, since OpenCL is supposed to be based on ANSI C, but what are the fields of these structs, and how do I write the coord with what looks like a scalar?! I've seen the notation (int2)(x,y), but that's not ANSI C; that looks like Scala, haha. Things seem conflicting to me. Thanks again!
In general you can read from images in three different ways:
direct pixel access, no sampling
sampling, normalized coordinates
sampling, integer coordinates
The first one is what you want, that is, you pass integer pixel coordinates like (10, 43) and it will return the contents of the image at that point, with no filtering whatsoever, as if it were a memory buffer. You can use the read_image*() family of functions which take no sampler_t param.
The second one is what most people want from images, you specify normalized image coords between 0 and 1, and the return value is the interpolated image color at the specified point (so if your coordinates specify a point in between pixels, the color is interpolated based on surrounding pixel colors). The interpolation, and the way out-of-bounds coordinates are handled, are defined by the configuration of the sampler_t parameter you pass to the function.
The third one is the same as the second one, except the texture coordinates are not normalized, and the sampler needs to be configured accordingly. In some sense the third way is closer to the first, and the only additional feature it provides is the ability to handle out-of-bounds pixel coordinates (for instance, by wrapping or clamping them) instead of you doing it manually.
Finally, the different versions of each function, e.g. read_imagef, read_imagei, read_imageui are to be used depending on the pixel format of your image. If it contains floats (in each channel), use read_imagef, if it contains signed integers (in each channel), use read_imagei, etc...
Writing to an image on the other hand is straightforward, there are write_image{f,i,ui}() functions that take an image object, integer pixel coordinates and a pixel color, all very easy.
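To tie this together, here is a hedged OpenCL C sketch (illustrative kernel name, and it assumes a float/UNORM image format) that reads with an unnormalized integer-coordinate sampler and writes the pixel to a second image:

// Unnormalized integer coordinates, no filtering, clamp at the image border.
__constant sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                           CLK_ADDRESS_CLAMP_TO_EDGE   |
                           CLK_FILTER_NEAREST;

__kernel void copy_image(__read_only image2d_t src, __write_only image2d_t dst)
{
    // (int2)(x, y) builds a two-component vector literal; this is OpenCL C
    // syntax, not plain ANSI C, which is why it looks unusual.
    int2 coord = (int2)((int)get_global_id(0), (int)get_global_id(1));

    // read_imagef because a float/UNORM image format is assumed here;
    // an integer format would call for read_imagei / read_imageui instead.
    float4 pixel = read_imagef(src, smp, coord);

    write_imagef(dst, coord, pixel);
}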
Note that you cannot read and write to the same image in the same kernel! (I don't know if recent OpenCL versions have changed that). In general I would recommend using a buffer if you are not going to be using images as actual images (i.e. input textures that you sample or output textures that you write to only once at the end of your kernel).
About the image2d_t, sampler_t types, they are OpenCL "pseudo-objects" that you can pass into a kernel from C (they are reserved types). You send your image or your sampler from the C side into clSetKernelArg, and the kernel gets back a sampler_t or an image2d_t in the kernel's parameter list (just like you pass in a buffer object and it gets a pointer). The objects themselves cannot be meaningfully manipulated inside the kernel, they are just handles that you can send into the read_image/write_image functions, along with a few others.
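On the host side, a hedged sketch of handing images (and optionally a sampler) to that kernel could look like this; it uses the OpenCL 1.1-style clCreateImage2D / clCreateSampler calls (deprecated in favour of clCreateImage in later versions), and the context, kernel and pixel buffer are assumed to come from the usual setup code that is omitted here:

#include <CL/cl.h>

/* context, kernel, width, height and host_pixels come from the usual
   platform/device/program setup, which is omitted here. */
void bind_images(cl_context context, cl_kernel kernel,
                 size_t width, size_t height, float *host_pixels)
{
    cl_image_format fmt;
    fmt.image_channel_order     = CL_RGBA;
    fmt.image_channel_data_type = CL_FLOAT;

    cl_int err;
    cl_mem src = clCreateImage2D(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                 &fmt, width, height, 0 /* row pitch */, host_pixels, &err);
    cl_mem dst = clCreateImage2D(context, CL_MEM_WRITE_ONLY,
                                 &fmt, width, height, 0, NULL, &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &src);   /* lands in the image2d_t parameter */
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &dst);

    /* If the kernel declared a sampler_t parameter instead of a __constant
       sampler, a host-created sampler would be passed the same way:
       cl_sampler smp = clCreateSampler(context, CL_FALSE,
                                        CL_ADDRESS_CLAMP_TO_EDGE, CL_FILTER_NEAREST, &err);
       clSetKernelArg(kernel, 2, sizeof(cl_sampler), &smp); */
}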
As for the "actual" low-level difference between images and buffers, GPUs often have specially reserved texture memory that is highly optimized for "read often, write once" access patterns, with dedicated texture-sampling hardware and texture caches to speed up scattered reads, mipmapping, etc.
On the CPU there is probably no underlying difference between an image and a buffer, and your runtime likely implements both as memory arrays while enforcing image semantics.

WebP lossless format overview

I am reading the official WebP lossless bitstream spec and I have a feeling that the document is missing some explanations.
Let me describe some fragments of the specification:
1. Introduction - clear
2. Riff header - clear
3. Transformations
The transformations are used only for the main level ARGB image: the
subresolution images have no transforms, not even the 0 bit indicating
the end-of-transforms.
Nowhere earlier was it mentioned that the container holds some sub-resolution images. What are they? Where are they described, if not in the specification? How do they contribute to the final image?
Then, in the Predictor transform paragraph:
We divide the image into squares...
...what image? The main image or a sub-resolution image? What if the image cannot be divided into squares (apart from pixel-sized squares)?
The first 4 bits of prediction data define the block width and height
in number of bits. The number of block columns, block_xsize, is used
in indexing two-dimensionally.
Does this mean that the image width is block_xsize * block_width ?
The transform data contains the prediction mode for each block of the image.
In what way, what format?
I don't know why I am having a hard time understanding this. Maybe it is because I am not a native English speaker, or because the description is too laconic.
I'd appreciate any help in decoding this specification :)
It was mentioned earlier. Right at the top of the document it says:
The format uses subresolution images, recursively embedded into the
format itself, for storing statistical data about the images, such as
the used entropy codes, spatial predictors, color space conversion,
and color table.
These are arrays (or a vector in the case of the color table) of data where each element applies to a block of pixels in the actual image, e.g. a 16x16 block. These "subresolution images" are not themselves subsamples of the image being compressed.
The format description calls them images because they are stored in exactly the same way as the main image. The transforms are instructions that the decoder applies to the decompressed main image data. The entropy image is used while decompressing the main image, since it provides the Huffman codes for each block.
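To make the per-block idea concrete, here is a hedged sketch (names like prediction_image are illustrative, not taken from the spec's reference code) of how a decoder would look up the spatial-predictor mode for a pixel; it also shows how block_xsize relates to the image width:

#include <stdint.h>

/* size_bits comes from the prediction transform data (block side = 2^size_bits);
   the mode for each block is stored in the green channel of the subresolution
   "prediction image". */
static int predictor_mode_for_pixel(const uint32_t *prediction_image,
                                    int size_bits, int image_width,
                                    int x, int y)
{
    int block_width = 1 << size_bits;
    /* Ceiling division: block_xsize * block_width >= image_width, so the
       rightmost column of blocks may cover less than a full block_width. */
    int block_xsize = (image_width + block_width - 1) >> size_bits;
    int block_index = (y >> size_bits) * block_xsize + (x >> size_bits);
    return (prediction_image[block_index] >> 8) & 0xff;  /* green channel of ARGB */
}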