How to get image data into an array with ImageMagick? - c++

I don't know where to start.
I want to read an image with ImageMagick into an array.
At this point there is no error:
My_image.read("c:\\a.jpg");
I want to put the image data I have already read into an array, and then write it to a file using the ImageMagick library.
Here is my code:
...
master.read("c:\\a.jpg");
Image my_image("640x480", "white");
my_image.modifyImage();
Pixels my_pixel_cache(my_image);
PixelPacket* pixels;
int start_x = 0, start_y = 0, size_x = 640, size_y = 480;
*pixels = Color("black");
*(pixels+200) = Color("green");
my_pixel_cache.sync();
...
But I can't get the pixel data of a.jpg into an array. How do I get the image data of a.jpg into an array so that I can modify it?

You have to initialize your PixelPacket pointer:
PixelPacket *pixels = my_image.getPixels(0, 0, 640, 480);
then you can modify your image pixel by pixel with a nested loop:
int w = 640, h = 480;
for (int y = 0; y != h; ++y)
    for (int x = 0; x != w; ++x)
    {
        pixels[w * y + x].red = 255; // if MAGICKCORE_QUANTUM_DEPTH=8
        pixels[w * y + x].green = 0;
        pixels[w * y + x].blue = 0;
    }
Magick::PixelPacket is a struct that contains red, green and blue members (plus opacity for the fourth channel). Finally, to write the changes to disk:
my_image.syncPixels();
my_image.write("c:\\temp\\output.jpg");
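Putting it all together, here is a minimal sketch of reading a.jpg into your own array, modifying it, and writing it back out. It assumes ImageMagick 6's Magick++ API (where PixelPacket still exists) and the file paths from the question:
#include <Magick++.h>
#include <vector>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(*argv);

    // Read the source image instead of creating a blank white canvas.
    Magick::Image my_image;
    my_image.read("c:\\a.jpg");
    int w = my_image.columns();
    int h = my_image.rows();

    // Get a writable view of the whole pixel cache.
    my_image.modifyImage();
    Magick::PixelPacket* pixels = my_image.getPixels(0, 0, w, h);

    // Copy the pixels into your own array for further processing.
    std::vector<Magick::PixelPacket> buffer(pixels, pixels + w * h);

    // Example modification through the cache view.
    pixels[0] = Magick::Color("black");

    my_image.syncPixels();                  // push the changes back to the cache
    my_image.write("c:\\temp\\output.jpg"); // encode and save
    return 0;
}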

Related

Setting pixel color of 8-bit grayscale image using pointer

I have this code:
QImage grayImage = image.convertToFormat(QImage::Format_Grayscale8);
int size = grayImage.width() * grayImage.height();
QRgb *data = new QRgb[size];
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
QRgb *ptr = data;
QRgb *end = ptr + size;
for (; ptr < end; ++ptr) {
    int gray = qGray(*ptr);
}
delete[] data;
It is based on this: https://stackoverflow.com/a/40740985/8257882
How can I set the color of a pixel using that pointer?
In addition, using qGray() and loading a "bigger" image seem to crash this.
This works:
int width = image.width();
int height = image.height();
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        image.setPixel(x, y, qRgba(0, 0, 0, 255));
    }
}
But it is slow when compared to explicitly manipulating the image data.
Edit
Ok, I have this code now:
for (int y = 0; y < height; ++y) {
    uchar *line = grayImage.scanLine(y);
    for (int x = 0; x < width; ++x) {
        int gray = qGray(line[x]);
        *(line + x) = uchar(gray);
        qInfo() << gray;
    }
}
And it seems to work. However, when I use an image that contains only black and white and print the gray value, black gives me 0 and white gives 39. How can I get the gray value in the range 0-255?
First of all, you are copying too much data in this line:
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
The size of QRgb is 4 bytes, but according to the documentation, the size of a Format_Grayscale8 pixel is only 8 bits, or 1 byte. If you remove sizeof(QRgb) you should be copying the correct number of bytes, assuming all the lines in the bitmap are consecutive (which, according to the documentation, they are not: they are aligned to a minimum of 32 bits, so you would have to account for that in size). The data array should not be of type QRgb[size] but uchar[size]. You can then modify data as you like. Finally, you will probably have to create a new QImage with one of the constructors that accept image bits as uchar, and assign the new image to the old one:
auto newImage = QImage( data, image.width(), image.height(), QImage::Format_Grayscale8, ...);
grayImage = std::move( newImage );
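A minimal sketch of the corrected copy, under the assumptions above (a uchar buffer, with the 32-bit scan-line alignment accounted for via bytesPerLine()):
int size = grayImage.bytesPerLine() * grayImage.height(); // aligned size, not width * height
uchar *data = new uchar[size];
memmove(data, grayImage.constBits(), size);
// ... modify data here ...
QImage newImage(data, grayImage.width(), grayImage.height(),
                grayImage.bytesPerLine(), QImage::Format_Grayscale8);
grayImage = newImage.copy(); // deep copy, so data can be freed afterwards
delete[] data;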
But instead of copying the image data, you could probably just modify grayImage directly by accessing its data through bits(), or even better, through scanLine(), maybe something like this:
int line, column;
auto pLine = grayImage.scanLine(line);
*(pLine + column) = uchar(grayValue);
EDIT:
According to the scanLine documentation, the image data is at least 32-bit aligned. So if your 8-bit grayscale image is 3 pixels wide, a new scan line starts every 4 bytes, and for a 3x3 image the total memory required to hold the pixels is 12 bytes. The following code prints the required size:
#include <QImage>
#include <iostream>

int main() {
    auto image = QImage(3, 3, QImage::Format_Grayscale8);
    std::cout << image.bytesPerLine() * image.height() << "\n";
    return 0;
}
The fill method (setting all gray values to 0xC0) could be implemented like this:
auto image = QImage(3, 3, QImage::Format_Grayscale8);
uchar gray = 0xc0;
for ( int i = 0; i < image.height(); ++i ) {
    auto pLine = image.scanLine( i );
    for ( int j = 0; j < image.width(); ++j )
        *pLine++ = gray;
}
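As for the 0-39 range from the question's edit: in Format_Grayscale8 each byte already is the 0-255 gray level, so there is no need to pass it through qGray(). qGray() expects a packed QRgb, so a raw byte like 0xFF is interpreted as a blue-only pixel, and Qt's formula (r*11 + g*16 + b*5)/32 gives (255*5)/32 = 39 for white. A minimal read-back sketch, reusing grayImage from the question:
for (int y = 0; y < grayImage.height(); ++y) {
    const uchar *pLine = grayImage.constScanLine(y);
    for (int x = 0; x < grayImage.width(); ++x) {
        int gray = pLine[x]; // already 0-255, no qGray() needed
        qInfo() << gray;
    }
}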

Why does QDBMP fail to write 128*128 images?

I am developing a C++ application that reads some bitmaps, works with them, and then saves them as bitmaps. I use the QDBMP library for working with bitmap files, and everything is fine for 512*512 bitmap images, but with 128*128 bitmap files it just writes some striped lines to the output. Here is my code for reading and writing bitmap files:
int readBitmapImage(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b)
{
    BMP* bmp;
    UINT width, height;
    bmp = BMP_ReadFile(file_name);
    BMP_GetDepth(bmp);
    BMP_CHECK_ERROR(stderr, -1);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_GetPixelRGB(bmp, x, y, &r[x*width+y], &g[x*width + y], &b[x*width + y]);
        }
    }
    BMP_CHECK_ERROR(stderr, -2);
    return 0;
}
void writeImageData(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b, int width, int height, int bitDepth)
{
    BMP* bmp = BMP_Create(width, height, bitDepth);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_SetPixelRGB(bmp, x, y, r[x*width + y], g[x*width + y], b[x*width + y]);
        }
    }
    BMP_WriteFile(bmp, file_name);
}
Thanks for your help.
UPDATE1
The source image and the result of saving it were attached as images in the original post; the saved copy shows the striped-line corruption described above.
UPDATE2
The value of bitDepth is 24, and the code block for allocating the memory is:
UCHAR* WimageDataR = (UCHAR*)calloc(128* 128, sizeof(UCHAR));
UCHAR* WimageDataG = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
UCHAR* WimageDataB = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
After a while I finally found out what was wrong: in the BMP_ReadFile() function of QDBMP, when the image is 128*128 the header field ImageDataSize is not read from the file and stays 0 (a value of 0 is actually legal in a BMP header for uncompressed BI_RGB bitmaps, which is presumably why only some files trip this). So I added this block of code to guard against the problem, and everything works fine:
if (bmp->Header.ImageDataSize == 0)
{
    bmp->Header.ImageDataSize = bmp->Header.FileSize - bmp->Header.DataOffset;
}

Image packing using the FreeImage C++ library: the pixel values of all images are not being added

I was trying to pack multiple images into a single image using a bin-packing algorithm. For the part that adds the images into the single image, I tried collecting all the source pixel values and putting them into the empty frame, but this is not working. Are there any suggestions?
Edit: here is the code:
FIBITMAP *out_bmp = FreeImage_Allocate(4096, 4096, 32, 0, 0, 0);
BYTE *out_bits = FreeImage_GetBits(out_bmp);
int out_pitch = FreeImage_GetPitch(out_bmp);
// copy all the images to the final one
for (int i = 0; i < files.size(); i++) {
    string s = "PathToFile" + files[i];
    FIBITMAP* img0 = FreeImage_Load(FreeImage_GetFileType(s.c_str(), 0), s.c_str());
    // make sure the input picture is 32-bits
    if (FreeImage_GetBPP(img0) != 32) {
        FIBITMAP *new_bmp = FreeImage_ConvertTo32Bits(img0);
        FreeImage_Unload(img0);
        img0 = new_bmp;
    }
    int img_pitch = FreeImage_GetPitch(img0);
    BYTE *img_bits = FreeImage_GetBits(img0);
    BYTE *out_bits_ptr = out_bits + out_pitch * FreeImage_GetHeight(img0) + 4 * FreeImage_GetWidth(img0);
    for (int y = 0; y < FreeImage_GetHeight(img0); y += 1) {
        memcpy(out_bits_ptr, img_bits, FreeImage_GetWidth(img0) * 4);
        out_bits_ptr += out_pitch;
        img_bits += img_pitch;
    }
}
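One thing that stands out in the snippet above is that the destination offset is computed from the source image's own width and height rather than from the position assigned by the bin-packing step. A minimal sketch of a per-image blit that takes an explicit destination position (dst_x and dst_y are hypothetical coordinates produced by the packing algorithm):
// Copy a 32-bit source image into the atlas at (dst_x, dst_y), row by row.
void blitInto(FIBITMAP* atlas, FIBITMAP* src, int dst_x, int dst_y) {
    const int out_pitch = FreeImage_GetPitch(atlas);
    const int src_pitch = FreeImage_GetPitch(src);
    const unsigned w = FreeImage_GetWidth(src);
    const unsigned h = FreeImage_GetHeight(src);
    BYTE* dst = FreeImage_GetBits(atlas) + dst_y * out_pitch + dst_x * 4;
    BYTE* s = FreeImage_GetBits(src);
    for (unsigned y = 0; y < h; ++y) {
        memcpy(dst, s, w * 4); // one BGRA scan line
        dst += out_pitch;
        s += src_pitch;
    }
}
Note that FreeImage also provides FreeImage_Paste(dst, src, left, top, alpha), which does essentially this (with optional alpha blending), and that FreeImage stores scan lines bottom-up, so the row offset counts from the bottom of the atlas.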

How to read data from a UTexture2D in C++

I am trying to read the pixel data from a populated UTexture2D in an Unreal Engine C++ project. Before posting the question here, I tried the method described in this link: https://answers.unrealengine.com/questions/25594/accessing-pixel-values-of-texture2d.html. However, it doesn't work for me; all the pixel values I get from the texture are garbage data.
I just want to get the depth values from the SceneCapture2D and a post-processing material that contains a SceneTexture: Depth node. I need the depth values available in C++ so that I can do further processing with OpenCV. In Direct3D 11 a staging texture can be used for CPU reads, but in Unreal Engine I don't know how to create a 'staging texture' the way D3D11 has. Since I can't get correct pixel values with the current method, I suspect I may be trying to access a texture that is not CPU-readable.
Here is my experimental code for reading data back from an RGB UTexture2D.
Initialize the RGB Texture:
VideoTextureColor = UTexture2D::CreateTransient(640, 480, PF_B8G8R8A8);
VideoTextureColor->UpdateResource();
VideoUpdateTextureRegionColor = new FUpdateTextureRegion2D(0, 0, 0, 0, 640, 480);
ColorRegionData = new FUpdateTextureRegionsData;
PixelDepthData.Init(FColor(0, 0, 0, 255), 640 * 480);
// Populate the texture with blue color
for (int i = 0; i < 640; i++) {
    for (int j = 0; j < 480; j++) {
        int idx = j * 640 + i;
        PixelDepthData[idx].B = 255;
        PixelDepthData[idx].G = 0;
        PixelDepthData[idx].R = 0;
        PixelDepthData[idx].A = 255;
    }
}
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
Then I read its values back into the PixelDepthData array (a TArray<FColor>) and update the texture with the values stored in PixelDepthData, i.e. its old values:
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FRealSenseDelegator,
    ARealSenseDelegator*, RealSenseDelegator, this,
    {
        FColor* tmpImageDataPtr = static_cast<FColor*>((RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY));
        for (uint32 j = 0; j < 480; j++) {
            for (uint32 i = 0; i < 640; i++) {
                uint32 idx = j * 640 + i;
                RealSenseDelegator->PixelDepthData[idx] = tmpImageDataPtr[idx];
                RealSenseDelegator->PixelDepthData[idx].A = 255;
            }
        }
        (RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Unlock();
    }
);
All I got is a white color texture instead of a blue color texture in the visualization scene.
Does anyone know how to read the data of the UTexture2D Object?
I figured that out. You have to get the UTexture2D's RHI texture reference first, and then use RHILockTexture2D to read its data, and you have to do it on the render thread. The following code is just an example:
FTexture2DResource* uTex2DRes = (FTexture2DResource*)(RealSenseDelegator->VideoTexturePixelDepth)->Resource;
uint32 destStride = 0; // filled in by RHILockTexture2D with the row pitch in bytes
float* cpuDataPtr = (float*)RHILockTexture2D(
    uTex2DRes->GetTexture2DRHI(),
    0,
    RLM_ReadOnly,
    destStride,
    false);
for (uint32 j = 0; j < 480; j++) {
    for (uint32 i = 0; i < 640; i++) {
        uint32 idx = j * 640 + i;
        // TODO Read the pixel data right here
    }
}
RHIUnlockTexture2D(uTex2DRes->GetTexture2DRHI(), 0, false);
To do this on the render thread, you have to use a macro such as ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER (use this one if you only need to pass one parameter to the render thread).
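Putting the two parts together, a minimal sketch of issuing the lock from the render thread, reusing the question's 640x480 texture and delegator names (the assumption that a row is exactly 640 * sizeof(float) bytes only holds when destStride reports no padding):
ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FReadPixelDepth, // arbitrary command name
    ARealSenseDelegator*, Delegator, this,
    {
        FTexture2DResource* res = (FTexture2DResource*)Delegator->VideoTexturePixelDepth->Resource;
        uint32 destStride = 0;
        float* data = (float*)RHILockTexture2D(res->GetTexture2DRHI(), 0, RLM_ReadOnly, destStride, false);
        // Rows are destStride bytes apart; this sketch assumes no row padding.
        for (uint32 j = 0; j < 480; j++) {
            for (uint32 i = 0; i < 640; i++) {
                float depth = data[j * 640 + i]; // consume the value as needed
            }
        }
        RHIUnlockTexture2D(res->GetTexture2DRHI(), 0, false);
    }
);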

How do I pass an OpenCV Mat into a C++ Tensorflow graph?

In Tensorflow C++ I can load an image file into the graph using
tensorflow::Node* file_reader = tensorflow::ops::ReadFile(tensorflow::ops::Const(IMAGE_FILE_NAME, b.opts()),b.opts().WithName(input_name));
tensorflow::Node* image_reader = tensorflow::ops::DecodePng(file_reader, b.opts().WithAttr("channels", 3).WithName("png_reader"));
tensorflow::Node* float_caster = tensorflow::ops::Cast(image_reader, tensorflow::DT_FLOAT, b.opts().WithName("float_caster"));
tensorflow::Node* dims_expander = tensorflow::ops::ExpandDims(float_caster, tensorflow::ops::Const(0, b.opts()), b.opts());
tensorflow::Node* resized = tensorflow::ops::ResizeBilinear(dims_expander, tensorflow::ops::Const({input_height, input_width},b.opts().WithName("size")),b.opts());
For an embedded application I would like to instead pass an OpenCV Mat into this graph.
How would I convert the Mat to a tensor that could be used as input to tensorflow::ops::Cast or tensorflow::ops::ExpandDims?
It's not directly from a CvMat, but you can see an example of how to initialize a Tensor from an in-memory array in the TensorFlow Android example:
https://github.com/tensorflow/tensorflow/blob/0.6.0/tensorflow/examples/android/jni/tensorflow_jni.cc#L173
You would start off by creating a new tensorflow::Tensor object, with something like this (all code untested):
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
tensorflow::TensorShape({1, height, width, depth}));
This creates a Tensor object with float values, with a batch size of 1, a size of width x height, and depth channels. For example, a 128-wide by 64-high image with 3 channels would pass in a shape of {1, 64, 128, 3}. The batch size is just used when you need to pass in multiple images in a single call, and for simple uses you can leave it as 1.
Then you would get the underlying array behind the tensor using a line like this:
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
The input_tensor_mapped object is an interface to the data in your newly-created tensor, and you can then copy your own data into it. Here I'm assuming you've set source_data as a pointer to your source data, for example:
const float* source_data = some_structure.imageData;
You can then loop through your data and copy it over:
for (int y = 0; y < height; ++y) {
    const float* source_row = source_data + (y * width * depth);
    for (int x = 0; x < width; ++x) {
        const float* source_pixel = source_row + (x * depth);
        for (int c = 0; c < depth; ++c) {
            const float* source_value = source_pixel + c;
            input_tensor_mapped(0, y, x, c) = *source_value;
        }
    }
}
There are obvious opportunities to optimize this naive approach, and I don't have sample code on hand to show how to deal with the OpenCV side of getting the source data, but hopefully this is helpful to get you started.
Here is a complete example to read and feed:
Mat image;
image = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
cv::resize(image, image, cv::Size(input_height, input_width), 0, 0, CV_INTER_CUBIC);
int depth = 3;
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
                                tensorflow::TensorShape({1, image.rows, image.cols, depth}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>(); // map the tensor before filling it
for (int y = 0; y < image.rows; y++) {
    for (int x = 0; x < image.cols; x++) {
        Vec3b pixel = image.at<Vec3b>(y, x);
        input_tensor_mapped(0, y, x, 0) = pixel.val[2]; //R
        input_tensor_mapped(0, y, x, 1) = pixel.val[1]; //G
        input_tensor_mapped(0, y, x, 2) = pixel.val[0]; //B
    }
}
auto result = Sub(root.WithOpName("subtract_mean"), input_tensor, {input_mean});
ClientSession session(root);
TF_CHECK_OK(session.Run({result}, out_tensors));
I tried to run the Inception model on an OpenCV Mat, and the following code worked for me: https://gist.github.com/kyrs/9adf86366e9e4f04addb. There are still some issues with the integration of OpenCV and TensorFlow, though. The code worked without any issue for .png files but failed to load .jpg and .jpeg. You can follow this for more info: https://github.com/tensorflow/tensorflow/issues/1924
Tensor convertMatToTensor(Mat &input)
{
    int height = input.rows;
    int width = input.cols;
    int depth = input.channels();
    Tensor imgTensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({height, width, depth}));
    // Wrap the tensor's buffer in a Mat header, then convert directly into it.
    float* p = imgTensor.flat<float>().data();
    Mat outputImg(height, width, CV_32FC3, p);
    input.convertTo(outputImg, CV_32FC3);
    return imgTensor;
}
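For reference, usage might look like the sketch below. Note that the returned tensor is rank 3 ({height, width, depth}); if the graph expects a batch dimension, as in the answers above, you would build the shape as {1, height, width, depth} instead:
Mat image = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
Tensor input = convertMatToTensor(image); // note: OpenCV's BGR channel order is kept as-is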