Why does QDBMP fail to write 128*128 images? - c++

I am developing a C++ application that reads bitmaps, works with them, and then saves them back as bitmaps. I use the QDBMP library for working with bitmap files, and everything is fine for 512*512 bitmap images, but with 128*128 bitmap files it just writes some striped lines to the output. Here is my code for reading and writing bitmap files:
int readBitmapImage(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b)
{
    BMP* bmp;
    UINT width, height;
    bmp = BMP_ReadFile(file_name);
    BMP_GetDepth(bmp);
    BMP_CHECK_ERROR(stderr, -1);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_GetPixelRGB(bmp, x, y, &r[x*width + y], &g[x*width + y], &b[x*width + y]);
        }
    }
    BMP_CHECK_ERROR(stderr, -2);
    return 0;
}
void writeImageData(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b, int width, int height, int bitDepth)
{
    BMP* bmp = BMP_Create(width, height, bitDepth);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_SetPixelRGB(bmp, x, y, r[x*width + y], g[x*width + y], b[x*width + y]);
        }
    }
    BMP_WriteFile(bmp, file_name);
}
Thanks for your help.
UPDATE1
The source image is :
The result of saving the source image is:
UPDATE2
The value of bitDepth is 24, and the code block for allocating memory is:
UCHAR* WimageDataR = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
UCHAR* WimageDataG = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
UCHAR* WimageDataB = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));

After a while I finally found out what was wrong. In the BMP_ReadFile() function of QDBMP, when the image has a size of 128*128, the header field ImageDataSize is not read from the file and stays 0. So I added this block of code to it to prevent the problem, and everything is fine now:
if (bmp->Header.ImageDataSize == 0)
{
    bmp->Header.ImageDataSize = bmp->Header.FileSize - bmp->Header.DataOffset;
}
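For reference, the missing value can also be computed from the image geometry alone: each row of an uncompressed BMP is padded to a multiple of 4 bytes. A minimal standalone sketch of that arithmetic (my own helper, independent of QDBMP):

```cpp
#include <cstddef>

// Pixel-data size of an uncompressed BMP: each row of
// width * bitsPerPixel bits is padded up to a multiple of 4 bytes.
std::size_t bmpImageDataSize(std::size_t width, std::size_t height,
                             std::size_t bitsPerPixel) {
    std::size_t rowBytes = (width * bitsPerPixel + 7) / 8;    // unpadded row
    std::size_t paddedRow = (rowBytes + 3) & ~std::size_t(3); // round up to 4
    return paddedRow * height;
}
```

For a 128*128 image at 24 bpp this gives 128 * 3 = 384 bytes per row (already a multiple of 4), i.e. 49152 bytes of pixel data, which is what FileSize - DataOffset should also yield for a plain headers-plus-pixels file.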

Related

Getting PixelData from CGImageRef returns wrong value

I'm trying to get the pixel values from a CGImageRef, but somehow it always returns the wrong values.
My test image is a completely red picture, just so I can test it more easily. It's this: https://i.imgur.com/ZfLtzsl.png
I'm getting the CGImageRef with the following code:
CGImageRef UICreateScreenImage();
If I add code to save the picture to the photo album, it correctly saves a completely red screen. So the image ref should be correct.
IOSurfaceRef ioSurfaceRef = (__bridge IOSurfaceRef)([UIWindow performSelector:@selector(createScreenIOSurface)]);
CGImageRef cgImageRef = UICreateCGImageFromIOSurface(ioSurfaceRef);
UIImage *uiImage = [UIImage imageWithCGImage:cgImageRef];
UIImageWriteToSavedPhotosAlbum(uiImage, nil, nil, nil);
So, I found code to get the pixel buffer from an image ref. I'm using the following code:
size_t width = CGImageGetWidth(cgImageRef);
size_t height = CGImageGetHeight(cgImageRef);
size_t bpr = CGImageGetBytesPerRow(cgImageRef);
size_t bpp = CGImageGetBitsPerPixel(cgImageRef);
size_t bpc = CGImageGetBitsPerComponent(cgImageRef);
size_t bytes_per_pixel = bpp / bpc;
CGDataProviderRef provider = CGImageGetDataProvider(cgImageRef);
CFDataRef rawData = CGDataProviderCopyData(provider);
UInt8 * bytes = (UInt8 *) CFDataGetBytePtr(rawData);
This gives me the width, the height, and the buffer.
Now I simply loop through it and print (as a test) the pixels at [100, 100] and [100, 101]; both should be red (FF0000FF):
struct SPixel {
    int r = 0;
    int g = 0;
    int b = 0;
    int a = 0;
};
for(size_t y = 0; y < height; y++) {
    for(size_t x = 0; x < width; x++) {
        const uint8_t* pixel = &bytes[y * bpr + x * bytes_per_pixel];
        SPixel Pixel;
        Pixel.a = pixel[0];
        Pixel.r = pixel[1];
        Pixel.g = pixel[2];
        Pixel.b = pixel[3];
        if(y == 100 && x == 100){
            Log::Color("red", "Pixel[100, 100]: %d, %d, %d, %d", Pixel.r, Pixel.g, Pixel.b, Pixel.a);
        }
        if(y == 100 && x == 101){
            Log::Color("red", "Pixel[100, 101]: %d, %d, %d, %d", Pixel.r, Pixel.g, Pixel.b, Pixel.a);
        }
    }
}
And now, if I let that code run while my completely red screenshot is open fullscreen, the pixels that get printed are the following:
Pixel[100, 100]: 147, 56, 26, 255
Pixel[100, 101]: 20, 255, 255, 144
They are completely off from red, and are not even the same, even though they are only one pixel apart.
What am I doing wrong?

Setting pixel color of 8-bit grayscale image using pointer

I have this code:
QImage grayImage = image.convertToFormat(QImage::Format_Grayscale8);
int size = grayImage.width() * grayImage.height();
QRgb *data = new QRgb[size];
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
QRgb *ptr = data;
QRgb *end = ptr + size;
for (; ptr < end; ++ptr) {
    int gray = qGray(*ptr);
}
delete[] data;
It is based on this: https://stackoverflow.com/a/40740985/8257882
How can I set the color of a pixel using that pointer?
In addition, using qGray() and loading a "bigger" image seems to crash this.
This works:
int width = image.width();
int height = image.height();
for (int y = 0; y < height; ++y) {
    for (int x = 0; x < width; ++x) {
        image.setPixel(x, y, qRgba(0, 0, 0, 255));
    }
}
But it is slow when compared to explicitly manipulating the image data.
Edit
Ok, I have this code now:
for (int y = 0; y < height; ++y) {
    uchar *line = grayImage.scanLine(y);
    for (int x = 0; x < width; ++x) {
        int gray = qGray(line[x]);
        *(line + x) = uchar(gray);
        qInfo() << gray;
    }
}
And it seems to work. However, when I use an image that has only black and white colors and print the gray value, black gives me 0 and white gives 39. How can I get the gray value in the range 0-255?
First of all, you are copying too much data in this line:
memmove(data, grayImage.constBits(), size * sizeof(QRgb));
The size of QRgb is 4 bytes, but according to the documentation, the size of a Format_Grayscale8 pixel is only 8 bits, or 1 byte. If you remove sizeof(QRgb), you should be copying the correct number of bytes, assuming all the lines in the bitmap are consecutive (which, according to the documentation, they are not -- they are aligned to at minimum 32 bits, so you would have to account for that in size). The array data should not be of type QRgb[size] but uchar[size]. You can then modify data as you like. Finally, you will probably have to create a new QImage with one of the constructors that accept image bits as uchar, and assign the new image to the old one:
auto newImage = QImage( data, image.width(), image.height(), QImage::Format_Grayscale8, ...);
grayImage = std::move( newImage );
But instead of copying image data, you could probably just modify grayImage directly by accessing its data through bits(), or even better, through scanLine(), maybe something like this:
int line, column;
auto pLine = grayImage.scanLine(line);
*(pLine + column) = uchar(grayValue);
EDIT:
According to the scanLine documentation, the image data is at least 32-bit aligned. So if your 8-bit grayscale image is 3 pixels wide, a new scan line will start every 4 bytes. If you have a 3x3 image, the total amount of memory required to hold the image pixels will be 12 bytes. The following code prints the required memory size:
int main() {
    auto image = QImage(3, 3, QImage::Format_Grayscale8);
    std::cout << image.bytesPerLine() * image.height() << "\n";
    return 0;
}
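The 12 bytes printed above follow from the 32-bit scan-line alignment. As a sketch of just that arithmetic (my own helper, not a Qt API):

```cpp
#include <cstddef>

// Bytes per scan line of an 8-bit grayscale image when lines are
// aligned to 32 bits (4 bytes), the minimum QImage guarantees.
std::size_t alignedBytesPerLine(std::size_t widthPixels) {
    return (widthPixels + 3) & ~std::size_t(3); // round up to a multiple of 4
}
```

A 3-pixel line therefore occupies 4 bytes, and a 3x3 image needs 4 * 3 = 12 bytes, matching the bytesPerLine() * height() output above.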
The fill method (setting all gray values to 0xC0) could be implemented like this:
auto image = QImage(3, 3, QImage::Format_Grayscale8);
uchar gray = 0xc0;
for ( int i = 0; i < image.height(); ++i ) {
    auto pLine = image.scanLine( i );
    for ( int j = 0; j < image.width(); ++j )
        *pLine++ = gray;
}

Freetype renders crap in bitmap

I'm trying to create a monochrome glyph atlas but have encountered a problem: FreeType renders 'crap' into the glyph's bitmap. I blame FreeType because some of the glyphs are still rendered correctly.
The resulting texture atlas:
Why could this be, and how can I fix it?
However, I could still be wrong, so here is the bitmap processing code:
static std::vector<unsigned char> generateBitmap(FT_Face &face, unsigned int glyph, size_t *width, size_t *height) {
    FT_Load_Glyph(face, FT_Get_Char_Index(face, glyph), FT_LOAD_RENDER | FT_LOAD_MONOCHROME);
    FT_Bitmap bitmap;
    FT_Bitmap_New(&bitmap);
    FT_Bitmap_Convert(ftLib, &face->glyph->bitmap, &bitmap, 1);
    *width = bitmap.width;
    *height = bitmap.rows;
    std::vector<unsigned char> result(bitmap.width * bitmap.rows);
    for (size_t y = 0; y < bitmap.rows; ++y)
    {
        for (size_t x = 0; x < bitmap.width; ++x)
        {
            result[(bitmap.width * y) + x] = bitmap.buffer[(bitmap.width * y) + x];
        }
    }
    FT_Bitmap_Done(ftLib, &bitmap);
    return result;
}
And the code for putting it on the main buffer:
static void putOnBuffer(std::vector<unsigned char> &buffer, std::vector<unsigned char> &bitmap, size_t height, size_t width) {
    int r = 0;
    while (r < height) {
        int w = 0;
        while (w < width) {
            // assume buffer is large enough
            size_t mainBufPos = ((currentBufferPositionY + r) * imageWidth) + (currentBufferPositionX + w);
            size_t bitmapBufPos = (r * width) + w;
            buffer[mainBufPos] = clamp(int(bitmap[bitmapBufPos] * 0x100), 0xff);
            w++;
        }
        r++;
    }
}
From the documentation:
Convert a bitmap object with depth 1bpp, 2bpp, 4bpp, 8bpp or 32bpp to a bitmap object with depth 8bpp, making the number of used bytes [per] line (a.k.a. the ‘pitch’) a multiple of ‘alignment’.
In your code, you pass 1 as the value of the alignment parameter in the call to FT_Bitmap_Convert. In monochrome, one byte will be eight pixels, so the horizontal render loop needs to enforce a multiple of eight for the width.
Reference: https://www.freetype.org/freetype2/docs/reference/ft2-bitmap_handling.html
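Related to the alignment point above: whenever the converted bitmap's pitch (bytes per row) differs from its width, copying with width as the row stride shears the glyph into exactly this kind of garbage. A minimal sketch of a pitch-aware copy, with plain arrays standing in for the FT_Bitmap fields (an illustration of the indexing, not FreeType API code):

```cpp
#include <cstddef>
#include <vector>

// Copy an 8bpp bitmap whose rows are 'pitch' bytes apart into a tightly
// packed width*rows vector. 'pitch' may exceed 'width' when rows are padded.
std::vector<unsigned char> copyTight(const unsigned char *buffer,
                                     std::size_t width, std::size_t rows,
                                     std::size_t pitch) {
    std::vector<unsigned char> result(width * rows);
    for (std::size_t y = 0; y < rows; ++y)
        for (std::size_t x = 0; x < width; ++x)
            result[y * width + x] = buffer[y * pitch + x]; // stride by pitch, not width
    return result;
}
```

In the generateBitmap code above, that would mean indexing bitmap.buffer with bitmap.pitch as the row stride while writing into result with bitmap.width.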

How do I pass an OpenCV Mat into a C++ Tensorflow graph?

In TensorFlow C++ I can load an image file into the graph using:
tensorflow::Node* file_reader = tensorflow::ops::ReadFile(tensorflow::ops::Const(IMAGE_FILE_NAME, b.opts()),b.opts().WithName(input_name));
tensorflow::Node* image_reader = tensorflow::ops::DecodePng(file_reader, b.opts().WithAttr("channels", 3).WithName("png_reader"));
tensorflow::Node* float_caster = tensorflow::ops::Cast(image_reader, tensorflow::DT_FLOAT, b.opts().WithName("float_caster"));
tensorflow::Node* dims_expander = tensorflow::ops::ExpandDims(float_caster, tensorflow::ops::Const(0, b.opts()), b.opts());
tensorflow::Node* resized = tensorflow::ops::ResizeBilinear(dims_expander, tensorflow::ops::Const({input_height, input_width},b.opts().WithName("size")),b.opts());
For an embedded application I would like to instead pass an OpenCV Mat into this graph.
How would I convert the Mat to a tensor that could be used as input to tensorflow::ops::Cast or tensorflow::ops::ExpandDims?
It's not directly from an OpenCV Mat, but you can see an example of how to initialize a Tensor from an in-memory array in the TensorFlow Android example:
https://github.com/tensorflow/tensorflow/blob/0.6.0/tensorflow/examples/android/jni/tensorflow_jni.cc#L173
You would start off by creating a new tensorflow::Tensor object, with something like this (all code untested):
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
tensorflow::TensorShape({1, height, width, depth}));
This creates a Tensor object with float values, with a batch size of 1, a size of width x height, and depth channels. For example, a 128-wide by 64-high image with 3 channels would pass in a shape of {1, 64, 128, 3}. The batch size is only used when you need to pass in multiple images in a single call; for simple uses you can leave it as 1.
Then you would get the underlying array behind the tensor using a line like this:
auto input_tensor_mapped = input_tensor.tensor<float, 4>();
The input_tensor_mapped object is an interface to the data in your newly created tensor, and you can then copy your own data into it. Here I'm assuming you've set source_data as a pointer to your source data, for example:
const float* source_data = some_structure.imageData;
You can then loop through your data and copy it over:
for (int y = 0; y < height; ++y) {
    const float* source_row = source_data + (y * width * depth);
    for (int x = 0; x < width; ++x) {
        const float* source_pixel = source_row + (x * depth);
        for (int c = 0; c < depth; ++c) {
            const float* source_value = source_pixel + c;
            input_tensor_mapped(0, y, x, c) = *source_value;
        }
    }
}
There are obvious opportunities to optimize this naive approach, and I don't have sample code on hand showing how to handle the OpenCV side of getting the source data, but hopefully this helps you get started.
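For reference, an NHWC tensor like this is laid out row-major, so input_tensor_mapped(0, y, x, c) addresses flat offset (y * width + x) * depth + c. A small standalone sketch of that flattening (my own helper, no TensorFlow involved):

```cpp
#include <cstddef>

// Flat offset of element (batch 0, y, x, c) in a row-major
// {1, height, width, depth} (NHWC) buffer.
std::size_t nhwcOffset(std::size_t y, std::size_t x, std::size_t c,
                       std::size_t width, std::size_t depth) {
    return (y * width + x) * depth + c;
}
```

For the {1, 64, 128, 3} example above, pixel (y=1, x=0, c=2) lands at offset 128 * 3 + 2 = 386, i.e. one full row of 384 floats plus one pixel's blue channel.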
Here is a complete example that reads and feeds an image:
Mat image;
image = imread("flowers.jpg", CV_LOAD_IMAGE_COLOR);
cv::resize(image, image, cv::Size(input_height, input_width), 0, 0, CV_INTER_CUBIC);
int depth = 3;
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
                                tensorflow::TensorShape({1, image.rows, image.cols, depth}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>(); // map the tensor before filling it
for (int y = 0; y < image.rows; y++) {
    for (int x = 0; x < image.cols; x++) {
        Vec3b pixel = image.at<Vec3b>(y, x);
        input_tensor_mapped(0, y, x, 0) = pixel.val[2]; // R
        input_tensor_mapped(0, y, x, 1) = pixel.val[1]; // G
        input_tensor_mapped(0, y, x, 2) = pixel.val[0]; // B
    }
}
auto result = Sub(root.WithOpName("subtract_mean"), input_tensor, {input_mean});
ClientSession session(root);
TF_CHECK_OK(session.Run({result}, out_tensors));
auto result = Sub(root.WithOpName("subtract_mean"), input_tensor, {input_mean});
ClientSession session(root);
TF_CHECK_OK(session.Run({result}, out_tensors));
I tried to run the Inception model on an OpenCV Mat, and the following code worked for me: https://gist.github.com/kyrs/9adf86366e9e4f04addb. There are still some issues with the integration of OpenCV and TensorFlow, though: the code worked without any issue for .png files but failed to load .jpg and .jpeg. You can follow this for more info: https://github.com/tensorflow/tensorflow/issues/1924
Tensor convertMatToTensor(Mat &input)
{
    int height = input.rows;
    int width = input.cols;
    int depth = input.channels();
    Tensor imgTensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({height, width, depth}));
    float* p = imgTensor.flat<float>().data();
    Mat outputImg(height, width, CV_32FC3, p);
    input.convertTo(outputImg, CV_32FC3);
    return imgTensor;
}

How do I get image data into an array with ImageMagick?

I don't know where to start.
I want to read an image with ImageMagick into an array.
At this point, there is no error:
My_image.read("c:\\a.jpg");
I want to put what I have already read of the image data into an array.
And I want to write it to a file using the ImageMagick library.
Here is my code:
...
master.read("c:\\a.jpg");
Image my_image("640x480", "white");
my_image.modifyImage();
Pixels my_pixel_cache(my_image);
PixelPacket* pixels;
int start_x = 0, start_y = 0, size_x = 640, size_y = 480;
*pixels = Color("black");
*(pixels+200) = Color("green");
my_pixel_cache.sync();
...
But I can't get the array of a.jpg. How can I get the a.jpg image data into an array to modify it?
You have to initialize your PixelPacket:
PixelPacket *pixels = my_image.getPixels(0, 0, 640, 480);
then you can modify your image pixel by pixel with a nested loop:
int w = 640, h = 480;
for (int y = 0; y != h; ++y)
    for (int x = 0; x != w; ++x)
    {
        pixels[w * y + x].red = 255; // if MAGICKCORE_QUANTUM_DEPTH=8
        pixels[w * y + x].green = 0;
        pixels[w * y + x].blue = 0;
    }
Magick::PixelPacket is a struct that contains red, green, and blue members (plus a fourth member for the opacity channel). Finally, to write the changes to disk:
my_image.syncPixels();
my_image.write("c:\\temp\\output.jpg");