FreeType renders garbage in bitmap - C++

I'm trying to create a monochrome glyph atlas but have encountered a problem: FreeType renders garbage into the glyph's bitmap. I blame FreeType because some of the glyphs are still rendered correctly.
The resulting texture atlas:
Why could this be, and how can I fix it?
However, I could still be wrong, so here is the bitmap-processing code:
static std::vector<unsigned char> generateBitmap(FT_Face &face, unsigned int glyph, size_t *width, size_t *height) {
    FT_Load_Glyph(face, FT_Get_Char_Index(face, glyph), FT_LOAD_RENDER | FT_LOAD_MONOCHROME);

    FT_Bitmap bitmap;
    FT_Bitmap_New(&bitmap);
    FT_Bitmap_Convert(ftLib, &face->glyph->bitmap, &bitmap, 1);

    *width = bitmap.width;
    *height = bitmap.rows;

    std::vector<unsigned char> result(bitmap.width * bitmap.rows);
    for (size_t y = 0; y < bitmap.rows; ++y)
    {
        for (size_t x = 0; x < bitmap.width; ++x)
        {
            result[(bitmap.width * y) + x] = bitmap.buffer[(bitmap.width * y) + x];
        }
    }

    FT_Bitmap_Done(ftLib, &bitmap);
    return result;
}
And the code for putting it into the main buffer:
static void putOnBuffer(std::vector<unsigned char> &buffer, std::vector<unsigned char> &bitmap, size_t height, size_t width) {
    int r = 0;
    while (r < height) {
        int w = 0;
        while (w < width) {
            // assume the buffer is large enough
            size_t mainBufPos = ((currentBufferPositionY + r) * imageWidth) + (currentBufferPositionX + w);
            size_t bitmapBufPos = (r * width) + w;
            buffer[mainBufPos] = clamp(int(bitmap[bitmapBufPos] * 0x100), 0xff);
            w++;
        }
        r++;
    }
}

From the documentation:
Convert a bitmap object with depth 1bpp, 2bpp, 4bpp, 8bpp or 32bpp to a bitmap object with depth 8bpp, making the number of used bytes [per] line (a.k.a. the ‘pitch’) a multiple of ‘alignment’.
In your code, you pass 1 as the value of the alignment parameter in the call to FT_Bitmap_Convert. In monochrome, one byte will be eight pixels, so the horizontal render loop needs to enforce a multiple of eight for the width.
Reference: https://www.freetype.org/freetype2/docs/reference/ft2-bitmap_handling.html
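One defensive variant of the question's generateBitmap (a sketch, not the definitive fix) is to index the converted bitmap with bitmap.pitch, the actual number of bytes per row, instead of bitmap.width, so any padding introduced by the alignment parameter is skipped correctly. ftLib is the same FT_Library handle used in the question; error handling is omitted for brevity, and the function name is mine.

static std::vector<unsigned char> generateBitmapSafe(FT_Face &face, FT_ULong charCode, size_t *width, size_t *height) {
    FT_Load_Glyph(face, FT_Get_Char_Index(face, charCode), FT_LOAD_RENDER | FT_LOAD_MONOCHROME);

    FT_Bitmap bitmap;
    FT_Bitmap_New(&bitmap);
    FT_Bitmap_Convert(ftLib, &face->glyph->bitmap, &bitmap, 1);

    *width = bitmap.width;
    *height = bitmap.rows;

    std::vector<unsigned char> result(bitmap.width * bitmap.rows);
    for (size_t y = 0; y < bitmap.rows; ++y)
    {
        for (size_t x = 0; x < bitmap.width; ++x)
        {
            // pitch is the real byte distance between rows; it may be larger
            // than width if the alignment padded each row.
            result[(bitmap.width * y) + x] = bitmap.buffer[(bitmap.pitch * y) + x];
        }
    }

    FT_Bitmap_Done(ftLib, &bitmap);
    return result;
}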

Related

Image subtraction with CUDA and textures

My goal is to use C++ with CUDA to subtract a dark frame from a raw image. I want to use textures for acceleration. The input images are cv::Mat with type CV_8UC4 (I use the pointer to the data of the cv::Mat). This is the kernel I came up with, but I have no idea how to eventually subtract the textures from each other:
__global__ void DarkFrameSubtractionKernel(unsigned char* outputImage, size_t pitchOutputImage,
    cudaTextureObject_t inputImage, cudaTextureObject_t darkImage, int width, int height)
{
    const int x = blockIdx.x * blockDim.x + threadIdx.x;
    const int y = blockDim.y * blockIdx.y + threadIdx.y;
    const float tx = (x + 0.5f);
    const float ty = (y + 0.5f);
    if (x >= width || y >= height) return;

    uchar4 inputImageTemp = tex2D<uchar4>(inputImage, tx, ty);
    uchar4 darkImageTemp = tex2D<uchar4>(darkImage, tx, ty);
    outputImage[y * pitchOutputImage + x] = inputImageTemp - darkImageTemp; // this line will throw an error
}
This is the function that calls the kernel (you can see that I create the textures from unsigned char):
void subtractDarkImage(unsigned char* inputImage, size_t pitchInputImage, unsigned char* outputImage,
    size_t pitchOutputImage, unsigned char* darkImage, size_t pitchDarkImage, int width, int height,
    cudaStream_t stream)
{
    cudaResourceDesc resDesc = {};
    resDesc.resType = cudaResourceTypePitch2D;
    resDesc.res.pitch2D.width = width;
    resDesc.res.pitch2D.height = height;
    resDesc.res.pitch2D.devPtr = inputImage;
    resDesc.res.pitch2D.pitchInBytes = pitchInputImage;
    resDesc.res.pitch2D.desc = cudaCreateChannelDesc(8, 8, 8, 8, cudaChannelFormatKindUnsigned);

    cudaTextureDesc texDesc = {};
    texDesc.readMode = cudaReadModeElementType;
    texDesc.addressMode[0] = cudaAddressModeBorder;
    texDesc.addressMode[1] = cudaAddressModeBorder;

    cudaTextureObject_t imageInputTex, imageDarkTex;
    CUDA_CHECK(cudaCreateTextureObject(&imageInputTex, &resDesc, &texDesc, 0));

    resDesc.res.pitch2D.devPtr = darkImage;
    resDesc.res.pitch2D.pitchInBytes = pitchDarkImage;
    CUDA_CHECK(cudaCreateTextureObject(&imageDarkTex, &resDesc, &texDesc, 0));

    dim3 block(32, 8);
    dim3 grid = paddedGrid(block.x, block.y, width, height);

    DarkImageSubtractionKernel<<<grid, block, 0, stream>>>(reinterpret_cast<uchar4*>(outputImage), pitchOutputImage / sizeof(uchar4),
        imageInputTex, imageDarkTex, width, height);

    CUDA_CHECK(cudaDestroyTextureObject(imageInputTex));
    CUDA_CHECK(cudaDestroyTextureObject(imageDarkTex));
}
The code does not compile, as I cannot subtract a uchar4 from another one (in the kernel). Is there an easy way of doing the subtraction here?
Help is very much appreciated.
Is there an easy way of subtraction here?
There are no arithmetic operators defined for CUDA built-in vector types. If you replace
outputImage[y * pitchOutputImage + x] = inputImageTemp - darkImageTemp;
with
uchar4 val;
val.x = inputImageTemp.x - darkImageTemp.x;
val.y = inputImageTemp.y - darkImageTemp.y;
val.z = inputImageTemp.z - darkImageTemp.z;
val.w = inputImageTemp.w - darkImageTemp.w;
outputImage[y * pitchOutputImage + x] = val;
things will work. If this offends you, I suggest writing a small library of helper functions to hide the mess.
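If you take the helper route, here is a minimal sketch of one such helper. The clamping at zero is an assumption on my part; it is usually what you want for dark-frame subtraction, whereas plain unsigned subtraction would wrap around.

// Component-wise uchar4 subtraction, clamped at zero so a dark pixel that is
// brighter than the input does not wrap around to a large value.
__host__ __device__ inline unsigned char sub_clamped(unsigned char a, unsigned char b)
{
    return a > b ? (unsigned char)(a - b) : 0;
}

__host__ __device__ inline uchar4 operator-(uchar4 a, uchar4 b)
{
    return make_uchar4(sub_clamped(a.x, b.x), sub_clamped(a.y, b.y),
                       sub_clamped(a.z, b.z), sub_clamped(a.w, b.w));
}

With these definitions visible above the kernel, the original inputImageTemp - darkImageTemp expression compiles as written.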

Optimize image buffer

Here is code that decodes a WebM frame and puts it in a buffer:
image->planes[p] = pointer to the top left pixel
image->linesize[p] = stride between rows
framesArray = vector of unsigned char*
while (videoDec->getImage(*image) == VPXDecoder::NO_ERROR)
{
    const int w = image->getWidth(p);
    const int h = image->getHeight(p);
    int offset = 0;
    for (int y = 0; y < h; y++)
    {
        // fwrite(image->planes[p] + offset, 1, w, pFile);
        for (int i = 0; i < w; i++) {
            framesArray.at(count)[i + (w * y)] = *(image->planes[p] + offset + i);
        }
        offset += image->linesize[p];
    }
}
How can I write into the buffer line by line instead of pixel by pixel, or otherwise optimize writing the frame into the buffer?
If the source image and destination buffer share the same width, height and bits per pixel, and the source rows are not padded (i.e. image->linesize[p] equals the width), you can use std::copy to copy the whole plane in one call:
std::copy(image->planes[p],
          image->planes[p] + image->getHeight(p) * image->linesize[p],
          framesArray.at(count));
If it is the same bits per pixel but a different width and height (or the rows are padded), you can use std::copy line by line, as in the sketch below.
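A row-by-row sketch of that second case, assuming (as the question's code does) that framesArray.at(count) points to a tightly packed buffer of w * h bytes:

// Copy one plane row by row: source rows are image->linesize[p] bytes apart
// (possibly padded), destination rows are exactly w bytes apart.
const int w = image->getWidth(p);
const int h = image->getHeight(p);
unsigned char *dst = framesArray.at(count);
for (int y = 0; y < h; y++)
{
    const unsigned char *srcRow = image->planes[p] + y * image->linesize[p];
    std::copy(srcRow, srcRow + w, dst + y * w);
}

This replaces the inner pixel loop with one std::copy per row, which compilers typically lower to a memcpy.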

What is the highest bit depth greyscale image I can export from FreeImage?

As context, I'm building a topographic program which needs relatively extreme detail. I do not expect the files to be small, and they do not formally need to be viewed on a monitor; they just need to have very high resolution.
I know that most image formats are limited to 8 bpp, on account of the standard limits on both monitors (at a reasonable price) and on human perception. However, 2⁸ is just 256 possible values, which induces plateauing artifacts in a reconstructed displacement. 2¹⁶ may be close enough at 65,536 possible values, which I have achieved.
I'm using FreeImage and DLang to construct the data, currently on a Linux Mint machine.
However, when I went on to 2³², software support seemed to fade on me. I tried a TIFF of this form and nothing seemed to be able to interpret it, either showing a completely (or mostly) transparent image (remembering that I didn't expect any monitor to really support 2³² shades of a channel) or complaining about being unable to decode the RGB data. I imagine that it's because it was assumed to be an RGB or RGBA image.
FreeImage is reasonably well documented for most purposes, but I'm now wondering, what is the highest-precision single-channel format I can export, and how would I do it? Can anyone provide an example? Am I really limited, in any typical and not-home-rolled image format, to 16-bit? I know that's high enough for, say, medical imaging, but I'm sure I'm not the first person to try to aim higher and we science-types can be pretty ambitious about our precision-level…
Did I make a glaring mistake in my code? Is there something else I should try instead for this kind of precision?
Here's my code.
The 16-bit TIFF that worked
void writeGrayscaleMonochromeBitmap(const double width, const double height) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT16, cast(int)width, cast(int)height);
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        for(int x = 0; x < width; x++) {
            ushort v = cast(ushort)((x * 0xFFFF)/width);
            ubyte[2] bytes = nativeToLittleEndian(cast(ushort)(x/width * 0xFFFF));
            scanline[x * ushort.sizeof + 0] = bytes[0];
            scanline[x * ushort.sizeof + 1] = bytes[1];
        }
    }
    FreeImage_Save(FIF_TIFF, bitmap, "test.tif", TIFF_DEFAULT);
    FreeImage_Unload(bitmap);
}
The 32-bit TIFF that didn't really work
void writeGrayscaleMonochromeBitmap32(const double width, const double height) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT32, cast(int)width, cast(int)height);
    writeln(width, ", ", height);
    writeln("Width: ", FreeImage_GetWidth(bitmap));
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        writeln(y, ": ", scanline);
        for(int x = 0; x < width; x++) {
            //writeln(x, " < ", width);
            uint v = cast(uint)((x/width) * 0xFFFFFFFF);
            writeln("V: ", v);
            ubyte[4] bytes = nativeToLittleEndian(v);
            scanline[x * uint.sizeof + 0] = bytes[0];
            scanline[x * uint.sizeof + 1] = bytes[1];
            scanline[x * uint.sizeof + 2] = bytes[2];
            scanline[x * uint.sizeof + 3] = bytes[3];
        }
    }
    FreeImage_Save(FIF_TIFF, bitmap, "test32.tif", TIFF_NONE);
    FreeImage_Unload(bitmap);
}
Thanks for any pointers.
For a single channel, the highest available from FreeImage is 32-bit, as FIT_UINT32. However, the file format must be capable of this, and as of the moment, only TIFF appears to be up to the task (See page 104 of the Stanford Documentation). Additionally, most monitors are incapable of representing more than 8-bits-per-sample, 12 in extreme cases, so it is very difficult to read data back out and have it render properly.
A unit test comparing bytes before marshaling into the bitmap with bytes sampled from the same bitmap afterward shows that the data is in fact being encoded.
To imprint data to a 16-bit gray scale (currently supported by J2K, JP2, PGM, PGMRAW, PNG and TIF), you would do something like this:
void toFreeImageUINT16PNG(string fileName, const double width, const double height, double[] data) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT16, cast(int)width, cast(int)height);
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        for(int x = 0; x < width; x++) {
            // This magic has to happen with the y-coordinate in order to keep FreeImage
            // from following its default behavior and generating the image upside down.
            ushort v = cast(ushort)(data[cast(ulong)(((height - 1) - y) * width + x)] * 0xFFFF);
            ubyte[2] bytes = nativeToLittleEndian(v);
            scanline[x * ushort.sizeof + 0] = bytes[0];
            scanline[x * ushort.sizeof + 1] = bytes[1];
        }
    }
    FreeImage_Save(FIF_PNG, bitmap, fileName.toStringz);
    FreeImage_Unload(bitmap);
}
Of course you would want to make adjustments for your target file type. To export as 48-bit RGB16, you would do this.
void toFreeImageColorPNG(string fileName, const double width, const double height, double[] data) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_RGB16, cast(int)width, cast(int)height);
    uint pitch = FreeImage_GetPitch(bitmap);
    uint bpp = FreeImage_GetBPP(bitmap);
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        for(int x = 0; x < width; x++) {
            ulong offset = cast(ulong)((((height - 1) - y) * width + x) * 3);
            ushort r = cast(ushort)(data[(offset + 0)] * 0xFFFF);
            ushort g = cast(ushort)(data[(offset + 1)] * 0xFFFF);
            ushort b = cast(ushort)(data[(offset + 2)] * 0xFFFF);
            ubyte[6] bytes = nativeToLittleEndian(r) ~ nativeToLittleEndian(g) ~ nativeToLittleEndian(b);
            scanline[(x * 3 * ushort.sizeof) + 0] = bytes[0];
            scanline[(x * 3 * ushort.sizeof) + 1] = bytes[1];
            scanline[(x * 3 * ushort.sizeof) + 2] = bytes[2];
            scanline[(x * 3 * ushort.sizeof) + 3] = bytes[3];
            scanline[(x * 3 * ushort.sizeof) + 4] = bytes[4];
            scanline[(x * 3 * ushort.sizeof) + 5] = bytes[5];
        }
    }
    FreeImage_Save(FIF_PNG, bitmap, fileName.toStringz);
    FreeImage_Unload(bitmap);
}
Lastly, to encode a UINT32 greyscale image (limited purely to TIFF at the moment), you would do this.
void toFreeImageTIF32(string fileName, const double width, const double height, double[] data) {
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT32, cast(int)width, cast(int)height);
    //DEBUG
    int xtest = cast(int)(width/2);
    int ytest = cast(int)(height/2);
    uint comp1a = cast(uint)(data[cast(ulong)(((height - 1) - ytest) * width + xtest)] * 0xFFFFFFFF);
    writeln("initial: ", nativeToLittleEndian(comp1a));
    for(int y = 0; y < height; y++) {
        ubyte *scanline = FreeImage_GetScanLine(bitmap, y);
        for(int x = 0; x < width; x++) {
            // This magic has to happen with the y-coordinate in order to keep FreeImage
            // from following its default behavior and generating the image upside down.
            ulong i = cast(ulong)(((height - 1) - y) * width + x);
            uint v = cast(uint)(data[i] * 0xFFFFFFFF);
            ubyte[4] bytes = nativeToLittleEndian(v);
            scanline[x * uint.sizeof + 0] = bytes[0];
            scanline[x * uint.sizeof + 1] = bytes[1];
            scanline[x * uint.sizeof + 2] = bytes[2];
            scanline[x * uint.sizeof + 3] = bytes[3];
        }
    }
    //DEBUG
    ulong index = cast(ulong)(xtest * uint.sizeof);
    writeln("Final: ", FreeImage_GetScanLine(bitmap, ytest)[index .. index + uint.sizeof]);
    FreeImage_Save(FIF_TIFF, bitmap, fileName.toStringz);
    FreeImage_Unload(bitmap);
}
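The same FIT_UINT32 export can also be written against the FreeImage C API from C++. Here is a minimal sketch, assuming a little-endian host (so values can be stored into the scanline directly instead of being byte-swapped as in the D code) and a data vector of width * height values in [0, 1] with row 0 at the top:

#include <FreeImage.h>
#include <cstdint>
#include <vector>

bool writeUint32Tiff(const char *fileName, int width, int height, const std::vector<double> &data)
{
    FIBITMAP *bitmap = FreeImage_AllocateT(FIT_UINT32, width, height);
    if (!bitmap) return false;
    for (int y = 0; y < height; ++y) {
        // FreeImage scanline 0 is the bottom row, so flip y to keep the image upright.
        auto *line = reinterpret_cast<std::uint32_t *>(FreeImage_GetScanLine(bitmap, y));
        for (int x = 0; x < width; ++x)
            line[x] = static_cast<std::uint32_t>(data[(height - 1 - y) * width + x] * 0xFFFFFFFFu);
    }
    const bool ok = FreeImage_Save(FIF_TIFF, bitmap, fileName, TIFF_DEFAULT) != 0;
    FreeImage_Unload(bitmap);
    return ok;
}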
I've yet to find a program, built by anyone else, which will readily render a 32-bit gray-scale image on a monitor's available palette. However, I left my checking code in, which consistently writes out the same array at both the top DEBUG block and the bottom one, and that's consistent enough for me.
Hopefully this will help someone else out in the future.

Why QDBMP fail to write 128*128 images?

I am developing a C++ application that reads some bitmaps, works with them, and then saves them as bitmaps. I use the QDBMP library for working with bitmap files, and everything is fine for 512*512 bitmap images, but when working with 128*128 bitmap files it just writes some striped lines in the output. Here is my code for reading and writing bitmap files:
int readBitmapImage(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b)
{
    BMP* bmp;
    UINT width, height;
    bmp = BMP_ReadFile(file_name);
    BMP_GetDepth(bmp);
    BMP_CHECK_ERROR(stderr, -1);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_GetPixelRGB(bmp, x, y, &r[x*width + y], &g[x*width + y], &b[x*width + y]);
        }
    }
    BMP_CHECK_ERROR(stderr, -2);
    return 0;
}
void writeImageData(const char *file_name, UCHAR* r, UCHAR* g, UCHAR* b, int width, int height, int bitDepth)
{
    BMP* bmp = BMP_Create(width, height, bitDepth);
    width = BMP_GetWidth(bmp); height = BMP_GetHeight(bmp);
    for (int x = 0; x < width; ++x)
    {
        for (int y = 0; y < height; ++y)
        {
            BMP_SetPixelRGB(bmp, x, y, r[x*width + y], g[x*width + y], b[x*width + y]);
        }
    }
    BMP_WriteFile(bmp, file_name);
}
Thanks for your help.
UPDATE1
The source image is:
The result of saving the source image is:
UPDATE2
The value of bitDepth is 24, and the code block for allocating memory is:
UCHAR* WimageDataR = (UCHAR*)calloc(128* 128, sizeof(UCHAR));
UCHAR* WimageDataG = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
UCHAR* WimageDataB = (UCHAR*)calloc(128 * 128, sizeof(UCHAR));
After a while I finally found out what was wrong. In the BMP_ReadFile() function of QDBMP, when the image has a size of 128*128, the header parameter ImageDataSize is not read from the file and stays at 0. So I added this block of code to it to prevent the problem, and everything is fine now:
if (bmp->Header.ImageDataSize == 0)
{
    bmp->Header.ImageDataSize = bmp->Header.FileSize - bmp->Header.DataOffset;
}
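For reference, a BMP header may legally store 0 in its image-size field when the compression is BI_RGB, so computing the value yourself is a reasonable fallback. A small sketch of the equivalent calculation from width, height and bit depth (assuming the usual 4-byte scanline alignment):

// Expected image data size of an uncompressed BMP:
// every scanline is padded to a multiple of 4 bytes.
static size_t bmpImageDataSize(size_t width, size_t height, size_t bitsPerPixel)
{
    const size_t rowBytes = ((width * bitsPerPixel + 31) / 32) * 4;
    return rowBytes * height;
}
// For a 128*128, 24-bit image this gives 384 bytes per row and 49152 bytes in
// total, which is what ImageDataSize should have contained.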

Broken BMP when save bitmap by SOIL. Screenshot area

This is a continuation of my last question about saving a screenshot with SOIL (here). Now I am wondering how to take a screenshot of part of the screen and eliminate the cause of that strange behaviour. My code:
bool saveTexture(string path, glm::vec2 startPos, glm::vec2 endPos)
{
    const char *charPath = path.c_str();
    GLuint widthPart = abs(endPos.x - startPos.x);
    GLuint heightPart = abs(endPos.y - startPos.y);

    BITMAPINFO bmi;
    auto& hdr = bmi.bmiHeader;
    hdr.biSize = sizeof(bmi.bmiHeader);
    hdr.biWidth = widthPart;
    hdr.biHeight = -1.0 * heightPart;
    hdr.biPlanes = 1;
    hdr.biBitCount = 24;
    hdr.biCompression = BI_RGB;
    hdr.biSizeImage = 0;
    hdr.biXPelsPerMeter = 0;
    hdr.biYPelsPerMeter = 0;
    hdr.biClrUsed = 0;
    hdr.biClrImportant = 0;

    unsigned char* bitmapBits = (unsigned char*)malloc(3 * widthPart * heightPart);

    HDC hdc = GetDC(NULL);
    HDC hBmpDc = CreateCompatibleDC(hdc);
    HBITMAP hBmp = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, (void**)&bitmapBits, nullptr, 0);
    SelectObject(hBmpDc, hBmp);
    BitBlt(hBmpDc, 0, 0, widthPart, heightPart, hdc, startPos.x, startPos.y, SRCCOPY);

    // UPDATE:
-   int bytes = widthPart * heightPart * 3;
-   // invert R and B channels
-   for (unsigned i = 0; i < bytes - 2; i += 3)
-   {
-       int tmp = bitmapBits[i + 2];
-       bitmapBits[i + 2] = bitmapBits[i];
-       bitmapBits[i] = tmp;
-   }
+   unsigned stride = (widthPart * (hdr.biBitCount / 8) + 3) & ~3;
+   // invert R and B channels
+   for (unsigned row = 0; row < heightPart; ++row) {
+       for (unsigned col = 0; col < widthPart; ++col) {
+           // Calculate the pixel index into the buffer, taking the alignment into account
+           const size_t index{ row * stride + col * hdr.biBitCount / 8 };
+           std::swap(bitmapBits[index], bitmapBits[index + 2]);
+       }
+   }

    int texture = SOIL_save_image(charPath, SOIL_SAVE_TYPE_BMP, widthPart, heightPart, 3, bitmapBits);
    return texture;
}
When I run this, it works perfectly if widthPart and heightPart are even numbers. But if either of them is odd I get BMPs like this:
I checked the conversion and the rest of the code twice, but it seems to me the cause is my blitting. The RGB conversion function has no effect on the problem. What could the reason be? Is this the right way to blit an area with BitBlt?
Update: It makes no difference whether the numbers are even or odd. A correct picture is produced only when the two numbers are equal. I don't know where the problem is.
Update2
SOIL_save_image checks the parameters for errors and forwards them to stbi_write_bmp:
int stbi_write_bmp(char *filename, int x, int y, int comp, void *data)
{
    int pad = (-x*3) & 3;
    return outfile(filename, -1, -1, x, y, comp, data, 0, pad,
                   "11 4 22 4" "4 44 22 444444",
                   'B', 'M', 14+40+(x*3+pad)*y, 0,0, 14+40,  // file header
                   40, x,y, 1,24, 0,0,0,0,0,0);              // bitmap header
}
outfile function:
static int outfile(char const *filename, int rgb_dir, int vdir, int x, int y,
                   int comp, void *data, int alpha, int pad, char *fmt, ...)
{
    FILE *f = fopen(filename, "wb");
    if (f) {
        va_list v;
        va_start(v, fmt);
        writefv(f, fmt, v);
        va_end(v);
        write_pixels(f, rgb_dir, vdir, x, y, comp, data, alpha, pad);
        fclose(f);
    }
    return f != NULL;
}
The broken bitmap images are the result of a disagreement of data layout between Windows bitmaps and what the SOIL library expects¹. The pixel buffer returned from CreateDIBSection follows the Windows rules (see Bitmap Header Types):
The scan lines are DWORD aligned [...]. They must be padded for scan line widths, in bytes, that are not evenly divisible by four [...].
In other words: The width, in bytes, of each scanline is (biWidth * (biBitCount / 8) + 3) & ~3. The SOIL library, on the other hand, doesn't expect pixel buffers to be DWORD aligned.
To fix this, the pixel data needs to be converted before being passed to SOIL, by stripping (potential) padding and exchanging the R and B color channels. The following code does so in-place²:
unsigned stride = (widthPart * (hdr.biBitCount / 8) + 3) & ~3;
for (unsigned row = 0; row < heightPart; ++row) {
    for (unsigned col = 0; col < widthPart; ++col) {
        // Calculate the source pixel index, taking the alignment into account
        const size_t index_src{ row * stride + col * hdr.biBitCount / 8 };
        // Calculate the destination pixel index (no alignment)
        const size_t index_dst{ (row * widthPart + col) * (hdr.biBitCount / 8) };
        // Read color channels
        const unsigned char b{ bitmapBits[index_src] };
        const unsigned char g{ bitmapBits[index_src + 1] };
        const unsigned char r{ bitmapBits[index_src + 2] };
        // Write color channels switching R and B, and remove padding
        bitmapBits[index_dst] = r;
        bitmapBits[index_dst + 1] = g;
        bitmapBits[index_dst + 2] = b;
    }
}
With this code, index_src is the index into the pixel buffer, which includes padding to enforce proper DWORD alignment. index_dst is the index without any padding applied. Moving pixels from index_src to index_dst removes (potential) padding.
¹ The tell-tale sign is scanlines shifting to the left or right by one or two pixels (or individual color channels shifting at different rates). This is usually a safe indication that there is a disagreement about scanline alignment.
² This operation is destructive, i.e. the pixel buffer can no longer be passed to Windows GDI functions once converted, although the original data can be reconstructed, even if doing so is a bit more involved.
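To make footnote 1 concrete, here is a tiny sketch of the stride arithmetic for a hypothetical odd width (the value 101 is only an example, not taken from the question):

#include <cstdio>

// Compares the DWORD-aligned scanline size produced by GDI with the tightly
// packed size SOIL expects, for a 24 bpp image.
int main()
{
    const unsigned width  = 101;                    // hypothetical odd width
    const unsigned packed = width * 3;              // 303 bytes: what SOIL assumes per row
    const unsigned padded = (width * 3 + 3) & ~3u;  // 304 bytes: what GDI actually stores
    std::printf("packed=%u padded=%u drift=%u byte(s) per row\n",
                packed, padded, padded - packed);
    return 0;
}

Every row that SOIL reads therefore starts one byte too early relative to the GDI buffer, which produces exactly the sheared, color-shifted output described above.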