I've been trying to load compressed images with S3TC (BC/DXT) compression in Vulkan, but so far I haven't had much luck.
Here is what the Vulkan specification says about compressed images:
https://www.khronos.org/registry/dataformat/specs/1.1/dataformat.1.1.html#S3TC:
Compressed texture images stored using the S3TC compressed image formats are represented as a collection of 4×4 texel blocks, where each block contains 64 or 128 bits of texel data. The image is encoded as a normal 2D raster image in which each 4×4 block is treated as a single pixel.
https://www.khronos.org/registry/vulkan/specs/1.0/xhtml/vkspec.html#resources-images:
For images created with linear tiling, rowPitch, arrayPitch and depthPitch describe the layout of the subresource in linear memory. For uncompressed formats, rowPitch is the number of bytes between texels with the same x coordinate in adjacent rows (y coordinates differ by one). arrayPitch is the number of bytes between texels with the same x and y coordinate in adjacent array layers of the image (array layer values differ by one). depthPitch is the number of bytes between texels with the same x and y coordinate in adjacent slices of a 3D image (z coordinates differ by one). Expressed as an addressing formula, the starting byte of a texel in the subresource has address:
// (x,y,z,layer) are in texel coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*texelSize + offset
For compressed formats, the rowPitch is the number of bytes between compressed blocks in adjacent rows. arrayPitch is the number of bytes between blocks in adjacent array layers. depthPitch is the number of bytes between blocks in adjacent slices of a 3D image.
// (x,y,z,layer) are in block coordinates
address(x,y,z,layer) = layer*arrayPitch + z*depthPitch + y*rowPitch + x*blockSize + offset;
arrayPitch is undefined for images that were not created as arrays. depthPitch is defined only for 3D images.
For color formats, the aspectMask member of VkImageSubresource must be VK_IMAGE_ASPECT_COLOR_BIT. For depth/stencil formats, aspect must be either VK_IMAGE_ASPECT_DEPTH_BIT or VK_IMAGE_ASPECT_STENCIL_BIT. On implementations that store depth and stencil aspects separately, querying each of these subresource layouts will return a different offset and size representing the region of memory used for that aspect. On implementations that store depth and stencil aspects interleaved, the same offset and size are returned and represent the interleaved memory allocation.
My image is a plain 2D image (no array layers, 1 mip level), so arrayPitch and depthPitch don't apply. Since S3TC compression is directly supported by the hardware, it should be possible to use the image data without decompressing it first. In OpenGL this can be done using glCompressedTexImage2D, and this has worked for me in the past.
In OpenGL I've used GL_COMPRESSED_RGBA_S3TC_DXT1_EXT as the image format; for Vulkan I'm using VK_FORMAT_BC1_RGBA_UNORM_BLOCK, which should be equivalent.
Here's my code for mapping the image data:
auto dds = load_dds("img.dds");
auto *srcData = static_cast<uint8_t*>(dds.data());
auto *destData = static_cast<uint8_t*>(vkImageMapPtr); // Pointer to mapped memory of the VkImage
destData += layout.offset(); // layout = vk::SubresourceLayout of the subresource (queried below), not a VkImageLayout
assert((w % 4) == 0);
assert((h % 4) == 0);
assert(blockSize == 8); // S3TC BC1
auto wBlocks = w / 4;
auto hBlocks = h / 4;
for(auto y = decltype(hBlocks){0}; y < hBlocks; ++y)
{
    auto *rowDest = destData + y * layout.rowPitch(); // rowPitch is 0
    auto *rowSrc = srcData + y * (wBlocks * blockSize);
    for(auto x = decltype(wBlocks){0}; x < wBlocks; ++x)
    {
        auto *pxDest = rowDest + x * blockSize;
        auto *pxSrc = rowSrc + x * blockSize; // 4x4 texel block
        memcpy(pxDest, pxSrc, blockSize); // 64 bits per block
    }
}
And here's the code for initializing the image:
vk::Device device = ...; // Initialization
vk::AllocationCallbacks allocatorCallbacks = ...; // Initialization
[...] // Load the dds data
uint32_t width = dds.width();
uint32_t height = dds.height();
auto format = dds.format(); // = vk::Format::eBc1RgbaUnormBlock;
vk::Extent3D extent(width,height,1);
vk::ImageCreateInfo imageInfo(
vk::ImageCreateFlagBits(0),
vk::ImageType::e2D,format,
extent,1,1,
vk::SampleCountFlagBits::e1,
vk::ImageTiling::eLinear,
vk::ImageUsageFlagBits::eSampled | vk::ImageUsageFlagBits::eColorAttachment,
vk::SharingMode::eExclusive,
0,nullptr,
vk::ImageLayout::eUndefined
);
vk::Image img = nullptr;
device.createImage(&imageInfo,&allocatorCallbacks,&img);
vk::MemoryRequirements memRequirements;
device.getImageMemoryRequirements(img,&memRequirements);
uint32_t typeIndex = 0;
get_memory_type(memRequirements.memoryTypeBits(),vk::MemoryPropertyFlagBits::eHostVisible,typeIndex); // -> typeIndex is set to 1
auto szMem = memRequirements.size();
vk::MemoryAllocateInfo memAlloc(szMem,typeIndex);
vk::DeviceMemory mem;
device.allocateMemory(&memAlloc,&allocatorCallbacks,&mem); // Note: Using the default allocation (nullptr) doesn't change anything
device.bindImageMemory(img,mem,0);
uint32_t mipLevel = 0;
vk::ImageSubresource resource(
vk::ImageAspectFlagBits::eColor,
mipLevel,
0
);
vk::SubresourceLayout layout;
device.getImageSubresourceLayout(img,&resource,&layout);
auto *srcData = device.mapMemory(mem,0,szMem,vk::MemoryMapFlagBits(0));
[...] // Map the dds-data (See code from first post)
device.unmapMemory(mem);
The code runs without issues, however the resulting image isn't correct. This is the source image:
And this is the result:
I'm certain that the problem lies in the first code snippet I've posted. However, in case it doesn't, I've written a small adaptation of the triangle demo from the Vulkan SDK which produces the same result. It can be downloaded here. The source code is included; all I've changed from the triangle demo are the "demo_prepare_texture_image" function in tri.c (Lines 803 to 903) and the "dds.cpp" and "dds.h" files. "dds.cpp" contains the code for loading the dds and mapping the image memory.
I'm using gli to load the dds data (which is supposed to "work perfectly with Vulkan"), which is also included in the download above. To build the project, the Vulkan SDK include directory has to be added to the "tri" project, and the path to the dds has to be changed (tri.c, Line 809).
The source image ("x64/Debug/test.dds" in the project) uses DXT1 compression. I've tested it on different hardware as well, with the same result.
Any example code for initializing/mapping compressed images would also help a lot.
Your problem is actually quite simple: in the first line of the demo_prepare_textures function there is a variable tex_format, which is set to VK_FORMAT_B8G8R8A8_UNORM (as it is in the original sample). This eventually gets used to create the VkImageView. If you just change it to VK_FORMAT_BC1_RGBA_UNORM_BLOCK, the texture displays correctly on the triangle.
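In other words, the view format has to match the compressed format of the image. A minimal sketch of that one-line change (variable name as mentioned above, taken from the SDK triangle demo):
// In demo_prepare_textures() in tri.c: the VkImageView must use the compressed format.
const VkFormat tex_format = VK_FORMAT_BC1_RGBA_UNORM_BLOCK; // was VK_FORMAT_B8G8R8A8_UNORM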
As an aside, you can verify that your texture loaded correctly with RenderDoc, which comes with the Vulkan SDK installation. Capture a frame and look in the Texture Viewer: the Inputs tab shows that your texture looks identical to the one on disk, even with the incorrect format.
Related
In a camera application bitmap pixel arrays are retrieved from a streaming camera.
The pixel arrays are captured by writing them to a named pipe, where on the other end of the pipe, ffmpeg retrieves them and creates an AVI file.
I will need to create one custom frame (with custom text on), and pipe its pixels as the first frame in the resulting movie.
The question is how I can use a TBitmap (for convenience) to:
1. Create an X by Y monochrome (8-bit) bitmap from scratch, with custom text on it. I want the background to be white and the text to be black. (Mostly figured this step out, see below.)
2. Retrieve the pixel array that I can send/write to the pipe.
Step 1: The following code creates a TBitmap and writes text on it:
int w = 658;
int h = 492;
TBitmap* bm = new TBitmap();
bm->Width = w;
bm->Height = h;
bm->HandleType = bmDIB;
bm->PixelFormat = pf8bit;
bm->Canvas->Font->Name = "Tahoma";
bm->Canvas->Font->Size = 8;
int textY = 10;
string info("some Text");
bm->Canvas->TextOut(10, textY, info.c_str());
The above basically concludes step 1.
The writing/piping code expects a byte array with the bitmap's pixels, e.g.
unsigned long numWritten;
WriteFile(mPipeHandle, pImage, size, &numWritten, NULL);
where pImage is a pointer to an unsigned char buffer (the bitmap's pixels), and size is the length of this buffer.
Update:
Using the generated TBitmap and a TMemoryStream for transferring data to the ffmpeg pipeline does not generate the proper result. I get a distorted image with 3 diagonal lines on it.
The buffer size of the camera frame buffers that I receive is exactly 323736, which is equal to the number of pixels in the image, i.e. 658x492.
NOTE: I have concluded that this 'bitmap' is not padded. 658 is not divisible by four.
The buffer size I get after dumping my generated bitmap to a memory stream, however, is 325798, which is 2062 bytes larger than it is supposed to be. As @Spektre pointed out below, this discrepancy may be due to padding.
I'm using the following code to get the pixel array:
ByteBuffer CustomBitmap::getPixArray()
{
    // --- Local variables --- //
    unsigned int iInfoHeaderSize = 0;
    unsigned int iImageSize = 0;
    BITMAPINFO *pBitmapInfoHeader;
    unsigned char *pBitmapImageBits;

    // First we call GetDIBSizes() to determine the amount of
    // memory that must be allocated before calling GetDIB().
    // NB: GetDIBSizes() is a part of the VCL.
    GetDIBSizes(mTheBitmap->Handle, iInfoHeaderSize, iImageSize);

    // Next we allocate memory according to the information
    // returned by GetDIBSizes(). Note that iInfoHeaderSize is a size
    // in bytes, so allocate raw bytes rather than BITMAPINFO structs.
    pBitmapInfoHeader = (BITMAPINFO*) new unsigned char[iInfoHeaderSize];
    pBitmapImageBits = new unsigned char[iImageSize];

    // Call GetDIB() to convert a device dependent bitmap into a
    // Device Independent Bitmap (a DIB).
    // NB: GetDIB() is a part of the VCL.
    GetDIB(mTheBitmap->Handle, mTheBitmap->Palette,
           pBitmapInfoHeader, pBitmapImageBits);

    delete [] reinterpret_cast<unsigned char*>(pBitmapInfoHeader);

    ByteBuffer buf;
    buf.buffer = pBitmapImageBits; // caller takes ownership of the pixel buffer
    buf.size = iImageSize;
    return buf;
}
So the final challenge seems to be to get a byte array that has the same size as the ones coming from the camera. How do I find and remove the padding bytes from the TBitmap data?
TBitmap has a PixelFormat property to set the bit depth.
TBitmap has a HandleType property to control whether a DDB or a DIB is created. DIB is the default.
Since you are passing BMPs around between different systems, you really should be using DIBs instead of DDBs, to avoid any corruption/misinterpretation of the pixel data.
Also, this line of code:
Image1->Picture->Bitmap->Handle = bm->Handle;
Should be changed to this instead:
Image1->Picture->Bitmap->Assign(bm);
// or:
// Image1->Picture->Bitmap = bm;
Or this:
Image1->Picture->Assign(bm);
Either way, don't forget to delete bm; afterwards, since the TPicture makes a copy of the input TBitmap, it does not take ownership.
To get the BMP data as a buffer of bytes, you can use the TBitmap::SaveToStream() method, saving to a TMemoryStream. Or, if you just want the pixel data, not the complete BMP data (i.e., without BMP headers - see Bitmap Storage), you can use the Win32 GetDIBits() function, which outputs the pixels in DIB format. You can't obtain a byte buffer of the pixels for a DDB, since they depend on the device they are rendered to. DDBs are only usable in-memory in conjunction with HDCs; you can't pass them around. But you can convert a DIB to a DDB once you have a final device to render it to.
In other words, get the pixels from the camera, save them to a DIB, pass that around as needed (ie, over the pipe), and then do whatever you need with it - save to a file, convert to DDB to render onscreen, etc.
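For illustration, here is a rough sketch of the SaveToStream() route mentioned above (my sketch, assuming the bm bitmap from the question); note that the resulting stream contains the BMP file headers, the palette and any row padding in addition to the raw pixels:
// Sketch: dump the complete BMP (headers + palette + padded pixel rows) into memory.
TMemoryStream *ms = new TMemoryStream();
bm->SaveToStream(ms);
unsigned char *bmpBytes = static_cast<unsigned char*>(ms->Memory); // whole BMP file image
int bmpSize = ms->Size;                                            // its length in bytes
// ... write bmpBytes/bmpSize to the pipe, or strip the headers first ...
delete ms;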
This is just an add-on to the existing answer (with additional info after the OP edit).
The bitmap file format pads each row with alignment bytes up to a multiple of 4 bytes (so there are usually some bytes at the end of each line that are not pixels), and the dumped stream also contains the BMP headers and, for pf8bit, a 256-entry palette. The padding bytes create the skew and the diagonal-like lines. In your case the 2062-byte discrepancy works out to 2 alignment bytes per row plus headers and palette:
(xs + align)*ys + headers + palette = size
(658 + 2)*492 + 54 + 1024 = 325798
but beware: the alignment size depends on the image width, and the header/palette size depends on the BMP format ...
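As a quick illustration of the row alignment (my sketch, not part of the original answer), for a pf8bit DIB each scan line is rounded up to a multiple of 4 bytes:
// Sketch: DWORD-aligned scan line size for an 8-bit (1 byte per pixel) DIB.
int bytesPerPixel = 1;                              // pf8bit
int stride = ((width * bytesPerPixel + 3) / 4) * 4; // round up to a multiple of 4
int paddingPerRow = stride - width * bytesPerPixel; // e.g. 2 bytes for width 658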
Try this instead:
// create bmp
Graphics::TBitmap *bmp=new Graphics::TBitmap;
// bmp->Assign(???); // a) copy image from ???
bmp->SetSize(658,492); // b) in case you use Assign do not change resolution
bmp->HandleType=bmDIB;
bmp->PixelFormat=pf8bit;
// bmp->Canvas->Draw(0,0,???); // b) copy image from ???
// here render your text using
bmp->Canvas->Brush->Style=bsSolid;
bmp->Canvas->Brush->Color=clWhite;
bmp->Canvas->Font->Color=clBlack;
bmp->Canvas->Font->Name = "Tahoma";
bmp->Canvas->Font->Size = 8;
bmp->Canvas->TextOutA(5,5,"Text");
// Byte data
for (int y=0;y<bmp->Height;y++)
{
BYTE *p=(BYTE*)bmp->ScanLine[y]; // pf8bit -> BYTE*
// here send/write/store ... bmp->Width bytes from p[]
}
// Canvas->Draw(0,0,bmp); // just render it on the Form
delete bmp; bmp=NULL;
Mixing GDI WinAPI calls for pixel array access (BitBlt etc.) with a VCL bmDIB bitmap might cause problems and resource leaks (hence the error on exit), and it is also slower than using ScanLine[] (if coded right), so I strongly advise using native VCL functions (as I did in the example above) instead of the GDI/WinAPI calls where you can.
for more info see:
#4. GDI Bitmap
Delphi / C++ builder Windows 10 1709 bitmap operations extremely slow
Draw tbitmap with scale and alpha channel faster
Also, you mention your image source is a camera. If you use pf8bit, that means palette-indexed color, which is relatively slow and ugly if the native GDI algorithm is used (to convert from a true/high-color camera image); for a better transform see:
Effective gif/image color quantization?
simple dithering
We already have a highly optimized class in our API to read 3D LUT (Nuke format) files and apply the transform to the image. So instead of iterating pixel by pixel and converting RGB values to Lab (RGB->XYZ->Lab) values using the complex formulae, I think it would be better if I generated a lookup table for the RGB to Lab (or XYZ to Lab) transform. Is this possible?
I understand how a 3D LUT works for transforms from RGB to RGB, but I am confused about RGB to Lab, as L, a and b have different ranges. Any hints?
EDIT:
Can you please explain how the LUT will work?
Here's one explanation: link
E.g., below is my understanding of a 3D LUT for an RGB->RGB transform.
A sample Nuke 3dl LUT file:
0 64 128 192 256 320 384 448 512 576 640 704 768 832 896 960 1023
R, G, B
0, 0, 0
0, 0, 64
0, 0, 128
0, 0, 192
0, 0, 256
.
.
.
0, 64, 0
0, 64, 64
0, 64, 128
.
.
Here, instead of generating a 1024x1024x1024 table for the source 10-bit RGB values, each R, G and B range is quantized to 17 values, generating a 4913-row table.
The first line gives the possible quantized values (I think only the length and the max value matter). Now suppose the source RGB value is (20, 20, 190); the output would be line #4, (0, 0, 192) (using some interpolation technique). Is that correct?
This one is for a 10-bit source; you could generate a similar one for 8-bit by changing the range from 0 to 255?
Similarly, how would you proceed for an sRGB->Lab conversion?
An alternative approach makes use of graphics hardware, aka "general purpose GPU computing". There are some different tools for this, e.g. OpenGL GLSL, OpenCL, CUDA, ... You should gain an incredible speedup of about 100x and more compared to a CPU solution.
The most "compatible" solution is to use OpenGL with a special fragment shader with which you can perform computations. This means: upload your input image as a texture to the GPU, render it in a (target) framebuffer with a special shader program which converts your RGB data to Lab (or it can also make use of a lookup table, but most float computations on the GPU are faster than table / texture lookups, so we won't do this here).
First, port your RGB to Lab conversion function to GLSL. It should work on float numbers, so if you used integral values in your original conversion, get rid of them. OpenGL uses "clamp" values, i.e. float values between 0.0 and 1.0. It will look like this:
vec3 rgbToLab(vec3 rgb) {
vec3 lab = ...;
return lab;
}
Then, write the rest of the shader, which fetches a pixel of the (RGB) texture, calls the conversion function and writes the result to the color output variable (don't forget the alpha channel):
uniform sampler2D texture;
varying vec2 texCoord;

void main() {
    vec3 rgb = texture2D(texture, texCoord).rgb;
    vec3 lab = rgbToLab(rgb);
    gl_FragColor = vec4(lab, 1.0);
}
The corresponding vertex shader should write texCoord values of (0,0) in the bottom left and (1,1) in the top right of a target quad filling the whole screen (framebuffer).
Finally, use this shader program in your application by rendering into a framebuffer with the same size as your image. Render a quad which fills the whole region (without setting any transformations, just render a quad from the 2D vertices (-1,-1) to (1,1)). Set the uniform value texture to your RGB image which you uploaded as a texture. Then read back the framebuffer from the device, which should hopefully contain your image in Lab color space.
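A minimal sketch of that readback step (my illustration; it assumes the Lab result was rendered into a floating-point RGBA color attachment of the currently bound FBO, and that width/height are the image dimensions):
#include <vector>
// ...
std::vector<float> labPixels(width * height * 4);   // one RGBA float quadruple per pixel
glReadBuffer(GL_COLOR_ATTACHMENT0);                 // the FBO's color attachment
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, labPixels.data());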
Assuming your source colorspace is a triplet of bytes (RGB, 8 bits each) and both color spaces are stored in structs with the names SourceColor and TargetColor respectively, and you have a conversion function given like this:
TargetColor convert(SourceColor color) {
return ...
}
Then you can create a table like this:
TargetColor table[256][256][256]; // 16M * sizeof(TargetColor) => put it on the heap!

for (int r = 0; r < 256; ++r)
    for (int g = 0; g < 256; ++g)
        for (int b = 0; b < 256; ++b)
            table[r][g][b] = convert({r, g, b}); // (construct a SourceColor from r,g,b)
Then, for the actual image conversion, use an alternative convert function (I'd suggest that you write an image conversion class which takes a function pointer / std::function in its constructor, so it's easily exchangeable):
TargetColor convertUsingTable(SourceColor source) {
return table[source.r][source.g][source.b];
}
Note that the space consumption is 16M * sizeof(TargetColor) (assuming 32 bits for Lab, this will be 64 MB), so the table should be heap-allocated (it can be stored in-class if your class is going to live on the heap, but it's better to allocate it with new[] in the constructor and store it in a smart pointer).
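A rough sketch of that heap-allocated variant (my illustration; it assumes the SourceColor/TargetColor types and the convert() function from above, with byte-sized r, g, b members):
#include <cstdint>
#include <memory>

// Sketch: flatten the 256^3 table into one heap allocation owned by a smart pointer.
std::unique_ptr<TargetColor[]> table(new TargetColor[256 * 256 * 256]);

for (int r = 0; r < 256; ++r)
    for (int g = 0; g < 256; ++g)
        for (int b = 0; b < 256; ++b)
            table[(r * 256 + g) * 256 + b] =
                convert({(uint8_t)r, (uint8_t)g, (uint8_t)b});

// Lookup for a SourceColor c is then a single indexed read:
// TargetColor out = table[(c.r * 256 + c.g) * 256 + c.b];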
I am writing a text module for an OpenGL engine using the NVIDIA path rendering extension (NV_path_rendering). The extension allows loading system and external font files using TrueType metrics. Now I need to be able to set a standard font size (in pixels) for the glyphs when rendering the text. By default the loaded glyph has EMscale = 2048. Searching for a glyph-metrics-to-pixels conversion I found this:
Converting FUnits to pixels
Values in the em square are converted to values in the pixel
coordinate system by multiplying them by a scale. This scale is:
pointSize * resolution / ( 72 points per inch * units_per_em )
So units_per_em equals 2048; pointSize and resolution are the unknowns I can't resolve. How do I get the resolution value for the viewport width and height to plug into this equation? Also, what should the point size be if my input is the pixel size for the font?
I tried to solve this equation with different kind of input but my rendered text gets always smaller (or bigger) than the reference text (AfterEffects).
NV_Path docs refer to FreeType2 metrics. The reference says:
The metrics found in face->glyph->metrics are normally expressed in
26.6 pixel format (i.e., 1/64th of pixels), unless you use the FT_LOAD_NO_SCALE flag when calling FT_Load_Glyph or FT_Load_Char. In
this case, the metrics will be expressed in original font units.
I tried to scale down the text model matrix by 1/64. It approximates to the correct size but still not perfect.
Here is how I currently setup the text rendering in the code:
emScale=2048;
glyphBase = glGenPathsNV(1+numChars);
pathTemplate= ~0;
glPathGlyphRangeNV(glyphBase,GL_SYSTEM_FONT_NAME_NV,
"Verdana",GL_BOLD_BIT_NV,0,numChars,GL_SKIP_MISSING_GLYPH_NV,pathTemplate,emScale);
/* Query font and glyph metrics. */
glGetPathMetricRangeNV(
GL_FONT_Y_MIN_BOUNDS_BIT_NV|
GL_FONT_Y_MAX_BOUNDS_BIT_NV|
GL_FONT_X_MIN_BOUNDS_BIT_NV|
GL_FONT_X_MAX_BOUNDS_BIT_NV|
GL_FONT_UNDERLINE_POSITION_BIT_NV|
GL_FONT_UNDERLINE_THICKNESS_BIT_NV,glyphBase+' ' ,1 ,6*sizeof(GLfloat),font_data);
glGetPathMetricRangeNV(GL_GLYPH_HORIZONTAL_BEARING_ADVANCE_BIT_NV,
glyphBase,numChars,0,horizontalAdvance);
/* Query spacing information for example's message. */
messageLen = strlen(message);
xtranslate =(GLfloat*)malloc(sizeof(GLfloat) *messageLen);
if(!xtranslate){
fprintf(stderr, "%s: malloc of xtranslate failed\n", "Text3D error");
exit(1);
}
xtranslate[0] = 0.0f; /* Initial xtranslate is zero. */
/* Use 100% spacing; use 0.9 for both for 90% spacing. */
GLfloat advanceScale = 1.0f,
kerningScale = 1.0f;
glGetPathSpacingNV(GL_ACCUM_ADJACENT_PAIRS_NV,
(GLsizei)messageLen,GL_UNSIGNED_BYTE,message,glyphBase,
advanceScale,kerningScale,GL_TRANSLATE_X_NV,&xtranslate[1]); /* messageLen-1 accumulated translates are written here. */
const unsigned char *message_ub = (const unsigned char*)message;
totalAdvance = xtranslate[messageLen-1] +
horizontalAdvance[message_ub[messageLen-1]];
xBorder = totalAdvance / messageLen;
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_NOTEQUAL ,0 ,~0);
glStencilOp(GL_KEEP,GL_KEEP,GL_ZERO);
////////// init matrices /////////////
translate(model ,vec3(0));
translate(view ,vec3(0));
float nearF=1 ,farF = 1200;
glViewport(0,0,_viewPortW,_viewPortH);
glMatrixLoadIdentityEXT(GL_PROJECTION);
float aspect_ratio =(float) _viewPortW /(float) _viewPortH;
glMatrixFrustumEXT(GL_PROJECTION ,-aspect_ratio,aspect_ratio,-1 ,1 ,nearF,farF);
model=translate(model,vec3(0.0f,384.0,0.0f));//move up
//scale by 26.6 also doesn't work:
model=scale(model,vec3((1.0f/26.6f),1.0f/26.6f,1.0f/26.6f));
view=lookAt(vec3(0,0,0),vec3(0,0,-1),vec3(0,1,0));
}
The resolution is device dependent and, in your equation, is given in DPI (dots per inch). pointSize is the size of the font chosen by the user, in points. A point = 1/72 inch (actually this is the size of a point as used in PostScript; the traditional typesetter's point is slightly different).
The device resolution can be queried using OS dependent methods. Google for "Display DPI $NAMEOFOPERATINGSYSTEM". Using a size in points, this gives you constant physical font sizes independent of the display device used.
Note that when rendering with OpenGL you'll still go through the transformation pipeline which must be accounted for.
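As an illustration (my sketch, not part of the answer), on Windows the display DPI can be queried with GetDeviceCaps, and the em-units-to-pixels scale then follows directly from the quoted formula:
#include <windows.h>

// Sketch: compute the scale that maps font units (em units) to pixels.
HDC hdc = GetDC(NULL);                     // screen device context
int dpi = GetDeviceCaps(hdc, LOGPIXELSY);  // vertical DPI of the display
ReleaseDC(NULL, hdc);

float pointSize  = 12.0f;                  // desired font size in points (example value)
float unitsPerEm = 2048.0f;                // emScale used when loading the glyphs
float pixelsPerEmUnit = pointSize * dpi / (72.0f * unitsPerEm);
// Scale the text model matrix by pixelsPerEmUnit.
// If the input is already a pixel size, the scale is simply pixelSize / unitsPerEm.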
I have an image (208x8) and I would like to copy 8x8 squares from it at different areas, then join all the squares to create one IDirect3DTexture9*.
Depending on exactly what you are trying to do, IDirect3DDevice9::UpdateSurface or IDirect3DDevice9::StretchRect might help you.
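For example, here is a rough sketch of the UpdateSurface route (my illustration; it assumes device is your IDirect3DDevice9*, srcTex was created in D3DPOOL_SYSTEMMEM and dstTex in D3DPOOL_DEFAULT, as UpdateSurface requires):
// Sketch: copy one 8x8 square from the source texture into the destination texture.
IDirect3DSurface9 *srcSurf = NULL, *dstSurf = NULL;
srcTex->GetSurfaceLevel(0, &srcSurf);
dstTex->GetSurfaceLevel(0, &dstSurf);

RECT srcRect = { 0, 0, 8, 8 };   // 8x8 square to copy (left, top, right, bottom)
POINT dstPoint = { 16, 0 };      // where to place it in the destination

device->UpdateSurface(srcSurf, &srcRect, dstSurf, &dstPoint);

srcSurf->Release();
dstSurf->Release();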
For simple operations on very small textures like you are describing, it can be advantageous to manipulate them using the CPU (i.e. with IDirect3DTexture9::LockRect). With D3D9 this usually implies that the texture be re-uploaded to VRAM, so it is generally only useful for small or infrequently modified textures. But sometimes if you are render-bound and you are careful about where you update the texture within your loop, it's possible to hide the cost of operations like this and get them "for free".
To avoid the VRAM upload, you can use a POOL_MANAGED resource combined with the appropriate usage and lock flags to situate the resource within the AGP aperture which allows for high-speed access from both the CPU and GPU, see: http://msdn.microsoft.com/en-us/library/windows/desktop/ee418784(v=vs.85).aspx
If you are manipulating on the CPU, be aware of the tiling and alignment restrictions for the various texture formats. The best information about this is within the documentation that comes with the SDK (includes several whitepapers), the online documentation is incomplete.
Here's a basic example:
IDirect3DTexture9* m_tex = getYourTexture();

// Region of the top mip level to modify (left, top, right, bottom).
RECT d3dRect = getYourRegion();
D3DLOCKED_RECT outRect;
m_tex->LockRect(0, &outRect, &d3dRect, D3DLOCK_DISCARD); // D3DLOCK_DISCARD requires a dynamic texture

// Stride depends on your texture format - this is the number of bytes per texel.
// Note that this may be less than 1 for DXT1 textures, in which case you'll need
// some bit-swizzling logic. It can be inferred from Pitch and the width.
int stride = 1;
int rowPitch = outRect.Pitch;

// Choose a pointer type that suits your stride.
unsigned char* pixels = (unsigned char*)outRect.pBits;

// Clear the locked region to black.
int width  = d3dRect.right  - d3dRect.left;
int height = d3dRect.bottom - d3dRect.top;
for (int y = 0; y < height; ++y)
{
    for (int x = 0; x < width; ++x)
    {
        pixels[x + rowPitch * y] = 0x0;
    }
}

m_tex->UnlockRect(0);
I'm using JNI to obtain raw image data in the following format:
The image data is returned in the format of a DATA32 (32 bits) per pixel in a linear array ordered from the top left of the image to the bottom right going from left to right each line. Each pixel has the upper 8 bits as the alpha channel and the lower 8 bits are the blue channel - so a pixel's bits are ARGB (from most to least significant, 8 bits per channel). You must put the data back at some point.
The DATA32 format is essentially an unsigned int in C.
So I obtain an int[] array and then try to create a BufferedImage out of it by:
int w = 1920;
int h = 1200;
BufferedImage b = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
int[] f = (new Capture()).capture();
for (int i = 0; i < f.length; i++) {
    b.setRGB(x, y, f[i]);
}
f is the array with the pixel data.
According to the Java documentation this should work since BufferedImage.TYPE_INT_ARGB is:
Represents an image with 8-bit RGBA color components packed into integer pixels. The image has a DirectColorModel with alpha. The color data in this image is considered not to be premultiplied with alpha. When this type is used as the imageType argument to a BufferedImage constructor, the created image is consistent with images created in the JDK1.1 and earlier releases.
Unless by 8-bit RGBA they mean that all components added together are encoded in 8 bits? But that's impossible.
This code does work, but the image that is produced is not at all like the image that it should produce. There are tonnes of artifacts. Can anyone see something obviously wrong in here?
Note I obtain my pixel data with
imlib_context_set_image(im);
data = imlib_image_get_data();
in my C code, using the library imlib2 with api http://docs.enlightenment.org/api/imlib2/html/imlib2_8c.html#17817446139a645cc017e9f79124e5a2
I'm an idiot.
This is merely a bug.
I forgot to include how I calculate x,y above.
Basically I was using
int x = i % w;
int y = i / h;
in the for loop, which is wrong. It should be
int x = i % w;
int y = i / w;
Can't believe I made this stupid mistake.