How to efficiently render a 24-bpp image on a 32-bpp display? - c++

First of all, I'm programming in the kernel context, so no existing libraries are available. In fact, this code is going to go into a library of my own.
Two questions, one more important than the other:
As the title suggests, how can I efficiently render a 24-bpp image onto a 32-bpp device, assuming that I have the address of the frame buffer?
Currently I have this code:
void BitmapImage::Render24(uint16_t x, uint16_t y, void (*r)(uint16_t, uint16_t, uint32_t))
{
    uint32_t imght = Math::AbsoluteValue(this->DIB->GetBitmapHeight());
    uint64_t ptr = (uint64_t)this->ActualBMP + this->Header->BitmapArrayOffset;
    uint64_t rowsize = ((this->DIB->GetBitsPerPixel() * this->DIB->GetBitmapWidth() + 31) / 32) * 4;

    uint64_t oposx = x;
    uint64_t posx = oposx;
    uint64_t posy = y + (this->DIB->Type == InfoHeaderV1 && this->DIB->GetBitmapHeight() < 0 ? 0 : this->DIB->GetBitmapHeight());

    for (uint32_t d = 0; d < imght; d++)
    {
        for (uint32_t w = 0; w < rowsize / (this->DIB->GetBitsPerPixel() / 8); w++)
        {
            r(posx, posy, (*((uint32_t*)ptr) & 0xFFFFFF));
            ptr += this->DIB->GetBitsPerPixel() / 8;
            posx++;
        }
        posx = oposx;
        posy--;
    }
}
r is a function pointer to a PutPixel-esque thing that accepts x, y, and colour parameters.
Obviously this code is terribly slow, since plotting pixels one at a time is never a good idea.
For my 32-bpp rendering code (which I also have a question about, more on that later), I can easily Memory::Copy() the bitmap array (I'm loading bmp files here) to the frame buffer.
However, how do I do this with 24-bpp images? On a 24-bpp display this would be fine, but I'm working with a 32-bpp one.
One solution I can think of right now is to create another bitmap array which essentially contains values of 0x00(colour), and then use that to draw to the screen -- I don't think this is very good though, so I'm looking for a better alternative.
Next question:
2. Given that, for obvious reasons, one cannot simply Memory::Copy() the entire array at once onto the frame buffer, the next best thing would be to copy it row by row.
Is there a better way?

Basically something like this:
for (uint32_t l = 0; l < h; ++l) // l is the line index in pixels
{
    // srcPitch is the distance between lines in bytes;
    // trgPitch is the distance between lines in pixels.
    // Use unsigned char so the bytes don't sign-extend when shifted.
    unsigned char* srcLine = (unsigned char*)srcBuffer + l * srcPitch;
    unsigned* trgLine = ((unsigned*)trgBuffer) + l * trgPitch;
    for (uint32_t c = 0; c < w; ++c) // c is the column index in pixels
    {
        // Build the target pixel. Arrange the indexes to fit your render target (0, 1, 2).
        *trgLine++ = (srcLine[0] << 16) | (srcLine[1] << 8)
                   | srcLine[2] | (0xffu << 24);
        srcLine += 3;
    }
}
A few notes:
- it's better to write to a different buffer than the render buffer, so the image is displayed at once (see the sketch below for the row-by-row copy).
- using a function call per pixel, like you did, is very (very, very) slow.
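For the second question, here is a minimal sketch of the row-by-row copy, assuming Memory::Copy behaves like memcpy; framebuffer, fbPitch, srcPitch, imageWidth and imageHeight are hypothetical names, not from the question's code:
// Sketch only: copy a 32-bpp bitmap into the frame buffer one row at a
// time, honouring both line strides (in bytes).
uint8_t* dst = (uint8_t*)framebuffer + (uint64_t)y * fbPitch + (uint64_t)x * 4;
uint8_t* src = (uint8_t*)bitmapArray;
for (uint32_t row = 0; row < imageHeight; row++)
{
    Memory::Copy(dst, src, imageWidth * 4);  // one full row of pixels
    dst += fbPitch;   // next frame buffer line
    src += srcPitch;  // next bitmap line
}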

Related

UE4 capture frame using ID3D11Texture2D and convert to R8G8B8 bitmap

I'm working on a streaming prototype using UE4.
My goal here (in this post) is solely about capturing frames and saving one as a bitmap, just to visually ensure frames are correctly captured.
I'm currently capturing frames by converting the backbuffer to an ID3D11Texture2D and then mapping it.
Note: I tried the ReadSurfaceData approach in the render thread, but it performed poorly (FPS went down to 15, and I'd like to capture at 60 FPS), whereas the DirectX texture mapping from the backbuffer currently takes 1 to 3 milliseconds.
When debugging, I can see the D3D11_TEXTURE2D_DESC's format is DXGI_FORMAT_R10G10B10A2_UNORM, so red/green/blue are stored on 10 bits each, and alpha on 2 bits.
My questions:
How do I convert the texture's data (using the D3D11_MAPPED_SUBRESOURCE pData pointer) to R8G8B8(A8), that is, 8 bits per color (R8G8B8 without the alpha would also be fine for me here)?
Also, am I doing anything wrong when capturing the frame?
What I've tried:
All the following code is executed in a callback function registered to OnBackBufferReadyToPresent (code below).
void* NativeResource = BackBuffer->GetNativeResource();
if (NativeResource == nullptr)
{
    UE_LOG(LogTemp, Error, TEXT("Couldn't retrieve native resource"));
    return;
}
ID3D11Texture2D* BackBufferTexture = static_cast<ID3D11Texture2D*>(NativeResource);

D3D11_TEXTURE2D_DESC BackBufferTextureDesc;
BackBufferTexture->GetDesc(&BackBufferTextureDesc);

// Get the device context
ID3D11Device* d3dDevice;
BackBufferTexture->GetDevice(&d3dDevice);
ID3D11DeviceContext* d3dContext;
d3dDevice->GetImmediateContext(&d3dContext);

// Staging resource
ID3D11Texture2D* StagingTexture;
D3D11_TEXTURE2D_DESC StagingTextureDesc = BackBufferTextureDesc;
StagingTextureDesc.Usage = D3D11_USAGE_STAGING;
StagingTextureDesc.BindFlags = 0;
StagingTextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
StagingTextureDesc.MiscFlags = 0;

HRESULT hr = d3dDevice->CreateTexture2D(&StagingTextureDesc, nullptr, &StagingTexture);
if (FAILED(hr))
{
    UE_LOG(LogTemp, Error, TEXT("CreateTexture failed"));
}

// Copy the texture to the staging resource
d3dContext->CopyResource(StagingTexture, BackBufferTexture);

// Map the staging resource
D3D11_MAPPED_SUBRESOURCE mapInfo;
hr = d3dContext->Map(
    StagingTexture,
    0,
    D3D11_MAP_READ,
    0,
    &mapInfo);
if (FAILED(hr))
{
    UE_LOG(LogTemp, Error, TEXT("Map failed"));
}

// See https://dev.to/muiz6/c-how-to-write-a-bitmap-image-from-scratch-1k6m for the struct
// definitions & the initialization of bmpHeader and bmpInfoHeader.
// I didn't copy that code here to avoid overloading this post, as it's identical to the article's code.
// Just making the reassigned values clear below:
bmpHeader.sizeOfBitmapFile = 54 + StagingTextureDesc.Width * StagingTextureDesc.Height * 4;
bmpInfoHeader.width = StagingTextureDesc.Width;
bmpInfoHeader.height = StagingTextureDesc.Height;

std::ofstream fout("output.bmp", std::ios::binary);
fout.write((char*)&bmpHeader, 14);
fout.write((char*)&bmpInfoHeader, 40);
// TODO : convert to R8G8B8 (see below for my attempt at this)
fout.close();

// Unmap before releasing the staging texture
d3dContext->Unmap(StagingTexture, 0);
StagingTexture->Release();
d3dContext->Release();
d3dDevice->Release();
BackBufferTexture->Release();
(As mentioned in the code comments, I followed this article about the BMP headers for saving the bitmap to a file)
Texture data
One thing I'm concerned about is the data retrieved with this method.
I used a temporary array to check with the debugger what's inside.
// Just noted which width and height the texture had and hardcoded them here to allocate the right size
uint32_t data[1936 * 1056];
// Multiply by 4 as there are 4 bytes (32 bits) per pixel
memcpy(data, mapInfo.pData, StagingTextureDesc.Width * StagingTextureDesc.Height * 4);
Turns out the first 1935 uint32 values in this array all contain the same value: 3595933029. And after that, the same values are often seen hundreds of times in a row.
This makes me think the frame isn't captured as it should be, because the UE4 editor's window doesn't have the exact same color all along its first row (whether top or bottom).
R10G10B10A2 to R8G8B8(A8)
So I tried to guess how to convert from R10G10B10A2 to R8G8B8. I started from this value that appears 1935 times in a row at the beginning of the data buffer: 3595933029.
When I color pick an editor's window screenshot (using the Windows tool, which gets me an image with the exact same dimensions as the DirectX texture, that is 1936x1056), I get the following different colors:
R=56, G=57, B=52 (top left & bottom left)
R=0, G=0, B=0 (top right)
R=46, G=40, B=72 (bottom right - it overlaps the task bar, thus the color)
So I tried to manually convert the color to check if it matches any of those I color picked.
I thought about bit shifting to simply compare the values:
3595933029 (the value in the retrieved buffer) in binary: 11010110010101011001010101100101
I can already see the pattern: 11 followed three times by the 10-bit value 0101100101, and none of the picked colors follow this (except the black corner, which would be made only of zeros, though).
Anyway, assuming RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB AA order (ditched bits are marked with an x):
11010110xx01010110xx01010110xxxx
R=214, G=86, B=86: doesn't match
Assuming AA RRRRRRRRRR GGGGGGGGGG BBBBBBBBBB order:
xx01011001xx01011001xx01011001xx
R=89, G=89, B=89: doesn't match
If that can help, here's the editor window that should be captured (it really is a Third Person template; I didn't add anything to it except this capture code).
Here's the generated bitmap when shifting bits:
Code to generate the bitmap's pixel data:
struct Pixel {
    uint8_t blue = 0;
    uint8_t green = 0;
    uint8_t red = 0;
} pixel;

uint32_t* pointer = (uint32_t*)mapInfo.pData;
size_t numberOfPixels = bmpInfoHeader.width * bmpInfoHeader.height;
for (size_t i = 0; i < numberOfPixels; i++) {
    uint32_t value = *pointer;
    // Ditch the color's 2 last bits, keep the first 8
    pixel.blue = value >> 2;
    pixel.green = value >> 12;
    pixel.red = value >> 22;
    ++pointer;
    fout.write((char*)&pixel, 3);
}
It seems somewhat similar in the colors present; however, that doesn't look at all like the editor.
What am I missing?
First of all, you are assuming that mapInfo.RowPitch is exactly StagingTextureDesc.Width * 4. This is often not true. When copying to/from Direct3D resources, you need to do 'row-by-row' copies. Also, allocating roughly 8 MB (1936 x 1056 x 4 bytes) on the stack is not good practice.
#include <cstdint>
#include <memory>

// Assumes our staging texture is 4 bytes-per-pixel
// Allocate temporary memory
auto data = std::unique_ptr<uint32_t[]>(
    new uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]);
auto src = static_cast<uint8_t*>(mapInfo.pData);
uint32_t* dest = data.get();
for (UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
    // Copy one row: Width pixels of 4 bytes (32 bits) each
    memcpy(dest, src, StagingTextureDesc.Width * sizeof(uint32_t));
    src += mapInfo.RowPitch;
    dest += StagingTextureDesc.Width;
}
For C++11, using std::unique_ptr ensures the memory is eventually released automatically. You can transfer ownership of the memory to something else with uint32_t* ptr = data.release(). See cppreference.
With C++14, the better way to write the allocation is: auto data = std::make_unique<uint32_t[]>(StagingTextureDesc.Width * StagingTextureDesc.Height);. This assumes you are fine with a C++ exception being thrown for out-of-memory.
If you want to return an error code for out-of-memory instead of a C++ exception, use: auto data = std::unique_ptr<uint32_t[]>(new (std::nothrow) uint32_t[StagingTextureDesc.Width * StagingTextureDesc.Height]); if (!data) // return error
Converting 10:10:10:2 content to 8:8:8:8 content can be done efficiently on the CPU with bit-shifting.
The tricky bit is dealing with the up-scaling of the 2-bit alpha to 8-bits. For example, you want the Alpha of 11 to map to 255, not 192.
Here's a replacement for the loop above
// Assumes our staging texture is DXGI_FORMAT_R10G10B10A2_UNORM
for (UINT y = 0; y < StagingTextureDesc.Height; ++y)
{
    auto sptr = reinterpret_cast<uint32_t*>(src);
    for (UINT x = 0; x < StagingTextureDesc.Width; ++x)
    {
        uint32_t t = *(sptr++);
        uint32_t r = (t & 0x000003ff) >> 2;
        uint32_t g = (t & 0x000ffc00) >> 12;
        uint32_t b = (t & 0x3ff00000) >> 22;
        // Upscale alpha
        // 11xxxxxx -> 11111111 (255)
        // 10xxxxxx -> 10101010 (170)
        // 01xxxxxx -> 01010101 (85)
        // 00xxxxxx -> 00000000 (0)
        t &= 0xc0000000;
        uint32_t a = (t >> 24) | (t >> 26) | (t >> 28) | (t >> 30);
        // Convert to DXGI_FORMAT_R8G8B8A8_UNORM
        *(dest++) = r | (g << 8) | (b << 16) | (a << 24);
    }
    src += mapInfo.RowPitch;
}
Of course, we can combine the shifting operations, since the previous loop shifts values down and then back up. We do need to update the masks to remove the bits that would normally be shifted off by the full shifts. This replaces the inner body of the loop above:
// Convert from 10:10:10:2 to 8:8:8:8
uint32_t t = *(sptr++);
uint32_t r = (t & 0x000003fc) >> 2;
uint32_t g = (t & 0x000ff000) >> 4;
uint32_t b = (t & 0x3fc00000) >> 6;
t &= 0xc0000000;
uint32_t a = t | (t >> 2) | (t >> 4) | (t >> 6);
*(dest++) = r | g | b | a;
Any time you reduce the bit-depth you will introduce error. Techniques like ordered dithering and error-diffusion dithering are commonly used in pixel conversions of this nature. These introduce a bit of noise into the image to reduce the visual impact of the lost low bits.
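For instance, here is a minimal sketch of 4x4 ordered (Bayer) dithering applied while truncating one 10-bit channel to 8 bits; the matrix and the DitherTo8 helper are illustrative, not DirectXTex's actual code:
// Classic 4x4 Bayer threshold matrix (values 0..15).
static const uint32_t kBayer4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 }
};

// v10 is a 10-bit channel value (0..1023); x, y are pixel coordinates.
// Dropping 2 bits means a quantization step of 4, so the 0..15 Bayer
// value is scaled down to 0..3 before being added as a bias.
inline uint8_t DitherTo8(uint32_t v10, uint32_t x, uint32_t y)
{
    uint32_t biased = v10 + (kBayer4x4[y & 3][x & 3] >> 2);
    if (biased > 1023) biased = 1023;  // clamp to the 10-bit range
    return static_cast<uint8_t>(biased >> 2);
}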
For examples of conversions for all DXGI_FORMAT types, see DirectXTex which makes use of DirectXMath for all the various packed vector types. DirectXTex also implements both 4x4 ordered dithering and Floyd-Steinberg error-diffusion dithering when reducing bit-depth.

Convert FreeType GlyphSlot Bitmap To Vulkan BGRA

I'm trying to convert a FreeType GlyphSlot Bitmap to Vulkan BGRA format.
void DrawText(const std::string &text) {
    // WIDTH & HEIGHT == dst image dimensions
    FT_GlyphSlot Slot = face->glyph;
    buffer.resize(WIDTH * HEIGHT * 4);
    int dst_Pitch = WIDTH * 4;
    for (auto c : text) {
        FT_Error error = FT_Load_Char(face, c, FT_LOAD_RENDER);
        if (error) {
            printf("FreeType: Load Char Error\n");
            continue;
        }
        auto char_width = Slot->bitmap.width;
        auto char_height = Slot->bitmap.rows;
        uint8_t* src = Slot->bitmap.buffer;
        uint8_t* startOfLine = src;
        for (int y = 0; y < char_height; ++y) {
            src = startOfLine;
            for (int x = 0; x < char_width; ++x) {
                // y * dst_Pitch == destination image row
                // x * 4 == destination image column
                int dst = (y * dst_Pitch) + (x * 4);
                // Break if we have no more space to draw on our
                // destination texture.
                if (dst + 4 > buffer.size()) { break; }
                auto value = *src;
                src++;
                buffer[dst]     = 0xff;   // +0 == B
                buffer[dst + 1] = 0xff;   // +1 == G
                buffer[dst + 2] = 0xff;   // +2 == R
                buffer[dst + 3] = value;  // +3 == A
            }
            startOfLine += Slot->bitmap.pitch;
        }
    }
}
This is giving me garbled output. I'm not sure what I need to do to properly convert to Vulkan B8G8R8A8. I feel like moving from left to right in the buffer we write to our Vulkan texture is incorrect, and maybe Vulkan expects me to add the pixels into the buffer in a different way?
I understand this code will write each letter on top of one another; I will implement taking advantage of Slot->advance after I can properly draw at least a single letter.
One problem is that you resize buffer with every character (which will leave the previous data at the start of the newly allocated space), but when storing the data for the new character c you overwrite the start of the buffer, since dst starts at 0. You probably want to set dst to the buffer.size() from before the resize call.
int dst = /*previous buffer size*/;
The issue was due to the fact that I had VkImageCreateInfo tiling set to VK_IMAGE_TILING_OPTIMAL. After changing it to VK_IMAGE_TILING_LINEAR I received the correct output.
Taken straight from https://www.khronos.org/registry/vulkan/specs/1.1-extensions/man/html/VkImageTiling.html
VK_IMAGE_TILING_OPTIMAL specifies optimal tiling (texels are laid out
in an implementation-dependent arrangement, for more optimal memory
access).
VK_IMAGE_TILING_LINEAR specifies linear tiling (texels are laid out in
memory in row-major order, possibly with some padding on each row).
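For reference, the change amounts to one field in the image creation; this is a sketch, with the rest of the VkImageCreateInfo setup assumed/elided:
// Sketch of the fix: only the tiling field changes.
VkImageCreateInfo imageInfo{};
imageInfo.sType  = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.format = VK_FORMAT_B8G8R8A8_UNORM;
imageInfo.tiling = VK_IMAGE_TILING_LINEAR;  // was VK_IMAGE_TILING_OPTIMAL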
While I may not be rendering garbage now, my letters are still backwards, seemingly drawing from right to left instead of left to right.
You can see the green 'the' in the top right corner.

Weird but close fft and ifft of image in c++

I wrote a program that loads, saves, and performs the fft and ifft on black and white png images. After much debugging headache, I finally got some coherent output, only to find that it distorted the original image.
input:
fft:
ifft:
As far as I have tested, the pixel data in each array is stored and converted correctly. Pixels are stored in two arrays: 'data', which contains the b/w value of each pixel, and 'complex_data', which is twice as long as 'data' and stores the real (b/w) value and imaginary part of each pixel in alternating indices. My fft algorithm operates on an array structured like 'complex_data'. After code to read commands from the user, here's the code in question:
if (cmd == "fft")
{
    if (height > width) size = height;
    else size = width;
    N = (int)pow(2.0, ceil(log((double)size)/log(2.0)));

    // array to hold each row of the image for processing in FFT()
    temp_data = (double*) malloc(sizeof(double) * width * 2);
    for (i = 0; i < (int) height; i++)
    {
        for (j = 0; j < (int) width; j++)
        {
            temp_data[j*2]   = complex_data[(i*width*2)+(j*2)];
            temp_data[j*2+1] = complex_data[(i*width*2)+(j*2)+1];
        }
        FFT(temp_data, N, 1);
        for (j = 0; j < (int) width; j++)
        {
            complex_data[(i*width*2)+(j*2)]   = temp_data[j*2];
            complex_data[(i*width*2)+(j*2)+1] = temp_data[j*2+1];
        }
    }
    transpose(complex_data, width, height); // tested
    free(temp_data);

    temp_data = (double*) malloc(sizeof(double) * height * 2);
    for (i = 0; i < (int) width; i++)
    {
        for (j = 0; j < (int) height; j++)
        {
            temp_data[j*2]   = complex_data[(i*height*2)+(j*2)];
            temp_data[j*2+1] = complex_data[(i*height*2)+(j*2)+1];
        }
        FFT(temp_data, N, 1);
        for (j = 0; j < (int) height; j++)
        {
            complex_data[(i*height*2)+(j*2)]   = temp_data[j*2];
            complex_data[(i*height*2)+(j*2)+1] = temp_data[j*2+1];
        }
    }
    transpose(complex_data, height, width);
    free(temp_data);

    free(data);
    data = complex_to_real(complex_data, image.size()/4); // tested
    image = bw_data_to_vector(data, image.size()/4);      // tested
    cout << "*** fft success ***" << endl << endl;
}
// f_or_b is 1 for fft, -1 for ifft
void FFT(double* data, unsigned long nn, int f_or_b)
{
    unsigned long n, mmax, m, j, istep, i;
    double wtemp, w_real, wp_real, wp_imaginary, w_imaginary, theta;
    double temp_real, temp_imaginary;

    // reverse-binary reindexing to separate even and odd indices
    // and to allow us to compute the FFT in place
    n = nn<<1;
    j = 1;
    for (i = 1; i < n; i += 2) {
        if (j > i) {
            swap(data[j-1], data[i-1]);
            swap(data[j], data[i]);
        }
        m = nn;
        while (m >= 2 && j > m) {
            j -= m;
            m >>= 1;
        }
        j += m;
    }

    // here begins the Danielson-Lanczos section
    mmax = 2;
    while (n > mmax) {
        istep = mmax<<1;
        theta = f_or_b * (2 * M_PI/mmax);
        wtemp = sin(0.5 * theta);
        wp_real = -2.0 * wtemp * wtemp;
        wp_imaginary = sin(theta);
        w_real = 1.0;
        w_imaginary = 0.0;
        for (m = 1; m < mmax; m += 2) {
            for (i = m; i <= n; i += istep) {
                j = i + mmax;
                temp_real = w_real * data[j-1] - w_imaginary * data[j];
                temp_imaginary = w_real * data[j] + w_imaginary * data[j-1];
                data[j-1] = data[i-1] - temp_real;
                data[j] = data[i] - temp_imaginary;
                data[i-1] += temp_real;
                data[i] += temp_imaginary;
            }
            wtemp = w_real;
            w_real += w_real * wp_real - w_imaginary * wp_imaginary;
            w_imaginary += w_imaginary * wp_real + wtemp * wp_imaginary;
        }
        mmax = istep;
    }
}
My ifft is the same, only with f_or_b set to -1 instead of 1. My program calls FFT() on each row, transposes the image, calls FFT() on each row again, then transposes back. Is there maybe an error in my indexing?
Not an actual answer, as this question is debug-only, so some hints instead:
Your results are really bad; they should look like this:
- the first line is the actual DFFT result
- Re, Im, Power is amplified by a constant, otherwise you would see a black image
- the last image is the IDFFT of the original, non-amplified Re, Im result
- the second line is the same, but the DFFT result is wrapped by half the image size in both x and y to match the common results in most DIP/CV texts
- as you can see, if you IDFFT back the wrapped results, the result is not correct (checkerboard mask)
You have just a single image as the DFFT result:
- is it the power spectrum?
- or did you forget to include the imaginary part? For viewing only, or perhaps also in the computation somewhere?
Is your 1D DFFT working?
- for real data the result should be symmetric
- check the links from my comment and compare the results for some sample 1D array
- debug/repair your 1D FFT first, and only then move to the next level
- do not forget to test real and complex data ...
Your IDFFT looks BW (no gray) saturated:
- so did you amplify the DFFT results to see the image, and use that for the IDFFT instead of the original DFFT result?
- also check that you do not round to integers somewhere along the computation
Beware of (I)DFFT overflows/underflows:
- if your image pixel intensities are big, and the resolution of the image too, then your computation could lose precision. I never saw this with images, but if your image is HDR then it is possible. This is a common problem with convolution computed by DFFT for big polynomials.
Thank you everyone for your opinions. All that stuff about memory corruption, while it makes a point, is not the root of the problem. The sizes of the data I'm mallocing are not overly large, and I am freeing them in the right places. I got a lot of practice with this while learning C. The problem was not the fft algorithm either, nor even my 2D implementation of it.
All I missed was the scaling by 1/(M*N) at the very end of my ifft code. Because the image is 512x512, I needed to scale my ifft output by 1/(512*512). Also, my fft looks like white noise because the pixel data was not rescaled to fit between 0 and 255.
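In code terms, the missing step is just a final scaling pass over the complex array; a sketch using the names from the question:
// Scale every real/imaginary component by 1/(M*N) after the inverse
// transform; 'complex_data', 'width' and 'height' are from the question.
double scale = 1.0 / ((double)width * (double)height);
for (unsigned long k = 0; k < (unsigned long)width * height * 2; k++)
    complex_data[k] *= scale;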
Suggest you look at the article http://www.yolinux.com/TUTORIALS/C++MemoryCorruptionAndMemoryLeaks.html
Christophe has a good point, but he is wrong about it not being related to the problem: using malloc()/free() instead of new/delete does not initialise memory or select the best data type, which can result in the problems listed below.
Possible causes are:
- The sign of a number changing somewhere. I have seen similar issues when a platform invoke has been used on a DLL and a value is passed by value instead of by reference. It is caused by memory not necessarily being empty, so when your image data enters, it will have boolean maths performed on its values. I would suggest that you make sure memory is empty before you put your image data there.
- Memory rotating right (ROR in assembly language) or left (ROL). This will occur if data types are used which do not match, e.g. a signed value entering an unsigned data type, or if the number of bits differs from one variable to another.
- Data being lost due to an unsigned value entering a signed variable. The outcome is 1 bit being lost, because it will be used to determine negative or positive; at the extremes, if two's complement takes place, the number will become inverted in meaning (look up two's complement on Wikipedia).
Also see how memory should be cleared/assigned before use: http://www.cprogramming.com/tutorial/memory_debugging_parallel_inspector.html

Can someone explain how I am to access this array? (image processing program)

I am working on the implementation of functions for an already-written image processing program. I am given explanations of the functions, but I'm not sure how they designate pixels of the image.
In this case, I need to flip the image horizontally, i.e., rotate it 180 degrees around the vertical axis.
Is this what makes the "image" I am to flip?
void Image::createImage(int width_x, int height_y)
{
    width = width_x;
    height = height_y;
    if (pixelData != NULL)
        freePixelData();
    if (width <= 0 || height <= 0) {
        return;
    }
    pixelData = new Color* [width];  // array of Color*
    for (int x = 0; x < width; x++) {
        pixelData[x] = new Color [height];  // this is the 2nd dimension of pixelData
    }
}
I do not know if all the functions I have written are correct.
Also, the Image class calls on a Color class.
So, to re-ask: what am I "flipping" here?
The prototype for the function is:
void flipLeftRight();
As there is no input to the function, and I am told it modifies pixelData, how do I flip left to right?
A quick in-place flip. Untested, but the idea is there.
#include <cstdint>

// Operates on one byte per pixel (e.g. a grayscale image).
void flipHorizontal(uint8_t *image, uint32_t width, uint32_t height)
{
    for (uint32_t i = 0; i < height; i++)
    {
        for (uint32_t j = 0; j < width / 2; j++)
        {
            uint32_t sourceIndex = i * width + j;
            uint32_t destIndex = (i + 1) * width - j - 1;
            // XOR swap; safe here because sourceIndex != destIndex
            image[sourceIndex] ^= image[destIndex];
            image[destIndex] ^= image[sourceIndex];
            image[sourceIndex] ^= image[destIndex];
        }
    }
}
Well, the simplest approach would be to read it one row at a time into a temporary buffer the same size as one row.
Then you could use something like std::reverse on the temporary buffer and write it back.
You could also do it in place, but this is the simplest approach.
EDIT: what I've described is a mirror, not a flip; to flip you also need to reverse the order of the rows. Nothing too bad: to do that I would create a buffer the same size as the image, copy the image, and then write it back with the coordinates adjusted, something like y = height - y - 1 and x = width - x - 1.
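Tying this back to the question's layout: since createImage() allocates pixelData[x] as a whole column, a left-right flip can simply swap column pointers. A minimal sketch, assuming width and pixelData are the members shown above:
#include <algorithm>

void Image::flipLeftRight()
{
    // Mirror around the vertical axis by swapping whole columns.
    for (int x = 0; x < width / 2; x++)
        std::swap(pixelData[x], pixelData[width - 1 - x]);
}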

exchanging 2 memory positions

I am working with OpenCV and Qt. OpenCV uses BGR while Qt uses RGB, so I have to swap those 2 bytes for very big images.
Is there a better way of doing the following?
I can't think of anything faster, but it looks so simple and lame...
int width = iplImage->width;
int height = iplImage->height;
uchar *iplImagePtr = (uchar *) iplImage->imageData;
uchar buf;
int limit = height * width;

for (int y = 0; y < limit; ++y) {
    buf = iplImagePtr[2];
    iplImagePtr[2] = iplImagePtr[0];
    iplImagePtr[0] = buf;
    iplImagePtr += 3;
}

QImage img((uchar *) iplImage->imageData, width, height,
           QImage::Format_RGB888);
We are currently dealing with this issue in a Qt application. We've found that the Intel Performance Primitives are the fastest way to do this. They have extremely optimized code. In the HTML help files at the Intel ippiSwapChannels documentation they have an example of exactly what you are looking for.
There are a couple of downsides:
- the size of the library, but you can statically link just the library routines you need;
- running on AMD CPUs. Intel libs run VERY slowly by default on AMD; check out www.agner.org/optimize/asmlib.zip for details on how to work around this.
I think this looks absolutely fine. That the code is simple is not something negative. If you want to make it shorter you could use std::swap:
std::swap(iplImagePtr[0], iplImagePtr[2]);
You could also do the following:
uchar* end = iplImagePtr + height * width * 3;
for ( ; iplImagePtr != end; iplImagePtr += 3) {
    std::swap(iplImagePtr[0], iplImagePtr[2]);
}
There's cvConvertImage to do the whole thing in one line, but I doubt it's any faster either.
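If memory serves, that one-liner is something like the following; CV_CVTIMG_SWAP_RB is the old OpenCV C-API flag, so treat this as a sketch:
// Swap the R and B channels in place via the legacy C API.
cvConvertImage(iplImage, iplImage, CV_CVTIMG_SWAP_RB);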
Couldn't you use one of the following methods?
void QImage::invertPixels(InvertMode mode = InvertRgb)
or
QImage QImage::rgbSwapped() const
Hope this helps a bit!
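For example, reusing the QImage constructed in the question (a sketch; rgbSwapped() returns a copy with the R and B channels exchanged):
QImage img((uchar *) iplImage->imageData, width, height, QImage::Format_RGB888);
QImage rgb = img.rgbSwapped();  // R and B exchanged, original untouched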
I would be inclined to do something like the following, working on the basis of the RGB data being in three-byte blocks.
int i = 0;
int limit = width * height * 3;  // 3 bytes per pixel
while (i != limit)
{
    buf = iplImagePtr[i];                 // blue colour byte
    iplImagePtr[i] = iplImagePtr[i + 2];  // move the red colour byte into the blue slot
    iplImagePtr[i + 2] = buf;             // save the blue colour byte into what was the red slot
    i += 3;
}
I doubt it is any 'faster', but at the end of the day, you just have to go through the entire image, pixel by pixel.
You could always do this:
int width = iplImage->width;
int height = iplImage->height;
uchar *start = (uchar *) iplImage->imageData;
uchar *end = start + width * height * 3;  // 3 bytes per pixel

for (uchar *p = start; p < end; p += 3)
{
    uchar buf = *p;
    *p = *(p+2);
    *(p+2) = buf;
}
but a decent compiler would do this anyway.
Your biggest overhead in these sorts of operations is going to be memory bandwidth.
If you're using Windows then you can probably do this conversion using the BitBlt and two appropriately set up DIBs. If you're really lucky then this could be done in the graphics hardware.
I hate to ruin anyone's day, but if you don't want to go the IPP route (see photo_tom's answer) or pull in an optimized library, you might get better performance from the following (modifying Andreas' answer):
uchar *iplImagePtr = (uchar *) iplImage->imageData;
size_t limit = height * width;
for (size_t y = 0; y < limit; ++y) {
    std::swap(iplImagePtr[y * 3], iplImagePtr[y * 3 + 2]);
}
Now hold on, folks, I hear you yelling "but all those extra multiplies and adds!" The thing is, this form of the loop is far easier for a compiler to optimize, especially if it gets smart enough to multithread this sort of algorithm, because each pass through the loop is independent of those before or after it. In the other form, the value of iplImagePtr depended on the value in the previous pass. In this form, it is constant throughout the whole loop; only y changes, and that is in a very, very common "count from 0 to N-1" loop construct, so it's easier for an optimizer to digest.
Or maybe it doesn't make a difference these days because optimizers are insanely smart (are they?). I wonder what a benchmark would say...
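A minimal sketch of such a benchmark, using a synthetic buffer and std::chrono; all names and sizes here are made up for illustration:
#include <chrono>
#include <cstddef>
#include <cstdio>
#include <utility>
#include <vector>

int main()
{
    const size_t width = 4096, height = 4096;
    std::vector<unsigned char> img(width * height * 3, 127);  // fake BGR data

    auto t0 = std::chrono::steady_clock::now();
    for (size_t y = 0; y < width * height; ++y)
        std::swap(img[y * 3], img[y * 3 + 2]);  // the indexed loop form
    auto t1 = std::chrono::steady_clock::now();

    std::printf("indexed form: %lld us\n", (long long)
        std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count());
    return 0;
}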
P.S. If you actually benchmark this, I'd also like to see how well the following performs:
uchar *iplImagePtr = (uchar *) iplImage->imageData;
size_t limit = height * width;
for (size_t y = 0; y < limit; ++y) {
    uchar *pixel = iplImagePtr + y * 3;
    std::swap(pixel[0], pixel[2]);
}
Again, pixel is defined in the loop to limit its scope and keep the optimizer from thinking there's a cycle-to-cycle dependency. If the compiler increments and decrements the stack pointer each time through the loop to "create" and "destroy" pixel, well, it's stupid and I'll apologize for wasting your time.
Or just let OpenCV do the whole swap in one call:
cvCvtColor(iplImage, iplImage, CV_BGR2RGB);