WICConvertBitmapSource + CopyPixels results in blue image - C++

I'm trying to use WIC to load an image into an in-memory buffer for further processing then write it back to a file when done. Specifically:
Load the image into an IWICBitmapFrameDecode.
The loaded IWICBitmapFrameDecode reports that its pixel format is GUID_WICPixelFormat24bppBGR. I want to work in 32bpp RGBA, so I call WICConvertBitmapSource.
Call CopyPixels on the converted frame to get a memory buffer.
Write the memory buffer back into an IWICBitmapFrameEncode using WritePixels.
This results in a recognizable image, but the resulting image is mostly bluish, as if the red channel is being interpreted as blue.
If I call WriteSource to write the converted frame directly, instead of writing the memory buffer, it works. If I call CopyPixels from the original unconverted frame (and update my stride and pixel formats accordingly), it works. It's only the combination of WICConvertBitmapSource plus the use of a memory buffer (CopyPixels + WritePixels) that causes the problem, but I can't figure out what I'm doing wrong.
Here's my code.
int main() {
    IWICImagingFactory *pFactory;
    IWICBitmapDecoder *pDecoder = NULL;
    CoInitializeEx(NULL, COINIT_MULTITHREADED);
    CoCreateInstance(
        CLSID_WICImagingFactory,
        NULL,
        CLSCTX_INPROC_SERVER,
        IID_IWICImagingFactory,
        (LPVOID*)&pFactory
    );

    // Load the image.
    pFactory->CreateDecoderFromFilename(L"input.png", NULL, GENERIC_READ, WICDecodeMetadataCacheOnDemand, &pDecoder);
    IWICBitmapFrameDecode *pFrame = NULL;
    pDecoder->GetFrame(0, &pFrame);
    // pFrame->GetPixelFormat shows that the image is 24bpp BGR.

    // Convert to 32bpp RGBA for easier processing.
    IWICBitmapSource *pConvertedFrame = NULL;
    WICConvertBitmapSource(GUID_WICPixelFormat32bppRGBA, pFrame, &pConvertedFrame);

    // Copy the 32bpp RGBA image to a buffer for further processing.
    UINT width, height;
    pConvertedFrame->GetSize(&width, &height);
    const unsigned bytesPerPixel = 4;
    const unsigned stride = width * bytesPerPixel;
    const unsigned bitmapSize = width * height * bytesPerPixel;
    BYTE *buffer = new BYTE[bitmapSize];
    pConvertedFrame->CopyPixels(nullptr, stride, bitmapSize, buffer);

    // Insert image buffer processing here. (Not currently implemented.)

    // Create an encoder to turn the buffer back into an image file.
    IWICBitmapEncoder *pEncoder = NULL;
    pFactory->CreateEncoder(GUID_ContainerFormatPng, nullptr, &pEncoder);
    IStream *pStream = NULL;
    SHCreateStreamOnFileEx(L"output.png", STGM_WRITE | STGM_CREATE, FILE_ATTRIBUTE_NORMAL, true, NULL, &pStream);
    pEncoder->Initialize(pStream, WICBitmapEncoderNoCache);
    IWICBitmapFrameEncode *pFrameEncode = NULL;
    pEncoder->CreateNewFrame(&pFrameEncode, NULL);
    pFrameEncode->Initialize(NULL);
    WICPixelFormatGUID pixelFormat = GUID_WICPixelFormat32bppRGBA;
    pFrameEncode->SetPixelFormat(&pixelFormat);
    pFrameEncode->SetSize(width, height);
    pFrameEncode->WritePixels(height, stride, bitmapSize, buffer);
    pFrameEncode->Commit();
    pEncoder->Commit();
    pStream->Commit(STGC_DEFAULT);
    return 0;
}

The PNG encoder only supports GUID_WICPixelFormat32bppBGRA for 32bpp, as specified in the PNG native codec documentation. When you call it with GUID_WICPixelFormat32bppRGBA, it will not do any channel swapping: the encoder will simply treat your pixels as if they were BGRA rather than RGBA, and it will not tell you there's a problem.
I don't know what you're ultimately trying to do, but in your example you could just replace GUID_WICPixelFormat32bppRGBA with GUID_WICPixelFormat32bppBGRA in the call to WICConvertBitmapSource (and also in the definition of the pixelFormat variable at the end, to keep the source consistent, although that part doesn't change the output).
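For illustration, here is the relevant part of the question's pipeline with the pixel format switched to BGRA as suggested above (a sketch, not a tested drop-in):
IWICBitmapSource *pConvertedFrame = NULL;
WICConvertBitmapSource(GUID_WICPixelFormat32bppBGRA, pFrame, &pConvertedFrame);
// ... CopyPixels into the buffer, process it, then on the encode side:
WICPixelFormatGUID pixelFormat = GUID_WICPixelFormat32bppBGRA;
pFrameEncode->SetPixelFormat(&pixelFormat);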
PS: you can use WIC itself to save files; there is no need to create the stream with another API. See my answer here: Capture screen using DirectX.
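For example, a minimal sketch of creating the output stream through the WIC factory (assuming pFactory and pEncoder from the question; error checks omitted):
IWICStream *pWicStream = NULL;
pFactory->CreateStream(&pWicStream);
pWicStream->InitializeFromFilename(L"output.png", GENERIC_WRITE);
pEncoder->Initialize(pWicStream, WICBitmapEncoderNoCache);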

Related

Playback speed of video encoded with IMFSinkWriter changes based on width

I'm making a screen recorder (without audio) using Win32's Sink Writer to encode a series of bitmaps into an MP4 file.
For some reason, the video playback speed increases (seemingly) proportionally with the video width.
From this post, I've gathered that it's most likely because I'm calculating the buffer size incorrectly. The difference here is that their playback issue was fixed once the audio buffer size calculation was correct, but since I don't encode any audio at all, I'm not sure what to take from it.
I've also tried to read about how the buffer works, but I'm really at a loss as to how exactly the buffer size causes different playback speeds.
Here is a pastebin of the entirety of the code; I really can't track the problem down any further than the buffer size and/or the frame index/duration.
i.e.:
Depending on the width of the member variable m_width (measured in pixels), the playback speed changes. That is, the higher the width, the faster the video plays, and vice versa.
Here are two video examples:
3840x1080 and 640x1080, notice the system clock.
Imgur does not retain the original resolution of the files, but I double-checked before uploading, and the program does indeed create files of the claimed resolutions.
rtStart and rtDuration are defined as such, and are both private members of the MP4File class.
LONGLONG rtStart = 0;
UINT64 rtDuration;
MFFrameRateToAverageTimePerFrame(m_FPS, 1, &rtDuration);
This is where rtStart is updated and the individual bits of the bitmap are passed to the frame writer.
(I moved the LPVOID object to a private member to hopefully improve performance; now there's no need for a heap allocation every time a frame is appended.)
HRESULT MP4File::AppendFrame(HBITMAP frame)
{
    HRESULT hr = NULL;
    if (m_isInitialFrame)
    {
        hr = InitializeMovieCreation();
        if (FAILED(hr))
            return hr;
        m_isInitialFrame = false;
    }
    if (m_hHeap && m_lpBitsBuffer) // Makes sure the buffer is initialized
    {
        BITMAPINFO bmpInfo;
        bmpInfo.bmiHeader.biBitCount = 0;
        bmpInfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

        // Get the individual bits from the bitmap and load them into the buffer used by `WriteFrame`.
        GetDIBits(m_hDC, frame, 0, 0, NULL, &bmpInfo, DIB_RGB_COLORS);
        bmpInfo.bmiHeader.biCompression = BI_RGB;
        GetDIBits(m_hDC, frame, 0, bmpInfo.bmiHeader.biHeight, m_lpBitsBuffer, &bmpInfo, DIB_RGB_COLORS);

        hr = WriteFrame();
        if (SUCCEEDED(hr))
        {
            rtStart += rtDuration;
        }
    }
    return m_writeFrameResult = hr;
}
And lastly, the frame writer which actually loads the bits into the buffer, and then writes to the Sink Writer.
HRESULT MP4File::WriteFrame()
{
    IMFSample *pSample = NULL;
    IMFMediaBuffer *pBuffer = NULL;
    const LONG cbWidth = 4 * m_width;
    const DWORD cbBufferSize = cbWidth * m_height;
    BYTE *pData = NULL;

    // Create a new memory buffer.
    HRESULT hr = MFCreateMemoryBuffer(cbBufferSize, &pBuffer);

    // Lock the buffer and copy the video frame to the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pBuffer->Lock(&pData, NULL, NULL);
    }
    if (SUCCEEDED(hr))
    {
        hr = MFCopyImage(
            pData,                  // Destination buffer.
            cbWidth,                // Destination stride.
            (BYTE*)m_lpBitsBuffer,  // First row in source image.
            cbWidth,                // Source stride.
            cbWidth,                // Image width in bytes.
            m_height                // Image height in pixels.
        );
    }
    if (pBuffer)
    {
        pBuffer->Unlock();
    }

    // Set the data length of the buffer.
    if (SUCCEEDED(hr))
    {
        hr = pBuffer->SetCurrentLength(cbBufferSize);
    }

    // Create a media sample and add the buffer to the sample.
    if (SUCCEEDED(hr))
    {
        hr = MFCreateSample(&pSample);
    }
    if (SUCCEEDED(hr))
    {
        hr = pSample->AddBuffer(pBuffer);
    }

    // Set the time stamp and the duration.
    if (SUCCEEDED(hr))
    {
        hr = pSample->SetSampleTime(rtStart);
    }
    if (SUCCEEDED(hr))
    {
        hr = pSample->SetSampleDuration(rtDuration);
    }

    // Send the sample to the Sink Writer and update the timestamp.
    if (SUCCEEDED(hr))
    {
        hr = m_pSinkWriter->WriteSample(m_streamIndex, pSample);
    }

    SafeRelease(&pSample);
    SafeRelease(&pBuffer);
    return hr;
}
A couple details about the encoding:
Framerate: 30FPS
Bitrate: 15 000 000
Output encoding format: H264 (MP4)
To me, this behavior makes sense.
See https://github.com/mofo7777/Stackoverflow/tree/master/ScreenCaptureEncode
My program uses DirectX9 instead of GetDIBits, but the behaviour is the same. Try this program with different screen resolutions to confirm this behaviour.
And I confirm: with my program, the video playback speed increases proportionally with the video width (and also with the video height).
Why?
More data to copy means more time spent per frame, and the sample time/sample duration end up wrong.
At 30 FPS, you are supposed to produce one frame every 33.3333333 ms:
Do GetDIBits, MFCopyImage and WriteSample finish in exactly 33.3333333 ms? No.
Do you write each frame at exactly 33.3333333 ms intervals? No.
So just doing rtStart += rtDuration is wrong, because you don't capture and write the screen at exactly that time. And GetDIBits/DirectX9 are not able to keep up with 30 FPS, trust me. That is why Microsoft provided Windows Desktop Duplication (Windows 8/10 only).
The key is latency.
Do you know how long GetDIBits, MFCopyImage and WriteSample take? You should, in order to understand the problem. Usually it takes more than 33.3333333 ms, and it is variable.
You must know it to give the encoder a realistic FPS, and you also need to call WriteSample at the right time.
If you use MF_MT_FRAME_RATE with 5-10 FPS instead of 30 FPS, you will see it is more realistic, but still not optimal.
For example, use an IMFPresentationClock to determine the correct WriteSample time.
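As a rough sketch of the idea (not the IMFPresentationClock approach, just QueryPerformanceCounter; m_qpcStart and m_qpcFrequency are hypothetical members initialized when recording starts):
// Stamp each sample with the time it was actually captured instead of rtStart += rtDuration.
LARGE_INTEGER now;
QueryPerformanceCounter(&now);
// Convert elapsed QPC ticks to 100-nanosecond units, which IMFSample expects.
const LONGLONG rtNow = (now.QuadPart - m_qpcStart.QuadPart) * 10000000LL / m_qpcFrequency.QuadPart;
hr = pSample->SetSampleTime(rtStart);
if (SUCCEEDED(hr))
    hr = pSample->SetSampleDuration(rtNow - rtStart); // real elapsed time since the previous frame
rtStart = rtNow;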

Rotating a CImage and preserving its alpha/transparency channel

I have some existing code that uses a CImage which has an alpha channel, and I need to rotate it.
I have found the following suggestion which converts the CImage to a GDI+ Bitmap and then rotates it, and the rotated result ends up back in the CImage.
Bitmap* gdiPlusBitmap=Bitmap::FromHBITMAP(atlBitmap.Detach());
gdiPlusBitmap->RotateFlip(Rotate90FlipNone);
HBITMAP hbmp;
gdiPlusBitmap->GetHBITMAP(Color::White, &hbmp);
atlBitmap.Attach(hbmp);
Apparently it works without actually copying the bitmap bytes, which is great, but the problem is that if you create a Bitmap object from an HBITMAP it throws away the alpha channel.
Apparently to preserve the alpha channel you must instead create the Bitmap using the constructor
Bitmap(
[in] INT width,
[in] INT height,
[in] INT stride,
[in] PixelFormat format,
[in] BYTE *scan0
);
So I'm trying to adapt the above to use this constructor, but the interaction between CImage and Bitmap is a bit confusing. I think I need to create the Bitmap like this
Bitmap* gdiPlusBitmap = new Bitmap(
    pCImage->GetWidth(),
    pCImage->GetHeight(),
    pCImage->GetPitch(),
    PixelFormat32bppARGB,
    (BYTE *)pCImage->GetBits());
nGDIStatus = gdiPlusBitmap->RotateFlip(Rotate90FlipNone);
but I'm not sure how to make the CImage pick up the changes (so that I end up with the original CImage rotated), or where to delete the Bitmap object.
Does anyone know the correct way to do this, preserving the alpha channel ?
Ideally I'd like to avoid copying the bitmap data, but it's not mandatory.
You can use Gdiplus::Graphics to draw the bitmap onto a CImage.
Note that hard-coding PixelFormat32bppARGB can cause problems if the image doesn't have an alpha channel, so I added a basic error check.
CImage image;
if (S_OK != image.Load(L"c:\\test\\test.png"))
{
    AfxMessageBox(L"can't open");
    return 0;
}
int bpp = image.GetBPP();

// Get the pixel format:
HBITMAP hbmp = image.Detach();
Gdiplus::Bitmap* bmpTemp = Gdiplus::Bitmap::FromHBITMAP(hbmp, 0);
Gdiplus::PixelFormat pixel_format = bmpTemp->GetPixelFormat();
if (bpp == 32)
    pixel_format = PixelFormat32bppARGB;
image.Attach(hbmp);

// Rotate:
Gdiplus::Bitmap bmp(image.GetWidth(), image.GetHeight(), image.GetPitch(), pixel_format, static_cast<BYTE*>(image.GetBits()));
bmp.RotateFlip(Gdiplus::Rotate90FlipNone);

// Convert back to the image:
image.Destroy();
if (image.Create(bmp.GetWidth(), bmp.GetHeight(), 32, CImage::createAlphaChannel))
{
    Gdiplus::Bitmap dst(image.GetWidth(), image.GetHeight(), image.GetPitch(), PixelFormat32bppARGB, static_cast<BYTE*>(image.GetBits()));
    Gdiplus::Graphics graphics(&dst);
    graphics.DrawImage(&bmp, 0, 0);
}

WIC Direct2D CreateBitmapFromMemory: limitations on width and height?

CreateBitmapFromMemory executes successfully when _nWidth is 644 or less.
If the value exceeds 644, the HRESULT is -2003292276.
Do limits exist on the width and height?
#include <d2d1.h>
#include <d2d1helper.h>
#include <wincodecsdk.h> // Use this for WIC Direct2D functions

void test()
{
    IWICImagingFactory *m_pIWICFactory;
    ID2D1Factory *m_pD2DFactory;
    IWICBitmap *m_pEmbeddedBitmap;
    ID2D1Bitmap *m_pD2DBitmap;
    unsigned char *pImageBuffer = new unsigned char[1024*1024];
    HRESULT hr = S_OK;
    int _nHeight = 300;
    int _nWidth = 644;
    // If _nWidth exceeds 644, CreateBitmapFromMemory returns an error.
    //_nWidth = 648;

    if (m_pIWICFactory == 0)
    {
        hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED | COINIT_DISABLE_OLE1DDE);
        // Create WIC factory
        hr = CoCreateInstance(
            CLSID_WICImagingFactory,
            NULL,
            CLSCTX_INPROC_SERVER,
            IID_PPV_ARGS(&m_pIWICFactory)
        );
        if (SUCCEEDED(hr))
        {
            // Create D2D factory
            hr = D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &m_pD2DFactory);
        }
    }
    hr = m_pIWICFactory->CreateBitmapFromMemory(
        _nHeight,                     // height
        _nWidth,                      // width
        GUID_WICPixelFormat24bppRGB,  // pixel format of the NEW bitmap
        _nWidth*3,                    // calculated from width and bpp information
        1024*1024,                    // height x width
        pImageBuffer,                 // name of the .c array
        &m_pEmbeddedBitmap            // pointer to pointer to whatever an IWICBitmap is.
    );
    if (!SUCCEEDED(hr)) {
        const char *buffer = "Error in CreateBitmapFromMemory\n";
    }
}
The error code is 0x88982F8C, WINCODEC_ERR_INSUFFICIENTBUFFER, and the reason should now be obvious.
The first parameter is width and the second is height; you have them in the wrong order. All in all, you provide incorrect arguments, resulting in a buffer that is too small for the bitmap you describe.
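Following that diagnosis, the corrected call would look roughly like this (a sketch; the buffer must be at least stride * height bytes):
hr = m_pIWICFactory->CreateBitmapFromMemory(
    _nWidth,                      // width in pixels (first parameter)
    _nHeight,                     // height in pixels (second parameter)
    GUID_WICPixelFormat24bppRGB,  // pixel format of the new bitmap
    _nWidth * 3,                  // stride in bytes
    _nWidth * 3 * _nHeight,       // buffer size: stride * height
    pImageBuffer,
    &m_pEmbeddedBitmap
);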
Are you sure you passed the correct pixel format to CreateBitmapFromMemory? You hard-code it to GUID_WICPixelFormat24bppRGB, and I think this is the root cause. You should make sure this format is the same as the format of the source bitmap you are copying the data from; try using the GetPixelFormat function to get the correct format instead of hard-coding it.
There is an upper limit on the dimensions of images on the GPU.
Call GetMaximumBitmapSize on the render target.
http://msdn.microsoft.com/query/dev11.query?appId=Dev11IDEF1&l=EN-US&k=k(GetMaximumBitmapSize);k(DevLang-C%2B%2B);k(TargetOS-Windows)&rd=true
What you get back is the maximum size, in pixels, of either the width or the height.
For larger images you'd have to load them into a software render target, such as a bitmap render target, and then render what you want from that.
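For example (a sketch, assuming pRenderTarget is an existing ID2D1RenderTarget):
UINT32 maxSize = pRenderTarget->GetMaximumBitmapSize(); // largest width or height, in pixels, the target supports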

Win32 C/C++ Load Image from memory buffer

I want to load an image (.bmp) file in a Win32 application, but I do not want to use the standard LoadBitmap/LoadImage from the Windows API: I want to load it from a buffer that is already in memory. I can easily load a bitmap directly from a file and print it on the screen, but this is where I'm stuck.
What I'm looking for is a function that works like this:
HBITMAP LoadBitmapFromBuffer(char* buffer, int width, int height);
Try CreateBitmap():
HBITMAP LoadBitmapFromBuffer(char *buffer, int width, int height)
{
    return CreateBitmap(width, height, 1, 24, buffer);
}
Never mind, I found my solution! Here's the initialization code:
std::ifstream is;
is.open("Image.bmp", std::ios::binary);
is.seekg (0, std::ios::end);
length = is.tellg();
is.seekg (0, std::ios::beg);
pBuffer = new char [length];
is.read (pBuffer,length);
is.close();
tagBITMAPFILEHEADER bfh = *(tagBITMAPFILEHEADER*)pBuffer;
tagBITMAPINFOHEADER bih = *(tagBITMAPINFOHEADER*)(pBuffer+sizeof(tagBITMAPFILEHEADER));
RGBQUAD rgb = *(RGBQUAD*)(pBuffer+sizeof(tagBITMAPFILEHEADER)+sizeof(tagBITMAPINFOHEADER));
BITMAPINFO bi;
bi.bmiColors[0] = rgb;
bi.bmiHeader = bih;
char* pPixels = (pBuffer+bfh.bfOffBits);
char* ppvBits;
hBitmap = CreateDIBSection(NULL, &bi, DIB_RGB_COLORS, (void**) &ppvBits, NULL, 0);
SetDIBits(NULL, hBitmap, 0, bih.biHeight, pPixels, &bi, DIB_RGB_COLORS);
GetObject(hBitmap, sizeof(BITMAP), &cBitmap);
CreateDIBSection can be a little complicated to use, but one of the things it can do is create a device-independent bitmap and give you a pointer to the buffer for the bitmap bits. Granted, you already have a buffer full of bitmap bits, but at least you could copy the data.
Speculating a bit: CreateDIBSection can also create bitmaps backed by file-mapping objects, and there's probably a way to get Windows to give you a file-mapping object representing a chunk of memory, which might trick CreateDIBSection into building a bitmap directly from your buffer.
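As a rough illustration of the CreateDIBSection route (a sketch assuming a tightly packed, top-down 24bpp source buffer; error handling omitted):
HBITMAP LoadBitmapFromBuffer(const char *buffer, int width, int height)
{
    BITMAPINFO bi = {};
    bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bi.bmiHeader.biWidth = width;
    bi.bmiHeader.biHeight = -height; // negative height = top-down rows
    bi.bmiHeader.biPlanes = 1;
    bi.bmiHeader.biBitCount = 24;
    bi.bmiHeader.biCompression = BI_RGB;

    void *bits = NULL;
    HBITMAP hbmp = CreateDIBSection(NULL, &bi, DIB_RGB_COLORS, &bits, NULL, 0);
    if (hbmp && bits)
    {
        const int srcStride = width * 3;
        const int dstStride = (srcStride + 3) & ~3; // DIB scanlines are padded to 4 bytes
        for (int y = 0; y < height; ++y)
            memcpy((BYTE*)bits + y * dstStride, buffer + y * srcStride, srcStride);
    }
    return hbmp;
}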
No, but you can create a new bitmap the size of the current one in memory, and write your memory structure onto it.
You're looking for the CreateBitmap function. Set lpvBits to your data.

Display 32bit bitmap - Palette

I have image data in a buffer (type long) from a scanner, and it is 32-bit.
For example, buffer[0]'s corresponding pixel value is 952, which is [184, 3, 0, 0] <- [R, G, B, A].
I want to display/paint/draw it on the screen. I got confused when I tried to read about displaying bitmaps: I looked at Win32 functions, the CBitmap class, Windows Forms (picture box), etc., and I find it hard to understand the general idea/approach for displaying this buffer data in an application window.
I have constructed the BITMAPFILEHEADER and BITMAPINFOHEADER, and I have the pixel data in a buffer, (unsigned char *)vInBuff, whose size is vImageSz:
//construct the BMP file Header
vBmfh.bfType = 19778;
vBmfh.bfSize = 54+vImageSz;//size of the whole image
vBmfh.bfReserved2 = 0;
vBmfh.bfReserved1 = 0;
vBmfh.bfOffBits = 54;//offset from where the pixel data can be found
//Construct the BMP info header
vBmih.biSize = 40;//size of header from this point
vBmih.biWidth = 1004;
vBmih.biHeight = 1002;
vBmih.biPlanes = 1;
vBmih.biCompression = BI_RGB;
vBmih.biSizeImage = vBmih.biWidth*vBmih.biHeight*4;
vBmih.biBitCount = 32;
vBmih.biClrUsed = 0;
vBmih.biClrUsed = 0;
1. What should I do next to display this?
2. What should I use to display the 32-bit bitmap? I see people using CreateWindow functions, Windows Forms, MFC, etc.
3. I also understand that BitBlt, CreateDIBSection, OnPaint, etc. are involved. I am confused by these various functions and coding platforms; please suggest a simple approach.
4. How can I create a palette to display a 32-bit image?
Thanks
Raj
EDIT: Trying to implement Dave's approach; can somebody comment on my implementation? I couldn't continue to the BitBlt because I don't have two HDCs, and I don't know how to get the second one. Any help please.
void DisplayDataToImageOnScreen(unsigned char* vInBuff, int vImageSz) // buffer with pixel data, size of pixel data
{
    unsigned char* vImageBuff = NULL;
    HDC hdcMem = CreateCompatibleDC(NULL);
    HBITMAP hBitmap = CreateDIBSection(hdcMem,
                                       (BITMAPINFO*)&vBmih,
                                       DIB_RGB_COLORS,
                                       (void **)&vImageBuff,
                                       NULL, 0);
    GetDIBits(hdcMem,
              hBitmap,
              0,
              1,
              (void**)&vImageBuff,
              (BITMAPINFO*)&vBmih,
              DIB_RGB_COLORS);
    memcpy(vImageBuff, vInBuff, vImageSz);
}
An alternative if you just want to plot it on screen is to use TinyPTC ( http://sourceforge.net/projects/tinyptc/files/ ). It's just 3 functions and very simple if you just want to plot some pixels.
EDIT: It seems that http://www.pixeltoaster.com is a continuation of TinyPTC, and is probably preferred.
If you have the image's bytes already in a buffer you can use:
a CBitmap object (MFC) and the method CBitmap::CreateBitmapIndirect
or
the Win32 routine CreateBitmapIndirect.
Now you can use BitBlt to draw it on a DC. To get a window DC, use GetDC.
There is no need to create a palette for 32-bit images.
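For example, a minimal sketch of that approach for a 32bpp buffer (pixels, width, height and hwnd are placeholders; error handling omitted):
BITMAP bm = {};
bm.bmWidth = width;
bm.bmHeight = height;
bm.bmWidthBytes = width * 4;
bm.bmPlanes = 1;
bm.bmBitsPixel = 32;
bm.bmBits = pixels;                       // your scanner buffer
HBITMAP hbmp = CreateBitmapIndirect(&bm); // copies the bits into a new bitmap

HDC hdcWindow = GetDC(hwnd);
HDC hdcMem = CreateCompatibleDC(hdcWindow);
HGDIOBJ oldBmp = SelectObject(hdcMem, hbmp);
BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);
SelectObject(hdcMem, oldBmp);
DeleteDC(hdcMem);
ReleaseDC(hwnd, hdcWindow);
DeleteObject(hbmp);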
Here's a simplified approach you can try, broken down into steps:
BITMAPINFO bitmapinfo = { 0 };
bitmapinfo.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bitmapinfo.bmiHeader.biWidth = 1004;
bitmapinfo.bmiHeader.biHeight = -1002;   // negative height = top-down rows
bitmapinfo.bmiHeader.biPlanes = 1;
bitmapinfo.bmiHeader.biBitCount = 32;    // 32bpp, to match the scanner data
bitmapinfo.bmiHeader.biCompression = BI_RGB;
HBITMAP hBitmap = CreateDIBSection(NULL,
                                   &bitmapinfo,
                                   DIB_RGB_COLORS,
                                   (void **)&vImageBuff,
                                   NULL,
                                   0);
Now party on vImageBuff, and cache hBitmap somewhere so that in your wndproc's WM_PAINT handler you can:
Select hBitmap into a temporary compatible HDC.
Call BitBlt(..., SRCCOPY) from the compatible HDC to the window's HDC; the other parameters should be obvious. Don't try to stretch or do anything fancy at first.
Remember to select the original dummy bitmap back into the temp HDC before destroying it (see the sketch below).
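A sketch of those three steps inside the window procedure (hBitmap, bitmapWidth and bitmapHeight are assumed to be stored by the application):
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    HDC hdcMem = CreateCompatibleDC(hdc);
    HGDIOBJ oldBmp = SelectObject(hdcMem, hBitmap); // select the DIB section
    BitBlt(hdc, 0, 0, bitmapWidth, bitmapHeight, hdcMem, 0, 0, SRCCOPY);
    SelectObject(hdcMem, oldBmp);                   // restore before destroying the DC
    DeleteDC(hdcMem);
    EndPaint(hwnd, &ps);
    break;
}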
If you aren't seeing results, try looping through vImageBuff and setting every pixel to RGB(255, 0, 0) or something like that, just to sanity-check the rest of the logic.
If nothing is drawing, make sure that the alpha component of each pixel is 255.
If you're getting a garbled image, you need to double-check the pixel format, stride, etc.
Here's a strategy you might like:
Create a bitmap with the same size as your scanned data, and the same format (use CreateDIBSection).
Use GetDIBits to get the base address of the pixel data.
Copy your data (from the scanner) to the address GetDIBits returns.
Now render your bitmap! (use BitBlt, or somesuch).
Regarding palettes: a 32-bit image does not, generally, have an explicit palette; you'd need 16.7 million entries (assuming 8 bits of alpha). Generally the pixel layout is assumed to be 8 bits red, 8 bits green, 8 bits blue (plus alpha), as you've described above.