Create a divx-encoded avi from frames using opencv - c++

This question is similar to this one and particularly this one, but my desired output is different. I'm trying to capture the desktop to video using OpenCV. The preferred output is an avi file with divx encoding. I'm new to OpenCV and bitmap programming in general.
As a first step, to make sure the divx codec is present, I create a single frame (cv::Mat) of a solid color (yellow) and write that 100 times to the video file, as shown here:
int main(int argc, char* argv[])
{
cv::Mat frame(1200, 1920, CV_8UC3, cv::Scalar(0, 50000, 50000));
cv::VideoWriter* videoWriter = new cv::VideoWriter(
"C:/videos/desktop.avi",
CV_FOURCC('D','I','V','3'),
5, cv::Size(1920, 1200), true);
int frameCount = 0;
while (frameCount < 100)
{
videoWriter->write(frame);
::Sleep(100);
frameCount++;
}
delete videoWriter;
return 0;
}
This works perfectly - the video file is created and can be played on my Win 10 machine with VLC, Windows Media Player or the Films&TV app. It's 100 frames of solid yellow, but it shows the video is being created properly.
Next step: replace the dummy cv::Mat frame in the code above with a series of screenshots of the desktop. I get a handle to the desktop window using GetDesktopWindow(), and then use the function hwnd2mat (taken from this SO question - thanks!) to convert the bitmap obtained from the desktop handle to a cv::Mat that I can write to my video.
I copied the hwnd2mat function verbatim except I don't scale the image - the desktop bitmap is already 1920x1200, and also the cv::Mat I create is CV_8UC3 instead of CV_8UC4 (CV_8UC4 causes my app to crash).
Here's the code, including a reprint of hwnd2mat:
int main(int argc, char* argv[])
{
cv::VideoWriter* videoWriter = new cv::VideoWriter(
"C:/videos/desktop.avi",
CV_FOURCC('D','I','V','3'),
5, Size(1920, 1200), true);
int frameCount = 0;
while (frameCount < 100)
{
HWND hDsktopWindow = ::GetDesktopWindow();
cv::Mat frame = hwnd2mat(hDsktopWindow);
videoWriter->write(frame);
::Sleep(100);
frameCount++;
}
delete videoWriter;
return 0;
}
cv::Mat hwnd2mat(HWND hwnd)
{
HDC hwindowDC, hwindowCompatibleDC;
int height, width, srcheight, srcwidth;
HBITMAP hbwindow;
cv::Mat src;
BITMAPINFOHEADER bi;
hwindowDC = GetDC(hwnd);
hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);
RECT windowsize; // get the height and width of the screen
GetClientRect(hwnd, &windowsize);
srcheight = windowsize.bottom;
srcwidth = windowsize.right;
height = windowsize.bottom / 1; //change this to whatever size you want to resize to
width = windowsize.right / 1;
src.create(height, width, CV_8UC3);
// create a bitmap
hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
bi.biSize = sizeof(BITMAPINFOHEADER);
bi.biWidth = width;
bi.biHeight = -height; //this is the line that makes it draw upside down or not
bi.biPlanes = 1;
bi.biBitCount = 32;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
// use the previously created device context with the bitmap
SelectObject(hwindowCompatibleDC, hbwindow);
// copy from the window device context to the bitmap device context
StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0,srcwidth, srcheight, SRCCOPY);
GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);
// avoid memory leak
DeleteObject(hbwindow); DeleteDC(hwindowCompatibleDC); ReleaseDC(hwnd,hwindowDC);
return src;
}
The result of this is that the video file is created and can be played without errors, but it's just solid grey. It seems like the bitmap of the desktop is not getting copied correctly into the cv::Mat frame. I've tried a zillion combinations of the values in the BITMAPINFOHEADER, but nothing works and I don't know what I'm doing, to be honest. I know OpenCV has conversion functions, but again, I don't really know what I'm trying to convert to or from.
Any help appreciated!

Figured out a way to make it work - I have no idea if this is the best way, so comments or alternative solutions are still welcome.
It seems that, at least with the 32-bit BITMAPINFOHEADER above, GetDIBits only works if the cv::Mat is 4-channel, i.e. CV_8UC4, like the original hwnd2mat code before I changed it. If it is not CV_8UC4, no data is copied (GetDIBits returns 0 scan lines copied), and that's why my avi was just gray. So the first change was to create the src cv::Mat as 4-channel:
src.create(height, width, CV_8UC4);
But for the divx-encoded avi file that I'm trying to create, the frames should be 3-channel (don't ask me why). I added a call to an opencv conversion function after calling GetDIBits(), as follows:
cv::Mat dst;
dst.create(height, width, CV_8UC3);
cv::cvtColor(src, dst, CV_RGBA2RGB);
And then I return dst from hwnd2mat instead of src. The call to cvtColor removes the alpha channel (the A in RGBA) and dst ends up with just the R,G,B channels.
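Putting those pieces together, the end of my hwnd2mat now looks roughly like this (just a sketch of the changes described above; everything else stays as in the original function):
src.create(height, width, CV_8UC4);   // GetDIBits needs a 4-channel (32-bit) buffer here
// ... CreateCompatibleBitmap, BITMAPINFOHEADER setup, StretchBlt and GetDIBits unchanged ...
cv::Mat dst;
cv::cvtColor(src, dst, CV_RGBA2RGB);  // drop the alpha channel; dst comes out as CV_8UC3
DeleteObject(hbwindow); DeleteDC(hwindowCompatibleDC); ReleaseDC(hwnd, hwindowDC);
return dst;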

You can get a bitmap with no alpha channel from GetDIBits and write it straight to cv::VideoWriter. Just change biBitCount to 24 and leave the Mat format as CV_8UC3. This worked for me.
src.create(height, width, CV_8UC3);
bi.biBitCount = 24; // this is where to change

Related

Video recording of the window on OpenCV C++

I am trying to write a program for recording windows, but for some reason, after the program finishes, I get a corrupted .avi file.
I don't understand what the problem is. The hwnd2mat() and windowNames() functions work correctly, so the error is clearly not in them. The code looks massive, but in fact most of it is taken up by translating the image from the HWND to the Mat. It should also be noted that the resulting file after recording always has the same size, irrespective of the recording time.
#include <iostream>
#include "opencv2/highgui.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/videoio.hpp>
#include <Windows.h>
BOOL CALLBACK windowNames(HWND hwnd, LPARAM lParam) {
const DWORD TITLE_SIZE = 1024;
WCHAR windowTitle[TITLE_SIZE];
GetWindowTextW(hwnd, windowTitle, TITLE_SIZE);
int length = ::GetWindowTextLength(hwnd);
std::wstring title(&windowTitle[0]);
if (!IsWindowVisible(hwnd) || length == 0 || title == L"Program Manager") {
return TRUE;
}
// Retrieve the pointer passed into this callback, and re-'type' it.
// The only way for a C API to pass arbitrary data is by means of a void*.
std::vector<std::wstring>& titles = *reinterpret_cast<std::vector<std::wstring>*>(lParam);
titles.push_back(title);
return TRUE;
}
cv::Mat hwnd2mat(HWND hwnd)
{
HDC hwindowDC, hwindowCompatibleDC;
int height, width, srcheight, srcwidth;
HBITMAP hbwindow;
cv::Mat src;
BITMAPINFOHEADER bi;
HBITMAP bi2;
hwindowDC = GetDC(hwnd);
hwindowCompatibleDC = CreateCompatibleDC(hwindowDC);
SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);
RECT windowsize; // get the height and width of the screen
GetClientRect(hwnd, &windowsize);
srcheight = windowsize.bottom;
srcwidth = windowsize.right;
height = windowsize.bottom / 1; //change this to whatever size you want to resize to
width = windowsize.right / 1;
// create a bitmap
hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
bi.biSize = sizeof(BITMAPINFOHEADER); //http://msdn.microsoft.com/en-us/library/windows/window/dd183402%28v=vs.85%29.aspx
bi.biWidth = width;
bi.biHeight = -height; //this is the line that makes it draw upside down or not
bi.biPlanes = 1;
bi.biBitCount = 32;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 1;
bi.biYPelsPerMeter = 2;
bi.biClrUsed = 3;
bi.biClrImportant = 4;
// use the previously created device context with the bitmap
SelectObject(hwindowCompatibleDC, hbwindow);
// copy from the window device context to the bitmap device context
StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY); //change SRCCOPY to NOTSRCCOPY for wacky colors !
src.create(height, width, CV_8UC4);
GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS); //copy from hwindowCompatibleDC to hbwindow
// avoid memory leak
DeleteObject(hbwindow);
DeleteDC(hwindowCompatibleDC);
ReleaseDC(hwnd, hwindowDC);
return src;
}
int main(int argc, char** argv)
{
std::vector<std::wstring> titles; // we use std::wstring in place of std::string. This is necessary so that the entire character set can be represented.
EnumWindows(windowNames, reinterpret_cast<LPARAM>(&titles));
HWND hwndDesktop = GetDesktopWindow();
size_t number = 0;
int i = 0;
for (const auto& title : titles)
std::wcout << L"Title: " << i++ << title << std::endl;
std::cin >> number;
HWND hwndWindow = FindWindow(NULL, titles[number].c_str());
cv::namedWindow("output", cv::WINDOW_NORMAL);
cv::Mat src = hwnd2mat(/*hwndDesktop*/hwndWindow);
cv::VideoWriter outputVideo("output.avi", cv::VideoWriter::fourcc('M', 'J', 'P', 'G'), 1, cv::Size(src.cols, src.rows));
outputVideo.write(src);
int key = 0;
while (key != 27)
{
src = hwnd2mat(hwndWindow);
outputVideo.write(src);
cv::imshow("output", src);
key = cv::waitKey(60); //press ESC to end
}
return 0;
}
The problem lies in the fact that the Mat needs to be converted from BGRA to BGR before the frame is passed to the VideoWriter object.
For it to work correctly, you need to write
Mat bgrImg; cvtColor(src, bgrImg, COLOR_BGRA2BGR);
inside the loop before writing each frame, and pass bgrImg as the frame.
VideoWriter instances also need to be closed using the release() method, which finalizes the video container file.
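Putting both fixes together, a minimal sketch of the corrected recording loop (names as in the question's code) would be:
int key = 0;
cv::Mat src, bgrImg;
while (key != 27)
{
    src = hwnd2mat(hwndWindow);                    // CV_8UC4 frame straight from GetDIBits
    cv::cvtColor(src, bgrImg, cv::COLOR_BGRA2BGR); // VideoWriter expects 3-channel BGR frames
    outputVideo.write(bgrImg);
    cv::imshow("output", bgrImg);
    key = cv::waitKey(60);                         // press ESC to end
}
outputVideo.release();                             // finalizes the .avi container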
OpenCV has no support for modern screen capture AFAIK. You’ll need to use platform-specific means of doing this (maybe encapsulated in some library). The problem here is that the screen data is already in the GPU, and by using OpenCV you’re forcing it to be copied to the main memory and processed with a relatively slow CPU. It won’t perform well. Instead, a platform-specific approach will process the data on the GPU, using it to both extract the window’s frames and encode them. It’ll be very efficient both in terms of speed as well as energy consumption (you’ll vastly improve battery life while the capture is running, and will prevent the fans from being annoying on notebooks).

Capturing camera(USB) via D2XX library or OPENCV

I want to write an application (in C++) to capture images from a camera that is used in an acquisition system. The camera is connected to a box (the acquisition system), and I've found that the chip used is an FTDI chip. The chip sits in the box, between the camera and the PC. A USB cable connects the box to the PC. Some other tools are connected to the box, but they are not important here.
Moreover, there is a simple commercial application written in MFC, and I want to do exactly the same thing. In the application's folder there are D2XX driver files (ftd2xx.h, etc.) and an information file (*.inf) for the camera.
Also, the camera does not record video but takes photos at short intervals (<0.1 s), and the interval is determined by the acquisition system, not the commercial application (the acquisition system determines when the camera has to take a photo).
Here is my question:
Since the information file of the USB device is provided, could I just use the OpenCV library to capture from the camera, or do I have to use the D2XX library?
If I have to use the D2XX library to read the data, how could I convert the raw data to an image format (in Qt)?
I can't simply write an application and test it on the device over and over to find the solution, since the device is located far away and I have to travel there for every test. So I want to make sure that my application will work.
A company from China made the device for us and they won't support it any more :(
The camera uses a custom communications protocol; it doesn't implement an imaging device class. OpenCV won't see it, and neither will any other multimedia library. No matter what, you need to implement that protocol. You can then expose it to OpenCV if you wish.
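If you do have to read the frames through the D2XX library, wrapping the raw buffer in a cv::Mat is straightforward once you know the sensor's resolution and pixel format. A minimal sketch, assuming 8-bit grayscale frames of known width and height (the question doesn't confirm the actual format, so treat this only as an illustration):
#include <opencv2/core/core.hpp>

// Wrap a raw buffer (e.g. the bytes returned by FT_Read) in a cv::Mat.
// Assumes row-major layout, 1 byte per pixel; adjust the type for other pixel formats.
cv::Mat rawToMat(unsigned char* rawData, int width, int height)
{
    cv::Mat view(height, width, CV_8UC1, rawData); // no copy, just a view over the buffer
    return view.clone();                           // clone so the Mat owns its pixel data
}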
To convert it into an image, try this:
Mat hwnd2mat(HWND hwnd){
HDC hwindowDC, hwindowCompatibleDC;
int height, width, srcheight, srcwidth;
HBITMAP hbwindow; // <-- The image represented by hBitmap
cv::Mat src; // <-- The image represented by mat
BITMAPINFOHEADER bi;
// Initialize DCs
hwindowDC = GetDC(hwnd); // Get DC of the target capture..
hwindowCompatibleDC = CreateCompatibleDC(hwindowDC); // Create compatible DC
SetStretchBltMode(hwindowCompatibleDC, COLORONCOLOR);
RECT windowsize; // get the height and width of the screen
GetClientRect(hwnd, &windowsize);
srcheight = windowsize.bottom;
srcwidth = windowsize.right;
height = windowsize.bottom *2/ 2; //change this to whatever size you want to resize to
width = windowsize.right *2/ 2;
src.create(height, width, CV_8UC4);
// create a bitmap
hbwindow = CreateCompatibleBitmap(hwindowDC, width, height);
bi.biSize = sizeof(BITMAPINFOHEADER);
bi.biWidth = width;
bi.biHeight = -height; //this is the line that makes it draw upside down or not
bi.biPlanes = 1;
bi.biBitCount = 32;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
// use the previously created device context with the bitmap
SelectObject(hwindowCompatibleDC, hbwindow);
// copy from the window device context to the bitmap device context
StretchBlt(hwindowCompatibleDC, 0, 0, width, height, hwindowDC, 0, 0, srcwidth, srcheight, SRCCOPY); //change SRCCOPY to NOTSRCCOPY for wacky colors !
GetDIBits(hwindowCompatibleDC, hbwindow, 0, height, src.data, (BITMAPINFO *)&bi, DIB_RGB_COLORS); //copy from hwindowCompatibleDC to hbwindow
// avoid memory leak
DeleteObject(hbwindow); DeleteDC(hwindowCompatibleDC); ReleaseDC(hwnd, hwindowDC);
return src;
}

StretchBlt only works when nHeightDest is negative

I'm trying to use StretchBlt in order to copy pixels from a memory hdc to the window hdc.
The memory hdc gets the image from an invisible window which renders a stream using openGL.
Here's my code:
BITMAPINFOHEADER createBitmapHeader(int width, int height) {
BITMAPINFOHEADER header;
header.biSize = sizeof(BITMAPINFOHEADER);
header.biWidth = width;
header.biHeight = height;
header.biPlanes = 1;
header.biBitCount = 32;
header.biCompression = BI_RGB;
header.biSizeImage = 0;
header.biXPelsPerMeter = 0;
header.biYPelsPerMeter = 0;
header.biClrUsed = 0;
header.biClrImportant = 0;
return header;
}
...
HDC memoryHdc = CreateCompatibleDC(windowHdc);
BITMAPINFO bitmapInfo;
bitmapInfo.bmiHeader = createBitmapHeader(targetDimensions.width, targetDimensions.height);
HBITMAP bitmap = CreateDIBitmap(windowHdc, &bitmapInfo.bmiHeader, CBM_INIT, offscreenBuffer, &bitmapInfo, DIB_RGB_COLORS);
SelectObject(memoryHdc, bitmap);
DeleteObject(bitmap);
SetStretchBltMode(windowHdc, COLORONCOLOR);
StretchBlt(windowHdc,
targetDimensions.x, targetDimensions.y,
targetDimensions.width, -targetDimensions.height,
memoryHdc,
sourceDimensions.x, sourceDimensions.y,
sourceDimensions.width, sourceDimensions.height,
SRCCOPY);
DeleteDC(memoryHdc);
Where windowHdc is the HDC of the window that I want StretchBlt to copy the pixels to, and offscreenBuffer is a void* to the pixels copied from the offscreen window in which OpenGL is rendering.
This code works great, except that the image is upside down and I want it vertically flipped.
I know that this happens because:
If nHeightSrc and nHeightDest have different signs, the function
creates a mirror image of the bitmap along the y-axis
But when I remove the minus sign, so that both target and source heights have the same sign, I see no image in the window at all.
Just to check, I tried putting the minus on sourceDimensions.height instead, but that also results in no image, and the same happens if I negate the widths (both target and source).
Any idea why?
Thanks.
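One thing that might be worth trying, given that the other snippets in this thread flip their captures by negating biHeight: do the vertical flip at the DIB level instead of inside StretchBlt, i.e. declare the DIB as top-down when it is created and keep every StretchBlt dimension positive. This is only a sketch of the idea, not a verified fix for this particular case:
// Sketch: create the DIB as top-down by negating biHeight, then blit with positive heights.
bitmapInfo.bmiHeader = createBitmapHeader(targetDimensions.width, -targetDimensions.height);
HBITMAP bitmap = CreateDIBitmap(windowHdc, &bitmapInfo.bmiHeader, CBM_INIT,
                                offscreenBuffer, &bitmapInfo, DIB_RGB_COLORS);
// ...
StretchBlt(windowHdc,
           targetDimensions.x, targetDimensions.y,
           targetDimensions.width, targetDimensions.height,   // no negative height here
           memoryHdc,
           sourceDimensions.x, sourceDimensions.y,
           sourceDimensions.width, sourceDimensions.height,
           SRCCOPY);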

Template matching from a screenshot of a window

What I've done
I have a small template image which is meant to be used to find coordinates of matching subimages within a larger screenshot image. The screenshot itself is captured into a memory DC with the help of BitBlt, then converted into a cv::Mat via GetDIBits, like so:
HDC windowDc = GetWindowDC(hwndTarget);
HDC memDc = CreateCompatibleDC(windowDc);
// ...
HBITMAP hbmp = CreateCompatibleBitmap(windowDc, width, height);
SelectObject(memDc, hbmp);
BITMAPINFOHEADER bi =
{
sizeof(BITMAPINFOHEADER), // biSize
width, // biWidth
-height, // biHeight
1, // biPlanes
32, // biBitCount
BI_RGB, // biCompression
0, // biSizeImage
0, // biXPelsPerMeter
0, // biYPelsPerMeter
0, // biClrUsed
0 // biClrImportant
};
// ...
BitBlt(memDc, 0, 0, width, height, windowDc, offsetX, offsetY, SRCCOPY);
matScreen.create(height, width, CV_8UC4);
GetDIBits(memDc, hbmp, 0, (UINT)height, matScreen.data, (BITMAPINFO*)&bi, DIB_RGB_COLORS);
This appears to work fine, and a quick imshow("Source", matScreen) displays the image correctly.
However...
Since this screenshot has been created as a 32-bit RGB memory bitmap and placed inside a cv::Mat using the CV_8UC4 flag, it fails several assertions in OpenCV (and produces wacky results with several OpenCV methods). Most notably, matchTemplate always fails on the following line:
CV_Assert( (depth == CV_8U || depth == CV_32F) && type == _templ.type() && _img.dims() <= 2 );
I've tried matching up the depth flags of the screenshot and the template/result Mats. In the best case I am able to display all three images, but the template matching doesn't work, presumably because the source (screenshot) image is in color while the others are in grayscale. Other conversions either fail, display strange results and break the template matching, or simply raise an error, and changing the screenshot Mat to any other depth flag makes the image display incorrectly.
Question
What can I do to utilize OpenCV's template matching with a screenshot taken in C++? Is there some sort of manual conversion I should be doing to the screenshot image, or something?
Code:
Screenshot code taken from: github.com/acdx/opencv-screen-capture
My main code: codeshare.io/vLio1
I have included my code for finding a Template image from your Desktop image. Hope this solves your problem!
#include <fstream>
#include <memory>
#include <string>
#include <iostream>
#include <strstream>
#include <functional>
#include <Windows.h>
#include <iostream>
#include <string>
using namespace std;
using namespace cv;
Mat hwnd2mat(HWND hwnd){
HDC hwindowDC,hwindowCompatibleDC;
int height,width,srcheight,srcwidth;
HBITMAP hbwindow;
Mat src;
BITMAPINFOHEADER bi;
hwindowDC=GetDC(hwnd);
hwindowCompatibleDC=CreateCompatibleDC(hwindowDC);
SetStretchBltMode(hwindowCompatibleDC,COLORONCOLOR);
RECT windowsize; // get the height and width of the screen
GetClientRect(hwnd, &windowsize);
srcheight = windowsize.bottom;
srcwidth = windowsize.right;
height = windowsize.bottom; //change this to whatever size you want to resize to
width = windowsize.right;
src.create(height,width,CV_8UC4);
// create a bitmap
hbwindow = CreateCompatibleBitmap( hwindowDC, width, height);
bi.biSize = sizeof(BITMAPINFOHEADER); //http://msdn.microsoft.com/en-us/library/windows/window/dd183402%28v=vs.85%29.aspx
bi.biWidth = width;
bi.biHeight = -height; //this is the line that makes it draw upside down or not
bi.biPlanes = 1;
bi.biBitCount = 32;
bi.biCompression = BI_RGB;
bi.biSizeImage = 0;
bi.biXPelsPerMeter = 0;
bi.biYPelsPerMeter = 0;
bi.biClrUsed = 0;
bi.biClrImportant = 0;
// use the previously created device context with the bitmap
SelectObject(hwindowCompatibleDC, hbwindow);
// copy from the window device context to the bitmap device context
StretchBlt( hwindowCompatibleDC, 0,0, width, height, hwindowDC, 0, 0,srcwidth,srcheight, SRCCOPY); //change SRCCOPY to NOTSRCCOPY for wacky colors !
GetDIBits(hwindowCompatibleDC,hbwindow,0,height,src.data,(BITMAPINFO *)&bi,DIB_RGB_COLORS); //copy from hwindowCompatibleDC to hbwindow
// avoid memory leak
DeleteObject (hbwindow); DeleteDC(hwindowCompatibleDC); ReleaseDC(hwnd, hwindowDC);
return src;
}
bool NMultipleTemplateMatching(Mat mInput,Mat mTemplate,float Threshold,float Closeness,vector<Point2f> &List_Matches)
{
Mat mResult;
Size szTemplate= mTemplate.size();
Size szTemplateCloseRadius((szTemplate.width/2)* Closeness,(szTemplate.height/2)* Closeness);
matchTemplate(mInput, mTemplate, mResult, TM_CCOEFF_NORMED);
threshold(mResult, mResult, Threshold, 1.0, THRESH_TOZERO);
while (true)
{
double minval, maxval ;
Point minloc, maxloc;
minMaxLoc(mResult, &minval, &maxval, &minloc, &maxloc);
if (maxval >= Threshold)
{
List_Matches.push_back(maxloc);
rectangle(mResult,Point2f(maxloc.x-szTemplateCloseRadius.width,maxloc.y-szTemplateCloseRadius.height),Point2f(maxloc.x+szTemplateCloseRadius.width,maxloc.y+szTemplateCloseRadius.height),Scalar(0),-1);
}
else
break;
}
//imshow("reference", mDebug_Bgr);
return true;
}
int main(int argc, char** argv)
{
Mat mTemplate_Bgr,mTemplate_Gray;
mTemplate_Bgr= imread("Template.png",1);
imshow("mTemplate_Bgr",mTemplate_Bgr);
HWND hDesktopWnd;
hDesktopWnd=GetDesktopWindow();
Mat mScreenShot= hwnd2mat(hDesktopWnd);
Mat mSource_Gray,mResult_Bgr= mScreenShot.clone();
float Threshold= 0.9;
float Closeness= 0.9;
vector<Point2f> List_Matches;
cvtColor(mScreenShot,mSource_Gray,COLOR_BGR2GRAY);
cvtColor(mTemplate_Bgr,mTemplate_Gray,COLOR_BGR2GRAY);
namedWindow("Screen Shot",WINDOW_AUTOSIZE);
imshow("Screen Shot",mSource_Gray);
NMultipleTemplateMatching(mSource_Gray,mTemplate_Gray,Threshold,Closeness,List_Matches);
for (int i = 0; i < List_Matches.size(); i++)
{
rectangle(mResult_Bgr,List_Matches[i],Point(List_Matches[i].x + mTemplate_Bgr.cols, List_Matches[i].y + mTemplate_Bgr.rows),Scalar(0,255,0), 2);
}
imshow("Final Results",mResult_Bgr);
waitKey(0);
return 0;
}
To expand on Balaji's very useful answer, make sure that all the Mats being passed to the function are of the same type, which you can check with the .type() function. You can change src.create(height,width,CV_8UC4); to the same type as the template (or vice versa) and the error should be gone. Here's another version of the solution.
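As a rough illustration of that check (assuming the screenshot comes out of hwnd2mat as CV_8UC4 and the template was loaded as a 3-channel image; templBgr is a placeholder name for the loaded template):
cv::Mat screenBgr, screenGray, templGray, result;
cv::cvtColor(matScreen, screenBgr, cv::COLOR_BGRA2BGR);   // drop the alpha: CV_8UC4 -> CV_8UC3
cv::cvtColor(screenBgr, screenGray, cv::COLOR_BGR2GRAY);  // single-channel for matching
cv::cvtColor(templBgr, templGray, cv::COLOR_BGR2GRAY);
CV_Assert(screenGray.type() == templGray.type());         // both CV_8U, so the assert passes
cv::matchTemplate(screenGray, templGray, result, cv::TM_CCOEFF_NORMED);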

How to draw 32-bit alpha channel bitmaps?

I need to create a custom control to display bmp images with alpha channel. The background can be painted in different colors and the images have shadows so I need to truly "paint" the alpha channel.
Does anybody know how to do it?
I also want, if possible, to create a mask using the alpha channel information, to know whether the mouse has been clicked on the image or on the transparent area.
Any kind of help will be appreciated!
Thanks.
Edited (JDePedro): As some of you have suggested, I've been trying to use alpha blending to paint the bitmap with the alpha channel. This is just a test I've implemented where I load a 32-bit bitmap from resources and try to paint it using the AlphaBlend function:
void CAlphaDlg::OnPaint()
{
CClientDC dc(this);
CDC dcMem;
dcMem.CreateCompatibleDC(&dc);
CBitmap bitmap;
bitmap.LoadBitmap(IDB_BITMAP);
BITMAP BitMap;
bitmap.GetBitmap(&BitMap);
int nWidth = BitMap.bmWidth;
int nHeight = BitMap.bmHeight;
CBitmap *pOldBitmap = dcMem.SelectObject(&bitmap);
BLENDFUNCTION m_bf;
m_bf.BlendOp = AC_SRC_OVER;
m_bf.BlendFlags = 0;
m_bf.SourceConstantAlpha = 255;
m_bf.AlphaFormat = AC_SRC_ALPHA;
AlphaBlend(dc.GetSafeHdc(), 100, 100, nWidth, nHeight, dcMem.GetSafeHdc(), 0, 0,nWidth, nHeight,m_bf);
dcMem.SelectObject(pOldBitmap);
CDialog::OnPaint();
}
This is just a test so I put the code in the OnPaint of the dialog (I also tried the AlphaBlend function of the CDC object).
The non-transparent areas are being painted correctly but I get white where the bitmap should be transparent.
Any help???
This is a screenshot... it's not easy to see, but there is a white rectangle around the blue circle:
http://img385.imageshack.us/img385/7965/alphamh8.png
OK, I got it! I have to pre-multiply every pixel by the alpha value. Can someone suggest an optimized way to do that?
For future Google users, here is a working pre-multiply function. Note that this was taken from http://www.viksoe.dk/code/alphatut1.htm.
inline void PremultiplyBitmapAlpha(HDC hDC, HBITMAP hBmp)
{
BITMAP bm = { 0 };
GetObject(hBmp, sizeof(bm), &bm);
BITMAPINFO* bmi = (BITMAPINFO*) _alloca(sizeof(BITMAPINFOHEADER) + (256 * sizeof(RGBQUAD)));
::ZeroMemory(bmi, sizeof(BITMAPINFOHEADER) + (256 * sizeof(RGBQUAD)));
bmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
BOOL bRes = ::GetDIBits(hDC, hBmp, 0, bm.bmHeight, NULL, bmi, DIB_RGB_COLORS);
if( !bRes || bmi->bmiHeader.biBitCount != 32 ) return;
LPBYTE pBitData = (LPBYTE) ::LocalAlloc(LPTR, bm.bmWidth * bm.bmHeight * sizeof(DWORD));
if( pBitData == NULL ) return;
LPBYTE pData = pBitData;
::GetDIBits(hDC, hBmp, 0, bm.bmHeight, pData, bmi, DIB_RGB_COLORS);
for( int y = 0; y < bm.bmHeight; y++ ) {
for( int x = 0; x < bm.bmWidth; x++ ) {
pData[0] = (BYTE)((DWORD)pData[0] * pData[3] / 255);
pData[1] = (BYTE)((DWORD)pData[1] * pData[3] / 255);
pData[2] = (BYTE)((DWORD)pData[2] * pData[3] / 255);
pData += 4;
}
}
::SetDIBits(hDC, hBmp, 0, bm.bmHeight, pBitData, bmi, DIB_RGB_COLORS);
::LocalFree(pBitData);
}
So then your OnPaint becomes:
void MyButton::OnPaint()
{
CPaintDC dc(this);
CRect rect(0, 0, 16, 16);
static bool pmdone = false;
if (!pmdone) {
PremultiplyBitmapAlpha(dc, m_Image);
pmdone = true;
}
BLENDFUNCTION bf;
bf.BlendOp = AC_SRC_OVER;
bf.BlendFlags = 0;
bf.SourceConstantAlpha = 255;
bf.AlphaFormat = AC_SRC_ALPHA;
HDC src_dc = m_Image.GetDC();
::AlphaBlend(dc, rect.left, rect.top, 16, 16, src_dc, 0, 0, 16, 16, bf);
m_Image.ReleaseDC();
}
And the loading of the image (in the constructor of your control):
if ((HBITMAP)m_Image == NULL) {
m_Image.LoadFromResource(::AfxGetResourceHandle(), IDB_RESOURCE_OF_32_BPP_BITMAP);
}
The way I usually do this is via a DIBSection - a device independent bitmap that you can modify the pixels of directly. Unfortunately there isn't any MFC support for DIBSections: you have to use the Win32 function CreateDIBSection() to use it.
Start by loading the bitmap as 32-bit RGBA (that is, four bytes per pixel: one red, one green, one blue and one for the alpha channel). In the control, create a suitably sized DIBSection. Then, in the paint routine
Copy the bitmap data into the DIBSection's bitmap data, using the alpha channel byte to blend the bitmap image with the background colour.
Create a device context and select the DIBSection into it.
Use BitBlt() to copy from the new device context to the paint device context.
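In raw Win32 terms those steps look roughly like this (a sketch only; paintDC, width and height are placeholders, and the per-pixel blend against the background colour is left as a comment):
// Create a 32-bit top-down DIBSection whose pixels we can write to directly
BITMAPINFO bmi = { 0 };
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = width;
bmi.bmiHeader.biHeight = -height;            // negative = top-down, row 0 is the top
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void* pixels = NULL;
HBITMAP hDib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &pixels, NULL, 0);

// ... blend the 32-bit RGBA source into 'pixels' against the background colour here ...

HDC memDC = CreateCompatibleDC(paintDC);     // paintDC is the DC you are painting to
HGDIOBJ oldBmp = SelectObject(memDC, hDib);
BitBlt(paintDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);
SelectObject(memDC, oldBmp);
DeleteDC(memDC);
DeleteObject(hDib);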
You can create a mask given the raw bitmap data simply by looking at the alpha channel values - I'm not sure what you're asking here.
You need to do an alpha blend with your background color, then take out the alpha channel to paint it to the control.
The alpha channel should just be every 4th byte of your image. You can use that directly for your mask, or you can just copy every 4th byte to a new mask image.
Painting it is very easy with the AlphaBlend function.
As for your mask, you'll need to get the bits of the bitmap and examine the alpha channel byte for each pixel you're interested in.
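For the hit-test part, once you have a pointer to the 32-bit pixel data it boils down to reading that fourth byte. A small sketch (assumes a top-down DIB and the usual BGRA byte order):
// true if the pixel at (x, y) is opaque enough to count as a click on the image
bool HitTestAlpha(const BYTE* bits, int width, int x, int y, BYTE threshold)
{
    const BYTE alpha = bits[(y * width + x) * 4 + 3]; // every 4th byte is the alpha value
    return alpha > threshold;
}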
An optimised way to pre-multiply the RGB channels with the alpha channel is to set up a [256][256] array containing the calculated multiplication results. The first dimension is the alpha value, the second is the R/G/B value, the values in the array are the pre-multiplied values you need.
With this array set up correctly, you can calculate the value you need like this:
R = multiplicationLookup[alpha][R];
G = multiplicationLookup[alpha][G];
B = multiplicationLookup[alpha][B];
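Filling that table is a one-off cost (64 KB); a sketch of the setup described above:
// Precompute (alpha * value) / 255 for every possible pair of bytes
static BYTE multiplicationLookup[256][256];

void InitPremultiplyLookup()
{
    for (int alpha = 0; alpha < 256; ++alpha)
        for (int value = 0; value < 256; ++value)
            multiplicationLookup[alpha][value] = (BYTE)((alpha * value) / 255);
}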
You are on the right track, but need to fix two things.
First, use ::LoadImage(.. LR_CREATEDIBSECTION ..) instead of CBitmap::LoadBitmap. Second, you have to "pre-multiply" the RGB values of every pixel in the bitmap by their respective A value. This is a requirement of the AlphaBlend function; see the AlphaFormat description on this MSDN page.
The libpng library has working code that does the premultiplication of the DIB data.