I'm working with Kinect SDK 1.5 and have run into a problem when trying to track multiple faces.
In my code I first call
myFaceTracker->DetectFaces(&sensor, NULL, candidates, &nums)
to get the currently detected faces, and compare their positions with the faces that are already being tracked, roughly as sketched below.
Faces that are not currently being tracked are added to a queue so they can be picked up for tracking in the next step.
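The comparison step is essentially a rectangle-overlap test (a simplified sketch; OverlapsTrackedFace, candidateRects and trackedRects are hypothetical names standing in for what my real code does with the DetectFaces results):
#include <windows.h>
#include <queue>
#include <vector>

// Sketch: decide which detected faces are new and queue them for tracking.
static bool OverlapsTrackedFace(const RECT& candidate, const std::vector<RECT>& trackedRects)
{
    RECT overlap;
    for (size_t i = 0; i < trackedRects.size(); ++i)
    {
        if (IntersectRect(&overlap, &candidate, &trackedRects[i]))
            return true;  // overlaps a face we already track
    }
    return false;
}

static void QueueNewFaces(const std::vector<RECT>& candidateRects,
                          const std::vector<RECT>& trackedRects,
                          std::queue<RECT>& queue_face)
{
    for (size_t i = 0; i < candidateRects.size(); ++i)
    {
        if (!OverlapsTrackedFace(candidateRects[i], trackedRects))
            queue_face.push(candidateRects[i]);  // not tracked yet -> track it in the next step
    }
}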
But when I try to track them with the StartTracking function, it doesn't give the expected result.
for example:
//Time for us to track those not tracked
RECT rect = queue_face.front();
//Make the region larger than the original
int length_vertical = rect.bottom - rect.top;
int length_horizontal = rect.right - rect.left;
rect.top = rect.top - length_vertical * 2;
rect.bottom = rect.bottom + length_vertical * 2;
rect.left = rect.left - length_horizontal * 2;
rect.right = rect.right + length_horizontal * 2;
//Use OpenCV to check that the region is right
CvPoint LT = cvPoint(rect.left, rect.top);
CvPoint LB = cvPoint(rect.left, rect.bottom);
CvPoint RT = cvPoint(rect.right, rect.top);
CvPoint RB = cvPoint(rect.right, rect.bottom);
cvLine(_img, LT, LB, cvScalar(0, 255, 0), 2);
cvLine(_img, LB, RB, cvScalar(0, 255, 0), 2);
cvLine(_img, RB, RT, cvScalar(0, 255, 0), 2);
cvLine(_img, RT, LT, cvScalar(0, 255, 0), 2);
//Here is where the problem shows up
result = facetracker->StartTracking(&sensor, &rect, NULL, faceresult);
//Check Result
is_track = SUCCEEDED(result) && SUCCEEDED(faceresult->GetStatus());
queue_face.pop();
....
The code above gives me a totally useless result, even though the status says it succeeded.
If we use
faceresult->GetFaceRect(rect);
it gives us a rect with (0, 0, 0, 0)!
But if we change
result = facetracker->StartTracking(&sensor, &rect, NULL, faceresult);
to
result = facetracker->StartTracking(&sensor, NULL, NULL, faceresult);
Now it gives us something useful, but this way it's nearly impossible to control which face we want to track.
I also tried setting the input rect to (0, 0, width-1, height-1), but that was even worse: now it says it can't track anything.
Could anyone help me?
Thanks a lot.
Is there any way to resize a Win32 listbox to fit its content (the minimum size that will show all its content, not needing a scrollbar), whenever its items change?
Thanks!
Edit: I need to resize both the width and the height of the listbox.
You didn't specify whether you wanted horizontal as well as vertical sizing, but I'm going to assume not. Basically, you need to get the number of items and the item height and multiply them, then add on the space for the control borders (unless the control is borderless; you may need to play around with this):
void AutosizeListBox(HWND hWndLB)
{
int iItemHeight = SendMessage(hWndLB, LB_GETITEMHEIGHT, 0, 0);
int iItemCount = SendMessage(hWndLB, LB_GETCOUNT, 0, 0);
// calculate new desired client size
RECT rc;
GetClientRect(hWndLB, &rc);
rc.bottom = rc.top + iItemHeight * iItemCount;
// grow for borders
rc.right += GetSystemMetrics(SM_CXEDGE) * 2;
rc.bottom += GetSystemMetrics(SM_CYEDGE) * 2;
// resize
SetWindowPos(hWndLB, 0, 0, 0, rc.right, rc.bottom, SWP_NOMOVE | SWP_NOZORDER | SWP_NOACTIVATE);
}
If you want horizontal sizing as well you would need to select the right font into a DC, and loop through all the items to calculate the maximum text length using GetTextExtentPoint32.
EDIT: Added a version that calculates horizontal size as well.
void AutosizeListBox(HWND hWndLB)
{
int iItemHeight = SendMessage(hWndLB, LB_GETITEMHEIGHT, 0, 0);
int iItemCount = SendMessage(hWndLB, LB_GETCOUNT, 0, 0);
// get a DC and set up the font
HDC hDC = GetDC(hWndLB);
HGDIOBJ hOldFont = SelectObject(hDC, (HGDIOBJ)SendMessage(hWndLB, WM_GETFONT, 0, 0));
// calculate width of largest string
int iItemWidth = 0;
for (int i = 0; i < iItemCount; i++)
{
int iLen = SendMessage(hWndLB, LB_GETTEXTLEN, i, 0);
TCHAR* pBuf = new TCHAR[iLen + 1];
SendMessage(hWndLB, LB_GETTEXT, i, (LPARAM)pBuf);
SIZE sz;
GetTextExtentPoint32(hDC, pBuf, iLen, &sz);
if (iItemWidth < sz.cx) iItemWidth = sz.cx;
delete[] pBuf;
}
SelectObject(hDC, hOldFont);
ReleaseDC(hWndLB, hDC);
// calculate new desired client size
RECT rc;
SetRect(&rc, 0, 0, iItemWidth, iItemHeight * iItemCount);
// grow for borders
rc.right += GetSystemMetrics(SM_CXEDGE) * 2;
rc.bottom += GetSystemMetrics(SM_CYEDGE) * 2;
// resize
SetWindowPos(hWndLB, 0, 0, 0, rc.right, rc.bottom, SWP_NOMOVE | SWP_NOZORDER | SWP_NOACTIVATE);
}
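As for the "whenever its items change" part: the listbox doesn't notify its parent when items are added or removed programmatically, so the simplest approach is to call the function yourself right after you modify the contents. A minimal sketch (AddItemAndResize is just a hypothetical wrapper; adjust it to however you populate the list):
// Hypothetical wrapper: add a string, then immediately resize the listbox to fit.
void AddItemAndResize(HWND hWndLB, const TCHAR* text)
{
    SendMessage(hWndLB, LB_ADDSTRING, 0, (LPARAM)text);
    AutosizeListBox(hWndLB); // function from above
}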
I've made a screensaver that simply scrolls user-defined text from right to left, automatically jumping back to the right if it exceeds the left boundary.
It works flawlessly with multiple monitors, barring one exception: if the 'Main Display' is on the right (i.e. Monitor #2 is primary), then I do not get the scrolling text; however, the monitor IS blacked out by the code. If the main display is #1, there's no problem.
I've been poring over the code for hours and cannot identify at what stage the issue arises; I can confirm the text is in the right position (I inserted logging code that verifies its current position), but it's as if one of the API calls simply erases it. I've read the documentation for them and all looks ok.
I create a custom DC in WM_CREATE via:
if (( hDC = CreateDC(TEXT("DISPLAY"), NULL, NULL, NULL)) == NULL )
To prevent flicker, I create compatible objects to update:
void
TickerScreensaver::Paint_Prep(HDC hDC)
{
_devcon_mem = CreateCompatibleDC(hDC);
_devcon_orig = hDC;
_bmp_mem = CreateCompatibleBitmap(hDC, _width, _height);
}
and when painting in WM_PAINT (after BeginPaint, etc.), do a bit-block transfer to the actual device context:
void
TickerScreensaver::Paint(HDC hDC, RECT rect)
{
_bmp_orig = (HBITMAP)SelectObject(_devcon_mem, _bmp_mem);
FillRect(_devcon_mem, &rect, (HBRUSH)GetStockObject(BLACK_BRUSH));
if ( _gdiplus_token != NULL )
{
Graphics graphics(_devcon_mem);
SolidBrush brush(cfg.display.font_colour);
FontFamily font_family(cfg.display.font_family.c_str());
Font font(&font_family, cfg.display.font_size, FontStyleRegular, UnitPixel);
PointF point_f((f32)cfg.display.text_pos.x, (f32)cfg.display.text_pos.y);
RectF layout_rect(0, 0, 0, 0);
RectF bound_rect;
graphics.SetTextRenderingHint(TextRenderingHintAntiAlias);
graphics.MeasureString(cfg.display.text.c_str(), cfg.display.text.length(), &font, layout_rect, &bound_rect);
cfg.display.offset.x = (DWORD)(0 - bound_rect.Width);
cfg.display.offset.y = (DWORD)(bound_rect.Height / 2);
graphics.DrawString(cfg.display.text.c_str(), cfg.display.text.length(), &font, point_f, &brush);
}
BitBlt(hDC, 0, 0, _width, _height, _devcon_mem, 0, 0, SRCCOPY);
SelectObject(_devcon_mem, _bmp_orig);
}
I calculate the dimensions like so:
void
TickerScreensaver::GetFullscreenRect(HDC hDC, RECT *rect)
{
RECT s = { 0, 0, 0, 0 };
if ( EnumDisplayMonitors(hDC, NULL, EnumMonitorCallback, (LPARAM)&s) )
{
CopyRect(rect, &s);
if ( s.left < 0 )
    _width = s.right - s.left;
else
    _width = s.right;
if ( s.top < 0 )
    _height = s.bottom - s.top;
else
    _height = s.bottom;
}
}
Please note that the calculated width, height, etc., are all 100% accurate; it is purely the drawing code that doesn't appear to work on the main display, and only when it is on the right (which puts the origin at {0,0}, with monitor #1 then at negative coordinates). It is also reproducible on a tri-display setup with the main display in the center.
Well, it turns out to be nice and simple: in Paint(), we should use a rect with the real width and height, not the one retrieved from the API functions, which contains the negative values:
RECT r = { 0, 0, _width, _height };
_bmp_orig = (HBITMAP)SelectObject(_devcon_mem, _bmp_mem);
FillRect(_devcon_mem, &r, (HBRUSH)GetStockObject(BLACK_BRUSH));
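As an aside, an alternative that avoids the negative origins entirely is to query the virtual-screen metrics; a sketch of what GetFullscreenRect could do instead (just a sketch, not what the screensaver actually uses):
// Alternative sketch: size of the whole virtual screen straight from system metrics.
// The width/height are always positive, regardless of which monitor is primary.
void TickerScreensaver::GetFullscreenRect(HDC /*hDC*/, RECT *rect)
{
    int origin_x = GetSystemMetrics(SM_XVIRTUALSCREEN);   // can be negative
    int origin_y = GetSystemMetrics(SM_YVIRTUALSCREEN);
    _width  = GetSystemMetrics(SM_CXVIRTUALSCREEN);
    _height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
    SetRect(rect, origin_x, origin_y, origin_x + _width, origin_y + _height);
}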
I'm having trouble with the StretchDIBits function.
I want to draw a bitmap made from a buffer. However, the colors I define in the buffer are different from the result on screen.
I have read the documentation and I played with the biCompression (BI_RGB and BI_BITFIELDS) and biClrUsed (0 / 3) parameters of the BITMAPINFOHEADER. I can see some differences depending on their values, but the result is still different from what I am expecting.
Here is the code I am using (it can be inserted in the OnDraw method of a template SDI project to demonstrate the problem).
void CTestStretchDIBitsView::OnDraw(CDC* /*pDC*/)
{
...
CClientDC dc(this);
CRect rect;
GetClientRect(&rect);
DWORD* pBuffer = new DWORD[500 * 500];
memset(pBuffer, RGB(255, 255, 0), 500 * 500 * sizeof(DWORD));
LPBITMAPINFO pBmpInfo = (LPBITMAPINFO) new BYTE[sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD)];
pBmpInfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
pBmpInfo->bmiHeader.biWidth = 500;
pBmpInfo->bmiHeader.biHeight = 500;
pBmpInfo->bmiHeader.biPlanes = 1;
pBmpInfo->bmiHeader.biBitCount = 32;
pBmpInfo->bmiHeader.biCompression = BI_BITFIELDS;
pBmpInfo->bmiHeader.biSizeImage = 500 * 500;
pBmpInfo->bmiHeader.biXPelsPerMeter = 0;
pBmpInfo->bmiHeader.biYPelsPerMeter = 0;
pBmpInfo->bmiHeader.biClrUsed = 0;
pBmpInfo->bmiHeader.biClrImportant = 0;
SetStretchBltMode(dc.m_hDC, STRETCH_DELETESCANS);
StretchDIBits(dc.m_hDC,
0,
rect.Height(),
rect.Width(),
-rect.Height(),
0,
0,
500,
500,
pBuffer,
pBmpInfo,
DIB_RGB_COLORS,
SRCCOPY);
delete[] pBmpInfo;
delete[] pBuffer;
}
You have to use the following mode
SetStretchBltMode(hdcWindow,HALFTONE);
instead of
SetStretchBltMode(dc.m_hDC, STRETCH_DELETESCANS);
because HALFTONE gives the best quality, according to my research.
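One thing to keep in mind: the GDI documentation notes that after setting HALFTONE you should also call SetBrushOrgEx to reset the brush origin, or brush-based drawing through that DC can become misaligned:
SetStretchBltMode(dc.m_hDC, HALFTONE);
SetBrushOrgEx(dc.m_hDC, 0, 0, NULL); // required after selecting HALFTONE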
The problem didn't come from the StretchDIBits function but from the initialization of the buffer used as the bitmap.
The memset(...) function was misused: it fills individual bytes, not 32-bit pixel values.
With an initialization such as:
int Color = RGB(255, 0, 0);
for (int i = 0 ; i < 500 * 500 ; i++)
pBuffer[i] = Color;
I get a perfectly blue image as expected.
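As a side note, the blue comes from the byte order: RGB() builds a COLORREF, which stores red in the lowest byte, while a 32-bpp BI_RGB DIB pixel is laid out as 0x00RRGGBB, so that low byte lands in the blue channel on screen. If you want to be explicit about the channel order, a small helper like this (just a sketch) avoids the confusion:
// Sketch: build a 32-bpp BI_RGB DIB pixel (0x00RRGGBB as a DWORD) from r/g/b values.
// Note this is NOT the layout of the COLORREF that RGB() returns (0x00BBGGRR).
inline DWORD DibPixel(BYTE r, BYTE g, BYTE b)
{
    return ((DWORD)r << 16) | ((DWORD)g << 8) | (DWORD)b;
}

// Filling the buffer with an actual red:
for (int i = 0; i < 500 * 500; i++)
    pBuffer[i] = DibPixel(255, 0, 0);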
I'm trying to make an auto-clicker for a Windows app. It works well, but it's incredibly slow!
I'm currently using the GetPixel method, which reloads an array every time it's called.
Here is my current code:
hdc = GetDC(HWND_DESKTOP);
bx = GetSystemMetrics(SM_CXSCREEN);
by = GetSystemMetrics(SM_CYSCREEN);
start_bx = (bx/2) - (MAX_WIDTH/2);
start_by = (by/2) - (MAX_HEIGHT/2);
end_bx = (bx/2) + (MAX_WIDTH/2);
end_by = (by/2) + (MAX_HEIGHT/2);
for(y=start_by; y<end_by; y+=10)
{
for(x=start_bx; x<end_bx; x+=10)
{
pixel = GetPixel(hdc, x, y);
if(pixel==RGB(255, 0, 0))
{
SetCursorPos(x,y);
mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
Sleep(50);
mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
Sleep(25);
}
}
}
So basically, it just scans a range of pixels on the screen and fires a mouse event if it detects a red button.
I know there are other ways to get the pixel color, such as BitBlt, but from the research I've done I don't understand what I'm supposed to do in order to scan a color array. I need something that scans the screen very fast in order to catch the button.
Could you please help me?
Thanks.
I found a way that is clearly much faster than the GetPixel one:
HDC hdc, hdcTemp;
RECT rect;
BYTE* bitPointer;
int x, y;
int red, green, blue, alpha;
while (true)
{
hdc = GetDC(HWND_DESKTOP);
GetWindowRect(GetDesktopWindow(), &rect);
int MAX_WIDTH = rect.right;
int MAX_HEIGHT = rect.bottom;
hdcTemp = CreateCompatibleDC(hdc);
BITMAPINFO bitmap;
bitmap.bmiHeader.biSize = sizeof(bitmap.bmiHeader);
bitmap.bmiHeader.biWidth = MAX_WIDTH;
bitmap.bmiHeader.biHeight = MAX_HEIGHT; // positive height -> bottom-up DIB
bitmap.bmiHeader.biPlanes = 1;
bitmap.bmiHeader.biBitCount = 32;
bitmap.bmiHeader.biCompression = BI_RGB;
bitmap.bmiHeader.biSizeImage = MAX_WIDTH * 4 * MAX_HEIGHT;
bitmap.bmiHeader.biClrUsed = 0;
bitmap.bmiHeader.biClrImportant = 0;
HBITMAP hBitmap2 = CreateDIBSection(hdcTemp, &bitmap, DIB_RGB_COLORS, (void**)(&bitPointer), NULL, NULL);
HGDIOBJ hOldBitmap = SelectObject(hdcTemp, hBitmap2);
// Grab the whole screen in one call instead of thousands of GetPixel calls.
BitBlt(hdcTemp, 0, 0, MAX_WIDTH, MAX_HEIGHT, hdc, 0, 0, SRCCOPY);
for (int i = 0; i < (MAX_WIDTH * 4 * MAX_HEIGHT); i += 4)
{
// 32-bpp BI_RGB pixels are stored in memory as blue, green, red, reserved.
blue = (int)bitPointer[i];
green = (int)bitPointer[i + 1];
red = (int)bitPointer[i + 2];
alpha = (int)bitPointer[i + 3];
// Convert the byte offset back to screen coordinates
// (the DIB is bottom-up, so row 0 is the bottom of the screen).
int pixel = i / 4;
x = pixel % MAX_WIDTH;
y = MAX_HEIGHT - 1 - (pixel / MAX_WIDTH);
if (red == 255 && green == 0 && blue == 0)
{
SetCursorPos(x, y);
mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
Sleep(50);
mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
Sleep(25);
}
}
// Clean up, otherwise GDI objects leak on every pass.
SelectObject(hdcTemp, hOldBitmap);
DeleteObject(hBitmap2);
DeleteDC(hdcTemp);
ReleaseDC(HWND_DESKTOP, hdc);
}
I hope this helps someone else.
The simple answer is that if this is the method you insist on using, then there isn't much to optimize. As others have pointed out in the comments, you should probably use a different method for locating the area to click. Have a look at using FindWindow, for example.
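For example, here is the kind of thing I mean by FindWindow (a rough sketch; "TargetAppClass" and the click offset are made-up placeholders, and you'd find the real class name with a tool like Spy++):
#include <windows.h>

// Rough sketch: locate the target window once, then click a fixed point inside it.
void ClickTargetButton()
{
    HWND hTarget = FindWindow(TEXT("TargetAppClass"), NULL);
    if (hTarget == NULL)
        return;                   // target app is not running

    RECT rc;
    GetWindowRect(hTarget, &rc);  // window position in screen coordinates

    SetCursorPos(rc.left + 120, rc.top + 80);   // placeholder button offset
    mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
    mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, 0);
}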
If you don't want to change your method, then at least sleep your thread for a bit after each complete screen scan.
I'm trying to add transparency to an HBITMAP object, but it never draws anything :/
This is the code I use to draw the bitmap:
HDC hdcMem = CreateCompatibleDC(hDC);
HBITMAP hbmOld = (HBITMAP) SelectObject(hdcMem, m_hBitmap);
BLENDFUNCTION blender = {AC_SRC_OVER, 0, (int) (2.55 * 100), AC_SRC_ALPHA}; // blend function combines opacity and pixel based transparency
AlphaBlend(hDC, x, y, rect.right - rect.left, rect.bottom - rect.top, hdcMem, rect.left, rect.top, rect.right - rect.left, rect.bottom - rect.top, blender);
SelectObject(hdcMem, hbmOld);
DeleteDC(hdcMem);
And this is the code which should add an alpha channel to the HBITMAP:
BITMAPINFOHEADER bminfoheader;
::ZeroMemory(&bminfoheader, sizeof(BITMAPINFOHEADER));
bminfoheader.biSize = sizeof(BITMAPINFOHEADER);
bminfoheader.biWidth = m_ResX;
bminfoheader.biHeight = m_ResY;
bminfoheader.biPlanes = 1;
bminfoheader.biBitCount = 32;
bminfoheader.biCompression = BI_RGB;
HDC windowDC = CreateCompatibleDC(0);
unsigned char* pPixels = new unsigned char[m_ResX * m_ResY * 4];
GetDIBits(windowDC, m_hBitmap, 0, m_ResY, pPixels, (BITMAPINFO*) &bminfoheader, DIB_RGB_COLORS); // load pixel info
// add alpha channel values of 255 for every pixel if bmp
for (int count = 0; count < m_ResX * m_ResY; count++)
{
pPixels[count * 4 + 3] = 255; // <-- I've tried changing this value to test different transparency levels, but it doesn't change anything
}
SetDIBits(windowDC, m_hBitmap, 0, GetHeight(), pPixels, (BITMAPINFO*) &bminfoheader, DIB_RGB_COLORS); // save the pixel info for later manipulation
DeleteDC(windowDC);
Edit:
This is how I create the bitmap; I fill in the pixel data later in some other code:
m_hBuffer = CreateBitmap( m_ResX, m_ResY, 1, 32, nullptr );
This is a fun one!
Guess what this prints out?
#include <stdio.h>
int main()
{
printf("%d\n", (int) (2.55 * 100));
return 0;
}
Answer: 254 - not 255. Two things happening here:
1. Floats are often inexact. That 2.55 doesn't get represented by a binary value that means exactly 2.55; it's stored as something like 2.5499999... (the exact digits don't matter, but you get the idea: the number is represented as a sum of fractions of 2, since binary is base 2, so something like .5 or .25 can be represented exactly, but most other numbers end up as approximations). You typically don't notice this because float-printing routines convert back to base 10 for display, which introduces another inexactness that usually cancels out the first one, so what you see as the assigned or printed value is not exactly the value stored in memory.
2. Casting to int truncates, i.e. rounds towards zero, which for positive values means rounding down.
Put these together, and your (2.55 * 100) is getting you 254, not 255 - and you have to have the magic value 255 for per-pixel alpha to work.
So the lesson here is: stick with pure integers. Or, if you ever do need to convert from floats to integers, be aware of what's going on and work around it (e.g. add .5 and then truncate, or use a similar rounding technique).
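Applied to the code in the question, that means either hard-coding the integer or rounding before truncating; for example:
// Simplest fix: use the integer directly.
BLENDFUNCTION blender = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };

// Or, if the value really does come from a float (say, a hypothetical opacity percentage),
// round before truncating:
double opacityPercent = 100.0;
BYTE alpha = (BYTE)(2.55 * opacityPercent + 0.5); // 255, not 254
BLENDFUNCTION blender2 = { AC_SRC_OVER, 0, alpha, AC_SRC_ALPHA };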
By the way, this is one of those cases where stepping through the code line by line and checking all inputs and outputs at each step (you can never be too paranoid when debugging!) might have shown the issue: right before you step into AlphaBlend, you would see a 254 when you hover over that parameter (assuming you're using DevStudio or a similar editor) and realize that something's up.