Here's my code:
template<typename G, typename N> void load_xbm(G* g, N w, N h, unsigned char* data)
{
    Uint32 rmask, gmask, bmask, amask;
    /* SDL interprets each pixel as a 32-bit number, so our masks must depend
       on the endianness (byte order) of the machine */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
    rmask = 0xff000000;
    gmask = 0x00ff0000;
    bmask = 0x0000ff00;
    amask = 0x000000ff;
#else
    rmask = 0x000000ff;
    gmask = 0x0000ff00;
    bmask = 0x00ff0000;
    amask = 0xff000000;
#endif
    SDL_Surface* s = g->backend_surface();
    s = SDL_CreateRGBSurface(SDL_HWSURFACE, w, h, 16, rmask, gmask, bmask, amask);
    g->backend_surface( s );
    for (N x = 0; x < w; x++)
    {
        for (N y = 0; y < h; y++)
        {
            g->put_pixel(x, y, data[y*x]);
        }
    }
    SDL_Flip( s );
}
g->backend_surface() just returns an SDL_Surface* member in G.
w is the width of the xbm bitmap, h is the height, data is an array of unsigned chars containing the colors of every pixel.
g->put_pixel() is a simple wrapper around the putpixel function from the SDL docs; it passes its backend_surface as the first parameter to the example putpixel function.
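For reference, the putpixel example from the SDL 1.2 documentation that put_pixel wraps looks roughly like this (shown here as a sketch, not the actual wrapper code):

void putpixel(SDL_Surface *surface, int x, int y, Uint32 pixel)
{
    int bpp = surface->format->BytesPerPixel;
    /* p points to the pixel we want to set */
    Uint8 *p = (Uint8 *)surface->pixels + y * surface->pitch + x * bpp;

    switch (bpp) {
    case 1:
        *p = pixel;
        break;
    case 2:
        *(Uint16 *)p = pixel;
        break;
    case 3:
        if (SDL_BYTEORDER == SDL_BIG_ENDIAN) {
            p[0] = (pixel >> 16) & 0xff;
            p[1] = (pixel >> 8) & 0xff;
            p[2] = pixel & 0xff;
        } else {
            p[0] = pixel & 0xff;
            p[1] = (pixel >> 8) & 0xff;
            p[2] = (pixel >> 16) & 0xff;
        }
        break;
    case 4:
        *(Uint32 *)p = pixel;
        break;
    }
}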
When I execute it, the program exits with code 0x3. By debugging I've found that it exits on the call to the putpixel method; note that the putpixel method works fine elsewhere. I've also found that it only exits when the x and y arguments to putpixel are bigger than the original width and height of the surface, but haven't I resized the surface to the required width and height with SDL_CreateRGBSurface?
Guessing in the wild here...
This SDL tutorial example (point 2.5) says that the Surface must be locked before calling this function. Is yours?
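A minimal sketch of what that would look like around the pixel loop (assuming put_pixel writes straight into s->pixels):

/* Lock the surface before direct pixel access, unlock afterwards.
   Only strictly required when SDL_MUSTLOCK(s) is true. */
if (SDL_MUSTLOCK(s)) {
    if (SDL_LockSurface(s) < 0) {
        fprintf(stderr, "Can't lock surface: %s\n", SDL_GetError());
        return;
    }
}
/* ... the put_pixel loop from the question goes here ... */
if (SDL_MUSTLOCK(s)) {
    SDL_UnlockSurface(s);
}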
I am trying to create a program that will capture a full-screen DirectX application, look for a specific set of pixels on the screen, and if it finds them, draw an image on the screen.
I have been able to set up the application to capture the screen with the DirectX libraries, using the code from the answer to this question: Capture screen using DirectX.
In that example the code saves to the hard drive using the IWIC libraries. I would rather manipulate the pixels instead of saving them.
After I have captured the screen and have an LPBYTE of the entire screen's pixels, I am unsure how to crop it to the region I want and how to manipulate the pixel array. Is it just a multi-dimensional byte array?
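(For what it's worth, such a capture is usually one contiguous buffer addressed with a stride rather than a true multi-dimensional array; here is a minimal sketch of cropping a 32bpp BGRA buffer that way, with placeholder names:)

// Sketch: each row starts `srcStride` bytes after the previous one, and each
// 32bpp pixel occupies 4 bytes, so cropping is copying width*4 bytes per row.
// Requires <windows.h> for the types and <string.h> for memcpy.
void CropBGRA(const BYTE *src, UINT srcStride,
              LONG left, LONG top, LONG width, LONG height,
              BYTE *dst, UINT dstStride)
{
    for (LONG y = 0; y < height; y++)
    {
        const BYTE *srcRow = src + (top + y) * srcStride + left * 4;
        memcpy(dst + y * dstStride, srcRow, width * 4);
    }
}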
The way I think I should do it is:
Capture screen to IWIC bitmap (done).
Convert the IWIC bitmap to an ID2D1 bitmap using ID2D1RenderTarget::CreateBitmapFromWicBitmap.
Create a new ID2D1Bitmap to store the partial image.
Copy a region of the ID2D1 bitmap to the new bitmap using ID2D1Bitmap::CopyFromBitmap.
Render back onto the screen using Direct2D.
Any help on any of this would be so much appreciated.
Here is a modified version of the original code that captures only a portion of the screen into a buffer and also gives back the stride. Then, as a sample usage of the returned buffer, it walks all the pixels and dumps their colors.
In this sample, the buffer is allocated by the function, so you must free it once you've used it:
// sample usage
int main()
{
    LONG left = 10;
    LONG top = 10;
    LONG width = 100;
    LONG height = 100;
    LPBYTE buffer;
    UINT stride;
    RECT rc = { left, top, left + width, top + height };
    Direct3D9TakeScreenshot(D3DADAPTER_DEFAULT, &buffer, &stride, &rc);

    // In 32bppPBGRA format, each pixel is represented by 4 bytes
    // with one byte each for blue, green, red, and the alpha channel, in that order.
    // But don't forget this is all modulo endianness ...
    // So, on Intel architecture, if we read a pixel from memory
    // as a DWORD, it's reversed (ARGB). The macros below handle that.

    // browse every pixel by line
    for (int h = 0; h < height; h++)
    {
        LPDWORD pixels = (LPDWORD)(buffer + h * stride);
        for (int w = 0; w < width; w++)
        {
            DWORD pixel = pixels[w];
            wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));
        }
    }

    // get pixel at 50, 50 in the buffer, as #ARGB
    DWORD pixel = GetBGRAPixel(buffer, stride, 50, 50);
    wprintf(L"#%02X#%02X#%02X#%02X\n", GetBGRAPixelAlpha(pixel), GetBGRAPixelRed(pixel), GetBGRAPixelGreen(pixel), GetBGRAPixelBlue(pixel));

    SavePixelsToFile32bppPBGRA(width, height, stride, buffer, L"test.png", GUID_ContainerFormatPng);
    LocalFree(buffer);
    return 0;
}
#define GetBGRAPixelBlue(p) (LOBYTE(p))
#define GetBGRAPixelGreen(p) (HIBYTE(p))
#define GetBGRAPixelRed(p) (LOBYTE(HIWORD(p)))
#define GetBGRAPixelAlpha(p) (HIBYTE(HIWORD(p)))
#define GetBGRAPixel(b,s,x,y) (((LPDWORD)(((LPBYTE)b) + y * s))[x])
HRESULT Direct3D9TakeScreenshot(UINT adapter, LPBYTE *pBuffer, UINT *pStride, const RECT *pInputRc = nullptr)
{
    if (!pBuffer || !pStride) return E_INVALIDARG;

    HRESULT hr = S_OK;
    IDirect3D9 *d3d = nullptr;
    IDirect3DDevice9 *device = nullptr;
    IDirect3DSurface9 *surface = nullptr;
    D3DPRESENT_PARAMETERS parameters = { 0 };
    D3DDISPLAYMODE mode;
    D3DLOCKED_RECT rc;

    *pBuffer = NULL;
    *pStride = 0;

    // init D3D and get screen size
    d3d = Direct3DCreate9(D3D_SDK_VERSION);
    HRCHECK(d3d->GetAdapterDisplayMode(adapter, &mode));

    LONG width = pInputRc ? (pInputRc->right - pInputRc->left) : mode.Width;
    LONG height = pInputRc ? (pInputRc->bottom - pInputRc->top) : mode.Height;

    parameters.Windowed = TRUE;
    parameters.BackBufferCount = 1;
    parameters.BackBufferHeight = height;
    parameters.BackBufferWidth = width;
    parameters.SwapEffect = D3DSWAPEFFECT_DISCARD;
    parameters.hDeviceWindow = NULL;

    // create device & capture surface (note it needs desktop size, not our capture size)
    HRCHECK(d3d->CreateDevice(adapter, D3DDEVTYPE_HAL, NULL, D3DCREATE_SOFTWARE_VERTEXPROCESSING, &parameters, &device));
    HRCHECK(device->CreateOffscreenPlainSurface(mode.Width, mode.Height, D3DFMT_A8R8G8B8, D3DPOOL_SYSTEMMEM, &surface, nullptr));

    // get pitch/stride to compute the required buffer size
    HRCHECK(surface->LockRect(&rc, pInputRc, 0));
    *pStride = rc.Pitch;
    HRCHECK(surface->UnlockRect());

    // allocate buffer
    *pBuffer = (LPBYTE)LocalAlloc(0, *pStride * height);
    if (!*pBuffer)
    {
        hr = E_OUTOFMEMORY;
        goto cleanup;
    }

    // get the data
    HRCHECK(device->GetFrontBufferData(0, surface));

    // copy it into our buffer
    HRCHECK(surface->LockRect(&rc, pInputRc, 0));
    CopyMemory(*pBuffer, rc.pBits, rc.Pitch * height);
    HRCHECK(surface->UnlockRect());

cleanup:
    if (FAILED(hr))
    {
        if (*pBuffer)
        {
            LocalFree(*pBuffer);
            *pBuffer = NULL;
        }
        *pStride = 0;
    }
    RELEASE(surface);
    RELEASE(device);
    RELEASE(d3d);
    return hr;
}
I am attempting to program a game engine using SDL and GLEW, with picoPNG as an image loader. I was making a system to set the icon for the window in my Window class when something strange happened: the icon worked for some images but not for others. I barely know anything about how SDL_Surface works, so I used some websites to find information. (I can't post links to them because I only have 8 of the 10 required reputation.)
My code:
void Window::setWindowIcon(const std::string& filePath) {
    //read file
    std::vector<unsigned char> in;
    std::vector<unsigned char> out;
    unsigned long width, height;
    //Use my file loading class to read the image file
    if (DPE::IOManager::readFileToBuffer(filePath, in) == false) {
        fatalError("Failed to open " + filePath);
    }
    int errorCode = DPE::decodePNG(out, width, height, &(in[0]), in.size());
    if (errorCode != 0) {
        fatalError("Failed to decode png file!");
    }
    Uint32 rmask = 0x000000ff;
    Uint32 gmask = 0x0000ff00;
    Uint32 bmask = 0x00ff0000;
    Uint32 amask = 0xff000000;
    _sdlSurface = SDL_CreateRGBSurfaceFrom((void*)&out[0], width, height, 32, width * 4, rmask, gmask, bmask, amask);
    if (_sdlSurface == NULL) {
        std::cout << SDL_GetError() << std::endl;
        fatalError("Failed to create surface!");
    }
    SDL_SetWindowIcon(_sdlWindow, _sdlSurface);
    SDL_FreeSurface(_sdlSurface);
}
Finally, here are the two PNG files.
This one worked.
This one didn't.
Stepping through the code showed that everything was fine; the only indication of a problem was that the icon wasn't changing.
Edit: I have changed the color masks to be cross-endian compatible:
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
int shift = 0;
rmask = 0xff000000 >> shift;
gmask = 0x00ff0000 >> shift;
bmask = 0x0000ff00 >> shift;
amask = 0x000000ff >> shift;
#else // little endian, like x86
rmask = 0x000000ff;
gmask = 0x0000ff00;
bmask = 0x00ff0000;
amask = 0xff000000;
#endif
I think I found the answer. It appears that using the alpha channel took up 8 more bits per pixel, so I decreased the image size to 75x75 and the icon worked.
I want to modify ffplay so that its SDL video player window is hidden. Instead, I want to grab the overlay as pixel-by-pixel bitmaps to be used elsewhere in my program.
Now ffplay can be simplified as below:
1. Create SDL_Surface *screen from SDL_SetVideoMode()
2. Create SDL_Overlay *bmp from SDL_CreateYUVOverlay() and associate it with screen
Then, repeating until the video ends:
3. Decode movie frames and populate bmp
4. Render bmp onto screen using SDL_DisplayYUVOverlay()
Following hints from this article, I have replaced Step 1 as below:
/* Don't want video player window showing on screen
* int flags = SDL_HWSURFACE|SDL_ASYNCBLIT|SDL_HWACCEL;
* screen = SDL_SetVideoMode(w, h, 24, flags);
*/
Uint32 rmask, gmask, bmask, amask;
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
rmask = 0xff000000;
gmask = 0x00ff0000;
bmask = 0x0000ff00;
amask = 0x00000000;
#else
rmask = 0x000000ff;
gmask = 0x0000ff00;
bmask = 0x00ff0000;
amask = 0x00000000;
#endif
screen = SDL_CreateRGBSurface(SDL_SWSURFACE, w, h, 24, rmask, gmask, bmask, amask);
and Step 4 as
SDL_DisplayYUVOverlay(bmp, &rect);
SDL_SaveBMP(screen, filenameN); N++;
Issue:
If I modify only Step 4, the bitmap files are saved properly, which is what I want, except that the video playing window is visible. On the other hand, if I modify Step 1 as well, the window is successfully hidden but the bitmaps are all blacked out.
I am new to SDL, so apart from the solution itself, an explanation of why my approach does not work would be helpful.
Use SDL_putenv("SDL_VIDEODRIVER=dummy"); to use the dummy video driver, which produces no output.
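A minimal sketch of that approach; the environment variable must be set before SDL_Init() runs:

/* Select SDL's dummy video driver so no window is ever created.
   Must happen before SDL_Init(SDL_INIT_VIDEO). */
SDL_putenv("SDL_VIDEODRIVER=dummy");
if (SDL_Init(SDL_INIT_VIDEO) < 0) {
    fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
    exit(1);
}
/* The rest of the player code runs unchanged, just without a visible window. */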
I've searched around using Google, but I'm completely confused about how to load an image (a PNG in my case) from a resource and then convert it to a bitmap in memory for use in my splash screen. I've read about GDI+ and libpng, but I don't really know how to do what I want. Could anyone help?
GDI+ supports PNG directly. See here and here.
EDIT: The GDI+ documentation offers some advice for how to use GDI+ in a DLL. In your case, the best solution is probably to define initialisation and teardown functions that the client code is required to call.
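As an illustration (not taken from the question), a minimal sketch of that pattern combined with loading a PNG resource via GDI+; the function names and the "PNG" resource type are placeholders:

#include <windows.h>
#include <objidl.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")

static ULONG_PTR g_gdiplusToken;

// Init/teardown pair the client code is required to call.
void SplashLibInit()
{
    Gdiplus::GdiplusStartupInput input;
    Gdiplus::GdiplusStartup(&g_gdiplusToken, &input, NULL);
}

void SplashLibShutdown()
{
    Gdiplus::GdiplusShutdown(g_gdiplusToken);
}

// Decode a PNG embedded as a custom "PNG"-typed resource into an HBITMAP.
HBITMAP LoadPngResourceAsBitmap(HMODULE module, LPCTSTR name)
{
    HRSRC res = FindResource(module, name, TEXT("PNG"));
    if (!res) return NULL;
    DWORD size = SizeofResource(module, res);
    const void *data = LockResource(LoadResource(module, res));
    if (!data) return NULL;

    // Wrap a copy of the resource bytes in an IStream so GDI+ can decode it.
    HGLOBAL mem = GlobalAlloc(GMEM_MOVEABLE, size);
    if (!mem) return NULL;
    CopyMemory(GlobalLock(mem), data, size);
    GlobalUnlock(mem);

    IStream *stream = NULL;
    HBITMAP hbmp = NULL;
    if (SUCCEEDED(CreateStreamOnHGlobal(mem, TRUE, &stream)))
    {
        Gdiplus::Bitmap bitmap(stream);   // GDI+ decodes the PNG directly
        if (bitmap.GetLastStatus() == Gdiplus::Ok)
            bitmap.GetHBITMAP(Gdiplus::Color(0, 0, 0), &hbmp);
        stream->Release();                // also frees the HGLOBAL (fDeleteOnRelease = TRUE)
    }
    else
    {
        GlobalFree(mem);
    }
    return hbmp;
}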
I ended up using PicoPNG to convert the PNG to a two-dimensional vector, from which I then manually constructed a bitmap. My final code looked like this:
HBITMAP LoadPNGasBMP(const HMODULE hModule, const LPCTSTR lpPNGName)
{
    /* First we need to get a pointer to the PNG */
    HRSRC found = FindResource(hModule, lpPNGName, "PNG");
    unsigned int size = SizeofResource(hModule, found);
    HGLOBAL loaded = LoadResource(hModule, found);
    void* resource_data = LockResource(loaded);

    /* Now we decode the PNG */
    vector<unsigned char> raw;
    unsigned long width, height;
    int err = decodePNG(raw, width, height, (const unsigned char*)resource_data, size);
    if (err != 0)
    {
        log_debug("Error while decoding png splash: %d", err);
        return NULL;
    }

    /* Create the bitmap */
    BITMAPV5HEADER bmpheader = {0};
    bmpheader.bV5Size = sizeof(BITMAPV5HEADER);
    bmpheader.bV5Width = width;
    bmpheader.bV5Height = height;
    bmpheader.bV5Planes = 1;
    bmpheader.bV5BitCount = 32;
    bmpheader.bV5Compression = BI_BITFIELDS;
    bmpheader.bV5SizeImage = width*height*4;
    bmpheader.bV5RedMask = 0x00FF0000;
    bmpheader.bV5GreenMask = 0x0000FF00;
    bmpheader.bV5BlueMask = 0x000000FF;
    bmpheader.bV5AlphaMask = 0xFF000000;
    bmpheader.bV5CSType = LCS_WINDOWS_COLOR_SPACE;
    bmpheader.bV5Intent = LCS_GM_BUSINESS;

    void* converted = NULL;
    HDC screen = GetDC(NULL);
    HBITMAP result = CreateDIBSection(screen, reinterpret_cast<BITMAPINFO*>(&bmpheader), DIB_RGB_COLORS, &converted, NULL, 0);
    ReleaseDC(NULL, screen);

    /* Copy the decoded image into the bitmap in the correct order */
    for (unsigned int y1 = height - 1, y2 = 0; y2 < height; y1--, y2++)
        for (unsigned int x = 0; x < width; x++)
        {
            *((char*)converted+0+4*x+4*width*y2) = raw[2+4*x+4*width*y1]; // Blue
            *((char*)converted+1+4*x+4*width*y2) = raw[1+4*x+4*width*y1]; // Green
            *((char*)converted+2+4*x+4*width*y2) = raw[0+4*x+4*width*y1]; // Red
            *((char*)converted+3+4*x+4*width*y2) = raw[3+4*x+4*width*y1]; // Alpha
        }

    /* Done! */
    return result;
}
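A hypothetical call site (IDB_SPLASH and hSplashWnd are placeholders):

// Load the embedded PNG and hand the resulting HBITMAP to a static control.
HBITMAP hSplash = LoadPNGasBMP(GetModuleHandle(NULL), MAKEINTRESOURCE(IDB_SPLASH));
if (hSplash != NULL)
    SendMessage(hSplashWnd, STM_SETIMAGE, IMAGE_BITMAP, (LPARAM)hSplash);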
I want to read the pixels of a rectangular area, or of the whole screen, as if the screenshot button had been pressed.
How do I do this?
Edit: Working code:
void CaptureScreen(char *filename)
{
    int nScreenWidth = GetSystemMetrics(SM_CXSCREEN);
    int nScreenHeight = GetSystemMetrics(SM_CYSCREEN);
    HWND hDesktopWnd = GetDesktopWindow();
    HDC hDesktopDC = GetDC(hDesktopWnd);
    HDC hCaptureDC = CreateCompatibleDC(hDesktopDC);
    HBITMAP hCaptureBitmap = CreateCompatibleBitmap(hDesktopDC, nScreenWidth, nScreenHeight);
    SelectObject(hCaptureDC, hCaptureBitmap);
    BitBlt(hCaptureDC, 0, 0, nScreenWidth, nScreenHeight, hDesktopDC, 0, 0, SRCCOPY|CAPTUREBLT);

    BITMAPINFO bmi = {0};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = nScreenWidth;
    bmi.bmiHeader.biHeight = nScreenHeight;
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    RGBQUAD *pPixels = new RGBQUAD[nScreenWidth * nScreenHeight];

    GetDIBits(
        hCaptureDC,
        hCaptureBitmap,
        0,
        nScreenHeight,
        pPixels,
        &bmi,
        DIB_RGB_COLORS
    );

    // write:
    int p;
    int x, y;
    FILE *fp = fopen(filename, "wb");
    for (y = 0; y < nScreenHeight; y++) {
        for (x = 0; x < nScreenWidth; x++) {
            p = (nScreenHeight-y-1)*nScreenWidth+x; // upside down
            unsigned char r = pPixels[p].rgbRed;
            unsigned char g = pPixels[p].rgbGreen;
            unsigned char b = pPixels[p].rgbBlue;
            fwrite(&r, 1, 1, fp);
            fwrite(&g, 1, 1, fp);
            fwrite(&b, 1, 1, fp);
        }
    }
    fclose(fp);

    delete [] pPixels;
    ReleaseDC(hDesktopWnd, hDesktopDC);
    DeleteDC(hCaptureDC);
    DeleteObject(hCaptureBitmap);
}
Starting with your code and omitting error checking ...
// Create a BITMAPINFO specifying the format you want the pixels in.
// To keep this simple, we'll use 32-bits per pixel (the high byte isn't
// used).
BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = nScreenWidth;
bmi.bmiHeader.biHeight = nScreenHeight;
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;
// Allocate a buffer to receive the pixel data.
RGBQUAD *pPixels = new RGBQUAD[nScreenWidth * nScreenHeight];
// Call GetDIBits to copy the bits from the device dependent bitmap
// into the buffer allocated above, using the pixel format you
// chose in the BITMAPINFO.
::GetDIBits(hCaptureDC,
            hCaptureBitmap,
            0,               // starting scanline
            nScreenHeight,   // scanlines to copy
            pPixels,         // buffer for your copy of the pixels
            &bmi,            // format you want the data in
            DIB_RGB_COLORS); // actual pixels, not palette references
// You can now access the raw pixel data in pPixels. Note that they are
// stored from the bottom scanline to the top, so pPixels[0] is the lower
// left pixel, pPixels[1] is the next pixel to the right,
// pPixels[nScreenWidth] is the first pixel on the second row from the
// bottom, etc.
// Don't forget to free the pixel buffer.
delete [] pPixels;
Rereading your question, it sounds like we may have gotten off on a tangent with the screen capture. If you just want to check some pixels on the screen, you can use GetPixel.
HDC hdcScreen = ::GetDC(NULL);
COLORREF pixel = ::GetPixel(hdcScreen, x, y);
ReleaseDC(NULL, hdcScreen);
if (pixel != CLR_INVALID) {
    int red = GetRValue(pixel);
    int green = GetGValue(pixel);
    int blue = GetBValue(pixel);
    ...
} else {
    // Error, x and y were outside the clipping region.
}
If you're going to read a lot of pixels, then you're better off with a screen capture and then using GetDIBits. Calling GetPixel zillions of times will be slow.
You make a screenshot with BitBlt(). The size of the shot is set with the nWidth and nHeight arguments. The upper left corner is set with the nXSrc and nYSrc arguments.
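For example, a minimal sketch that grabs only a width x height rectangle whose upper left corner is at (left, top); the region values are placeholders:

// Capture just a sub-rectangle of the desktop instead of the whole screen.
int left = 100, top = 100, width = 300, height = 200;
HDC hScreenDC = GetDC(NULL);
HDC hMemDC = CreateCompatibleDC(hScreenDC);
HBITMAP hRegionBmp = CreateCompatibleBitmap(hScreenDC, width, height);
HGDIOBJ hOldBmp = SelectObject(hMemDC, hRegionBmp);
// Destination starts at (0,0) in the memory DC; source starts at (left, top) on screen.
BitBlt(hMemDC, 0, 0, width, height, hScreenDC, left, top, SRCCOPY | CAPTUREBLT);
SelectObject(hMemDC, hOldBmp);
// hRegionBmp now holds the region; read it out with GetDIBits as shown above.
DeleteDC(hMemDC);
ReleaseDC(NULL, hScreenDC);
// Remember to DeleteObject(hRegionBmp) once you are done with it.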
You can use the code below to read the screen pixels:
HWND desktop = GetDesktopWindow();
HDC desktopHdc = GetDC(desktop);
COLORREF color = GetPixel(desktopHdc, x, y);
ReleaseDC(desktop, desktopHdc);
HBITMAP is not a pointer or an array, it is a handle that is managed by Windows and has meaning only to Windows. You must ask Windows to copy the pixels somewhere for use.
To get an individual pixel value, you can use GetPixel without even needing a bitmap. This will be slow if you need to access many pixels.
To copy a bitmap to memory you can access, use the GetDIBits function.
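A compact sketch of that last step, assuming hBitmap is a 32-bit bitmap you already have (for example from the capture code above) and is not currently selected into a device context:

// Copy an HBITMAP's pixels into memory with GetDIBits.
BITMAP bm;
GetObject(hBitmap, sizeof(bm), &bm);        // query the bitmap's dimensions

BITMAPINFO bmi = {0};
bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
bmi.bmiHeader.biWidth = bm.bmWidth;
bmi.bmiHeader.biHeight = bm.bmHeight;       // positive height = bottom-up rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

RGBQUAD *pixels = new RGBQUAD[bm.bmWidth * bm.bmHeight];
HDC hdc = GetDC(NULL);
GetDIBits(hdc, hBitmap, 0, bm.bmHeight, pixels, &bmi, DIB_RGB_COLORS);
ReleaseDC(NULL, hdc);

// pixels[y * bm.bmWidth + x] addresses (x, y) counting rows from the bottom.
delete [] pixels;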