Read pixel on game (OpenGL or DirectX) screen - opengl

I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), from a third-party application (this is not my game).
I tried to do it in C#; the code works fine for reading the color of the desktop, of windows, etc., but when I launch the game I only get #000000, a black pixel. I think this is because I am not reading at the correct "location", or something like that.
Does anyone know how to do this? I mentioned C#, but C/C++ would be fine too.

In basic steps: grab the contents of the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example, with glReadPixels you can get the pixel value at window-relative pixel coordinates from the currently bound framebuffer.
If the game is not fullscreen, you must combine the window position with the window-relative pixel coordinates.
Some loose example:
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer); // 0 for the default (window) framebuffer
GLubyte pixel[4];                                         // where to store your pixel value
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel); // 1 pixel wide, 1 pixel tall, at window-relative (x, y)
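For the windowed case mentioned above, converting a desktop position into the window-relative coordinates that glReadPixels expects might look like this on Windows (a sketch; gameHwnd, screenX and screenY are placeholder names, and <windows.h> is assumed):
POINT p = { screenX, screenY };                    // position in desktop (screen) coordinates
ScreenToClient(gameHwnd, &p);                      // now relative to the window's client area (origin top-left)
RECT client;
GetClientRect(gameHwnd, &client);
int glX = p.x;
int glY = (client.bottom - client.top) - 1 - p.y;  // OpenGL's window coordinates start at the bottom-left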

On Windows there is, for example, GDI (the Graphics Device Interface): with GDI you can get the device context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the device context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you would leak the device context handle.
See also here for further details.
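Put together, a minimal GDI sketch in plain Win32/C++ (the coordinates 200, 300 are just an example):
#include <windows.h>
#include <cstdio>

int main()
{
    HDC dc = GetDC(NULL);                    // device context for the whole screen
    COLORREF color = GetPixel(dc, 200, 300); // read the pixel at desktop coordinates (200, 300)
    printf("R=%d G=%d B=%d\n", GetRValue(color), GetGValue(color), GetBValue(color));
    ReleaseDC(NULL, dc);                     // release the DC when all GetPixel calls are done
    return 0;
}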
However, for tasks like this I suggest using AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it is designed exactly for operations like this).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )

Related

glDrawPixels isn't filling the window [duplicate]

Why retina screen coordinate value is twice the value of pixel value

My computer is a MacBook Pro with a 13-inch retina screen. The screen resolution is 1280*800 (the default).
Using the following code:
gWindow = glfwCreateWindow(800, 600, "OpenGL Tutorial", NULL, NULL);
//case 1
glViewport(0,0,1600,1200);
//case 2
glViewport(0,0,800,600);
Case 1 results in a triangle that fits the window.
Case 2 results in a triangle that is 1/4th the size of the window.
[screenshot: half of viewport]
The GLFW documentation indicates the following (from here):
While the size of a window is measured in screen coordinates, OpenGL
works with pixels. The size you pass into glViewport, for example,
should be in pixels. On some machines screen coordinates and pixels
are the same, but on others they will not be. There is a second set of
functions to retrieve the size, in pixels, of the framebuffer of a
window.
Why do the screen-coordinate values and pixel values on my retina screen differ by a factor of two?
As Sabuncu said, it is hard to know which result should be correct without knowing how you draw the triangle.
But I guess your problem is related to the fact that with a retina screen, when the 2.0 scale factor is used, you need to render twice as many pixels in each dimension as you would on a regular screen - see here
The method you're after is shown just a few lines below your GLFW link:
There is also glfwGetFramebufferSize for directly retrieving the current size of the framebuffer of a window.
int width, height;
glfwGetFramebufferSize(window, &width, &height);
glViewport(0, 0, width, height);
The size of a framebuffer may change independently of the size of a window, for example if the window is dragged between a regular monitor and a high-DPI one.
In your case I'm betting the framebuffer size you get will be twice the window size, and your GL viewport needs to match it.
The framebuffer size does not have to be equal to the window size, which is why you need to use glfwGetFramebufferSize:
This function retrieves the size, in pixels, of the framebuffer of the specified window. If you wish to retrieve the size of the window in screen coordinates, see glfwGetWindowSize.
Whenever you resize your window you need to retrieve the size of its framebuffer and update the viewport accordingly:
glfwGetFramebufferSize(gWindow, &framebufferWidth, &framebufferHeight);
glViewport(0, 0, framebufferWidth, framebufferHeight);
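With GLFW you can also have this happen automatically by registering a framebuffer-size callback, so the viewport is updated whenever the framebuffer size changes (a sketch; the callback name is just an example):
void framebufferSizeCallback(GLFWwindow* window, int width, int height)
{
    // width and height are already in pixels, so they can go straight to glViewport
    glViewport(0, 0, width, height);
}

// after creating gWindow:
glfwSetFramebufferSizeCallback(gWindow, framebufferSizeCallback);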
With a retina display, the default framebuffer (the one that is rendered onto the canvas) has twice the resolution of the window in each dimension. Thus, if the window is 800x600, the internal canvas is 1600x1200, and therefore your viewport should be 1600x1200, since that is the "window" into the framebuffer.

Create DirectX9Texture from RGBA (const char*) buffer

I have an RGBA-format image buffer, and I need to convert it to a DirectX 9 texture. I have searched the internet many times, but nothing solid comes up.
I'm trying to integrate Awesomium into my DirectX 9 app; in other words, I'm trying to display a webpage on a DirectX surface. And yes, I tried to create my own surface class, without success.
I know answers can't be too long, so if you have mercy, maybe you can link me to the right places?
You cannot create a surface directly; you must create a texture and then use its surface. Although, for your purposes, you shouldn't need to access the surface directly.
IDirect3DDevice9* device = ...;
// Create a texture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx
// parameters should be fairly obvious from your input data.
IDirect3DTexture9* tex;
device->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, 0); // D3DUSAGE_DYNAMIC makes a D3DPOOL_DEFAULT texture lockable
// Lock the texture for writing: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205913(v=vs.85).aspx
D3DLOCKED_RECT rect;
tex->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
// Write your image data to rect.pBits here. Note that each scanline of the locked surface
// may have padding, the rect.Pitch will tell you how many bytes each scanline expects. You
// should know what the pitch of your input data is. Also, if your image data is in RGBA, you
// will have to swizzle it to ARGB, as D3D9 does not have a RGBA format.
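// A sketch of that copy-and-swizzle step (src and srcPitch are assumed names describing
// your RGBA input buffer; w and h are the same dimensions passed to CreateTexture above):
for (int y = 0; y < h; ++y)
{
    const unsigned char* srcRow = src + y * srcPitch;
    unsigned char* dstRow = static_cast<unsigned char*>(rect.pBits) + y * rect.Pitch;
    for (int x = 0; x < w; ++x)
    {
        // input bytes are R,G,B,A; D3DFMT_A8R8G8B8 is laid out in memory as B,G,R,A
        dstRow[x * 4 + 0] = srcRow[x * 4 + 2]; // B
        dstRow[x * 4 + 1] = srcRow[x * 4 + 1]; // G
        dstRow[x * 4 + 2] = srcRow[x * 4 + 0]; // R
        dstRow[x * 4 + 3] = srcRow[x * 4 + 3]; // A
    }
}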
// Unlock the texture so it can be used.
tex->UnlockRect(0);
This code also ignores any errors that could occur as a result of these function calls. In production code, you should be checking for any possible errors (e.g. from CreateTexture and LockRect).

OpenGL 2D pixel perfect rendering

I'm trying to render a 2D image so that it will cover the entire window exactly.
For my test, I set up a window so that the client area is exactly 320x240, and the texture is also this size.
I set up my orthographic projection for a 1x1x1 cube centered at the origin, and set my viewport to 0, 0, 320, 240.
The texture is mapped to a quad of size 1x1 centered at the origin.
The shader is a trivial one that just applies the projection-modelview transform to the position.
I created a test texture that lets me verify the rendering, and I see a consistent discrepancy I can't shake.
The rendering always shows some stretching that pushes some of the pixels up and to the right of the window, and it seems to be the same number of pixels regardless of the window size (if I replace 320x240 with another value).
I think it has to do with window-decoration widths, but I'm not sure how to fix it so that the solution is not platform or machine specific.
EDITS:
The code is straight C++ using freeglut and GLEW.
I verified that this doesn't happen if I call glutFullScreen, so it's definitely related to windowed mode.
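For reference, a conventional pixel-perfect 2D setup with freeglut might look like the sketch below (fixed-function pipeline rather than the shader described above, texturing omitted; the key points are querying the real client-area size with glutGet and mapping one GL unit to one pixel):
#include <GL/freeglut.h>

static void display()
{
    // use the actual client-area size, which may differ from the requested size
    int w = glutGet(GLUT_WINDOW_WIDTH);
    int h = glutGet(GLUT_WINDOW_HEIGHT);
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, w, 0, h, -1, 1);   // one GL unit == one pixel
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);            // a full-window quad with corners exactly on pixel edges
    glVertex2i(0, 0);
    glVertex2i(w, 0);
    glVertex2i(w, h);
    glVertex2i(0, h);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(320, 240);
    glutCreateWindow("pixel perfect");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}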
Note: this was answered before the language tag was added
Not sure what module you are using for this.
If you are using Pyglet, the easiest way to achieve this is:
import pyglet
width = 320
height = 240
window = pyglet.window.Window(width, height)
image = pyglet.resource.image('image.png')
@window.event
def on_draw():
    window.clear()
    image.blit(0, 0, 0, width, height)

pyglet.app.run()
You can find more information about this here:
http://www.pyglet.org/doc/programming_guide/size_and_position.html
http://www.pyglet.org/doc/programming_guide/displaying_images.html

Get pixel info from SDL2 texture

I'm currently writing a simple program using SDL2 where you can drag some shapes (square, circle, triangle, etc.) onto a canvas, rotate them, and move them around. Each shape is represented visually by an SDL texture that is created from a PNG file (using the IMG_LoadTexture function from the SDL_image library).
The thing is that I would like to know whether a certain pixel of the texture is transparent, so that when someone clicks on the image I can determine whether I have to take some action (because the click is on the non-transparent area) or not.
Because this is a school assignment I'm facing some restrictions: I can only use SDL2 libraries, and I can't have a lookup map telling me whether the pixel in question is transparent, because the images are selected dynamically. Furthermore, I thought about using an SDL surface for this task, created from the original image, but since the shapes are rotated via the texture, that wouldn't work.
You can accomplish this by using Render Targets.
SDL_SetRenderTarget(renderer, target);
// ... render your textures rotated, flipped, and translated using SDL_RenderCopyEx ...
SDL_RenderReadPixels(renderer, rect, format, pixels, pitch);
With the last step you read the pixels from the render target using SDL_RenderReadPixels, and then you have to check whether the alpha channel of the desired pixel is zero (transparent) or not. You can read just the one pixel you want from the render target, or the whole texture; which option you take depends on the number of hit tests you have to perform, how often the texture is rotated/moved around, etc.
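A rough sketch of that flow for a single hit test (assumed names: renderer, the shape texture tex, its destination rect dst and angle, the render-target size screenW/screenH, and the click position clickX/clickY; the renderer needs the SDL_RENDERER_TARGETTEXTURE flag):
// the render target must be a texture created with SDL_TEXTUREACCESS_TARGET
SDL_Texture* target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, screenW, screenH);
SDL_SetRenderTarget(renderer, target);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
SDL_RenderClear(renderer);
SDL_RenderCopyEx(renderer, tex, NULL, &dst, angle, NULL, SDL_FLIP_NONE);

// read back just the clicked pixel
Uint32 pixel = 0;
SDL_Rect one = { clickX, clickY, 1, 1 };
SDL_RenderReadPixels(renderer, &one, SDL_PIXELFORMAT_RGBA8888, &pixel, sizeof(pixel));
SDL_SetRenderTarget(renderer, NULL);   // back to the default render target

SDL_PixelFormat* fmt = SDL_AllocFormat(SDL_PIXELFORMAT_RGBA8888);
Uint8 r, g, b, a;
SDL_GetRGBA(pixel, fmt, &r, &g, &b, &a);
SDL_FreeFormat(fmt);
// a == 0 means the click landed on a fully transparent pixel
SDL_DestroyTexture(target);            // or keep it around for further hit tests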
You need to create your texture with the SDL_TEXTUREACCESS_STREAMING flag and lock it before you can manipulate its pixel data. To tell whether a certain pixel in a texture is transparent, make sure that you call
SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
this allows the texture to recognize an alpha channel.
Try something like this:
SDL_Texture *t;
int main()
{
    // initialize SDL, window, renderer, texture
    int pitch, w, h;
    void *pixels;
    SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
    SDL_QueryTexture(t, NULL, NULL, &w, &h);
    SDL_LockTexture(t, NULL, &pixels, &pitch);
    Uint32 *upixels = (Uint32*) pixels;
    // you will need to know the color of the pixel even if it's transparent
    Uint32 transparent = SDL_MapRGBA(SDL_GetWindowSurface(window)->format, r, g, b, 0x00);
    // manipulate pixels
    for (int i = 0; i < w * h; i++)
    {
        if (upixels[i] == transparent)
        {
            // do stuff
        }
    }
    // upixels points directly at the locked pixel memory, so the changes are already
    // in place; unlocking uploads them back to the texture
    SDL_UnlockTexture(t);
    return 0;
}
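To test a single pixel at (x, y) instead of scanning the whole texture, the index arithmetic with the pitch looks roughly like this (a sketch; it assumes a 32-bit pixel format, a click position (x, y), and the upixels pointer obtained above):
int pixelsPerRow = pitch / 4;        // pitch is in bytes, 4 bytes per 32-bit pixel
Uint32 p = upixels[y * pixelsPerRow + x];
Uint8 r, g, b, a;
SDL_GetRGBA(p, SDL_GetWindowSurface(window)->format, &r, &g, &b, &a);
if (a == 0)
{
    // the pixel at (x, y) is fully transparent
}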
If you have any questions, please feel free to ask, although I am no expert on this topic.
For further reading and tutorials, check out http://lazyfoo.net/tutorials/SDL/index.php. Tutorial 40 deals with pixel manipulation specifically.
I apologize if there are any errors in method names (I wrote this off the top of my head).
Hope this helped.