I have an RGBA-format image buffer, and I need to convert it to a DirectX9 texture. I have searched the internet many times, but nothing solid comes up.
I'm trying to integrate Awesomium into my DirectX9 app. In other words, I'm trying to display a webpage on a DirectX surface. And yes, I tried to create my own surface class, without success.
I know answers can't be too long, so if you have mercy, maybe you can link me to the right places?
You cannot create a surface directly, you must create a texture, and then use its surface. Although, for your purposes, you shouldn't need to access the surface directly.
IDirect3DDevice9* device = ...;
// Create a texture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx
// parameters should be fairly obvious from your input data.
IDirect3DTexture9* tex;
// D3DUSAGE_DYNAMIC is required so that a D3DPOOL_DEFAULT texture can be locked
// (and so that D3DLOCK_DISCARD below is valid).
device->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, 0);
// Lock the texture for writing: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205913(v=vs.85).aspx
D3DLOCKED_RECT rect;
tex->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
// Write your image data to rect.pBits here. Note that each scanline of the locked surface
// may have padding; rect.Pitch tells you how many bytes apart successive scanlines are. You
// should know what the pitch of your input data is. Also, if your image data is in RGBA byte
// order, you will have to swizzle it to the ARGB layout that D3DFMT_A8R8G8B8 expects.
// Unlock the texture so it can be used.
tex->UnlockRect(0);
This code also ignores any errors that could occur as a result of these function calls. In production code, you should check the returned HRESULTs (e.g. from CreateTexture and LockRect).
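For the copy step between LockRect and UnlockRect above, here is a minimal sketch of the scanline loop with the RGBA-to-ARGB swizzle; it assumes src points at tightly packed w x h RGBA input, and the variable names are illustrative:
// Minimal sketch: copy tightly packed RGBA input into the locked ARGB texture.
const unsigned char* src = ...; // w * h * 4 bytes, R,G,B,A per pixel
unsigned char* dst = static_cast<unsigned char*>(rect.pBits);
for (UINT y = 0; y < h; ++y)
{
    const unsigned char* s = src + y * w * 4;
    unsigned char* d = dst + y * rect.Pitch;   // respect the locked pitch
    for (UINT x = 0; x < w; ++x)
    {
        d[x * 4 + 0] = s[x * 4 + 2]; // B
        d[x * 4 + 1] = s[x * 4 + 1]; // G
        d[x * 4 + 2] = s[x * 4 + 0]; // R
        d[x * 4 + 3] = s[x * 4 + 3]; // A
    }
}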
I'm beating my way through an SDL 1.2 -> 2.0 conversion, one problem at a time. So far I have sound, interaction and a screen that shows... well, something.
I'm sure the problem is due to bit depth and/or formats. The original code used 8-bit indexed SPR files for the sprites, loaded them into a series of uint8 *buffu, and then blitted them to the display's Surface.
I have ported this, following the guide and significant trial and error (lots of modes and switches simply don't work on my machine), by creating a Texture and a Surface, letting the old code blit into the Surface, and then doing this:
SDL_UpdateTexture(sdltxtr, NULL, sdlsurf->pixels, 640 * sizeof (uint8));
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);
The result is a screen with stuff, but it's all misaligned. I assume that is because the Surface and Texture have different bit depths and formats than the sprites...
sdltxtr = SDL_CreateTexture(sdlrend,
SDL_PIXELFORMAT_ARGB8888,
SDL_TEXTUREACCESS_STREAMING,
640, 480);
sdlsurf = SDL_CreateRGBSurface(0, 640, 480, 8, 0,0,0,0);
I've tried various settings from the documentation to try to get a surface or texture that's 8-bit indexed, but all of the flags cause the Surface or Texture to be empty.
Any suggestions?
You mention indexed 8-bit graphics. Assuming that indexed means palettized, you can't simply use the pixel buffer, since it is just a list of indices into the color palette associated with the SDL_Surface.
That means you need a buffer that actually holds the ARGB values looked up from the palette, not the index values stored in the pixel buffer.
You could use SDL_ConvertSurfaceFormat() to convert your 8-bit palettized surface to a 32-bit ARGB surface and upload its buffer into the SDL_Texture. You can also create your own 32-bit ARGB buffer and do the palette lookup yourself (the first option is easier in most cases, though).
Before converting the 8-bit SDL_Surface to a 32-bit one, make sure a valid palette is associated with it (SDL_SetSurfacePalette()).
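A rough sketch of the conversion approach, reusing the question's sdlsurf/sdltxtr/sdlrend names and assuming the palette has already been set (error handling omitted):
// Assumes sdlsurf already has a valid palette set via SDL_SetSurfacePalette().
SDL_Surface* argb = SDL_ConvertSurfaceFormat(sdlsurf, SDL_PIXELFORMAT_ARGB8888, 0);
SDL_UpdateTexture(sdltxtr, NULL, argb->pixels, argb->pitch);
SDL_FreeSurface(argb);
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);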
I'm currently writing a simple program using SDL2 where you can drag some shapes (square, circle, triangle, etc.) into a canvas and rotate them and move them around. Each shape is represented visually by an SDL texture that is created from a PNG file (using the IMG_LoadTexture function from the SDL_image library).
The thing is that I would like to know whether a certain pixel of the texture is transparent, so that when someone clicks on the image I can determine whether I have to do some action (because the click is on the non-transparent area) or not.
Because this is a school assignment I'm facing some restrictions: I can only use SDL2 libraries, and I can't keep a precomputed map of which pixels are transparent because the images are dynamically selected. I also thought about using an SDL surface for this task, created from the original images, but since the shapes are rotated through the texture that wouldn't work.
You can accomplish this by using Render Targets.
SDL_SetRenderTarget(renderer, target);
// ... render your textures rotated, flipped, translated using SDL_RenderCopyEx ...
SDL_RenderReadPixels(renderer, rect, format, pixels, pitch);
With the last step you read the pixels from the render target using SDL_RenderReadPixels, and then you have to check whether the alpha channel of the desired pixel is zero (transparent) or not. You can read just the one pixel you want from the render target, or the whole texture; which option you take depends on the number of hit tests you have to perform, how often the texture is rotated/moved around, etc.
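As a rough sketch, a single-pixel hit test along these lines could look like the following; the names (shapeTexture, dstRect, angle, clickX, clickY, w, h) are illustrative, and the target texture must be created with SDL_TEXTUREACCESS_TARGET:
// Target texture the shape is rendered into (illustrative sizes/names).
SDL_Texture* target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                        SDL_TEXTUREACCESS_TARGET, w, h);
SDL_SetRenderTarget(renderer, target);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);
SDL_RenderClear(renderer);
SDL_RenderCopyEx(renderer, shapeTexture, NULL, &dstRect, angle, NULL, SDL_FLIP_NONE);

// Read back just the clicked pixel (click coordinates relative to the target).
Uint32 pixel = 0;
SDL_Rect probe = { clickX, clickY, 1, 1 };
SDL_RenderReadPixels(renderer, &probe, SDL_PIXELFORMAT_ARGB8888, &pixel, sizeof(pixel));
SDL_SetRenderTarget(renderer, NULL);

Uint8 alpha = (pixel >> 24) & 0xFF;   // ARGB8888: alpha is the high byte
bool hit = (alpha != 0);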
You need to create your texture with the SDL_TEXTUREACCESS_STREAMING flag and lock it before you can manipulate the pixel data. To tell whether a certain pixel is transparent in a texture, make sure you call
SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
this allows the texture to recognize an alpha channel.
Try something like this:
SDL_Texture *t;
SDL_Window *window;   // assumed to exist after initialization

int main()
{
    // initialize SDL, window, renderer, texture
    int pitch, w, h;
    void *pixels;
    SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
    SDL_QueryTexture(t, NULL, NULL, &w, &h);
    SDL_LockTexture(t, NULL, &pixels, &pitch);
    Uint32 *upixels = (Uint32 *) pixels;   // aliases the locked texture memory
    // you will need to know the color of the pixel even if it's transparent;
    // r, g, b are that color (assumes the texture and window surface share a pixel format)
    Uint32 transparent = SDL_MapRGBA(SDL_GetWindowSurface(window)->format, r, g, b, 0x00);
    // inspect/manipulate pixels (assumes pitch == w * 4)
    for (int i = 0; i < w * h; i++)
    {
        if (upixels[i] == transparent)
        {
            // do stuff
        }
    }
    // upixels points at the texture's own memory, so any changes are already
    // in place; unlocking uploads them
    SDL_UnlockTexture(t);
    return 0;
}
If you have any questions, please feel free to ask, although I am no expert on this topic.
For further reading and tutorials, check out http://lazyfoo.net/tutorials/SDL/index.php. Tutorial 40 deals with pixel manipulation specifically.
I apologize if there are any errors in method names (I wrote this off the top of my head).
Hope this helped.
I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), from a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I don't read at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the contents of the rendered screen with the appropriate OpenGL or DirectX call if the game is fullscreen.
For example, with glReadPixels you can get the pixel value at window-relative pixel coordinates from the currently bound framebuffer.
If you are not fullscreen, you must combine the window position with the window-relative pixel coordinates.
A loose example:
GLubyte pixel[4];
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer); // 0 for the default framebuffer
// read one pixel at (x, y) in window coordinates (origin at the bottom-left)
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
On Windows there is also GDI (the Graphics Device Interface): with GDI you can get the screen Device Context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the Device Context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you leak the device context handle.
See also here for further details.
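A minimal, self-contained sketch of that GDI approach (plain Win32 C/C++; error handling omitted):
#include <windows.h>
#include <stdio.h>

int main()
{
    // Device context for the whole screen.
    HDC dc = GetDC(NULL);

    // Read the pixel at screen coordinates (200, 300).
    COLORREF color = GetPixel(dc, 200, 300);
    printf("R=%d G=%d B=%d\n", GetRValue(color), GetGValue(color), GetBValue(color));

    // Release the device context when done.
    ReleaseDC(NULL, dc);
    return 0;
}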
However, for tasks like this I suggest using AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed for exactly this kind of operation).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )
I want to load an SDL_Surface into an OpenGL texture with padding (so that an NPOT image becomes POT) and apply a color key to the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work.
Here's the relevant snippet of my code. I use a custom color class for the colorkey (range [0-1]):
// Create an empty surface with the same settings as the original image
SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
image->format->BitsPerPixel,
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
0xff000000,
0x00ff0000,
0x0000ff00,
0x000000ff
#else
0x000000ff,
0x0000ff00,
0x00ff0000,
0xff000000
#endif
);
// Map RGBA color to pixel format value
Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
static_cast<Uint8>(colorKey.R * 255),
static_cast<Uint8>(colorKey.G * 255),
static_cast<Uint8>(colorKey.B * 255),
static_cast<Uint8>(colorKey.A * 255));
SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);
// Blit the image onto the padded image
SDL_BlitSurface(image, NULL, paddedImage, NULL);
SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);
Afterwards, I generate an OpenGL texture from paddedImage using similar code to the SDL+OpenGL texture loading code found online (I'll post if necessary). This code works if I just want the texture with or without padding, and is likely not the problem.
I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to do this properly. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
PARTIAL SOLUTION:
Create paddedImage as above
SDL_FillRect the paddedImage with the colorkey
Generate the texture "as usual"
Manually copy the image (SDL_Surface*) pixels into the paddedImage (OGL texture) - see the sketch below
This works almost always, except in some cases where the image has 3 color components (i.e. no alpha channel). I'm trying to fix that now by converting them to 4 color components.
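For illustration, a minimal sketch of steps 3-4, assuming both image and paddedImage are already 32-bit RGBA surfaces with tightly packed rows; the GL parameters here are examples, not the asker's actual code:
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Allocate the padded POT texture from paddedImage (already filled with the colorkey color).
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, paddedImage->w, paddedImage->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, paddedImage->pixels);
// Overwrite the top-left region with the original image's pixels.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, image->w, image->h,
                GL_RGBA, GL_UNSIGNED_BYTE, image->pixels);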
I think this can be used together with OpenGL if you can convert SDL_Surfaces into OGL textures: you can use the blit function to combine your surfaces and manipulate things using the SDL workflow.
I don't know exactly what you want to achieve: do you want to transfer one surface to an OGL texture and preserve the colorkey, or just apply the colorkeyed surface to another surface which you then convert into an OGL texture?
Also, you don't have to use per-pixel alphas, as SDL gives you the ability to use per-surface alphas, but the rules for which alphas and colorkeys can be combined and used together are fairly complex.
As this is a complex topic, please refer to the SDL reference; this tutorial may be helpful too (though it doesn't handle the OGL side):
http://www.sdltutorials.com/the-ins-and-outs-and-overlays-of-alpha-blending
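For reference, a per-surface alpha in the SDL 1.2 API (matching the SDL_SRCCOLORKEY code above) is a single call; 128 is just an example value:
// Make the whole surface 50% translucent when blitted (SDL 1.2 API).
SDL_SetAlpha(image, SDL_SRCALPHA, 128);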
I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.
First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.
For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.
However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.
I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests specifying a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.
That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.
I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
I would also like to understand how to make this work "the right way".
What is the trick to loading ARGB data into an RGBA OpenGL texture?
My current call to glTexImage2D looks like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);
where format is either GL_RGB or GL_RGBA.
I tried using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);
when my image rep reports that it is in "alpha first" order.
As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important to me than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
I'm not aware of a way to load ARGB data directly into the texture, but there is a better workaround than doing the swizzle on the CPU. You can do it very efficiently on the GPU instead:
Load the ARGB data into the temporary RGBA texture.
Draw a full-screen quad with this texture, while rendering into the target texture, using a simple pixel shader.
Continue to load other resources, no need to stall the GPU pipeline.
Example pixel shader:
#version 130
uniform sampler2DRect unit_in;
void main() {
    gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in realtime. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).
You can use a shader like this:
uniform sampler2D image;
void main()
{
    gl_FragColor = texture2D(image, gl_FragCoord.xy).gbar;
}
If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
This question is old, but in case anyone else is looking for this, I found a not-strictly-safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.
NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you pass to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.
The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel out. But like the asker, I get this while loading a JPG, so alpha is irrelevant anyway. This doesn't appear to cause a problem, but I wouldn't claim that it's 'safe'.
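A sketch of that trick, assuming the rep's rows are tightly packed (bytesPerRow == width * 4); argbPixels and the dimensions are illustrative names:
// argbPixels points at tightly packed ARGB data, e.g. from -[NSBitmapImageRep bitmapData].
// Passing argbPixels + 1 makes OpenGL read each pixel as R,G,B,(next pixel's A);
// the final pixel reads one byte past the end of the buffer, which is the unsafe part.
const GLubyte* argbPixels = ...;
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, argbPixels + 1);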
// The name of a texture whose data is in ARGB format.
GLuint argb_texture;

// An array of tokens to set the ARGB swizzle in one function call.
static const GLenum argb_swizzle[] =
{
    GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

// Bind the ARGB texture.
glBindTexture(GL_TEXTURE_2D, argb_texture);

// Set all four swizzle parameters in one call to glTexParameteriv.
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);
I know this works, but I am not sure whether argb_swizzle is in the right order. Please correct me if this is not right; I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED are determined in argb_swizzle.
As The OpenGL Programming Guide suggested:
...which is a mechanism that allows you to rearrange the component order of texture data on the fly as it is read by the graphics hardware.