I'm currently writing a simple program using SDL2 where you can drag some shapes (square, circle, triangle, etc.) onto a canvas, rotate them, and move them around. Each shape is represented visually by an SDL texture that is created from a PNG file (using the IMG_LoadTexture function from the SDL_image library).
I would like to know whether a certain pixel of the texture is transparent, so that when someone clicks on the image I can decide whether to perform an action (the click landed on the non-transparent area) or not.
Because this is a school assignment I'm facing some restrictions: I can only use the SDL2 libraries, and I can't precompute a lookup map of transparent pixels because the images are selected dynamically. I also thought about creating an SDL surface from the original image for this task, but since the shapes are rotated through the texture, that wouldn't work.
You can accomplish this by using Render Targets.
SDL_SetRenderTarget(renderer, target);
... render your textures rotated, flipped, translated using SDL_RenderCopyEx
SDL_RenderReadPixels(renderer, rect, format, pixels, pitch);
With the last step you read the pixels from the render target using SDL_RenderReadPixels, and then you have to figure out whether the alpha channel of the desired pixel is zero (transparent) or not. You can read just the one pixel you want from the render target, or the whole texture; which option you take depends on the number of hit tests you have to perform, how often the texture is rotated/moved around, and so on.
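For illustration, here is a minimal sketch of that approach. It assumes you already have a renderer and the shape's texture and its size; the function name isPixelTransparent, the angle parameter, and the RGBA8888 format choice are mine, not taken from the original code:
bool isPixelTransparent(SDL_Renderer *renderer, SDL_Texture *shape,
                        int w, int h, double angle, int x, int y)
{
    // Render target we can draw into and read back from
    SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                            SDL_TEXTUREACCESS_TARGET, w, h);
    SDL_SetRenderTarget(renderer, target);
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);   // fully transparent background
    SDL_RenderClear(renderer);

    // Draw the shape rotated exactly as it appears on screen
    SDL_RenderCopyEx(renderer, shape, NULL, NULL, angle, NULL, SDL_FLIP_NONE);

    // Read back only the one pixel we want to hit-test
    Uint32 pixel = 0;
    SDL_Rect rect = { x, y, 1, 1 };
    SDL_RenderReadPixels(renderer, &rect, SDL_PIXELFORMAT_RGBA8888, &pixel, sizeof(pixel));

    SDL_SetRenderTarget(renderer, NULL);            // back to the default target
    SDL_DestroyTexture(target);

    SDL_PixelFormat *fmt = SDL_AllocFormat(SDL_PIXELFORMAT_RGBA8888);
    Uint8 r, g, b, a;
    SDL_GetRGBA(pixel, fmt, &r, &g, &b, &a);
    SDL_FreeFormat(fmt);
    return a == 0;                                  // zero alpha means transparent
}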
You need to create your texture using the SDL_TEXTUREACCESS_STREAMING flag and lock your texture before being able to manipulate pixel data. To tell if a certain pixel is transparent in a texture make sure that you call
SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
this allows the texture to recognize an alpha channel.
Try something like this:
SDL_Texture *t;   // created elsewhere with SDL_TEXTUREACCESS_STREAMING
int main()
{
    // initialize SDL, window, renderer, texture ...
    int pitch, w, h;
    void *pixels;
    SDL_SetTextureBlendMode(t, SDL_BLENDMODE_BLEND);
    SDL_QueryTexture(t, NULL, NULL, &w, &h);
    SDL_LockTexture(t, NULL, &pixels, &pitch);
    Uint32 *upixels = (Uint32 *) pixels;
    // you will need to know the color of the pixel even if it's transparent;
    // r, g, b are that color (this assumes the window surface and the texture
    // share the same pixel format)
    Uint32 transparent = SDL_MapRGBA(SDL_GetWindowSurface(window)->format, r, g, b, 0x00);
    // manipulate pixels (note: this assumes pitch == w * 4; otherwise walk the
    // buffer row by row using pitch)
    for (int i = 0; i < w * h; i++)
    {
        if (upixels[i] == transparent)
        {
            // do stuff
        }
    }
    // the changes are written back to the texture when you unlock it
    SDL_UnlockTexture(t);
    return 0;
}
If you have any questions please feel free to ask, although I am no expert on this topic.
For further reading and tutorials, check out http://lazyfoo.net/tutorials/SDL/index.php. Tutorial 40 deals with pixel manipulation specifically.
I apologize if there are any errors in method names (I wrote this off the top of my head).
Hope this helped.
Related
I have an RGBA-format image buffer and I need to convert it to a DirectX9 texture. I have searched the internet many times, but nothing solid comes up.
I'm trying to integrate Awesomium into my DirectX9 app, in other words, to display a webpage on a DirectX surface. And yes, I tried to create my own surface class, without success.
I know answers can't be too long, so if you have mercy, maybe you can link me to the right places?
You cannot create a surface directly; you must create a texture and then use its surface. Although, for your purposes, you shouldn't need to access the surface directly.
IDirect3DDevice9* device = ...;
// Create a texture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx
// parameters should be fairly obvious from your input data.
IDirect3DTexture9* tex;
// D3DUSAGE_DYNAMIC is needed so the texture can be locked below (and D3DLOCK_DISCARD used)
device->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, 0);
// Lock the texture for writing: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205913(v=vs.85).aspx
D3DLOCKED_RECT rect;
tex->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
// Write your image data to rect.pBits here. Note that each scanline of the locked surface
// may have padding, the rect.Pitch will tell you how many bytes each scanline expects. You
// should know what the pitch of your input data is. Also, if your image data is in RGBA, you
// will have to swizzle it to ARGB, as D3D9 does not have a RGBA format.
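// For illustration only: a rough sketch of that copy loop. `srcPixels`, `w`
// and `h` stand for your own RGBA buffer and its dimensions (made-up names).
for (UINT y = 0; y < h; ++y)
{
    BYTE* dst = (BYTE*)rect.pBits + y * rect.Pitch; // respect the surface pitch
    const BYTE* src = srcPixels + y * w * 4;        // tightly packed RGBA input
    for (UINT x = 0; x < w; ++x)
    {
        dst[x * 4 + 0] = src[x * 4 + 2];            // B
        dst[x * 4 + 1] = src[x * 4 + 1];            // G
        dst[x * 4 + 2] = src[x * 4 + 0];            // R
        dst[x * 4 + 3] = src[x * 4 + 3];            // A
    }
}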
// Unlock the texture so it can be used.
tex->UnlockRect(0);
This code also ignores any errors that could occur as a result of these function calls. In production code, you should check for any possible errors (e.g. from CreateTexture and LockRect).
I'm making my first thing with libgdx. It's a tiled 2D game. Say I have two types of tiles: blue and green. Those are 32x32 pixel images that cover one cell on the game field. I want to be able to create a transition between tiles such as the one on the right of the image attached. "Blue" and "green" don't mean all pixels in a tile are the same color; they just define which texture a pixel comes from.
I'm not asking about an algorithm — I've already done it via canvas in JavaScript. I just need some directions on what classes/techniques/solutions to use specifically in libgdx.
So I need to take pixels from the blue texture and draw them above the green one. Is there a way to do this with a shader or maybe by directly taking pixel values from blue tile's texture?
Say I already have all my textures (with no transition sprites yet calculated) loaded in a TextureAtlas. What classes should go next to get the desired effect?
Update: Here is a rough example of what my code currently is. My gameScreen.render() method looks simply like this:
batch.begin();
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        Sprite sprite = getFloorSpriteByCell(cells[x][y]);
        batch.draw(sprite, x * 32, y * 32);
    }
}
batch.end();
and in getFloorSpriteByCell() I choose between some preloaded sprites; that's all, no fancy level editing in a fancy GUI tool.
I don't use tilemaps, I just need to draw some part of a texture above another texture during rendering.
Welp, I finished it, and here is how it looks:
Without diffusion:
With diffusion:
The code itself, I guess, is too large and project-specific to post here, so here are the steps (the actual LibGDX classes start with a capital letter):
Pick the individual pixels of the image you are diffusing into and put them into a Pixmap (if you have your textures in a TextureAtlas, as I did, use the PixmapTextureAtlas class from this question on GoogleCode to get Pixmaps).
Draw these Pixmaps to Textures.
Then, as usual, draw all Textures with a SpriteBatch.
Hope this will be useful for someone. Contact me if you need the actual code.
I have to create an animation where a Gatling gun is shooting (it doesn't have to be complex, because it's just practice). I drew a basic version of my gun, which looks like this:
Don't mind the colors - I made them like that so I can see where the edges of the particular parts of the gun are. Now I would like to make it look better by using some texture - camouflage ("moro") or something like a metallic color - example 1 or example 2. I know how to load a texture and how to use it for 2D objects, but I have no idea whether it's possible to use one texture for my whole drawing, or whether I have to apply a texture to every part separately. This is the code that loads a texture from a BMP file and makes it usable:
void initTexture(string fileName)
{
    loadBmp(fileName.c_str());
    glGenTextures(1, &textureId);             // ask OpenGL for a texture name instead of hard-coding one
    glBindTexture(GL_TEXTURE_2D, textureId);  // tell OpenGL which texture to edit
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    // Map the image to the texture
    glTexImage2D(GL_TEXTURE_2D,               // always GL_TEXTURE_2D
                 0,                           // mipmap level 0
                 GL_RGB,                      // format OpenGL uses for the image
                 tex.info.biWidth, tex.info.biHeight, // width and height
                 0,                           // border of the image (must be 0)
                 GL_RGB,                      // GL_RGB, because pixels are stored in RGB format
                 GL_UNSIGNED_BYTE,            // GL_UNSIGNED_BYTE, because pixels are stored as unsigned bytes
                 tex.px);                     // the actual pixel data
}
loadBmp() is a function that loads the bitmap file. I tried searching the Internet and Stack Overflow, but all the examples were about cubes or spheres, which doesn't help me. How can I put a texture on this drawing?
Texture mapping requires you to (manually) assign texture coordinates to each vertex. There are some approaches to automatic texture coordinate generation, or to getting away without explicit texture coordinates by giving each face its own small texture (Disney Animation pioneered the latter approach, Ptex, for their computer-animated films).
Since you didn't specify which program you used to model, I'll refer you to a tutorial on texture/UV mapping for Blender:
http://wiki.blender.org/index.php/Doc:2.6/Manual/Textures/Mapping/UV/Unwrapping
Please don't tell me that you "coded" your gun by hand, because that is wrong, wrong, wrong!
Once you have the texture coordinates, you pass them to OpenGL as just another vertex attribute.
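For example, in the fixed-function style your loading code uses, a textured quad could look roughly like this (the vertex positions x0..z3 are placeholders for your own geometry, not values from your program):
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(x0, y0, z0);  // which part of the texture
    glTexCoord2f(1.0f, 0.0f); glVertex3f(x1, y1, z1);  // maps to which corner of
    glTexCoord2f(1.0f, 1.0f); glVertex3f(x2, y2, z2);  // the face is entirely up
    glTexCoord2f(0.0f, 1.0f); glVertex3f(x3, y3, z3);  // to your UV layout
glEnd();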
I want to read the color of a pixel at a given position in a game (so OpenGL or DirectX), by a third-party application (this is not my game).
I tried to do it in C#; the code works great for reading the color of the desktop, of windows, etc., but when I launch the game, I only get #000000, a black pixel. I think this is because I don't read at the correct "location", or something like that.
Does someone know how to do this? I mentioned C# but C/C++ would be fine too.
In basic steps: grab the rendered screen with the appropriate OpenGL or DirectX command if the game is fullscreen.
For example with glReadPixels you can get the pixel value at window relative pixel coordinates from current bound framebuffer.
If you are not full screen, you must combine the window position with the window relative pixel coordinates.
Some loose example:
glBindFramebuffer(GL_FRAMEBUFFER, yourScreenFramebuffer);
unsigned char pixel[4];
glReadPixels(x, y,            // your pixel X, your pixel Y (window-relative)
             1, 1,            // 1 pixel wide, 1 pixel tall
             GL_RGBA,         // or GL_RGB
             GL_UNSIGNED_BYTE,
             pixel);          // where to store your pixel value
On Windows there is, for example, GDI (Graphics Device Interface): with GDI you can get the Device Context easily using HDC dc = GetDC(NULL); and then read pixel values with COLORREF color = GetPixel(dc, x, y);. But take care: you have to release the Device Context afterwards (when all GetPixel operations of your program are finished) with ReleaseDC(NULL, dc); - otherwise you would leak the device context.
See also here for further details.
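As a small sketch of that GDI approach (the helper name ReadScreenPixel is made up; coordinates are desktop-relative):
#include <windows.h>

COLORREF ReadScreenPixel(int x, int y)
{
    HDC dc = GetDC(NULL);                 // device context of the whole screen
    COLORREF color = GetPixel(dc, x, y);  // 0x00bbggrr, or CLR_INVALID on failure
    ReleaseDC(NULL, dc);                  // always release the DC when done
    return color;
}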
However, for tasks like this I suggest you use AutoIt.
It's easy, simple to use and pretty much straightforward (after all, it's designed precisely for operations like that).
Local $color = PixelGetColor(200, 300)
MsgBox(0, "The color is ", $color )
I want to load an SDL_Surface into an OpenGL texture with padding (so that NPOT becomes POT) and apply a color key on the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work.
Here's the relevant snippet of my code. I use a custom color class for the color key (range [0, 1]):
// Create an empty surface with the same settings as the original image
SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
image->format->BitsPerPixel,
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
0xff000000,
0x00ff0000,
0x0000ff00,
0x000000ff
#else
0x000000ff,
0x0000ff00,
0x00ff0000,
0xff000000
#endif
);
// Map RGBA color to pixel format value
Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
static_cast<Uint8>(colorKey.R * 255),
static_cast<Uint8>(colorKey.G * 255),
static_cast<Uint8>(colorKey.B * 255),
static_cast<Uint8>(colorKey.A * 255));
SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);
// Blit the image onto the padded image
SDL_BlitSurface(image, NULL, paddedImage, NULL);
SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);
Afterwards, I generate an OpenGL texture from paddedImage using similar code to the SDL+OpenGL texture loading code found online (I'll post if necessary). This code works if I just want the texture with or without padding, and is likely not the problem.
I realize that I set all pixels in paddedImage to have alpha zero, which causes the first problem I mentioned, but I can't seem to figure out how to get around this. Should I just loop over the pixels and set only the appropriate colors to have alpha zero?
PARTIAL SOLUTION:
Create paddedImage as above
SDL_FillRect the paddedImage with the colorkey
Generate the texture "as usual"
Manually copy the image (SDL_Surface*) pixels to the paddedImage (OGL texture)
This works almost always, except in some cases where the image has 3 color components (i.e. no alpha channel). I'm trying to fix that now by converting those images to 4 color components.
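For what it's worth, a rough sketch of that conversion using the same surface API as the code above (the `converted` name is made up):
SDL_Surface* converted = image;
if (image->format->BytesPerPixel == 3)   // RGB only, no alpha channel
{
    // Convert to the 32-bit RGBA format used for paddedImage above
    converted = SDL_ConvertSurface(image, paddedImage->format, image->flags);
}
// ... copy pixels from `converted` instead of `image`, then SDL_FreeSurface(converted)
// if a new surface was actually created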
I think this could be used together with OpenGL if you can convert SDL_Surfaces into OGL textures; you could then use the blit function to combine your surfaces and manipulate things using the SDL workflow.
I don't know exactly what you want to achieve: do you want to transfer one surface to an OGL texture and preserve the color key, or just apply the colorkeyed surface to another surface which you then convert into an OGL texture?
Also, you don't have to use per-pixel alpha, as SDL gives you the ability to use per-surface alpha, but it's quite complex as to which alphas and color keys can be combined and used together.
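For example, with the classic SDL 1.2 API that the SDL_SRCCOLORKEY flag above belongs to, a per-surface alpha is just:
// Make the whole surface blit at roughly 50% opacity (per-surface alpha, SDL 1.2 API)
SDL_SetAlpha(surface, SDL_SRCALPHA, 128);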
As this is a complex topic, please refer to the SDL reference; this tutorial may be helpful too (though it doesn't handle OGL stuff):
http://www.sdltutorials.com/the-ins-and-outs-and-overlays-of-alpha-blending