I'm beating my way through a 1.2->2.0 conversion, one problem at a time. So far I have sound, interaction and a screen that shows... well, something.
I'm sure the problem is due to bit depth and/or formats. The original code used 8-bit indexed SPR files for the sprites, loaded them into a series of uint8 *buffu, and then blitted them to the display's Surface.
I have ported this, following the guide and significant trial and error (lots of modes and switches simply don't work on my machine), by creating a Texture and a Surface, letting the old code blit into the Surface, and then doing this...
SDL_UpdateTexture(sdltxtr, NULL, sdlsurf->pixels, 640 * sizeof (uint8));
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);
The result is a screen with stuff, but it's all misaligned. I assume that is because the Surface and Texture have different bit depths and formats than the sprites...
sdltxtr = SDL_CreateTexture(sdlrend,
SDL_PIXELFORMAT_ARGB8888,
SDL_TEXTUREACCESS_STREAMING,
640, 480);
sdlsurf = SDL_CreateRGBSurface(0, 640, 480, 8, 0,0,0,0);
I've tried various settings from the documentation to try to get a surface or texture that's 8-bit indexed, but all of the flags cause the Surface or Texture to be empty.
Any suggestions?
You mention indexed 8-bit graphics. Assuming that indexed means palettized, you can't simply use the pixel buffer as-is, since it is just a list of indices into the color palette associated with the SDL_Surface.
Instead, you need to create a buffer that holds the actual ARGB values looked up from the palette, not the index values stored in the pixel buffer.
You could use SDL_ConvertSurfaceFormat() to convert your 8-bit palettized surface to a 32-bit ARGB surface and upload its buffer into the SDL_Texture. You could also create your own 32-bit ARGB buffer and do the conversion yourself by looking up the correct palette entries (the first option will be easier in most cases, though).
Before converting the 8-bit SDL_Surface to a 32-bit one, you should associate a valid palette with it (SDL_SetSurfacePalette()).
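A minimal sketch of that approach, reusing the names from the question (palette here is a hypothetical SDL_Palette built from the SPR files' palette; it is not in the original code):
// Attach the palette so the indexed pixels have defined colors.
SDL_SetSurfacePalette(sdlsurf, palette);
// Convert the 8-bit indexed surface to the texture's ARGB8888 format.
SDL_Surface* argb = SDL_ConvertSurfaceFormat(sdlsurf, SDL_PIXELFORMAT_ARGB8888, 0);
// Upload using the converted surface's pitch (bytes per row), not 640 * sizeof(uint8).
SDL_UpdateTexture(sdltxtr, NULL, argb->pixels, argb->pitch);
SDL_FreeSurface(argb);
SDL_RenderClear(sdlrend);
SDL_RenderCopy(sdlrend, sdltxtr, NULL, NULL);
SDL_RenderPresent(sdlrend);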
Related
I have an RGBA-format image buffer, and I need to convert it to a DirectX 9 texture. I have searched the internet many times, but nothing solid comes up.
I'm trying to integrate Awesomium into my DirectX 9 app, in other words, trying to display a webpage on a DirectX surface. And yes, I tried to create my own surface class, without success.
I know answers can't be too long, so if you have mercy, maybe you can link me to the right places?
You cannot create a surface directly; you must create a texture and then use its surface. Although, for your purposes, you shouldn't need to access the surface directly.
IDirect3DDevice9* device = ...;
// Create a texture: http://msdn.microsoft.com/en-us/library/windows/desktop/bb174363(v=vs.85).aspx
// The parameters should be fairly obvious from your input data. D3DUSAGE_DYNAMIC is
// needed so a D3DPOOL_DEFAULT texture can be locked (and D3DLOCK_DISCARD used) below.
IDirect3DTexture9* tex;
device->CreateTexture(w, h, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, 0);
// Lock the texture for writing: http://msdn.microsoft.com/en-us/library/windows/desktop/bb205913(v=vs.85).aspx
D3DLOCKED_RECT rect;
tex->LockRect(0, &rect, 0, D3DLOCK_DISCARD);
// Write your image data to rect.pBits here. Note that each scanline of the locked surface
// may have padding; rect.Pitch tells you how many bytes each scanline occupies. You
// should know what the pitch of your input data is. Also, if your image data is in RGBA, you
// will have to swizzle it to ARGB, as D3D9 does not have an RGBA format.
// Unlock the texture so it can be used.
tex->UnlockRect(0);
This code also ignores any errors that could occur as a result of these function calls. In production code, you should be checking for any possible errors (e.g. from CreateTexture and LockRect).
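To make the middle step concrete, here is a rough sketch of the row-by-row copy with the RGBA-to-ARGB swizzle; srcRGBA is a hypothetical name for your tightly packed input buffer, and w and h are its dimensions:
// Sketch only: copy an RGBA source buffer into the locked ARGB texture,
// honoring rect.Pitch for every destination scanline.
const unsigned char* src = srcRGBA;
unsigned char* dstRow = static_cast<unsigned char*>(rect.pBits);
for (unsigned y = 0; y < h; ++y) {
    unsigned char* dst = dstRow;
    for (unsigned x = 0; x < w; ++x) {
        // D3DFMT_A8R8G8B8 is laid out as B, G, R, A in memory (little-endian).
        dst[0] = src[2];  // B
        dst[1] = src[1];  // G
        dst[2] = src[0];  // R
        dst[3] = src[3];  // A
        src += 4;
        dst += 4;
    }
    dstRow += rect.Pitch;  // advance by the texture's pitch, not by w * 4
}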
I've run into issues preserving the alpha channels of surfaces being copied/clip-blitted (blitting sections of a surface onto a smaller surface; they're spritesheets). I've tried various solutions, but the end result is that any surface that's supposed to have transparency ends up becoming fully opaque (the alpha mask becomes white).
So my question is, how does one copy one RGBA SDL_Surface to another new surface (also RGBA), including the alpha channel? And if it's any different, how does one copy a section of an RGBA surface to a new RGBA surface (the same size as the clipped portion of the source surface), as in tilesheet blitting?
It seems that SDL_BlitSurface blends the alpha channels. So when, for example, I want to copy a tile from my tilesheet surface to a new surface (which is of course blank; I'm assuming SDL fills surfaces with black or white by default), it ends up losing its alpha mask, and when that tile is finally blitted to the screen, it doesn't blend with whatever is on the screen.
SDL_DisplayFormatAlpha works great for copying surfaces with an alpha mask, but it doesn't take clip parameters; it's only intended to copy the entire surface, not a portion of it, hence my problem.
If anyone is still wondering after all these years:
Before blitting the surface, you need to make sure that the blend mode of the source (which is an SDL_Surface) is set to SDL_BLENDMODE_NONE, as described in the documentation for SDL_SetSurfaceBlendMode(). It should look something like this:
SDL_SetSurfaceBlendMode(source, SDL_BLENDMODE_NONE);
SDL_BlitSurface(source, sourceRect, destination, destinationRect);
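A slightly fuller sketch for the tilesheet case, assuming sheet is the RGBA spritesheet surface and the tile rectangle is known (SDL_CreateRGBSurfaceWithFormat needs SDL 2.0.5 or newer; all names here are placeholders):
// Copy one tile of an RGBA spritesheet into a fresh RGBA surface,
// copying the alpha channel instead of blending it away.
SDL_Rect tileRect = { 0, 0, 32, 32 };  // hypothetical tile position and size
SDL_Surface* tile = SDL_CreateRGBSurfaceWithFormat(0, tileRect.w, tileRect.h,
                                                   32, SDL_PIXELFORMAT_RGBA8888);
SDL_SetSurfaceBlendMode(sheet, SDL_BLENDMODE_NONE);  // copy pixels verbatim
SDL_BlitSurface(sheet, &tileRect, tile, NULL);
SDL_SetSurfaceBlendMode(sheet, SDL_BLENDMODE_BLEND); // restore blending for on-screen use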
I had this problem before and have not come to an official answer yet.
However, I think the only way to do it will be to write your own copy function.
http://www.libsdl.org/docs/html/sdlpixelformat.html
This page will help you understand how SDL_Surface stores color information. Note that there is a big difference between formats above and below 8 bits per pixel: 8-bit (and lower) surfaces are palettized, while higher depths store each channel directly via masks.
I want to load an SDL_Surface into an OpenGL texture with padding (so that an NPOT image becomes POT) and apply a color key to the surface afterwards. I either end up colorkeying all pixels, regardless of their color, or not colorkeying anything at all. I have tried a lot of different things, but none of them seem to work.
Here's the relevant snippet of my code. I use a custom color class for the colorkey (range [0-1]):
// Create an empty surface with the same settings as the original image
SDL_Surface* paddedImage = SDL_CreateRGBSurface(image->flags, width, height,
image->format->BitsPerPixel,
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
0xff000000,
0x00ff0000,
0x0000ff00,
0x000000ff
#else
0x000000ff,
0x0000ff00,
0x00ff0000,
0xff000000
#endif
);
// Map RGBA color to pixel format value
Uint32 colorKeyPixelFormat = SDL_MapRGBA(paddedImage->format,
static_cast<Uint8>(colorKey.R * 255),
static_cast<Uint8>(colorKey.G * 255),
static_cast<Uint8>(colorKey.B * 255),
static_cast<Uint8>(colorKey.A * 255));
SDL_FillRect(paddedImage, NULL, colorKeyPixelFormat);
// Blit the image onto the padded image
SDL_BlitSurface(image, NULL, paddedImage, NULL);
SDL_SetColorKey(paddedImage, SDL_SRCCOLORKEY, colorKeyPixelFormat);
Afterwards, I generate an OpenGL texture from paddedImage using similar code to the SDL+OpenGL texture loading code found online (I'll post if necessary). This code works if I just want the texture with or without padding, and is likely not the problem.
I realize that I set all pixels in paddedImage to have alpha zero which causes the first problem I mentioned, but I can't seem to figure out how to do this. Should I just loop over the pixels and set the appropriate colors to have alpha zero?
PARTIAL SOLUTION:
Create paddedImage as above
SDL_FillRect the paddedImage with the colorkey
Generate the texture "as usual"
Manually copy the image (SDL_Surface*) pixels to the paddedImage (OGL texture)
This works almost always, except in some cases where the image has 3 color components (i.e. no alpha channel). I'm trying to fix that now by converting those images to 4 color components.
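If you do end up looping over the pixels as mentioned in the question above, a rough sketch of that pass could look like this, assuming paddedImage is 32-bit RGBA (keyR/keyG/keyB are names introduced here for the 0-255 key values):
// Walk the padded surface and force the alpha of every color-key pixel to zero,
// so the GL_RGBA upload treats those pixels as fully transparent.
Uint8 keyR = static_cast<Uint8>(colorKey.R * 255);
Uint8 keyG = static_cast<Uint8>(colorKey.G * 255);
Uint8 keyB = static_cast<Uint8>(colorKey.B * 255);
SDL_LockSurface(paddedImage);
Uint32* px = static_cast<Uint32*>(paddedImage->pixels);
int pixelsPerRow = paddedImage->pitch / 4;
for (int y = 0; y < paddedImage->h; ++y) {
    for (int x = 0; x < paddedImage->w; ++x) {
        Uint8 r, g, b, a;
        SDL_GetRGBA(px[y * pixelsPerRow + x], paddedImage->format, &r, &g, &b, &a);
        if (r == keyR && g == keyG && b == keyB) {
            px[y * pixelsPerRow + x] = SDL_MapRGBA(paddedImage->format, r, g, b, 0);
        }
    }
}
SDL_UnlockSurface(paddedImage);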
I think SDL could be used together with OpenGL if you can convert SDL_Surfaces into OpenGL textures; then you could use the blit functions to combine your surfaces and manipulate things using the SDL workflow.
I don't know exactly what you want to achieve: do you want to transfer one surface to an OpenGL texture and preserve the color key, or do you want to apply the color-keyed surface to another surface and then convert that into an OpenGL texture?
Also, you don't have to use per-pixel alpha, as SDL gives you the ability to use per-surface alpha, but the rules for which alphas and color keys can be combined and used together are quite involved.
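For reference, a per-surface alpha in SDL 1.2 terms is set like this (just a sketch; image and screen are placeholder names):
// SDL 1.2-style per-surface alpha: blit image onto screen at roughly 50% opacity.
SDL_SetAlpha(image, SDL_SRCALPHA, 128);
SDL_BlitSurface(image, NULL, screen, NULL);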
As this is a complex topic, please refer to the SDL reference, and this tutorial may be helpful too (though it doesn't cover the OpenGL side):
http://www.sdltutorials.com/the-ins-and-outs-and-overlays-of-alpha-blending
I need to display image in openGL window.
Image changes every timer tick.
I've checked on Google how, and as far as I can see it can be done using either the glBitmap or the glTexImage2D function.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap". It refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color will be written to the framebuffer. If the bit is 0, then the pixel in the framebuffer will not be altered.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
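A rough sketch of the glDrawPixels path, assuming pixels is a tightly packed w x h RGBA buffer that you refresh every timer tick:
glRasterPos2i(0, 0);  // raster position in the current projection/modelview space
glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);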
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
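And a rough sketch of the glTexImage2D route with fixed-function drawing, assuming tex was allocated once with glTexImage2D, the matrices are identity, and pixels/w/h are the same hypothetical image buffer as above:
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);  // fullscreen quad in normalized device coordinates
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f,  1.0f);
glEnd();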
I'm rendering into an OpenGL offscreen framebuffer object and would like to save it as an image. Note that the FBO is larger than the display size. I can render into the offscreen buffer and use it as a texture, which works. I can also "scroll" this larger texture through the display using an offset, which makes me confident that I'm rendering into a larger context than the window.
If I save the offscreen buffer to an image file it always gets cropped. The code fragment for saving is:
void ofFBOTexture::saveImage(string fileName) {
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
// get the raw buffer from ofImage
unsigned char* pixels = imageSaver.getPixels();
glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);
imageSaver.saveImage(fileName);
}
Please note that the image content is cropped; the visible part is saved correctly (which means no error in pixel formats, GL_RGB issues, etc.), but the remaining space is filled with one color.
So, my question is - what am I doing wrong?
Finally I solved the issue.
I have to activate the fbo for saving its contents:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
// save code
...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
while only selecting the fbo for glReadPixels via
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
doesn't suffice.
(All other things were correct and tested, e.g. viewport sizes, width and height of the buffer, image texture, etc.)
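Put together, the fixed save path looks roughly like this (a sketch based on the snippet in the question; the imageSaver setup is unchanged):
void ofFBOTexture::saveImage(string fileName) {
    // Bind the FBO so its color attachment is what glReadPixels reads from.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT0_EXT);
    // Get the raw buffer from ofImage and read back the full offscreen size.
    unsigned char* pixels = imageSaver.getPixels();
    glReadPixels(0, 0, 1024, 1024, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    imageSaver.saveImage(fileName);
    // Rebind the default framebuffer afterwards.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}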
Two questions to start: How are you creating imageSaver and are you sure your width and height are correct (e.g. are you trying to save a 1024 x 1024 image? What size are you getting)?
You're not doing anything wrong; this has been quite common behaviour with OpenGL graphics drivers for a long time. I recall running up against exactly the same issue on nVidia GeForce 3 cards at least a decade ago, and I adopted a similar solution: rendering to an offscreen texture.
This sounds like you have the wrong viewport size or something similar.