I'm working on a graphics application on an embedded Linux system. The app uses SDL, which is writing to /dev/fb0. This device has a 4-byte pixel format (red, green, blue and alpha). The alpha value is used to blend /dev/fb0 with a motion video plane.
The /dev/fb0 alpha value works as specified when I write to /dev/fb0 directly.
Unfortunately, when I call the SDL functions, the alpha value in /dev/fb0 is set to zero. (For example, calling boxRGBA to fill a rectangle with white results in each framebuffer pixel in the rectangle being set to 0xFFFFFF00.)
Is there a way of making SDL set the alpha values in /dev/fb0 to my desired values?
There's a lot of SDL documentation on alpha, but this seems to relate to SDL operations and not the alpha value in the final framebuffer output.
Edit: fbset output:
mode "1360x768-60"
# D: 85.507 MHz, H: 47.716 kHz, V: 60.020 Hz
geometry 1360 768 1360 768 32
timings 11695 256 64 18 3 112 6
rgba 8/16,8/8,8/0,8/24
endmode
Edit: SDL initialisation and test rectangle fill (note this is SDL 1.2):
SDL_Init(SDL_INIT_VIDEO);
m_screen = SDL_SetVideoMode(0, 0, 0, SDL_FULLSCREEN | SDL_HWSURFACE);
SDL_ShowCursor(SDL_DISABLE);
SDL_FillRect(m_screen, NULL, SDL_MapRGBA(m_screen->format, 0x80, 0x80, 0x80, 0xFF));
SDL_Flip(m_screen);
Edit: debugging shows that the SDL_Surface returned by SDL_SetVideoMode() has an SDL_PixelFormat with no alpha, i.e. Amask is 0, Ashift is 0 and Aloss is 8.
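(A trivial sketch of how those fields can be dumped for inspection; this snippet is my addition and assumes stdio is available:)
/* Print the pixel format SDL chose, to confirm the missing alpha mask. */
printf("bpp=%d Amask=%08X Ashift=%d Aloss=%d\n",
       m_screen->format->BitsPerPixel,
       (unsigned)m_screen->format->Amask,
       m_screen->format->Ashift,
       m_screen->format->Aloss);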
Edit: explicitly setting the alpha channel on the SDL_Surface gives the correct results, but the code looks unsatisfactory: I'm setting fields that are marked for internal use only:
m_screen = SDL_SetVideoMode(0, 0, 0, SDL_FULLSCREEN | SDL_HWSURFACE);
m_screen->format->Amask = 0xFF000000;
m_screen->format->Ashift = 24;
m_screen->format->Aloss = 0;
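An alternative sketch (an assumption on my part, based on the rgba 8/16,8/8,8/0,8/24 layout reported by fbset above) is to build the 32-bit pixel value by hand rather than patching the format fields, since SDL_FillRect writes the raw value straight through on a 32-bit surface:
/* Fill with an explicit ARGB value; layout assumed from the fbset output:
   alpha in bits 24-31, red 16-23, green 8-15, blue 0-7. */
Uint32 pixel = ((Uint32)0xFF << 24) |  /* alpha */
               ((Uint32)0x80 << 16) |  /* red   */
               ((Uint32)0x80 <<  8) |  /* green */
               ((Uint32)0x80 <<  0);   /* blue  */
SDL_FillRect(m_screen, NULL, pixel);
SDL_Flip(m_screen);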
I've been tearing my hair out over how to do this simple effect. I've got an image (see below), and when this image is used in a game, it produces a clockwise transition-to-black effect. I have been trying to recreate this effect in SDL(2), but to no avail. I know it's got something to do with masking, but I've no idea how to do that in code.
The closest I could get was by using "SDL_SetColorKey" and incrementing the RGB values so it would not draw the "wiping" part of the animation.
Uint32 colorkey = SDL_MapRGBA(blitSurf->format,
0xFF - counter,
0xFF - counter,
0xFF - counter,
0
);
SDL_SetColorKey(blitSurf, SDL_TRUE, colorkey);
// Yes, I'm turning the surface into a texture every frame!
SDL_DestroyTexture(streamTexture);
streamTexture = SDL_CreateTextureFromSurface(RENDERER, blitSurf);
SDL_RenderCopy(RENDERER, streamTexture, NULL, NULL);
I've searched all over and am now just desperate for an answer, for my own curiosity (and sanity)! I guess this question isn't exactly specific to SDL; I just need to know how to think about this!
I came up with a solution. It's expensive, but it works: iterate through every pixel in the image and map the colour like so:
int tempAlpha = (int)alpha + (speed * 5) - (int)color;
int tempColor = (int)color - speed;
*pixel = SDL_MapRGBA(fmt,
(Uint8)tempColor,
(Uint8)tempColor,
(Uint8)tempColor,
(Uint8)tempAlpha
);
Where alpha is the current alpha of the pixel, speed is the parameterised speed of the animation, and color is the current color of the pixel. fmt is the SDL_PixelFormat of the image. This is for fading to black; the following is for fading in from black:
if ((255 - counter) > origColor)
continue;
int tempAlpha = alpha - speed*5;
*pixel = SDL_MapRGBA(fmt,
(Uint8)0,
(Uint8)0,
(Uint8)0,
(Uint8)tempAlpha
);
Where origColor is the color of the pixel in the original grayscale image.
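For context, here is a sketch of the per-pixel loop around that mapping. blitSurf and the formula come from the snippets above; the locking and pointer arithmetic are assumptions (32-bit surface, grayscale image so r == g == b):
void applyWipeStep(SDL_Surface* blitSurf, int speed)
{
    SDL_LockSurface(blitSurf);
    SDL_PixelFormat* fmt = blitSurf->format;
    Uint32* pixels = (Uint32*)blitSurf->pixels;
    int pitchInPixels = blitSurf->pitch / 4;
    for (int y = 0; y < blitSurf->h; ++y) {
        for (int x = 0; x < blitSurf->w; ++x) {
            Uint32* pixel = &pixels[y * pitchInPixels + x];
            Uint8 color, g, b, alpha;
            SDL_GetRGBA(*pixel, fmt, &color, &g, &b, &alpha); /* grayscale: r == g == b */
            int tempAlpha = (int)alpha + (speed * 5) - (int)color;
            int tempColor = (int)color - speed;
            *pixel = SDL_MapRGBA(fmt,
                                 (Uint8)tempColor,
                                 (Uint8)tempColor,
                                 (Uint8)tempColor,
                                 (Uint8)tempAlpha);
        }
    }
    SDL_UnlockSurface(blitSurf);
}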
I made a quick API to do all of this, so feel free to check it out: https://github.com/Slynchy/SDL-AlphaMaskWipes
Running on OS X, I've loaded a texture in OpenGL using the SDL_Image library (using IMG_Load(), which returns an SDL_Surface*). It appeared that the color channels had been swapped, i.e. I had to set GL_BGRA as the pixel format parameter in glTexImage2D().
Is there a way to determine the correct data format (BGRA, RGBA, etc.) without simply compiling and checking the texture? And what is the reason that SDL swaps these color channels?
Yes. The following link has code examples showing how to determine the channel shift for each component: http://wiki.libsdl.org/SDL_PixelFormat#Code_Examples
From the site:
SDL_PixelFormat *fmt;
SDL_Surface *surface;
Uint32 temp, pixel;
Uint8 red, green, blue, alpha;
.
.
.
fmt = surface->format;
SDL_LockSurface(surface);
pixel = *((Uint32*)surface->pixels);
SDL_UnlockSurface(surface);
/* Get Red component */
temp = pixel & fmt->Rmask; /* Isolate red component */
temp = temp >> fmt->Rshift; /* Shift it down to 8-bit */
temp = temp << fmt->Rloss; /* Expand to a full 8-bit number */
red = (Uint8)temp;
You should be able to sort the Xmasks by value. Then you can determine whether it's RGBA or BGRA. If Xmask == 0, the color channel does not exist.
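For example, a minimal sketch of that check for a 32-bit surface (my assumptions: a little-endian machine, and GL_BGRA being available; fmt and surface are the variables from the snippet above):
GLenum texFormat;
if (fmt->Rmask == 0x000000FF)
    texFormat = GL_RGBA;   /* bytes in memory: R,G,B,A */
else if (fmt->Rmask == 0x00FF0000)
    texFormat = GL_BGRA;   /* bytes in memory: B,G,R,A */
else
    texFormat = GL_RGBA;   /* unexpected layout: convert the surface first */

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surface->w, surface->h, 0,
             texFormat, GL_UNSIGNED_BYTE, surface->pixels);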
I have no idea why the swaps occur.
Edit: Changed from Xshift to Xmask, as the latter can be used to determine both the location AND the existence of color channels.
I'm working on an NES emulator right now and I'm having trouble figuring out how to render the pixels. I am using a 3-dimensional array to hold the RGB value of each pixel. The array definition looks like this for the 256 x 224 screen size:
byte screenData[224][256][3];
For example, [0][0][0] holds the blue value, [0][0][1] holds the green values and [0][0][2] holds the red value of the pixel at screen position [0][0].
When the vblank flag goes high, I need to render the screen. When SDL goes to render the screen, the screenData array will be full of the RGB values for each pixel. I found a function named SDL_CreateRGBSurfaceFrom that looks like it might work for what I want to do. However, all of the examples I have seen use 1-dimensional arrays for the RGB values, rather than a 3-dimensional array.
What would be the best way for me to render my pixels? It would also be nice if the function allowed me to resize the surface somehow so I didn't have to use a 256 x 224 window size.
You need to store the data as a one-dimensional char array:
int channels = 3; // for a RGB image
char* pixels = new char[img_width * img_height * channels];
// populate pixels with real data ...
SDL_Surface *surface = SDL_CreateRGBSurfaceFrom((void*)pixels,
img_width,
img_height,
channels * 8, // bits per pixel = 24
img_width * channels, // pitch
0x0000FF, // red mask
0x00FF00, // green mask
0xFF0000, // blue mask
0); // alpha mask (none)
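Since the screenData[224][256][3] array from the question is already contiguous in memory, it can probably be passed in directly; a sketch, assuming a little-endian machine (where the first byte of each pixel corresponds to the low bits of the masks) and the blue-first byte order described in the question:
SDL_Surface* nesSurface = SDL_CreateRGBSurfaceFrom(
    (void*)screenData,   // byte screenData[224][256][3] from the question
    256, 224,            // width, height
    24,                  // bits per pixel
    256 * 3,             // pitch in bytes (one row)
    0xFF0000,            // red mask   (third byte of each pixel)
    0x00FF00,            // green mask (second byte)
    0x0000FF,            // blue mask  (first byte)
    0);                  // no alpha mask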
In SDL 2.0, use an SDL_Texture created with SDL_TEXTUREACCESS_STREAMING together with SDL_RenderCopy; it's faster than plotting individual pixels with SDL_RenderDrawPoint. See the sketch after the links below.
See:
official example: http://hg.libsdl.org/SDL/file/e12c38730512/test/teststreaming.c
my derived example which does not require blob data and compares both methods: https://github.com/cirosantilli/cpp-cheat/blob/0607da1236030d2e1ec56256a0d12cadb6924a41/sdl/plot2d.c
Related: Why do I get bad performance with SDL2 and SDL_RenderCopy inside a double for loop over all pixels?
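A minimal sketch of that streaming approach; renderer and pixelBuffer (a 256x224 buffer of Uint32 ARGB pixels) are placeholders:
// Create the streaming texture once.
SDL_Texture* tex = SDL_CreateTexture(renderer,
                                     SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING,
                                     256, 224);
// Each frame, after filling pixelBuffer:
SDL_UpdateTexture(tex, NULL, pixelBuffer, 256 * sizeof(Uint32)); // pitch in bytes
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, tex, NULL, NULL);  // scales to the window size
SDL_RenderPresent(renderer);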
I was searching for how to create a transparent surface in SDL, and I found the following: http://samatkins.co.uk/blog/2012/04/25/sdl-blitting-to-transparent-surfaces/
Basically, it is:
SDL_Surface* surface;
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
surface = SDL_CreateRGBSurface(SDL_HWSURFACE,width,height,32, 0xFF000000, 0x00FF0000, 0x0000FF00, 0x000000FF);
#else
surface = SDL_CreateRGBSurface(SDL_HWSURFACE,width,height,32, 0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
#endif
and it works, but it seems pretty damn awful to me, so I was wondering if there is some better way of doing this.
What you have there is a check to see whether the computer uses big-endian or little-endian byte order. SDL is multiplatform, and computers use different endianness.
The author of that article was writing it in a "platform agnostic" manner. If you are running this on a PC, you'll probably be safe just using:
surface = SDL_CreateRGBSurface(SDL_HWSURFACE,width,height,32, 0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
You don't need the conditionals.
That being said, the code will not be portable to platforms that use big-endian byte order.
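For what it's worth, a sketch of the same check factored into named mask constants, so the conditional only appears once (width and height are from the snippet above):
/* Define the masks once, then create the surface without repeating the call. */
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
static const Uint32 rmask = 0xFF000000, gmask = 0x00FF0000,
                    bmask = 0x0000FF00, amask = 0x000000FF;
#else
static const Uint32 rmask = 0x000000FF, gmask = 0x0000FF00,
                    bmask = 0x00FF0000, amask = 0xFF000000;
#endif

SDL_Surface* surface = SDL_CreateRGBSurface(SDL_HWSURFACE, width, height, 32,
                                            rmask, gmask, bmask, amask);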
I have a bit of experience with SDL2 from my IT class, but I've been developing a simplified set of functions that use SDL, and the way I load my images is like this:
ImageId LoadBmp(string FileName, int red, int green, int blue){
SDL_Surface* image = SDL_LoadBMP(FileName.c_str()); // File is loaded in the SDL_Surface* type variable
GetDisplayError(!image, string("LoadBmp:\n Couldn't load image file ") + FileName); // Check if the file is found
Images.push_back(image); // Send the file to the Images vector
SDL_SetColorKey(Images[Images.size() - 1], SDL_TRUE, // enable color key (transparency)
SDL_MapRGB(Images[Images.size() - 1]->format, red, green, blue)); // This is the color that should be taken as being the 'transparent' part of the image
// Create a texture from surface (image)
SDL_Texture* Texture = SDL_CreateTextureFromSurface(renderer, Images[Images.size() - 1]);
Textures.push_back(Texture);
return Images.size() - 1; // ImageId becomes the position of the file in the vector
}
What you are probably looking for is
SDL_SetColorKey(Images[Images.size() - 1], SDL_TRUE, // enable color key (transparency)
SDL_MapRGB(Images[Images.size() - 1]->format, red, green, blue)); // This is the color that should be taken as being the 'transparent' part of the image
By doing so, you set the given RGB color to be treated as transparent. Hope this helps! Here's the SDL ready-functions template I'm currently working on; you should be able to use some of those!
https://github.com/maxijonson/SDL2.0.4-Ready-Functions-Template
Actually, we call it alpha blending, and you can read about it here:
http://lazyfoo.net/tutorials/SDL/13_alpha_blending/index.php
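For reference, a minimal SDL2 sketch of alpha blending a texture over whatever has already been rendered; texture and renderer are placeholders:
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND); // enable blending for this texture
SDL_SetTextureAlphaMod(texture, 128);                  // 0 = invisible, 255 = opaque
SDL_RenderCopy(renderer, texture, NULL, NULL);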
How can I use the RGBA5555 pixel format in cocos2d?
I define my pixel formats like this:
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA4444];
and I've found these:
// Available textures
// kCCTexture2DPixelFormat_RGBA8888 - 32-bit texture with Alpha channel
// kCCTexture2DPixelFormat_RGB565 - 16-bit texture without Alpha channel
// kCCTexture2DPixelFormat_A8 - 8-bit textures used as masks
// kCCTexture2DPixelFormat_I8 - 8-bit intensity texture
// kCCTexture2DPixelFormat_AI88 - 16-bit textures used as masks
// kCCTexture2DPixelFormat_RGBA4444 - 16-bit textures: RGBA4444
// kCCTexture2DPixelFormat_RGB5A1 - 16-bit textures: RGB5A1
// kCCTexture2DPixelFormat_PVRTC4 - 4-bit PVRTC-compressed texture: PVRTC4
// kCCTexture2DPixelFormat_PVRTC2 - 2-bit PVRTC-compressed texture: PVRTC2
but I can't seem to find RGBA5555. Any thoughts on that?
There is no RGBA5555 format. That would amount to 4 times 5 bits = 20 bits; no such texture format exists anywhere.
If you're looking for RGBA5551, meaning 5 bits for each of the RGB channels and 1 bit for alpha, then you're looking for the RGB5A1 format.