SDL rendering too slow - C++

I'm a beginner in both C++ and SDL, and I'm slowly writing a small game following some tutorials to learn some concepts.
I'm having a problem, though: my rendering seems to be really slow.
I've used performance counters to time my loop with and without my rendering function. Without it, I get roughly 0-2 ms per frame; when I add the rendering, it goes up to about 65 ms per frame.
Could someone tell me what is wrong with my rendering function?
SDL_Texture *texture;
...
// gets called by the main loop
void render(int x_offset, int y_offset)
{
    if (texture)
    {
        SDL_DestroyTexture(texture);
    }
    texture = SDL_CreateTexture(renderer,
                                SDL_PIXELFORMAT_ARGB8888,
                                SDL_TEXTUREACCESS_STREAMING,
                                texture_w,
                                texture_h);
    if (SDL_LockTexture(texture, NULL, &pixel_memory, &pitch) < 0) {
        printf("Oops! %s\n", SDL_GetError());
    }
    Uint32 *pixel;
    Uint8 *row = (Uint8 *) pixel_memory;
    for (int j = 0; j < texture_h; ++j) {
        pixel = (Uint32 *)((Uint8 *) pixel_memory + j * pitch);
        for (int i = 0; i < texture_w; ++i) {
            Uint8 alpha = 255;
            Uint8 red = 172;
            Uint8 green = 0;
            Uint8 blue = 255;
            *pixel++ = ((alpha << 24) | (red << 16) | (green << 8) | (blue));
        }
    }
    SDL_UnlockTexture(texture);
}

It's most likely slow because you're destroying and creating the texture every single frame. Locking textures and uploading pixel data isn't super fast either, but I doubt it's the bottleneck here. I strongly recommend allocating the texture once before entering your main loop, re-using it during rendering, and destroying it before your program exits.
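Something like this (untested), keeping the same renderer, texture_w/texture_h and fill colour as your snippet - init_texture and shutdown_texture are just placeholder names:

SDL_Texture *texture = NULL;

// Called once, before entering the main loop.
void init_texture()
{
    texture = SDL_CreateTexture(renderer,
                                SDL_PIXELFORMAT_ARGB8888,
                                SDL_TEXTUREACCESS_STREAMING,
                                texture_w, texture_h);
}

// Called by the main loop: only lock, fill and unlock - no create/destroy.
void render(int x_offset, int y_offset)
{
    void *pixel_memory;
    int pitch;
    if (SDL_LockTexture(texture, NULL, &pixel_memory, &pitch) < 0) {
        printf("Oops! %s\n", SDL_GetError());
        return;
    }
    for (int j = 0; j < texture_h; ++j) {
        Uint32 *pixel = (Uint32 *)((Uint8 *)pixel_memory + j * pitch);
        for (int i = 0; i < texture_w; ++i) {
            *pixel++ = (255u << 24) | (172u << 16) | (0u << 8) | 255u; // same ARGB fill as above
        }
    }
    SDL_UnlockTexture(texture);
}

// Called once, after the main loop.
void shutdown_texture()
{
    SDL_DestroyTexture(texture);
}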

SDL2 is based on hardware rendering. Accessing textures, even with the streaming flag, won't be fast, since you're playing ping-pong with the GPU.
Instead of creating and destroying a texture each frame, you should consider simply clearing it before redrawing.
Another option would be to use a surface: you do your per-pixel work on the surface and then draw it as a texture. I'm not sure the gain would be huge, but it should still be better than destroying, creating, locking and unlocking a texture each frame.
Looking at your code, I understand it's just a test, but you could also try rendering to a texture with SDL's drawing primitives.
Lastly, keep in mind during your tests that your driver might force vertical sync, which can make performance look worse than it really is.
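If you try the surface route, an untested sketch could look like this (needs SDL 2.0.5+ for SDL_CreateRGBSurfaceWithFormat; error checks omitted, and texture_w/texture_h/renderer are the ones from your snippet):

// Done once: a software surface to draw into, plus a matching streaming texture.
SDL_Surface *scratch = SDL_CreateRGBSurfaceWithFormat(0, texture_w, texture_h,
                                                      32, SDL_PIXELFORMAT_ARGB8888);
SDL_Texture *texture = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                         SDL_TEXTUREACCESS_STREAMING,
                                         texture_w, texture_h);

// Every frame: write pixels into the surface, then upload it in one call.
Uint32 *pixels = (Uint32 *)scratch->pixels;
for (int j = 0; j < texture_h; ++j)
    for (int i = 0; i < texture_w; ++i)
        pixels[j * (scratch->pitch / 4) + i] = 0xFFAC00FF; // same ARGB colour as your loop

SDL_UpdateTexture(texture, NULL, scratch->pixels, scratch->pitch);
SDL_RenderCopy(renderer, texture, NULL, NULL);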

Probably nothing. Locking textures for direct pixel access is slow. Chances are, you can do a lot of additional stuff in the render function and not see any further decrease in speed.
If you want faster rendering, you need higher-level functions.
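For example, if all you ultimately need is a flat fill like your test loop, the renderer-level calls can do it without ever locking a texture (a sketch, using the same renderer as in your code):

// Equivalent solid fill with the renderer's higher-level API: no lock, no per-pixel loop.
SDL_SetRenderDrawColor(renderer, 172, 0, 255, 255); // R, G, B, A
SDL_RenderClear(renderer);                          // or SDL_RenderFillRect() for a sub-region
SDL_RenderPresent(renderer);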

Related

SDL_gpu: Why is blitting two images in two separate for loops way faster?

So I'm currently trying out some stuff in SDL_gpu/C++ and I have the following setup. Both images are 32 by 32 pixels, and the second image is transparent.
//..sdl init..//
GPU_Image* image = GPU_LoadImage("path");
GPU_Image* image2 = GPU_LoadImage("otherpath");
for (int i = 0; i < screenheight; i += 32) {
    for (int j = 0; j < screenwidth; j += 32) {
        GPU_Blit(image, NULL, screen, j, i);
        GPU_Blit(image2, NULL, screen, j, i);
    }
}
This code runs at ~20 FPS on a WQHD-sized screen. When I do the following, however,
for (int i = 0; i < screenheight; i += 32) {
    for (int j = 0; j < screenwidth; j += 32) {
        GPU_Blit(image, NULL, screen, j, i);
    }
}
for (int i = 0; i < screenheight; i += 32) {
    for (int j = 0; j < screenwidth; j += 32) {
        GPU_Blit(image2, NULL, screen, j, i);
    }
}
i.e. separate the two blit calls into two different for loops, I get 300 FPS.
Can someone explain this to me, or does anyone have an idea what might be going on here?
While cache locality might have an impact, I don't think it is the main issue here, especially considering the drop of frame time from 50ms to 3.3ms.
The call of interest is of course GPU_Blit, which is defined here as making some checks followed by a call to _gpu_current_renderer->impl->Blit. This Blit function seems to refer to the same one, regardless of the renderer. It's defined here.
A lot of code in there makes use of the image parameter, but two functions in particular, prepareToRenderImage and bindTexture, call FlushBlitBuffer several times if you are not rendering the same thing as in the previous blit. That looks to me like an expensive operation. I haven't used SDL_gpu before, so I can't guarantee anything, but it necessarily makes more glDraw* calls if you render something other than what you rendered previously, than if you render the same thing again and again. And glDraw* calls are usually the most expensive API calls in an OpenGL application.
It's relatively well known in 3D graphics that making as few changes to the context (in this case, the image to blit) as possible can improve performance, simply because it makes better use of the bandwidth between CPU and GPU. A typical example is grouping together all the rendering that uses some particular set of textures (e.g. materials). In your case, it's grouping all the rendering of one image, and then of the other image.
While both examples render the same number of textures, the first one forces the GPU to make hundreds or thousands of texture binds (depending on screen size), while the second makes only 2.
The cost of rendering a texture is very cheap on modern GPUs, while texture binds (switching to another texture) are quite expensive.
Note that you can use a texture atlas to alleviate the texture-bind bottleneck while retaining the desired render order.
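For illustration, an untested sketch of the atlas idea, assuming you pack both 32x32 sprites side by side into one 64x32 image (the file name and layout here are made up) and use GPU_Blit's source-rectangle parameter:

GPU_Image* atlas = GPU_LoadImage("atlas.png");  // hypothetical packed image: sprite A at x=0, sprite B at x=32
GPU_Rect srcA = {  0.0f, 0.0f, 32.0f, 32.0f };
GPU_Rect srcB = { 32.0f, 0.0f, 32.0f, 32.0f };

for (int i = 0; i < screenheight; i += 32) {
    for (int j = 0; j < screenwidth; j += 32) {
        // Same texture for both blits, so no texture switch between them.
        GPU_Blit(atlas, &srcA, screen, j, i);
        GPU_Blit(atlas, &srcB, screen, j, i);
    }
}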

Why is my UWP game slower in release than in debug mode?

I'm trying to make a UWP game, and I came across a problem where my game is much slower in release mode than it is in debug mode.
My game will draw a 3D view (Dungeon Master style) and will have a UI part that draws over the 3D view. Because the 3D view can slow down to a small number of frames per second (FPS), I decided to have my game always run the UI part at 60 FPS.
Here is what the main game loop looks like, in some pseudo code:
Gameloop start
    Update game data
    copy the last finished 3D view from its buffer to the screen
    draw UI part
    3D view loop start
        If there is no more time to draw more textures on the 3D view, exit the 3D view loop
        Draw one texture to the 3D view buffer
    3D view loop end --> 3D view loop start
Gameloop end --> Gameloop start
Here are the actual update and render functions:
void Dungeons_of_NargothMain::Update()
{
    m_ritonTimer.startTimer(static_cast<int>(E_RITON_TIMER::UI));
    m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::UI_FRAME_COUNT);
    m_ritonTimer.manageFramesPerSecond((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    if (m_sceneRenderer->m_numberTotalOfTexturesToDraw == 0 ||
        m_sceneRenderer->m_numberTotalOfTexturesToDraw <= m_sceneRenderer->m_numberOfTexturesDrawn)
    {
        m_sceneRenderer->m_numberTotalOfTexturesToDraw = 150000;
        m_sceneRenderer->m_numberOfTexturesDrawn = 0;
    }
}

// RENDER
bool Dungeons_of_NargothMain::Render()
{
    //********************************//
    //      Render UI part here       //
    //********************************//

    //**********************************//
    // Render 3D view to 960X540 screen //
    //**********************************//
    m_sceneRenderer->setRenderTargetTo960X540Screen(); // 3D view buffer screen
    bool screen960GotFullDrawn = false;
    bool stillenoughTimeLeft = true;
    while (stillenoughTimeLeft && (!screen960GotFullDrawn))
    {
        stillenoughTimeLeft = m_ritonTimer.enoughTimeForOneMoreTexture((int)E_RITON_TIMER::UI);
        screen960GotFullDrawn = m_sceneRenderer->renderNextTextureTo960X540Screen();
    }
    if (screen960GotFullDrawn)
        m_ritonTimer.frameCountPlusOne((int)E_RITON_TIMER::LABY_FRAME_COUNT);
    return true;
}
I removed what is not essential.
Here is the timer part (RitonTimer):
#pragma once
#include "pch.h"
#include <wrl.h>
#include "RitonTimer.h"

Dungeons_of_Nargoth::RitonTimer::RitonTimer()
{
    initTimer();
    if (!QueryPerformanceCounter(&m_qpcGameStartTime))
    {
        throw ref new Platform::FailureException();
    }
}

void Dungeons_of_Nargoth::RitonTimer::startTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = 0;
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::resetTimer(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart;
    m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
    m_frameCount[timerIndex] = 0;
}

void Dungeons_of_Nargoth::RitonTimer::frameCountPlusOne(int timerIndex)
{
    m_frameCount[timerIndex]++;
}

void Dungeons_of_Nargoth::RitonTimer::manageFramesPerSecond(int timerIndex)
{
    if (!QueryPerformanceCounter(&m_qpcNowTime))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime >= m_qpcFrequency.QuadPart)
    {
        m_framesPerSecond[timerIndex] = m_frameCount[timerIndex];
        m_frameCount[timerIndex] = 0;
        m_qpcStartTime[timerIndex] += m_qpcFrequency.QuadPart;
        if ((m_qpcStartTime[timerIndex] + m_qpcFrequency.QuadPart) < m_qpcNowTime.QuadPart)
            m_qpcStartTime[timerIndex] = m_qpcNowTime.QuadPart - m_qpcFrequency.QuadPart;
    }
}

void Dungeons_of_Nargoth::RitonTimer::initTimer()
{
    if (!QueryPerformanceFrequency(&m_qpcFrequency))
    {
        throw ref new Platform::FailureException();
    }
    m_qpcOneFrameTime = m_qpcFrequency.QuadPart / 60;
    m_qpc5PercentOfOneFrameTime = m_qpcOneFrameTime / 20;
    m_qpc10PercentOfOneFrameTime = m_qpcOneFrameTime / 10;
    m_qpc95PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc5PercentOfOneFrameTime;
    m_qpc90PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc80PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc70PercentOfOneFrameTime = m_qpcOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc60PercentOfOneFrameTime = m_qpc70PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc50PercentOfOneFrameTime = m_qpc60PercentOfOneFrameTime - m_qpc10PercentOfOneFrameTime;
    m_qpc45PercentOfOneFrameTime = m_qpc50PercentOfOneFrameTime - m_qpc5PercentOfOneFrameTime;
}

bool Dungeons_of_Nargoth::RitonTimer::enoughTimeForOneMoreTexture(int timerIndex)
{
    while (!QueryPerformanceCounter(&m_qpcNowTime));
    m_qpcDeltaTime = m_qpcNowTime.QuadPart - m_qpcStartTime[timerIndex];
    if (m_qpcDeltaTime < m_qpc45PercentOfOneFrameTime)
        return true;
    else
        return false;
}
In debug mode the game's UI runs at 60 FPS, and the 3D view at about 1 FPS on my PC. But even there, I'm not sure why I have to stop my texture drawing at 45% of one frame time and call present to get the 60 FPS; if I wait longer I only get 30 FPS. (This value is set in "enoughTimeForOneMoreTexture()" in RitonTimer.)
In release mode it drops dramatically, to something like 10 FPS for the UI part and 1 FPS for the 3D part. I've tried to find out why for the last 2 days, without success.
I also have another small question: how do I tell Visual Studio that my game is actually a game and not an app? Or does Microsoft do the "switch" when I send my game to their store?
I have put my game on my OneDrive so everyone can download the source files, try to compile it, and see if you get the same results as me:
OneDrive link: https://1drv.ms/f/s!Aj7wxGmZTdftgZAZT5YAbLDxbtMNVg
Compile in either x64 Debug or x64 Release mode.
UPDATE:
I think I found the explanation for why my game is slower in release mode.
The CPU is probably not waiting for each drawing instruction to finish, but simply adds it to a list which is forwarded to the GPU at its own pace in a separate task (or maybe the GPU does that caching itself). That would explain it all.
My plan was to draw the UI first and then draw as many textures from the 3D view as possible until 95% of a 1/60th-second frame time had passed, then present it to the swap chain. The UI would always be at 60 FPS and the 3D view would be as fast as the system allows (also at 60 FPS if it can all be drawn within 95% of the frame time).
This didn't work because it probably queued all the draw instructions my 3D view issued (I was testing with 150000 BIG texture draw instructions for the 3D view) within one frame time, and so of course the UI ended up as slow as the 3D view, or close to it.
That is also why, even in debug mode, I didn't get 60 FPS when I waited for 95% of a frame time; I had to stop at 45% of a frame time to get the 60 FPS I wanted for the UI.
I tested it with a lower value in release mode to verify that theory, and indeed I also get 60 FPS for the UI when I stop the drawing at only 15% of a frame time.
I thought it worked like this only in DirectX 12.
"How do i tell visual studio that my game is actually a game and not an app" - there's no difference, a game is an app.
I have your code running at 300-400 FPS now in debug mode.
Firstly, I commented out your code that checks whether you've got time to render another texture. Don't do that. Everything the player sees should render within a single frame. If your frame is taking more than 16 ms (with a 60 FPS target), look for expensive operations, or calls that are made repeatedly and add up to some unexpected cost. Look for code that does something repeatedly when it only needs to do it once per frame or per resize, etc.
So the issue is that you were rendering very large textures and a lot of them. You want to avoid overdraw (rendering a pixel where you've already rendered a pixel). You can have a bit of overdraw and that's sometimes preferable to being pedantic. But you were drawing 1000x2000 textures over and over again. So you were absolutely killing the pixel shader. It just can't render that many pixels. I didn't bother looking at the code that tries to control texture rendering based on frame time remaining. For what you're trying to do, that's not helpful.
Inside your render method, comment out the while and if/else sections and use this to draw an array of your textures:
// set sprite dimensions
int w = 64, h = 64;
for (int y = 0; y < 16; y++)
{
    for (int x = 0; x < 16; x++)
    {
        m_sceneRenderer->renderNextTextureTo960X540Screen(x*64, y*64, w, h);
    }
}
and in RenderNextTextureToScreen(int x, int y, int w, int h) ..
m_squareBuffer.sizeX = w; // 1000;
m_squareBuffer.sizeY = h; // 2000;
m_squareBuffer.posX = x; // (float)(rand() % 1920);
m_squareBuffer.posY = y; // (float)(rand() % 1080);
See how this code renders much smaller textures; the textures are 64x64 and there's no overdraw.
And just be aware that the GPU isn't all-powerful. It can do a lot if you use it right, but if you just throw crazy operations at it, you can grind it to a halt, just like with the CPU. So try to render things that 'look normal', that you can imagine being in a game. You'll learn in time what's sensible and what isn't.
The most likely explanation for the code running slower in release mode is that your timing and rendering-limiter code was broken. It wasn't working properly because the 3D view was running at 1 FPS, so who knows what its behaviour was. With the changes I've made, the program runs faster in release mode, as expected. Your clock code now shows 600-1600 FPS in release mode for me.

SDL tilemap rendering quite slow

I'm using SDL to write a simulation that displays quite a big tilemap (around 240*240 tiles). Since I'm quite new to the SDL library, I can't really tell whether the pretty slow performance while rendering more than 50,000 tiles is actually normal. Every tile is visible at all times and is about 4*4 px big. Currently the code iterates through a 2D array every frame and renders every single tile, which gives me about 40 FPS - too slow to actually put any game logic behind the system.
I tried to find some alternative approaches, like only redrawing tiles that changed, but people always commented on how this is bad practice, that the renderer is supposed to be cleared every frame, and so on.
Here's a picture of the map.
So I basically wanted to ask whether there is any more performant approach than rendering every single tile every frame.
Edit: Here's the simple rendering method I'm using:
void World::DirtyBiomeDraw(Graphics *graphics) {
    if(_biomeTexture == NULL) {
        _biomeTexture = graphics->loadImage("assets/biome_sprites.png");
        printf("Biome texture loaded.\n");
    }
    for(int i = 0; i < globals::WORLD_WIDTH; i++) {
        for(int l = 0; l < globals::WORLD_HEIGHT; l++) {
            SDL_Rect srect;
            srect.h = globals::SPRITE_SIZE;
            srect.w = globals::SPRITE_SIZE;
            if(sites[l][i].biome > 0) {
                srect.y = 0;
                srect.x = (globals::SPRITE_SIZE * sites[l][i].biome) - globals::SPRITE_SIZE;
            }
            else {
                srect.y = globals::SPRITE_SIZE;
                srect.x = globals::SPRITE_SIZE * fabs(sites[l][i].biome);
            }
            SDL_Rect drect = {i * globals::SPRITE_SIZE * globals::SPRITE_SCALE, l * globals::SPRITE_SIZE * globals::SPRITE_SCALE,
                              globals::SPRITE_SIZE * globals::SPRITE_SCALE, globals::SPRITE_SIZE * globals::SPRITE_SCALE};
            graphics->blitOnRenderer(_biomeTexture, &srect, &drect);
        }
    }
}
In this context every tile is called a "site"; that's because they also store information like moisture, temperature and so on.
Every site gets a biome assigned during the generation process. Every biome is basically an ID: every land biome has an ID greater than 0 and every water biome's ID is 0 or lower.
This allows me to put every biome sprite, ordered by ID, into the "biome_sprites.png" image. All the land sprites are in the first row, while all the water tiles are in the second row. This way I don't have to manually assign a sprite to a biome, and the method can do it itself by multiplying the tile size (basically the width) by the biome ID.
Here's the biome ID table from my SDD/GDD and the actual spritesheet.
The blitOnRenderer method from the graphics class basically just runs SDL_RenderCopy, blitting the texture onto the renderer.
void Graphics::blitOnRenderer(SDL_Texture *texture, SDL_Rect *sourceRectangle, SDL_Rect *destinationRectangle) {
    SDL_RenderCopy(this->_renderer, texture, sourceRectangle, destinationRectangle);
}
In the game loop, RenderClear and RenderPresent get called every frame.
I really hope I explained it understandably. Ask anything you want; I'm the one asking you guys for help, so the least I can do is be cooperative :D
Poke the SDL2 devs for a multi-item version of SDL_RenderCopy() (similar to the existing SDL_RenderDrawLines()/SDL_RenderDrawPoints()/SDL_RenderDrawRects() functions) and/or batched SDL_Renderer backends.
Right now you're trying to slam at least 240*240 = 57,600 draw calls down the GPU's throat; you can usually only count on 1000-4000 draw calls in any given 16 milliseconds.
Alternatively, switch to OpenGL and do the batching yourself.
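Until then, if the map only changes occasionally, one workaround with plain SDL_Renderer (my suggestion, not guaranteed to fit your simulation) is to render the whole tilemap once into a target texture and then copy that single texture each frame. It needs a renderer created with SDL_RENDERER_TARGETTEXTURE, and the object/size names below are placeholders:

// Build once (and again whenever the map actually changes).
SDL_Texture *mapCache = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_ARGB8888,
                                          SDL_TEXTUREACCESS_TARGET,
                                          mapPixelWidth, mapPixelHeight);
SDL_SetRenderTarget(renderer, mapCache);
SDL_RenderClear(renderer);
world.DirtyBiomeDraw(graphics);        // your existing per-tile loop, run once
SDL_SetRenderTarget(renderer, NULL);   // back to the default target

// Every frame: a single draw call instead of ~57,600.
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, mapCache, NULL, NULL);
SDL_RenderPresent(renderer);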

DirectX using multiple Render Targets as input to each other

I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
    nextStep = 0;

mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
    mGraphicsObj->SetZBufferState(false);
    mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());
    ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
                                              mFrameArray[prevStep]->GetTextureResource() };
    result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
    if (!result)
        return false;

    // perform any extra input
    I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
    if (inputSystem->IsMouseLeftDown()) {
        int x,y;
        inputSystem->GetMousePos(x,y);
        int width,height;
        mGraphicsObj->GetScreenDimensions(width,height);
        float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
        float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);
        mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);
        result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
        if (!result)
            return false;
    }
    mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();

prevStep = simStep;
simStep = nextStep;
ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();

// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
    return false;
mGraphicsObj->SetZBufferState(true);
The problem is that nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation actually runs. I can provide the shader code if required, but I'm certain it works, since I've implemented this before on the CPU using the same algorithm. I'm just not certain how D3D render targets are supposed to be used and whether I'm simply drawing incorrectly every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();
    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());
    float color[4];
    // Setup the color to clear the buffer to.
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;
    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);
    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const {
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the Pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists - nothing actually gets run or updated. I took the render target code from here and slightly modified it, in case it's of any help: http://rastertek.com/dx11tut22.html
Okay, if I understand correctly, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create the textures with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render target views (and SRVs) from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error-prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer. Sometimes we make obvious errors and the debug output can point to them.
Use a graphics debugger to review your textures after each draw call.
Edit 1:
As far as I can see, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. What you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
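In raw D3D11 calls, one step of that ping-pong could look roughly like this (sketch only; context, rtv2, srv1 and vertexCount are placeholders for your own objects, and note the explicit unbinding, which relates to your Edit 2):

ID3D11ShaderResourceView* nullSRV[1] = { nullptr };

// Make sure the texture about to become the render target is not still bound as an SRV.
context->PSSetShaderResources(0, 1, nullSRV);

// texture1 as input, texture2 as output.
context->OMSetRenderTargets(1, &rtv2, nullptr);
context->PSSetShaderResources(0, 1, &srv1);
context->Draw(vertexCount, 0);   // or DrawIndexed() for your quad

// Unbind texture2 from the output before it becomes the next pass's input.
ID3D11RenderTargetView* nullRTV[1] = { nullptr };
context->OMSetRenderTargets(1, nullRTV, nullptr);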
Also, we don't know yet what mRenderTargetView is.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It's probably better to review your Begin()/End() design to make the resource binding more clearly visible.
Happy coding! =)

C++/SDL: Fading out a surface already having per-pixel alpha information

Suppose we have a 32-bit PNG file of some ghostly/incorporeal character, which is drawn in a semi-transparent fashion. It is not equally transparent in every place, so we need the per-pixel alpha information when loading it to a surface.
For fading in/out, setting the alpha value of an entire surface is a good way; but not in this case, as the surface already has the per-pixel information and SDL doesn't combine the two.
What would be an efficient workaround (instead of asking the artist to provide some awesome fade in/out animation for the character)?
I think the easiest way to achieve the result you want is to start by loading the source surface containing your character sprite, then, for every instance of your ghost, create a working copy of that surface. Every time the alpha value of an instance changes, SDL_BlitSurface (doc) your source into your working copy and then apply your transparency (which you should probably keep as a float between 0 and 1) to every pixel's alpha channel.
In the case of a 32-bit surface, assuming that you initially loaded source and allocated working as SDL_Surfaces, you can probably do something along the lines of:
SDL_BlitSurface(source, NULL, working, NULL);

if(SDL_MUSTLOCK(working))
{
    if(SDL_LockSurface(working) < 0)
    {
        return -1;
    }
}

Uint8 * pixels = (Uint8 *)working->pixels;
int pitch_padding = (working->pitch - (4 * working->w));
pixels += 3; // Big Endian will have an offset of 0, otherwise it's 3 (R, G and B)

for(int row = 0; row < working->h; ++row)
{
    for(int col = 0; col < working->w; ++col)
    {
        *pixels = (Uint8)(*pixels * character_transparency); // Could be optimized but probably not worth it
        pixels += 4;
    }
    pixels += pitch_padding;
}

if(SDL_MUSTLOCK(working))
{
    SDL_UnlockSurface(working);
}
This code was inspired by SDL_gfx (here), but if that's all you're doing, I wouldn't bother linking against a library just for it.
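As a rough usage sketch (apply_fade is the loop above wrapped in a function; fade_start_ms, screen and ghost_position are placeholders, not part of the code above):

// Fade the ghost out over ~2 seconds, refreshing the working copy each step.
float character_transparency = 1.0f - (SDL_GetTicks() - fade_start_ms) / 2000.0f;
if (character_transparency < 0.0f)
    character_transparency = 0.0f;

apply_fade(source, working, character_transparency); // re-blit source and scale every alpha value
SDL_BlitSurface(working, NULL, screen, &ghost_position);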