I couldn't find anything about rendering transparent rectangles in the SDL2 documentation. I want to render an SDL_Rect as a transparent texture/surface/anything, to use as "fog of war". Maybe you know a way to make a surface or texture from an SDL_Rect, or just render it transparently. I don't want a new texture in the game files, because a player could simply delete the file and then they wouldn't have the fog of war.
OK, I managed to do it myself. If anyone else has the same question, here's the answer:
SDL_Surface* Fog = NULL;
SDL_Texture* gFog = NULL;

// Create an empty 32-bit surface the size of the screen (the pixels
// default to black, which is what we want for fog).
Fog = SDL_CreateRGBSurface(0, SCREEN_WIDTH, SCREEN_HEIGHT, 32, 0, 0, 0, 0);
if (Fog == NULL) std::cout << SDL_GetError();

// Turn the surface into a texture the renderer can draw.
gFog = SDL_CreateTextureFromSurface(gRenderer, Fog);
if (gFog == NULL) std::cout << SDL_GetError();

// Enable alpha blending and make the whole texture semi-transparent.
SDL_SetTextureBlendMode(gFog, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(gFog, 150);
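To actually show the fog, copy the texture over the rendered scene each frame. A minimal sketch (the fogArea values are just an example; here it covers the whole screen):

SDL_FreeSurface(Fog);  // the surface is no longer needed once the texture exists
SDL_Rect fogArea = { 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT };  // region to cover
SDL_RenderCopy(gRenderer, gFog, NULL, &fogArea);  // blends at alpha 150
SDL_RenderPresent(gRenderer);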
I'm trying to overlay small interactive info rectangles drawn with SDL's 2D renderer over a 3D scene drawn with OpenGL. Each works on its own, but not together: the 3D model ends up hidden.
SDL_Init(SDL_INIT_EVERYTHING);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
SDL_CreateWindowAndRenderer(m_width, m_height, SDL_WINDOW_OPENGL|SDL_WINDOW_RESIZABLE, &m_window, &m_renderer);
SDL_GLContext context = SDL_GL_CreateContext(m_window);
SDL_RenderClear(m_renderer);
SDL_RenderPresent(m_renderer);
// load vertex and fragment shaders...
glClearColor(1.0, 1.0, 1.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDrawElements(GL_TRIANGLES, m_indicesSize, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
SDL_Rect rect;
rect.w = 50;
rect.h = 50;
rect.x = 100;
rect.y = 100;
SDL_SetRenderDrawColor(m_renderer, 255, 0, 0, 255);
SDL_RenderFillRect(m_renderer, &rect);
SDL_RenderPresent(m_renderer);
How can I solve this problem? Thanks.
You don't, at least for now.
Here's the (open) bug about adding backend API state getters/setters to SDL_Renderer.
Alternatively, create an SDL_Renderer instance that uses the software renderer, upload the bitmaps coming out of that into an OpenGL texture, and composite that into your scene.
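A minimal sketch of that approach (untested; assumes SDL 2.0.5+ for SDL_CreateRGBSurfaceWithFormat, and that you can draw a textured, alpha-blended quad in your GL code):

// Draw the 2D overlay into an offscreen surface with the software renderer.
SDL_Surface* overlay = SDL_CreateRGBSurfaceWithFormat(
        0, m_width, m_height, 32, SDL_PIXELFORMAT_ABGR8888);
SDL_Renderer* soft = SDL_CreateSoftwareRenderer(overlay);
SDL_SetRenderDrawColor(soft, 0, 0, 0, 0);
SDL_RenderClear(soft);                         // fully transparent background
SDL_SetRenderDrawColor(soft, 255, 0, 0, 255);
SDL_Rect rect = { 100, 100, 50, 50 };
SDL_RenderFillRect(soft, &rect);

// Upload the surface into a GL texture; ABGR8888 matches GL_RGBA byte order
// on little-endian machines.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, overlay->w, overlay->h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, overlay->pixels);
// ...then draw a textured, blended fullscreen quad over the 3D scene and
// present with SDL_GL_SwapWindow(m_window) instead of SDL_RenderPresent.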
I'm writing a basic 2D game in C++ with SDL, and just yesterday I converted from 1.2 to 2.0.
I wanted to create a moving camera, and I realized that I should switch from using SDL_Surfaces to SDL_Textures to use some of 2.0's new features. My previous setup involved blitting a bunch of sprite surfaces to a main surface called buffer, and then flipping that to the screen, and it worked well.
So, I created a renderer and assigned it to the window. Then I used SDL_CreateTextureFromSurface to create a texture from my main SDL_Surface. My surface is currently set to nullptr at this point, and I realize that I should probably initialize it to something, but I can't figure out what. Here is the relevant code.
void Game::Init() {
    // Initialize SDL.
    SDL_Init(SDL_INIT_EVERYTHING);
    window = nullptr;
    buffer = nullptr;
    camera = { player.GetLocation().x - 50, player.GetLocation().y - 50,
               player.GetLocation().x + 50, player.GetLocation().y + 50 };
    fullscreen = false;
    screenWidth = 640, screenHeight = 480;
    if (fullscreen) {
        window = SDL_CreateWindow("Rune", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, SDL_WINDOW_FULLSCREEN_DESKTOP);
        renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
    }
    else {
        // Initialize our window.
        window = SDL_CreateWindow("Rune", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, screenWidth, screenHeight, SDL_WINDOW_SHOWN);
        renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_SOFTWARE);
        // Assign the buffer surface to the window.
    }
    texture = SDL_CreateTextureFromSurface(renderer, buffer);
    SDL_SetRenderTarget(renderer, texture);
    // Run the game loop.
    Run();
}
In Run, there's normal working code that involves a game loop, but in the game loop I have a method, Render(), which is causing problems. Here it is.
void Game::Render() {
    // Clear the buffer.
    SDL_FillRect(buffer, NULL, 0x000000);
    // Draw everything to the buffer.
    level.Draw(buffer);
    if (enemy.IsAlive()) enemy.Draw(buffer);
    player.Draw(buffer);
    SDL_UpdateTexture(texture, NULL, buffer->pixels, buffer->pitch);
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, NULL, NULL);
    SDL_RenderPresent(renderer);
    // Draw buffer to the screen.
    //SDL_UpdateWindowSurface(window);
}
Running this code gives me the error that buffer is nullptr, which comes up on the SDL_UpdateTexture line. If I allocate memory for buffer in the Init() function, I get a memory access error. Can anyone offer a solution? Thank you.
After changing all my SDL_Surfaces to textures, I realized that in my header file I had been calling constructors that relied on the renderer having already been initialized, which it wasn't. I changed them to blank constructors and then initialized the objects in the Game.cpp file, after the renderer, and now everything works. Thanks Benjamin for your help!
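For anyone hitting the same thing, a minimal sketch of that pattern (the Sprite type and its Load step are illustrative, not from the original code): members get cheap default constructors, and anything that needs the renderer is loaded only after the renderer exists.

#include <SDL.h>

// Hypothetical sprite wrapper: default-constructible, loads its texture later.
struct Sprite {
    SDL_Texture* texture = nullptr;
    void Load(SDL_Renderer* renderer) {
        SDL_Surface* surface = SDL_CreateRGBSurface(0, 32, 32, 32, 0, 0, 0, 0);
        texture = SDL_CreateTextureFromSurface(renderer, surface);
        SDL_FreeSurface(surface);
    }
};

struct Game {
    SDL_Window* window = nullptr;
    SDL_Renderer* renderer = nullptr;
    Sprite player;  // safe: the constructor never touches the renderer

    void Init() {
        SDL_Init(SDL_INIT_VIDEO);
        window = SDL_CreateWindow("Rune", SDL_WINDOWPOS_CENTERED,
                                  SDL_WINDOWPOS_CENTERED, 640, 480, SDL_WINDOW_SHOWN);
        renderer = SDL_CreateRenderer(window, -1, 0);
        player.Load(renderer);  // the renderer exists by the time we load
    }
};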
I'm using SDL to create a window and draw OpenGL in it, and after the OpenGL drawing I use SDL to show sprites (UI). This works for me on Windows, OS X and the Android NDK, but it doesn't work on iOS. This is how I draw the sprite:
I create the window:
int flags = SDL_WINDOW_OPENGL | SDL_WINDOW_SHOWN;
gWindow = SDL_CreateWindow("example", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 400, 800, flags);
I create the renderer:
gRenderer = SDL_CreateRenderer(gWindow, id, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
I load the texture:
SDL_Texture* newTexture = NULL;
SDL_Surface* loadedSurface = SDL_LoadBMP(path.c_str());
newTexture = SDL_CreateTextureFromSurface(gRenderer, loadedSurface);
SDL_FreeSurface(loadedSurface);
That's where I do the OpenGL drawing: I load .3ds models, load textures, use blending, etc. And then I draw the sprite:
SDL_Rect dstRect;
dstRect.x = 0;
dstRect.y = 0;
dstRect.w = 128;
dstRect.h = 64;
SDL_RenderCopy(gRenderer, newTexture, NULL, &dstRect);
SDL_RenderPresent(gRenderer);
The result is strange: the sprite is drawn skewed instead of as a rectangle.
Result: http://vvcap.net/db/zHhZwoZa1ng7caeP1BG3.png
What could be the reason of the sprite being transformed like that ? How can I fix this ?
Has anybody had a similar problem ?
It almost looks to me as if the camera transform is rotated slightly around the y-axis and the perspective projection is making it look skewed.
If the sprite is meant to be drawn in screen space, make sure you have an orthographic projection enabled before the draw, with the width and height being the size of the physical screen (i.e. 480x800).
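For fixed-function GL, setting up such a projection might look like this sketch (the 400x800 size is taken from the window created above; on OpenGL ES 1.x the call is glOrthof):

// Screen-space orthographic projection with a top-left origin.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0.0, 400.0, 800.0, 0.0, -1.0, 1.0);  // left, right, bottom, top, near, far
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();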
I didn't find a real solution, but I used SDL_RenderCopyEx instead of SDL_RenderCopy and it worked.
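For reference, the equivalent SDL_RenderCopyEx call with no rotation or flipping:

SDL_RenderCopyEx(gRenderer, newTexture, NULL, &dstRect, 0.0, NULL, SDL_FLIP_NONE);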
I am just trying to scale my character (make it proportionally bigger) based on the window's width and height. How can I do that?
I tried SDL_BlitScaled(newsurface, NULL, screen, NULL); but it did not work.
My code:
SDL_Surface* screen = SDL_GetWindowSurface(win);
SDL_Surface* mySprite = IMG_Load( SPRITE_PNG_PATH);
SDL_Rect rcSprite;
rcSprite.x = 250;
rcSprite.y = 100;
SDL_BlitSurface(mySprite, NULL, screen, &rcSprite);
The best way to accomplish this is to use an intermediate surface whose width and height are your game's native resolution. You render the entire frame to this surface, then render that surface to the window using the SDL_BlitScaled function.
For example, if you want your native resolution to be 600x400, you would create a 600x400 surface, blit your character and everything else to that surface, and then finally do a scaled blit of that surface to the window. If you resize your window to 1200x800, everything will look twice as big.
SDL_Surface* screen = SDL_GetWindowSurface(win);
// Create a surface with our native resolution
int NATIVE_WIDTH = 600;
int NATIVE_HEIGHT = 400;
SDL_Surface* frame = SDL_CreateRGBSurface(0, NATIVE_WIDTH, NATIVE_HEIGHT,
screen->format->BitsPerPixel, screen->format->Rmask,
screen->format->Gmask, screen->format->Bmask, screen->format->Amask);
assert(frame);
SDL_Surface* mySprite = IMG_Load( SPRITE_PNG_PATH);
SDL_Rect rcSprite;
rcSprite.x = 250;
rcSprite.y = 100;
// Blit everything to the intermediate surface, instead of the screen.
SDL_BlitSurface(mySprite, NULL, frame, &rcSprite);
// Once we've drawn the entire frame, we blit the intermediate surface to
// the window, using BlitScaled to ensure it gets scaled to fit the window.
SDL_BlitScaled(frame, NULL, screen, NULL);
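One detail the snippet leaves out: after the scaled blit you still have to present the window surface, which with the surface API is done via SDL_UpdateWindowSurface:

SDL_UpdateWindowSurface(win);  // push the window surface to the screen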
I'm trying to write a bitmap file for every frame that I render through OpenGL.
Please note that I'm not going to read bitmaps; I'm going to WRITE NEW BITMAP files.
Here is part of my C++ code
void COpenGLWnd::ShowinWnd(int ID)
{
    if(m_isitStart == 1)
    {
        m_hDC = ::GetDC(m_hWnd);
        SetDCPixelFormat(m_hDC);
        m_hRC = wglCreateContext(m_hDC);
        VERIFY(wglMakeCurrent(m_hDC, m_hRC));
        m_isitStart = 0;
    }

    GLRender();

    CDC* pDC = CDC::FromHandle(m_hDC);
    //pDC->FillSolidRect(0, 0, 100, 100, RGB(100, 100, 100));
    CRect rcClient;
    GetClientRect(&rcClient);
    SaveBitmapToDirectFile(pDC, rcClient, _T("a.bmp"));

    SwapBuffers(m_hDC);
}
"GLRender" is the function which can render on the MFC window.
"SaveBitmapToDirectFile" is the function that writes a new bitmap image file from the parameter pDC, and I could check that it works well if I erase that double slash on the second line, because only gray box on left top is drawn at "a.bmp"
So where has m_hDC gone? I have no idea why rendered scene wasn't written on "a.bmp".
Here is GLRender codes, but I don't think that this function was the problem, because it can render image and print it out well on window.
void COpenGLWnd::GLFadeinRender()
{
    glViewport(0, 0, m_WndWidth, m_WndHeight);
    glOrtho(0, m_WndWidth, 0, m_WndHeight, 0, 100);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(GL_BLEND);
    glBlendFunc(m_BlendingSrc, m_BlendingDest);

    glPixelTransferf(GL_ALPHA_SCALE, (GLfloat)(1 - m_BlendingAlpha));
    glPixelZoom((GLfloat)m_WndWidth/(GLfloat)m_w1, -(GLfloat)m_WndHeight/(GLfloat)m_h1);
    glRasterPos2f(0, m_WndHeight);
    glDrawPixels((GLsizei)m_w1, (GLsizei)m_h1, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageA);

    glPixelTransferf(GL_ALPHA_SCALE, (GLfloat)m_BlendingAlpha);
    glPixelZoom((GLfloat)m_WndWidth/(GLfloat)m_w2, -(GLfloat)m_WndHeight/(GLfloat)m_h2);
    glRasterPos2f(0, m_WndHeight);
    glDrawPixels((GLsizei)m_w2, (GLsizei)m_h2, GL_BGR_EXT, GL_UNSIGNED_BYTE, m_pImageB);

    glFlush();
}
I'm guessing you're using MFC or Windows API functions to capture the bitmap from the window. The problem is that you need to use glReadPixels to get the image out of a GL context; the Windows API can't do that.
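A minimal sketch of the readback (untested; assumes the member sizes from the code above and a 24-bit BGR target, called after GLRender() and before SwapBuffers):

#include <vector>

// BMP rows are padded to 4-byte boundaries; GL's default GL_PACK_ALIGNMENT
// of 4 produces exactly that layout, so no repacking is needed.
int rowSize = ((m_WndWidth * 3) + 3) & ~3;
std::vector<unsigned char> pixels(rowSize * m_WndHeight);
glReadPixels(0, 0, m_WndWidth, m_WndHeight,
             GL_BGR_EXT, GL_UNSIGNED_BYTE, pixels.data());
// Rows come back bottom-up, the same order a bottom-up BMP stores them,
// so the buffer can be written directly after the BMP headers.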