Alpha Blending in SDL resets after resizing window - c++

I wanted to implement alpha blending within my Texture class. It works almost completely. I use the following functions for manipulating the alpha value:
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(texture, alpha);
The only problem I have is that textures whose alpha I have modified seem to reset to the default alpha value of 255 when I resize or maximize the window. I checked the alpha value and confirmed that it is still the value I set before, so the stored value is not 255. Why is the renderer rendering the texture as if the alpha value were 255?
Information about how and when I use these functions:
Within the main game loop I change the alpha value of the texture with a public method of my Texture class:
Texture::setAlphaValue(int alpha)
This method changes the private alpha variable of the Texture class.
Within the Draw method of my Texture class the texture is drawn and I call
SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
SDL_SetTextureAlphaMod(texture, alpha);
before
SDL_RenderCopyEx(renderer, texture, &sourceRectangle, &destinationRectangle, 0, 0, SDL_Flip);
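Putting those pieces together, the Draw method presumably looks something like this (a sketch; the member names texture, renderer, alpha, sourceRectangle, and destinationRectangle are assumptions based on the description above):
void Texture::Draw(SDL_RendererFlip flip)
{
    // blend mode and alpha mod are (re)applied every frame before copying
    SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
    SDL_SetTextureAlphaMod(texture, alpha);
    SDL_RenderCopyEx(renderer, texture, &sourceRectangle,
                     &destinationRectangle, 0, NULL, flip);
}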
Information about how I resize the window:
I basically just set the window mode to a resizable window in my SDL initialization. Then handling it like any normal window is possible:
SDL_CreateWindow(window_Title, x_Position, y_Position, window_Width, window_Height, SDL_WINDOW_RESIZABLE);
My primary loop area:
This is the render function called from the main game loop:
void Game::Render()
{
    // set the clear color and clear the window
    SDL_SetRenderDrawColor(renderer, windowColor.R(), windowColor.G(), windowColor.B(), 0);
    SDL_RenderClear(renderer);
    texture.setAlphaValue(100);
    texture.Draw(SDL_FLIP_NONE);
    // present the rendered frame
    SDL_RenderPresent(renderer);
}
Test my project:
I also uploaded my alpha-blending test project to Dropbox. In this project I simplified everything; there isn't even a Texture class anymore. So the code is really simple, but the bug is still there. Here is the link to the Visual Studio project: http://www.dropbox.com/s/zaipm8751n71cq7/Alpha.rar

SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
You should set the alpha directly at this point, for example:
alpha = 100;
SDL_SetTextureAlphaMod(texture, alpha); // note: SDL_SetTextureAlphaMod takes a Uint8 (0-255)
SDL_RenderCopy(renderer, texture, NULL, &rect);
P.S. If you're going for a fade-out/fade-in effect, resizing will temporarily pause the alpha changes (in case you used SDL_GetTicks() and a float to slowly decrease/increase the alpha over time). This is because Windows pauses rendering inside the program while you resize; once you stop resizing, it resumes.
Another P.S. Since you're resizing the window, make sure to compute the w and h values dynamically, as products of the current window size, rather than as hard-coded numbers (multiplication is faster than division, but you can also use division). With hard-coded sizes the window would resize, but the textures inside wouldn't change size.
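For example, a sketch of what such dynamic sizing could look like (the window pointer and the 0.25f scale factors are assumptions for illustration):
// Size the destination rect relative to the current window size, so
// the texture scales along with the window. The 0.25f factors are
// arbitrary example values.
int windowW = 0, windowH = 0;
SDL_GetWindowSize(window, &windowW, &windowH);
SDL_Rect destinationRectangle;
destinationRectangle.w = (int)(windowW * 0.25f);                  // 25% of window width
destinationRectangle.h = (int)(windowH * 0.25f);                  // 25% of window height
destinationRectangle.x = (windowW - destinationRectangle.w) / 2;  // centered
destinationRectangle.y = (windowH - destinationRectangle.h) / 2;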
Happy Coding :)

This was a reported bug in the SDL library; it has been fixed for some time now: https://bugzilla.libsdl.org/show_bug.cgi?id=2202, https://github.com/libsdl-org/SDL/issues/1085
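If you are stuck on an affected SDL2 version, one workaround (a sketch, assuming an SDL2 event loop and access to the texture and its intended alpha value) is to re-apply the blend mode and alpha mod whenever the window is resized or the render device is reset:
// Re-push the per-texture state after events that can invalidate it.
SDL_Event e;
while (SDL_PollEvent(&e)) {
    if ((e.type == SDL_WINDOWEVENT &&
         e.window.event == SDL_WINDOWEVENT_SIZE_CHANGED) ||
        e.type == SDL_RENDER_DEVICE_RESET) { // SDL_RENDER_DEVICE_RESET needs SDL >= 2.0.4
        SDL_SetTextureBlendMode(texture, SDL_BLENDMODE_BLEND);
        SDL_SetTextureAlphaMod(texture, alpha);
    }
}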

Related

C/C++ DirectX9 copy pixels at rect to another rect on the screen

Background:
Sorry for my English. I am in a somewhat unusual situation: I am working on a project that involves using a DLL proxy to intercept DirectX9 calls and control the drawing of a game. There are things that are drawn statically, and I want to be able to draw them in another part of the screen.
The Question:
I want to be able to save the pixels in a rect on the screen and then draw that exact rect somewhere else on the screen. So if I can grab the pixels at x100, y100, w30, h30 and copy them to another location on the screen, that would be great.
This is the code I have so far, which I assume is making a texture from a memory rect.
HRESULT myIDirect3DDevice9::EndScene(void)
{
    GetInput();
    // Draw anything you want before the scene is shown to the user
    IDirect3DSurface9* pBackBuffer = NULL;
    m_pIDirect3DDevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);
    LPDIRECT3DTEXTURE9 textureMap;
    D3DXCreateTexture(m_pIDirect3DDevice9, 100, 100, D3DX_DEFAULT,
                      D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8,
                      D3DPOOL_DEFAULT, &textureMap);
    m_pIDirect3DDevice9->SetTexture(0, textureMap);
    // SP_DX9_draw_text_overlay();
    return m_pIDirect3DDevice9->EndScene();
}
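For the copy itself, one possible approach (a sketch, not from the original post) is to round-trip the pixels through a scratch render-target surface with StretchRect, since StretchRect cannot copy a surface onto itself. m_pCopySurface is assumed to be a 30x30 render-target surface in the back buffer's format, created once elsewhere with CreateRenderTarget:
IDirect3DSurface9* pBackBuffer = NULL;
if (SUCCEEDED(m_pIDirect3DDevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer)))
{
    RECT src     = { 100, 100, 130, 130 }; // x100, y100, w30, h30
    RECT dst     = { 300, 200, 330, 230 }; // where to re-draw those pixels
    RECT scratch = { 0, 0, 30, 30 };
    // back buffer -> scratch surface
    m_pIDirect3DDevice9->StretchRect(pBackBuffer, &src, m_pCopySurface, &scratch, D3DTEXF_NONE);
    // scratch surface -> back buffer at the new location
    m_pIDirect3DDevice9->StretchRect(m_pCopySurface, &scratch, pBackBuffer, &dst, D3DTEXF_NONE);
    pBackBuffer->Release();
}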
The project is based on this:
Library_Wrappers
Other notes:
I want to avoid DLL injection to accomplish this.

Transparency issue: SDL_SetTextureBlendMode

I have a PNG file with a transparent background.
[Screenshot: texture with transparent background]
I load it into a surface and then create a texture from it:
SDL_Texture* m_Tex = SDL_CreateTextureFromSurface(renderer, surface);
I want this texture to have a blinking effect, so I pass it to SDL_SetTextureBlendMode:
SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_BLEND);
Uint8 m_Alpha = 255;
I use m_Alpha for the blinking, which I activate by pressing a particular button.
And it is working fine. But why is the background of my texture no longer transparent after I switch it back to SDL_BLENDMODE_NONE:
SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_NONE);
[Screenshot: background no longer transparent after SDL_BLENDMODE_NONE]
Is there a way to make my texture's background transparent again?
I mean, after researching enough, I can't seem to find a way except SDL_SetColorKey.
But SDL_SetColorKey needs the loaded surface again, which means I would have to load the PNG into a surface and then into a texture all over again. I don't think doing that every time I want the tex to stop blinking is ideal. Please help. Thanks.
SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_NONE);
To render a texture with alpha != 255, you need some blend mode. A blend mode describes to the system how the background color should be merged with the foreground color; with SDL_BLENDMODE_NONE the texture's alpha channel is ignored entirely, which is why the background stops being transparent.
You can get a basic idea from this thread:
SDL2: Generate fully transparent texture
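In practice, an alternative to switching back to SDL_BLENDMODE_NONE is to keep SDL_BLENDMODE_BLEND set permanently and drive the blinking purely through the alpha mod (a sketch; the blinking flag, renderer, and dstRect are assumptions for illustration):
// Keep the blend mode set once; SDL_BLENDMODE_NONE would discard the
// PNG's alpha channel and make the background opaque again.
SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_BLEND);
// While blinking, oscillate m_Alpha; to stop blinking, park it at 255,
// which looks fully opaque while the background stays transparent.
SDL_SetTextureAlphaMod(tex, blinking ? m_Alpha : 255);
SDL_RenderCopy(renderer, tex, NULL, &dstRect);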

Difference between fill color and background color

I am porting an application from Windows to Mac OS X, and I am confused about the use of two different terms.
On Windows, we use SetBkColor to set background color of a device context.
On Mac OS X, there is setFill to set fill color.
Is there any difference between this background color of Windows and fill color of Mac OS X?
For the stroke color (set by setStroke), I think the same effect is achieved on Windows by CreatePen for lines and SetTextColor for text. Is this understanding correct?
Both native Windows development and Core Graphics on iOS/Mac OS use the so called 'painter's model' of drawing. Just like actual painting, you select a color for your pen or brush and everything you draw, fill, what-have-you from that point until you change it will use that color. On the Mac, more specifically, you set stroke for such things as text and borders, and fills for methods that fill. You have to set each separately as each accomplishes something different.
SetBkColor is different because it fills the background. On Mac or iOS, you would instead set the fill color and then use a drawing method to fill a rect, and usually this would all be done by overriding a view's drawRect: method. For example, here's one way to do that:
- (void)drawRect:(NSRect)rect
{
    CGContextRef myContext = [[NSGraphicsContext currentContext] graphicsPort];
    // ********** Your drawing code here **********
    CGContextSetRGBFillColor(myContext, 1, 0, 0, 1);           // set my 'brush' color
    CGContextFillRect(myContext, CGRectMake(0, 0, 200, 100));  // fill with it
    CGContextSetRGBFillColor(myContext, 0, 0, 1, .5);          // set my brush color
    CGContextFillRect(myContext, CGRectMake(0, 0, 100, 200));  // fill with it
}
Drawing is done back to front, so, if you wanted to set the background to a certain color, you would simply make that the first operation and fill the full window/view rectangle with whatever color you like.
Have a look at the Quartz 2D drawing guide for further examples. If you are coming from Windows, you will find Quartz/Core Graphics to have a very comparable, and in my mind richer, set of drawing capabilities. (The above example is from this guide)
https://developer.apple.com/library/mac/documentation/graphicsimaging/conceptual/drawingwithquartz2d/dq_context/dq_context.html

glScissor() call inside another glScissor()

I'm using glScissor() in my application and it works properly, but I've run into a problem:
I have a Window object whose drawing area is specified by glScissor(), and inside this area I draw my ListView object, whose drawing area should also be specified with glScissor(), since I don't want to draw all of it.
In a code I could represent it as:
Window::draw()
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);
    // Draw some components...
    mListView.draw(); // mListView is an object of ListView type
    glDisable(GL_SCISSOR_TEST);
}
ListView::draw()
{
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);
    // Draw a chosen part of ListView here
    glDisable(GL_SCISSOR_TEST);
}
But of course, in this situation the enable/disable calls end up nested like this, which is wrong:
glEnable(GL_SCISSOR_TEST);
glEnable(GL_SCISSOR_TEST);
glDisable(GL_SCISSOR_TEST);
glDisable(GL_SCISSOR_TEST);
If I remove those internal glEnable/glDisable calls (the ones in ListView), I would still end up with two glScissor() calls, which also seems wrong.
EDIT
I would like to somehow achieve both scissor effects; I mean that the Window should draw only in its scissored area, and the internal ListView also only in its scissored area.
As you can see in the picture, the red rectangle marks the scissor area for the Window, which WORKS, and the blue rectangle marks the area in which I'd like to draw my ListView. That's why I was trying to use nested scissors, but I know that was useless. So basically my question is: what would be the best approach to achieve this?
Since OpenGL is a state machine and the scissor rect, like any other state, is overwritten when calling glScissor the next time, you have to properly restore the window's scissor rect after drawing the list view. This can either be done by just letting the window manage it:
Window::draw()
{
    // draw some components with their own scissors
    mListView.draw(); // mListView is an object of ListView type
    glScissor(x, y, width, height);
    glEnable(GL_SCISSOR_TEST);
    // draw other stuff using the window's scissor
    glDisable(GL_SCISSOR_TEST);
}
But it might be more flexible to let the individual components restore the scissor state themselves, especially if used in such a hierarchical manner. For this you can either use the deprecated glPush/PopAttrib functions to save and restore the scissor rect:
ListView::draw()
{
    glPushAttrib(GL_SCISSOR_BIT);
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);
    // Draw a chosen part of ListView here
    glDisable(GL_SCISSOR_TEST);
    glPopAttrib();
}
Or you save and restore the scissor state yourself:
ListView::draw()
{
    // save the current scissor state
    GLint rect[4];
    bool on = glIsEnabled(GL_SCISSOR_TEST);
    glGetIntegerv(GL_SCISSOR_BOX, rect);
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, width, height);
    // Draw a chosen part of ListView here
    // restore the previous scissor state
    glScissor(rect[0], rect[1], rect[2], rect[3]);
    if (!on)
        glDisable(GL_SCISSOR_TEST);
}
This can of course be automated with a nice RAII wrapper, but that is left as an exercise for you.
EDIT: the general case.
Every time you set the scissor state with glScissor, you set the scissor state. It does not nest, and it does not stack, so you cannot "subscissor" with nested calls to glScissor. You'll have to manually compute the rectangular intersection of your ListView and your Window bounding rects, and then scissor to that when drawing ListView.
In the general case, you'll be manually maintaining a stack of scissor rectangles. As you draw each sub-element, you intersect the sub-element's bounding rect against the current top of the stack, and use that as the scissor for that sub-element. Push the new subrect onto the stack when painting children, and pop it when returning back up the hierarchy.
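A sketch of such a scissor stack (names are illustrative, not from the original post; each push intersects the new rect with the current top, so children can only shrink the visible area):
#include <GL/gl.h>
#include <algorithm>
#include <vector>

struct ScissorRect { int x, y, w, h; };

static std::vector<ScissorRect> scissorStack;

static ScissorRect intersect(const ScissorRect& a, const ScissorRect& b)
{
    int x1 = std::max(a.x, b.x);
    int y1 = std::max(a.y, b.y);
    int x2 = std::min(a.x + a.w, b.x + b.w);
    int y2 = std::min(a.y + a.h, b.y + b.h);
    return { x1, y1, std::max(0, x2 - x1), std::max(0, y2 - y1) };
}

void pushScissor(const ScissorRect& r)
{
    // clip the new rect against the current top of the stack
    ScissorRect clipped = scissorStack.empty() ? r : intersect(scissorStack.back(), r);
    scissorStack.push_back(clipped);
    glScissor(clipped.x, clipped.y, clipped.w, clipped.h);
}

void popScissor()
{
    scissorStack.pop_back();
    if (!scissorStack.empty()) {
        const ScissorRect& r = scissorStack.back();
        glScissor(r.x, r.y, r.w, r.h);
    }
}
Each draw method then wraps its drawing in pushScissor(bounds) / popScissor(), and the intersection guarantees the ListView can never paint outside the Window's area.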
If you're painting other contents to Window, you'll also have to make sure to handle overdraw correctly; either by setting the Z ordering and enabling depth buffering, or by disabling depth buffering and painting your contents back-to-front. Scissoring won't help you mask the Window contents behind the ListView, since scissors can only be rectangular regions.
OpenGL is a state machine. You can call glScissor, glEnable and glDisable as often as you'd like to. They don't act like opening-closing braces that must be matched. If your calls "fold" like this, that's no problem. Just don't expect the one scissor to merge with the other one; it will merely change/overwrite the previous setting.

Painting Text above OpenGL context in MFC

I'm working on an MFC app containing an OpenGL context. I am new to MFC, which is why I'm asking. OpenGL works fine, but when I want to draw text above the 3D window using this code inside WindowProc:
case WM_PAINT:
    hDC = BeginPaint(window, &paintStr);
    GetClientRect(window, &aRect);
    SetBkMode(hDC, TRANSPARENT);
    DrawText(hDC, L"He He I am a text on top of OpenGL", -1, &aRect,
             DT_SINGLELINE | DT_CENTER | DT_VCENTER);
    EndPaint(window, &paintStr);
    return 0;
the text is shown beneath the OpenGL content. I can only see it while resizing the window, as the OpenGL rendering pauses then.
What you're doing is wrong and also harder than doing it all in OpenGL. To solve the problem of adding text to an OpenGL-drawn window, it's better to just make OpenGL draw the text. You can even use the exact same font you were using in MFC by creating a CFont instance when you handle WM_CREATE, selecting the font into the DC, and calling wglUseFontBitmaps, which will make a series of rasterized bitmaps that you can use with glCallLists. (While you're at it, call GetCharABCWidths and GetTextMetrics to determine the width and height of each glyph, respectively.)
ABC glyphInfo[256];  // for font widths
TEXTMETRIC tm;       // for font heights
// create a bitmap font
CFont myFont;
myFont.CreateFont(
    16,                        // nHeight
    0,                         // nWidth
    0,                         // nEscapement
    0,                         // nOrientation
    FW_NORMAL,                 // nWeight
    FALSE,                     // bItalic
    FALSE,                     // bUnderline
    0,                         // cStrikeOut
    ANSI_CHARSET,              // nCharSet
    OUT_DEFAULT_PRECIS,        // nOutPrecision
    CLIP_DEFAULT_PRECIS,       // nClipPrecision
    DEFAULT_QUALITY,           // nQuality
    DEFAULT_PITCH | FF_SWISS,  // nPitchAndFamily
    _T("Arial")                // lpszFacename
);
// change the current font in the DC
CDC* pDC = CDC::FromHandle(hdc);
// make our font the device context's selected font
CFont* pOldFont = (CFont*)pDC->SelectObject(&myFont);
// the display list numbering starts at 1000, an arbitrary choice
wglUseFontBitmaps(hdc, 0, 255, 1000);
VERIFY(GetCharABCWidths(hdc, 0, 255, &glyphInfo[0]));
pDC->GetTextMetrics(&tm);
if (pOldFont)
    pDC->SelectObject(pOldFont);
myFont.DeleteObject();
Then when you handle WM_PAINT, reset your matrices and use glRasterPos2d to put the text where you need it to go. I suggest calculating the exact width of your string using code similar to the one below if you want it to be horizontally centered.
// indicate start of glyph display lists
glListBase(1000);
CRect r;
GetWindowRect(r);
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluOrtho2D(0, r.Width(), 0, r.Height());
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
CString formattedString;
formattedString.Format("Pi is about %1.2f", 3.1415); // narrow string: assumes an ANSI/MBCS build, matching the GL_UNSIGNED_BYTE call below
int stringWidth = 0; // pixels
for (int j = 0; j < formattedString.GetLength(); ++j)
    stringWidth += glyphInfo[formattedString.GetAt(j)].abcA
                 + glyphInfo[formattedString.GetAt(j)].abcB
                 + glyphInfo[formattedString.GetAt(j)].abcC;
double textXPosition, textYPosition;
textXPosition = r.Width() / 2 - stringWidth / 2;   // horizontally centered
textYPosition = r.Height() / 2 - tm.tmHeight / 2;  // vertically centered
glRasterPos2d(textXPosition, textYPosition);
// this is what actually draws the text (as a series of rasterized bitmaps)
glCallLists(formattedString.GetLength(), GL_UNSIGNED_BYTE, (LPCSTR)formattedString);
glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
While the setup is annoying, you only have to do it once, and I think it's less frustrating than dealing with GDI. Mixing GDI and OpenGL is really asking for trouble, and OpenGL does a very good job of displaying text -- you get sub-pixel accuracy for free, among other benefits.
Edit: In response to your request for including GUI elements, I will assume that you meant that you want to have both OpenGL-drawn windows and also standard Windows controls (edit boxes, check boxes, buttons, list controls, etc.) inside the same parent window. I will also assume that you intend OpenGL to draw only part of the window, not the background of the window.
Since you said you're using MFC, I suggest that you create a dialog window, add all of your standard Windows controls to it, and then add in a CWnd-derived class where you handle WM_PAINT. Use the resource editor to move the control to where you want it. Effectively, you're making an owner-draw custom control where OpenGL is doing the drawing. So OpenGL will draw that window, and the standard MFC classes (CEdit, CButton, etc.) will draw themselves. This works well in my experience, and it's really not much different from what GDI does in an owner-draw control.
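A minimal sketch of such an owner-draw control (the class and helper names COpenGLCtrl and RenderScene are illustrative; it assumes the GL context m_hRC was created during WM_CREATE with a pixel format matching the window's DC):
#include <afxwin.h>
#include <GL/gl.h>

// A CWnd-derived custom control that OpenGL paints; place it on the
// dialog with the resource editor and subclass it to this class.
class COpenGLCtrl : public CWnd
{
public:
    afx_msg void OnPaint()
    {
        CPaintDC dc(this);                       // validates the update region
        wglMakeCurrent(dc.GetSafeHdc(), m_hRC);  // bind our GL context to this window
        RenderScene();                           // the control's own GL drawing
        SwapBuffers(dc.GetSafeHdc());
        wglMakeCurrent(NULL, NULL);
    }
protected:
    void RenderScene() { /* glClear, draw calls, ... */ }
    HGLRC m_hRC = NULL;                          // created in OnCreate (not shown)
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(COpenGLCtrl, CWnd)
    ON_WM_PAINT()
END_MESSAGE_MAP()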
What if instead you want OpenGL to draw the background of the window, and you want standard Windows controls to appear on top of it? I don't think this is a great idea, but you can handle WM_PAINT and WM_ERASEBKGND for your CDialog-derived class. In the WM_ERASEBKGND handler, call OpenGL to draw your 3D content, which will be overwritten by the standard Windows controls when WM_PAINT is called. Alternatively, in WM_PAINT you could call OpenGL before calling CDialog::OnDraw, which would be similar.
Please clarify your statement "I want to add some 2D graphics overlay (like labels, GUI elements)" if you want me to write more.
Looking at your code, I assume the OpenGL rendering is called from a timer or as an idle-loop action. Naturally, the OpenGL pass will probably contain some clearing, thus wiping out anything else drawn into the window.
Mixing GDI text drawing with OpenGL is not recommended, but it can be done. You then need to include that code in the OpenGL drawing function too, placing all GDI operations after the buffer swap.