I have an application with two classes extending wxGLCanvas and one extending wxWindow. They implement three possible visualizations of the user's objects; only one of them is displayed at a time. The two wxGLCanvas classes combine an OpenGL scene with some text superimposed using wxPaintDC. The wxWindow draws everything using wxBufferedPaintDC.
Problem 1: On some machines, when switching from one wxGLCanvas to the other, a background image is displayed during the first rendering of the scene, until the first rendered frame appears. I found out that this background image comes from the wxGLCanvas background.
Problem 2: On the same machines where problem 1 appears, wxClientDC::Blit does not capture the OpenGL scene with the superimposed text; instead it captures the wxGLCanvas background image (the same one as in problem 1) with the superimposed text. In other words, it takes the canvas contents excluding the OpenGL scene. On the other machines the screenshot is correct.
INFO: When I select the wxWindow view, where everything is drawn using wxBufferedPaintDC, the background image involved in both problems 1 and 2 is updated to the frame displayed by the wxWindow. If I then switch between the wxGLCanvases, I see a "flash" of the wxWindow view.
Code to take the screenshot (inside the class extending wxGLCanvas):
wxClientDC lv_contexteVue(this);
int lv_largeurVue;
int lv_hauteurVue;
lv_contexteVue.GetSize(&lv_largeurVue,&lv_hauteurVue);
wxBitmap lv_vue(lv_largeurVue,lv_hauteurVue);
wxMemoryDC lv_contexteAux;
lv_contexteAux.SelectObject(lv_vue);
lv_contexteAux.Blit(0,0,lv_largeurVue,lv_hauteurVue,&lv_contexteVue,0,0);
lv_vue.SaveFile(wxString(er_cheminSauvegarde.c_str(),wxConvLibc,er_cheminSauvegarde.size()),wxBITMAP_TYPE_BMP);
lv_contexteAux.SelectObject(wxNullBitmap);
Code to display the OpenGL scene followed by the overlaid text (inside the class extending wxGLCanvas):
wxPaintDC dc(this);
//dc.Clear();
dc.SetBackground(*wxBLACK);
dc.SetBackgroundMode(wxSOLID);
dc.SetTextBackground(*wxBLACK);
dc.SetTextForeground(*wxWHITE);
SetCurrent(mv_contexte);
if (!mv_estInitialise)
{
initialiser();
mv_estInitialise = true;
}
Evenement lv_demandeDessin(DEMANDE_AFFICHAGE_2DPLUS);
mp_controleur->traiterEvenement(lv_demandeDessin);
SwapBuffers();
//dc.ClearCache();
wxColour lv_couleurEspaceLibre = *wxWHITE;
dc.SetBrush(wxBrush(lv_couleurEspaceLibre));
dc.SetPen(wxPen(lv_couleurEspaceLibre, 1));
//Overlay Text
wxSize screenSize = this->GetSize();
//dc.SetTextForeground(wxColour(240, 240, 240, 255));
wxFont font(8, wxFONTFAMILY_SWISS, wxFONTSTYLE_NORMAL, wxFONTWEIGHT_NORMAL, false);
dc.SetFont(font);
//dc.SetTextBackground(wxColour(0, 0, 0, 200));
string formated = ConstantesATLAS::FILIGRANE_PRE+Constantes::VERSION+Constantes::FILIGRANE_POS;
wxString mystring = wxString::FromAscii(formated.c_str());
//dc.DrawText(mystring,5,screenSize.GetY()-20);
dc.DrawText(mystring,5,5);
Answer from the wxWidgets forum, where I posted the same question; all credit to them :)
https://forums.wxwidgets.org/viewtopic.php?f=1&t=40288&e=0
Summary: do not mix OpenGL and wxPaintDC. To take a screenshot of OpenGL with overlaid text there are two good possibilities:
1. Draw the overlaid text with OpenGL itself, e.g. in a superimposed orthographic projection, then take the screenshot using glReadPixels().
2. If the text is overlaid using the DC: first ask OpenGL to return the whole viewport as a pixel array, convert that array to a wxImage, superimpose the text on the image the same way it is superimposed in the repaint event, then use dc.Blit() to copy the image.
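Option 2 can be sketched roughly as follows. The one subtle point is that glReadPixels() returns pixel rows bottom-to-top while wxImage expects them top-to-bottom, so the buffer must be flipped vertically. FlipRows is a helper of my own naming in plain C++; the GL/wx calls are shown as comments since they need a live context:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Reverse the row order of a tightly packed pixel buffer.
std::vector<unsigned char> FlipRows(const std::vector<unsigned char>& src,
                                    int width, int height, int channels)
{
    std::vector<unsigned char> dst(src.size());
    const std::size_t rowBytes = static_cast<std::size_t>(width) * channels;
    for (int y = 0; y < height; ++y)
    {
        const unsigned char* from = src.data() + static_cast<std::size_t>(y) * rowBytes;
        unsigned char* to = dst.data() + static_cast<std::size_t>(height - 1 - y) * rowBytes;
        std::copy(from, from + rowBytes, to);
    }
    return dst;
}

// Sketch of the capture itself (needs a current GL context):
//   std::vector<unsigned char> pixels(w * h * 3);
//   glPixelStorei(GL_PACK_ALIGNMENT, 1);           // rows are tightly packed
//   glReadPixels(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());
//   std::vector<unsigned char> flipped = FlipRows(pixels, w, h, 3);
//   wxImage image(w, h, flipped.data(), true);
//   // ...re-draw the overlay text on the image, then save or Blit it.
```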
Background:
Sorry for my English. I am in a slightly unusual situation here: I am working on a project that uses a proxy DLL to intercept DirectX9 calls and control the drawing of a game. There are things that are drawn statically, and I want to be able to draw them in another part of the screen.
The Question:
I want to be able to save the pixels in a rect on the screen and then draw that exact rect somewhere else on the screen. So if I can grab the pixels at x=100, y=100, w=30, h=30 and copy them to another location on the screen, that would be great.
This is the code that I have so far which I assume is making a texture from a memory rect.
HRESULT myIDirect3DDevice9::EndScene(void)
{
GetInput();
// Draw anything you want before the scene is shown to the user
LPDIRECT3DSURFACE9 pBackBuffer = NULL;
m_pIDirect3DDevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &pBackBuffer);
LPDIRECT3DTEXTURE9 textureMap;
D3DXCreateTexture(m_pIDirect3DDevice9, 100, 100, D3DX_DEFAULT, D3DUSAGE_RENDERTARGET, D3DFMT_X8R8G8B8, D3DPOOL_DEFAULT, &textureMap);
m_pIDirect3DDevice9->SetTexture(0, textureMap);
// SP_DX9_draw_text_overlay();
return(m_pIDirect3DDevice9->EndScene());
}
Project is based off this:
Library_Wrappers
Other notes:
I want to avoid DLL injection to accomplish this.
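One hedged sketch of the rect copy: the region could be moved with two IDirect3DDevice9::StretchRect calls through a temporary render-target surface, since the same surface cannot be both source and destination. MoveRect is a plain-C++ helper of my own naming; the D3D9 calls are shown as comments since they need a live device:

```cpp
// Plain stand-in for the Win32 RECT so the helper compiles anywhere.
struct Rect { long left, top, right, bottom; };

// Given a source region, produce a same-sized destination region at (x, y).
Rect MoveRect(const Rect& src, long x, long y)
{
    const long w = src.right - src.left;
    const long h = src.bottom - src.top;
    return Rect{ x, y, x + w, y + h };
}

// Sketch of the copy inside EndScene(); the temporary surface would
// normally be created once at startup, not every frame:
//   IDirect3DSurface9 *back = NULL, *temp = NULL;
//   m_pIDirect3DDevice9->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &back);
//   m_pIDirect3DDevice9->CreateRenderTarget(30, 30, D3DFMT_X8R8G8B8,
//       D3DMULTISAMPLE_NONE, 0, FALSE, &temp, NULL);
//   RECT srcRect = {100, 100, 130, 130};   // x100, y100, w30, h30
//   RECT dstRect = {300, 40, 330, 70};     // same size, new position
//   m_pIDirect3DDevice9->StretchRect(back, &srcRect, temp, NULL, D3DTEXF_NONE);
//   m_pIDirect3DDevice9->StretchRect(temp, NULL, back, &dstRect, D3DTEXF_NONE);
//   temp->Release(); back->Release();
```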
I have an app written in Qt 5.10 using QOpenGLWidget, and I use raw OpenGL calls to draw on the widget.
If some viewport area is drawn with alpha < 1.0, that area is not drawn anew; instead it looks like it is drawn over the previous state of the widget.
When I switch between different Windows apps, it draws over the previous contents of the screen rendered by the other apps.
I don't change the default update behaviour of the widget.
I set the default SurfaceFormat as follows:
QSurfaceFormat surfaceFormat = QSurfaceFormat::defaultFormat();
surfaceFormat.setRedBufferSize(8);
surfaceFormat.setGreenBufferSize(8);
surfaceFormat.setBlueBufferSize(8);
surfaceFormat.setAlphaBufferSize(8);
surfaceFormat.setStencilBufferSize(8);
surfaceFormat.setDepthBufferSize(16);
surfaceFormat.setSamples(4);
surfaceFormat.setRenderableType(QSurfaceFormat::RenderableType::OpenGL);
surfaceFormat.setProfile(QSurfaceFormat::OpenGLContextProfile::CompatibilityProfile);
surfaceFormat.setSwapBehavior(QSurfaceFormat::DoubleBuffer);
QSurfaceFormat::setDefaultFormat(surfaceFormat);
I tried adding calls to glClear and glInvalidateFramebuffer in various places in the code, but it didn't seem to help.
What should I do to force redrawing of the widget?
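A minimal sketch of the usual fix, assuming a QOpenGLWidget subclass (MyGLWidget is a placeholder name): clear the framebuffer at the top of paintGL() so translucent fragments blend against a known background color rather than whatever stale contents the widget's backing framebuffer holds:

```cpp
// Clear at the start of every paintGL(); otherwise areas drawn with
// alpha < 1.0 composite over the previous frame's (undefined) contents.
void MyGLWidget::paintGL()
{
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // opaque black background
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    // ... the existing raw OpenGL drawing ...
}
```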
I am using Qt 5.6. I have developed several widgets that render their content to an off-screen bitmap and then copy the final image to the visible area.
I have an area on the visible display that shows a video feed; I want to copy the images over the video without overwriting the background, while avoiding flicker.
I currently create the off-screen image using a QPixmap; I then create a painter on the pixmap and draw to the off-screen image. When the image is ready I call the toImage function to get a QImage and then copy this to the visible display.
A lot of the widget contains lines and circles, a lot of which are not filled.
I've seen other posts not using a QPixmap, just using a QImage; should I be using a QPixmap at all?
Question is how to copy the image from the off-screen area to the visible area without overwriting the background?
The key to transparency is that the overlay image has got an alpha channel. QPixmap uses the graphics format of the underlying graphics system which should include an alpha channel. For QImage, the format can be explicitly specified and it should be QImage::Format_ARGB32_Premultiplied, see [1]: http://doc.qt.io/qt-5/qimage.html#Format-enum
To get a fully transparent QImage/QPixmap in the first place, call QImage/QPixmap::fill(QColor(0, 0, 0, 0)); before creating the QPainter.
The 4th parameter is the alpha channel, which is 255 (full opacity) by default.
Unfortunately I can't give advice whether QPixmap or QImage is faster for your setup.
Provided the compositing operation with the videofeed considers the alpha-channel, this should solve your problem.
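A small sketch of the above. Premultiply is a helper of my own naming, included only to illustrate what the premultiplied ARGB32 encoding means; the Qt calls are shown as comments since they need a running application:

```cpp
// Format_ARGB32_Premultiplied stores each color channel already
// multiplied by alpha/255, packed as 0xAARRGGBB.
unsigned Premultiply(unsigned r, unsigned g, unsigned b, unsigned a)
{
    auto mul = [a](unsigned c) { return c * a / 255; };
    return (a << 24) | (mul(r) << 16) | (mul(g) << 8) | mul(b);
}

// Sketch of the overlay itself:
//   QImage overlay(w, h, QImage::Format_ARGB32_Premultiplied);
//   overlay.fill(QColor(0, 0, 0, 0));      // start fully transparent
//   QPainter p(&overlay);
//   p.setRenderHint(QPainter::Antialiasing);
//   p.drawEllipse(...);                    // the lines, circles, ...
//   p.end();
//   // In the widget's paintEvent, the default composition mode
//   // (SourceOver) keeps the video visible wherever the overlay is
//   // transparent:
//   painter.drawImage(0, 0, overlay);
```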
I have a game I'm currently working on, and it uses multiple views (for a minimap for example).
Thing is, I would like to have a fading effect added at some point, so I thought I'd create a black image that is the size of the screen and change its alpha value with a timer. That part is not a problem.
What happens right now is that the main area (i.e. the window's default view) is fading (because the opacity of the image is increasing), but the minimap (minimap view) is unaffected. This is normal behaviour for views, but is there a way to draw an image over the whole window, regardless of the views?
Thanks in advance
To clarify, you have the default view where you'll draw the main game, then you'll have the minimap view where you would draw the minimap. At some point in the game you want the whole screen to fade to black. It sounds like you've been trying to draw a black image on the default view (changing the alpha) to make this effect work.
So, you need a third view that you draw your black image on to get this fading effect.
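A sketch of that approach, assuming SFML 2.x. FadeAlpha is a helper of my own naming that maps the timer to an alpha value; the SFML calls are shown as comments since they need a window:

```cpp
#include <cstdint>

// Map elapsed time to an alpha value in [0, 255], clamped at both ends.
std::uint8_t FadeAlpha(float elapsedSeconds, float durationSeconds)
{
    if (elapsedSeconds <= 0.f) return 0;
    if (elapsedSeconds >= durationSeconds) return 255;
    return static_cast<std::uint8_t>(255.f * elapsedSeconds / durationSeconds);
}

// Sketch of the draw order each frame:
//   window.setView(gameView);     window.draw(world);
//   window.setView(minimapView);  window.draw(minimap);
//   window.setView(window.getDefaultView());   // covers the whole window
//   sf::RectangleShape fade(sf::Vector2f(window.getSize()));
//   fade.setFillColor(sf::Color(0, 0, 0, FadeAlpha(t, 2.f)));
//   window.draw(fade);
//   window.display();
```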
I've encountered a problem drawing SFML Text. In my application, I use views as a sort of coordinate system for my application. Thus, a typical view would be 10 x 10 or 20 x 20. All my normal drawing functions work fine, when drawing primitives and lines, etc., and the relevant code doesn't have to know about the coordinate system.
However, when I tried to draw text to the screen, I found that it appeared gigantic. When I reduced the font size drastically, it appeared extremely blurry and pixellated, as if it were trying to render to GIANT pixels that are 1x1 in my view.
Is there a way to draw text with a standard font size so that it won't be affected by the view? Ideally, my text would stay the same width/size on any view. How can I accomplish this?
Thanks for any input!
(P.S., I'm using SFML 2.0, for reference)
You can set another view (i.e. apply an OpenGL transformation) for the game, then switch back to the default view before rendering the text. In SFML 2.0 the method names are lowercase; adapted from the SFML tutorial:
sf::View view(sf::FloatRect(600, 700, 1400, 1300));
window.setView(view);
// Draw the game...
window.setView(window.getDefaultView());
// Draw the interface (including the text)...