How can I draw in wxWidgets? - c++

So I'm currently coding a snake game, and I need to draw the first pixel that indicates the head of the snake (positioned in the middle of the window). But I can't seem to find any function that draws on the screen. I've tried using DrawRectangle and DrawPixel.
Any help, please?

wxWidgets can custom-draw a widget/window (or a small invalidated part of one) through its own drawing API.
This is usually used for customized buttons or other controls, graphs, etc. Handle EVT_PAINT (wxPaintEvent) and create a DC ("device context") inside the handler to draw with. Besides the repaints triggered on creation or size changes, you can force a redraw with wxWindow::Refresh or wxWindow::RefreshRect (for a small part), for example from a timer.
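For the snake head specifically, a minimal sketch might look like the following (the class name, cell size and colours here are placeholder assumptions, not taken from your code):

#include <wx/wx.h>

class SnakePanel : public wxPanel
{
public:
    explicit SnakePanel(wxWindow* parent) : wxPanel(parent)
    {
        Bind(wxEVT_PAINT, &SnakePanel::OnPaint, this);
    }

private:
    void OnPaint(wxPaintEvent&)
    {
        wxPaintDC dc(this);          // a wxPaintDC must be created in the EVT_PAINT handler
        const int cell = 10;         // size of one snake segment
        const wxSize size = GetClientSize();
        dc.SetBrush(*wxGREEN_BRUSH);
        dc.SetPen(*wxTRANSPARENT_PEN);
        // draw the head centred in the panel
        dc.DrawRectangle((size.GetWidth() - cell) / 2,
                         (size.GetHeight() - cell) / 2,
                         cell, cell);
    }
};

When the head moves, update its coordinates (e.g. from a wxTimer) and call Refresh() so the handler runs again.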
Note that the performance and capabilities are fairly limited. You can use OpenGL or Direct3D, or various higher-level libraries, together with wxWidgets; the native platform window handle is obtainable through wxWindow::GetHandle.

Related

render a qt overlay window over opengl child window

I am looking for information about rendering child windows, specifically about how OpenGL interoperates with GDI. The problem I have is basically that I have two windows: the main window is created in Qt, and inside it a child window is hosted that leverages an OpenGL renderer.
Now what I wanted to do is host an overlay on top of my OpenGL window. The problem I am having is that when I render with OpenGL, the OpenGL-generated graphics seem to cover the whole graphics area, effectively undoing the graphics composited by Qt.
In the image below the blue area is the Qt overlay; in that picture I'm using GDI (BeginPaint/EndPaint), and the windows seem to interact fine. That is, the window order seems correct and the client region is correct. The moment I start to render with OpenGL, the blue area gets replaced with whatever OpenGL renders.
What I did to create the overlay was create a second frameless, topmost QMainWindow, and once the platform HWND was initialized I reparented it: I changed the new window's parent to be the same parent as my OpenGL window's.
What I believed this would do is that every window gets drawn separately and the desktop composition manager makes the final composition, basically avoiding the infamous airspace problem as documented by Microsoft for their WPF framework.
What I would like to know is what could cause these issues. At this point I lack understanding of why, once I render with OpenGL, the pixels drawn by the Qt overlay are obscured even though the window hierarchy suggests they should be composited. What could I do to accomplish what I want?
Mixing OpenGL and GDI drawing on a shared drawable (which also includes sibling/child windows lacking the CS_OWNDC window class style flag) has never been supported. That's not a Qt issue, but simply how OpenGL and GDI interact.
But the more important issue is: why the hell aren't you using the OpenGL support built right into Qt in the first place? Ever since Qt 5, Qt uses OpenGL (if available) to draw everything, including all the UI elements, and it makes it trivial to mix Qt drawing with your own OpenGL drawing.
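As a rough illustration of that approach (not your exact setup - this assumes Qt 5.4+ and replaces the reparented-HWND overlay with a single QOpenGLWidget), native GL rendering and a QPainter overlay can live in one widget:

#include <QApplication>
#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QPainter>

class GLCanvas : public QOpenGLWidget, protected QOpenGLFunctions
{
public:
    using QOpenGLWidget::QOpenGLWidget;

protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions();
    }

    void paintGL() override
    {
        QPainter painter(this);

        // native OpenGL part (your renderer goes here)
        painter.beginNativePainting();
        glClearColor(0.10f, 0.12f, 0.20f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        painter.endNativePainting();

        // overlay drawn with QPainter, composited by Qt on top of the GL content
        painter.setPen(Qt::white);
        painter.drawText(20, 30, QStringLiteral("Overlay on top of OpenGL"));
        painter.fillRect(10, 40, 120, 60, QColor(0, 120, 255, 128));
    }
};

int main(int argc, char** argv)
{
    QApplication app(argc, argv);
    GLCanvas canvas;
    canvas.resize(640, 480);
    canvas.show();
    return app.exec();
}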

Set an image as a background for a DrawingArea

I'm making a platformer game using gtkmm and Cairo, and I can't find a way to set an image as the background so that I don't have to redraw it on every draw event. I'm managing images as pixbufs.
Is this actually possible, or am I thinking about it the wrong way?
Redraw events are always necessary; the difference is who has to take care of them. Lower-level libraries such as Cairo require you to do it yourself.
Maybe you should look into GooCanvas. Particularly for games, where you have to move things around easily and capture events, a higher-level library than Cairo is handy, and GooCanvas also handles screen redraws.
You can just put the image in as a GooCanvasImage item and forget about it.
If you're not bound to C++, have a look at PyGame for Python - it not only handles those events, but provides loads of other tools for game programming.
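That said, if you stay with plain gtkmm/Cairo, redrawing a cached pixbuf on every draw event is cheap as long as you load it only once. A minimal sketch, assuming gtkmm 3 (the file name and class name are placeholders):

#include <gtkmm.h>
#include <gdkmm/general.h>   // Gdk::Cairo::set_source_pixbuf

class GameArea : public Gtk::DrawingArea
{
public:
    GameArea()
    : m_background(Gdk::Pixbuf::create_from_file("background.png"))   // loaded once
    {
    }

protected:
    bool on_draw(const Cairo::RefPtr<Cairo::Context>& cr) override
    {
        // background: just blit the cached pixbuf, no per-frame decoding
        Gdk::Cairo::set_source_pixbuf(cr, m_background, 0, 0);
        cr->paint();

        // ...draw the player, platforms, etc. on top here...
        return true;
    }

private:
    Glib::RefPtr<Gdk::Pixbuf> m_background;
};

int main(int argc, char* argv[])
{
    auto app = Gtk::Application::create(argc, argv, "org.example.platformer");
    Gtk::Window window;
    GameArea area;
    window.add(area);
    window.show_all();
    return app->run(window);
}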

Windows Imaging Component - Direct2D C++ - Drawing, Saving

Using the Windows Imaging Component (WIC), I want to do the following in my Windows desktop application (Direct2D/C++ on Windows 7 SP1, Visual Studio 2013):
1. Choose any type of render target (Direct2D HWND / bitmap / WIC bitmap, etc.) for drawing.
2. Create an empty bitmap (D2D1Bitmap or IWICBitmap, whichever is applicable).
3. Begin draw - fill a colour, draw some lines and ellipses - end draw (all into the bitmap).
4. At some point, save the drawn content of the bitmap as an image file on my computer.
5. Place the bitmap between x1,y1 (top left) and x2,y2 (bottom right) of the render target, because the rest of the window's space will be used by a toolbar.
How do I achieve this using C++/Direct2D?
GDI+ Code for my functionality:
Bitmap* pBmp = NULL;     // the offscreen bitmap
Graphics* pGrBuf = NULL; // graphics object that will draw into it
pBmp = new Bitmap((INT)rectClient.Width, (INT)rectClient.Height);
pGrBuf = new Graphics(pBmp);
On this Graphics I can always draw lines, rectangles, etc.:
pGrBuf->DrawRectangle(....)
pGrBuf->DrawLine(...)
In the end, to achieve point number 5:
//leave some space (30, 30 in xy coordinates) at the top for the toolbox
pGrBuf->DrawImage(m_pBmp, 30.0f, 30.0f);
The code for point 4 is intentionally omitted.
The question has a simple, unambiguous answer, but there are some details you should (re)consider.
Direct2D is not a panacea framework that will easily outperform the others. It's not very clear what your drawings are about and what their purpose is, but there are cases where using Direct2D is not very appropriate. If you replace GDI(+) with D2D, some of the drawbacks will be:
(officially) limited OS support, depending on the DirectX version and/or the functions you use. You will have to forget about Windows XP, (very possibly) Windows Vista and (less likely) Windows 7
the performance (compared to GDI+ and GDI) is not always better. It mainly depends on how and for what purpose you use D2D. There are cases where D2D performs very poorly (usually because of wrong usage or misunderstood concepts).
On the other hand, the advantages Direct2D can provide are countless.
Roughly speaking, Direct2D is a wrapper around Direct3D. It was introduced with DirectX 10, and its usage was very similar to GDI(+). With DirectX 11.1, however, the Direct2D "principles" changed and it is now more D3D-like: it adds new approaches and drops old ones. This can be a little confusing at first - also because the tutorials, articles and other D2D resources on the web (including MSDN) are mixed up between the D2D versions; some cover the old version and recommend one approach, others describe the new one.
Anyway, I recommend the new version, i.e. Direct2D 1.1 (the one that ships with DirectX 11.1).
To your question...
The "RenderTarget" is a concept from the "old" D2D. The new one is a DeviceContext
The DeviceContext has a target that could be a D2D1Bitmap(1) - offscreen one, a swap chain's back buffer.
The most typical drawing approach is to call drawing functions within a DeviceContext.BeginScene --- DeviceContext.EndScene block. The drawing functions are very similar to the GDI(+) ones.
There are several ways to do that. You can do it with the help of WIC. Also you can copy your D2D1Bitmap data to a DIBBitmap or you can even (re)draw it over a GDI context.
There is a function DeviceContext.DrawImage, but the way you will do it depends on many things. For example, you could have two bitmaps, that are drawn over two different HWnd (one for the toolbar, another one for the other drawing).
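Putting points 1-3 together, a rough sketch of the Direct2D 1.1 setup could look like this (error handling omitted; the sizes, colours and BGRA format are arbitrary choices, not requirements):

#include <d3d11.h>
#include <d2d1_1.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "d2d1.lib")

using Microsoft::WRL::ComPtr;

void DrawOffscreen()
{
    // D3D11 device with BGRA support (required for Direct2D interop)
    ComPtr<ID3D11Device> d3dDevice;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                      D3D11_CREATE_DEVICE_BGRA_SUPPORT, nullptr, 0,
                      D3D11_SDK_VERSION, &d3dDevice, nullptr, nullptr);
    ComPtr<IDXGIDevice> dxgiDevice;
    d3dDevice.As(&dxgiDevice);

    // D2D factory -> device -> device context (the "new" render target)
    ComPtr<ID2D1Factory1> factory;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, factory.GetAddressOf());
    ComPtr<ID2D1Device> d2dDevice;
    factory->CreateDevice(dxgiDevice.Get(), &d2dDevice);
    ComPtr<ID2D1DeviceContext> dc;
    d2dDevice->CreateDeviceContext(D2D1_DEVICE_CONTEXT_OPTIONS_NONE, &dc);

    // Point 2: an offscreen bitmap the device context can render into
    D2D1_BITMAP_PROPERTIES1 props = D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_TARGET,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED));
    ComPtr<ID2D1Bitmap1> target;
    dc->CreateBitmap(D2D1::SizeU(640, 480), nullptr, 0, &props, &target);
    dc->SetTarget(target.Get());

    // Point 3: draw between BeginDraw and EndDraw
    ComPtr<ID2D1SolidColorBrush> brush;
    dc->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::CornflowerBlue), &brush);
    dc->BeginDraw();
    dc->Clear(D2D1::ColorF(D2D1::ColorF::White));
    dc->FillRectangle(D2D1::RectF(10, 10, 200, 120), brush.Get());
    dc->DrawEllipse(D2D1::Ellipse(D2D1::Point2F(320, 240), 80, 50), brush.Get(), 2.0f);
    dc->DrawLine(D2D1::Point2F(0, 0), D2D1::Point2F(640, 480), brush.Get(), 1.5f);
    dc->EndDraw();

    // Point 5: with the window's back buffer selected as the target instead,
    // the finished bitmap can be placed below a 30px toolbar strip with
    // dc->DrawImage(target.Get(), D2D1::Point2F(30.0f, 30.0f));
}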
Here are some resources that could help you:
What is Direct2D for
Drawing a rectangle with Direct2D
Very well explained guide about migrating to Direct2D 1.1
Answer to another question here, related to Direct2D, which briefly explains how you should draw to an HWND

C++ Draw a small dot in the center of a screen

I want to draw a small dot at the center of the screen that remains visible no matter what application is running. The dot should stay even after I launch an application in full-screen mode - like a dead pixel.
I have Visual C++ installed on my computer with Windows 7. I have some experience with C++, but I have never worked with graphics under Windows.
How can I draw a dot on a screen?
Many graphics cards have overlay features, and it is likely possible to set one up to be foremost on the screen regardless of what other applications are rendering in other layers.
But the method to do that would be specific to the video card model and driver.
Or you can try to get your code inside the application doing the full-screen rendering, find its rendering context, and draw to it at the right time - which still requires a bunch of variants for all the different graphics APIs.
Here is someone who describes Steam's attempt to solve the portability issue (with a zillion implementations) and how to take advantage of that.
I would create a properly positioned 1x1-pixel (or whatever size you need) window with no border or title bar - all client area - and paint it appropriately. It's important that the window is created with the WS_EX_TOPMOST style. As long as your program is running, the window will stay visible as long as no other window with that style overlaps it.
I've done this as a prank. It worked really well over a full-screen OpenGL game (Quake III). I installed it on a friend's machine so that it would flash the word LOSER! in big letters in the center of the screen at random times during the game.
This worked perfectly well on an XP system. I imagine it should work on Windows 7.
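A rough sketch of that approach (plain Win32; the colour, size and class name are arbitrary):

#include <windows.h>

LRESULT CALLBACK DotProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_PAINT)
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        SetPixel(hdc, 0, 0, RGB(255, 0, 0));   // the dot itself
        EndPaint(hwnd, &ps);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE, LPSTR, int)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = DotProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = TEXT("CenterDot");
    RegisterClass(&wc);

    const int cx = GetSystemMetrics(SM_CXSCREEN) / 2;
    const int cy = GetSystemMetrics(SM_CYSCREEN) / 2;

    // borderless 1x1 popup, always on top, not shown in the taskbar
    HWND hwnd = CreateWindowEx(WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
                               TEXT("CenterDot"), TEXT(""),
                               WS_POPUP | WS_VISIBLE,
                               cx, cy, 1, 1,
                               nullptr, nullptr, hInst, nullptr);
    if (!hwnd)
        return 0;

    MSG msg;
    while (GetMessage(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    return 0;
}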

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has become huge (C++), but below are the main steps I am following:
Create a bitmap the same size as the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For WM_PAINT, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call.
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one with the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting - in other words, only when its state has changed.
The system works fine, with one exception: while the window is being resized, a sort of tearing effect appears. What I mean by tearing effect I shall try to explain:
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is only half done, another image drawing starts.
Now you may think this is a common artifact seen in many other applications, since resizing the backbuffer is not always as fast as resizing the window, but I noticed that in other applications, although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (usually just a thin uniform strip of white background shows up along the border).
Also, the dynamic controls that move with window resizing act jerky during sizing.
At first it seemed to me that using a constant full-screen-sized offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI is not hardware accelerated on Vista - could that be the problem?
I also wonder how frameworks such as Qt render windowless GUIs so smoothly; even if you resize a complex Qt GUI window very fast, hardly any artifacts appear. As far as I know, Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX, then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without problems, but then I would lose all the 2D drawing capabilities of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user interface design, please share the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing and custom drawing don't seem to give you that level of control, and I don't know of a way to draw a semi-transparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be the WM_ERASEBKGND message that's causing it?
See this question: GDI+ double buffering in C++
Also, if you need a fast response from your GUI, I would advise against GDI+.
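If background erasing is indeed the culprit, the usual fix is to claim WM_ERASEBKGND as handled so the only thing that ever reaches the screen is your single blit of the offscreen bitmap. A window-procedure sketch (the bitmap pointer is a hypothetical placeholder for whatever your code keeps):

#include <windows.h>
#include <gdiplus.h>

extern Gdiplus::Bitmap* g_pOffscreen;   // hypothetical: your prepared backbuffer

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        return 1;                        // "already erased": suppresses the background flash

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        Gdiplus::Graphics g(hdc);
        g.DrawImage(g_pOffscreen, 0, 0); // one blit of the finished frame
        EndPaint(hwnd, &ps);
        return 0;
    }

    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}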