Simulation using an MFC application

I am doing a simulation using an MFC application (a robot moving in a field). The process behind it calculates the position too quickly, whereas drawing takes time, so what I ultimately see is the robot at the end position, with no intermediate positions. But when I put in an AfxMessageBox, I can see all the positions it goes through. Can you help me figure this out?

Hina, what you need to do is move the complex calculation that computes the robot's position into a worker thread and keep the drawing of the robot in your main thread. Then communicate each new position to the main thread and invalidate your drawing surface so it repaints. This way you will see the position updated frequently.
What happens when you display a message box is that the message loop gets pumped, so the surface gets a chance to repaint after each calculation step.
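A minimal MFC sketch of that split, assuming a hypothetical CSimView class and ComputeRobotPosition() helper; the worker posts each position to the UI thread with a user-defined message:

```cpp
// Hypothetical message and view class names -- adapt to your project.
#define WM_ROBOT_POS (WM_APP + 1)

// Worker thread: compute positions, post each one to the UI thread.
UINT SimulationThread(LPVOID param)
{
    HWND hView = reinterpret_cast<HWND>(param);
    for (int step = 0; step < 1000; ++step)
    {
        CPoint pos = ComputeRobotPosition(step);   // your existing calculation
        // PostMessage is safe across threads; pack x/y into the params.
        ::PostMessage(hView, WM_ROBOT_POS, (WPARAM)pos.x, (LPARAM)pos.y);
        ::Sleep(10);                               // pace the simulation
    }
    return 0;
}

// In the view's message map: ON_MESSAGE(WM_ROBOT_POS, OnRobotPos)
LRESULT CSimView::OnRobotPos(WPARAM wx, LPARAM wy)
{
    m_robotPos = CPoint((int)wx, (int)wy);
    Invalidate(FALSE);   // repaint without erasing, to reduce flicker
    return 0;
}

// Started from e.g. OnInitialUpdate():
//   AfxBeginThread(SimulationThread, (LPVOID)GetSafeHwnd());
```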

You can speed up the drawing by using a memory device context. In a nutshell, you do all the drawing on a bitmap in memory, which is fast; when it is all ready, you blit the finished drawing to the display in one operation. This will be very fast and smooth.
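A minimal sketch of such a double-buffered OnPaint in MFC (handler wired with ON_WM_PAINT; DrawField and DrawRobot are hypothetical helpers for your scene):

```cpp
void CSimView::OnPaint()
{
    CPaintDC dc(this);
    CRect rc;
    GetClientRect(&rc);

    // Draw everything into an off-screen bitmap first...
    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    CBitmap bmp;
    bmp.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());
    CBitmap* oldBmp = memDC.SelectObject(&bmp);

    memDC.FillSolidRect(rc, RGB(255, 255, 255));   // background
    DrawField(&memDC);                             // hypothetical scene helper
    DrawRobot(&memDC, m_robotPos);                 // hypothetical robot helper

    // ...then copy the finished frame to the screen in one fast blit.
    dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(oldBmp);
}
```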

Related

OpenGL: How to minimize drawing?

My OpenGL screen consists of 2 triangles and 1 texture, nothing else. I'd like to update the screen as little as possible, to save power and limit CPU/GPU usage. Unfortunately, when my draw_scene routine returns early without drawing anything, the OpenGL screen goes black, even if I never call glutSwapBuffers. My background color is not black, by the way. It seems that if I do not draw, the OpenGL window loses its contents. How can I minimize the amount of drawing that is done?
Modern graphics systems assume that when a redraw is initiated, the whole contents are redrawn. Furthermore, if you get a redraw event from the graphics system, that's usually because the contents of the window have become undefined and need to be recreated, so you must redraw in that situation.
To save power, disable the idle loop (or have it skip all its work and immediately yield back to the OS scheduler) and don't have timers generating events.
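A minimal GLUT sketch of that event-driven setup; the scene drawing itself is elided:

```cpp
#include <GL/glut.h>

// Redraw only when the window system asks for it: register a display
// callback but no idle function and no timers, so the process sleeps
// between events instead of spinning.
void draw_scene()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the two textured triangles here ...
    glutSwapBuffers();   // always finish the redraw; never return early
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutCreateWindow("minimal-redraw");
    glutDisplayFunc(draw_scene);   // called only on expose/resize
    // No glutIdleFunc(...) registered: nothing runs between events.
    glutMainLoop();
    return 0;
}
```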

How to display smooth video in FireMonkey (FMX, FM3)?

Has anyone figured out how to display smooth video (i.e. a series of bitmaps) in a FireMonkey application, HD or 3D? In the VCL you could write to a canvas from a thread and this would work perfectly, but it does not work in FMX. To make things worse, apparently the only reliable way is to use TImage, and that seems to be updated from the main thread (open a menu and the video freezes temporarily). All EMB examples I could find either write to TImage from the main thread or use Synchronize(). These limitations make FMX unusable for decent video display, so I am looking for a hack or possibly a bypass of FMX. I use XE5/C++ but welcome any suggestions. Target OS is both Windows 7+ and OS X. Thanks!
How about putting a TPaintBox on your form to hold the video? In its OnPaint method you simply draw the next frame to the paintbox canvas. Now put a TTimer on the form and set the interval to the required frame rate. In the timer's OnTimer event, just call PaintBox1.Repaint.
This should give you regular frames no matter what else the program is doing.
For extra safety, you could increment a frame number in the OnTimer event. Then in the paintbox's OnPaint method you know which frame to paint. This means you won't jump frames if something other than the timer triggers a repaint; you will just end up repainting the same frame for the extra OnPaint call.
I use this for marching-ants selections, although I go one step further and use an overlaid canvas so I can draw the selection independently of the underlying paintbox canvas, removing the need to repaint the main canvas when the selection changes. That requires API calls, but I guess you won't need it unless you are doing video with a transparent colour.
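A minimal C++Builder/FMX sketch of that scheme, assuming a form with a PaintBox1, a Timer1, an int FFrameIndex member, and a hypothetical GetFrame() that returns the decoded bitmap for a given index:

```cpp
// Timer drives the frame rate; painting always happens via OnPaint.
void __fastcall TForm1::Timer1Timer(TObject *Sender)
{
    ++FFrameIndex;          // advance to the next frame
    PaintBox1->Repaint();   // triggers OnPaint on the main thread
}

void __fastcall TForm1::PaintBox1Paint(TObject *Sender, TCanvas *Canvas)
{
    // Always paint the frame named by FFrameIndex; a spurious extra
    // OnPaint just redraws the same frame instead of skipping ahead.
    TBitmap *frame = GetFrame(FFrameIndex);   // hypothetical frame source
    if (frame)
        Canvas->DrawBitmap(frame,
            TRectF(0, 0, frame->Width, frame->Height),
            TRectF(0, 0, PaintBox1->Width, PaintBox1->Height),
            1.0f /* opacity */);
}
```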
Further research, including some talks with the Itinerant developer, has unfortunately made it clear that, due to concurrency restrictions, FM has been designed so that all GPU access goes through the main thread, and painting will therefore always be limited. As a result I have decided FM is not suitable for my needs and I am re-evaluating my options.

Optimizing Drag Operations (Direct2D)

I am working on a window which displays a number of objects (a graph) (C++/Win32 API). This is in GDI+, and I have written a test piece of code with Direct2D, as I want to improve the performance when dragging objects in the window. The best approach I have found to date is the following (using a graph of 1000 nodes and 999 edges):
(basically: buffer the static content to a bitmap and only draw what's moving)
When dragging starts (e.g. on lbuttondown), create a base render target with the full graph excluding the node being dragged and its attached edges, call GetBitmap and store the result for later use. When I need to draw (due to a mousemove event while lbuttondown is true), I clear the current hwnd render target (background washed white), draw the edges connected to the node being moved into the hwnd render target, copy the saved bitmap to the hwnd render target, copy the node bitmap (created when the node is first created in the DB, which saves actually drawing it) to the hwnd render target, then call EndDraw.
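A condensed sketch of that scheme, assuming Direct2D member pointers (m_hwndRT, m_staticRT, m_staticBmp) and hypothetical drawing helpers; here the cached layer is drawn first so the moving edges and node land on top:

```cpp
// On drag start: render everything except the dragged node once.
void Graph::BeginDrag()
{
    m_hwndRT->CreateCompatibleRenderTarget(&m_staticRT);
    m_staticRT->BeginDraw();
    m_staticRT->Clear(D2D1::ColorF(D2D1::ColorF::White));
    DrawStaticGraph(m_staticRT);            // all nodes/edges except the dragged one
    m_staticRT->EndDraw();
    m_staticRT->GetBitmap(&m_staticBmp);    // cache the static layer as a bitmap
}

// On WM_MOUSEMOVE while dragging: one bitmap copy plus the moving parts.
void Graph::OnDragMove(D2D1_POINT_2F pos)
{
    m_hwndRT->BeginDraw();
    m_hwndRT->Clear(D2D1::ColorF(D2D1::ColorF::White));
    m_hwndRT->DrawBitmap(m_staticBmp);      // cheap: cached static content
    DrawAttachedEdges(m_hwndRT, pos);       // only the edges that move
    DrawNodeBitmap(m_hwndRT, pos);          // the dragged node itself
    m_hwndRT->EndDraw();
}
```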
Now this works OK (ish). What I don't like is that, when the node is dragged quickly, the mouse cursor moves ahead of the node being dragged (the distance depends on the speed of the drag/mousemove, but the worst case is up to about 1/2 inch). My reference app is MS Visio; dragging a single object there, the cursor stays in the same position over the object being dragged, give or take maybe 1/2 pixel.
What I have not tried yet is moving all (and only) the drawing operations to a separate thread, but before I try this I would like to research whether other single-threaded approaches would beat this one.
Update:
I have optimized this a bit more, with some improvement. I found I was allocating and de-allocating the edge brush in the draw function; I have moved it out to a class-wide object, initialized for the life of the class like the other brushes etc. The cursor now only gets a little way (2 pixels or so) outside the object being dragged when dragging quickly; the object is a circle of 15 px radius, so the cursor can move up to 17 px from the middle of the object (the point the cursor should stick to). In testing I found an interesting thing: on my main monitor the drag is worse, in that the cursor can get ahead of the object by more than 17 px, maybe up to 25 px from the center point where the cursor should be fixed. On the second monitor of the extended desktop (i.e. no taskbar) the drag is better, as described previously. If I hide the taskbar on the main monitor and run the app there, the drag performance is the same as on the second monitor.
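Hoisting brush creation out of the per-frame path is the standard Direct2D device-resource pattern; a minimal sketch with assumed member names:

```cpp
// Create device-dependent resources once and reuse them every frame.
HRESULT Graph::CreateDeviceResources()
{
    if (m_edgeBrush)            // already created
        return S_OK;
    return m_hwndRT->CreateSolidColorBrush(
        D2D1::ColorF(D2D1::ColorF::Black), &m_edgeBrush);
}

void Graph::OnRender()
{
    CreateDeviceResources();            // cheap no-op after the first call
    m_hwndRT->BeginDraw();
    // ... draw using m_edgeBrush; never create brushes in here ...
    if (m_hwndRT->EndDraw() == D2DERR_RECREATE_TARGET)
    {
        m_edgeBrush->Release();         // device lost: recreate next frame
        m_edgeBrush = nullptr;
    }
}
```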

How to efficiently render double buffered window without any tearing effect?

I want to create my own tiny windowless GUI system, and for that I am using GDI+. I cannot post the code here because it has grown huge (C++), but below are the main steps I am following:
Create a bitmap of size equal to the application window.
For all mouse and keyboard events, update the custom control states (e.g. whether the mouse is currently held over a particular control, etc.).
For the WM_PAINT event, paint the background to the offscreen bitmap, then paint all the updated controls on top of it, and finally copy the entire offscreen image to the front buffer via a Graphics::DrawImage(..) call (see the sketch after this list).
For WM_SIZE/WM_SIZING, delete the previous offscreen bitmap and create another one at the new window size.
There are also some checks to prevent repeated drawing of controls, i.e. a control is drawn only when it needs repainting; in other words, it is painted only when its state has changed, etc.
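A condensed sketch of the WM_PAINT step above, assuming GDI+ is initialized, `offscreen` is the Gdiplus::Bitmap from the first step, and `controls` is a hypothetical list of control objects (dirty-state tracking elided for brevity):

```cpp
// WM_PAINT: compose everything into the offscreen bitmap, then copy it
// to the window in a single DrawImage call.
case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    {
        Gdiplus::Graphics back(offscreen);   // offscreen is a Gdiplus::Bitmap*
        back.Clear(Gdiplus::Color(255, 240, 240, 240));   // background wash
        for (Control* c : controls)          // hypothetical control objects
            c->Paint(back);
        Gdiplus::Graphics front(hdc);
        front.DrawImage(offscreen, 0, 0);    // one blit to the front buffer
    }
    EndPaint(hwnd, &ps);
    return 0;
}
```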
The system is working fine, with one exception: when the window is being resized, a sort of tearing effect appears. Let me try to explain what I mean by tearing effect...
On the sizing edge/border there is a flickering gap as I drag the border. It is as if my DrawImage() call returns immediately, and while one swap operation is half done, another image drawing starts up.
Now you may think this is a common artifact that happens in many other applications, since resizing the backbuffer is not always as fast as resizing the window; but in other applications I noticed that although there is a lag between the window size and the client-area size as the window grows, nothing flickers near the edge (it's usually just white background that shows up as thin uniform strips along the border).
Also, the dynamic controls which move with the window resize act jerky during sizing.
At first it seemed to me that using a constant fullscreen-size offscreen surface could minimize the artifact, but when I tried it the results were not satisfactory. I also tried calling Sleep() during sizing so that one flip completes before another starts, but strangely even that didn't work for me!
I have heard that GDI on Vista is not hardware accelerated; could that be the problem?
I also wonder how frameworks such as Qt render a windowless GUI so smoothly; even if you resize a complex Qt GUI window very fast, barely any artifact appears. As far as I know Qt can use OpenGL for GUI rendering, but that is a secondary option.
If I use DirectX then real-time resizing is even harder; OpenGL, on the other hand, seems to handle resizing without any problem, but I would lose all the 2D drawing capability of GDI+.
If any of you have done anything like this before, please guide me. Also, if you have any pointers I should consider for custom user-interface design, please provide the links.
Thanks!
I have always wished to design interfaces like Windows Media Player 11, but can someone tell me whether there is a straightforward solution for a C++ programmer (I want to know how, rather than use some existing framework)? Subclassing, owner drawing, custom drawing: nothing seems to give you that level of control, and I don't know a way to draw a semitransparent control with the common controls, so I think this question deserves some special attention. Thanks again.
Could it be a WM_ERASEBKGND message that's causing it?
see this question: GDI+ double buffering in C++
Also, if you need fast response from your GUI I would advise against GDI+.
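If WM_ERASEBKGND is the culprit, the usual fix is to claim the erase as handled so only WM_PAINT ever touches the client area; a minimal sketch:

```cpp
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_ERASEBKGND:
        // Claim the erase was handled so the system never paints the
        // background separately; WM_PAINT covers the whole client area.
        return 1;
    case WM_PAINT:
        // ... double-buffered paint as described above ...
        break;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```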

Best Drawing approach

I have developed an application in wxWidgets in which I use a bitmap for drawing. When my application launches, it reads coordinates from a file and draws lines accordingly. The application also receives UDP packets from the network; these packets contain x/y coordinate information which has to be drawn on the screen, so when a packet is received I redraw the bitmap image and display it. I also need to refresh the bitmap on the mouse-move event, because on mouse move there is some new drawing that I have to put on screen.
All this increases the operational cost and slows down my GUI, so kindly suggest some alternative drawing approach which you think might be more efficient in this situation.
I have searched on Google and found OpenGL as an option, but due to time constraints I don't want to use OpenGL, because I have no experience with it.
It sounds as if your problem is that your GUI is unresponsive to user input because the application is busy redrawing the display. There are a couple of general solutions to this kind of problem.
Draw the bitmap in memory using a worker thread. While this is going on, the main thread can continue to interact with the user. Once the bitmap has been redrawn, the worker thread signals the main thread, and the main thread then copies the completed bitmap to the screen, which is extremely fast.
Use the main thread to draw the bitmap directly to the screen, but sprinkle the drawing code with calls to wxApp::Yield(). This will allow the GUI to remain responsive to the user during a lengthy drawing process.
Option 1 is the 'best', especially on multicore machines, but keeping the two threads synchronized and preventing contention between them is a challenge unless you have significant experience with multithreaded design. Option 2 is much simpler, though you still have to be careful that user interaction doesn't start another drawing pass before the first one has finished.
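A minimal wxWidgets sketch of option 1, assuming a hypothetical RenderSceneToImage() that does the heavy drawing; the worker hands the finished image to the GUI thread with a wxThreadEvent:

```cpp
#include <wx/wx.h>
#include <wx/thread.h>

wxDEFINE_EVENT(EVT_BITMAP_READY, wxThreadEvent);

class RenderThread : public wxThread
{
public:
    explicit RenderThread(wxEvtHandler* sink)
        : wxThread(wxTHREAD_DETACHED), m_sink(sink) {}
protected:
    ExitCode Entry() override
    {
        wxImage img = RenderSceneToImage();   // hypothetical heavy drawing, off the GUI thread
        wxThreadEvent evt(EVT_BITMAP_READY);
        evt.SetPayload(img.Copy());           // deep copy: don't share COW data across threads
        wxQueueEvent(m_sink, evt.Clone());    // queue to the GUI thread
        return nullptr;
    }
private:
    wxEvtHandler* m_sink;
};

// In the frame: Bind(EVT_BITMAP_READY, &MyFrame::OnBitmapReady, this);
// started with:  (new RenderThread(this))->Run();
void MyFrame::OnBitmapReady(wxThreadEvent& evt)
{
    m_bitmap = wxBitmap(evt.GetPayload<wxImage>());  // convert on the GUI thread
    Refresh(false);                                  // the fast blit happens in OnPaint
}
```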
Alternatively, save off the data to draw instead of refreshing the bitmap on every event, and have the main loop refresh the bitmap from time to time.
This way the program never bogs down. The downside, of course, is that reactivity will be lower (i.e. when data comes in, it won't be seen on screen for another 20 milliseconds or so instead of right away).
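A minimal sketch of that approach in wxWidgets, assuming hypothetical packet and redraw helpers; the network handler only stores data, and a wxTimer drives the actual repaints:

```cpp
// Incoming UDP data only updates a data structure -- no drawing here.
void MyFrame::OnUdpPacket(const Packet& p)   // Packet is a hypothetical type
{
    m_points.push_back(p.Coords());          // cheap: just store the coordinates
}

// A wxTimer repaints at a fixed rate, e.g. m_timer.Start(20) for ~50 fps.
void MyFrame::OnTimer(wxTimerEvent&)
{
    RegenerateBitmap(m_points);   // hypothetical: redraw the bitmap once per tick
    Refresh(false);               // blit the fresh bitmap in OnPaint
}
```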