Direct2D - preserve the existing content and overwrite with the new values - C++

I am planning to develop an XY plotter for my application. To give a basic idea of what it should look like (of course the implementation would be different), please refer here and here.
During the simulation (let's assume it takes 4 hours to complete), the new Y values should be (over)written on a fixed X axis.
But the problem with Direct2D is that every time pRenderTarget->BeginDraw() is called, the existing drawing (plot/bitmap/image, etc.) is discarded and a new image is drawn. Therefore I would lose the old values.
Of course, I could always buffer the old Y values in a variable and redraw them in the next frame. But the simulation runs for 4 hours and unfortunately I can't afford to store all the Y values. That's why I need to render/draw the new Y values onto the existing target image/plot.
And if I don't call pRenderTarget->EndDraw() within a definite amount of time, my application crashes due to resource constraints.
How do I prevent this problem and achieve the requirement?

What you're asking is quite a complex requirement - it's more difficult than it appears! Direct2D is an immediate-mode drawing API; in immediate-mode graphics there is no state maintenance or persistence of what you have drawn to the screen.
Most immediate-mode graphics APIs do have the concepts of clipping and dirty rects, and in Direct2D you can use these to restrict drawing to a subset of the screen. Rendering off-screen to a bitmap and double-buffering might be a good technique to try, e.g. your process becomes:
Draw to off-screen bitmap
Blit bitmap to screen
On new data, draw to a new bitmap / combine with existing bitmaps
This technique will only work if your plot is not scrolling or changing in scale as you append new data / draw.
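A minimal sketch of that approach, assuming you already have an ID2D1HwndRenderTarget (pRenderTarget) and an ID2D1SolidColorBrush (pBrush); EnsureOffscreen() and AppendSample() are illustrative names, not part of your code:

#include <d2d1.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID2D1BitmapRenderTarget> pOffscreen;   // created once, reused for every new sample

void EnsureOffscreen(ID2D1HwndRenderTarget* pRenderTarget)
{
    if (!pOffscreen)
    {
        // The compatible target inherits size and pixel format from the window target.
        pRenderTarget->CreateCompatibleRenderTarget(&pOffscreen);
        pOffscreen->BeginDraw();
        pOffscreen->Clear(D2D1::ColorF(D2D1::ColorF::White));
        pOffscreen->EndDraw();
    }
}

void AppendSample(ID2D1HwndRenderTarget* pRenderTarget,
                  ID2D1SolidColorBrush* pBrush,
                  D2D1_POINT_2F prev, D2D1_POINT_2F curr)
{
    EnsureOffscreen(pRenderTarget);

    // 1) Draw only the newest segment into the off-screen target. Everything
    //    drawn earlier is preserved because this bitmap is never cleared again.
    pOffscreen->BeginDraw();
    pOffscreen->DrawLine(prev, curr, pBrush, 1.0f);
    pOffscreen->EndDraw();

    // 2) Blit the accumulated bitmap to the window.
    ComPtr<ID2D1Bitmap> pBitmap;
    pOffscreen->GetBitmap(&pBitmap);

    pRenderTarget->BeginDraw();
    pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
    pRenderTarget->DrawBitmap(pBitmap.Get());
    pRenderTarget->EndDraw();   // HRESULT checks (e.g. D2DERR_RECREATE_TARGET) omitted
}

Because only the newest segment is drawn each time, you never have to keep the full 4-hour history of Y values in your own buffers; the off-screen bitmap accumulates them for you.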

Related

Retaining and combining rendered pixels

Note: in my OpenGL project I have enabled double buffering, like so: SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1).
How do I retain the pixels after calling SDL_GL_SwapBuffers(), so that I can reuse the rendered pixels without having to render them again? And then, how do I combine the retained pixels as the background layer, clear the buffer with glClear(), and render polygons on top of the background layer?
Provide commented sample code.
Technically you might be able to get the old contents of the back buffer back, depending on which swap method you have selected. This is a total hack, but it could work. If the swap method is exchange and you swap buffers again without clearing the color buffer, you might have an old copy of the front buffer lying around in the back buffer. If your swap method is copy, then your back buffer should never be cleared unless you issue glClear (...) yourself. Be careful, because there is a third common swap option that leaves the contents of the buffers undefined if you try to read them after swapping.
The last swap behavior I mentioned is common on embedded graphics devices, like PowerVR (iOS). Not so much on desktops. And this all assumes that OpenGL's window system implementation is using 1 frontbuffer and 1 backbuffer, which brings me back to the statement that this is a total hack. Behind the scenes implementations can implement triple-buffering, and most of the window system APIs do not even provide a way to request the number of backbuffers let alone query it. Swap chains are nasty things in the GL world :-\
In short, frame-amortized rendering (using values computed during prior frames to finish an algorithm) can be accomplished in OpenGL, but you will only make life more difficult if you try to use the actual front/back buffer(s) that the window system (e.g. WGL, glX, CGL, EGL) uses. What you need to do is quite simple: draw into an FBO and manage a swap chain of FBOs yourself. This will unfortunately increase memory requirements, but it is how most modern graphics engines do amortization.
You will need to look up FBOs yourself for this one; I explained the theory, and that is really all you can expect (for future reference) since the question did not include any code.
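For future readers, here is a minimal sketch of the FBO approach, assuming GLEW, SDL 1.2 and an OpenGL 3.0+ context; the function names and the DrawBackgroundGeometry()/DrawDynamicPolygons() calls are placeholders, not code from the question:

#include <GL/glew.h>
#include <SDL.h>

GLuint gFbo = 0, gBackgroundTex = 0;
int gWidth = 800, gHeight = 600;           // assumed window size

// Render the expensive background exactly once into a texture attached to an FBO.
void CreateBackgroundLayer()
{
    glGenTextures(1, &gBackgroundTex);
    glBindTexture(GL_TEXTURE_2D, gBackgroundTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, gWidth, gHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffers(1, &gFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, gFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, gBackgroundTex, 0);

    glClear(GL_COLOR_BUFFER_BIT);
    // DrawBackgroundGeometry();            // hypothetical one-time rendering

    glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the window's framebuffer
}

// Every frame: reuse the retained pixels, then draw the dynamic polygons on top.
void RenderFrame()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // 1) Copy the cached background into the back buffer.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, gFbo);
    glBlitFramebuffer(0, 0, gWidth, gHeight, 0, 0, gWidth, gHeight,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);

    // 2) Render the per-frame polygons over it.
    // DrawDynamicPolygons();                // hypothetical

    SDL_GL_SwapBuffers();                    // SDL 1.2; use SDL_GL_SwapWindow() on SDL2
}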

Fastest way of plotting a point on screen in MFC C++ app

I have an application that contains many millions of 3D RGB points that form an image when plotted. What is the fastest way of getting them to the screen in an MFC application? I've tried CDC::SetPixelV in conjunction with a bitmap, which seems quite slow, and am looking towards a Direct3D or OpenGL window in my MFC view class. Any other good places to look?
Double buffering is your solution. There are many examples on CodeProject. Check this one, for example.
Sounds like a point cloud. You might find some good information searching on that term.
3D hardware is the fastest way to take 3D points and get them into a 2D display, so either Direct3D or OpenGL seem like the obvious choices.
If the number of points is much greater than the number of pixels in your display, then you'll probably first want to cull points that are trivially outside the view. You put all your points in some sort of spatial partitioning structure (like an octree) and omit the points inside any node that's completely outside the viewing frustum. This reduces the amount of data you have to push from system memory to GPU memory, which will likely be the bottleneck. (If your point cloud is static, you're just building a fly-through, and your GPU has enough memory, you could skip the culling, send all the data at once, and then just update the transforms for each frame.)
If you don't want to use the GPU and instead write a software renderer, you'll want to render to a bitmap that's in the same pixel format as your display (to eliminate the chance of the blit needing to do any pixel format conversion as it blasts the bitmap to the display). For reasonable window sizes, blitting at 30 frames per second is feasible, but it might not leave much time for the CPU to do the rendering.
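A minimal sketch of that software route, assuming a 32-bit display; the Point3D struct, DrawPoints() and the trivial drop-the-Z projection are placeholders rather than a real point-cloud renderer:

#include <afxwin.h>
#include <cstring>
#include <vector>

struct Point3D { float x, y, z; COLORREF color; };

void DrawPoints(CDC& dc, const CRect& client, const std::vector<Point3D>& points)
{
    const int w = client.Width(), h = client.Height();

    // Top-down 32 bpp DIB, matching the usual display format (BGRA in memory).
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = w;
    bmi.bmiHeader.biHeight      = -h;        // negative height => top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HBITMAP hDib = CreateDIBSection(dc, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    if (!hDib) return;

    // Plot directly into the pixel buffer. Projecting 3D to 2D is your job;
    // dropping z here is just a trivial placeholder projection.
    DWORD* pixels = static_cast<DWORD*>(bits);
    std::memset(pixels, 0, static_cast<size_t>(w) * h * 4);
    for (const Point3D& p : points)
    {
        const int px = static_cast<int>(p.x), py = static_cast<int>(p.y);
        if (px >= 0 && px < w && py >= 0 && py < h)
            pixels[py * w + px] = RGB(GetBValue(p.color),   // swap to the DIB's BGR order
                                      GetGValue(p.color),
                                      GetRValue(p.color));
    }

    // A single BitBlt puts the whole frame on screen.
    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    HGDIOBJ hOld = ::SelectObject(memDC, hDib);
    dc.BitBlt(0, 0, w, h, &memDC, 0, 0, SRCCOPY);
    ::SelectObject(memDC, hOld);
    ::DeleteObject(hDib);
}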

OpenGL height-map painting using CUDA VBO

I've asked several questions regarding VBOs previously, and from the comments I received I decided that a new approach must be taken.
To put it simply: I'm trying to draw the Mandelbrot set, which is defined on a large float array of around 512x512 points. The purpose of my program is to let the user control the zooming and the world's orientation (it's a 3D model).
So far I've painted the entire thing using GL_TRIANGLE_STRIP, which turned out to be a bad choice because of its slow painting process, and also because implementing my painting style (the order of calling glVertex) became impossible to code with VBOs.
So I've got several questions.
Even after this description I'm not sure whether a VBO is the best choice, because it's up to the user to control the calculations. For each calculation he triggers in the program, I have to recompute the Mandelbrot set (~60 ms) and re-copy the points to the buffer: a process which takes some time (? ms).
The program also allows the user to move around the world; no calculations are done then, so a VBO is an excellent choice for that.
1. What's the best way to paint a height map (when each cell in the array holds only the height)?
2. How can I apply it to a VBO and transfer it to CUDA (cudaRegisterBuffer or something like that)?
3. Is there a way to distinguish between the modes and decide when VBOs are needed (in the no-calculations mode) and when they aren't (the calculations mode)?
You don't need to copy the CUDA data each frame if you bind the CUDA array/VBO to the DirectX/OpenGL VB (refer to the CUDA Programming Guide for details).
One way to render the data as a height field is to use the geometry shader to emit the triangles based on the height field. Another way is to use the height field as a parallax map (ref DirectX SDK). My personal fave would be to make your height field an array of positions (X/Y/Z) and use CUDA to modify only the Y values, then use an index buffer to define the polygons that compose the surface. Note that you'll also need to update the vertex normals, and you may also want to use XYZ/UV if you want to texture the surface.
If 512x512 is too big, use raster ops (texture sampling) to populate a lower-resolution height field of the region of interest. You can do this stage in CUDA or OpenGL/DirectX (I'd recommend doing it in CUDA, where you can easily write your own sampling kernel to look up pixels when down-sampling).
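A minimal sketch of the register-once, map-per-recompute flow with the CUDA runtime API (the actual registration call is cudaGraphicsGLRegisterBuffer); GRID, the function names and the placeholder kernel are illustrative assumptions:

#include <GL/glew.h>
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

const int GRID = 512;                       // 512x512 height field, as in the question
GLuint gVbo = 0;
cudaGraphicsResource* gCudaVbo = nullptr;

// Placeholder kernel: the real one would write the Mandelbrot iteration count
// for (verts[i].x, verts[i].z) into verts[i].y.
__global__ void updateHeights(float4* verts, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        verts[i].y = 0.0f;
}

void CreateAndRegisterVbo()
{
    glGenBuffers(1, &gVbo);
    glBindBuffer(GL_ARRAY_BUFFER, gVbo);
    glBufferData(GL_ARRAY_BUFFER, GRID * GRID * sizeof(float4),
                 nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Register the VBO with CUDA once; after this no per-frame CPU copy is needed.
    cudaGraphicsGLRegisterBuffer(&gCudaVbo, gVbo, cudaGraphicsMapFlagsNone);
}

void RecomputeHeights()
{
    float4* devPtr = nullptr;
    size_t  bytes  = 0;

    // Map the VBO into CUDA's address space, rewrite only the Y values, unmap.
    cudaGraphicsMapResources(1, &gCudaVbo, 0);
    cudaGraphicsResourceGetMappedPointer(reinterpret_cast<void**>(&devPtr), &bytes, gCudaVbo);

    const int n = GRID * GRID;
    updateHeights<<<(n + 255) / 256, 256>>>(devPtr, n);

    cudaGraphicsUnmapResources(1, &gCudaVbo, 0);   // OpenGL may now draw from the VBO
}

In move-only mode you simply skip RecomputeHeights() and keep drawing the VBO as-is, which answers question 3: the distinction is whether the kernel needs to run before the next draw.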

How does Photoshop (or other drawing programs) blit?

I'm getting ready to make a drawing application in Windows. I'm just wondering, do drawing programs have a memory bitmap which they lock, then set each pixel, then blit?
I don't understand how Photoshop can move entire layers without lag or flicker without using hardware acceleration. Also in a program like Expression Design, I could have 200 shapes and move them around all at once with no lag. I'm really wondering how this can be done without GPU help.
Also, I don't think super-efficient algorithms alone could account for that, could they?
Look at this question:
Reduce flicker with GDI+ and C++
All you can do about DC drawing without GPU is to reduce flickering. Anything else depends on the speed of filling your memory bitmap. And here you can use efficient algorithms, multithreading and whatever you need.
Certainly modern Photoshop uses GPU acceleration if available. Another possible tool is DMA. You may also find it helpful to read the source code of existing programs like GIMP.
Double (or more) buffering is the way it's done in games, where we're drawing a ton of crap into a "back" buffer while the "front" buffer is being displayed. Then when the draw is done, the buffers are swapped (a pointer swap, not copies!) and the process continues in the new front and back buffers.
Triple buffering offers another bonus, in that you can start drawing two-frames-from-now when next-frame is done, but without forcing a buffer swap in the middle of the screen refresh. Many games do the buffer swap in the middle of the refresh, but you can sometimes see it as visible artifacts (tearing) on the screen.
Anyway - for an app drawing bitmaps into a window, if you've got some "slow" operation, do it into a buffer that isn't being displayed while you keep presenting the already-rendered version to the rendering API, e.g. GDI. Let the system software handle all of the fancy updating.
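A minimal sketch of that pattern in MFC, assuming a CWnd-derived window; CPlotWnd, RenderScene() and m_backBuffer are placeholder names. The point is that the expensive drawing happens off-screen and WM_PAINT is just a single BitBlt:

#include <afxwin.h>

// Hypothetical window class: the back buffer persists between paints, all the
// expensive drawing happens in RebuildBackBuffer(), and OnPaint() is just a blit.
class CPlotWnd : public CWnd
{
public:
    void RebuildBackBuffer();              // call only when the scene actually changes
protected:
    void RenderScene(CDC& dc, const CRect& rc);
    afx_msg void OnPaint();
    CBitmap m_backBuffer;
    DECLARE_MESSAGE_MAP()
};

BEGIN_MESSAGE_MAP(CPlotWnd, CWnd)
    ON_WM_PAINT()
END_MESSAGE_MAP()

void CPlotWnd::RenderScene(CDC& dc, const CRect& rc)
{
    dc.FillSolidRect(rc, RGB(255, 255, 255));
    // ... the expensive drawing (layers, shapes, millions of pixels) goes here ...
}

void CPlotWnd::RebuildBackBuffer()
{
    CClientDC dc(this);
    CRect rc;
    GetClientRect(&rc);

    m_backBuffer.DeleteObject();
    m_backBuffer.CreateCompatibleBitmap(&dc, rc.Width(), rc.Height());

    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    CBitmap* pOld = memDC.SelectObject(&m_backBuffer);
    RenderScene(memDC, rc);
    memDC.SelectObject(pOld);

    Invalidate(FALSE);                     // FALSE: no background erase, so no flicker
}

void CPlotWnd::OnPaint()
{
    CPaintDC dc(this);
    if (!m_backBuffer.GetSafeHandle())
        RebuildBackBuffer();               // first paint: build the buffer once

    CRect rc;
    GetClientRect(&rc);

    CDC memDC;
    memDC.CreateCompatibleDC(&dc);
    CBitmap* pOld = memDC.SelectObject(&m_backBuffer);
    dc.BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(pOld);
}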

The legacy device context is too coarse

I have a Process Control system. It has a huge 2D workspace where all the logic is laid out.
The 2D workspace is a coordinate system.
You usually do not see the whole workspace at once, but rather some in-zoomed part of it focusing on some part of the controlled process. Such subsystem views are bookmarked into predefined named images (Power Generator1, Diesel Generator, Main lubrication pump etc).
This workspace interacts with many legacy MFC software components that individually contribute graphics onto the workspace (the device context is passed around to all contributors).
Now, one of the software components renders AutoCAD drawings onto the surface. However, the resolution of the device context is not sufficient for the details of this job. The device context logical resolution is unfortunately dictated by our own coordinate system, which at high zoom levels is quite different from the device units (pixels).
For example, a line drawn using
DC.MoveTo(1,1);
DC.LineTo(1,2);
.... will actually, even though it's drawn directly onto the device context by an increment of just one logical unit, cover quite some distance on the screen. But the width of the line would still be only one device pixel. A circle looks high-res, but its data (center point and radius) can only be specified in coarse increments.
I have considered the following options:
* When a predefined image is loaded and displayed, create a device context with a better-suited resolution. The problem would then be that the other graphics providers interact with it using the old logical units, which, when used against the new DC, would result in graphical elements that are far too small and displaced.
I wonder if I can create some DC wrapper that accepts both kinds of coordinates through different APIs, which are then translated into high res coordinates internally.
Is it possible to have two DCs with different logical/device unit ratio? And render them both to screen?
I mentioned that a circle is rendered beautifully with one-pixel width even though its placement and radius are restricted. Vertical lines are also rendered beautifully, even though the end points can only be given in coarse coordinates. This leads me to believe that it is technically possible to draw in an area that in DC logical coordinates could only be described in decimals.
Does anybody have any idea about what to do?
You need to scale your model, not the device context.
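A minimal sketch of that "scale the model" idea, assuming an MM_ANISOTROPIC mapping mode; SCALE, SetupFineMapping() and DrawFineLine() are illustrative names, not part of the existing code:

#include <afxwin.h>

const int SCALE = 16;   // 16 addressable positions per legacy logical unit

void SetupFineMapping(CDC& dc, CSize logicalExt, CSize deviceExt)
{
    dc.SetMapMode(MM_ANISOTROPIC);
    // Same viewport extent (pixels), SCALE-times larger window extent (logical
    // units): one legacy unit now corresponds to SCALE fine units on screen.
    dc.SetWindowExt(logicalExt.cx * SCALE, logicalExt.cy * SCALE);
    dc.SetViewportExt(deviceExt.cx, deviceExt.cy);
}

void DrawFineLine(CDC& dc, double x0, double y0, double x1, double y1)
{
    // Legacy coordinates arrive as doubles in the old unit system; multiplying
    // by SCALE lets endpoints land on 1/SCALE fractions of a legacy unit.
    dc.MoveTo(static_cast<int>(x0 * SCALE + 0.5), static_cast<int>(y0 * SCALE + 0.5));
    dc.LineTo(static_cast<int>(x1 * SCALE + 0.5), static_cast<int>(y1 * SCALE + 0.5));
}

The other graphics providers could keep submitting their old coordinates through a thin wrapper that multiplies by the same factor, so their elements still land where they used to, while the AutoCAD renderer gains sub-unit precision.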
You could draw the high-def image to another DC in a new window and place that window over your low-res drawing. Of course, you have to handle the clipping yourself.