I have a Process Control system. It has a huge 2D workspace where all the logic is laid out.
The 2D workspace is a coordinate system.
You usually do not see the whole workspace at once, but rather a zoomed-in part of it focusing on some part of the controlled process. Such subsystem views are bookmarked as predefined named images (Power Generator1, Diesel Generator, Main lubrication pump, etc.).
This workspace interacts with many legacy MFC software components that individually contribute graphics onto the workspace (the device context is passed around to all contributors).
Now, one of the software components renders AutoCAD drawings onto the surface. However, the resolution of the device context is not sufficient for the details of this job. The device context's logical resolution is unfortunately dictated by our own coordinate system, which at high zoom levels is quite different from the device units (pixels).
For example, a line drawn using
DC.MoveTo(1,1);
DC.LineTo(1,2);
.... will actually, even though it is drawn directly onto the device context with an increment of just one logical unit, cover quite some distance on the screen. But the width of the line would still be only one device pixel. A circle looks high-res, but its data (center point and radius) can only be specified in coarse increments.
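To make the mismatch concrete, here is a minimal MFC-style sketch (the MM_ANISOTROPIC extents are made-up values for illustration; in our system the mapping is dictated by the workspace coordinate system) that converts those logical endpoints to device units with CDC::LPtoDP:

#include <afxwin.h>

// Sketch: how far apart two logical points end up in device units.
void ShowLogicalToDeviceGap(CDC& dc)
{
    dc.SetMapMode(MM_ANISOTROPIC);
    dc.SetWindowExt(100, 100);      // 100 logical units...
    dc.SetViewportExt(2000, 2000);  // ...span 2000 device pixels (high zoom)

    POINT pts[2] = { { 1, 1 }, { 1, 2 } };  // the MoveTo/LineTo endpoints above
    dc.LPtoDP(pts, 2);

    // pts[1].y - pts[0].y is now 20: one logical step covers 20 device pixels,
    // yet the pen itself is still only one pixel wide.
    TRACE(_T("device gap: %ld pixels\n"), pts[1].y - pts[0].y);
}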
I have considered the following options:
* When a predefined image is loaded and displayed, create a device context with a better-suited resolution. The problem would then be that the other graphics providers interact with it using the old logical units, which, when used against the new DC, would result in graphical elements that are far too small and displaced.
I wonder if I can create some DC wrapper that accepts both kinds of coordinates through different APIs, which are then translated into high-res coordinates internally.
Is it possible to have two DCs with different logical/device unit ratios? And render them both to the screen?
I mentioned that a circle is rendered beautifully with one-pixel width even though its placement and radius are restricted. Vertical lines are also rendered beautifully, even though the end points can only be given in coarse coordinates. This leads me to believe that it is technically possible to draw in an area that, in DC logical coordinates, could only be described with fractional values.
Does anybody have any idea about what to do?
You need to scale your model, not the device context.
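The DC wrapper idea from the question amounts to the same thing. A minimal sketch, assuming the underlying DC has been given a 16x finer logical grid (the class name, method names and the scale factor are all hypothetical): legacy providers keep calling with coarse workspace units and the wrapper scales them up, while the AutoCAD component uses the fine-grained overloads directly.

#include <afxwin.h>

// Hypothetical sketch: legacy callers use coarse workspace units, the AutoCAD
// renderer gets full-resolution access. The factor 16 is an arbitrary example.
class CDualResDC
{
public:
    explicit CDualResDC(CDC& dc, int coarseToFine = 16)
        : m_dc(dc), m_scale(coarseToFine) {}

    // Legacy API: coarse workspace units, scaled up internally.
    CPoint MoveToCoarse(int x, int y) { return m_dc.MoveTo(x * m_scale, y * m_scale); }
    BOOL   LineToCoarse(int x, int y) { return m_dc.LineTo(x * m_scale, y * m_scale); }

    // High-resolution API for the AutoCAD drawing component.
    CPoint MoveToFine(int x, int y)   { return m_dc.MoveTo(x, y); }
    BOOL   LineToFine(int x, int y)   { return m_dc.LineTo(x, y); }

private:
    CDC& m_dc;
    int  m_scale;
};

Either way, the scaling happens in your drawing code, not in the DC itself.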
You could draw the high-def image to another DC in a new window and place that window over your low-res drawing. Of course, you have to handle the clipping yourself.
I created this UI framework in Direct2D some time ago to be able to draw/manage my own windows and widgets. I've been using it and updating it according to my needs and it works pretty well. However, now that high-resolution monitors are the new thing, I came across a small problem: drawing images/icons at the best definition I can.
Since I'm using Direct2D, all the draw functions work properly according to the DPI/scaling of the target machine, except, of course, for images, which are pixel-based and for that reason are not automatically managed by DirectX.
So, in the beginning I was simply drawing bitmaps as they were at 96 DPI. This meant that if I had a 10x10 icon and used a function like ID2D1RenderTarget::DrawBitmap by specifying a destination rectangle, my image would be scaled up for higher DPIs. This, of course, would be noticeable and the icon would be blurry.
My first attempt at fixing this was to create my icons 4x bigger than needed for the default DPI of 96. Then, using the same ID2D1RenderTarget::DrawBitmap and knowing that these images are 4x bigger, DrawBitmap would draw the icon scaled down instead of scaled up. This had much better results; starting from a Windows scale of 150% and up, it's perfect.
However, scaling down from 4x to 1x, the result is not great: images get somewhat pixelated. Much worse than doing the same in Photoshop.
I also tried using SetTransform before the DrawBitmap to see if the result is better, but it's exactly the same.
So my question is: how are people dealing with this issue? I'm sure I'm not the only one...
If your goal is to get the best visual results, you'll need to prepare groups of icons in various resolutions, not just downscaled but specifically designed at the lower sizes. Then you'll need to select one of those according to the current context.
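As a rough sketch of that selection step (the available sizes here are an assumption, and GetDpiForWindow requires Windows 10 1607 or later):

#include <windows.h>

// Sketch: pick the smallest pre-designed icon variant that is still big
// enough for the window's effective DPI, instead of stretching one bitmap.
int PickIconSize(HWND hwnd)
{
    const int available[] = { 16, 24, 32, 48, 64 };          // hand-tuned variants
    const float scale = GetDpiForWindow(hwnd) / 96.0f;        // 1.0 at 100%, 1.5 at 150%, ...
    const int wanted = static_cast<int>(16 * scale + 0.5f);   // base icon size of 16 px assumed

    for (int size : available)
        if (size >= wanted)
            return size;
    return available[sizeof(available) / sizeof(available[0]) - 1];
}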
Regarding DrawBitmap, you could try different interpolation modes.
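For example (a sketch; rt, dc, bmp and iconDips are placeholders, and the cubic filter needs the ID2D1DeviceContext overload of DrawBitmap, i.e. Windows 8 / the Platform Update or later):

#include <d2d1_1.h>

// Sketch: the same 4x icon drawn scaled down with different filters.
// iconDips is the destination size in DIPs; bmp is the 4x source bitmap.
void DrawIconWithFilters(ID2D1RenderTarget* rt, ID2D1DeviceContext* dc,
                         ID2D1Bitmap* bmp, float iconDips)
{
    D2D1_RECT_F dest = D2D1::RectF(10.f, 10.f, 10.f + iconDips, 10.f + iconDips);

    // ID2D1RenderTarget only offers nearest-neighbour and linear filtering.
    rt->DrawBitmap(bmp, &dest, 1.0f, D2D1_BITMAP_INTERPOLATION_MODE_LINEAR, nullptr);

    // ID2D1DeviceContext adds higher-quality filters for downscaling, e.g. cubic:
    dc->DrawBitmap(bmp, &dest, 1.0f,
                   D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC,
                   nullptr, nullptr);
}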
As for general solutions that people are using, I don't think there is one. Many applications don't support this properly; even when they do for control layout, embedded bitmap resources are still either stretched and look deformed, or interpolated and look too blurry.
I am planning to develop an XY plotter for my application. To give a basic idea of how it should look (of course the implementation would be different), please refer here and here.
During the simulation (let's assume it takes 4 hours to complete), new Y values should be (over)written on a fixed X axis.
But the problem with Direct2D is that every time pRenderTarget->BeginDraw() is called, the existing drawing (plot/bitmap/image, etc.) is cleared and a new image is drawn. Therefore I would lose the old values.
Of course, I can always buffer the old Y values in a buffer/variable and use them in the next drawing. But the simulation runs for 4 hours and unfortunately I can't afford to save all the Y values. That's why I need to render/draw the new Y values onto the existing target image/plot.
And if I don't call pRenderTarget->EndDraw() within a definite amount of time, my application would crash due to resource constraints.
How do I prevent this problem and achieve the requirement?
What you're asking for is quite a complex requirement - it's more difficult than it appears! Direct2D is an immediate-mode drawing API. There is no state maintenance or persistence of what you have drawn to the screen in immediate-mode graphics.
Most immediate-mode graphics APIs have concepts such as clipping and dirty rects, and in Direct2D you can use such techniques to redraw only a subset of the screen. Rendering offscreen to a bitmap and double-buffering might be a good technique to try, e.g. your process becomes:
Draw to off-screen bitmap
Blit bitmap to screen
On new data, draw to a new bitmap / combine with existing bitmaps
This technique will only work if your plot is not scrolling or changing in scale as you append new data / draw.
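A minimal sketch of that idea (assuming pRenderTarget is your window render target; helper names like plotTarget, AppendSample and Present are mine):

#include <d2d1.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: keep the accumulated plot in an offscreen, compatible bitmap target
// so old samples persist; only the newest segment is ever drawn again.
ComPtr<ID2D1BitmapRenderTarget> plotTarget;

void CreatePlotTarget(ID2D1RenderTarget* pRenderTarget)
{
    // Same size/format as the window target; its contents survive between frames.
    pRenderTarget->CreateCompatibleRenderTarget(&plotTarget);
}

// Called for each new sample: draw just the new segment offscreen.
void AppendSample(ID2D1Brush* brush, D2D1_POINT_2F prev, D2D1_POINT_2F curr)
{
    plotTarget->BeginDraw();
    plotTarget->DrawLine(prev, curr, brush, 1.0f);
    plotTarget->EndDraw();
}

// Called once per frame: blit the accumulated plot to the window.
void Present(ID2D1RenderTarget* pRenderTarget)
{
    ComPtr<ID2D1Bitmap> bmp;
    plotTarget->GetBitmap(&bmp);

    pRenderTarget->BeginDraw();
    pRenderTarget->Clear(D2D1::ColorF(D2D1::ColorF::White));
    pRenderTarget->DrawBitmap(bmp.Get());
    pRenderTarget->EndDraw();
}

This keeps per-frame work constant regardless of how many hours of samples have already been plotted.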
I have been looking into a Visual Studio C++ Windows application project which used two functions, SetWindowExt(...) and SetViewportExt(...). I am confused about what these two functions do and why they are necessary. Searching for these functions, I came across the concepts of logical coordinates and device coordinates.
Can anyone please explain what is the importance of these two concepts?
Device coordinates are the simplest to understand. They are directly related to the device that you're using—e.g., the screen or a printer.
For an example, let's look at a window displayed on the screen. Device coordinates are defined relative to a particular device, so in the case of a window, everything will be in client coordinates. That means the origin will be the upper-left corner of the window's client area and the y-axis will run from top to bottom. All units are measured in pixels, since this is an on-screen element.
You use these all the time, so you probably already understand them better than you think. For example, whenever you handle a mouse event or a window resize, you get and set device coordinates.
Logical coordinates take the current mapping mode into account. Each device context (DC) can have a mapping mode applied to it (GetMapMode and SetMapMode). The various available mapping modes are defined by the MM_Xxx values. Each of these different mapping modes will cause the origin and y-axis direction to be interpreted differently. The documentation will tell you exactly how they work.
When you manipulate a device context (e.g., draw onto it), the current mapping mode is taken into account and thus you work with logical coordinates.
With the default MM_TEXT mapping mode, each logical unit maps to one device unit (remember, for a window, this would be one pixel), so no conversion is required. In this mapping mode, the logical and device coordinate systems work exactly the same way. And since this is the default and probably the one you work with most of the time, it is probably the source of your confusion.
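The other mapping modes are where SetWindowExt and SetViewportExt come in: under MM_ANISOTROPIC (or MM_ISOTROPIC) the ratio of the window extent (logical) to the viewport extent (device) defines how many device units one logical unit covers. A small sketch (the 1000-unit extent is arbitrary):

#include <afxwin.h>

// Sketch: map a 0..1000 logical range onto the whole client area, with the
// y-axis pointing up.
void SetUpAnisotropicMapping(CWnd* pWnd, CDC* pDC)
{
    CRect rc;
    pWnd->GetClientRect(&rc);

    pDC->SetMapMode(MM_ANISOTROPIC);
    pDC->SetWindowExt(1000, 1000);                 // logical extent
    pDC->SetViewportExt(rc.Width(), -rc.Height()); // device extent; negative cy flips the y-axis
    pDC->SetViewportOrg(0, rc.Height());           // logical (0,0) = bottom-left corner

    // These logical coordinates now stretch with the window size.
    pDC->MoveTo(0, 0);
    pDC->LineTo(1000, 1000);
}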
Relevant reading: Coordinate Spaces and Transformations (MSDN)
I have an application that contains many millions of 3D RGB points that form an image when plotted. What is the fastest way of getting them to the screen in an MFC application? I've tried CDC::SetPixelV in conjunction with a bitmap, which seems quite slow, and am looking towards a Direct3D or OpenGL window in my MFC view class. Any other good places to look?
Double buffering is your solution. There are many examples on CodeProject. Check this one, for example.
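The usual MFC pattern is just a memory DC, roughly like this sketch (function and parameter names are placeholders):

#include <afxwin.h>

// Sketch: classic double buffering - draw everything into a memory DC and
// blit once, instead of millions of SetPixelV calls on the screen DC.
void DrawPointsDoubleBuffered(CWnd* pWnd, CDC* pDC)
{
    CRect rc;
    pWnd->GetClientRect(&rc);

    CDC memDC;
    memDC.CreateCompatibleDC(pDC);

    CBitmap backBuffer;
    backBuffer.CreateCompatibleBitmap(pDC, rc.Width(), rc.Height());
    CBitmap* oldBmp = memDC.SelectObject(&backBuffer);

    memDC.FillSolidRect(rc, RGB(0, 0, 0));
    // ... plot all points into memDC here ...

    // One blit to the screen for the whole frame.
    pDC->BitBlt(0, 0, rc.Width(), rc.Height(), &memDC, 0, 0, SRCCOPY);
    memDC.SelectObject(oldBmp);
}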
Sounds like a point cloud. You might find some good information searching on that term.
3D hardware is the fastest way to take 3D points and get them into a 2D display, so either Direct3D or OpenGL seem like the obvious choices.
If the number of points is much greater than the number of pixels in your display, then you'll probably first want to cull points that are trivially outside the view. You put all your points in some sort of spatial partitioning structure (like an octree) and omit the points inside any node that's completely outside the viewing frustum. This reduces the amount of data you have to push from system memory to GPU memory, which will likely be the bottleneck. (If your point cloud is static, you're just building a fly-through, and your GPU has enough memory, you could skip the culling, send all the data at once, and then just update the transforms for each frame.)
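A rough sketch of the per-node test, assuming you already have the six view-frustum planes in world space with their normals pointing inward:

// Sketch: coarse frustum test for an octree node's axis-aligned bounding box.
// A plane is ax + by + cz + d, with >= 0 meaning "on the inside".
struct Plane { float a, b, c, d; };
struct AABB  { float minX, minY, minZ, maxX, maxY, maxZ; };

bool NodeOutsideFrustum(const AABB& box, const Plane frustum[6])
{
    for (int i = 0; i < 6; ++i)
    {
        const Plane& p = frustum[i];
        // Pick the corner of the box that lies furthest along the plane normal.
        float x = p.a >= 0 ? box.maxX : box.minX;
        float y = p.b >= 0 ? box.maxY : box.minY;
        float z = p.c >= 0 ? box.maxZ : box.minZ;
        if (p.a * x + p.b * y + p.c * z + p.d < 0)
            return true;   // even the most favourable corner is outside this plane
    }
    return false;          // inside or intersecting; keep this node's points
}

Nodes that survive get recursed into; only the points in surviving leaves need to be sent to the GPU.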
If you don't want to use the GPU and instead write a software renderer, you'll want to render to a bitmap that's in the same pixel format as your display (to eliminate the chance of the blit needing to do any pixel format conversion as it blasts the bitmap to the display). For reasonable window sizes, blitting at 30 frames per second is feasible, but it might not leave much time for the CPU to do the rendering.
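If you do try the software route, here is a sketch of the "render into a matching bitmap, then blit once" idea (a 32-bit top-down DIB is assumed; the function and parameter names are placeholders):

#include <afxwin.h>

// Sketch: plot projected points straight into a 32-bit top-down DIB section,
// then blit the finished frame once. pDC is the view's paint DC.
void RenderFrameToDib(CDC* pDC, int width, int height,
                      const POINT* projected, int count)
{
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;          // negative = top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HBITMAP dib = CreateDIBSection(nullptr, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    ZeroMemory(bits, static_cast<size_t>(width) * height * 4);
    DWORD* pixels = static_cast<DWORD*>(bits);

    // No GDI call per pixel - just write into the buffer.
    for (int i = 0; i < count; ++i)
    {
        const POINT& p = projected[i];
        if (p.x >= 0 && p.x < width && p.y >= 0 && p.y < height)
            pixels[p.y * width + p.x] = 0x0000FF00;   // 0x00RRGGBB, here green
    }

    // One blit to the display for the whole frame.
    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    HGDIOBJ old = ::SelectObject(memDC.GetSafeHdc(), dib);
    pDC->BitBlt(0, 0, width, height, &memDC, 0, 0, SRCCOPY);
    ::SelectObject(memDC.GetSafeHdc(), old);
    ::DeleteObject(dib);
}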
Is there anyone who can explain precisely how the hardware cursor works? How does it relate to the graphics I'm drawing on the screen? I'm using OpenGL to draw; how does the hardware cursor relate to OpenGL graphics?
EDIT: For those who may be interested in this in the future: I just implemented what is needed to show the cursor with the hardware. The implementation was in the kernel, and simple ioctls were sufficient to use it. Works perfectly.
A hardware cursor means that the GPU can draw a (small) overlay picture on top of the screen framebuffer, whose position can be changed via a couple of registers on the GPU. So moving the pointer around doesn't require redrawing the portions of the framebuffer that were previously obscured.
Relation to OpenGL: None!
The hardware cursor is not rendered or supported by OpenGL. Some small piece of hardware overlays it on whatever image is going out of the display connector - it's inserted directly into the bitstream at scan-out of each frame. Because of that, it can be moved around by changing a pair of hardware registers containing its coordinates. In the old days, these were called sprites, and various numbers of them were supported on different systems.
Hardware cursors have less latency, and thus provide a better experience, because they are not tied to your game or engine frame rate but to the screen refresh rate.
Software cursors, rendered by you as a screen-space sprite during your render loop, however, must run at the rate of your game engine. Thus, if your game experiences lag or otherwise drops below the target fps, the cursor latency will get worse. A minor drop in game fps is usually acceptable, but a minor increase in cursor latency is very noticeable as a "sluggish cursor".
You can test this easily by rendering a software cursor while leaving the hardware cursor on. (FYI, in the Windows API the hardware cursor function is ShowCursor.) You'll find that the software cursor trails behind the hardware cursor.
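A trivial way to set that comparison up on Windows (a sketch; DrawMySoftwareCursor is a placeholder for however you render your sprite, e.g. a textured quad in OpenGL):

#include <windows.h>

// Placeholder: your own screen-space cursor sprite rendering.
void DrawMySoftwareCursor(int x, int y);

void EnableCursorComparison()
{
    ShowCursor(TRUE);   // keep the hardware cursor visible (call once)
}

// Call this as part of each rendered frame: any engine lag shows up as the
// software sprite trailing behind the real (hardware) cursor.
void DrawCursorComparisonFrame(HWND hwnd)
{
    POINT p;
    GetCursorPos(&p);            // cursor position in screen coordinates
    ScreenToClient(hwnd, &p);    // convert to the window's client space
    DrawMySoftwareCursor(p.x, p.y);
}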