I have been looking into a Visual Studio C++ Windows application project that uses the two functions SetWindowExt(...) and SetViewportExt(...). I am confused about what these two functions do and why they are necessary. While searching for information about them, I came across the concepts of logical coordinates and device coordinates.
Can anyone please explain the importance of these two concepts?
Device coordinates are the simplest to understand. They are directly related to the device that you're using—e.g., the screen or a printer.
As an example, let's look at a window displayed on the screen. Device coordinates are defined relative to a particular device, so in the case of a window, everything will be in client coordinates. That means the origin will be the upper-left corner of the window's client area and the y-axis will run from top to bottom. All units are measured in pixels, since this is an on-screen element.
You use these all the time, so you probably already understand them better than you think. For example, whenever you handle a mouse event or a window resize, you get and set device coordinates.
Logical coordinates take the current mapping mode into account. Each device context (DC) can have a mapping mode applied to it (GetMapMode and SetMapMode). The various available mapping modes are defined by the MM_Xxx values. Each of these different mapping modes will cause the origin and y-axis direction to be interpreted differently. The documentation will tell you exactly how they work.
When you manipulate a device context (e.g., draw onto it), the current mapping mode is taken into account and thus you work with logical coordinates.
With the default MM_TEXT mapping mode, each logical unit maps to one device unit (remember, for a window, this would be one pixel), so no conversion is required. In this mapping mode, the logical and device coordinate systems work exactly the same way. And since this is the default and probably the one you work with most of the time, it is probably the source of your confusion.
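For illustration, here is a minimal sketch (Win32 GDI, assumed to run inside a WM_PAINT handler where hdc and hwnd are available; MFC's CDC::SetWindowExt/SetViewportExt wrap the same calls) that maps a fixed 100x100 logical space onto the client area:

RECT rc;
GetClientRect(hwnd, &rc);
SetMapMode(hdc, MM_ANISOTROPIC);                   // logical units defined by the two extents below
SetWindowExtEx(hdc, 100, 100, NULL);               // logical extent: 100 x 100 units
SetViewportExtEx(hdc, rc.right, rc.bottom, NULL);  // device extent: the client area in pixels
Rectangle(hdc, 10, 10, 90, 90);                    // always covers the middle 80% of the client area

Whatever the window size, the rectangle scales with it, because GDI converts the logical coordinates to device coordinates using the ratio of the two extents.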
Relevant reading: Coordinate Spaces and Transformations (MSDN)
In cocos2d-x, there is the concept of a "Design Resolution Size", which lets you lay out your scene for one fixed coordinate space; cocos2d-x then picks the appropriate asset for the actual screen and applies the appropriate content scaling factor.
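For reference, in cocos2d-x v3 the design resolution is typically set once at startup, in AppDelegate::applicationDidFinishLaunching. A sketch (SHOW_ALL is just one of the available policies):

auto glview = cocos2d::Director::getInstance()->getOpenGLView();
glview->setDesignResolutionSize(480, 320, cocos2d::ResolutionPolicy::SHOW_ALL);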
Here is the problem:
I draw a 2D sine curve by passing in a set of vertices. These vertices are computed for a screen of 480x320.
What happens when I run it on a device with a resolution of 1920x1200, even though the design resolution is set to 480x320? Do I have to recompute the vertices so that the same number of crests/troughs is seen on the higher-resolution device, or is there some way to do this without extra computation?
I don't have any more devices to test this, so I don't know how to figure this out.
EDIT: I now use cocos2d-x v3.
Anything drawn directly/solely with OpenGL will bypass/ignore any and all cocos2d internal settings and code paths, such as design resolution.
You can always use a simulator to test your resolution-specific code. For iOS the simulator comes with Xcode, for Android use the Emulator.
After purchasing and testing this on a number of devices with cocos2d-x v3, I find that scaling is handled automatically. Regardless of the resolution of the actual device, my OpenGL draw commands seem to produce the same output on all devices. It appears that cocos2d-x does something internally so that things look the same on all devices.
I can use SetPixel (GDI) to set any pixel on the screen to a colour.
So how would I reproduce SetPixel at the lowest assembly level? What actually happens to trigger the instructions that say: OK, send a byte to position x in the framebuffer?
SetPixel most probably just calculates the address of the given pixel using a formula like:
pixel = frame_start + y * frame_width + x
(in pixel-sized units; the byte address would additionally be multiplied by the pixel size), and then it simply does *pixel = COLOR.
You can actually use CreateDIBSection to create your own buffer and associate it with a device context; then you can modify pixels at the low level using the formula above. This is useful if you have your own graphics library, like AGG.
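A minimal sketch of that approach (Win32; the 640x480 size and the single pixel written are just examples, and error handling is omitted):

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth = 640;
bmi.bmiHeader.biHeight = -480;        // negative height = top-down rows
bmi.bmiHeader.biPlanes = 1;
bmi.bmiHeader.biBitCount = 32;
bmi.bmiHeader.biCompression = BI_RGB;

void *bits = nullptr;
HBITMAP dib = CreateDIBSection(NULL, &bmi, DIB_RGB_COLORS, &bits, NULL, 0);

DWORD *pixels = static_cast<DWORD*>(bits);
pixels[10 * 640 + 20] = 0x00FF0000;   // red pixel at (x=20, y=10); layout is 0x00RRGGBB

HDC memDC = CreateCompatibleDC(NULL); // GDI can now draw into / blit from the same buffer
SelectObject(memDC, dib);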
When learning about GDI, I like to look into the WINE source code; here you can see how complicated it actually is (dibdrv_SetPixel):
http://fossies.org/dox/wine-1.6.1/gdi32_2dibdrv_2graphics_8c_source.html
It must also take into account clipping regions, different pixel sizes, and probably other features. It is also possible that some drivers accelerate this in hardware, but I have not heard of it.
If you want to recreate SetPixel, you need to know how your graphics hardware works. Most hardware manufacturers follow at least the VESA standard. This standard specifies that you can set the display mode using interrupt 0x10.
Once the display mode is set, the memory region being displayed is defined by the standard, and you can simply write directly to display memory.
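A sketch of such a direct write (bare-metal assumptions: framebuffer_base and pitch are taken from the VESA mode information, and the mode is 32 bits per pixel):

uint32_t *fb = reinterpret_cast<uint32_t *>(framebuffer_base);
fb[y * (pitch / 4) + x] = color;      // pitch = bytes per scanline; color = 0x00RRGGBB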
Advanced graphics hardware deviates from the standard (because it only covers the basics), so the above does not work for advanced features. You'll have to resort to the GPU documentation.
The "how" is always depends on "what", what I mean is that for different setups there are different methods, different systems different methods, what is common is that they are usually not allowing you to do it directly i.e. write to a memory address that will be displayed.
Some devices with dedicated setup may allow you to do that ( like some consoles do as far as I know ) but there you will have to do some locking or other utility work to make it work as it should.
Since in modern PCs Graphics Accelerators are fused into the video cards ( one counter example is the Voodoo 1 which needed a video card in order to operate, since it was just a 3D accelerator ) the GPU usually holds the memory that it will draw from the framebuffer in it's own memory making it inaccessible from the outside.
So generally you would say here is a memory address "download" the data into your own GPU memory and show it on screen, and this is where the desktop composition comes in. Since video cards suffer from this transfer all the time it is in fact faster to send the data required to draw and let the GPU do the drawing. So Aero is just a visual but as far as I know the desktop compositor works regardless of Aero making the drawing GPU dependent.
So technically low level functions such as SetPixel are software since Windows7 because of the things I mentioned above so solely because you just cant access the memory directly. So what I think it probably does is that for every HDC there is a bitmap and when you use the set pixel you simply set a pixel in that bitmap that is later sent to the GPU for display.
In case of DOS or other old tech. it is probably just an emulation in the same way it is done for GDI.
So in the light of these,
So how would I reproduce SetPixel at the lowest assembly level?
It is probably just a copy to a memory location, but Windows manages the window surfaces and its framebuffer in a way that you will never get direct access to. One way to emulate what it does is to create a bitmap, get its memory pointer, set the pixel manually, and then tell Windows to show that bitmap on screen.
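A sketch of that emulation, reusing the kind of DIB section shown earlier (memDC holding the bitmap, and hwnd for the target window, are assumed):

HDC screen = GetDC(hwnd);
BitBlt(screen, 0, 0, 640, 480, memDC, 0, 0, SRCCOPY);  // show the manually written pixels
ReleaseDC(hwnd, screen);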
What actually happens to trigger the instructions that say: OK, send a byte to position x in the framebuffer?
Like I said before, what happens at the moment you make this call really depends on the environment. The code that needs to be executed comes from different places, some written by Microsoft and some by the GPU's manufacturer, and all of it together produces that pixel you see on your screen.
To set a pixel in the framebuffer using a video mode with 32-bit color, we need the address of the pixel and the color of the pixel.
With the address and the color, we can simply use a move instruction to write the color to the framebuffer.
A sample using the EDI register as a 32-bit address register (the default segment register is DS) to address the framebuffer with the move instruction.
x86 Intel syntax:
mov edi, Framebuffer   ; load the address (upper-left corner) into the EDI register
mov DWORD [edi], Color ; write the color to the address at DS:EDI
The first instruction loads the EDI register with the address of the framebuffer, and the second instruction writes the color to that address.
A hint for calculating the address of a pixel inside the framebuffer: some video modes use a larger scanline, with more bytes than the horizontal resolution needs and a part lying outside of the visible view. The byte address is therefore framebuffer_base + y * pitch + x * 4, where the pitch (bytes per scanline) can be larger than width * 4.
Dirk
I need to control the individual pixels of a projector (an InFocus IN3104) whose native resolution is 1024x768. I would like to know which subset of functions in C, or which API, would let me do this, either by:
Functions that control the individual pixels of the adapter (not the pixels of a window).
A pixel-perfect, 1:1 map from an image file (1024x768) to the adapter set at the native resolution of the projector.
In a related question (How can I edit individual pixels in a window?) the answerer Caladain states: "Things have come a bit from the old days of direct memory manipulation." I feel I need to go back to that to achieve my goal.
I don't know enough about the graphics pipeline to know which API or software tool to use, and I'm overwhelmed by the number of technologies that come up when I search this topic. I program in R, which easily interfaces to C, but I would welcome suggestions of subsets of functions in OpenGL, C++, or any other technology.
Or even a full-blown application that will render the image without applying a transformation.
For example, even MS Paint has View > Bitmap, but some transformation gets applied and I don't get pixel-perfect rendering. This projector has a DisplayLink digital input, and I've also tried to tweak the timing parameters when using the VESA inputs; I don't think the transformation happens in the projector. In any case, using MS Paint would not be flexible enough for me.
Platform: Linux or Windows.
I don't see a reason why a full-screen window, e.g. using SDL, wouldn't work. Normal bitmapped graphics are always 1:1; there shouldn't be any weird scaling going on behind your back for a full-screen window.
Since SDL is portable, you should be able to run the same code in Windows or Linux (or any other supported platform).
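A minimal sketch with SDL2 (the surface is assumed to be 32 bits per pixel, and the single written pixel is just a placeholder for copying your 1024x768 image row by row):

#include <SDL.h>

int main() {
    SDL_Init(SDL_INIT_VIDEO);
    SDL_Window *win = SDL_CreateWindow("projector",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        1024, 768, SDL_WINDOW_FULLSCREEN);
    SDL_Surface *s = SDL_GetWindowSurface(win);
    if (SDL_MUSTLOCK(s)) SDL_LockSurface(s);
    // write directly into the surface; pitch is the byte length of one scanline
    ((Uint32 *)s->pixels)[10 * (s->pitch / 4) + 20] = 0x00FF0000;
    if (SDL_MUSTLOCK(s)) SDL_UnlockSurface(s);
    SDL_UpdateWindowSurface(win);
    SDL_Delay(3000);
    SDL_Quit();
    return 0;
}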
The usual approach to this problem on current systems is:
Set graphics card to desired resolution
Create borderless full screen window
Draw whatever you want
There's really not much to gain from "low-level access", although it would certainly be possible.
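A sketch of the first two steps on Windows (assumes a window class named "FullScreen" has already been registered):

DEVMODE dm = {};
dm.dmSize = sizeof(dm);
dm.dmPelsWidth = 1024;
dm.dmPelsHeight = 768;
dm.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT;
ChangeDisplaySettings(&dm, CDS_FULLSCREEN);          // step 1: set the resolution

HWND hwnd = CreateWindowEx(WS_EX_TOPMOST, TEXT("FullScreen"), TEXT(""),
                           WS_POPUP | WS_VISIBLE, 0, 0, 1024, 768,
                           NULL, NULL, GetModuleHandle(NULL), NULL);
// step 3: draw into GetDC(hwnd); call ChangeDisplaySettings(NULL, 0) to restore on exit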
I'm extremely new to OpenGL. I'm writing a program that displays flying 3D text on screen. I need to know when certain text string appears (drawn) onto the screen and are visible to the user. The program needs to identify which text strings are displayed. (Note: although my problem deals with text, it could be generalized to any OpenGL object).
At first, I started to think that I could use OpenGL's picking mechanism, but so far I've only seen examples where the selection area is focused on some sort of user interaction. I want to know what objects are displayed on the entire window area. This leads me to think I'm on the wrong track... Am I missing something?
Any suggestions are welcome.
You can use query objects (specifically those created using the GL_ARB_occlusion_query extension). These objects are used to query how many fragments are rendered by a sequence of OpenGL operations (begin/end, etc.).
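A sketch of such a query (using the core names, available since OpenGL 1.5; drawTextString stands in for whatever draw calls render the string):

GLuint query;
glGenQueries(1, &query);
glBeginQuery(GL_SAMPLES_PASSED, query);
drawTextString();                       // the rendering being tested
glEndQuery(GL_SAMPLES_PASSED);

GLuint samples = 0;
glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);  // note: blocks until the result is ready
bool visible = (samples > 0);           // at least one fragment survived the depth/stencil tests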
Another scheme (software only) is to determine a bounding box for your rendered text, then compute mathematically whether the bounding box is inside the view frustum (derived from the current perspective used for rendering).
A note: using OpenGL picking doesn't necessarily imply the use of gluPickMatrix. You can render your scene "as is" and then query the rendered names (although picking is deprecated as of OpenGL 3).
Query objects are easy to use, and they are lightweight. Picking is another good solution on most hardware, but it is more cumbersome than query objects.
Hmm, is it actually in 3D, or is it just 2D text on the screen in 2D space? In the latter case I would just keep track of it manually. How exactly are you drawing your text?
Generally the way you do this is with a "frustum check", where you basically construct a volume for the camera and test whether your 3D objects are inside it or not.
You can try OpenGL's feedback mechanism. In this mode, OpenGL passes the transformed primitives back to a feedback buffer instead of rasterizing them. If something is completely clipped away, nothing is written for it; when the text becomes visible, you will find its primitives in the feedback buffer.
Question 10.010 of the OpenGL FAQ seems particularly relevant to what you want.
Run your object coordinates through your projection and modelview matrices to get screen-space coordinates. Compare the X/Y output against your screen extents to figure out if the text is on-screen.
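A sketch using the fixed-function matrices via gluProject (ox, oy, oz is assumed to be an object-space corner of the text's bounding box):

GLdouble model[16], proj[16];
GLint vp[4];
glGetDoublev(GL_MODELVIEW_MATRIX, model);
glGetDoublev(GL_PROJECTION_MATRIX, proj);
glGetIntegerv(GL_VIEWPORT, vp);

GLdouble wx, wy, wz;
gluProject(ox, oy, oz, model, proj, vp, &wx, &wy, &wz);
bool onScreen = wx >= vp[0] && wx <= vp[0] + vp[2]
             && wy >= vp[1] && wy <= vp[1] + vp[3]
             && wz >= 0.0 && wz <= 1.0;   // depth outside [0,1] means outside the clip planes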
I have a Process Control system. It has a huge 2D workspace where all the logic is laid out.
The 2D workspace is a coordinate system.
You usually do not see the whole workspace at once, but rather some zoomed-in part of it, focusing on some part of the controlled process. Such subsystem views are bookmarked as predefined named images (Power Generator1, Diesel Generator, Main lubrication pump, etc.).
This workspace interacts with many legacy MFC software components that individually contribute graphics onto the workspace (the device context is passed around to all contributors).
Now, one of the software components renders AutoCAD drawings onto the surface. However, the resolution of the device context is not sufficient for the details of this job. The device context logical resolution is unfortunately dictated by our own coordinate system, which at high zoom levels is quite different from the device units (pixels).
For example, a line drawn using
DC.MoveTo(1,1);
DC.LineTo(1,2);
.... will actually, even though it is drawn directly onto the device context by an increment of just one logical unit, cover quite some distance on the screen. The width of the line, however, would still be only one device pixel. A circle looks high-res, but its data (center point and radius) can only be specified in coarse increments.
I have considered the following options:
* When a predefined image is loaded and displayed, create a device context with a better-suited resolution. The problem would then be that the other graphics providers interact with it using the old logical units, which, when used against the new DC, would result in way too small and displaced graphical elements.
I wonder if I can create some DC wrapper that accepts both kinds of coordinates through different APIs, which are then translated into high res coordinates internally.
Is it possible to have two DCs with different logical/device unit ratio? And render them both to screen?
I mentioned that a circle is rendered beautifully with one-pixel width even though its placement and radius are restricted. Vertical lines are also rendered beautifully, even though the end points can only be given in coarse coordinates. This leads me to believe that it is technically possible to draw in an area that, in DC logical coordinates, could only be described with fractional values.
Does anybody have any idea about what to do?
You need to scale your model, not the device context.
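One way to read that advice (a sketch, not the project's actual code: kScale, logicalWidth/logicalHeight, and pixelWidth/pixelHeight are hypothetical): multiply the model coordinates by a precision factor and enlarge the window extent by the same factor, so the picture stays the same size on screen but positions can be specified in finer steps.

const int kScale = 16;                          // hypothetical precision factor
DC.SetMapMode(MM_ANISOTROPIC);
DC.SetWindowExt(logicalWidth * kScale, logicalHeight * kScale);  // logical space grows
DC.SetViewportExt(pixelWidth, pixelHeight);                      // device space unchanged

DC.MoveTo(1 * kScale, 1 * kScale);              // the old (1,1)
DC.LineTo(1 * kScale + kScale / 2, 2 * kScale); // a half-unit offset, impossible before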
You could draw the high-def image to another DC in a new window and place that window over your low-res drawing. Of course, you have to handle clipping yourself.