MFC does not draw rectangular edges with LineTo

I have a problem when drawing with the MFC CDC class and the LineTo and MoveTo functions.
The CDC object doesn't begin at the point I move to and leaves the first pixel blank, as in the center of the image. When I draw around a corner, as on the left side of the image, the CDC object clearly leaves the outermost pixel free.
I tried loading a custom brush, but the results were no different.
memDC.FillSolidRect(client, BACKGROUND_COLOR);
CPen penBorder(PS_ENDCAP_SQUARE | PS_SOLID, BORDER_WIDTH, BORDER_COLOR);
//Draw the Horizontal line for the Status Bar
CPen* oldPen = memDC.SelectObject(&penBorder);
memDC.MoveTo(client.left + 0.5f * BORDER_WIDTH, client.top + 0.5f * BORDER_WIDTH);
memDC.LineTo(client.Width() - 0.5f * BORDER_WIDTH, client.top + 0.5f * BORDER_WIDTH);
CPen penRecess(PS_ENDCAP_SQUARE | PS_SOLID, BORDER_WIDTH, RECESS_COLOR);
//Draw the recess
memDC.SelectObject(&penRecess);
memDC.MoveTo(client.left + 1.5f * BORDER_WIDTH, client.top + 1.5f * BORDER_WIDTH);
memDC.LineTo(client.Width() - 0.5f * BORDER_WIDTH, client.top + 1.5f * BORDER_WIDTH);

The problem has nothing to do with MFC but is inherent in the underlying Windows GDI function. The documentation for LineTo says:
The LineTo function draws a line from the current position up to, but not including, the specified point.
If you need the last point drawn, follow up with a second LineTo that goes one pixel past the end point.
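A minimal sketch of that workaround, reusing memDC, client and BORDER_WIDTH from the snippet above (the integer arithmetic is an assumption; MoveTo/LineTo take int coordinates anyway):
// Horizontal border line again, but extended one pixel past the end point,
// because LineTo excludes the end point itself.
const int y = client.top + BORDER_WIDTH / 2;
memDC.MoveTo(client.left + BORDER_WIDTH / 2, y);
memDC.LineTo(client.Width() - BORDER_WIDTH / 2 + 1, y); // +1 so the last pixel is drawn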

Related

OpenGL screen-to-world coordinates conversion

So the default 2D clipping area of OpenGL goes from -1.0 on the left to 1.0 on the right, and from -1.0 at the bottom to 1.0 at the top.
The window I created for an OpenGL program is 640 pixels wide and 480 pixels high. The top left pixel is (0, 0) and the bottom right pixel is (640, 480).
I also wrote a function to retrieve the coordinates when I click, drag and release the mouse button (when I click it's (x1, y1), and when I release it's (x2, y2)).
So what should I do to convert (x1, y1) and (x2, y2) to the corresponding positions in the clipping area?
The answer given by @BDL might get you close enough for what you need, but the calculations are not really correct.
The division needs to be by the number of pixels in each coordinate direction, because you do have 640 and 480 pixels, respectively, within the coordinate range.
One subtle detail to take into account is that, when you get a given position from your mouse input, these will be the integer coordinates of the pixels. If you simply apply the scaling based on the window size, the resulting OpenGL coordinate would map to the left/bottom edge of the pixel. But what you most likely want is the center of the pixel. To precisely transform this into the OpenGL coordinate space, you can simply apply a 0.5 offset to your input value, moving the value from the edge to the center of the pixel.
For example, the left most pixel would have x-coordinate 0, the right most 639. The centers of these two, after applying the 0.5 offset, are 0.5 and 639.5. Applying this correction, you can also see that they are now both a distance of 0.5 away from the corresponding edges of the area at 0 and 640, making the whole thing symmetrical.
So the correct calculation is:
float xClip = ((xPix + 0.5f) / 640.0f) * 2.0f - 1.0f;
float yClip = 1.0f - ((yPix + 0.5f) / 480.0f) * 2.0f;
Or slightly simplified:
float xClip = (xPix + 0.5f) / 320.0f - 1.0f;
float yClip = 1.0f - (yPix + 0.5f) / 240.0f;
This takes the y-inversion into account.
I assume that the rightmost pixel is 639 (otherwise your window would be 641 pixels large).
The transformation is quite simple; we just need a linear mapping. To transform a point P from pixel coordinates to clipping coordinates one can use the following formula:
P_clip = (P_pixel / [319.5, 239.5]) - 1.0
Let's go over it step by step for the x coordinate. First we transform the [0, 639] range to a [0, 1] range by dividing by 639:
P_01 = P_pixel_x / 639
Then we transform from [0, 1] to [-1, 1] by multiplying by 2 and subtracting 1
P_clip_x = P_01 * 2 - 1
When one combines these two calculations and extends it to the y coordinate one gets the equation given above.
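As a small sketch, the combined mapping written out per component for the 640x480 window from the question (a direct translation of the formula above; note that, unlike the formula in the first answer, it does not flip the y axis):
// P_clip = P_pixel / [319.5, 239.5] - 1.0, applied component-wise
float xClip = xPix / 319.5f - 1.0f;
float yClip = yPix / 239.5f - 1.0f;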

How to implement my own blend function?

I want to implement the following blend function in my program which isn't using OpenGL.
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
In an OpenGL real-time test application I was able to blend colors on a white background with this function. The blended result should look like http://postimg.org/image/lwr9ossen/.
I have a white background and want to blend red points over it; a high density of red points should become opaque/black.
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
for(many many many times)
glColor4f(0.75, 0.0, 0.1, 0.85f);
DrawPoint(..)
I tried a few things, but without success.
Does anyone have the equation for this blend function?
The blend function should translate directly to the operations you need if you want to implement the whole thing in your own code.
The first argument specifies a scaling factor for your source color, i.e. the color you're drawing the pixel with.
The second argument specifies a scaling factor for your destination color, i.e. the current color value at the pixel position in the output image.
These two terms are then added, and the result written to the output image.
GL_DST_COLOR corresponds to the color in the destination, which is the output image.
GL_ONE_MINUS_SRC_ALPHA is 1.0 minus the alpha component of the pixel you are rendering.
Putting this all together, with (colR, colG, colB, colA) the color of the pixel you are rendering, and (imgR, imgG, imgB) the current color in the output image at the pixel position:
GL_DST_COLOR = (imgR, imgG, imgB)
GL_ONE_MINUS_SRC_ALPHA = (1.0 - colA, 1.0 - colA, 1.0 - colA)
GL_DST_COLOR * (colR, colG, colB) + GL_ONE_MINUS_SRC_ALPHA * (imgR, imgG, imgB)
= (imgR, imgG, imgB) * (colR, colG, colB) +
(1.0 - colA, 1.0 - colA, 1.0 - colA) * (imgR, imgG, imgB)
= (imgR * colR + (1.0 - colA) * imgR,
imgG * colG + (1.0 - colA) * imgG,
imgB * colB + (1.0 - colA) * imgB)
This is the color you write to your image as the result of rendering the pixel.
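As a minimal sketch, the same blend written out as a per-pixel function in C++ (the Color struct, the function name and the alpha handling are illustrative assumptions, not part of the answer):
struct Color { float r, g, b, a; };

// src is the pixel being rendered, dst is the current color in the output image.
// Implements: result = DST_COLOR * src + (1 - src.a) * dst
Color blendDstColorOneMinusSrcAlpha(const Color& src, const Color& dst)
{
    Color out;
    out.r = dst.r * src.r + (1.0f - src.a) * dst.r;
    out.g = dst.g * src.g + (1.0f - src.a) * dst.g;
    out.b = dst.b * src.b + (1.0f - src.a) * dst.b;
    out.a = dst.a; // the answer only covers RGB; keeping the destination alpha is an assumption
    return out;
}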

OpenGL screen layout

I have some questions about the screen setup. Originally, when I drew a triangle, an x value of 1 would be all the way to the right and -1 all the way to the left. I have now adjusted it to account for the aspect ratio of the window. My new question is: how do I make the numbers used to render a 2D tri correspond to pixel values? If my window is 480 pixels wide and 320 tall, I want to be able to enter this to span the screen with a tri:
glBegin(GL_TRIANGLES);
glVertex2f(240, 320);
glVertex2f(480, 0);
glVertex2f(0, 0);
glEnd();
but instead it currently looks like this
glBegin(GL_TRIANGLES);
glVertex2f(0, 1);
glVertex2f(1, -1);
glVertex2f(-1, -1);
glEnd();
Any ideas?
You need to use the functions glViewport and glOrtho with correct values. Basically, glViewport sets the part of your window that OpenGL renders into, and glOrtho establishes the coordinate system within that part of the window.
So for your task you need to know the exact width and height of your window. If they are 480 and 320 respectively, then you need to call
glViewport(0, 0, 480, 320)
// or: glViewport ( 0,0,w,h)
somewhere, for example in your resize handler (if you are using the WinAPI, that is the WM_SIZE message).
Next, when setting up the OpenGL scene, you need to specify the OpenGL coordinates. For an orthographic projection they will be the same as the dimensions of the window, so
glOrtho(-240, 240, -160, 160, -100, 100)
// or: glOrtho ( -w/2, w/2, -h/2, h/2, -100, 100 );
is suitable for your purpose. Note that here I'm using a depth of 200 (z goes from -100 to 100).
Then, in your rendering routine, you can draw your triangle.
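Put together, a minimal sketch for the 480x320 window from the question (the glMatrixMode/glLoadIdentity calls are an assumption about the surrounding setup; note that with the centered glOrtho above, pixel-sized coordinates run from -240..240 and -160..160 rather than 0..480 and 0..320):
glViewport(0, 0, 480, 320);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-240, 240, -160, 160, -100, 100);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

// Full-screen triangle in the centered, pixel-sized coordinate system
glBegin(GL_TRIANGLES);
glVertex2f(0, 160);
glVertex2f(240, -160);
glVertex2f(-240, -160);
glEnd();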
Since the second piece of code is working for you, I assume your transformation matrices are all identity or you have a shader that bypasses them. Also your viewport is spanning the whole window.
In general if your viewport starts at (x0,y0) and has WxH size, the normalized coordinates (x,y) you feed to glVertex2f will be transformed to (vx,vy) as follows:
vx = x0 + (x * .5f + .5f) * W
vy = y0 + (y * .5f + .5f) * H
If you want to use pixel coordinates you can use the function
void vertex2(int x, int y)
{
    // Map pixel centers to the [-1, 1] normalized range expected by glVertex
    float vx = (float(x) + .5f) / 480.f * 2.f - 1.f;
    float vy = (float(y) + .5f) / 320.f * 2.f - 1.f;
    glVertex3f(vx, vy, -1.f);
}
The -1 z value is the closest depth to the viewer. It's negative because the z is assumed to be reflected after the transformation (which is identity in your case).
The addition of .5f is because the rasterizer considers a pixel as a 1x1 quad and evaluates the coverage of your triangle in the middle of this quad.
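For example, with the vertex2 helper above, the full-screen triangle from the question could then be issued directly in pixel coordinates:
glBegin(GL_TRIANGLES);
vertex2(240, 320);
vertex2(480, 0);
vertex2(0, 0);
glEnd();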

OpenGL: draw rectangles with borders?

Check the image I produced; what I want to do is produce those rectangles with borders, and set the background colour to a different one. How can I do that?
glRectf(top_left_x, top_left_y, bottom_right_x, bottom_right_y)?
if loop == 0:
    ratio = 0.10
    glBegin(GL_QUADS)
    while ratio <= 1.0:
        width = window_width / 2
        height = window_height
        long_length = width * ratio
        short_length = height * (1.0 - ratio)
        top_left_x = (width - long_length) / 2.0
        top_left_y = (height - window_height * (1.0 - ratio)) / 2
        bottom_right_x = top_left_x + long_length
        bottom_right_y = top_left_y + short_length
        glColor(1.0, 1.0, 1.0, 0.5)
        glVertex3f(top_left_x, top_left_y, 0.0)
        glVertex3f(top_left_x + long_length, top_left_y, 0.0)
        glVertex3f(bottom_right_x, bottom_right_y, 0.0)
        glVertex3f(bottom_right_x - long_length, bottom_right_y, 0.0)
        ratio += 0.05
    glEnd()
You can draw an unfilled rectangle this way (GL_LINE_LOOP closes the outline back to the first vertex, so the first corner doesn't need to be repeated):
glBegin(GL_LINE_LOOP);
glVertex2d(top_left_x, top_left_y);
glVertex2d(top_right_x, top_right_y);
glVertex2d(bottom_right_x, bottom_right_y);
glVertex2d(bottom_left_x, bottom_left_y);
glEnd();
OpenGL uses a state machine, so to change the color just put:
glColor3f(R, G, B);
before your drawing primitives.
So, putting it together, your steps should be:
choose fill color
draw fill rect with glRectf
choose border color
draw unfilled rect with the code I posted
These steps are repeated for each rectangle you are drawing, of course; a sketch follows below.
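A minimal sketch of those steps for a single rectangle (the color values are placeholders, not taken from the question):
// 1-2: fill color, then the filled rectangle
glColor3f(0.2f, 0.4f, 0.8f);
glRectf(top_left_x, top_left_y, bottom_right_x, bottom_right_y);

// 3-4: border color, then the outline on top of the fill
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_LINE_LOOP);
glVertex2f(top_left_x, top_left_y);
glVertex2f(bottom_right_x, top_left_y);
glVertex2f(bottom_right_x, bottom_right_y);
glVertex2f(top_left_x, bottom_right_y);
glEnd();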

Zooming into the mouse, factoring in a camera translation? (OpenGL)

Here is my issue: I have a scale point, which is the unprojected mouse position. I also have a "camera" which basically translates all objects by X and Y. What I want to do is zoom into the mouse position.
I've tried this:
1. Find the mouse's x and y coordinates
2. Translate by (x,y,0) to put the origin at those coordinates
3. Scale by your desired vector (i,j,k)
4. Translate by (-x,-y,0) to put the origin back at the top left
But this doesn't factor in the camera translation.
How can I do this properly? Thanks.
glTranslatef(controls.MainGlFrame.GetCameraX(),
controls.MainGlFrame.GetCameraY(),0);
glTranslatef(current.ScalePoint.x,current.ScalePoint.y,0);
glScalef(current.ScaleFactor,current.ScaleFactor,0);
glTranslatef(-current.ScalePoint.x,-current.ScalePoint.y,0);
Instead of using glTranslate to move all the objects, you should try glOrtho. It takes as parameters the desired left, right, bottom and top coordinates, plus the min/max depth.
For example if you call glOrtho(-5, 5, -2, 2, ...); your screen will show all the points whose coords are inside a rectangle going from (-5,2) to (5,-2). The advantage is that you can easily adjust the zoom level.
If you don't multiply by any view/projection matrix (which I assume is the case), the default screen coords range from (-1,1) to (1,-1).
But in your project it can be very useful to control the camera. Call this before you draw any object instead of your glTranslate:
float left = cameraX - zoomLevel * 2;
float right = cameraX + zoomLevel * 2;
float top = cameraY + zoomLevel * 2;
float bottom = cameraY - zoomLevel * 2;
glOrtho(left, right, bottom, top, -1.f, 1.f);
Note that cameraX and cameraY now represent the center of the screen.
Now when you zoom on a point, you simply have to do something like this:
cameraX += (cameraX - screenX) * 0.5f;
cameraY += (cameraY - screenY) * 0.5f;
zoomLevel += 0.5f;
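As a minimal sketch of how this could be wired into the rendering pass each frame (the glMatrixMode/glLoadIdentity calls are my assumption about the surrounding setup; the answer itself only shows the glOrtho call):
// Rebuild the projection from the camera state every frame,
// instead of translating all the objects.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
float left   = cameraX - zoomLevel * 2;
float right  = cameraX + zoomLevel * 2;
float top    = cameraY + zoomLevel * 2;
float bottom = cameraY - zoomLevel * 2;
glOrtho(left, right, bottom, top, -1.f, 1.f);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// ... draw the scene here, without the camera glTranslatef ...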