I was writing a test program with Direct2D to draw lines, and I noticed a small detail: when I tell Direct2D to draw a dotted line between (100,200) and (500,200), it doesn't actually start drawing at (100,200) but one pixel off, at (100,199). Does anyone know why this is? I checked this with Direct2D's antialiasing mode turned off, while printing the mouse coordinates to the debug output.
This is caused by Direct2D's own design.
To be precise, the 1-pixel stroke actually covers the rectangle from (99.5, 199.5) to (500.5, 200.5).
@Rick Brewster's answer explains this:
When you give it a pixel coordinate such as (100, 120), that refers to the top and left corner of the pixel element that spans from pixel coordinates (100, 120) to (101, 121) (top/left are inclusive, right/bottom are exclusive). Since it's a straight horizontal line you are effectively getting a filled rectangle from (99.5, 119.5) - (300.5, 120.5).
So if you want to draw a line that covers the pixels (100, 200) through (500, 200), you can either use aliased rendering or apply half-pixel offsets to the coordinates.
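A minimal sketch of both options, assuming you already have an ID2D1RenderTarget* rt and an ID2D1Brush* brush created elsewhere (device and render-target setup omitted):

#include <d2d1.h>

// Option 1: switch to aliased rendering so integer coordinates map to whole pixels.
void DrawAliased(ID2D1RenderTarget* rt, ID2D1Brush* brush) {
    rt->SetAntialiasMode(D2D1_ANTIALIAS_MODE_ALIASED);
    rt->DrawLine(D2D1::Point2F(100.0f, 200.0f),
                 D2D1::Point2F(500.0f, 200.0f),
                 brush, 1.0f);
}

// Option 2: keep antialiasing, but offset by half a pixel so the 1-pixel-wide
// stroke is centred on the pixel row you actually want to cover.
void DrawWithHalfPixelOffset(ID2D1RenderTarget* rt, ID2D1Brush* brush) {
    rt->DrawLine(D2D1::Point2F(100.5f, 200.5f),
                 D2D1::Point2F(500.5f, 200.5f),
                 brush, 1.0f);
}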
I'm trying to use QPainter::drawEllipse to draw circles. I want to be able to:
set the width of the stroke of the circle (QPen::width)
choose the shape of the pixels that are at the center of the circle (1x1, 1x2, 2x1 or 2x2)
optionally make the circle filled instead of stroked
ensure that the circle has the correct radius (even when the stroke width is greater than 1)
These goals are surprisingly difficult to achieve. This is an example of what I want to render (drawn by hand):
The image is 32x32 (scaled up to 512x512). The red center point is at (15, 15). The center is 1x2 so there's an extra red pixel below the center pixel. The stroke has a width of 2 pixels. If the stroke was made wider, pixels would be added to the inside of the circle. The bounding box of the circle is the same regardless of stroke width. The radius is 8 pixels. Each of the blue lines is 8 pixels long. Just to be clear, the red and blue pixels are just there for describing the circle. They are not part of my desired output.
What my problem really boils down to is rendering an ellipse that fits perfectly inside a rectangle. I can calculate the rectangle using the center point, the radius, and the center shape. That part is easy. Simply calling drawEllipse with this rectangle doesn't work. I think I have to adjust this rectangle somehow before calling drawEllipse but I'm not too sure how to adjust it. I've tried fiddling around with it and I found some solutions that work for some pen widths but not others.
Does the pen cap matter? I've been using RoundCap. Should I be using a different cap?
I'm almost at the point where I'm considering doing the pixel manipulation myself. I'm rendering onto a QImage and using the Source composite operation so my code might be slightly faster than drawEllipse. memset is about 10x faster than QImage::fill so writing faster code probably won't be too hard! I'd rather not have to do that though.
I stumbled upon a section in the docs that talks about how QRects are rendered. It describes the relationship between the rendered pixels and the logical rectangle. The rendered rectangle is bigger than the logical rectangle. All I have to do is make the logical rectangle smaller to compensate.
// Shrinks a logical rectangle so that a stroke of the given thickness,
// centred on its edges, stays inside the original bounds.
QRect adjustStrokedRect(const QRect rect, const int thickness) {
    return QRect{
        rect.left() + thickness / 2,
        rect.top() + thickness / 2,
        rect.width() - thickness,
        rect.height() - thickness
    };
}
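For example (illustrative values of my own, not from my real code):

QImage image{32, 32, QImage::Format_ARGB32_Premultiplied};
image.fill(Qt::transparent);

QPainter painter{&image};
QPen pen{QColor{255, 0, 0, 255}};
pen.setWidth(3);                                              // 3-pixel stroke
painter.setPen(pen);
// Stroke a 24x24 logical rectangle; the adjustment keeps the stroke inside it.
painter.drawRect(adjustStrokedRect(QRect{4, 4, 24, 24}, 3));
painter.end();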
Ok, so now I can get stroked rectangles to render in the right place. An ellipse is described by a QRect so what if I just apply this transformation to that rectangle?
Nope.
It sort of works when the thickness is 1, 2, 4, or 6, but not 3, 5, or 7: the circle comes out one pixel too small for those odd thicknesses. So I tried adding 1 to the rectangle size when thickness % 2 == 1 && thickness != 1, but then a square rectangle produces an asymmetric circle. For some combinations of position and size, a wonky asymmetric circle is rendered even when the size is square.
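Roughly what I mean by that tweak (just to illustrate; it still misbehaves as described):

QRect adjustStrokedEllipseRect(const QRect rect, const int thickness) {
    // Grow the rectangle by one pixel for odd thicknesses greater than 1.
    const int extra = (thickness % 2 == 1 && thickness != 1) ? 1 : 0;
    return QRect{
        rect.left() + thickness / 2,
        rect.top() + thickness / 2,
        rect.width() - thickness + extra,
        rect.height() - thickness + extra
    };
}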
Here's a weird image that you can easily reproduce:
Produce it with this code:
QImage image{32, 32, QImage::Format_ARGB32_Premultiplied};
QPainter painter{&image};
QPen pen{Qt::NoBrush, 3.0, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin};
pen.setColor(QColor{0, 255, 0, 255});
painter.setPen(pen);
painter.drawEllipse(8, 8, 17, 17);
image.save("weird.png");
I simply don't understand how that's even possible. To me, it seems like drawEllipse is rendering an ellipse that just roughly fits within the rectangle. I haven't been able to find the relationship between the rectangle and the ellipse anywhere in the docs. Perhaps this is because it's a very loose relationship.
I have no trouble getting QPainter::drawEllipse to draw circles with a stroke width of 1 so for now I just won't allow thick circles in my application. If I can’t render it perfectly, I won’t render it at all. I'm not marking this answer as accepted though as I would still like this to work.
I am probably too late for this, but still, for future reference:
Unfortunately, Qt draws ellipses using Bézier curves (at least as of now; that might change), which is a pretty good approximation of an ellipse but isn't perfect. Plotting a pixel-perfect ellipse would require a manual implementation at the pixel level.
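If you do go the manual route, the classic midpoint circle algorithm is not much code. A rough sketch (my own illustration, not part of Qt; it assumes a 1-pixel stroke and that the whole circle lies inside the image):

#include <QImage>

void drawPixelCircle(QImage &img, int cx, int cy, int radius, QRgb color) {
    int x = radius;
    int y = 0;
    int d = 1 - radius;                 // midpoint decision variable
    while (y <= x) {
        // Plot the eight symmetric octants.
        img.setPixel(cx + x, cy + y, color);
        img.setPixel(cx - x, cy + y, color);
        img.setPixel(cx + x, cy - y, color);
        img.setPixel(cx - x, cy - y, color);
        img.setPixel(cx + y, cy + x, color);
        img.setPixel(cx - y, cy + x, color);
        img.setPixel(cx + y, cy - x, color);
        img.setPixel(cx - y, cy - x, color);
        ++y;
        if (d <= 0) {
            d += 2 * y + 1;             // midpoint is inside the circle
        } else {
            --x;
            d += 2 * (y - x) + 1;       // midpoint is outside, step inwards
        }
    }
}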
Try setting this QPainter flag to true:
painter->setRenderHint(QPainter::Antialiasing, true);
Did the trick for me!
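In context with the reproduction code from the question, that looks roughly like this (the half-pixel QRectF is my own tweak, not something from the docs):

QImage image{32, 32, QImage::Format_ARGB32_Premultiplied};
image.fill(Qt::transparent);                          // start from a known state

QPainter painter{&image};
painter.setRenderHint(QPainter::Antialiasing, true);  // the important line

QPen pen{Qt::NoBrush, 3.0, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin};
pen.setColor(QColor{0, 255, 0, 255});
painter.setPen(pen);
painter.drawEllipse(QRectF{8.5, 8.5, 17.0, 17.0});    // floating-point rect avoids integer rounding
painter.end();
image.save("smooth.png");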
I understand that I need to render only a 1x1 or 3x3 pixel region of the screen under the mouse, with object IDs encoded as colors, and then recover the ID from the color.
I have implemented ray-cast picking with spheres, and I am guessing this has something to do with making the camera look in the direction of the mouse ray?
How do I render just the correct few pixels?
Edit:
Pointing the camera along the mouse ray works, but if I make the viewport smaller the picture scales down, whereas what I (think I) need is for it to be cropped rather than scaled. How would I achieve this?
The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to 3x3 pixels centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glDraw...(...);
glDisable(GL_SCISSOR_TEST);
Note that the origin of the coordinate system is at the bottom left of the window, while most window systems will give you mouse coordinates in a coordinate system that has its origin at the top left. If that's the case on your system, you will have to invert the y-coordinate by subtracting it from windowHeight - 1.
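Once the ID pass has been rendered, reading the ID back is a single glReadPixels call. A sketch, assuming the object ID was encoded into the RGB channels when drawing, and that mouseX, mouseY and windowHeight are available:

// Read the 1x1 pixel under the mouse and decode the object ID.
// mouseX/mouseY are window coordinates with a top-left origin, so the
// y-coordinate is flipped as described above.
unsigned char pixel[3];
glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1,
             GL_RGB, GL_UNSIGNED_BYTE, pixel);
unsigned int objectId = pixel[0] | (pixel[1] << 8) | ((unsigned int)pixel[2] << 16);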
Does anyone know a method of drawing textured lines or rectangles using OpenCV? I know this can be done in OpenGL.
Another alternative would be to draw an image that is squashed, stretched and rotated so that, overlaid on another image, it occupies the same space as a line of a given thickness drawn between two points, as shown below. I'd like to replace the red line of length L and width w with the image, squashed and stretched to the same length and width, and rotated to fit between the black dots.
What I've seen of OpenCV's image rotation is that it creates a larger image to accommodate the rotated image. Obviously, this isn't what I want.
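To make the idea concrete, something like the following is what I have in mind (an untested sketch; drawTexturedLine and the variable names are just for illustration):

#include <opencv2/opencv.hpp>
#include <cmath>

// Warp `tex` so that it covers the band of width `w` between p1 and p2 on `canvas`.
// Assumes `tex` and `canvas` are CV_8UC3 and the warped quad lies inside the canvas.
void drawTexturedLine(cv::Mat &canvas, const cv::Mat &tex,
                      cv::Point2f p1, cv::Point2f p2, float w) {
    // Unit vectors along and perpendicular to the line.
    cv::Point2f d = p2 - p1;
    float len = std::sqrt(d.x * d.x + d.y * d.y);
    cv::Point2f dir = d * (1.0f / len);
    cv::Point2f n(-dir.y, dir.x);

    // Three corners of the destination rectangle (an affine map needs only three).
    cv::Point2f dst[3] = { p1 + n * (w * 0.5f),     // top-left
                           p2 + n * (w * 0.5f),     // top-right
                           p1 - n * (w * 0.5f) };   // bottom-left
    cv::Point2f src[3] = { {0.f, 0.f},
                           {(float)tex.cols, 0.f},
                           {0.f, (float)tex.rows} };
    cv::Mat M = cv::getAffineTransform(src, dst);

    // Warp the texture and a full-white mask of the same size, then composite.
    cv::Mat warped, mask;
    cv::warpAffine(tex, warped, M, canvas.size());
    cv::warpAffine(cv::Mat(tex.size(), CV_8UC1, cv::Scalar(255)), mask, M, canvas.size());
    warped.copyTo(canvas, mask);
}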
Thanks in advance.
I have a program in which I draw a simple rectangle on the screen. When I put the rectangle inside a camera such as an ofEasyCam, it translates the rectangle's position to the centre of the screen. It also flips the figure vertically, giving me an inverted drawing of the rectangle.
I have lots of objects on the screen and all of them appear inverted. How do I stop the camera from flipping the Y axis so that my objects appear as drawn?
I found this link, which suggests there might be a bug; the suggested fix is to do the following:
cam.setScale(1, -1, 1);
It also mentions that this occurred when they used an ofEasyCam inside an ofFbo.
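For context, a minimal sketch of where that call would go in a standard openFrameworks app (assuming cam is an ofEasyCam member of ofApp and a recent openFrameworks version):

void ofApp::setup() {
    cam.setScale(1, -1, 1);               // flip the Y axis back
}

void ofApp::draw() {
    cam.begin();
    ofDrawRectangle(-50, -50, 100, 100);  // drawn relative to the camera's origin
    cam.end();
}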
I made a window sized 800x600. I called
gluOrtho2D(-400,400,-300,300);
glViewport(400,300,400,300);
and I drew a line from (-100,-100) to (100,100). I think I should see a line from (0,0) to (100,100), but I am getting the whole line. Why is this?
In theory, glViewport doesn't cause any clipping (see section 10). Normally, all drawing is clipped to the window, so since you have asked OpenGL to draw into only a region of your window, you also need to tell OpenGL to clip away coordinates outside that viewport; for that you need glScissor. However, some implementations do clip their drawing to the viewport (see my comment for details).
In addition, your math is wrong. Your projection matrix is 800 units wide by 600 units tall, centered at (0, 0). This is then mapped to a portion of the window that is 400 pixels wide by 300 pixels tall, in the upper-right corner of the window.
If you draw a line from (-100, -100) to (100, 100), it will extend across only a small part of your viewing frustum. The frustum is sized to fit in the viewport.
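To make that concrete, here is a small self-contained calculation of where the two endpoints land in window pixels (the viewport-transform formula is the standard one; the numbers are taken from the question):

#include <cstdio>

int main() {
    const float l = -400, r = 400, b = -300, t = 300;    // gluOrtho2D bounds
    const float vx = 400, vy = 300, vw = 400, vh = 300;  // glViewport rectangle

    auto toWindow = [&](float x, float y) {
        float ndcX = (x - l) / (r - l) * 2 - 1;          // normalized device coords
        float ndcY = (y - b) / (t - b) * 2 - 1;
        std::printf("(%g, %g) -> window (%g, %g)\n", x, y,
                    vx + (ndcX + 1) / 2 * vw,
                    vy + (ndcY + 1) / 2 * vh);
    };
    toWindow(-100, -100);   // prints: (-100, -100) -> window (550, 400)
    toWindow( 100,  100);   // prints: (100, 100) -> window (650, 500)
    return 0;
}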
In the image, the blue box is the window, and the red box represents the viewport. The black line should be the line that you drew.
(Image illustrating this: http://img696.imageshack.us/img696/6541/opengl.png)
Hope that helps!
glViewport describes the area of your window that OpenGL will draw into.
glOrtho or gluOrtho2D defines the coordinate system (in OpenGL units) that is mapped onto the area defined by glViewport.
So your line will be drawn within the viewport, from (-100,-100) to (100,100).