How to understand Sprite coordinates in AddSprite function? - c++

I have a bitmap with a sprite on it. Say the sprite has coordinates (0,0)-(5,4) in the bitmap coordinate space (the source rectangle). I put it onto a destination rectangle with the same coordinates (0,0)-(5,4) in the render target coordinate space (via the ID2D1SpriteBatch::AddSprites function). But it draws the sprite with the right and bottom edges cut off by one pixel.
If I use a source rectangle with coordinates (0,0)-(6,5) (that is, one pixel more), the problem goes away and Direct2D draws the sprite as needed. OK, but I do not understand why I have to use this "plus one pixel" technique to draw the sprite uncut. What is wrong with the sprite coordinates?

The D2D_RECT_F structure that you are passing to ID2D1SpriteBatch::AddSprites as the destinationRectangles is documented as:
Represents a rectangle defined by the coordinates of the upper-left corner (left, top) and the coordinates of the lower-right corner (right, bottom).
Note that you are specifying the corners on the destination device context, not the inclusive starting/ending pixel rows/columns. Therefore, if you draw from (0,0) to (0,0), you would be asking to draw a 0-sized rectangle, rather than a 1x1 rectangle.
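For illustration, here is a minimal sketch under that rule (dc, batch and bitmap are hypothetical, already-initialized objects): a sprite covering pixel columns 0..5 and rows 0..4 is 6x5 pixels, so the exclusive right/bottom edges are left + 6 and top + 5.
// Hypothetical, already-initialized objects: dc (ID2D1DeviceContext3*),
// batch (ID2D1SpriteBatch*), bitmap (ID2D1Bitmap*).
D2D1_RECT_U src  = D2D1::RectU(0, 0, 6, 5);          // 6x5 region of the bitmap
D2D1_RECT_F dest = D2D1::RectF(0.f, 0.f, 6.f, 5.f);  // same 6x5 region on the target
batch->AddSprites(1, &dest, &src);
dc->DrawSpriteBatch(batch, bitmap);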

Related

Modifying a texture on a mesh at given world coordinate

I'm making an editor in which I want to build a terrain map. I want to use the mouse to increase/decrease terrain altitude to create mountains and lakes.
Technically I have a heightmap I want to modify at a certain texcoord that I pick out with my mouse. To do this I first go from screen coordinates to world position - I have done that. The next step, going from world position to the right texture coordinate, puzzles me though. How do I do that?
Suppose you are using a simple heightmap as a displacement map in, let's say, the y direction, and the base mesh lies in the xz plane (y = 0).
You can then discard the y coordinate from the world coordinate you have calculated and you get the point on the base mesh. From there you can map it to texture space the same way you map your texture.
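A minimal sketch of that mapping, assuming the base mesh spans [minX, maxX] x [minZ, maxZ] in the xz plane and the heightmap is mapped linearly across it (all names here are illustrative):
// World-space hit point -> heightmap texture coordinates, assuming the
// base mesh spans [minX, maxX] x [minZ, maxZ] and a linear uv mapping.
void worldToTexcoord(float worldX, float worldZ,
                     float minX, float maxX, float minZ, float maxZ,
                     float& u, float& v)
{
    // The y coordinate is discarded: displacement only moves vertices
    // vertically, so (x, z) already identifies the point on the base mesh.
    u = (worldX - minX) / (maxX - minX);
    v = (worldZ - minZ) / (maxZ - minZ);
}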
I would not implement it that way.
I would render the scene to a framebuffer and, instead of rendering the texture onto the mesh, color-code the texture coordinates onto the mesh.
If I click somewhere in screen space, I can simply read the pixel value from the framebuffer and get the texture coordinate directly.
Rendering to the framebuffer should be very inexpensive anyway.
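A sketch of the readback side, assuming the scene was already rendered into an offscreen framebuffer pickFbo whose float color attachment holds the interpolated uv in its red/green channels (the shader and all names are our assumption):
glBindFramebuffer(GL_READ_FRAMEBUFFER, pickFbo);
float uv[2] = { 0.0f, 0.0f };
// OpenGL's framebuffer origin is bottom left; window mouse coordinates
// are usually top left, hence the y flip.
glReadPixels(mouseX, windowHeight - 1 - mouseY, 1, 1, GL_RG, GL_FLOAT, uv);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
// uv[0], uv[1] now hold the texture coordinate under the cursor.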
Assuming your terrain is a simple rectangle, you first calculate the vector between the mouse world position and the origin of your terrain (the vertex of your terrain quad that the top-left corner of your height map is mapped to). E.g. mouse (50,25) - origin (-100,-100) = (150,125).
Now divide the x and y coordinates by the world-space width and height of your terrain quad:
150 / 200 = 0.75 and 125 / 200 = 0.625. This gives you the texture coordinates; if you need them as pixel coordinates instead, simply multiply by the size of your texture.
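The same arithmetic as code (the values are the ones from the example above; the texture size is illustrative):
// Mouse world position and terrain origin from the example above.
float mouseX = 50.0f,    mouseY = 25.0f;
float originX = -100.0f, originY = -100.0f;
float worldW = 200.0f,   worldH = 200.0f;   // terrain quad size in world units
float u = (mouseX - originX) / worldW;      // (50 - -100) / 200 = 0.75
float v = (mouseY - originY) / worldH;      // (25 - -100) / 200 = 0.625
// As pixel coordinates into, say, a 1024x1024 height map:
int px = static_cast<int>(u * 1024.0f);
int py = static_cast<int>(v * 1024.0f);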
I assume the following:
The world coordinates you computed are those of the mouse pointer within the view frustum. I name them mouseCoord.
We also have the camera coordinates, camCoord
The world consists of triangles
Each triangle vertex has texture coordinates; these are interpolated by barycentric coordinates.
If so, the solution goes like this:
Use camCoord as the origin. Compute the direction of a ray as mouseCoord - camCoord.
Compute the point of intersection with a triangle. The naive variant is to check every triangle for intersection; a more sophisticated one rules out most triangles first with some other scheme, e.g. partitioning the world into cubes, tracing the ray along the cubes, and only testing the triangles that overlap those cubes. Intersection with a triangle can be computed as on this website: http://www.lighthouse3d.com/tutorials/maths/ray-triangle-intersection/
Compute the intersection point's barycentric coordinates with respect to that triangle, like this: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-rendering-a-triangle/barycentric-coordinates
Use the barycentric coordinates as weights for the texture coordinates of the corresponding triangle vertices. The result is the texture coordinates of the intersection point, aka what you want.
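For reference, here is a compact sketch of steps 2-4 above using the Moller-Trumbore intersection test (GLM is assumed for the vector math; all names are illustrative):
#include <cmath>
#include <glm/glm.hpp>
// Intersect the ray (orig, dir) with triangle (p0, p1, p2); on a hit,
// interpolate the per-vertex texcoords (t0, t1, t2) by the hit point's
// barycentric weights and return them in uvOut.
bool rayTriangleUV(const glm::vec3& orig, const glm::vec3& dir,
                   const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
                   const glm::vec2& t0, const glm::vec2& t1, const glm::vec2& t2,
                   glm::vec2& uvOut)
{
    const glm::vec3 e1 = p1 - p0, e2 = p2 - p0;
    const glm::vec3 pv = glm::cross(dir, e2);
    const float det = glm::dot(e1, pv);
    if (std::fabs(det) < 1e-8f) return false;        // ray parallel to triangle
    const float inv = 1.0f / det;
    const glm::vec3 s = orig - p0;
    const float b1 = glm::dot(s, pv) * inv;          // barycentric weight of p1
    if (b1 < 0.0f || b1 > 1.0f) return false;
    const glm::vec3 q = glm::cross(s, e1);
    const float b2 = glm::dot(dir, q) * inv;         // barycentric weight of p2
    if (b2 < 0.0f || b1 + b2 > 1.0f) return false;
    if (glm::dot(e2, q) * inv < 0.0f) return false;  // intersection behind the origin
    const float b0 = 1.0f - b1 - b2;                 // weight of p0
    uvOut = b0 * t0 + b1 * t1 + b2 * t2;             // step 4: interpolate texcoords
    return true;
}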
If I misunderstood what you wanted, please edit your question with additional information.
Another variant specific for a height map:
Assume the assumptions change as follows:
The world has ground tiles over x and y
The ground tiles have height values in their corners
For a point within a tile, the height value is interpolated somehow, e.g. by bilinear interpolation.
The texture is interpolated in the same way, again with given texture coordinates for the corners
A feasible (approximate) algorithm for that:
Again, compute origin and direction.
Without loss of generality, we assume that the direction has a larger change in the x direction. If not, exchange x and y in the algorithm.
Trace the ray with a given step length for x; that is, in each step the x coordinate changes by that step length. (Take the direction, multiply it by the step size divided by its x value, and add that scaled direction to the current position, starting at the origin.)
For your current coordinate, check whether its z value is below the current height (i.e. the ray has just collided with the ground).
If so, either finish, or decrease the step size and do a finer search in that vicinity: go backwards until you are above the height again, then go forwards in even finer steps, et cetera. The result is the current x and y coordinates.
Compute the relative position of your x and y coordinates within the current tile. Use that as weights for the corner texture coordinates.
This algorithm can theoretically jump over very thin peaks; choose a small enough step size to counter that. I cannot give an exact algorithm without knowing what type of interpolation the height map uses. It might not be the worst idea to create triangles anyway, maybe out of bilinearly interpolated coordinates? In any case, the algorithm is good for finding the tile in which the ray collides.
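A hedged sketch of the marching loop described above, with z as the 'up' axis as in this answer (heightAt and all other names are illustrative assumptions):
#include <cmath>
#include <glm/glm.hpp>
float heightAt(float x, float y);  // assumed: samples the interpolated height map
bool marchHeightmap(glm::vec3 origin, glm::vec3 dir, float stepX, int maxSteps,
                    glm::vec2& hit)
{
    // Scale dir so each step advances |x| by stepX (the algorithm assumes x
    // changes faster than y; otherwise swap the roles of x and y).
    const glm::vec3 step = dir * (stepX / std::fabs(dir.x));
    glm::vec3 pos = origin;
    for (int i = 0; i < maxSteps; ++i) {
        pos += step;
        if (pos.z <= heightAt(pos.x, pos.y)) {   // just went below the ground
            // A refinement pass (e.g. binary search between pos - step and
            // pos) would go here for a finer result.
            hit = glm::vec2(pos.x, pos.y);
            return true;
        }
    }
    return false;                                // no hit within maxSteps
}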
Another variant would be to trace the ray over the points at which its x-y coordinates cross the tile grid, and then check whether the z coordinate went below the height map; if so, we know the ray collides in this tile. This can produce a false negative if the height inside a tile can be greater than at its edges, as certain forms of interpolation can produce, especially those that consider the neighbour tiles. It works just fine with bilinear interpolation, though.
With bilinear interpolation, the exact intersection can be found like this: take the two (x,y) coordinates at which the ray crosses the grid, compute the height at each to obtain two (x,y,z) coordinates, and create a line through them. The intersection of that line with the ray is the intersection with the tile's height map.
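A sketch of that last step, assuming zr1/zr2 are the ray's heights at the two grid crossings and z1/z2 are the height map's heights there (our naming):
// Solve for the fraction f along the crossing span where the ray height
// equals the interpolated ground height:
//   d(f) = (zr1 - z1) + f * ((zr2 - zr1) - (z2 - z1)) = 0
float intersectFraction(float zr1, float zr2, float z1, float z2)
{
    float denom = (zr2 - zr1) - (z2 - z1);
    return (z1 - zr1) / denom;  // caller must check denom != 0 and 0 <= f <= 1
}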
The simplest way is to render the mesh in a pre-pass with the uvs as the colour. No screen-to-world transform is needed; the uv is simply the value at the mouse position. Just be careful with mips/filtering etc.

mfc when to use logical/device coordinates

I have heard that rectangles, mouse coordinates and other things involving drawing all use device coordinates. Is this true? Is there any way to tell whether I have logical or device coordinates?
I could look at the documentation of functions that give me the coordinates, but sometimes they don't explicitly say if these are logical or device coordinates. For example, the documentation for GetCursorPos function says it "retrieves the position of the mouse cursor, in screen coordinates."
I am assuming screen coordinates are the same as device coordinates? Does this mean I have to convert the screen coordinates I get from the function into client coordinates?
You know that coordinate (0,0) is at the top-left corner of the screen. But on paper, when we draw a graph, (0,0) may be at the bottom left, or at the centre of the graph plotting paper.
By default, the logical coordinates and the physical/screen coordinates are the same, and (0,0) is the top left. But what if you want to draw a line from the bottom left to somewhere in the middle of the screen, matching the math/trigonometry you've learnt or are practising? Well, then you change the logical coordinate system to something of your liking.
You'd use SetMapMode to change the logical coordinate system. You may later use LPtoDP, DPtoLP, ClientToScreen, ScreenToClient etc. for mapping between logical and device coordinates, and between physical monitor and window coordinates.
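A hedged MFC sketch of that flow (CMyView is an illustrative wizard-style view class, and MM_LOENGLISH is just one possible mapping mode):
void CMyView::OnLButtonDown(UINT nFlags, CPoint point)
{
    CClientDC dc(this);
    dc.SetMapMode(MM_LOENGLISH);  // logical units of 0.01 inch, y grows upwards
    dc.DPtoLP(&point);            // device (client) coordinates -> logical
    // `point` is now in logical coordinates. A position from GetCursorPos
    // would be in screen coordinates and would need ScreenToClient(&point)
    // first to become client/device coordinates.
    CView::OnLButtonDown(nFlags, point);
}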
About Coordinate Spaces and Transformations

GPU mouse picking OpenGL/WebGL

I understand I need to render only a 1x1 or 3x3 pixel part of the screen where the mouse is, with object ids as colors, and then get the id from the color.
I have implemented ray-cast picking with spheres, and I am guessing it has something to do with making the camera look in the direction of the mouse ray?
How do I render the correct few pixels?
Edit:
Setting the camera in the direction of the mouse ray works, but if I make the viewport smaller the picture scales, and what (I think) I need is for it to be cropped rather than scaled. How would I achieve this?
The easiest solution is to use the scissor test. It allows you to render only pixels within a specified rectangular sub-region of your window.
For example, to limit your rendering to 3x3 pixels centered at pixel (x, y):
glScissor(x - 1, y - 1, 3, 3);
glEnable(GL_SCISSOR_TEST);
glDraw...(...);
glDisable(GL_SCISSOR_TEST);
Note that the origin of the coordinate system is at the bottom left of the window, while most window systems will give you mouse coordinates in a coordinate system that has its origin at the top left. If that's the case on your system, you will have to invert the y-coordinate by subtracting it from windowHeight - 1.
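For example, with top-left-origin mouse coordinates, the call above becomes:
// Flip top-left-origin mouse coordinates to GL's bottom-left origin.
int x = mouseX;
int y = windowHeight - 1 - mouseY;
glScissor(x - 1, y - 1, 3, 3);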

Screen space bounding box computation in OpenGL

I'm trying to implement the tiled deferred rendering method and now I'm stuck. I'm computing min/max depth for each 32x32 tile and storing it in a texture. Then I want to compute the screen-space bounding box (bounding square), represented by the lower-left and top-right corners of a rectangle, for every point light (sphere) in my scene. Together with the min/max depth, this will be used to check whether a light affects the actual tile.
Problem is I have no idea how to do this. Any idea, source code or exact math?
Screen-space is basically a 2D entity, so instead of a bounding box think of a bounding rectangle.
Here is a simple way to compute it:
Project 8 corner points of your world-space bounding box onto the screen using your ModelViewProjection matrix
Find a bounding rectangle of these points (which is just min/max X and Y coordinates of the points)
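A minimal sketch of those two steps (GLM is assumed for the math types; all names are illustrative):
#include <algorithm>
#include <cfloat>
#include <glm/glm.hpp>
struct ScreenRect { float minX, minY, maxX, maxY; };
ScreenRect projectAABB(const glm::vec3 corners[8], const glm::mat4& mvp,
                       float viewportW, float viewportH)
{
    ScreenRect r = { FLT_MAX, FLT_MAX, -FLT_MAX, -FLT_MAX };
    for (int i = 0; i < 8; ++i) {
        glm::vec4 clip = mvp * glm::vec4(corners[i], 1.0f);
        float sx = (clip.x / clip.w * 0.5f + 0.5f) * viewportW;  // NDC -> pixels
        float sy = (clip.y / clip.w * 0.5f + 0.5f) * viewportH;
        r.minX = std::min(r.minX, sx);  r.maxX = std::max(r.maxX, sx);
        r.minY = std::min(r.minY, sy);  r.maxY = std::max(r.maxY, sy);
    }
    // Caveat: corners with clip.w <= 0 (behind the camera) break the divide
    // and should be clipped against the near plane or clamped first.
    return r;
}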
A more sophisticated way can be used to compute a screen-space bounding rectangle for a point light source: calculate the four planes that pass through the camera position and are tangent to the light's sphere of illumination (the light radius). The intersection of each tangent plane with the image plane gives a line on the image plane; these four lines define the resulting bounding rectangle.
Refer to this article for math details: http://www.altdevblogaday.com/2012/03/01/getting-the-projected-extent-of-a-sphere-to-the-near-plane/

Get rectangle vertices by center, normal, length and height

I am looking for a way to get all the vertices of a rectangle whose center, normal, length and height I know. I am a little weak in maths so please help me.
Edit : the plane is in 3D space.
You can easily calculate the x and y coordinates of the vertices of a rectangle in 2D space, given the center, width and height, by subtracting/adding half the width/height from/to the x/y position of the center point.
If you need this in 3D space, it becomes a little more tricky and relies on a bit of trigonometry, but it still follows the same principle. You'll need one extra piece of information: some way of fixing the orientation of the rectangle in its plane, i.e. which direction the rectangle is 'facing'. The normal will tell you which plane the rectangle lies on, but without some orientation on that plane, the best you can do is work out, for each vertex, a set of possible positions on a circle around the center.
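To illustrate, a hedged sketch given that extra piece of information as an 'up hint' vector pinning the in-plane orientation (GLM is assumed; all names are illustrative):
#include <glm/glm.hpp>
void rectangleVertices(const glm::vec3& center, const glm::vec3& normal,
                       float length, float height, const glm::vec3& upHint,
                       glm::vec3 out[4])
{
    const glm::vec3 n = glm::normalize(normal);
    // Two axes spanning the rectangle's plane; upHint must not be parallel
    // to the normal, or the cross product degenerates.
    const glm::vec3 right = glm::normalize(glm::cross(upHint, n));
    const glm::vec3 up    = glm::cross(n, right);
    const glm::vec3 halfR = right * (length * 0.5f);
    const glm::vec3 halfU = up * (height * 0.5f);
    out[0] = center - halfR - halfU;  // bottom-left
    out[1] = center + halfR - halfU;  // bottom-right
    out[2] = center + halfR + halfU;  // top-right
    out[3] = center - halfR + halfU;  // top-left
}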