Accessing a transformed texture's pixel data - C++

How do I access the TRANSFORMED pixel data for a texture, after it has been transformed (rotated and scaled) by D3DXMatrixTransformation2D() and texture->SetTransform()?
I'm trying to do 2D pixel perfect collision detection and it's impossible if you can only access the untransformed pixel data using texture->LockRect().
Anybody got any ideas?

This will not achieve the results you wish. With the SetTransform method you set a transformation that is applied when the texture is drawn; it does not morph the texture data itself, so there is no transformed image whose pixel values you could read out.
What you could do instead is project your world coordinates into the UV coordinates of your texture, read out the corresponding pixel value there, compare it, and do your collision resolution on that basis.
For that, I guess you would use the inverse of the matrix you created with D3DXMatrixTransformation2D().
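A minimal sketch of that idea, assuming a D3D9 texture in D3DFMT_A8R8G8B8 format; the scaling/center/rotation/position variables stand in for whatever you pass to D3DXMatrixTransformation2D(), and desc would come from texture->GetLevelDesc(0, &desc):

```cpp
// Build the same matrix you pass to SetTransform().
D3DXMATRIX transform;
D3DXMatrixTransformation2D(&transform, NULL, 0.0f, &scaling, &center, rotation, &position);

// Invert it so a world/screen point can be mapped back into untransformed texture space.
D3DXMATRIX inverse;
D3DXMatrixInverse(&inverse, NULL, &transform);

// Candidate collision point in world/screen space.
D3DXVECTOR2 worldPoint(px, py);
D3DXVECTOR2 texPoint;
D3DXVec2TransformCoord(&texPoint, &worldPoint, &inverse);

// texPoint is now in the texture's original coordinate system, so it can be
// used to index the untransformed data returned by LockRect().
D3DLOCKED_RECT rect;
if (SUCCEEDED(texture->LockRect(0, &rect, NULL, D3DLOCK_READONLY)))
{
    int x = (int)texPoint.x;
    int y = (int)texPoint.y;
    if (x >= 0 && y >= 0 && x < (int)desc.Width && y < (int)desc.Height)
    {
        DWORD pixel = *((DWORD*)((BYTE*)rect.pBits + y * rect.Pitch) + x);
        BYTE alpha = (BYTE)(pixel >> 24);   // assuming D3DFMT_A8R8G8B8
        // alpha > 0 means the transformed sprite is solid at worldPoint
    }
    texture->UnlockRect(0);
}
```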

Related

What is, in simple terms, textureGrad()?

I read the Khronos wiki on this, but I don't really understand what it is saying. What exactly does textureGrad do?
I think it samples multiple mipmap levels and computes some color mixing using the explicit derivative vectors given to it, but I am not sure.
When you sample a texture, you need specific texture coordinates to sample the texture data at. For the sake of simplicity, I'm going to assume a 2D texture, so the texture coordinates are a 2D vector (s,t). (The explanation is analogous for other dimensionalities.)
If you want to texture-map a triangle, one typically uses one of two strategies to get to the texture coordinates:
The texture coordinates are part of the model. Every vertex contains the 2D texture coordinates as a vertex attribute. During rasterization, those texture coordinates are interpolated across the primitive.
You specify a mathematical mapping. For example, you could define some function mapping the 3D object coordinates to some 2D texture coordinates. You can for example define some projection, and project the texture onto a surface, just like a real projector would project an image onto some real-world objects.
In either case, each fragment generated when rasterizing the primitive typically gets different texture coordinates, so each drawn pixel on the screen will get a different part of the texture.
The key point is this: each fragment has 2D pixel coordinates (x,y) as well as 2D texture coordinates (s,t), so we can basically interpret this relationship as a mathematical function:
(s,t) = T(x,y)
Since this is a vector function in the 2D pixel position vector (x,y), we can also build the partial derivatives along the x direction (to the right) and the y direction (upwards), which tell us the rate of change of the texture coordinates along those directions.
And the dTdx and dTdy in textureGrad are just that.
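Written out explicitly (just restating the definition in symbols), with T(x,y) = (s(x,y), t(x,y)):

$$\mathrm{dTdx} = \frac{\partial T}{\partial x} = \left(\frac{\partial s}{\partial x},\ \frac{\partial t}{\partial x}\right), \qquad \mathrm{dTdy} = \frac{\partial T}{\partial y} = \left(\frac{\partial s}{\partial y},\ \frac{\partial t}{\partial y}\right)$$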
So what does the GPU need this for?
When you want to actually filter the texture (in contrast to simple point sampling), you need to know the pixel footprint in texture space. Each single fragment represents the area of one pixel on the screen, and you are going to use a single color value from the texture to represent the whole pixel (multisampling aside). The pixel footprint now represents the actual area the pixel would have in texture space. We could calculate it by interpolating the texcoords not for the pixel center, but for the 4 pixel corners. The resulting texcoords would form a trapezoid in texture space.
When you minify the texture, several texels are mapped to the same pixel (so the pixel footprint is large in texture space). When you magnify it, each pixel will represent only a fraction of the corresponding texel (so the footprint is quite small).
The texture footprint tells you:
if the texture is minified or magnified (GL has different filter settings for each case)
how many texels would be mapped to each pixel, so which mipmap level would be appropriate (see the formula after this list)
how much anisotropy there is in the pixel footprint. Each pixel on the screen and each texel in texture space is basically a square, but the pixel footprint might significantly deviate from that, and can be much taller than wide or the other way around (especially in situations with high perspective distortion). Classic bilinear or trilinear texture filters always use a square filter footprint, but the anisotropic texture filter will use this information to generate a filter footprint which more closely matches the actual pixel footprint (to avoid mixing in texel data which shouldn't really belong to the pixel).
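As an aside, the mipmap level derived from this footprint follows (roughly, per the OpenGL specification's minification rule) from the scaled gradients, where u = s·width and v = t·height are the texture coordinates in texel units:

$$\rho = \max\!\left(\sqrt{\left(\frac{\partial u}{\partial x}\right)^{2} + \left(\frac{\partial v}{\partial x}\right)^{2}},\ \sqrt{\left(\frac{\partial u}{\partial y}\right)^{2} + \left(\frac{\partial v}{\partial y}\right)^{2}}\right), \qquad \lambda = \log_2 \rho$$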
Instead of calculating the texture coordinates at all pixel corners, we are going to use the partial derivatives at the fragment center as an approximation for the pixel footprint.
The following diagram shows the geometric relationship:
This represents the footprint of four neighboring pixels (2x2) in texture space, so the uniform grid represents the texels, and the 4 trapezoids represent the 4 pixel footprints.
Now calculating the actual derivatives would imply that we have some more or less explicit formula T(x,y) as described above. GPUs usually use another approximation:
they just look at the actual texcoords of the neighboring fragments (which are going to be calculated anyway) in each 2x2 pixel block, and approximate the footprint by finite differencing - just subtracting the actual texcoords of neighboring fragments from each other.
The result is shown as the dotted parallelogram in the diagram.
In hardware, this is implemented so that 2x2 pixel quads are always shaded in parallel in the same warp/wavefront/SIMD group. The GLSL derivative functions like dFdx and dFdy simply work by subtracting the actual values of the neighboring fragments. And the standard texture function just internally uses this mechanism on the texture coordinate argument. The textureGrad functions bypass that and allow you to specify your own values, which means you control what pixel footprint the GPU assumes when doing the actual filtering / mipmap level selection.
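To make the finite-differencing idea concrete, here is a tiny CPU-side sketch (plain C++, purely illustrative, not actual driver code) of how the derivatives are approximated within a 2x2 quad:

```cpp
#include <cstdio>

struct Vec2 { float s, t; };

// Texture coordinates of a 2x2 pixel quad, laid out as
//   [0] = (x,   y)    [1] = (x+1, y)
//   [2] = (x,   y+1)  [3] = (x+1, y+1)
// In a real GPU these come from the four fragments shaded together.
Vec2 dFdxApprox(const Vec2 quad[4]) {
    // finite difference along x: right neighbor minus left neighbor
    return { quad[1].s - quad[0].s, quad[1].t - quad[0].t };
}

Vec2 dFdyApprox(const Vec2 quad[4]) {
    // finite difference along y: next row minus current row
    return { quad[2].s - quad[0].s, quad[2].t - quad[0].t };
}

int main() {
    // Example: texcoords changing faster along x than along y.
    Vec2 quad[4] = { {0.10f, 0.50f}, {0.14f, 0.50f},
                     {0.10f, 0.51f}, {0.14f, 0.51f} };
    Vec2 dx = dFdxApprox(quad);
    Vec2 dy = dFdyApprox(quad);
    std::printf("dTdx = (%f, %f), dTdy = (%f, %f)\n", dx.s, dx.t, dy.s, dy.t);
    // These two vectors span the (parallelogram) pixel footprint in texture space,
    // which is exactly the kind of value you could also hand to textureGrad() explicitly.
}
```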

Generating a depth map with Panda3D

I need to generate test data for 3D reconstruction code. For this I decided to use Panda3D. I am able to create a simple app and see the scene. Now I need to create a depth map for the scene, i.e. for each pixel on the screen I need to calculate the depth: the distance from the camera to the closest object in 3D space (measured perpendicular to the camera plane). Which API functions are most suitable for that?
This is in principle similar to shadow mapping, as demonstrated in the advanced shadow sample. You will need to create an offscreen buffer and camera to render the depth buffer. Note that unless you use an orthographic lens, the resulting depth values will not be linear and will need to be transformed to a linear value using the near and far values of the lens. The near and far distances should be configured so as to get the desired range of depth values.
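As a sketch of the linearisation step (assuming depth values in [0,1] produced by a standard perspective projection; adjust if your depth range differs):

```cpp
// Convert a non-linear depth-buffer value d in [0,1] back to a linear
// eye-space distance, given the near and far distances of the lens.
float linearizeDepth(float d, float nearDist, float farDist)
{
    return (nearDist * farDist) / (farDist - d * (farDist - nearDist));
}
```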
Alternatively, you can use a shader to write the appropriate distance values into the colour buffer, which is particularly useful if you want to store distance values of a perspective camera without having to undo the perspective projection later, or if you want to store the original world-space positions.
If you want to be able to access the values on the CPU, you will need to use the RTM_copy_ram value instead of RTM_bind_or_copy when binding your texture to tell Panda3D to transfer the results of rendering the buffer to CPU-accessible memory.

Texture Mapping without OpenGL

So I'm supposed to Texture Map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera), however I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model, and assign each triangle "texture coordinates" (which I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you just use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
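A generic sketch of that idea in C++ (hedged: it uses plain barycentric interpolation rather than the course framework's redABC/ssiRed machinery, and assumes an RGBA8 texture stored as a flat array of texels):

```cpp
#include <cstdint>
#include <algorithm>

struct Vec2 { float x, y; };

// Twice the signed area of triangle (a, b, c); also usable as an edge function.
static float edge(const Vec2& a, const Vec2& b, const Vec2& c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Nearest-neighbour texel fetch; texture is width*height RGBA8 texels.
static uint32_t sampleNearest(const uint32_t* tex, int width, int height, float s, float t) {
    int x = std::min(std::max(int(s * width),  0), width  - 1);
    int y = std::min(std::max(int(t * height), 0), height - 1);
    return tex[y * width + x];
}

// Interpolate (s,t) at pixel p inside the screen-space triangle (v0, v1, v2)
// with per-vertex texture coordinates (st0, st1, st2), then fetch the texel.
uint32_t shadePixel(const Vec2& p,
                    const Vec2& v0, const Vec2& v1, const Vec2& v2,
                    const Vec2& st0, const Vec2& st1, const Vec2& st2,
                    const uint32_t* tex, int texW, int texH) {
    float area = edge(v0, v1, v2);
    float w0 = edge(v1, v2, p) / area;   // barycentric weights
    float w1 = edge(v2, v0, p) / area;
    float w2 = edge(v0, v1, p) / area;
    float s = w0 * st0.x + w1 * st1.x + w2 * st2.x;   // interpolated texcoords,
    float t = w0 * st0.y + w1 * st1.y + w2 * st2.y;   // same scheme as the colors
    return sampleNearest(tex, texW, texH, s, t);
}
```

Note that this is screen-space (affine) interpolation; for a perspective camera you would interpolate s/w, t/w and 1/w and divide per pixel, or use the model-space interpolation described in the slides.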
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.

OpenGL/GLUT - Project ModelView Coordinate to Texture Matrix

Is there a way using OpenGL or GLUT to project a point from the model-view matrix into an associated texture matrix? If not, is there a commonly used library that achieves this? I want to modify the texture of an object according to a ray cast in 3D space.
The simplest case would be:
A ray is cast which intersects a quad, mapped with a single texture.
The point of intersection is converted to a value in texture space, clamped to [0.0, 1.0] in the x and y axes.
A 3x3 patch of pixels centered around the rounded value of the resulting texture point is set to an alpha value of 0 (or another RGBA value which is convenient for the desired effect).
To illustrate, here is a more complex version of the question using a sphere; the pink box shows the replaced pixels.
I just specify texture points for mapping in OpenGL, I don't actually know how the pixels are projected onto the sphere. Basically I need to do the inverse of that projection, but I don't quite know how to do that math, especially on more complex shapes like a sphere or an arbitrary convex hull. I assume that you can somehow find a planar polygon that makes up the shape which the ray is intersecting, and from there the inverse projection of a quad or triangle would be trivial.
Some equations, articles and/or example code would be nice.
There are a few ways you could accomplish what you're trying to do:
Project a world coordinate point into normalized device coordinates (NDCs) by doing the model-view and projection transformation matrix multiplications yourself (or, if you're using old-style OpenGL, call gluProject), and perform the perspective division step. If you use a depth coordinate of zero, this would correspond to intersecting your ray at the imaging plane. The only other correction you'd need is to map from NDCs (which are in the range [-1,1] in x and y) into texture space by dividing the resulting coordinate by two and then shifting by 0.5 (see the sketch after these two options).
Skip the ray tracing all together, and bind your texture as a framebuffer attachment to a framebuffer object, and then render a big point (or sprite) that modifies the colors in the neighborhood of the intersection as you want. You could use the same model-view and projection matrices, and will (probably) only need to update the viewport to match the texture resolution.
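A rough sketch of the first option using old-style OpenGL and GLU's gluProject (assuming, as above, that the textured quad fills the view so NDC space maps directly onto texture space; the intersection point and currently bound matrices are your own):

```cpp
#include <GL/glu.h>

// Map a world-space intersection point to [0,1]x[0,1] texture space,
// using the current modelview/projection matrices.
bool worldToTexCoord(double wx, double wy, double wz, double& s, double& t)
{
    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
    glGetDoublev(GL_PROJECTION_MATRIX, projection);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winX, winY, winZ;
    if (gluProject(wx, wy, wz, modelview, projection, viewport, &winX, &winY, &winZ) != GL_TRUE)
        return false;

    // gluProject already applies the viewport transform, so convert window
    // coordinates back to [0,1]; this is equivalent to NDC * 0.5 + 0.5.
    s = (winX - viewport[0]) / double(viewport[2]);
    t = (winY - viewport[1]) / double(viewport[3]);
    return true;
}
```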
So I found a solution that is a little complicated, but does the trick.
For complex geometry you must determine which quad or triangle was intersected, and use this as the plane. The quad must be planar (obviously).
Draw a plane at the identity matrix with dimensions 1x1x0, and map the texture onto it using points identical to the model geometry.
Transform the plane, and store the inverse of each transform matrix in a stack.
Find the point at which the plane is intersected.
Transform this point using the inverse matrix stack until it returns to the identity matrix (it should have no depth).
Convert this point from 1x1 space into pixel space by multiplying the point by the number of pixels and rounding. Or start your 2D combining logic here.
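A rough sketch of the last two steps (hedged: it uses GLM for the matrix math, and inverseStack is assumed to hold the inverse matrices stored earlier, most recent transform first):

```cpp
#include <vector>
#include <glm/glm.hpp>

// Undo the plane's transforms so the intersection point ends up back in the
// unit (identity) plane, then convert it to pixel coordinates in the texture.
glm::ivec2 intersectionToPixel(glm::vec3 hitPoint,
                               const std::vector<glm::mat4>& inverseStack,
                               int texWidth, int texHeight)
{
    glm::vec4 p(hitPoint, 1.0f);
    for (const glm::mat4& inv : inverseStack)   // walk back through the stored inverses
        p = inv * p;

    // p.x and p.y are now in the plane's original 1x1 space (p.z should be ~0).
    int px = (int)glm::round(p.x * (float)texWidth);    // scale to pixels and round
    int py = (int)glm::round(p.y * (float)texHeight);
    return glm::ivec2(px, py);
}
```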

OpenGL colorize filters

I have an OpenGL quad that is rendered with a grayscale gradient. I would like to colorize it by applying a filter, something like:
If color = 0,0,0 then set color to 255,255,255
If color = 0,0,1 then set color to 255,255,254
etc, or some scheme I decide on.
Note: the reason I do this in grayscale is that the algorithm I'm using was designed to be drawn in grayscale and then colorized, since the colors may not be known immediately.
This would be similar to the java LookupOp http://download.oracle.com/javase/6/docs/api/java/awt/image/LookupOp.html.
Is there a way to do this in openGL?
thanks,
Jeff
You could interpret those colours from the grayscale gradient as 1-D texture coordinates and then specify your look-up table as a 1-D texture. This seems to fit your situation.
Alternatively, you can use a fragment program (shader) to perform arbitrary colour transformations on individual pixels.
Some more explanation: What is a texture? A texture, conceptually, is some kind of lookup function, with some additional logic on top.
A 2-D texture is something which, for any pair of coordinates (s,t) or (x,y) in the range of [0,0] - [1,1], yields a specific colour (RGB, RGBA, L, whatever). Additionally it has some settings like wrapping or filtering.
Underneath, a texture is described by discrete data of a given "density" - perhaps 16x16, perhaps 256x512. The filtering process makes it possible to specify a colour for any real number between [0,0] and [1,1] (by mixing/interpolating neighbouring texels or just taking the nearest one).
A 1-D texture is identical, except that it maps just a single real value to a colour. Therefore, it can be thought of as a specific type of a "lookup table". You can consider it equivalent to a 2-D texture based on a 1xN image.
If you have a grayscale gradient, you may render it directly by treating the gradient value as a colour - or you can treat it as texture coordinates (= indices in the lookup table) and use the 1-D texture for an arbitrary colour space transform.
You'd just need to translate the gradient values (from 0..255 range) to the [0..1] range of texture indices. I'd recommend something like out = (in+0.5)/256.0. The 0.5 makes for the half-texel offset as we want to point to the middle of a texel (a value inside a texture), not to a corner between 2 values.
To only have the exact RGB values from lookup table (= 1-D texture), also set the texture filters to GL_NEAREST.
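A minimal old-style OpenGL sketch of the 1-D lookup-table approach (hedged: lut is your 256-entry RGB palette, and grayValueAtCorner/cornerX/cornerY are hypothetical helpers standing in for however you compute the gradient values and quad geometry):

```cpp
// Upload the 256-entry lookup table as a 1-D texture.
GLuint lutTex;
glGenTextures(1, &lutTex);
glBindTexture(GL_TEXTURE_1D, lutTex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, lut);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // exact LUT entries only
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

// When drawing the quad, emit the gray value as the 1-D texture coordinate
// instead of as a colour, using the half-texel offset described above.
glEnable(GL_TEXTURE_1D);
glBegin(GL_QUADS);
for (int i = 0; i < 4; ++i) {
    float gray = grayValueAtCorner(i);        // 0..255 gradient value (assumed helper)
    glTexCoord1f((gray + 0.5f) / 256.0f);     // index into the LUT
    glVertex2f(cornerX(i), cornerY(i));       // quad corner positions (assumed helpers)
}
glEnd();
glDisable(GL_TEXTURE_1D);
```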
BTW: Note that if you already need another texture to draw the gradient, then it gets a bit more complicated, because you'd want to treat the values received from one texture as coordinates for another texture - and I believe you'd need pixel shaders for that. Not that shaders are complicated or anything... they are extremely handy when you learn the basics.