I am doing a project where I want to have a vertex buffer (in OpenGL) containing vertices that make up a mesh of an image, meaning that each pixel of the image consists of two triangles (a square per pixel). I think I have achieved that by simply initializing a window with the size of the image and then having a VBO with a vertex grid also of the image size (a grid of width by height).
For this image I also have a disparity/correspondence map (a vector field) which I want to interpolate and use to deform this image mesh (deform the image/vertex grid). The idea comes from this article http://graphics.tu-bs.de/media/publications/stich08VTI.pdf (section 5), which describes essentially what I want to do.
I want to have an image represented as a mesh and deform it by a vector field to obtain a new virtual view. How can this easily be done? I can't fully grasp how I am supposed to move the vertices in the vertex shader.
Firstly, the correspondences in the vector field are per pixel, but I can only move vertices, and one vertex is shared by neighbouring pixels, so how do I deal with this (moving pixels)?
Secondly, is the phrase "per-vertex mesh deformation" just moving the vertices in the vertex shader by some offsets (in this case the vector field)?
And thirdly, if I manage to deform the mesh, how do I sample the original image correctly to get the "new view"? Do I just deform a set of UV coordinates by the same vector field as the image mesh and then sample the original image as a texture in the fragment shader?
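For what it's worth, a minimal sketch of one common way to set this up (all names here are placeholders, not from the paper): the vector field is uploaded as an image-sized texture, each grid vertex is displaced by it in the vertex shader, and the vertex's original, undeformed texture coordinate is kept for sampling the source image in the fragment shader.

    // vertex shader
    #version 330 core
    layout(location = 0) in vec2 aGridPos;   // grid vertex position, in pixels

    uniform sampler2D uVectorField;          // disparity/correspondence map, one texel per pixel
    uniform vec2 uImageSize;                 // image width and height in pixels

    out vec2 vUV;                            // undeformed texture coordinate

    void main() {
        vec2 uv = aGridPos / uImageSize;
        vec2 offset = textureLod(uVectorField, uv, 0.0).xy;  // displacement in pixels
        vec2 deformedPos = aGridPos + offset;                // move the vertex, not the UV
        vUV = uv;
        gl_Position = vec4(deformedPos / uImageSize * 2.0 - 1.0, 0.0, 1.0);
    }

    // fragment shader
    #version 330 core
    in vec2 vUV;
    uniform sampler2D uImage;                // the original image
    out vec4 fragColor;

    void main() {
        // Sampling with the undeformed UV drags the original colors along with the moved vertices.
        fragColor = texture(uImage, vUV);
    }

Since the grid has roughly one vertex per pixel, each vertex can simply take the vector-field value at its own position, and the rasterizer interpolates the deformation across each triangle.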
I've seen other questions about only drawing fragments on the triangle edges using barycentric coordinates, but I need more than that, and I wonder whether there's another approach.
This is basically a shadow map render, and I want to write some additional results to the FBO color attachment (namely, the equation of the plane through the light origin and the edge vertices).
I can easily do this via a geometry shader converting triangles to lines, but it's not pixel-exact with the triangle edge, and it also causes depth fighting that I can't accept.
I was hoping for a fragment shader trick that lets me render the triangles normally and still access the edge vertex coordinates there.
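For reference, the barycentric approach those other questions describe can be sketched roughly like this (names are placeholders): a pass-through geometry shader keeps the full triangles, so the depth output matches a normal shadow-map pass (no depth fighting), and it attaches a barycentric coordinate plus the three corner positions as flat outputs for the fragment shader.

    #version 330 core
    layout(triangles) in;
    layout(triangle_strip, max_vertices = 3) out;

    in vec3 vWorldPos[];          // per-vertex world positions from the vertex shader (placeholder)

    out vec3 gBary;               // barycentric coordinate, interpolated per fragment
    flat out vec3 gEdgeVerts[3];  // the three triangle corners, constant over the triangle

    void main() {
        const vec3 bary[3] = vec3[3](vec3(1.0, 0.0, 0.0),
                                     vec3(0.0, 1.0, 0.0),
                                     vec3(0.0, 0.0, 1.0));
        for (int i = 0; i < 3; ++i) {
            gBary = bary[i];
            gEdgeVerts[0] = vWorldPos[0];
            gEdgeVerts[1] = vWorldPos[1];
            gEdgeVerts[2] = vWorldPos[2];
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }

In the fragment shader, a fragment whose min(gBary.x, min(gBary.y, gBary.z)) falls below a small threshold (e.g. scaled by fwidth(gBary) for roughly pixel-accurate edge width) lies on a triangle edge, and gEdgeVerts together with the light origin provide the data for the plane equation; other fragments just write the ordinary shadow-map result.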
So I'm supposed to texture map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera). However, I'm not allowed to use OpenGL, and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only OpenGL function we can use).
Is anyone here able to give me a run-through of how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model and assign each triangle "texture coordinates" (though I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What exactly are texture coordinates?
3. How, at a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors, redABC, greenABC and blueABC, that are used to interpolate the colors across the triangle; each of the three vectors interpolates one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors, sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texture's width in texels and ssit by its height in texels, and use those coordinates to fetch the texel. Then just put that on the screen.
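For reference, one way to write that interpolation down, assuming per-pixel weights w0, w1, w2 for the pixel inside the triangle (the same weights the existing color interpolation produces) and per-vertex coordinates (s0,t0), (s1,t1), (s2,t2):

    s = w0*s0 + w1*s1 + w2*s2
    t = w0*t0 + w1*t1 + w2*t2
    u = s * textureWidth
    v = t * textureHeight

The texel at (u, v), truncated to integers and clamped to the valid index range (to guard against s or t being exactly 1.0), is the color to put on screen for that pixel.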
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
Is there a way to manipulate the field of view of the camera when the camera is at two different positions in world space?
For example, in the first position, multiple mesh parts are transformed in different directions across the origin (where the camera looks) until they form one mesh. This is done by calling glm::translate(), glm::rotate(), etc. before loading the vertices into the VBO.
In the second position, I want to transform the whole mesh (from above). Since I already loaded everything needed into the VBO and my models are drawn, I can't draw my new transformed mesh. Is there a way to draw my new transformed mesh without loading the vertices into the VBO again?
And, if I do have to load my VBO again, how do I go about it, since loading the VBO depends on how many parts the mesh is divided into?
Loading the vertex data into VBOs and transforming it have nothing to do with each other. Functions like glm::translate() or glm::rotate() will not have any influence on the buffer contents; they only change a matrix. As long as you do not apply some transformation to your vertex data when you upload it to the buffer, nothing else does. Typically, those transformation matrices are applied when drawing the objects (by the vertex shader, on the GPU), so one can have moving objects and a moving camera without having to respecify the geometry data.
So in your case, it will be enough to just change the projection and view matrices and draw the objects again with those new matrices applied.
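For illustration, a minimal vertex shader sketch (uniform and attribute names are placeholders): the positions in the VBO are never modified; the glm matrices are uploaded as uniforms (e.g. via glUniformMatrix4fv) and can be changed between draw calls.

    #version 330 core
    layout(location = 0) in vec3 aPosition;   // raw vertex data straight from the VBO

    // These can change per draw call; the buffer contents never do.
    uniform mat4 uModel;
    uniform mat4 uView;
    uniform mat4 uProjection;

    void main() {
        gl_Position = uProjection * uView * uModel * vec4(aPosition, 1.0);
    }

Drawing the same VBO a second time with a different uModel (or a different uView/uProjection for the second camera position) gives the new transformed view without re-uploading any vertex data.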
I have an issue with applying noise over the surface of a non-trivial mesh (well, any mesh) in OpenGL without texture coordinates. I basically want a noise texture applied over the surface, but since I don't have texture coordinates I can't just apply a noise texture. Generating texture coordinates in the vertex shader works to an extent; however, whether I use cube, sphere, or object planar coordinates, there is always some texture smearing.
Smearing with cube map coordinates across surface changes: http://img811.imageshack.us/img811/3923/0ouu.png
Smearing with object planar (xy) coordinates along the z plane: http://img195.imageshack.us/img195/987/c3cz.png
I've done random noise generation in the fragment shader; however, as this changes every frame, it is not what I need (and it's not computationally cheap either).
I just need a static uniform distribution of noise across the mesh surface.
Anybody got any ideas on how this could be done?
You could take the 3D model-space coordinates of each fragment in the fragment shader and evaluate some 3D noise based on those values.
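For illustration, a minimal sketch of that idea, assuming the vertex shader passes the untransformed model-space position through as vModelPos (names and the particular noise function are placeholders, not from the original post):

    #version 330 core
    in vec3 vModelPos;            // untransformed model-space position from the vertex shader
    out vec4 fragColor;

    // A simple hash-based 3D value noise; any 3D noise (e.g. simplex) can be substituted.
    float hash3(vec3 p) {
        return fract(sin(dot(p, vec3(127.1, 311.7, 74.7))) * 43758.5453);
    }

    float valueNoise(vec3 p) {
        vec3 i = floor(p);
        vec3 f = fract(p);
        vec3 u = f * f * (3.0 - 2.0 * f);            // smooth interpolation weights
        float n000 = hash3(i + vec3(0.0, 0.0, 0.0)), n100 = hash3(i + vec3(1.0, 0.0, 0.0));
        float n010 = hash3(i + vec3(0.0, 1.0, 0.0)), n110 = hash3(i + vec3(1.0, 1.0, 0.0));
        float n001 = hash3(i + vec3(0.0, 0.0, 1.0)), n101 = hash3(i + vec3(1.0, 0.0, 1.0));
        float n011 = hash3(i + vec3(0.0, 1.0, 1.0)), n111 = hash3(i + vec3(1.0, 1.0, 1.0));
        return mix(mix(mix(n000, n100, u.x), mix(n010, n110, u.x), u.y),
                   mix(mix(n001, n101, u.x), mix(n011, n111, u.x), u.y), u.z);
    }

    void main() {
        // Model-space positions never change between frames, so the pattern is static.
        float n = valueNoise(vModelPos * 8.0);       // 8.0 = noise frequency, tune to taste
        fragColor = vec4(vec3(n), 1.0);
    }

Because the noise input is the 3D position itself rather than a projected 2D coordinate, there is no projection direction that can degenerate, which is what causes the smearing.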
What I'd like to do:
I have a 3D-transformed, UV-mapped object with a white texture, as well as a screen-space image.
I want to bake the screen-space image into the texture of the object, such that its 3D-transformed representation on screen exactly matches the screen-space image (so I want to project the image into the object's UV space).
I'd like to do this with image load/store. I imagine it as:
1st pass: render the transformed 3D object's UV coordinates into an offscreen texture.
2nd pass: render a screen-sized quad. For each pixel, check the value of the texture rendered in the first pass; if there are valid texture coordinates there, look up the screen-space image using the quad's own UV coordinates and write that texel color via image load/store into the object's texture, using the UV coordinates read from the first-pass texture as the index.
As I have never worked with this feature before, I'd like to ask whether someone who has already worked with it considers this feasible, and whether there are already some examples that do something in this direction.
Your proposed way is certainly one method to do it, and actually it's quite common. The other way is to do a back projection from screen space to texture space. It's not as hard as it might sound at first. Basically, for each triangle you have to find the transformation of the tangent-space vectors (UV) on the model's surface to their screen counterparts. In addition to that, transform the triangle itself to find the boundaries of the screen-space triangle in the picture. Then you invert that projection.
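As a rough sketch of the second pass described in the question (not a tested implementation): assume the first pass wrote the object's UVs into the RG channels of a texture and a coverage flag into its alpha channel; all names, bindings and formats below are placeholders.

    #version 430 core
    in vec2 vScreenUV;      // the screen-sized quad's own 0..1 coordinate

    layout(binding = 0) uniform sampler2D uUVPass;       // pass 1 result: RG = object UV, A = coverage
    layout(binding = 1) uniform sampler2D uScreenImage;  // the screen-space image to bake
    layout(binding = 0, rgba8) writeonly uniform image2D uBakedTexture;  // the object's texture

    out vec4 fragColor;

    void main() {
        vec4 uvSample = texture(uUVPass, vScreenUV);
        if (uvSample.a > 0.5) {                          // a valid UV was written here in pass 1
            vec4 color = texture(uScreenImage, vScreenUV);
            ivec2 texel = ivec2(uvSample.rg * vec2(imageSize(uBakedTexture)));
            imageStore(uBakedTexture, texel, color);
        }
        fragColor = vec4(0.0);                           // this pass's color attachment is unused
    }

A glMemoryBarrier() call (e.g. with GL_TEXTURE_FETCH_BARRIER_BIT) is needed before sampling the baked texture afterwards, and object texels that are not visible on screen simply stay untouched.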