Applying a texture without coordinates - C++

Is there any way to apply the texture to an object without specifying texture coordinates?

In fixed-function OpenGL, you can generate texture coordinates by enabling the texture coordinate generation modes. There are a couple of fixed algorithms (sphere mapping, reflection mapping), and there are linear modes that compute each coordinate as a dot product of the vertex position with a user-specified plane, which effectively multiplies the position by a 4x4 matrix to generate the texture coordinate.
In shaders, you can use anything you can algorithmically generate.
However, unless you tell us how you want the texture mapped onto the surface, there's no way to know whether what you want is possible. There is no glTextureMyObject that just does "something"; either explicit texture coordinates must be supplied or some algorithm must generate them.
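For example, here is a minimal fixed-function sketch (legacy/compatibility OpenGL; drawMyObject is a hypothetical stand-in for your own draw call) that enables sphere-map generation so no texture coordinates have to be supplied at all:

#include <GL/gl.h>

void drawMyObject();   // hypothetical: your own draw call, issued without any glTexCoord

void drawWithGeneratedTexcoords()
{
    glEnable(GL_TEXTURE_2D);                              // a 2D texture is assumed to be bound
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);  // derive s,t from the eye-space normal
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);

    // Alternative: GL_OBJECT_LINEAR computes each coordinate as a dot product of the
    // object-space vertex with a user-specified plane, e.g. s = x and t = y of the model:
    //   GLfloat sPlane[4] = { 1, 0, 0, 0 };
    //   glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    //   glTexGenfv(GL_S, GL_OBJECT_PLANE, sPlane);        // and likewise for GL_T
    drawMyObject();
}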

Related

GLSL - testing fragment world space coordinate intersection with geometry texture, and texture modification

I am exploring some GLSL and have something I want to try to implement. Here is the situation:
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is passed to another render pass; is it possible to sample the world-position texture and test whether the current fragment's world-space coordinate matches it?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Any help is greatly appreciated, thanks :)
I have a previously rendered texture which stores only the world-space coordinates of fragments (rgb = xyz). This texture is passed to another render pass; is it possible to sample the world-position texture and test whether the current fragment's world-space coordinate matches it?
An example could be 2 cameras, testing to see if any of the points in 3D space rendered to texture by camera A can also be seen by camera B.
Yes, it is possible. This is essentially a shadow map, but now you'll have to calculate the distances manually during the sampling. It's unclear why you insist on storing the world-space XYZ coordinates and what the use case for this is. It should be much simpler and more efficient to store the depths in a depth texture and use the built-in depth-texture lookup.
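As a rough sketch of how that comparison could look in camera B's fragment shader (the uniform names, the epsilon, and the assumption that camera A's view-projection matrix is available are mine, not from the question):

// Hypothetical fragment shader for the second pass (camera B), kept as a C++ string.
const char* matchFragSrc = R"(
#version 330 core
uniform sampler2D uWorldPosTexA;  // camera A's render target: rgb = world-space xyz
uniform mat4      uViewProjA;     // camera A's view-projection matrix
in  vec3 vWorldPos;               // current fragment's world position, from the vertex shader
out vec4 fragColor;

void main()
{
    // Project the current fragment's world position into camera A's screen space...
    vec4 clipA = uViewProjA * vec4(vWorldPos, 1.0);
    vec2 uvA   = (clipA.xy / clipA.w) * 0.5 + 0.5;

    // ...and compare it with the position camera A actually stored at that texel.
    vec3 storedPos  = texture(uWorldPosTexA, uvA).rgb;
    bool visibleToA = distance(storedPos, vWorldPos) < 0.01;  // hypothetical epsilon

    fragColor = visibleToA ? vec4(0.0, 1.0, 0.0, 1.0) : vec4(1.0, 0.0, 0.0, 1.0);
}
)";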
Also, is it possible to have a texture that can be modified between several different shaders? i.e. having a camera render a texture, then pass that texture to another shader and change it?
Yes. You can render a texture and then use imageLoad and imageStore (and related APIs) in another shader to modify it. You must be careful, however, with feedback loops. Because of the parallel nature of GPUs and their cache-incoherent architecture, it can get complicated, and a detailed answer would depend on the exact thing you're trying to achieve.
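A small sketch of that idea, assuming GL 4.2+ image load/store (the binding point, format, and the "darken" operation are arbitrary placeholders):

// Hypothetical compute shader that modifies a previously rendered texture in place.
const char* modifyCompSrc = R"(
#version 430
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba8, binding = 0) uniform image2D uImage;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    vec4  c = imageLoad(uImage, p);                 // read what the earlier pass rendered
    imageStore(uImage, p, vec4(c.rgb * 0.5, c.a));  // write a modified value back
}
)";
// Host side (sketch): glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);
// then glDispatchCompute(...), and glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT)
// before any pass that reads the texture again.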

Quad texture stretching on OpenGL

So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect. Just like the following:
I saw from this page below that this can be fixed, but the solution given only works for trapezoids. Also, I have to be doing this over many rectangles.
And so, the question is: what is the proper, most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t), then r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 components or not.
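As a sketch of the q trick in fixed-function terms (the per-vertex q values are assumed to have been computed from your quad's geometry, e.g. from the ratio of distances to the intersection of its diagonals):

#include <GL/gl.h>

// Hypothetical: draw one quad with projective (4D) texture coordinates.
void drawProjectiveQuad(const float (*pos)[3], const float* q)
{
    const float s[4] = { 0.0f, 1.0f, 1.0f, 0.0f };
    const float t[4] = { 0.0f, 0.0f, 1.0f, 1.0f };

    glBegin(GL_QUADS);
    for (int i = 0; i < 4; ++i)
    {
        // Scaling s and t by q means the per-fragment divide by the interpolated q
        // reproduces a projective mapping instead of two independent affine triangles.
        glTexCoord4f(s[i] * q[i], t[i] * q[i], 0.0f, q[i]);
        glVertex3fv(pos[i]);
    }
    glEnd();
}

As the answer notes, this only fixes the texture lookup; other attributes such as per-vertex colors will still be interpolated across two triangles.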

Texture Mapping without OpenGL

So I'm supposed to texture map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera); however, I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone able to give me a run-through of how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over the model and assign each triangle "texture coordinates" (though I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
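A rough sketch of that per-pixel step (the struct and function names are hypothetical stand-ins for your own redABC machinery; this mirrors the "fit a screen-space plane per attribute, evaluate it per pixel" pattern described above):

#include <cstdint>

// Hypothetical "ABC" coefficients: attribute(u, v) = a*u + b*v + c in screen space.
struct ABC { float a, b, c; };

// Fit the plane from the triangle's screen-space vertices (u, v) and the attribute's
// value at each vertex -- the same job your redABC/greenABC/blueABC setup does.
ABC fitABC(const float u[3], const float v[3], const float val[3])
{
    float d = (u[1] - u[0]) * (v[2] - v[0]) - (u[2] - u[0]) * (v[1] - v[0]);
    ABC r;
    r.a = ((val[1] - val[0]) * (v[2] - v[0]) - (val[2] - val[0]) * (v[1] - v[0])) / d;
    r.b = ((val[2] - val[0]) * (u[1] - u[0]) - (val[1] - val[0]) * (u[2] - u[0])) / d;
    r.c = val[0] - r.a * u[0] - r.b * v[0];
    return r;
}

// Per pixel (px, py): interpolate s,t exactly like a color, then fetch the texel.
uint32_t shadePixel(int px, int py,
                    const ABC& sABC, const ABC& tABC,
                    const uint32_t* texels, int texW, int texH)
{
    float s = sABC.a * px + sABC.b * py + sABC.c;  // like ssiRed, but kept as a float
    float t = tABC.a * px + tABC.b * py + tABC.c;
    int tx = (int)(s * (texW - 1) + 0.5f);         // scale 0..1 to texel coordinates
    int ty = (int)(t * (texH - 1) + 0.5f);
    return texels[ty * texW + tx];                 // nearest-neighbour lookup
}

Note this is plain screen-space (affine) interpolation, matching the color interpolation. If your course's "model space interpolation" means perspective-correct interpolation, the usual approach is to also interpolate s/w, t/w and 1/w in screen space and divide per pixel.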
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.

An alternative to glDrawPixels in OpenGL 3.0?

So I know that glDrawPixels is deprecated. Is there any function that does the same thing?
I thought of using textures, but they are modified by the current matrix, unlike pixels that are drawn by glDrawPixels.
I thought of using textures, but they are modified by the current matrix
The "current matrix" is deprecated in 3.0 and removed in 3.1+ as well. So if you're not using glDrawPixels, you wouldn't be using matrix functions either. So it's nothing to be concerned about.
You could use a fragment shader where a function of gl_FragCoord is used to sample a rectangular texture.
Alternatively, you could use a more traditional approach and just set up your transformation matrices to approximate the pixel coordinate system of your window and then draw a textured quad with your image.
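A rough sketch of the gl_FragCoord approach mentioned above (the sampler name is a placeholder; this assumes your image is uploaded as an ordinary 2D texture and that you draw any geometry covering the target pixels, e.g. a full-screen triangle):

// Hypothetical fragment shader: fetch the texel directly under the fragment,
// so no texture coordinates and no matrices are involved at all.
const char* blitFragSrc = R"(
#version 330 core
uniform sampler2D uImage;   // the pixels you would have handed to glDrawPixels
out vec4 fragColor;

void main()
{
    // gl_FragCoord.xy is in window pixels, which texelFetch can use directly.
    fragColor = texelFetch(uImage, ivec2(gl_FragCoord.xy), 0);
}
)";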
You need to draw a quad with:
A specific ModelViewProjection Matrix which will place it where you want (as Nicol said, there is no "current" matrix anymore)
A simple vertex shader which will use said Matrix to actually transform the vertices
A simple fragment shader which will sample the texture
And of course, adequate texture coordinates.
For starters, use an identity matrix and a mesh with X and Y coords between 0 and 1 (a sketch of the two shaders follows below).
You might want to use a mix of http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/ and http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-11-2d-text/ (though the latter one should be improved regarding the matrix used)
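A minimal sketch of the shader pair described in the list above (attribute and uniform names are placeholders; start with an identity matrix and positions between 0 and 1 as suggested):

// Hypothetical minimal shaders for the textured quad, kept as C++ strings.
const char* quadVertSrc = R"(
#version 330 core
layout(location = 0) in vec2 aPos;       // quad corners, e.g. (0,0) .. (1,1)
layout(location = 1) in vec2 aTexCoord;  // matching texture coordinates
uniform mat4 uMVP;                       // start with the identity matrix
out vec2 vTexCoord;

void main()
{
    vTexCoord   = aTexCoord;
    gl_Position = uMVP * vec4(aPos, 0.0, 1.0);
}
)";

const char* quadFragSrc = R"(
#version 330 core
uniform sampler2D uImage;
in  vec2 vTexCoord;
out vec4 fragColor;

void main() { fragColor = texture(uImage, vTexCoord); }
)";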

How to create 4-dimensional textures?

EDIT
glTexCoord4f allows you to specify four texture-coordinate dimensions, but how do you create 4-dimensional textures?
The r component is used to specify either the depth in a 3D (volumetric) texture, or the layer in a 2D texture array.
The q component plays the same role as the vertex position's w element: it is used for scaling the perspective divide in projective texturing.
There isn't any real "meaning" to them. If you're using shaders, you can assign them any meaning you want.
For example, in our game: we used the xy for the actual texcoords, the z for which texture to sample from, and the w (4th component) to control the brightness.
There are such things as 3D and 4D textures, which do actually require 3 and 4 texcoords respectively; I suppose that could be the "meaning" of them.
The main reason they exist is that graphics cards work with 4-component vectors. When you pass a 2D texcoord in, it's still a 4-vector behind the scenes (the remaining r and q components simply default to 0.0 and 1.0). OpenGL provides you with the functionality to use them, on the off chance that you might need it.
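As a sketch of that kind of custom use (the array sampler and attribute layout here are my own assumptions, not necessarily what the game quoted above actually did):

// Hypothetical fragment shader giving a 4-component "texcoord" a custom meaning:
// xy = texture coordinates, z = which array layer to sample, w = brightness multiplier.
const char* layeredFragSrc = R"(
#version 330 core
uniform sampler2DArray uTextures;
in  vec4 vTexCoord;    // passed through from a vec4 vertex attribute
out vec4 fragColor;

void main()
{
    vec4 c = texture(uTextures, vec3(vTexCoord.xy, vTexCoord.z));  // z selects the layer
    fragColor = vec4(c.rgb * vTexCoord.w, c.a);                    // w scales brightness
}
)";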
The r component is the 3rd coordinate for GL_TEXTURE_3D (for rendering volumes). I am not familiar with any method that uses the 4th coordinate.
But it seems reasonable to have that available as all homogeneous OpenGL vectors have 4 components.
There is no such thing as a 4-dimensional texture. At least, not without extensions.
The reason glTexCoord4f exists is to allow passing 4 values. In the modern shader-based rendering world, "texture coordinates" don't have to be texture coordinates at all. They're just values the shader uses to do whatever it does.
Many of the texture lookup functions in shaders take more texture coordinate dimensions than the dimensionality of the actual texture. All texture functions for shadow textures take an extra coordinate, which represents the comparison value. All of the Proj texture functions take an extra coordinate, which serves as the homogeneous w the other components are divided by.
In fixed-function land, 4D texture coordinates can be used for projective texturing of 3D textures. So the 4D coordinate is in a homogeneous coordinate system.
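For instance, a sketch of the projective case in a shader, where the extra component is the homogeneous divisor (the projector matrix uniform is an assumption):

// Hypothetical fragment shader projecting a 2D texture onto geometry.
// textureProj divides the coordinate by its last component before sampling, which is
// exactly the role the q coordinate plays in fixed-function projective texturing.
const char* projFragSrc = R"(
#version 330 core
uniform sampler2D uProjectedTex;
uniform mat4      uProjectorMatrix;  // maps world space into the projector's [0,1] range
in  vec3 vWorldPos;
out vec4 fragColor;

void main()
{
    vec4 projCoord = uProjectorMatrix * vec4(vWorldPos, 1.0);  // a 4D texture coordinate
    fragColor = textureProj(uProjectedTex, projCoord);         // divides by projCoord.w
}
)";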