Method to get all points' coordinates in OpenGL

I have a rectangle and a circle. For the circle I have all the points' coordinates, because I compute them myself in order to draw it. The rectangle is drawn using two triangles, so 4 vertices. Both shapes are free to translate and rotate in the plane, and I want to detect when one of them touches the other. I thought this happens when one of the coordinates of one object is equal to one of the coordinates of the other object. The problem is that I don't have an array of all the coordinates of the rectangle. Is there a method in OpenGL that returns all the coordinates covered by the drawn triangles, and not only the vertices?

There is a way to record the coordinates and commands supplied to OpenGL, using the feedback buffer (glRenderMode(GL_FEEDBACK) together with glFeedbackBuffer), but that's a rather inefficient approach, because you would need to parse the tokens written into that buffer afterwards.
If you don't have an array of coordinates, you are already using the most inefficient way to supply geometry to OpenGL:
glBegin(...);
glVertex3f(...);
glVertex3f(...);
...
glVertex3f(...);
glEnd();
The more efficient way is to use a vertex buffer object (VBO), which by its nature requires you to have an array of coordinates. With a large number of vertices, the VBO approach is many times faster than copying vertex by vertex.
OpenGL doesn't store the coordinates you've supplied any longer than it needs them, i.e. until rasterization. The whole goal of OpenGL is to produce an image on screen, not to solve abstract geometry tasks, so keep your own copy of the vertex data for anything like collision detection.
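In practice, the answer to the original question is therefore: keep your own vertex array and hand that same array to OpenGL. A minimal sketch, assuming OpenGL 1.5+ and fixed-function drawing (the name rectVerts is made up for this example):

/* Sketch only: the application keeps the geometry in its own array and
   uploads that same array to a VBO. "rectVerts" is an illustrative name. */
GLfloat rectVerts[4 * 3] = {
    /* x,     y,    z  -- the 4 corners of the rectangle */
    -1.0f, -0.5f, 0.0f,
     1.0f, -0.5f, 0.0f,
     1.0f,  0.5f, 0.0f,
    -1.0f,  0.5f, 0.0f,
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(rectVerts), rectVerts, GL_STATIC_DRAW);

/* Draw from the buffer (legacy fixed-function style, to match the
   glBegin/glEnd snippet above). */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);

/* rectVerts stays available on the CPU side, so the rectangle/circle
   test can be done there. */

As a side note, comparing individual coordinates for exact equality is fragile; a geometric rectangle/circle overlap test on the CPU-side data is more robust.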

Related

Wrap texture around a cylinder using index buffer object?

Context: Since the index buffer object allows me to avoid duplicating vertices, and the texture-coordinate buffer only applies to vertices, I seem unable to wrap the texture properly all the way around the cylinder (no vertices are mapped to the UV coordinate (1, 0), for example). It seems like I need to add an additional set of vertical vertices overlapping the first one, but this messes with my calculation of the vertex normals.
Question: Is there a way to map texture coordinates to indices instead of vertices or something like that, so that the texture completely wraps around the rolled mesh?
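One common way to handle this, and the approach the question itself hints at, is to add a single duplicated seam column: the extra vertices repeat the first column's position and normal but carry u = 1.0, so the normals stay correct and the texture can reach the (1, v) coordinates. A rough sketch, with names that are purely illustrative and not from the question's code:

#include <math.h>

/* Sketch: cylinder side vertices with a duplicated seam column.
   Column i == SEGMENTS repeats column 0's position and normal but has
   u = 1.0, so the texture wraps all the way around. */
enum { SEGMENTS = 32 };
const float radius = 1.0f, height = 2.0f;
float verts[(SEGMENTS + 1) * 2 * 8];   /* per vertex: 3 pos + 3 normal + 2 uv */
int n = 0;

for (int i = 0; i <= SEGMENTS; ++i) {
    float u  = (float)i / (float)SEGMENTS;        /* 0.0 .. 1.0            */
    float a  = u * 2.0f * 3.14159265f;            /* angle around the axis */
    float nx = cosf(a), nz = sinf(a);
    for (int row = 0; row < 2; ++row) {           /* bottom ring, top ring */
        verts[n++] = nx * radius;                 /* position              */
        verts[n++] = row ? height : 0.0f;
        verts[n++] = nz * radius;
        verts[n++] = nx; verts[n++] = 0.0f; verts[n++] = nz;   /* normal   */
        verts[n++] = u;  verts[n++] = (float)row;              /* uv       */
    }
}
/* The index buffer then stitches columns i and i+1 into quads; columns 0
   and SEGMENTS are distinct vertices even though they share a position. */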

Texture Mapping without OpenGL

So I'm supposed to Texture Map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera), however I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model, and assign each triangle "texture coordinates" (which I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program doing the following:
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms) do I texture map a model without using OpenGL.
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors, sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
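A self-contained illustration of those two steps, interpolating s,t across the triangle and fetching the texel, might look like the sketch below. It uses barycentric weights rather than the ABC vectors referenced above (the two are equivalent for affine, screen-space interpolation), and all names are made up:

#include <stdint.h>

/* Sketch of per-pixel texture mapping in a software rasterizer.
   Affine (screen-space) interpolation only; perspective-correct mapping
   would additionally divide by the interpolated 1/w. */
typedef struct { float x, y; float s, t; } Vertex2D;

static uint32_t sample_texture(const uint32_t *texels, int texW, int texH,
                               float s, float t)
{
    if (s < 0.0f) s = 0.0f; if (s > 1.0f) s = 1.0f;   /* clamp overshoot   */
    if (t < 0.0f) t = 0.0f; if (t > 1.0f) t = 1.0f;
    int tx = (int)(s * (texW - 1));
    int ty = (int)(t * (texH - 1));
    return texels[ty * texW + tx];                    /* nearest-neighbour */
}

/* Colour of one pixel (px, py) inside triangle a, b, c. */
static uint32_t shade_pixel(Vertex2D a, Vertex2D b, Vertex2D c,
                            float px, float py,
                            const uint32_t *texels, int texW, int texH)
{
    /* Barycentric weights of (px, py) with respect to the triangle. */
    float den = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y);
    float wa  = ((b.y - c.y) * (px - c.x) + (c.x - b.x) * (py - c.y)) / den;
    float wb  = ((c.y - a.y) * (px - c.x) + (a.x - c.x) * (py - c.y)) / den;
    float wc  = 1.0f - wa - wb;

    /* Interpolate s,t exactly like the colour components. */
    float s = wa * a.s + wb * b.s + wc * c.s;
    float t = wa * a.t + wb * b.t + wc * c.t;

    return sample_texture(texels, texW, texH, s, t);
}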
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.

How to clip texture with arbitrary shape?

I am rendering complex 3d objects. Here is a simple example with a sphere-like object:
Next I am applying a clipping plane to these objects and rendering a texture on this plane, giving the impression you are looking at the inside of the object, as if it was sliced. For example:
The problem is the jagged edge of the texture. It sticks out past the boundary of the surface. Here's another angle where you can see it sticking out. The surface and the texture both derive from the same source data, but the surface is smoothed and has a higher resolution than the texture.
What I want is to be able to somehow clip the texture, so that it never sticks out past the boundary of the surface. Also, I don't want to simply scale down the texture, since although this might prevent it from sticking outside, it would create interior gaps between the texture edge and the surface edge. I would rather the texture be a little too big and have it clipped so that it sits flush against the edge of the surface.
Here's where I am:
I figured the first step would be to define the intersection of the plane and the surface. So now I have that, as an ordered list of line segments. However, I'm not sure how to proceed with this info (or if this is even the best approach).
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape and draw this into a stencil buffer. Then apply this when drawing the texture. (Although I think it's a lot of work since the shapes can be complicated.)
I am wondering if I can somehow use the already drawn surface (in conjunction with a stencil buffer or some other technique) to somehow clip the texture -- without having to go through the extra trouble of deriving the intersection line, etc.
What's the best approach here? (Any online examples you can point me to would also be really helpful.)
If you're clipping convex objects and know the coordinates of the clipped points, you can create the polygonal "cap" yourself: just draw the clipped points in the proper order using GL_TRIANGLE_FAN, and that's it. This won't work with a non-convex object; that would require a triangulation algorithm. You could use the GLU tessellator to triangulate such polygons, but that can be tricky.
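For the convex case that can be as short as the following sketch (immediate mode, to match the rest of this page; capPoints and numCapPoints are made-up names for your ordered intersection points):

/* Sketch: fill the convex cross-section with a triangle fan.
   capPoints is assumed to hold the plane/surface intersection points,
   ordered around the boundary (e.g. counter-clockwise). */
glBegin(GL_TRIANGLE_FAN);
for (int i = 0; i < numCapPoints; ++i)
    glVertex3f(capPoints[i].x, capPoints[i].y, capPoints[i].z);
glEnd();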
If the clipped area can be defined by a formula, you can write a shader that will precisely clip pixels beyond a certain distance (e.g. if x^2 + y^2 + z^2 > r^2, don't draw the pixel).
You could also draw the back-facing faces with a shader that renders every back-facing pixel as if it were on the clip plane, using simple raytracing. That's complicated and might be overkill in your case; Dead Rising used a similar technique in its engine.
You can also use the stencil buffer.
Draw the back-facing faces first with GL_INCR (glStencilOp(GL_KEEP, GL_INCR, GL_INCR)), then draw the front-facing surfaces with GL_DECR (glStencilOp(GL_KEEP, GL_DECR, GL_DECR)). Then draw the texture only where the stencil value is non-zero (glStencilFunc(GL_NOTEQUAL, 0, 0xff); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);). If you have many overlapping shapes, however, you'll need to take special care of them.
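A hedged sketch of that stencil pass, assuming the context was created with a stencil buffer and that drawSolid() and drawCapTexture() stand in for your own drawing code:

/* Pass 1: mark the "open" region in the stencil buffer. Back faces
   increment, front faces decrement, so where the clip plane exposes the
   inside of the object the stencil ends up non-zero. */
glClear(GL_STENCIL_BUFFER_BIT);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* stencil-only pass */
glDepthMask(GL_FALSE);

glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);                        /* draw back-facing faces    */
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
drawSolid();

glCullFace(GL_BACK);                         /* draw front-facing faces   */
glStencilOp(GL_KEEP, GL_DECR, GL_DECR);
drawSolid();

/* Pass 2: draw the cap texture only where the stencil is non-zero. */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
glStencilFunc(GL_NOTEQUAL, 0, 0xff);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawCapTexture();

glDisable(GL_STENCIL_TEST);
glDisable(GL_CULL_FACE);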
--edit--
However, I'm not sure how to proceed with this info (or if this is even the best approach).
Draw it as a triangle fan. For convex objects, that's all you need. For non-convex objects that won't work.
I've been reading up on stencil buffers. One approach might be to turn the intersection line into a 2d shape
No, it won't work like that. The region you want to fill with the texture has to hold a certain stencil value; that's how stencil clipping works.
to somehow clip the texture
In OpenGL you have at least 6 user clip planes (GL_MAX_CLIP_PLANES is guaranteed to be >= 6). If you need more than that, you'll need more advanced techniques: stencil, deriving the intersection line, shaders, or triangulation.
Any online examples you can point me to would also be really helpful
Drawing Filled, Concave Polygons Using the Stencil Buffer

OpenGL: Multi-texturing an array of "linked" quads

I recently completed my system for loading an array of quads into VBOs. This system allows quads to share vertices in order to save a substantial amount of memory. For example, an array of 100x100 quads would use 100x100x4=40000 vertices normally (4 vertices per quad), but with this system, it would only use 101x101=10201 vertices. That is a huge amount of space saving when you get into even larger scales.
My problem is that in order to texture each quad individually, each vertex needs a "UV" coordinate pair (or "ST" coordinate) to map one part of the texture to. This leads to the problem: how do I texture each quad independently of the others? Even if two identically textured quads are next to each other, I cannot use the same texture coordinates for both quads. This is illustrated below:
*Each quad being 16x16 pixels in dimension and the texture coordinates having a range of 0 to 1.
To make things even more complicated, some quads in the array might not even be there (because that part of the terrain is just an empty block). So as you might have guessed, this is for a rendering engine for those 2D tile games everyone is trying to make.
Is there a way to texture the quads while keeping this vertex-sharing technique, or will I have to scrap this method and use the far less efficient one?
You can't.
A vertex in OpenGL is a collection of data. It may contain a position, but it also contains texture coordinates and/or other attributes. Every vertex, every combination of position/texture coordinate/etc., must be unique. So if you need to pair the same position with different texture coordinates, then you have different vertices.
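To make that concrete, here is an illustrative sketch (not from the question; GRID_W, lookupTileUV and the other names are made up) of building the vertex array with 4 vertices per quad, so each quad can carry its own UV rectangle even where positions are shared with a neighbour:

/* Sketch: per-quad texturing forces per-quad vertices. A shared corner
   position is repeated once per adjacent quad because the UVs differ.
   lookupTileUV() is an assumed helper returning the quad's UV rectangle
   inside a texture atlas. */
typedef struct { float x, y, u, v; } QuadVertex;
void lookupTileUV(int qx, int qy, float *u0, float *v0, float *u1, float *v1);

enum { GRID_W = 100, GRID_H = 100 };
static QuadVertex verts[GRID_W * GRID_H * 4];   /* 4 vertices per quad */

void buildQuadVertices(void)
{
    int n = 0;
    for (int qy = 0; qy < GRID_H; ++qy) {
        for (int qx = 0; qx < GRID_W; ++qx) {
            float u0, v0, u1, v1;
            lookupTileUV(qx, qy, &u0, &v0, &u1, &v1);

            verts[n++] = (QuadVertex){ (float)qx,     (float)qy,     u0, v0 };
            verts[n++] = (QuadVertex){ (float)qx + 1, (float)qy,     u1, v0 };
            verts[n++] = (QuadVertex){ (float)qx + 1, (float)qy + 1, u1, v1 };
            verts[n++] = (QuadVertex){ (float)qx,     (float)qy + 1, u0, v1 };
        }
    }
}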

Placing multiple images on a 3D surface

If I was to place a texture on the surface of a 3D object, for example a cube, I could use the vertices of that cube to describe the placement of this texture.
But what if I want to place multiple separate images on the same flat surface? Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface. I want the actual images to be chosen and placed dynamically at runtime, otherwise I could condense them offline as a single texture.
I have an approach but I want to seek advice as to whether there is a better method, or if this is perfectly acceptable:
My guess is to create multiple separate 2D quads (with a depth of 0), each with its own texture applied to it (they could of course share a texture atlas and use different texture coordinates).
Then, I transform these quads such that they appear to be on the surface of a 3D object, such as a cube. Of course I'd have to maintain a matrix hierarchy so these quads are transformed appropriately whenever the cube is transformed, such that they appear to be attached to the cube.
While this isn't necessarily hard, I am new to texturing and would like to know if this is a normal practice for something like this.
You could try rendering a scene, saving it as a texture, and then using that texture on the surface.
Check out glCopyTexImage2D() or glCopyTexSubImage2D().
Or perhaps try using frame buffer objects.
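A minimal render-to-texture sketch using a framebuffer object (requires OpenGL 3.0+ or the ARB/EXT_framebuffer_object equivalents; the 512x512 size and the handle names are made up for illustration):

/* Sketch: render into a texture via an FBO, then use that texture on the
   cube face like any other texture. */
GLuint colorTex, fbo;

glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
    glViewport(0, 0, 512, 512);
    /* ... draw the images to be composed here; they land in colorTex ... */
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);   /* back to the default framebuffer */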
But what if I want to place multiple separate images on the same flat surface?
Use multiple textures, maybe each with its own set of texture coordinates. Your OpenGL implementation will offer you a number of texture units. Each of them can supply a different texture.
glActiveTexture(GL_TEXTURE0 + i);
glBindTexture(…);
glUniform1i(texturesampler[i], i); // texturesampler[i] contains the sampler uniform location of the bound program.
Or suppose it is just one image, but I don't want it to appear at the edges of the surface, where the vertices are, but rather somewhere small and in the middle of the surface.
That's where GL_CLAMP… texture wrap modes get their use.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_{S,T,R}, GL_CLAMP[_TO_{EDGE,BORDER}]);
With those, you specify texture coordinates at the vertices that lie outside the [0, 1] interval, but instead of repeating, the image will show only once, with only the edge pixels repeated. If you make the edge pixels transparent, it's as if there were no image there.
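A small sketch of that trick, assuming GL_CLAMP_TO_BORDER with a fully transparent border colour and alpha blending (imageTex is a made-up texture handle):

/* Sketch: show the image only in the middle of the face. UVs outside
   [0, 1] sample the transparent border colour instead of the image. */
const GLfloat transparentBorder[4] = { 0.0f, 0.0f, 0.0f, 0.0f };

glBindTexture(GL_TEXTURE_2D, imageTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, transparentBorder);

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

/* Give the face UVs wider than [0, 1]; e.g. a range of -2..3 leaves the
   image covering only the middle fifth of the face. */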