I've spent a few days looking for how to do this, but I'm stuck. If someone can point me to a resource that would be useful, I'll do the rest myself; I don't want anyone to write all the code for me, only to guide me on how to do it, because I think it is possible.
My intention is to keep a texture on the GPU, take selections (parts) of that texture by passing in the size of the region I want, and draw them on a quad or mesh in libgdx.
I can create a multitexture using vertex and fragment shaders, but I don't know how to take parts of one texture, put them in another texture, and change which parts should be drawn.
But maybe this is not the right way to do what I want.
Below, I show an image so you can understand me better:
(1) the original texture on the GPU;
(2) the parts that would be drawn, depending on the coordinates that are passed;
(3) the result that would be shown.
If I understood your intent correctly, I think what you want is simply texture coordinate or UV mapping. Assuming you have the texture (1), you would then draw four quads, each of them using different texture coordinates to access the multitexture. For example, looking at the top image of (3),
the top-left and top-right quads would use texture coordinates from [0.0, 0.0] to [0.5, 0.5] to access the black area,
the bottom-left quad would use texture coordinates from [0.5, 0.5] to [1.0, 1.0] to access the red area, and
the bottom-right quad would use texture coordinates from [0.5, 0.0] to [1.0, 0.5] to access the blue area.
If you want, you can create a function that maps the "atlas frame index" 1, 2, 3 or 4 to the correct texture coordinates to make drawing easier.
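As a concrete illustration, here is a minimal sketch of such a mapping function (plain C++; frameUV is a hypothetical helper, and the 2x2 layout is an assumption based on image (1)):

#include <cstdio>

// Texture coordinates of one frame inside an atlas.
struct UVRect { float u0, v0, u1, v1; };

// Map an atlas frame (1..4 as in the text, row-major) to its UV rectangle,
// assuming a 2x2 grid; other layouts just change cols/rows.
UVRect frameUV(int frame, int cols = 2, int rows = 2) {
    int i = frame - 1;                     // convert to 0-based index
    float w = 1.0f / cols, h = 1.0f / rows;
    int cx = i % cols, cy = i / cols;
    return { cx * w, cy * h, (cx + 1) * w, (cy + 1) * h };
}

int main() {
    // Frame 4 covers [0.5, 0.5] to [1.0, 1.0].
    UVRect r = frameUV(4);
    std::printf("u: %g..%g  v: %g..%g\n", r.u0, r.u1, r.v0, r.v1);
}

In libgdx specifically, the TextureRegion class encapsulates exactly this idea: a sub-rectangle of a texture expressed as UV coordinates.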
I know that I must call one of the following before each call to glVertex:
glTexCoord(0,0);
glTexCoord(0,1);
glTexCoord(1,1);
glTexCoord(1,0);
But I have no idea what they mean. I know, however, that if I multiply (or is that divide?) the right side (or is it all the ones?) by two, my texture expands, and if I do the opposite, my texture repeats twice. I've managed to code a texture atlas by applying operations until it worked. But I have no proper idea about what's going on. Why does dividing these coordinates affect the image and why does reversing them mirror it? How do texture coordinates work?
Texture coordinates specify the point in the texture image that will correspond to the vertex you are specifying them for. Think of a rectangular rubber sheet with your texture image printed on it, where the length of each side is normalized to the range 0-1. Now let's say you wanted to draw a triangle using that texture. You'd take 3 pins and place them in the rubber sheet in the positions of each of your desired texture coordinates. (Say [0, 0], [1, 0] and [1, 1]) then move those pins (without taking them out) to your desired vertex coordinates (Say [0, 0], [0.5, 0] and [1, 1]), so that the rubber sheet is stretched out and the image is distorted. That's basically how texture coordinates work.
If you use texture coordinates greater than 1 and your texture is set to repeat, then it's as if the rubber sheet was infinite in size and the texture was tiled across it. Therefore if your texture coordinates for two vertices were 0, 0 and 4, 0, then the image would have to be repeated 4 times between those vertices.
b1nary.atr0phy's image, for all you visual thinkers!
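To make the repeat case concrete, here is a minimal legacy-OpenGL sketch in the same immediate-mode style as the question (texture creation and binding omitted):

// Tile the bound texture 4 times horizontally across one quad.
// Requires GL_TEXTURE_WRAP_S set to GL_REPEAT (the default).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
glTexCoord2f(4.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
glTexCoord2f(4.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();

Halving the coordinates instead (0 to 0.5) maps only part of the image across the same quad, which is the "expanding" effect described in the question.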
OpenGL uses inverse texturing. It takes coordinates from world space (X,Y,Z) to texture space (X,Y) to discrete space (U,V), where the discrete space is in the [0,1] domain.
Take a polygon, think of it as a sheet of paper. With this:
glTexCoord(0,0);
glTexCoord(0,1);
glTexCoord(1,1);
glTexCoord(1,0);
You tell OpenGL to draw on the whole sheet of paper. When you modify the texture coordinates, the mapped region of your texturing space changes accordingly. That is why, for example, halving the coordinates stretches the texture: you tell OpenGL to map only half of the image onto the whole sheet of paper, while doubling them (with repeat wrapping) makes the same texture appear twice.
Chapter 9 of the Red Book explains this in detail and is available for free online.
http://www.glprogramming.com/red/chapter09.html
Texture coordinates map x,y into the 0-1 range of the texture's width and height. The texture is then stretched like a rubber sheet over the triangles. It is best explained with pictures, and the Red Book does this.
For 2D image textures, 0,0 in texture coordinates corresponds to the bottom left corner of the image, and 1,1 in texture coordinates corresponds to the top right corner of the image. Note that "bottom left corner of the image" is not at the center of the bottom left pixel, but at the edge of the pixel.
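For example, under this convention the center of texel (i, j) in a W x H texture sits at the following coordinates (an illustrative formula, not code from any of the answers):

// Center of texel (i, j) in a W x H texture:
float u = (i + 0.5f) / W;
float v = (j + 0.5f) / H;
// e.g. the bottom-left texel of a 4x4 texture is centered at (0.125, 0.125),
// while the corner of the image itself is exactly (0.0, 0.0).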
Also interesting when uploading images:
8.5.3 Texture Image Structure
The texture image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice; and depth slices are stacked from back to front.
Note that most image formats have the data start at the top, not at the bottom row.
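If your image loader hands you top-row-first data, one option is to flip the rows before uploading. A minimal sketch, assuming tightly packed 8-bit RGBA data from your own loader:

#include <algorithm>
#include <cstdint>
#include <vector>

// Flip an 8-bit RGBA image vertically so the first row in memory becomes
// the bottom row, matching OpenGL's texture image convention.
void flipRows(std::vector<std::uint8_t>& pixels, int width, int height) {
    const int stride = width * 4; // bytes per RGBA row
    for (int y = 0; y < height / 2; ++y) {
        std::swap_ranges(pixels.begin() + y * stride,
                         pixels.begin() + (y + 1) * stride,
                         pixels.begin() + (height - 1 - y) * stride);
    }
}

Alternatively, many people simply flip the T coordinate (t' = 1 - t) and leave the pixel data as-is.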
So when drawing a rectangle on OpenGL, if you give the corners of the rectangle texture coordinates of (0,0), (1,0), (1,1) and (0, 1), you'll get the standard rectangle.
However, if you turn it into something that's not rectangular, you'll get a weird stretching effect, with the texture visibly broken along the quad's diagonal.
I saw from the page below that this can be fixed, but the solution given only works for trapezoids. Also, I have to do this over many quads.
So the question is: what is the proper, most efficient way to get the right "4D" texture coordinates for drawing stretched quads?
Implementations are allowed to decompose quads into two triangles and if you visualize this as two triangles you can immediately see why it interpolates texture coordinates the way it does. That texture mapping is correct ... for two independent triangles.
That diagonal seam coincides with the edge of two independently interpolated triangles.
Projective texturing can help as you already know, but ultimately the real problem here is simply interpolation across two triangles instead of a single quad. You will find that while modifying the Q coordinate may help with mapping a texture onto your quadrilateral, interpolating other attributes such as colors will still have serious issues.
If you have access to fragment shaders and instanced vertex arrays (probably rules out OpenGL ES), there is a full implementation of quadrilateral vertex attribute interpolation here. (You can modify the shader to work without "instanced arrays", but it will require either 4x as much data in your vertex array or a geometry shader).
Incidentally, texture coordinates in OpenGL are always "4D". It just happens that if you use something like glTexCoord2f (s, t) that r is assigned 0.0 and q is assigned 1.0. That behavior applies to all vertex attributes; vertex attributes are all 4D whether you explicitly define all 4 of the coordinates or not.
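For reference, here is a minimal sketch of building such 4D coordinates for a convex quad. The diagonal-intersection weighting below is one known way to derive the q values; it is not necessarily what the linked implementation does, and computeQuadSTQ and Vec2 are hypothetical names:

#include <cmath>

struct Vec2 { float x, y; };

// Compute (s*q, t*q, 0, q) per vertex so that linear interpolation followed
// by the per-fragment divide by q yields a projective mapping across the
// whole quad instead of two independently interpolated triangles.
// p[] are the quad corners in order, uv[] their 2D texture coordinates.
void computeQuadSTQ(const Vec2 p[4], const Vec2 uv[4], float out[4][4]) {
    // Intersect the diagonals p0-p2 and p1-p3.
    float ax = p[2].x - p[0].x, ay = p[2].y - p[0].y;
    float bx = p[3].x - p[1].x, by = p[3].y - p[1].y;
    float cx = p[1].x - p[0].x, cy = p[1].y - p[0].y;
    float denom = ax * by - ay * bx;        // 0 => degenerate quad
    float t = (cx * by - cy * bx) / denom;  // parameter along p0-p2
    Vec2 center = { p[0].x + t * ax, p[0].y + t * ay };

    for (int i = 0; i < 4; ++i) {
        int j = (i + 2) % 4;                // opposite corner
        float di = std::hypot(p[i].x - center.x, p[i].y - center.y);
        float dj = std::hypot(p[j].x - center.x, p[j].y - center.y);
        float q = (dj > 0.0f) ? (di + dj) / dj : 1.0f;
        out[i][0] = uv[i].x * q;
        out[i][1] = uv[i].y * q;
        out[i][2] = 0.0f;
        out[i][3] = q;
    }
}

With glTexCoord4f the fixed-function pipeline divides s and t by q during interpolation; in a shader you can use textureProj for the same effect. Remember the answer's caveat that other interpolated attributes, such as colors, would need the same treatment.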
So I'm supposed to texture map a specific model I've loaded into a scene (with a Framebuffer and a Planar Pinhole Camera); however, I'm not allowed to use OpenGL and I have no idea how to do it otherwise (we do use glDrawPixels for other functionality, but that's the only GL function we can use).
Is anyone here able enough to give me a run-through on how to texture map without OpenGL functionality?
I'm supposed to use these slides: https://www.cs.purdue.edu/cgvlab/courses/334/Fall_2014/Lectures/TMapping.pdf
But they make very little sense to me.
What I've gathered so far is the following:
You iterate over a model and assign each triangle "texture coordinates" (though I'm not sure what those are), and then use "model space interpolation" (again, I don't understand what that is) to apply the texture with the right perspective.
I currently have my program rasterizing the model with per-vertex colors interpolated across each triangle.
TL;DR:
1. What is model space interpolation/how do I do it?
2. What explicitly are texture coordinates?
3. How, on a high level (in layman's terms), do I texture map a model without using OpenGL?
OK, let's start by making sure we're both on the same page about how the color interpolation works. Lines 125 through 143 set up three vectors redABC, greenABC and blueABC that are used to interpolate the colors across the triangle. They work one color component at a time, and each of the three vectors helps interpolate one color component.
By convention, s,t coordinates are in source texture space. As provided in the mesh data, they specify the position within the texture of that particular vertex of the triangle. The crucial thing to understand is that s,t coordinates need to be interpolated across the triangle just like colors.
So, what you want to do is set up two more ABC vectors: sABC and tABC, exactly duplicating the logic used to set up redABC, but instead of using the color components of each vertex, you just use the s,t coordinates of each vertex. Then for each pixel, instead of computing ssiRed etc. as unsigned int values, you compute ssis and ssit as floats; they should be in the range 0.0f through 1.0f, assuming your source s,t values are well behaved.
Now that you have an interpolated s,t coordinate, multiply ssis by the texel width of the texture, and ssit by the texel height, and use those coordinates to fetch the texel. Then just put that on the screen.
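A condensed sketch of that last step (texWidth, texHeight and texels are assumed names for your own texture storage; s and t are the interpolated ssis/ssit values described above):

#include <algorithm>
#include <cstdint>

// Fetch the texel for an interpolated (s, t) pair, nearest-neighbor style.
// texels is row-major, texWidth * texHeight packed 32-bit colors.
std::uint32_t sampleTexture(const std::uint32_t* texels,
                            int texWidth, int texHeight,
                            float s, float t) {
    // Scale normalized coordinates to texel space and clamp to the edges.
    int u = std::min(std::max(int(s * texWidth),  0), texWidth  - 1);
    int v = std::min(std::max(int(t * texHeight), 0), texHeight - 1);
    return texels[v * texWidth + u];
}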
Since you are not using OpenGL I assume you wrote your own software renderer to render that teapot?
A texture is simply an image. A texture coordinate is a 2D position in the texture. So (0,0) is bottom-left and (1,1) is top-right. For every vertex of your 3D model you should store a 2D position (u,v) in the texture. That means that at that vertex, you should use the colour the texture has at that point.
To know the UV texture coordinate of a pixel in between vertices you need to interpolate the texture coordinates of the vertices around it. Then you can use that UV to look up the colour in the texture.
So basically I am making a 2D game with OpenGL/C++. I have a quad with a texture mapped onto it, and because I can't use non-power-of-two images (or at least I shouldn't), I have my image inside a power-of-two image, and I want to crop away the excess with texture mapping.
GLfloat quadTexcoords[] = {
0.0, 0.0,
0.78125, 0.0,
0.78125, 0.8789,
0.0, 0.8789
};
glGenBuffers(1, &VBO_texcoords);
glBindBuffer(GL_ARRAY_BUFFER, VBO_texcoords);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadTexcoords), quadTexcoords, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
This is my texture coordinate code; nothing special, I know. The X coordinate (0.78125) works fine and removes the excess image on the right side. However, the Y coordinate does not work. I have debugged my game and found that the correct coordinate is sent to the shader, but it won't map. It sometimes seems to work in reverse, but it is very unclear.
If I give it a Y coordinate like 20, the texture repeats multiple times but still leaves a little white line at the top of the quad. I haven't got the faintest idea what it could be.
Other details: the image I am trying to map is 800 by 450 and it is padded inside an image of size 1024 by 512. I scale the quad by the aspect ratio of 800 by 450. I doubt this makes a difference, but you never know!
Thanks for your time.
EDIT: here is an example of whats happening.
This is the full image, mapped fully (0 to 1 in X and Y). The blue portion is 200 pixels high and the full image is 300 pixels high.
The second image is the image mapped to two thirds of the Y axis (i.e. 0 to 0.6666 in Y). This should remove the white at the top, but that is not what is happening. I don't think the coordinates are back to front, as I took the mapping from several tutorials online.
It seems to be working in reverse sometimes but it is very unclear.
OpenGL assumes the viewport origin is in the lower left, and texture coordinates run along the texels' flat memory layout in the S direction first, then T. In essence this means that, with the usual mappings, textures have their origin in the lower left, contrary to the upper-left origin found in most image manipulation programs.
So in your case, the white margin you see is simply the padding, which you probably applied at the bottom of the texture image instead of the top, where you should have put it. Why can't you use NPOT textures anyway? They're widely supported.
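To make that concrete with the question's numbers, here is a sketch of the two possible sub-rectangles (which one is right depends on where the padding rows ended up after upload, since GL's first row is the bottom row):

// Sub-rectangle of a 1024x512 texture holding an 800x450 image.
const float uMax = 800.0f / 1024.0f;  // 0.78125
const float vMax = 450.0f / 512.0f;   // 0.87890625

// Image occupying the bottom of the texture: v runs 0 .. vMax.
GLfloat texcoordsBottom[] = {
    0.0f, 0.0f,
    uMax, 0.0f,
    uMax, vMax,
    0.0f, vMax,
};

// Image occupying the top of the texture (padding below it):
// use v from (1.0f - vMax) to 1.0f instead.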
Not a real solution to your problem, but maybe a way to work around it:
You can scale the image to 1024x1024 (which deforms the image) and use 0-to-1 texture coordinates. Because you scale the quad to the 800:450 aspect ratio, the image will be displayed correctly.