I am trying to create some simple polygons in OpenGL 3.3. I have 2 types of objects with the following properties:
Object 1 - 10 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and drawn with GL_TRIANGLE_FAN
v x y z w
v 0.0 0.0 1.0 1.0
v 0.0 1.0 0.1 1.0
v 0.71 0.71 0.1 1.0
v 1.0 0.0 0.1 1.0
v 0.71 -0.71 0.1 1.0
v 0.0 -1.0 0.1 1.0
v -0.71 -0.71 0.1 1.0
v -1.0 0.0 0.1 1.0
v -0.71 0.71 0.1 1.0
v 0.0 1.0 0.1 1.0
Object 2 - 4 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and drawn with GL_TRIANGLE_STRIP
v x y z w
v 0.0 0.0 0.0 1.0
v 0.0 1.0 0.0 1.0
v 1.0 0.0 0.0 1.0
v 1.0 1.0 0.0 1.0
I load 64 Object1's and 4 Object2's. The Object2's are scaled using glm::scale(20, 20, 0).
When I try to render these with GL_CULL_FACE disabled but GL_DEPTH_TEST enabled with glDepthFunc(GL_LESS), everything works fine. As soon as I enable GL_CULL_FACE, all I get is a blank window.
Some other useful information:
- Rendering order = 4 Object2's followed by 64 Object1's
- Camera - glm::lookAt(glm::vec3(0.0f, 0.0f, 50.0f), glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3(0.0f, 1.0f, 0.0f));
- Perspective - glm::perspective(45.0f, 16.0f / 9.0f, 0.01f, 1000.0f);
I have been trying to figure out why GL_CULL_FACE doesn't work for the past couple of days, with no luck. Any help is much appreciated.
This is almost always due to the facing direction of the triangles, which is determined by the winding order (right-hand rule) of each triangle's three vertices.
Try switching the direction using:
glCullFace(GL_BACK or GL_FRONT)
Some more reading here
Which culling direction have you specified? When you create a polygon, you 'wind' the vertices either clockwise or counter-clockwise relative to the camera's viewpoint, and you have to tell OpenGL which type it should cull.
As you're probably aware, back-face culling hides polys facing away from the camera. If your faces are clockwise-wound, the verts are in clockwise order when the face points toward the camera and counter-clockwise when it points away, so you tell OpenGL to cull counter-clockwise polys.
I suspect that you're culling the polys you want to see: GL_BACK is the default, so switching the mode to GL_FRONT should work. See the documentation for more info: http://www.opengl.org/sdk/docs/man/xhtml/glCullFace.xml
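For reference, a minimal sketch of the relevant state setup (assuming the counter-clockwise winding that OpenGL treats as front-facing by default):

glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);  // GL_CCW is the default: triangles wound counter-clockwise face the camera
glCullFace(GL_BACK);  // GL_BACK is the default: discard faces pointing away from the camera
// If everything disappears, the winding is probably reversed; try glCullFace(GL_FRONT)
// or glFrontFace(GL_CW) instead of re-ordering the vertices.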
I am currently working with the Oculus Rift PC SDK. I was trying to start off with something simpler, like the Tiny Room Demo (DX11), and saw a tutorial online for loading a 3D model into the scene from an external file (Rastertek Tutorial 7: 3D Model Rendering).
The way the Tiny Room Demo creates models is to hardcode the coordinates and render them:
TriangleSet walls;
walls.AddSolidColorBox(10.1f, 0.0f, 20.0f, 10.0f, 4.0f, -20.0f, 0xff808080); // Left Wall
walls.AddSolidColorBox(10.0f, -0.1f, 20.1f, -10.0f, 4.0f, 20.0f, 0xff808080); // Back Wall
walls.AddSolidColorBox(-10.0f, -0.1f, 20.0f, -10.1f, 4.0f, -20.0f, 0xff808080); // Right Wall
Add(
    new Model(&walls, XMFLOAT3(0, 0, 0), XMFLOAT4(0, 0, 0, 1),
        new Material(
            new Texture(false, 256, 256, Texture::AUTO_WALL)
        )
    )
);
void AddSolidColorBox(float x1, float y1, float z1, float x2, float y2, float z2, uint32_t c)
{
    AddQuad(Vertex(XMFLOAT3(x1, y2, z1), ModifyColor(c, XMFLOAT3(x1, y2, z1)), z1, x1),
            Vertex(XMFLOAT3(x2, y2, z1), ModifyColor(c, XMFLOAT3(x2, y2, z1)), z1, x2),
            Vertex(XMFLOAT3(x1, y2, z2), ModifyColor(c, XMFLOAT3(x1, y2, z2)), z2, x1),
            Vertex(XMFLOAT3(x2, y2, z2), ModifyColor(c, XMFLOAT3(x2, y2, z2)), z2, x2));
    ...
}
AddQuad(Vertex v0, Vertex v1, Vertex v2, Vertex v3) { AddTriangle(v0, v1, v2); AddTriangle(v3, v2, v1); }
void AddTriangle(Vertex v0, Vertex v1, Vertex v2)
{
    VALIDATE(numVertices <= (maxBuffer - 3), "Insufficient triangle set");
    for (int i = 0; i < 3; i++) Indices[numIndices++] = short(numVertices + i);
    Vertices[numVertices++] = v0;
    Vertices[numVertices++] = v1;
    Vertices[numVertices++] = v2;
}
I tried to load the model into the scene using a function from the tutorial:
TriangleSet models;
models.LoadModel("F:\\cube.txt");
Add(
    new OBJModel(&models, XMFLOAT3(0, 0, 0), XMFLOAT4(0, 0, 0, 1),
        new OBJMaterial(
            new Texture(false, 256, 256, Texture::AUTO_WHITE)
            //new Texture(DirectX, L"wallpaper.jpg")
        )
    )
); //3D Model
void LoadModel(char* filename)
{
    ifstream fin;
    char input;

    // Open the model file.
    fin.open(filename);

    // Read up to the value of vertex count.
    fin.get(input);
    while (input != ':')
    {
        fin.get(input);
    }

    // Read in the vertex count.
    m_vertexCount = 0;
    fin >> m_vertexCount;

    // Read up to the beginning of the data.
    fin.get(input);
    while (input != ':')
    {
        fin.get(input);
    }
    fin.get(input);
    fin.get(input);

    // Read in the vertex data.
    for (int i = 0; i < m_vertexCount; i++)
    {
        Indices[numIndices++] = short(numVertices + i);
        //numVertices++; deleted
        fin >> Vertices[numVertices].Pos.x >> Vertices[numVertices].Pos.y >> Vertices[numVertices].Pos.z;
        fin >> Vertices[numVertices].U >> Vertices[numVertices].V;
        fin >> Normals[numVertices].Norm.x >> Normals[numVertices].Norm.y >> Normals[numVertices].Norm.z;
        Vertices[numVertices].C = ModifyColor(0xffffffff, Vertices[numVertices].Pos);
        numVertices += 1; //new statement
    }

    // Close the model file.
    fin.close();
}
I did not use the normals, as in the tutorial they were meant for texturing the object; instead I defined the color to be solid yellow. I tried to keep the structure of loading the model as similar to the Tiny Room Demo as possible.
I have used the same model, material and texture (vertex shader and pixel shader) as the Tiny Room Demo does. However, what was rendered in the scene did not appear the way it was supposed to.
I did step-by-step debugging to check whether the coordinates were being loaded correctly into Vertices[numVertices]; there seems to be no issue there. The file I tried to load was cube.txt:
Vertex Count: 36
Data:
-1.0 1.0 -1.0 0.0 0.0 0.0 0.0 -1.0
1.0 1.0 -1.0 1.0 0.0 0.0 0.0 -1.0
-1.0 -1.0 -1.0 0.0 1.0 0.0 0.0 -1.0
-1.0 -1.0 -1.0 0.0 1.0 0.0 0.0 -1.0
1.0 1.0 -1.0 1.0 0.0 0.0 0.0 -1.0
1.0 -1.0 -1.0 1.0 1.0 0.0 0.0 -1.0
1.0 1.0 -1.0 0.0 0.0 1.0 0.0 0.0
1.0 1.0 1.0 1.0 0.0 1.0 0.0 0.0
1.0 -1.0 -1.0 0.0 1.0 1.0 0.0 0.0
1.0 -1.0 -1.0 0.0 1.0 1.0 0.0 0.0
1.0 1.0 1.0 1.0 0.0 1.0 0.0 0.0
1.0 -1.0 1.0 1.0 1.0 1.0 0.0 0.0
1.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0
-1.0 1.0 1.0 1.0 0.0 0.0 0.0 1.0
1.0 -1.0 1.0 0.0 1.0 0.0 0.0 1.0
1.0 -1.0 1.0 0.0 1.0 0.0 0.0 1.0
-1.0 1.0 1.0 1.0 0.0 0.0 0.0 1.0
-1.0 -1.0 1.0 1.0 1.0 0.0 0.0 1.0
...
What was supposed to show up (except without the texture):
3D cube
What actually showed up was just fragments of triangles:
TinyRoomDemo + 3D cube
I am unsure what went wrong. Please advise! Thank you very much :)
Vertex and Index buffer
struct OBJModel
{
    XMFLOAT3 Pos;
    XMFLOAT4 Rot;
    OBJMaterial * Fill;
    DataBuffer * VertexBuffer;
    DataBuffer * IndexBuffer;
    int NumIndices;

    OBJModel() : Fill(nullptr), VertexBuffer(nullptr), IndexBuffer(nullptr) {};

    void Init(TriangleSet * t)
    {
        NumIndices = t->numIndices;
        VertexBuffer = new DataBuffer(DIRECTX.Device, D3D11_BIND_VERTEX_BUFFER, &t->Vertices[0], t->numVertices * sizeof(Vertex));
        IndexBuffer = new DataBuffer(DIRECTX.Device, D3D11_BIND_INDEX_BUFFER, &t->Indices[0], t->numIndices * sizeof(short));
    }
    ...
DIRECTX.Context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
---------------------------------------------------------------------------------------------------------------------------------
Edited 06/06/2017:
3D Model data:
Vertex Count: 798
Data:
28.3005 0.415886 -45.8282 0.7216 0.720211 0 0 -1
28.3005 -0.809079 -45.8282 0.732222 0.720211 0 0 -1
-27.7441 -0.809079 -45.8282 0.732222 0.847836 0 0 -1
28.3005 0.415886 68.1056 0.459891 0.720286 0 1 -0
28.3005 0.415886 -45.8282 0.719341 0.720286 0 1 -0
-27.7441 0.415886 -45.8282 0.719341 0.847911 0 1 -0
28.3005 -0.809079 68.1056 0.721603 0.720211 0 0 1
28.3005 0.415886 68.1056 0.732225 0.720211 0 0 1
-27.7441 0.415886 68.1056 0.732225 0.847836 0 0 1
28.3005 -0.809079 -45.8282 0.459891 0.720298 0 -1 -0
28.3005 -0.809079 68.1056 0.719341 0.720298 0 -1 -0
-27.7441 -0.809079 68.1056 0.719341 0.847923 0 -1 -0
28.3005 0.415886 68.1056 0.719341 0.70683 1 0 -0
...
From the data you provided for the house, it seems like one triangle is facing one way and the second is facing the opposite way.
Use a rasterizer state without back-face culling to draw this object.
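A minimal D3D11 sketch of that idea, assuming the DIRECTX.Device / DIRECTX.Context globals from the Tiny Room sample:

D3D11_RASTERIZER_DESC rd = {};
rd.FillMode = D3D11_FILL_SOLID;
rd.CullMode = D3D11_CULL_NONE;   // draw triangles regardless of their winding order
rd.DepthClipEnable = TRUE;

ID3D11RasterizerState* noCullState = nullptr;
DIRECTX.Device->CreateRasterizerState(&rd, &noCullState);
DIRECTX.Context->RSSetState(noCullState);   // bind before drawing the loaded model
// Remember to restore the previous rasterizer state (and Release() noCullState) afterwards.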
I'm trying to render a series of 2D shapes (rectangle, circle, etc.) in modern OpenGL, hopefully without the use of any transformation matrices. I would like to be able to specify the coordinates for, say, a rectangle of 2 triangles like so:
float V[] = { 20.0, 20.0, 0.0,
40.0, 20.0, 0.0,
40.0, 40.0, 0.0,
40.0, 40.0, 0.0,
20.0, 40.0, 0.0,
              20.0, 20.0, 0.0 };
You can see that the vertex coordinates are specified in viewport(?) space (I believe that's what it's called). Now, when this gets rendered by OpenGL, it doesn't work because clip space goes from -1.0 to 1.0 with the origin in the center.
What would be the correct way for me to handle this? I initially thought adjusting glClipControl to upper-left and 0-to-1 would work, but it didn't. With clip control set to upper-left and 0-to-1, the origin was still at the center, but it did allow the Y axis to increase downward (which is a start).
Ideally, I would love to get OpenGL to have (0.0, 0.0) be the top left and (1.0, 1.0) the bottom right; then I would just normalise each vertex position, but I have no idea how to get OpenGL to use that type of coordinate system.
One can easily do these transformations without matrices in the vertex shader:
// From pixels to 0-1
vPos.xy /= vResolution.xy;
// Flip Y so that 0 is top
vPos.y = (1.-vPos.y);
// Map to NDC -1,+1
vPos.xy = vPos.xy*2.-1.;
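Equivalently, the same mapping can be done on the CPU before uploading the vertex data. A minimal C++ sketch (the function name and the screenW / screenH parameters are illustrative):

// Convert a pixel-space position (origin top-left, y down) to NDC (-1..+1, y up).
void pixelToNdc(float px, float py, float screenW, float screenH, float& outX, float& outY)
{
    float x = px / screenW;           // pixels -> 0..1
    float y = 1.0f - py / screenH;    // flip Y so 0 is at the top
    outX = x * 2.0f - 1.0f;           // 0..1 -> -1..+1
    outY = y * 2.0f - 1.0f;
}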
I'm doing 3D perspective projection in OpenGL (WebGL), doing it myself with uniform matrices.
Everything is working fine, but I have an aspect ratio of 3:2 (600px x 400px) and this distorts all rendered geometry.
In 2D I used to fix this in the model matrix by scaling x and y by 1 / width and 1 / height respectively.
Now I also have z to worry about, and I am pretty clueless how / where to transform z so it doesn't distort with my 3:2 aspect ratio.
The model matrix does not seem to offer any opportunity to do this and I don't know where / what to do in the projection matrix.
Edit:
Projection Matrix:
#_pMatrix = [
1, 0.0, 0.0, 0.0,
0.0, 1, 0.0, 0.0,
0.0, 0.0, -(f + n) / (f - n), -1,
0.0, 0.0, -2.0 * n * f / (f - n), 0.0
]
Column major order
Edit 2:
Weird distortions on n < 1
You're missing the left, right, top, and bottom planes in your projection matrix.
| 2n/(r-l)       0             0            0  |
|    0        2n/(t-b)         0            0  |
| (r+l)/(r-l) (t+b)/(t-b)  -(f+n)/(f-n)    -1  |
|    0           0         -2fn/(f-n)       0  |
If you define (r-l)/(t-b) = 3/2 such that the viewing volume is the appropriate size for your model, then you should be set.
Here are some slides describing the various projection matrices and how they're derived:
http://www.cs.unm.edu/~angel/CS433/LECTURES/CS433_17.pdf
They're by Edward Angel, the author of Interactive Computer Graphics, which is where I got this matrix. Unfortunately the OpenGL Red Book doesn't seem to work through the math at all.
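A minimal sketch of building that matrix (written here in C++, but the same arithmetic applies to the array in the question). For a symmetric frustum, (r-l)/(t-b) reduces to the aspect ratio, 3/2 for a 600x400 viewport:

#include <array>
#include <cmath>

// Column-major perspective matrix, matching the layout in the question.
std::array<float, 16> perspective(float fovyRadians, float aspect, float n, float f)
{
    float t = n * std::tan(fovyRadians * 0.5f);  // top (bottom = -t)
    float r = t * aspect;                        // right (left = -r); aspect = 600.0f / 400.0f here
    std::array<float, 16> m = {
        n / r, 0.0f,  0.0f,                   0.0f,
        0.0f,  n / t, 0.0f,                   0.0f,
        0.0f,  0.0f, -(f + n) / (f - n),     -1.0f,
        0.0f,  0.0f, -2.0f * f * n / (f - n), 0.0f
    };
    return m;
}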
What I am trying to do is add a darker red color with alpha (0.1, 0, 0, 0.2) on top of a bright red (1, 0, 0, 1). For the first layer it works fine; the result is (0.9, 0, 0, 1). However, once the red value gets down to 0.5, it cannot drop below that value.
The first layer is demonstrated with the following equation, and works fine:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
ColorBuffer color = (1,0,0,1) Bright Red
SourceColor color = (0.1,0,0,0.2) Dark Red
GL_ONE = 1 , GL_ONE_MINUS_SRC_ALPHA = 1 - 0.2 = 0.8
Cf = (ColorSource * One) + (ColorBuffer * One Minus Src Alpha);
Cf = ((0.1,0,0)*1 ) + ((1,0,0) * 0.8);
Cf = 0.1,0,0 + 0.8,0,0;
Cf = 0.9,0,0 // Here is the result
Now, further down the line after many layers, it gets to a point where the destination color is darker: 0.5. From then on the color never gets any darker, as demonstrated below; it starts with (0.5, 0, 0) but also results in (0.5, 0, 0):
Cf = ((0.1,0,0)*1 ) + ((0.5,0,0) * 0.8);
Cf = 0.1,0,0 + 0.4,0,0;
Cf = 0.5,0,0
This result means the color buffer has not changed, and the color I am overlaying no longer has any effect.
How do I get my dark red to layer until it replaces the bright red?
SIMPLE PROCESSING SKETCH DEMONSTRATING THE PROBLEM - you will notice here I am trying GL_SRC_ALPHA and it still has that problem:
http://studionu.net/files/OPENGL_test.zip
The image below shows the problem on the LEFT; on the RIGHT is the desired effect.
EDIT: Here is my code.
I set up a texture object and attach it to an FBO:
gl.glGenTextures(1, drawTex, 0);
gl.glBindTexture(GL.GL_TEXTURE_2D, drawTex[0]);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MAG_FILTER, GL.GL_LINEAR);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_S, GL.GL_CLAMP_TO_EDGE);
gl.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_WRAP_T, GL.GL_CLAMP_TO_EDGE);
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA8, 1000, 500, 0, GL.GL_BGRA, GL.GL_UNSIGNED_BYTE, null);
// Creating FBO.
gl.glGenFramebuffersEXT(1, drawFBO, 0);
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, drawFBO[0]);
gl.glFramebufferTexture2DEXT(GL.GL_FRAMEBUFFER_EXT, GL.GL_COLOR_ATTACHMENT0_EXT, GL.GL_TEXTURE_2D, drawTex[0], 0);
int stat = gl.glCheckFramebufferStatusEXT(GL.GL_FRAMEBUFFER_EXT);
if (stat != GL.GL_FRAMEBUFFER_COMPLETE_EXT) System.out.println("FBO error");
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
I then clear the framebuffer:
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, drawFBO[0]);
// Drawing to the first color attachement of drawFBO (this is where drawTex is attached to).
gl.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0_EXT);
gl.glViewport(0, 0, 1000, 500);
gl.glColor4f(0.0, 0.0, 0.0, 0.0);
gl.glClear(gl.GL_COLOR_BUFFER_BIT);
// Unbinding drawFBO. Now drawing to screen again.
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
Here I bind the framebuffer, draw into it, and set the color here as well:
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, drawFBO[0]);
// Drawing to the first color attachement of drawFBO (this is where drawTex is attached to).
gl.glDrawBuffer(GL.GL_COLOR_ATTACHMENT0_EXT);
// Setting orthographic projection.
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho(0.0, 1000, 0.0, 500, -100.0, +100.0);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glViewport(0, 0, 1000, 500);
gl.glDisable(GL.GL_TEXTURE_2D);
gl.glEnableClientState(GL.GL_VERTEX_ARRAY);
gl.glVertexPointer(2, GL.GL_FLOAT, 0, paintBuffer);
// gl.glHint (gl.GL_POINT_SMOOTH_HINT, gl.GL_NICEST);
// gl.glEnable(gl.GL_POINT_SMOOTH);
gl.glEnable(gl.GL_VERTEX_PROGRAM_POINT_SIZE);
gl.glEnable(gl.GL_BLEND);
gl.glEnable(gl.GL_DEPTH_TEST);
gl.glDepthFunc(gl.GL_NICEST);
gl.glDisable(gl.GL_ALPHA_TEST);
gl.glAlphaFunc(gl.GL_LESS, 0.01);
gl.glPointSize(35.0);
float kBrushOpacity = 0.05/3.0;
println(kBrushOpacity);
float colorchange = 1.0;
if (timer>200) {
colorchange = 0.5; //////COLOR GETS DARKER
}
if (timer>400) {
//colorchange = 0.2; //////COLOR GETS DARKER AGAIN
}
timer++;
gl.glDisableClientState( gl.GL_COLOR_ARRAY );
gl.glColor4f(colorchange, 0, 0.0, kBrushOpacity);
gl.glBlendFuncSeparate(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA,gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_DST_ALPHA);////////THIS IS THE OPENGL BLEND EQUATION FOR PREMULTIPLIED COLORS
gl.glDrawArrays(GL.GL_POINTS, 0, count); // Count tells us how many points exist to be drawn.
gl.glDisable(gl.GL_BLEND); // Don't use blend when drawing the texture to screen. Just draw it.
gl.glDisableClientState(GL.GL_VERTEX_ARRAY);
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPopMatrix();
// Unbinding drawFBO. Now drawing to screen again.
gl.glBindFramebufferEXT(GL.GL_FRAMEBUFFER_EXT, 0);
Here I draw the texture to the screen:
// Setting orthographic projection.
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glOrtho(0.0, width, 0.0, height, -100.0, +100.0);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
gl.glLoadIdentity();
gl.glViewport(0, 0, width, height);
// Drawing texture to screen.
gl.glEnable(GL.GL_TEXTURE_2D);
gl.glActiveTexture(GL.GL_TEXTURE0);
gl.glBindTexture(GL.GL_TEXTURE_2D, drawTex[0]);
gl.glTexEnvf(GL.GL_TEXTURE_ENV, GL.GL_TEXTURE_ENV_MODE, GL.GL_REPLACE);
gl.glBegin(GL.GL_QUADS);
gl.glTexCoord2f(0.0, 1.0);
gl.glVertex2f(0.0, 0.0);
gl.glTexCoord2f(1.0, 1.0);
gl.glVertex2f(width, 0.0);
gl.glTexCoord2f(1.0, 0.0);
gl.glVertex2f(width, height);
gl.glTexCoord2f(0.0, 0.0);
gl.glVertex2f(0.0, height);
gl.glEnd();
gl.glBindTexture(GL.GL_TEXTURE_2D, 0);
gl.glMatrixMode(GL.GL_PROJECTION);
gl.glPopMatrix();
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPopMatrix();
May I ask why you've chosen GL_ONE as the source factor? The fact that your blend formula has multiplication factors summing to more than one means that just blending the same color with itself will brighten it, which is what you're seeing: you're reaching an equilibrium while trying to darken it.
If you used the more common source factor GL_SRC_ALPHA I think it should behave as you expect. Is there a reason you have not chosen to use it?
If you use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as your source/dest blend factors, you will not get a bias toward brightening the image from performing a blend, because the multiplication factors sum to 1.
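A minimal sketch of that blend state (standard non-premultiplied "over" blending):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// result.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)  -- the two factors sum to 1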
EDIT: I disagree that using GL_SRC_ALPHA would still have the same problem. I did a quick mockup of the formula in Excel, as you can see below. The value of the blending eventually converges to the source color (0.1f). If you want it to converge faster, use a larger src alpha value.
Parameters:
Source Red = 0.1
Source Alpha = 0.2
Dest Red = 1.0 (iteration==0), Result[iteration-1] (iteration != 0)
1-Src Alpha = 0.8
Result = Src Red * Src Alpha + Dest Red * One_Minus_Src_Alpha
Result:
Columns:
0: Iteration
1: Source Red
2: Source Alpha
3: Dest Red
4: 1-Src Alpha
5: Result
0 1 2 3 4 5
------------------------------------------------
0 0.1 0.2 1.00 0.8 0.82 (0.1 * 0.2 + 1.0 * 0.8 = 0.82)
1 0.1 0.2 0.82 0.8 0.68 (0.1 * 0.2 + 0.82 * 0.8 = 0.68)
2 0.1 0.2 0.68 0.8 0.56 (etc...)
3 0.1 0.2 0.56 0.8 0.47
4 0.1 0.2 0.47 0.8 0.39
5 0.1 0.2 0.39 0.8 0.34
6 0.1 0.2 0.34 0.8 0.29
7 0.1 0.2 0.29 0.8 0.25
8 0.1 0.2 0.25 0.8 0.22
9 0.1 0.2 0.22 0.8 0.20
10 0.1 0.2 0.20 0.8 0.18
11 0.1 0.2 0.18 0.8 0.16
12 0.1 0.2 0.16 0.8 0.15
13 0.1 0.2 0.15 0.8 0.14
14 0.1 0.2 0.14 0.8 0.13
15 0.1 0.2 0.13 0.8 0.13
16 0.1 0.2 0.13 0.8 0.12
17 0.1 0.2 0.12 0.8 0.12
18 0.1 0.2 0.12 0.8 0.11
19 0.1 0.2 0.11 0.8 0.11
20 0.1 0.2 0.11 0.8 0.11
21 0.1 0.2 0.11 0.8 0.11
22 0.1 0.2 0.11 0.8 0.11
23 0.1 0.2 0.11 0.8 0.10
24 0.1 0.2 0.10 0.8 0.10
25 0.1 0.2 0.10 0.8 0.10
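The same convergence can be checked with a short standalone C++ sketch of the arithmetic above (not OpenGL code, just the blend formula iterated):

#include <cstdio>

int main()
{
    float srcRed = 0.1f, srcAlpha = 0.2f;
    float dstRed = 1.0f;                        // start from bright red
    for (int i = 0; i <= 25; ++i)
    {
        dstRed = srcRed * srcAlpha + dstRed * (1.0f - srcAlpha);
        std::printf("%2d  %.2f\n", i, dstRed);  // converges toward 0.1, matching the table
    }
    return 0;
}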
The answer is to use the same method as drawing programs such as GIMP, Photoshop, or Sketchbook: they never blend anything new drawn onto the canvas; it's simply drawn at the end of each call as a texture quad, so fast you never see it. Anyway, this worked for me; hope it helps others.
I understand that the GL_REPEAT parameter will ignore the integer portion of the texture coordinate; only the fractional part is used when doing texture mapping. The question is how OpenGL handles the texture border: for instance, if the texcoord is 2.0, should the return value be the same as the return value at texcoord 0.0 or at 1.0? I assume the "normal" texcoord is in the [0.0, 1.0] range, which is the most common texcoord range.
Thanks for your answer!
Your concrete example is interesting. Say you have a 1D texture of size 2: [a b].
What you actually get from a texture coordinate of 0.0 depends on filtering. It's not necessarily a. The only location where you have a and just a for LINEAR (ignoring mipmapping) is at texture coordinate 0.25. The same is true for b and 0.75.
At 0.0 and 1.0 (and 2.0, for that matter), you will get (a+b)/2 (again, LINEAR). It is not the case that 0 will give you a and 1 will give you b.
-0.25    0    0.25   0.5   0.75    1    1.25  ...
      ___________________________________
        |          |          |
   b    |    a     |    b     |    a
      ___________________________________
In the case of NEAREST filtering, 0.0 and 1.0 are exactly on the edge between texels. While I don't remember exactly what the specification says about that case, I would not rely on it.
All that said, this whole discussion is about the texture coordinates that are used by a fragment. They are not the ones you passed in, but the ones that were rasterized.
E.g., if you draw a quad that covers a 2-pixel region, with texture coordinates going from 0 to 1, here is a diagram of the pixels that get covered along with the interpolated coordinates:
0      0.25     0.5      0.75      1
  _________________________________
  |               |               |
  |      P0       |      P1       |
  _________________________________
The texture coordinates that the fragments will use are indeed 0.25 and 0.75 (i.e., the rasterizer interpolates texture coordinates at the center of each pixel), even though you passed in 0 and 1.
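To make the a/b example concrete, here is a small CPU sketch that mimics GL_REPEAT with LINEAR filtering on a 1D texture (names are illustrative, mipmapping ignored). It returns a at 0.25, b at 0.75, and (a+b)/2 at 0.0, 1.0, and 2.0:

#include <cmath>
#include <vector>

// Emulate GL_REPEAT + GL_LINEAR sampling of a 1D texture at coordinate u.
float sampleRepeatLinear(const std::vector<float>& texels, float u)
{
    int n = (int)texels.size();
    float x = u * n - 0.5f;                  // texel centers sit at integer positions
    float x0 = std::floor(x);
    float frac = x - x0;                     // weight of the next texel
    int i0 = (((int)x0 % n) + n) % n;        // wrap like GL_REPEAT
    int i1 = (i0 + 1) % n;
    return texels[i0] * (1.0f - frac) + texels[i1] * frac;
}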
2.0 wraps back to 0.0; since the wrapped coordinate doesn't fall outside the [0, 1] range, the texture border is not considered.