I had some fun making my first shaders, and my first test subject was a picture on a 100x100 grid of quad faces.
I thought I would learn how to use GL_TRIANGLE_STRIP, so I switched to it and moved one of the vertex calls so it would look square again. When I turned my shader on, there was a duplicate right behind the mesh: a single face with the entire texture on it. I have only one set of draw calls for this shape...
Here's my shape code:
glBegin(GL_TRIANGLE_STRIP);
for (float x = 0; x < 100; x++) {
    for (float y = 0; y < 100; y++) {
        float vx = x / 5.0;
        float vy = y / 5.0;
        glTexCoord2f(0.01 * x, 0.01 * y);
        glVertex3f(vx, vy, 0);
        glTexCoord2f(0.01 + 0.01 * x, 0.01 * y);
        glVertex3f(.2 + vx, vy, 0);
        glTexCoord2f(0.01 * x, 0.01 + 0.01 * y);
        glVertex3f(vx, .2 + vy, 0);
        glTexCoord2f(0.01 + 0.01 * x, 0.01 + 0.01 * y);
        glVertex3f(.2 + vx, .2 + vy, 0);
    }
}
glEnd();
And my (vertex) shader code:
uniform float uTime, uWaveintensity, uWavespeed;
uniform float uZwave1, uZwave2, uXwave, uYwave;
void main() {
    vec4 position = gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    position.z = ((sin(position.x + uTime*uWavespeed) * uZwave1) + (sin(position.y + uTime*uWavespeed) * uZwave2)) * uWaveintensity;
    position.x = position.x + (sin(position.x + uTime*uWavespeed) * uXwave) * uWaveintensity;
    position.y = position.y + (sin(position.y + uTime*uWavespeed) * uYwave) * uWaveintensity;
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
If anyone has any info on drawing more efficiently with shared vertices (triangle strips), I want to know; I've googled, but so far I haven't understood any of it XD.
screenshot(s):
(with 8x8 faces)
(same thing from the same angle; the wireframe lines show the ghost face)
I see what's happening now, but I don't know how to fix it.
I don't think you can create a 100x100 quad plane with a triangle strip this way. Right now you're going through the rows and columns in just one direction, which means the last 2 vertices of the first row will form a triangle with the first vertex of the second row, and that's not what you want.
I'd suggest starting with a 2x2 pattern just to learn how triangle strips work, then moving on to 3x3 and 4x4 to see the difference between the odd and even cases. Once you have some understanding of the problems you can create a universal algorithm and change your size to 100; a single-row sketch follows below.
After all this you can focus on the vertex shader to make it wave.
And for the future: never start with big data when you're learning how things work. :)
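To illustrate, here's a minimal sketch (my code, reusing the question's scale factors, not the poster's) of ONE row of quads as a correct strip: you alternate top and bottom vertices from left to right instead of emitting four vertices per quad:
glBegin(GL_TRIANGLE_STRIP);
for (int x = 0; x <= 100; x++) {
    glTexCoord2f(0.01 * x, 0.01);   // top vertex of column x
    glVertex3f(x / 5.0f, 0.2f, 0);
    glTexCoord2f(0.01 * x, 0.0);    // bottom vertex of column x
    glVertex3f(x / 5.0f, 0.0f, 0);
}
glEnd();
Each new vertex after the first two completes one triangle, so 202 vertices give you the 200 triangles of a 100-quad row.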
EDIT:
Since I wrote this answer I have learned that you actually CAN make a two-dimensional grid with one tri-strip, using degenerate triangles :).
When a triangle uses the same vertex twice it will be ignored by the rasterizer during rendering, so at the end of your first strip you can create a degenerate triangle using the last vertex of the first strip and the first vertex of the second strip. It doesn't matter which of the two vertices you use as the 3rd one, as long as they are in the correct order (e.g. 1,1,2 or 1,2,2). This way you've created a triangle that won't be drawn, but it moves the next 'starting' point to the beginning of your 2nd strip, where you can continue building your mesh.
The drawback is that you create some triangles that will be transformed but not drawn (there won't be many of them), but the advantage is that you issue just one 'draw strip' command to the GPU, which is much faster.
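For illustration, here's a minimal sketch (my code, not part of the answer above) that builds such an index buffer for a rows x cols vertex grid stored row-major, to be drawn with a single glDrawElements(GL_TRIANGLE_STRIP, ...) call:
#include <vector>

std::vector<unsigned> buildStripIndices(int rows, int cols)
{
    std::vector<unsigned> idx;
    for (int y = 0; y < rows - 1; ++y) {
        for (int x = 0; x < cols; ++x) {
            idx.push_back( y      * cols + x);  // vertex on row y
            idx.push_back((y + 1) * cols + x);  // vertex on row y + 1
        }
        if (y < rows - 2) {
            // Degenerate stitch: repeat the last index of this row pair and
            // the first index of the next one; the resulting zero-area
            // triangles are skipped by the rasterizer.
            idx.push_back((y + 1) * cols + (cols - 1));
            idx.push_back((y + 1) * cols);
        }
    }
    return idx;
}
Since each row pair contributes an even number of indices and the stitch adds two more, the winding order stays consistent from row to row.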
Take a look at the bottom-left corner of the green rectangles in the middle:
They're missing one pixel at the bottom left.
I drew those like this:
class Rect : public StaticModel {
public:
    Rect() {
        constexpr glm::vec2 vertices[] {
            {-0.5, 0.5},  // top left
            { 0.5, 0.5},  // top right
            { 0.5,-0.5},  // bottom right
            {-0.5,-0.5},  // bottom left
        };
        _buf.bufferData<glm::vec2>(vertices, BufferUsage::StaticDraw);
        _idxBuf.bufferData<GLuint>({0,1,3,2,0,3,1,2}, BufferUsage::StaticDraw);
    }

    void bind() const override {
        _buf.bindVertex();
        _idxBuf.bind();
    }

    void draw() const override {
        gl::drawElements(8, DrawMode::Lines);
    }

private:
    VertexBuffer _buf{sizeof(glm::vec2)};
    ElementArrayBuffer _idxBuf{};
};
That code is using a bunch of my helper methods/classes, but you should be able to tell what it does. I tried drawing the rect using a simple GL_LINE_LOOP, but that had the same problem. So now I'm trying GL_LINES and drawing all the lines in the same direction, top to bottom and left to right, but I'm still missing a pixel.
These coordinates are going through orthographic projection:
gl_Position = projection * model * vec4(inPos, 0.0, 1.0);
So the shader is scaling those 0.5 coords up to pixel coords, but I don't think it's a rounding error.
Anything else I can try to get that corner to align?
OpenGL gives a lot of leeway for how implementations rasterize lines. It requires some desirable properties, but those do not prevent gaps when mixing x-major ('horizontal') and y-major ('vertical') lines.
First thing, the "spirit of the spec" is to rasterize half-open lines, i.e. include the first vertex and exclude the final one. For that reason you should ensure that each vertex appears exactly once as a source and once as a destination:
_idxBuf.bufferData<GLuint>({0,1,1,2,2,3,3,0},BufferUsage::StaticDraw);
This is contrary to your attempt to draw everything "top to bottom and left to right".
GL_LINE_LOOP already does that, though, and you say it doesn't solve the problem. Indeed it's not guaranteed to, because you mix x-major and y-major lines here, but you should still follow the rule in order for the next point to work.
Next, I bet some of your vertices fall right between the pixels, i.e. the fractional part of their window coordinates is exactly zero. When rasterizing such primitives, the differences between implementations (or, in our case, between x-major and y-major lines on the same implementation) become prominent.
To solve that you can snap your vertices to the pixel grid:
// xy - vertex in window coordinates, i.e. same as gl_FragCoord
xy = floor(xy) + 0.5
You can do this either in C++, or in the vertex shader. In either case you'll need to apply the projection and viewport transformations, and then undo them so that OpenGL can re-apply them afterwards. It's ugly, I know.
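For example, a minimal sketch of the snap on the C++ side, assuming the projection is a plain glm::ortho(0, width, 0, height) so that world units already map 1:1 to pixels (with any other projection you would apply the projection and viewport transforms first and invert them after snapping):
#include <glm/glm.hpp>

glm::vec2 snapToPixelCenter(glm::vec2 p)
{
    // Land exactly on a pixel center so that x-major and y-major lines
    // rasterize consistently.
    return glm::floor(p) + glm::vec2(0.5f);
}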
The only bullet-proof way to rasterize pixel-perfect lines, however, is to render triangles to cover the shape (either each line individually or the entire rectangle) and compute the coverage analytically from gl_FragCoord.xy in the fragment shader.
I am trying to compute the visibility between two planes or patches.
I have a wireframe of quads. Each quad has a normal vector with X, Y and Z coordinates. Each quad has 4 vertices. Each vertex has X, Y and Z coordinates.
Given two quads, how can I know whether there is an occluder or another object in between these two patches (quads)?
Therefore, I need to create a method that returns 1 if the patches have no occluder between them, or 0 if they do.
The method I picture would be something like this:
GLint visibility(Patch i, Patch j) {
    GLboolean isVisible;
    vector<Patch> allPatches; // can be used to get all patches in the scene

    // Check if there is any occluder between patch i and patch j
    // ... some computations here ...

    if (isVisible) {
        return 1;
    } else {
        return 0;
    }
}
I've heard of z-buffer algorithms and the hemicube implementation that would get this done. I already have the form-factors computed. I just need to finish this step to get shadows.
Please make sure your answer includes some diagrams or concrete methods, because I am not that much of a genius.
I found the solution. Basically I needed to use ray tracing techniques: cast a ray from one patch to the other and check whether the ray intersects the plane of any other patch, computing the intersection with barycentric coordinates. Once you find the intersection point, you check whether it lies inside that quad.
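For anyone who lands here, a minimal sketch of that idea, assuming GLM and a hypothetical Patch type (not from the question); each quad is split into two triangles and tested with the Möller-Trumbore barycentric test:
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct Patch {  // hypothetical; adapt to your own patch type
    glm::vec3 v[4];
    glm::vec3 center() const { return (v[0] + v[1] + v[2] + v[3]) * 0.25f; }
};

// Möller-Trumbore ray/triangle test: returns true and the ray parameter t
// if origin + t * dir hits triangle (a, b, c).
bool rayTriangle(const glm::vec3& origin, const glm::vec3& dir,
                 const glm::vec3& a, const glm::vec3& b, const glm::vec3& c,
                 float& t)
{
    const float EPS = 1e-6f;
    glm::vec3 e1 = b - a, e2 = c - a;
    glm::vec3 p = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray parallel to the plane
    float inv = 1.0f / det;
    glm::vec3 s = origin - a;
    float u = glm::dot(s, p) * inv;           // barycentric u
    if (u < 0.0f || u > 1.0f) return false;
    glm::vec3 q = glm::cross(s, e1);
    float v = glm::dot(dir, q) * inv;         // barycentric v
    if (v < 0.0f || u + v > 1.0f) return false;
    t = glm::dot(e2, q) * inv;                // distance along the ray
    return t > EPS;
}

int visibility(const Patch& i, const Patch& j, const std::vector<Patch>& allPatches)
{
    glm::vec3 origin = i.center();
    glm::vec3 dir = j.center() - origin;
    float dist = glm::length(dir);
    dir /= dist;
    for (const Patch& p : allPatches) {
        if (&p == &i || &p == &j) continue;   // skip the two patches themselves
        float t;
        if ((rayTriangle(origin, dir, p.v[0], p.v[1], p.v[2], t) && t < dist) ||
            (rayTriangle(origin, dir, p.v[0], p.v[2], p.v[3], t) && t < dist))
            return 0;                         // occluder found
    }
    return 1;                                 // nothing in between
}
Note that a single center-to-center ray is a coarse test; radiosity implementations typically cast several rays between random points on the two patches and average the results.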
I have the following issue when trying to map UV-coordinates to a sphere:
Here is the code I'm using to get my UV-coordinates
glm::vec2 calcUV(glm::vec3 p)
{
    p = glm::normalize(p);
    const float PI = 3.1415926f;
    float u = ((glm::atan(p.x, p.z) / PI) + 1.0f) * 0.5f;
    float v = (glm::asin(p.y) / PI) + 0.5f;
    return glm::vec2(u, v);
}
The issue was very well explained in this Stack Overflow question, although I still don't get how to fix it. From what I've been reading, I have to create a duplicate pair of vertices. Does anyone know a good and efficient way of doing that?
The problem you have is that at the seam your texture coordinates roll back to 0, so you get the whole texture mapped, mirrored, onto the seam. To avoid this you should use the GL_REPEAT wrap mode and, at the seam, finish with vertices whose texture coordinates are >= 1 (don't roll back to 0). Remember that a vertex consists of the whole tuple of all its attributes, and vertices with different attribute values are different vertices, so there's no point in trying to "share" the vertices.
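A minimal sketch of one way to generate those duplicates (my own example with a hypothetical Vertex struct; the stacks/slices tessellation is an assumption, not the poster's code). The trick is to loop the slice index to 'slices' inclusive, so the seam column is emitted twice, once with u = 0 and once with u = 1:
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct Vertex { glm::vec3 pos; glm::vec2 uv; };

std::vector<Vertex> buildSphereVertices(int stacks, int slices, float r)
{
    const float PI = 3.1415926f;
    std::vector<Vertex> verts;
    for (int i = 0; i <= stacks; ++i) {
        float phi = PI * i / stacks;               // 0..PI, pole to pole
        for (int j = 0; j <= slices; ++j) {        // note <=: seam duplicated
            float theta = 2.0f * PI * j / slices;  // 0..2*PI around the axis
            glm::vec3 p(r * std::sin(phi) * std::cos(theta),
                        r * std::cos(phi),
                        r * std::sin(phi) * std::sin(theta));
            // j == slices repeats the j == 0 position, but with u = 1.
            verts.push_back({p, {float(j) / slices, float(i) / stacks}});
        }
    }
    return verts;
}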
Another way to do it is simply to pass the object coordinates of the sphere into the pixel shader, and calculate the UV in "perfect" spherical space.
Be aware that you will need to pass the local derivatives so that you don't merely reduce your seam from several pixels to one.
Otherwise, yes: you need to duplicate the vertices along the u=0 seam, and likewise repeat the vertices at the poles. In this way, your object's topology becomes a rectangle, just like your texture.
I need some direction on 3D point cloud display using OpenGL in C++ (VS2008). I am trying to do a 3D point cloud display with a texture. I have three 2D arrays (each the same size, 1024x512) representing the x, y, z coordinates of each point. I think I am on the right track with
glBegin(GL_POINTS);
for (int i = 0; i < 1024; i++)
{
    for (int j = 0; j < 512; j++)
    {
        glVertex3f(x[i][j], y[i][j], z[i][j]);
    }
}
glEnd();
Now this loads all the vertices into the buffer (I think), but from here I am not sure how to proceed. Or maybe I am completely wrong here.
Then I have another 2D array (same size) that contains color data (values from 0-255) that I want to use as a texture on the 3D point cloud and display.
The point drawing code is fine as is.
(Long term, you may run into performance problems if you have to draw these points repeatedly, say in response to the user rotating the view. Rearranging the data from 3 arrays into 1 with x, y, z values next to each other would allow you to use faster vertex arrays/VBOs. But for now, if it ain't broke, don't fix it.)
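If you do later go the vertex array / VBO route, here's a minimal sketch of the rearrangement (the x, y, z arrays and the 1024x512 size are from the question; the rest is mine):
#include <vector>

extern float x[1024][512], y[1024][512], z[1024][512];  // the question's arrays

std::vector<float> interleavePoints()
{
    std::vector<float> data;
    data.reserve(1024 * 512 * 3);
    for (int i = 0; i < 1024; i++)
        for (int j = 0; j < 512; j++) {
            data.push_back(x[i][j]);  // x, y, z packed next to each other
            data.push_back(y[i][j]);
            data.push_back(z[i][j]);
        }
    return data;
}

// Then, staying with the fixed-function pipeline:
//   glEnableClientState(GL_VERTEX_ARRAY);
//   glVertexPointer(3, GL_FLOAT, 0, data.data());
//   glDrawArrays(GL_POINTS, 0, 1024 * 512);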
To color the points, you need glColor before each glVertex. It has to be before, not after, because in OpenGL glVertex loosely means that's a complete vertex, draw it. You've described the data as a point cloud, so don't change glBegin to GL_POLYGON, leave it as GL_POINTS.
OK, you have another array with one-byte color index values. You could start by just using that as a greyscale level with
glColor3ub(color[i][j], color[i][j], color[i][j]);
which should show the points varying from black to white.
To get the true color for each point, you need a color lookup table - I assume there's either one that comes with the data, or one you're creating. It should be declared something like
static GLfloat ctab[256][3] = {
1.0, 0.75, 0.33, /* Color for index #0 */
...
};
and used before the glVertex with
glColor3fv(ctab[color[i][j]]);
I've used floating point colors because that's what OpenGL uses internally these days. If you prefer 0..255 values for the colors, change the array to GLubyte and the glColor3fv to glColor3ubv.
Hope this helps.
I am using Nvidia Cg with Direct3D9 and have a question about the following code.
It compiles, but doesn't load (using a cgLoadProgram wrapper), and the resulting failure is described simply as "D3D failure happened."
It's part of a pixel shader compiled with the shader model set to 3.0.
What may be interesting is that this shader loads fine in the following cases:
1) Manually unrolling the while statement (to many if { } statements).
2) Removing the line with the tex2D function in the loop.
3) Switching to shader model 2_X and manually unrolling the loop.
Problem part of the shader code:
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
float h = 1.0 - tex2D(height_texture1, tex);
float height = 1.00;
while (h < height)
{
    height -= 0.1;
    tex += dtex;
    // Remove the next line and it works (not as expected, of course)
    h = tex2D(height_texture1, tex);
}
If someone knows why this can happen, or could test similar code in a non-Cg environment, or could help me in some other way, I'm waiting for you ;)
Thanks.
I think you need to determine the gradients before the loop using ddx/ddy on the texture coordinates, and then use tex2D(sampler2D samp, float2 s, float2 dx, float2 dy).
The GPU always renders quads, not pixels (even on pixel borders; the superfluous pixels are discarded by the render backend). This is done because it allows the GPU to always calculate the screen-space texture derivatives, even when you use calculated texture coordinates: it just takes the difference between the values at the pixel centers.
But this doesn't work when using dynamic branching like the code in the question does, because the shader processors at the individual pixels can diverge in control flow. So you need to calculate the derivatives manually via ddx/ddy before the program flow can diverge.
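Applied to the code from the question, a minimal sketch of that fix (untested against the poster's setup; the gradient overload of tex2D is the one named in the first answer):
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
// Take the screen-space derivatives BEFORE any divergent control flow.
float2 dx = ddx(tex);
float2 dy = ddy(tex);
float h = 1.0 - tex2D(height_texture1, tex, dx, dy);
float height = 1.00;
while (h < height)
{
    height -= 0.1;
    tex += dtex;
    h = tex2D(height_texture1, tex, dx, dy);  // explicit gradients in the loop
}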