I am porting some legacy OpenGL code to WebAssembly (using Emscripten). The code uses OpenGL 1.1.
I have two vertex formats, and I swap between them (depending on whether I'm doing 2D drawing or 3D drawing). Here are the two formats:
struct Vertex2DRef
{
float mX;
float mY;
float mZ;
unsigned int mDiffuse;
float mTextureU;
float mTextureV;
};
struct Vertex2DNRef
{
float mX;
float mY;
float mZ;
float mNX;
float mNY;
float mNZ;
float mTextureU;
float mTextureV;
};
This is old, legacy code, so I'm just drawing straight from memory buffers. When I'm drawing 2D, I do this (don't mind the XGL wrapping; it's just a define that also reports if the call produces errors):
XGL(glEnableClientState(GL_VERTEX_ARRAY));
XGL(glEnableClientState(GL_COLOR_ARRAY));
XGL(glEnableClientState(GL_TEXTURE_COORD_ARRAY));
XGL(glDisableClientState(GL_NORMAL_ARRAY));
int aStride=sizeof(Vertex2DRef);
Vertex2DRef* theRef=(Vertex2DRef*)theBuffer; // <- The incoming vertices are in "theBuffer"
XGL(glVertexPointer(3,GL_FLOAT,aStride,&theRef->mX));
XGL(glNormalPointer(GL_FLOAT,aStride,NULL));
XGL(glColorPointer(4,GL_UNSIGNED_BYTE,aStride,&theRef->mDiffuse));
XGL(glTexCoordPointer(2,GL_FLOAT,aStride,&theRef->mTextureU));
XGL(glDrawArrays(GL_TRIANGLES,0,theTriangleCount*3));
So the above works... I get my renders exactly as I expect.
HOWEVER: When I try to draw the other vertex format, the one with normals, but no diffuse:
XGL(glEnableClientState(GL_VERTEX_ARRAY));
XGL(glDisableClientState(GL_COLOR_ARRAY));
XGL(glEnableClientState(GL_TEXTURE_COORD_ARRAY));
XGL(glEnableClientState(GL_NORMAL_ARRAY));
int aStride=sizeof(Vertex2DNRef);
Vertex2DNRef* theRef=(Vertex2DNRef*)theBuffer;
XGL(glVertexPointer(3,GL_FLOAT,aStride,&theRef->mX));
XGL(glNormalPointer(GL_FLOAT,aStride,&theRef->mNX) );
XGL(glColorPointer(4,GL_UNSIGNED_BYTE,aStride,NULL));
XGL(glTexCoordPointer(2,GL_FLOAT,aStride,&theRef->mTextureU));
XGL(glDrawArrays(GL_TRIANGLES,0,theTriangleCount*3));
Now, I get nothing at all. That glDrawArrays line produces two errors: GL_INVALID_OPERATION and GL_INVALID_VALUE. The reasons the documentation lists for those errors don't even apply (I've confirmed that theTriangleCount is not negative, and I don't use glBegin or glEnd anywhere in the program at all).
Can anyone help?
A quick summary:
I have a simple quadtree-based terrain rendering system that builds terrain patches, which then sample a heightmap in the vertex shader to determine the height of each vertex.
The exact same calculation is done on the CPU for object placement and the like.
Super straightforward, but after adding some systems to procedurally place objects, I've discovered that they seem to be misplaced by just a small amount. To debug this I render a few crosses as single models over the terrain. The crosses (red, green, blue lines) represent the height read on the CPU, while the terrain mesh uses a shader to translate the vertices.
(I've also added a simple odd/even gap over each height value to rule out a simple offset issue. So those ugly cliffs are expected, the submerged crosses are the issue)
I'm explicitly using GL_NEAREST to be able to display the "raw" height value:
As you can see the crosses are sometimes submerged under the terrain instead of representing its exact height.
The heightmap is just a simple array of floats on the CPU and on the GPU.
How the data is stored
A simple vector<float> which is uploaded into a GL_RGB32F GL_FLOAT buffer. The floats are not normalized and my terrain usually contains values between -100 and 500.
How is the data accessed in the shader
I've tried a few things to rule out errors; the initial version:
vec2 terrain_heightmap_uv(vec2 position, Heightmap heightmap)
{
return (position + heightmap.world_offset) / heightmap.size;
}
float terrain_read_height(vec2 position, Heightmap heightmap)
{
return textureLod(heightmap.heightmap, terrain_heightmap_uv(position, heightmap), 0).r;
}
Basics of the vertex shader (the full shader code is very long, so I've extracted the part that actually reads the height):
void main()
{
vec4 world_position = a_model * vec4(a_position, 1.0);
vec4 final_position = world_position;
// snap vertex to grid
final_position.x = floor(world_position.x / a_quad_grid) * a_quad_grid;
final_position.z = floor(world_position.z / a_quad_grid) * a_quad_grid;
final_position.y = terrain_read_height(final_position.xz, heightmap);
gl_Position = projection * view * final_position;
}
To rule out the slightly different way the position is determined, I also tested it using hardcoded values identical to how the C++ code reads the height:
return texelFetch(heightmap.heightmap, ivec2((position / 8) + vec2(1024, 1024)), 0).r;
Which gives the exact same result...
How is the data accessed in the application
In C++ the height is read like this:
inline float get_local_height_safe(uint32_t x, uint32_t y)
{
// this macro simply clips x and y to the heightmap bounds
// it does not interfere with the result
BB_TERRAIN_HEIGHTMAP_BOUND_XY_TO_SAFE;
uint32_t i = (y * _size1d) + x;
return buffer->data[i];
}
inline float get_height_raw(glm::vec2 position)
{
position = position + world_offset;
uint32_t x = static_cast<int>(position.x);
uint32_t y = static_cast<int>(position.y);
return get_local_height_safe(x, y);
}
float BB::Terrain::get_height(const glm::vec3 position)
{
return heightmap->get_height_raw({position.x / heightmap_unit_scale, position.z / heightmap_unit_scale});
}
What have I tried:
Comparing the Buffers
I've dumped the first few hundred values from the vector and compared them with the floating-point buffer uploaded to the GPU using Nvidia Nsight. They are equal; no rounding/precision errors there.
Sampling method
I've tried texture, textureLod and texelFetch to rule out some issue there, they all give me the same result.
Rounding
The super strange thing: when I round all the height values, they are perfectly aligned, which just screams floating-point precision issues.
Position snapping
I've tried rounding, flooring and ceiling the position, to ensure the position always maps to the same texel. I also tried adding an epsilon offset to rule out a positional precision error (probably stupid because the terrain is stable...)
Heightmap sizes
I've tried various heightmaps, also of different sizes.
Heightmap patterns
I've created a heightmap containing a pattern to ensure the position is not just offset.
I'm attempting to incorporate a pathfinding algorithm I made into my code, but I'm running into a problem. I am trying to be flexible and allow data sets of different lengths, and then draw the points using OpenGL. My problem is that for the points I am using an array of pointers to accomplish the variable length, and OpenGL doesn't like that when converting data types. The function glVertex2i() wants GLint for its two parameters, but when I try to convert my array to GLint I get a blank window. I understand GLint is a typedef, but it won't take the regular int from the array. Please help!
struct Points { int x, y; }; //My struct to hold the x,y cords
int size; //This is the size of the array
Points *crds = new Points[size]; //The data for this array was input in another function
for (int i = 0; i < size; i++)
{
//These are some things to help configure the look of the points
glEnable(GL_POINT_SMOOTH);
glPointSize(100);
glColor3f(250, 250, 250);
glBegin(GL_POINTS);
for (int i = 0; i < size; i++)
{
glVertex2i((GLint)crds[i].x, (GLint)crds[i].y);
}
glEnd();
use GLint for the coords
Because int is not guaranteed to be 32-bit: it differs from compiler to compiler and from platform to platform, and can be 16/32/64 bits these days. Your solution should work too, but if you use GLint then you do not need the (GLint) cast in glVertex, and you can also use the vector version like this:
GLint pnt[100][2];
glVertex2iv(pnt[i]);
or like this:
GLint pnt[100<<1];
glVertex2iv(&pnt[i<<1]);
But the real problem lies in the following points...
matrices
We do not see any matrices, nor the range of your points. OpenGL uses identity (unit) matrices by default, which means your points should be in <-1,+1> to be visible, which is not practical for integers. So if your points are in pixels and your screen resolution is xs,ys, you should add this before your render:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0,-1.0,0.0);
glScalef(2.0/float(xs),2.0/float(ys),0.0);
glColor3f(250,250,250)
The floating-point range of colors in OpenGL is <0.0,1.0>, so you are setting the wrong colors. Try this instead:
glColor3f(1.0,1.0,1.0);
glPointSize(100)
100 is too big, as the size is in pixels. Try:
glPointSize(8);
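Putting those fixes together, a minimal sketch of the corrected render code could look like this (just a sketch, assuming xs,ys hold your window resolution and crds,size are the point array from your question):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0f,-1.0f,0.0f);
glScalef(2.0f/float(xs),2.0f/float(ys),1.0f); // map pixel coordinates to <-1,+1>
glEnable(GL_POINT_SMOOTH);
glPointSize(8.0f);                            // point size in pixels
glColor3f(1.0f,1.0f,1.0f);                    // colors are in <0.0,1.0>
glBegin(GL_POINTS);
for (int i = 0; i < size; i++)
    glVertex2i((GLint)crds[i].x,(GLint)crds[i].y);
glEnd();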
That is all I can think of that could be wrong... Look here:
Drawing a line using individual pixels in OpenGl core
The related Q&A contains a working example for both the old and new API.
I had some fun making my first shaders, and my first test subject was a picture drawn on a 100x100 grid of quad faces.
I thought I would learn how to use GL_TRIANGLE_STRIP, so I switched to it and moved one of the vertex calls so it would look square again. When I turned my shader on, there was a duplicate right behind it, only one face in size, but with the entire texture on it. I have only one set of draw calls for this shape....
Here's my shape code:
glBegin(GL_TRIANGLE_STRIP);
float vx;
float vy;
for(float x=0; x<100; x++){
for(float y=0; y<100; y++){
float vx=x/5.0;
float vy=y/5.0;
glTexCoord2f(0.01*x, 0.01*y);
glVertex3f(vx, vy, 0);
glTexCoord2f(0.01+0.01*x, 0.01*y);
glVertex3f(.2+vx, vy, 0);
glTexCoord2f(0.01*x, 0.01+0.01*y);
glVertex3f(vx, .2+vy, 0);
glTexCoord2f(0.01+0.01*x, 0.01+0.01*y);
glVertex3f(.2+vx, .2+vy, 0);
}}
glEnd();
And my (vertex) shader code:
uniform float uTime,uWaveintensity,uWavespeed;
uniform float uZwave1,uZwave2,uXwave,uYwave;
void main(){
vec4 position = gl_Vertex;
gl_TexCoord[0] = gl_MultiTexCoord0;
position.z=((sin(position.x+uTime*uWavespeed)*uZwave1)+(sin(position.y+uTime*uWavespeed))*uZwave2)*uWaveintensity;
position.x=position.x+(sin(position.x+uTime*uWavespeed)*uXwave)*uWaveintensity;
position.y=position.y+(sin(position.y+uTime*uWavespeed)*uYwave)*uWaveintensity;
gl_Position = gl_ModelViewProjectionMatrix * position;
}
If anyone has any info on drawing more efficiently with shared vertices (triangle strips), I want to know. I've googled, but I don't understand any of it so far XD.
screenshot(s):
with 8x8 faces
same thing same angle,lines=ghost
I see what's happening now, but I don't know how to fix it.
I don't think you can create a 100x100 quad plane with triangle strips this way. Right now you're going by rows and columns in just one direction, which means that the last 2 vertices of the first row will create a triangle with the first vertex of the second row, and that's not what you want.
I'd suggest you start with a 2x2 pattern just to learn how triangle strips work, then move to 3x3 and 4x4 to see the difference between odd and even situations. Once you have some understanding of the problems, you can create a universal algorithm and change your size to 100.
After all this you can focus on the vertex shader to make it wave.
And for the future: never start from big data if you're learning how the things work. :)
EDIT:
Since I wrote this answer I have learned that you already CAN make a two-dimensional grid with one tri-strip, using degenerate triangles :).
When a triangle uses the same vertex twice, it will be ignored by the rasterizer during rendering, so at the end of your first strip you can create a degenerate triangle using the last vertex of the first strip and the first vertex of the second strip. It doesn't matter which of the two vertices you use as the 3rd one, as long as they are in the correct order (e.g. 1,1,2 or 1,2,2). This way you've created a triangle that won't be drawn, but it moves the next 'starting' point to the beginning of your 2nd strip, where you can continue building your mesh.
The drawback is that you create some triangles that will be transformed but not drawn (there won't be many of them); the advantage is that you issue just one 'draw strip' command to the GPU, which is much faster.
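As a rough illustration of the idea (a sketch only, not code from the original answer), here is how the index list for a grid of width x height vertices could be built with degenerate triangles between the rows:
#include <vector>

// Builds one triangle-strip index list covering a grid of width x height vertices.
// Between rows, the last index of the current row pair and the first index of the
// next row pair are repeated, producing degenerate (invisible) triangles that join
// the per-row strips into one.
std::vector<unsigned int> buildGridStrip(int width, int height)
{
    std::vector<unsigned int> indices;
    for (int row = 0; row < height - 1; ++row)
    {
        if (row > 0)
            indices.push_back(row * width);                     // repeat first index of this row pair
        for (int col = 0; col < width; ++col)
        {
            indices.push_back(row * width + col);               // vertex in the current row
            indices.push_back((row + 1) * width + col);         // vertex in the next row
        }
        if (row < height - 2)
            indices.push_back((row + 1) * width + (width - 1)); // repeat last index of this row pair
    }
    return indices;
}
// Draw with:
// glDrawElements(GL_TRIANGLE_STRIP, (GLsizei)indices.size(), GL_UNSIGNED_INT, indices.data());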
I am using Nvidia CG and Direct3D9 and have a question about the following code.
It compiles, but doesn't load (using a cgLoadProgram wrapper), and the resulting failure is described simply as "D3D failure happened".
It's part of a pixel shader compiled with the shader model set to 3.0.
What may be interesting is that this shader loads fine in the following cases:
1) Manually unrolling the while statement (to many if { } statements).
2) Removing the line with the tex2D function in the loop.
3) Switching to shader model 2_X and manually unrolling the loop.
Problem part of the shader code:
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
float h = 1.0 - tex2D(height_texture1, tex);
float height = 1.00;
while ( h < height )
{
height -= 0.1;
tex += dtex;
// Remove the next line and it works (not as expected,
// of course)
h = tex2D( height_texture1, tex );
}
If someone knows why this can happen, or could test similar code in a non-CG environment, or could help me in some other way, I'm waiting for you ;)
Thanks.
I think you need to determine the gradients before the loop, using ddx/ddy on the texture coordinates, and then use tex2D(sampler2D samp, float2 s, float2 dx, float2 dy).
The GPU always renders quads, not pixels (even on pixel borders; superfluous pixels are discarded by the render backend). This is done because it allows the GPU to always calculate the screen-space texture derivatives, even when you use calculated texture coordinates: it just takes the difference between the values at the pixel centers.
But this doesn't work when using dynamic branching like the code in the question, because the shader processors at the individual pixels can diverge in control flow. So you need to calculate the derivatives manually via ddx/ddy before the program flow can diverge.
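In CG terms, a rough sketch of that change to the snippet from the question (a sketch only, not tested against your setup) could look like this:
float2 tex = float2(1, 1);
float2 dtex = float2(0.01, 0.01);
// take the screen-space derivatives while control flow is still uniform
float2 dx = ddx(tex);
float2 dy = ddy(tex);
float h = 1.0 - tex2D(height_texture1, tex, dx, dy).r;
float height = 1.00;
while ( h < height )
{
    height -= 0.1;
    tex += dtex;
    // explicit gradients, so no implicit derivative is needed inside divergent flow
    h = tex2D(height_texture1, tex, dx, dy).r;
}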
Imagine the following scenario: you have a set of RPG character spritesheets in PNG format and you want to use them in an OpenGL application.
The separate characters are (usually) 16 by 24 pixels in size (that is, 24 pixels tall), and the sheet may be of any width and height, with no padding. Kinda like this:
(source: kafuka.org)
I already have the code to determine an integer-based clipping rectangle given a frame index and size:
int framesPerRow = sheet.Width / cellWidth;
int framesPerColumn = sheet.Height / cellHeight;
framesTotal = framesPerRow * framesPerColumn;
int left = frameIndex % framesPerRow;
int top = frameIndex / framesPerRow;
//Clipping rect's width and height are obviously cellWidth and cellHeight.
Running this code with frameIndex = 11, cellWidth = 16, cellHeight = 24 would return a cliprect (32, 24)-(48, 48) assuming it's Right/Bottom opposed to Width/Height.
The actual question
Now, given a clipping rectangle and an X/Y coordinate to place the sprite on, how do I draw this in OpenGL? Having the zero coordinate in the top left is preferred.
You have to start thinking in "texture space" where the coordinates are in the range [0, 1].
So if you have a sprite sheet:
class SpriteSheet {
int spriteWidth, spriteHeight;
int texWidth, texHeight;
int tex;
public:
SpriteSheet(int t, int tW, int tH, int sW, int sH)
: tex(t), texWidth(tW), texHeight(tH), spriteWidth(sW), spriteHeight(sH)
{}
void drawSprite(float posX, float posY, int frameIndex);
};
All you have to do is submit both vertices and texture vertices to OpenGL:
void SpriteSheet::drawSprite(float posX, float posY, int frameIndex) {
const float verts[] = {
posX, posY,
posX + spriteWidth, posY,
posX + spriteWidth, posY + spriteHeight,
posX, posY + spriteHeight
};
const float tw = float(spriteWidth) / texWidth;
const float th = float(spriteHeight) / texHeight;
const int numPerRow = texWidth / spriteWidth;
const float tx = (frameIndex % numPerRow) * tw;
const float ty = (frameIndex / numPerRow + 1) * th;
const float texVerts[] = {
tx, ty,
tx + tw, ty,
tx + tw, ty + th,
tx, ty + th
};
// ... Bind the texture, enable the proper arrays
glVertexPointer(2, GL_FLOAT, 0, verts);
glTexCoordPointer(2, GL_FLOAT, 0, texVerts);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
}
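As a usage sketch (the numbers here are made up; textureId is assumed to be a texture you have already created, and the texture must be bound and the client arrays enabled before drawing, as the comment in drawSprite notes):
SpriteSheet sheet(textureId, 256, 96, 16, 24); // 256x96 sheet of 16x24 sprites
sheet.drawSprite(100.0f, 50.0f, 11);           // draw frame 11 at (100, 50)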
Frank's solution is already very good.
Just a (very important) side note, since some of the comments suggested otherwise.
Please don't ever use glBegin/glEnd.
Don't ever tell someone to use it.
The only time it is OK to use glBegin/glEnd is in your very first OpenGL program.
Arrays are not much harder to handle, but...
... they are faster.
... they will still work with newer OpenGL versions.
... they will work with GLES.
... loading them from files is much easier.
I'm assuming you're learning OpenGL and only need to get this to work somehow. If you need raw speed, there are shaders and vertex buffers and all sorts of both neat and complicated things.
The simplest way is to load the PNG into a texture (assuming you have the ability to load images into memory; you do need that), then draw it with a quad, setting appropriate texture coordinates (they go from 0 to 1 with floating-point coordinates, so you need to divide by the texture width or height accordingly).
Use glBegin(GL_QUADS), glTexCoord2f(), glVertex2f(), glEnd() for the simplest (but not fastest) way to draw this.
For making zero the top left, either use gluOrtho2D() to set up the view matrix differently from normal GL (look up the docs for that function; set top to 0 and bottom to 1, or screen_height if you want integer coords), or just change your drawing loop to do glVertex2f(x/screen_width, 1-y/screen_height).
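For example, a rough sketch of that approach (names like screen_width, screen_height, x, y, w, h and the tex* clip values are placeholders; the texture is assumed to be loaded and bound already):
// Pixel coordinates with (0,0) in the top-left corner.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, screen_width, screen_height, 0); // left, right, bottom, top
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(texX,        texY);        glVertex2f(x,     y);
glTexCoord2f(texX + texW, texY);        glVertex2f(x + w, y);
glTexCoord2f(texX + texW, texY + texH); glVertex2f(x + w, y + h);
glTexCoord2f(texX,        texY + texH); glVertex2f(x,     y + h);
glEnd();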
There are better and faster ways to do this, but this is probably one of the easiest if you're learning raw OpenGL from scratch.
A suggestion, if I may. I use SDL to load my textures, so what I did is:
1. I loaded the texture.
2. I determined how to separate the spritesheet into separate sprites.
3. I split them into separate surfaces.
4. I made a texture for each one (I have a sprite class to manage them).
5. I freed the surfaces.
This takes more time (obviously) on loading, but pays off later.
This way it's a lot easier (and faster), as you only have to calculate the index of the texture you want to display, and then display it. Then, you can scale/translate it as you like and call a display list to render it to whatever you want. Or, you could do it in immediate mode, either works :)
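A rough sketch of steps 3-5 with SDL (just an illustration: sheetSurface, cellWidth, cellHeight, col and row are placeholder names, the sheet is assumed to be 32-bit RGBA, and error checking is omitted):
// Cut one cell out of the sheet surface and upload it as its own GL texture.
SDL_Rect src;
src.x = col * cellWidth;
src.y = row * cellHeight;
src.w = cellWidth;
src.h = cellHeight;

// Scratch surface for the single sprite (the RGBA masks may need adjusting
// for your platform/loader).
SDL_Surface* cell = SDL_CreateRGBSurface(SDL_SWSURFACE, cellWidth, cellHeight, 32,
                                         0x000000FF, 0x0000FF00, 0x00FF0000, 0xFF000000);
// If the sheet has per-pixel alpha, disable alpha blending on it first so the
// pixels are copied instead of blended.
SDL_BlitSurface(sheetSurface, &src, cell, NULL);

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, cellWidth, cellHeight, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, cell->pixels);

SDL_FreeSurface(cell); // the pixel data now lives in the GL texture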