I'm having trouble understanding the core concept of spaces in OpenGL. I've been reading an online book on modern 3D graphics for a couple of weeks now, and I often find myself confused by all of the spaces used in a program, specifically: model space, world space, camera space, and clip space. I can't seem to wrap my mind around the order in which I should be transforming from one space into another. Here's an example from one of my tutorial programs:
//.vert shader of a program
#version 330
layout(location = 0) in vec4 position;
uniform mat4 cameraToClipMatrix;
uniform mat4 worldToCameraMatrix;
uniform mat4 modelToWorldMatrix;
void main()
{
vec4 temp = modelToWorldMatrix * position;
temp = worldToCameraMatrix * temp;
gl_Position = cameraToClipMatrix * temp;
}
cameraToClip, worldToCamera, XtoY, ZtoQ... how can I get an understanding of these spaces in OpenGL? Websites? Videos? References? Or should I just go back and re-read the information on these spaces in the tutorial until it sticks in my brain?
I really don't know how to explain it any better than I did, especially when the matrices are named about as clearly as they can be.
Think of a matrix like a function. A function has inputs and it returns a value. You must pass the correct input or your compiler will complain.
Consider these functions:
Float intToFloat(Int i);
Double floatToDouble(Float f);
Real doubleToReal(Double d);
Where Int, Float, Double, and Real are user-defined C++ types.
Let's say I need to write this function:
Real intToReal(Int i);
So all I have is an Int. Of the above functions, there is exactly one function I can call: intToFloat. The name says it all: it takes an int and turns it into a float. Therefore, given an Int, the only thing I can do with it is call intToFloat.
Int i = ...;
Float f = intToFloat(i);
Well, now I have a Float. There is again only one function I can call: floatToDouble.
Double d = floatToDouble(f);
And with that, I can only call doubleToReal. Which means our intToReal function is:
Real intToReal(Int i)
{
Float f = intToFloat(i);
Double d = floatToDouble(f);
return doubleToReal(d);
}
Just like the matrix example.
The most important thing that a Vertex Shader does is transform positions from their original space (called model space) to the space that OpenGL defines called clip space. That's job #1 for most vertex shaders.
The matrices are just like those functions, converting the position into intermediate spaces along the way.
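For instance, the shader from the question can be rewritten with the intermediate spaces named explicitly; each matrix's output becomes the next one's input, exactly like the intToReal chain above:
#version 330
layout(location = 0) in vec4 position;   // model space
uniform mat4 modelToWorldMatrix;
uniform mat4 worldToCameraMatrix;
uniform mat4 cameraToClipMatrix;
void main()
{
    vec4 worldPos  = modelToWorldMatrix  * position;    // model space  -> world space
    vec4 cameraPos = worldToCameraMatrix * worldPos;     // world space  -> camera space
    gl_Position    = cameraToClipMatrix  * cameraPos;    // camera space -> clip space
}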
There's no way to answer this question in advance: what teaches one person well enough to grok the concepts won't do the same for everybody. My best advice is to learn to be a 3D modeler before you become a 3D programmer; that's what I did. Once you have a good familiarity with visualizing the data, you can form mental models more easily and code with them in mind. And when you need further visualizations to help you create algorithms, you'll be able to create them without using code.
I use oglplus, a C++ wrapper for OpenGL.
I have a problem with defining instanced data for my particle renderer: positions work fine, but something goes wrong when I want to instance a bunch of ints from the same VBO.
I am going to skip some of the implementation details so as not to make this problem more complicated. Assume that I bind the VAO and VBO before the operations described.
I have an array of structs (called "Particle") that I upload like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(Particle) * numInstances, newData, GL_DYNAMIC_DRAW);
Definition of the struct:
struct Particle
{
float3 position;
//some more attributes, 9 floats in total
//(...)
int fluidID;
};
I use a helper function to define the OpenGL attributes like this:
void addInstancedAttrib(const InstancedAttribDescriptor& attribDesc, GLSLProgram& program, int offset=0)
{
//binding and some implementation details
//(...)
oglplus::VertexArrayAttrib attrib(program, attribDesc.getName().c_str());
attrib.Pointer(attribDesc.getPerVertVals(), attribDesc.getType(), false, sizeof(Particle), (void*)offset);
attrib.Divisor(1);
attrib.Enable();
}
I add the attributes for positions and fluid IDs like this:
InstancedAttribDescriptor posDesc(3, "InstanceTranslation", oglplus::DataType::Float);
this->instancedData.addInstancedAttrib(posDesc, this->program);
InstancedAttribDescriptor fluidDesc(1, "FluidID", oglplus::DataType::Int);
this->instancedData.addInstancedAttrib(fluidDesc, this->program, (int)offsetof(Particle,fluidID));
Vertex shader code:
uniform vec3 FluidColors[2];
in vec3 InstanceTranslation;
in vec3 VertexPosition;
in vec3 n;
in int FluidID;
out float lightIntensity;
out vec3 sphereColor;
void main()
{
//some typical MVP transformations
//(...)
sphereColor = FluidColors[FluidID];
gl_Position = projection * vertexPosEye;
}
This code as a whole produces the following output: the particles are arranged the way I wanted them to be, which means that the InstanceTranslation attribute is set up correctly. The group of particles on the left has a FluidID value of 0 and the group on the right a value of 1. The second group has proper positions but indexes improperly into the FluidColors array.
What I know:
It's not a problem with the way I set up the FluidColors uniform. If I hard-code the color selection in the shader like this:
sphereColor = FluidID == 0 ? FluidColors[0] : FluidColors[1];
then I get the correct colors for both groups.
OpenGL returns GL_NO_ERROR from glGetError so there's no problem with the enums/values I provide
It's not a problem with the offsetof macro. I tried using hard-coded values and they didn't work either.
It's not a compatibility issue with GLint; I use plain 32-bit ints (checked with sizeof(int)).
I need to use FluidID as an instanced attribute that indexes into the color array. Otherwise, if I were to set the color for a particle group as a simple vec3 uniform, I'd have to batch particles of the same type (with the same FluidID) together first, which means sorting them, and that would be too costly an operation.
To me, this seems to be an issue of how you set up the fluidID attribute pointer. Since you use the type int in the shader, you must use glVertexAttribIPointer() to set up the attribute pointer. Attributes you set up with the normal glVertexAttribPointer() function work only for float-based attribute types. They accept integer input, but the data will be converted to float when the shader accesses them.
In oglplus, you apparently have to use VertexArrayAttrib::IPointer() instead of VertexArrayAttrib::Pointer() if you want to work with integer attributes.
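For reference, the raw GL calls behind that look roughly like this; addFluidIDAttrib and the glGetAttribLocation lookup are just placeholders for however your helper resolves the attribute, and the VAO/VBO are assumed bound as in the question:
//sketch: set up FluidID as a true integer instanced attribute
void addFluidIDAttrib(GLuint program)
{
    GLuint loc = (GLuint)glGetAttribLocation(program, "FluidID");
    //the 'I' variant keeps the data integral instead of converting it to float
    glVertexAttribIPointer(loc, 1, GL_INT, sizeof(Particle),
                           (const void*)offsetof(Particle, fluidID));
    glVertexAttribDivisor(loc, 1);   //advance once per instance
    glEnableVertexAttribArray(loc);
}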
I have the following vertex shader:
#version 330
layout (location = 0) in vec3 Position;
uniform mat4 gWVP;
out vec4 Color;
void main()
{
gl_Position = gWVP * vec4(Position, 1.0);
}
How can I get, for example, the third value of vec3? My first thought was: "Maybe I can get it by multiplying this vector (Position) by something?" But I am not sure that something like a "vertical vector type" exists.
So, what is the best way? I need this value to set the color of the pixel.
There are at least 4 options:
You can access vector components with component names x, y, z, w. This is mostly used for vectors that represent points/vectors. In your example, that would be Position.z.
You can use component names r, g, b, a. This is mostly used for vectors that represent colors. In your example, you could use Position.b, even though that would not be very readable. On the other hand, Color.b would be a good option for the other variable.
You can use component names s, t, p, q. This is mostly used for vectors that represent texture coordinates. In our example, Position.p would also give you the 3rd component.
You can use the subscript notation with 0-based indices. In your example, Position[2] also gives you the 3rd component.
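For example, with the names from the question's shader, the third component could drive the output color like this (any of the four notations reads the same value):
void main()
{
    gl_Position = gWVP * vec4(Position, 1.0);
    float depth = Position.z;            //same value as Position.b, Position.p, Position[2]
    Color = vec4(vec3(depth), 1.0);      //grayscale built from the 3rd component
}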
Each vector has overloaded access to elements. In this case, using Position.z should work.
I wrote a simple Sphere Tracer in Processing (Java) and am porting it to WebGL / GLSL. When I wrote it in Processing I had a base class Shape and would extend it for specific shapes such as Box, Plane, Sphere, etc. Each specific shape had members that were relevant to it, for example Sphere instances had a radius, Box instances had a length vector, etc. In addition each had a shape specific distance function.
Unfortunately I cannot use classes like this in GLSL and so I made a single struct that can represent any shape (I refer to it as Object below):
struct Object {
vec3 pos, len, nDir;
float rad;
} objects[4];
Then I wrote a distance function for each kind of shape:
float boxSignedDist(Object inBox, vec3 inPos) {
vec3 boxDelta = abs(inPos-inBox.pos)-inBox.len;
return min(max(boxDelta.x, max(boxDelta.y, boxDelta.z)), 0.0)+length(max(boxDelta, 0.0));
}
float planeSignedDist(Object inPlane, vec3 inPos) {
return dot(inPos-inPlane.pos, inPlane.nDir);
}
float roundBoxUnsignedDist(Object inRoundBox, vec3 inPos) {
return length(max(abs(inPos-inRoundBox.pos)-inRoundBox.len, 0.0))-inRoundBox.rad;
}
float sphereSignedDist(Object inSphere, vec3 inPos) {
return length(inPos-inSphere.pos)-inSphere.rad;
}
Now I have run into a different problem: wrapping shape-specific distance functions with another function, such as a rotation. It is not obvious how to do this efficiently in GLSL. I added a member to Object, int type, and then made a few #defines for each shape I support at the moment:
#define BOX_SIGNED 1
#define PLANE_SIGNED 2
#define ROUNDBOX_UNSIGNED 3
#define SPHERE_SIGNED 4
struct Object {
int type;
vec3 pos, len, nDir;
float rad;
} objects[4];
So that now I can write a rotation wrapper to a distance function like this:
float rotateY(Object inObject, vec3 inPos, float inRadians) {
inPos -= inObject.pos;
inObject.pos = vec3(0.0, 0.0, 0.0);
float cRad = cos(inRadians);
float sRad = sin(inRadians);
if (inObject.type == BOX_SIGNED)
return boxSignedDist(inObject, vec3(cRad*inPos.x-sRad*inPos.z, inPos.y, cRad*inPos.z+sRad*inPos.x));
else if (inObject.type == PLANE_SIGNED)
return planeSignedDist(inObject, vec3(cRad*inPos.x-sRad*inPos.z, inPos.y, cRad*inPos.z+sRad*inPos.x));
else if (inObject.type == ROUNDBOX_UNSIGNED)
return roundBoxUnsignedDist(inObject, vec3(cRad*inPos.x-sRad*inPos.z, inPos.y, cRad*inPos.z+sRad*inPos.x));
else if (inObject.type == SPHERE_SIGNED)
return sphereSignedDist(inObject, vec3(cRad*inPos.x-sRad*inPos.z, inPos.y, cRad*inPos.z+sRad*inPos.x));
else
return 0.0;
}
It seems ridiculous that this would be necessary. Is there a better way to do it? It would be nice if rotateY could receive a function pointer and just call the appropriate function instead of going through all the else-ifs.
GLSL is quite a limited language really. The compiler does a great job at optimizing certain things but isn't perfect.
A few things to remember:
Local memory is expensive, both to declare and to access.
Dynamically indexed arrays are put in local memory.
Arrays and objects are padded to align to 16 byte boundaries. An int[4] array takes the same memory as a vec4[4] array. Your Object should group vec3s with floats.
There's no such thing as a function call. Everything is inlined.
Arguments passed to functions are copied in and copied out. The compiler doesn't always optimize out these copies when the functions are inlined. Keep as much global as possible.
Switch statements don't have real jumps; they are expanded to nested if-statements.
Divergence is a tricky thing to optimize out. Your if (inObject.type == ...) chain could be improved by constructing the rotated inPos beforehand, but I can't see a way around the if-statements. Perhaps you could write permutations of the function for each object type (or use macros) and trace the types in batches separately?
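A sketch of that first suggestion, reusing the code from the question: build the rotated position once, then branch only on which distance function to call.
float rotateY(Object inObject, vec3 inPos, float inRadians) {
    inPos -= inObject.pos;
    inObject.pos = vec3(0.0, 0.0, 0.0);
    float cRad = cos(inRadians);
    float sRad = sin(inRadians);
    //rotate about Y once, outside the branches
    vec3 rotPos = vec3(cRad*inPos.x - sRad*inPos.z, inPos.y, cRad*inPos.z + sRad*inPos.x);
    if (inObject.type == BOX_SIGNED)
        return boxSignedDist(inObject, rotPos);
    else if (inObject.type == PLANE_SIGNED)
        return planeSignedDist(inObject, rotPos);
    else if (inObject.type == ROUNDBOX_UNSIGNED)
        return roundBoxUnsignedDist(inObject, rotPos);
    else if (inObject.type == SPHERE_SIGNED)
        return sphereSignedDist(inObject, rotPos);
    else
        return 0.0;
}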
You might get some good ideas looking at what people have written for https://www.shadertoy.com/.
Finally, GLSL subroutines have a similar intent as function pointers, but are used on a global scale for all shader executions and won't help here.
Does the order and/or size of shader in/out variables make any difference in memory use or performance? For example, are these:
// vert example:
out vec4 colorRadius;
// tess control example:
out vec4 colorRadius[];
// frag example:
in smooth vec4 colorRadius;
equivalent to these:
// vert example:
out vec3 color;
out float radius;
// tess control example:
out vec3 color[];
out float radius[];
// frag example:
in smooth vec3 color;
in smooth float radius;
Is there any additional cost with the second form or will the compiler pack them together in memory and treat them exactly the same?
The compiler could pack these things together. But it doesn't have to, and there's little evidence that compilers commonly do this. So the top version will at least be no slower than the bottom version.
At the same time, this is a micro-optimization. Unless you know that this is a bottleneck, just let it go. It's better to write clear, easily understood code and optimize it once you know where your problems are than to optimize up front without knowing whether it will ever be a concern.
Hey.
I'm new to OpenGL ES, but I've had my share of experience with normal OpenGL.
I've been told that using interleaved arrays for the vertex buffers is a lot faster because it avoids cache misses.
I've defined the vertex format that I will use, and it looks like this:
struct SVertex
{
float x,y,z;
float nx,ny,nz;
float tx,ty,tz;
float bx,by,bz;
float tu1,tv1;
float tu2,tv2;
};
Then I used glVertexAttribPointer(index, 3, GL_FLOAT, GL_FALSE, stride, v); to point to the vertex array. The index is that of the attribute I want to use, and everything else is OK except the stride. It worked before I added the stride into the equation. I passed the stride both as sizeof(SVertex) and as 13*4, but neither seems to work.
If it matters, I draw the primitives like this: glDrawElements(GL_TRIANGLES, surface->GetIndexCount()/3, GL_UNSIGNED_INT, surface->IndPtr());
In the OpenGL specs it's written that the stride should be the size in bytes from the end of one attribute (in this case z) to the next attribute of the same kind (in this case x). So by my calculations this should be 13 (nx, ny, nz, tx, ty, ..., tu2, tv2) times 4 (the size of a float).
Oh, and one more thing: the display is just empty.
Could anyone please help me with this?
Thanks a lot.
If you have a structure like this, then the stride is just sizeof(SVertex), and it's the same for every attribute. There's nothing complicated here.
If this didn't work, look for your error somewhere else.
For example here:
surface->GetIndexCount()/3
This parameter should be the number of indices to be sent, not the number of primitives, so I'd say the division by three is wrong. Leave it as:
surface->GetIndexCount()
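For reference, with that struct the six attribute pointers could be set up like this; the attribute indices 0-5 are assumptions, so match them to your shader, and v points at the first vertex as in your snippet:
//sketch: interleaved attribute setup for SVertex, one stride for everything
void setVertexPointers(const SVertex* v)
{
    const GLsizei stride = sizeof(SVertex);   //16 floats = 64 bytes
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, &v[0].x);    //position
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, &v[0].nx);   //normal
    glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, stride, &v[0].tx);   //tangent
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, stride, &v[0].bx);   //bitangent
    glVertexAttribPointer(4, 2, GL_FLOAT, GL_FALSE, stride, &v[0].tu1);  //texcoord set 1 (2 floats!)
    glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, stride, &v[0].tu2);  //texcoord set 2
    for (GLuint i = 0; i < 6; ++i)
        glEnableVertexAttribArray(i);
}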
You said you used glVertexAttribPointer(index, 3, GL_FLOAT, GL_FALSE, stride, v); for every attribute, changing only the index. Passing a size of 3 does not work for the texcoords (you have 2x 2 floats, or 1x 4 floats).
About the stride, like Kos said, I think you should pass a stride of 16 * sizeof(float) (the size of your SVertex).
Another thing worth mentioning: you say you want to optimize for performance, so why don't you compress your vertex to the max and drop redundant values? This would save a lot of bandwidth.
x, y, z are OK, but nx and ny are self-sufficient if your normals are normalized (which may be the case); you can reconstruct nz in the vertex shader (assuming you have shader capabilities). The same thing applies to tx and ty. You don't need bx, by, bz at all, since the bitangent is the cross product of the normal and the tangent. See the sketch after the struct below.
struct SPackedVertex
{
float x,y,z,w; //pack all on vector4
float nx,ny,tx,ty;
float tu1,tv1,tu2,tv2;
};
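A sketch of the reconstruction on the shader side; posW, normTan and mvp are hypothetical names matching SPackedVertex, and the z components are assumed non-negative (otherwise store their signs somewhere, e.g. in posW.w):
//GLSL ES 1.00 vertex shader sketch
attribute vec4 posW;      //(x, y, z, w)
attribute vec4 normTan;   //(nx, ny, tx, ty)
uniform mat4 mvp;
//rebuild a unit vector from its x and y; the sign of z is assumed positive
vec3 unpack(vec2 xy)
{
    return vec3(xy, sqrt(max(1.0 - dot(xy, xy), 0.0)));
}
void main()
{
    vec3 normal    = unpack(normTan.xy);
    vec3 tangent   = unpack(normTan.zw);
    vec3 bitangent = cross(normal, tangent);   //bx, by, bz never need to be stored
    //...lighting math using normal/tangent/bitangent goes here...
    gl_Position = mvp * vec4(posW.xyz, 1.0);
}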