I am fairly new to OpenGL and am trying to achieve instancing using uniform arrays. However, the number of instances I am trying to render is larger than the MAX_UNIFORM_LOCATIONS limit:
QOpenGLShader::link: error: count of uniform locations > MAX_UNIFORM_LOCATIONS(262148 > 98304)
error: Too many vertex shader default uniform block components
error: Too many vertex shader uniform components
What other approaches are there that will work with that large a number of objects? So far, this is my shader code:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform vec3 positions[262144];
void main() {
vec3 t = vec3(positions[gl_InstanceID].x, positions[gl_InstanceID].y, positions[gl_InstanceID].z);
float val = 0;
mat4 wm = myMatrix * mat4(1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, t.x, t.y, t.z, 1) * worldMatrix;
color = vec3(0.4, 1.0, 0);
vert = vec3(wm * vertex);
vertNormal = mat3(transpose(inverse(wm))) * normal;
gl_Position = projMatrix * camMatrix * wm * vertex;
}
In case it matters, I am using QOpenGLExtraFunctions.
There are many alternatives for overcoming the limitations of uniform storage:
UBOs, for example; they usually have a larger storage capacity than non-block uniforms. In your case, though, that probably won't work, since storing 200,000+ vec4s will require more storage than most implementations allow UBOs to provide. What you need is effectively unbounded storage.
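If in doubt about what your implementation allows, you can query the relevant limits at runtime; a minimal sketch (the last two queries also cover the alternatives discussed below, and require GL 3.1 and GL 4.3 respectively):
GLint maxUBOSize = 0, maxTexBufSize = 0, maxSSBOSize = 0;
glGetIntegerv(GL_MAX_UNIFORM_BLOCK_SIZE, &maxUBOSize);         // bytes; required minimum is only 16KB
glGetIntegerv(GL_MAX_TEXTURE_BUFFER_SIZE, &maxTexBufSize);     // texels; required minimum is 64K
glGetIntegerv(GL_MAX_SHADER_STORAGE_BLOCK_SIZE, &maxSSBOSize); // bytes; required minimum is 16MB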
Instanced Arrays
Instanced arrays use the instanced rendering mechanism to automatically fetch vertex attributes based on the instance index. This requires that your VAO setup change a bit.
Your shader would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec3 position;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
void main() {
vec3 t = position;
/*etc*/
}
Here, the shader itself never uses gl_InstanceID. That happens automatically based on your VAO.
That setup code would have to include the following:
glBindBuffer(GL_ARRAY_BUFFER, buffer_containing_instance_data);
glEnableVertexAttribArray(2); // the attribute array must be enabled as well
glVertexAttribPointer(2, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), 0);
glVertexAttribDivisor(2, 1);
This code assumes that the instance data is at the start of the buffer and is 3 floats-per-value (tightly packed). Since you're using vertex attributes, you can use the usual vertex attribute compression techniques on them.
The last call, to glVertexAttribDivisor, is what tells OpenGL that it will only move to the next value in the array once per instance, rather than once per vertex.
Note that by using instanced arrays, you also gain the ability to use the baseInstance glDraw* calls. The baseInstance in OpenGL is only respected by instanced arrays; gl_InstanceID is never affected by it.
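For completeness, a sketch of the corresponding draw calls (vao, vertex_count, instance_count, and first_instance are illustrative names):
glBindVertexArray(vao);
// attribute 2 advances once per instance, thanks to the divisor of 1
glDrawArraysInstanced(GL_TRIANGLES, 0, vertex_count, instance_count);
// or, with GL 4.2+: a base instance offsets the instanced-array fetch (gl_InstanceID is NOT affected)
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, vertex_count, instance_count, first_instance);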
Buffer Textures
Buffer textures are linear, one-dimensional textures that get their data from a buffer object's storage.
Your shader logic would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
uniform samplerBuffer positions;
void main() {
vec3 t = texelFetch(positions, gl_InstanceID).xyz;
/*etc*/
}
Buffer textures can only be accessed via the direct texel fetching functions like texelFetch.
Buffer textures in GL 4.x can use a few 3-channel formats, but earlier GL versions don't give you that option (not without an extension). So you may want to expand your data to four-channel values rather than three.
Another problem is that buffer textures have a maximum size limitation, and the required minimum is only 64K texels. So the instanced-array method will probably be more reliable (since it has no size restriction). That said, all non-Intel OpenGL implementations report a huge size limit for buffer textures.
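The CPU-side setup for the buffer-texture version might look roughly like this (a sketch; GL_RGBA32F is chosen because of the 3-channel caveat above, and posBuffer/posTexture/positions_data are illustrative names):
GLuint posBuffer, posTexture;
glGenBuffers(1, &posBuffer);
glBindBuffer(GL_TEXTURE_BUFFER, posBuffer);
glBufferData(GL_TEXTURE_BUFFER, instance_count * 4 * sizeof(GLfloat), positions_data, GL_STATIC_DRAW);
glGenTextures(1, &posTexture);
glActiveTexture(GL_TEXTURE1); // unit 1; unit 0 is presumably taken by "sampler"
glBindTexture(GL_TEXTURE_BUFFER, posTexture);
glTexBuffer(GL_TEXTURE_BUFFER, GL_RGBA32F, posBuffer); // the texture reads the buffer's storage
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "positions"), 1);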
SSBOs
Shader storage buffer objects are like UBOs, only you can both read and write to them. The writing part isn't important for you here. The main advantage is that their minimum required size limit is 16MB (and implementations generally return a size limit on the order of available video memory), so size limits aren't a problem.
Your shader code would look like this:
layout(location = 0) in vec4 vertex;
layout(location = 1) in vec3 normal;
out vec3 vert;
out vec3 vertNormal;
out vec3 color;
uniform mat4 projMatrix;
uniform mat4 camMatrix;
uniform mat4 worldMatrix;
uniform mat4 myMatrix;
uniform sampler2D sampler;
layout(std430, binding = 0) buffer PositionSSBO
{
vec4 positions[];
};
void main() {
vec3 t = positions[gl_InstanceID].xyz;
/*etc*/
}
Note that we explicitly use a vec4 here. That's because you should never use vec3 in a buffer-backed interface block (i.e. UBO/SSBO): both std140 and std430 align a vec3 to 16 bytes, so it gets padded like a vec4 and the CPU-side layout becomes easy to get wrong.
In code, SSBOs work much like UBOs. You bind them for use with glBindBufferBase or glBindBufferRange.
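A minimal sketch of that CPU side, assuming the block was declared with binding = 0 as above (ssbo and positions_data are illustrative names):
GLuint ssbo;
glGenBuffers(1, &ssbo);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
glBufferData(GL_SHADER_STORAGE_BUFFER, instance_count * 4 * sizeof(GLfloat), positions_data, GL_STATIC_DRAW);
// bind the whole buffer to SSBO binding point 0, matching layout(binding = 0) in the shader
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
// or bind a sub-range instead:
// glBindBufferRange(GL_SHADER_STORAGE_BUFFER, 0, ssbo, offset, size);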
How do I assign an explicit uniform location when I want to use the uniform in different shader stages of the same program?
When automatic assignment is used, uniforms in different stages are assigned to the same location when the identifiers match. But how can I define the location in the shader using the
layout (location = ...)
syntax?
The following quote is from:
https://www.opengl.org/wiki/Uniform_(GLSL)/Explicit_Uniform_Location
It is illegal to assign the same uniform location to two uniforms in the same shader or the same program. Even if those two uniforms have the same name and type, and are defined in different shader stages, it is not legal to explicitly assign them the same uniform location; a linker error will occur.
The following quote is from the GLSL spec:
No two default-block uniform variables in the program can have the same location,
even if they are unused, otherwise a compile-time or link-time error will be generated.
I'm using OpenGL 4.3.
After a lot of reading through the code, I figured out that the uniform is unused.
That leads to the following situation: on a GTX 780, the following code runs without problems (although it seems it shouldn't). On an Intel HD 5500 onboard graphics chip, the code produces a SHADER_ID_LINK error at link time, according to the GL_ARB_debug_output extension. It states that the uniform location overlaps another uniform.
Vertex Shader:
#version 430 core
layout(location = 0) in vec4 vPosition;
layout(location = 2) in vec4 vTexCoord;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
out vec4 fPosition;
out vec4 fTexCoord;
void main() { ... }
Fragment Shader:
#version 430 core
in vec4 fPosition;
in vec4 fTexCoord;
layout(location = 0) out vec4 Albedo;
layout(location = 1) out vec4 Normal;
layout(location = 0) uniform mat4 WorldMatrix; // <-- unused in both stages
layout(location = 1) uniform mat4 InverseViewProjectionMatrix;
layout(location = 2) uniform samplerCube Cubemap;
void main() { ... }
However, when the uniform is used, no problems occur. Assuming I interpret the GLSL spec correctly, this is not how it's supposed to work, although it is exactly how I would like it to function.
Still, there is the problem of overlapping uniforms, when the uniform is not used.
See the complete GL+VAO/VBO+GLSL+shaders example in C++.
Extracted from that example:
On the GPU side:
#version 400 core
layout(location = 0) in vec3 pos;
You need to specify the GLSL version to use this. Explicit attribute layout locations are core since GLSL 3.30 (GL_ARB_explicit_attrib_location), so for 400+ they will work for sure. The VBO pos is set to location 0.
On the CPU side:
// globals
GLuint vbo[4]={-1,-1,-1,-1};
GLuint vao[4]={-1,-1,-1,-1};
const GLfloat vao_pos[]=
{
// x y z //ix
-1.0,-1.0,-1.0, //0
+1.0,-1.0,-1.0, //1
+1.0,+1.0,-1.0, //2
-1.0,+1.0,-1.0, //3
-1.0,-1.0,+1.0, //4
+1.0,-1.0,+1.0, //5
+1.0,+1.0,+1.0, //6
-1.0,+1.0,+1.0, //7
};
// init
GLuint i;
glGenVertexArrays(4,vao);
glGenBuffers(4,vbo);
glBindVertexArray(vao[0]);
i=0; // VBO location
glBindBuffer(GL_ARRAY_BUFFER,vbo[i]);
glBufferData(GL_ARRAY_BUFFER,sizeof(vao_pos),vao_pos,GL_STATIC_DRAW);
glEnableVertexAttribArray(i);
glVertexAttribPointer(i,3,GL_FLOAT,GL_FALSE,0,0);
When you attach data to a location, you need to set the same layout location for it in every shader that uses it within a single stage. You cannot assign the same location to more than one VBO at once (that is what the quote you copied is about).
By a single stage, I mean a single set of glDrawArrays/glDrawElements calls without changing the shader setup. If you have more than one shader program (more than one combination of fragment/vertex/geometry shaders...), then the locations can be set differently for each stage, but inside each stage all of its shaders must share the same location setup.
You can assume a single stage starts with each glUseProgram(prog_id); call and ends with glUseProgram(0); or with another stage start...
[edit2] Here are the uniforms for non-nVidia drivers.
vertex shader:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
{
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
fragment shader:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
void main()
{
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
col=vec4(c,1.0);
}
These are the shaders from the linked example, rewritten with layout locations used for the uniforms. You have to add:
#extension GL_ARB_explicit_uniform_location : enable
for the 400 profile to make it work.
On the CPU side, use glGetUniformLocation as usual:
id=glGetUniformLocation(prog_id,"lt_pnt_pos"); glUniform3fv(id,1,lt_pnt_pos);
id=glGetUniformLocation(prog_id,"lt_pnt_col"); glUniform3fv(id,1,lt_pnt_col);
id=glGetUniformLocation(prog_id,"lt_amb_col"); glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_model" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=glGetUniformLocation(prog_id,"m_normal" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=glGetUniformLocation(prog_id,"m_view" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=glGetUniformLocation(prog_id,"m_proj" ); glUniformMatrix4fv(id,1,GL_FALSE,m);
Or use the defined locations directly:
id=64; glUniform3fv(id,1,lt_pnt_pos);
id=67; glUniform3fv(id,1,lt_pnt_col);
id=70; glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id= 0; glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=16; glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=32; glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=48; glUniformMatrix4fv(id,1,GL_FALSE,m);
It looks like the nVidia compiler handles the locations differently. In case it does not work properly, try this workaround for buggy drivers, which reserves a different number of locations per data type:
1 location: float, int, bool
2 locations: double
3 locations: vec3
4 locations: vec4
6 locations: dvec3
8 locations: dvec4
9 locations: mat3
16 locations: mat4
etc.
I am writing some font drawing shaders in OpenGL 3.3. I will render my font into a texture atlas and then generate some display lists for some text I want to draw. I would like the rendering of text to consume the least amount of resources (CPU, GPU memory, GPU time). How can I accomplish this?
Looking at Freetype-gl, I noticed that the author generates 6 indices and 4 vertices per character.
Since I am using OpenGL 3.3, I have some additional freedom. My plan was to generate 1 vertex per character plus one integer "code" per character. The character code can be used in texelFetch operations to retrieve texture coördinates and character size information. A geometry shader turns the size information and vertex into a triangle strip.
Is texelFetch going to be slower than sending more vertices/texture coördinates? Is this worth doing, or is there a reason why it's not done in the font libraries I looked at?
Final code:
Vertex shader:
#version 330
uniform sampler2D font_atlas;
uniform sampler1D code_to_texture;
uniform mat4 projection;
uniform vec2 vertex_offset; // in view space.
uniform vec4 color;
uniform float gamma;
in vec2 vertex; // vertex in view space of each character adjusted for kerning, etc.
in int code;
out vec4 v_uv;
void main()
{
v_uv = texelFetch(
code_to_texture,
code,
0);
gl_Position = projection * vec4(vertex_offset + vertex, 0.0, 1.0);
}
Geometry shader:
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform sampler2D font_atlas;
uniform mat4 projection;
in vec4 v_uv[];
out vec2 g_uv;
void main()
{
vec4 pos = gl_in[0].gl_Position;
vec4 uv = v_uv[0];
vec2 size = vec2(textureSize(font_atlas, 0)) * (uv.zw - uv.xy);
vec2 pos_opposite = pos.xy + (mat2(projection) * size);
gl_Position = vec4(pos.xy, 0, 1);
g_uv = uv.xy;
EmitVertex();
gl_Position = vec4(pos.x, pos_opposite.y, 0, 1);
g_uv = uv.xw;
EmitVertex();
gl_Position = vec4(pos_opposite.x, pos.y, 0, 1);
g_uv = uv.zy;
EmitVertex();
gl_Position = vec4(pos_opposite.xy, 0, 1);
g_uv = uv.zw;
EmitVertex();
EndPrimitive();
}
Fragment shader:
#version 330
uniform sampler2D font_atlas;
uniform vec4 color;
uniform float gamma;
in vec2 g_uv;
layout (location = 0) out vec4 fragment_color;
void main()
{
float a = texture(font_atlas, g_uv).r;
fragment_color.rgb = color.rgb;
fragment_color.a = color.a * pow(a, 1.0 / gamma);
}
I wouldn't expect a significant performance difference between your proposed method and storing the quad vertex positions and texture coordinates in a vertex buffer. On the one hand, your method requires a smaller vertex buffer and less work for the CPU. On the other hand, the texelFetch calls will be at more-or-less random locations and will not make the best use of the cache. This last point may not be very significant, as I guess that texture won't be very large. Also, the execution model of geometry shaders means they can quickly become the bottleneck of the pipeline.
To answer "is this worth doing?" - I suspect not for performance reasons. Unfortunately you can't tell until you implement it and measure the performance. I think it's quite a cool idea though, so I don't think you'd be wasting your time trying it out.
Maybe you can use an atomic counter to handle the current position in the text.
Here is an interesting paper on memory bandwidth:
GPU perf...
You can cache the result in an FBO.
For really fast rendering, as you said, you may build a geometry shader taking points as input and outputting quads, sampling a texture to get additional per-glyph info.
That appears to be effectively the best solution...
My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
gl_Position = ftransform();
texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
vec3 colour = vec3(grab.xyz * m_thresh);
gl_FragColor = vec4( colour, 0.5 );
}
Basically, I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing, and I get no error when I compile that one, and it works!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
gl_Position = ftransform();
//texCoord = gl_MultiTexCoord0.st; // Same as comment above
gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
//vec4 grab = texture2D (grabTexture, texCoord.st);
vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
vec3 colour = grab.xyz * m_thresh;
gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
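For reference, a minimal sketch of the same shader pair in modern GLSL (3.30 core), where you declare your own attribute and varying instead; the mvp uniform stands in for the removed fixed-function matrix stack:
// Vertex (GLSL 3.30 core)
#version 330 core
layout (location = 0) in vec4 position; // replaces gl_Vertex / ftransform ()
layout (location = 1) in vec2 uv;       // replaces gl_MultiTexCoord0
uniform mat4 mvp;                       // replaces the fixed-function matrices
out vec2 texCoord;
void main (void)
{
  gl_Position = mvp * position;
  texCoord    = uv;
}
// Fragment (GLSL 3.30 core)
#version 330 core
uniform float m_thresh;
uniform sampler2D grabTexture;
in vec2 texCoord;
out vec4 fragColor; // replaces gl_FragColor
void main (void)
{
  vec4 grab = texture (grabTexture, texCoord);
  fragColor = vec4 (grab.rgb * m_thresh, 0.5);
}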
Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you are having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture to the pixels and light them with one directional light (according to the material properties and the light direction).
Vertex shader:
#version 330
// Vertex input layout ("in" replaces the deprecated "attribute" keyword in GLSL 3.30)
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output (names must match the fragment shader inputs)
out vec3 position; // world-space position, needed by the lighting
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
void main()
{
// transform position
vec4 pos = vec4(inPosition, 1.0f);
pos = mtxWorld * pos;
position = pos.xyz; // pass the world-space position on for the lighting
pos = mtxView * pos;
pos = mtxProj * pos;
gl_Position = pos;
// just pass-through other stuff
normal = inNormal;
tangent = inTangent;
bitangent = inBitangent;
texcoord = inTexcoord;
vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka; // ambient quotient
float Kd; // diffuse quotient
float Ks; // specular quotient
float A; // shininess
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
vec3 V = normalize( eye - pos );
vec3 R = reflect( lit, nor);
float Ia = material.Ka;
float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
float Is = material.Ks * pow( clamp(dot(R,V), 0.0f, 1.0f), material.A );
return Ia + Id + Is;
}
void main()
{
vec3 nnormal = normalize(normal);
vec3 ntangent = normalize(tangent);
vec3 nbitangent = normalize(bitangent);
vec4 outColor = texture(sampler0, texcoord); // texture mapping
outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
outColor.w = 1.0f;
fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to vertexColor instead.
If you don't need lighting, just remove the Lit() call.
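The std140 uniform blocks above also need to be wired up on the CPU side; a minimal sketch for CameraBuffer (the binding point and struct layout are illustrative, and the std140 padding rules must be respected, e.g. the vec3 is padded to 16 bytes):
// CPU-side mirror of the std140 CameraBuffer block
struct CameraData
{
    float mtxView[16];
    float mtxProj[16];
    float cameraPosition[3];
    float pad; // std140 pads the vec3 to 16 bytes
};
CameraData cameraData; // fill in matrices and position before uploading
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(CameraData), &cameraData, GL_DYNAMIC_DRAW);
// associate the block with binding point 0, then attach the buffer there
GLuint blockIndex = glGetUniformBlockIndex(program, "CameraBuffer");
glUniformBlockBinding(program, blockIndex, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);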
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), and all of the code related to the unneeded attributes
inPosition can be a vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels; see my answer here about how to transform those pixel positions to screen-space positions (a GLSL sketch follows this list). You can do it either in C/C++ code or in GLSL/HLSL.
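A minimal GLSL sketch of that pixel-to-clip-space transform (the viewportSize uniform and the top-left pixel origin are assumptions):
#version 330
layout(location = 0) in vec2 inPositionPx; // vertex position in pixels
uniform vec2 viewportSize;                 // e.g. (800.0, 600.0)
void main()
{
    // pixels -> [0,1] -> [-1,+1] clip space, with y flipped so (0,0) is the top-left corner
    vec2 ndc = (inPositionPx / viewportSize) * 2.0 - 1.0;
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);
}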
Hope it helps somehow.
Intro
You've not specified the OpenGL/GLSL version that you are targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to introduce such constraints as a fixed vertex format. (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need a vertex shader for each vertex format you have, or you'll have to somehow generate vertex shaders on the fly, possibly even for all of the shader stages.
For example, for an x, y, color, tu, tv input you will have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
...
}
As you don't have the fixed-function transform, lighting, and material functionality in OpenGL 3, you must implement it yourself:
You must pass matrices for transformations
For lit shader you must pass additional variables, such as light direction
For material shader you must have materials in input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka;
float Kd;
float Ks;
float A;
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
You can probably combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
Another way
You can stick to the modern approach and just allow the user to declare the vertex format he wants (the format he uses in his shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract out the vertex layout in an API-independent way.
The main ideas are (a rough sketch follows the list):
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as an IDirect3DVertexDeclaration9 for D3D9, an ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info
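A rough C++ sketch of that idea mapped onto OpenGL (the struct and function are illustrative, not a real API):
// API-independent element description, in the spirit of D3D11_INPUT_ELEMENT_DESC
struct VertexElement
{
    GLuint  location;   // attribute slot used in the shader
    GLint   components; // 1..4
    GLenum  type;       // e.g. GL_FLOAT
    GLsizei offset;     // byte offset of the element inside one vertex
};
// interpret the elements and bake them into a VAO (the GL-specific "internal info")
GLuint BuildVAO(GLuint vbo, const VertexElement* elems, int count, GLsizei stride)
{
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (int i = 0; i < count; ++i)
    {
        glEnableVertexAttribArray(elems[i].location);
        glVertexAttribPointer(elems[i].location, elems[i].components, elems[i].type,
                              GL_FALSE, stride, (const void*)(uintptr_t)elems[i].offset);
    }
    glBindVertexArray(0); // "setting the vertex format" later is just glBindVertexArray(vao)
    return vao;
}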
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms here), you'd be better off picking up a book or online tutorials than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!