Our application crashes on old Nvidia drivers.
The debug code is below.
Looking around, I found that this is often said to be due to an incorrect vertex attribute setup.
This is how I set up my VBO and VAO:
/**
 * Init Vbo/vao.
 */
float[] vertexData = new float[]{
        0, 0,
        1, 0,
        1, 1};
debugVbo = new int[1];
gl3.glGenBuffers(1, debugVbo, 0);
gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, debugVbo[0]);
{
    FloatBuffer buffer = GLBuffers.newDirectFloatBuffer(vertexData);
    gl3.glBufferData(GL3.GL_ARRAY_BUFFER, vertexData.length * Float.BYTES, buffer, GL3.GL_STATIC_DRAW);
}
gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, 0);
debugVao = new int[1];
gl3.glGenVertexArrays(1, debugVao, 0);
gl3.glBindVertexArray(debugVao[0]);
{
    gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, debugVbo[0]);
    {
        gl3.glEnableVertexAttribArray(0);
        {
            gl3.glVertexAttribPointer(0, 2, GL3.GL_FLOAT, false, 0, 0);
        }
    }
    gl3.glBindBuffer(GL3.GL_ARRAY_BUFFER, 0);
}
gl3.glBindVertexArray(0);
}
And this is how I render:
public static void render(GL3 gl3) {
    gl3.glClear(GL3.GL_DEPTH_BUFFER_BIT | GL3.GL_COLOR_BUFFER_BIT);
    gl3.glUseProgram(textureProgram);
    {
        gl3.glBindVertexArray(debugVao[0]);
        {
            gl3.glActiveTexture(GL3.GL_TEXTURE0);
            gl3.glBindTexture(GL3.GL_TEXTURE_2D, texture[0]);
            gl3.glBindSampler(0, EC_Samplers.pool[EC_Samplers.Id.clampToEdge_nearest_0maxAn.ordinal()]);
            {
                gl3.glDrawArrays(GL3.GL_TRIANGLES, 0, 3);
            }
            gl3.glBindTexture(GL3.GL_TEXTURE_2D, 0);
            gl3.glBindSampler(0, 0);
        }
        gl3.glBindVertexArray(0);
    }
    gl3.glUseProgram(0);
}
This is my VS:
#version 330
layout (location = 0) in vec2 position;
uniform mat4 modelToCameraMatrix;
uniform mat4 cameraToClipMatrix;
out vec2 fragmentUV;
void main()
{
gl_Position = cameraToClipMatrix * modelToCameraMatrix * vec4(position, 0, 1);
fragmentUV = position;
}
And my FS:
#version 330
in vec2 fragmentUV;
out vec4 outputColor;
uniform sampler2D textureNode;
void main()
{
outputColor = texture(textureNode, fragmentUV);
}
I have read and re-read the same code for two days now, and I can't find anything wrong.
I also tried defining a stride of 2*4 = 8, but the outcome is the same.
I can't believe it.
The problem lay somewhere else, in the code where I was initializing my samplers:
public static void init(GL3 gl3) {
    pool = new int[Id.size.ordinal()];
    gl3.glGenSamplers(Id.size.ordinal(), pool, 0);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_WRAP_S, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_WRAP_T, GL3.GL_CLAMP_TO_EDGE);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MIN_FILTER, GL3.GL_NEAREST);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MAG_FILTER, GL3.GL_NEAREST);
    gl3.glSamplerParameteri(pool[Id.clampToEdge_nearest_0maxAn.ordinal()],
            GL3.GL_TEXTURE_MAX_ANISOTROPY_EXT, 0);
}
The crash was caused by setting the max anisotropy to 0; setting it to 1 resolved the crash.
PS: it should also be glSamplerParameterf instead of glSamplerParameteri, since it is a float value.
Anyway, it is strange, because that code had been there for a long time and never triggered the violation before. Maybe some later code modification changed things in a way that the Nvidia driver could no longer detect the problem and work around it by itself, who knows.
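For reference, a minimal sketch of the corrected call, written as a plain C-style GL call (the JOGL method takes the same arguments); sampler stands in for the sampler object from the pool above:
// Anisotropy is a float with an implementation minimum of 1.0; values below
// that are invalid, which is presumably what the old driver choked on.
glSamplerParameterf(sampler, GL_TEXTURE_MAX_ANISOTROPY_EXT, 1.0f);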
Related
I'm referring to the OpenGL SuperBible. I use their framework to create my own program. I wanted to do something with an interface block (specifically a uniform block). If I call
glGetActiveUniformsiv(program, 1, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
I get an error, namely GL_INVALID_VALUE.
But if I call the same function with a 0 instead of a 1, it doesn't produce that error. I assumed, then, that I have no active uniforms. However, I should have 3 of them.
How do I activate them? Here's my shader:
#version 450 core
layout (location = 0) in vec4 position;
layout (location = 1) in vec4 color;
out vec4 vs_color;
uniform TransformBlock {
mat4 translation;
mat4 rotation;
mat4 projection_matrix;
};
void main(void)
{
mat4 mvp = projection_matrix * translation * rotation ;
gl_Position = mvp * position;
vs_color = color;
}
Here is some code from the startup method:
static const GLchar* uniformNames[3] = {
"TransformBlock.translation",
"TransformBlock.rotation",
"TransformBlock.projection_matrix",
};
GLuint uniformIndices[3];
glUseProgram(program);
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
GLint uniformOffsets[3];
GLint matrixStrides[3];
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_MATRIX_STRIDE, matrixStrides);
unsigned char* buffer1 = (unsigned char*)malloc(4096);
//fill buffer1 in a for-loop
GLuint block_index = glGetUniformBlockIndex(program, "TransformBlock");
glUniformBlockBinding(program, block_index, 0);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, (GLuint)buffer1);
free(buffer1);
However, as a consequence of the function returning GL_INVALID_VALUE, there's an error with calls like:
*((float *)(buffer1 + offset)) = ...
and the whole program crashes. Without adding the offset, I don't get an error here, so I think the second error is a consequence of the first one.
I think it goes wrong at glGetUniformIndices, because you prefixed your uniform names with TransformBlock. You don't use that prefix to access the uniforms in the GLSL code, either. If you wanted that, you would have to give the uniform block an instance name; the block name is not relevant for accessing or naming the uniforms at all. It is only used for matching interfaces when you link together multiple shaders that access the same interface block.
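For illustration, a minimal sketch of the corrected lookup (reusing the variables from the startup code above): query the members by their unprefixed names, or alternatively give the block an instance name in the shader and keep the prefixed strings.
static const GLchar* uniformNames[3] = {
    "translation",           // not "TransformBlock.translation"
    "rotation",
    "projection_matrix",
};
GLuint uniformIndices[3];
glGetUniformIndices(program, 3, uniformNames, uniformIndices);
// Each entry should now be a valid index rather than GL_INVALID_INDEX, so the
// glGetActiveUniformsiv calls no longer receive invalid indices and stop
// reporting GL_INVALID_VALUE.
GLint uniformOffsets[3];
glGetActiveUniformsiv(program, 3, uniformIndices, GL_UNIFORM_OFFSET, uniformOffsets);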
I am currently trying to render text in OpenGL using bitmap files. On its own, the font looks as expected.
Exhibit A:
When adding a separate texture (a picture) in a separate VAO OR more text in the same VAO, "This engine can render text!" still looks the same.
However, when adding both the texture in a separate VAO AND more text in the same VAO, the texture of "This engine can render text!" gets modified.
Exhibit B:
What's really strange to me is that the textures seem to be blended, and that it only affects a few vertices rather than the entire VBO.
Is this a problem with OpenGL/poor drivers, or is it something else? I double-checked the vertices, and the 'his' aren't being rendered with the picture texture active. I am using OSX, which is notorious for poor OpenGL support, if that helps.
My rendering loop:
//scene is just a class that bundles all the necessary information for rendering.
//when rendering text, it is all batched inside of one scene,
//so independent textures of text characters should be impossible
glUseProgram(prgmid);
for(auto& it : scene->getTextures() )
{
//load textures
const Texture::Data* data = static_cast<const Texture::Data*>(it->getData() );
glActiveTexture(GL_TEXTURE0 + it->getID() );
glBindTexture(GL_TEXTURE_2D, it->getID() );
glUniform1i(glGetUniformLocation(prgmid, data->name), it->getID() );
}
for(auto& it : scene->getUniforms() )
{
processUniforms(scene, it, prgmid);
}
glBindVertexArray(scene->getMesh()->getVAO() );
glDrawElements(GL_TRIANGLES, scene->getMesh()->getDrawCount(), GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
Shaders of the text:
//Vertex
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 fontTexCoord;
out vec2 fontTexCoords;
uniform mat4 __projection;
void main()
{
fontTexCoords = vec2(fontTexCoord.x, fontTexCoord.y);
gl_Position = __projection * pos;
}
//Frag
#version 330 core
in vec2 fontTexCoords;
out vec4 color;
uniform sampler2D fontbmp;
void main()
{
color = texture(fontbmp, fontTexCoords);
if(color.rgb == vec3(0.0, 0.0, 0.0) ) discard;
}
Shaders of the picture:
//vert
#version 330 core
layout (location = 0) in vec4 pos;
layout (location = 1) in vec2 texCoord;
out vec2 TexCoords;
uniform mat4 __projection;
uniform float __spriteFrameRatio;
uniform float __spriteFramePos;
uniform float __flipXMult;
uniform float __flipYMult;
void main()
{
TexCoords = vec2(((texCoord.x + __spriteFramePos) * __spriteFrameRatio) * __flipXMult, texCoord.y * __flipYMult);
gl_Position = __projection * pos;
}
//frag
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D __image;
uniform vec4 __spriteColor;
uniform bool __is_texture;
void main()
{
if(__is_texture)
{
color = __spriteColor * texture(__image, TexCoords);
}
else
{
color = __spriteColor;
}
}
EDIT:
I believe the code that is causing the problem has to do with generating the buffers. It's called every time a scene (VAO, VBO, EBO, texture) object is rendered.
if(!REALLOCATE_BUFFER && !ATTRIBUTE_ADDED) return;
glBindVertexArray(_vao);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ebo);
if(REALLOCATE_BUFFER)
{
size_t vsize = _vert.size() * sizeof(decltype(_vert)::value_type);
size_t isize = _indc.size() * sizeof(decltype(_indc)::value_type);
if(_prevsize != vsize)
{
_prevsize = vsize;
glBufferData(GL_ARRAY_BUFFER, vsize, &_vert[0], _mode);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, isize, &_indc[0], _mode);
}
else
{
glBufferSubData(GL_ARRAY_BUFFER, 0, vsize, &_vert[0]);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, isize, &_indc[0]);
}
}
if(ATTRIBUTE_ADDED)
{
for(auto& itt : _attrib)
{
glVertexAttribPointer(itt.index, itt.size, GL_FLOAT, itt.normalized, _currstride * sizeof(GLfloat), (GLvoid*)(itt.pointer * sizeof(GLfloat) ) );
glEnableVertexAttribArray(itt.index);
}
}
glBindVertexArray(0);
When I comment out glBufferSubData so that glBufferData is always called, the problem area flickers and cycles through all the textures, including the ones in other VAOs.
EDIT 2:
For some reason, everything works as expected when the text is rendered with a different usage hint than the picture, say GL_STREAM_DRAW and GL_DYNAMIC_DRAW, for instance. How can this be?
So the thing I messed up was that getDrawCount() was returning the number of VERTICES rather than the number of indices. Astonishingly, this didn't cause OpenGL to throw any errors and fixing it solved the flickering.
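In terms of the buffer code above, the fix amounts to something like the following sketch (assuming getDrawCount() is backed by the _indc index array rather than _vert):
// The count passed to glDrawElements is the number of indices to read from the
// element array buffer, not the number of vertices in the VBO.
GLsizei drawCount = (GLsizei)_indc.size();
glDrawElements(GL_TRIANGLES, drawCount, GL_UNSIGNED_INT, 0);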
Part of the geometry shader code looks like this:
layout(std430) restrict coherent buffer BufferName
{
uint bufferName[][MAX_EXPECTED_ENTRIES];
};
...
void main()
{
...
bufferName[a][b] = someValue;
...
}
Everything works as expected until I add a writeonly qualifier to either BufferName, bufferName, or both.
With the writeonly qualifier I get the following error: error C7586: OpenGL does not allow reading writeonly variable 'bufferName'.
What's going on here?
All I do is write to bufferName, and the spec says that writeonly is allowed. This also happens for one-dimensional arrays within a storage block.
Thanks in advance.
Here's the full shader code (BufferName from the example is now IDsPerVertex):
#version 450
#define MAX_EXPECTED_VERTEX_PRIMITIVE_IDS 10
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
layout(std430) restrict coherent buffer IDsPerVertex
{
uint iDsPerVertex[][MAX_EXPECTED_VERTEX_PRIMITIVE_IDS];
};
layout(std430) restrict coherent buffer Counter
{
uint counter[];
};
in int ID[3]; // contains gl_VertexID
out vec4 debugColor;
uint index[3];
void writeIDsPerVertex()
{
// get the next free location for each vertex
index[0] = atomicAdd(counter[ID[0]], 1u);
index[1] = atomicAdd(counter[ID[1]], 1u);
index[2] = atomicAdd(counter[ID[2]], 1u);
// write the triangle primitive ID to each vertex list
iDsPerVertex[ID[0]][index[0]] = gl_PrimitiveIDIn;
iDsPerVertex[ID[1]][index[1]] = gl_PrimitiveIDIn;
iDsPerVertex[ID[2]][index[2]] = gl_PrimitiveIDIn;
}
void passThrough()
{
for(int i = 0; i < gl_in.length(); i++)
{
gl_Position = projection * view * model * gl_in[i].gl_Position;
debugColor = vec4(1);
EmitVertex();
}
EndPrimitive();
}
void main()
{
writeIDsPerVertex();
passThrough();
}
The full error message my environment gives me:
SHADER PROGRAM 37 LOG
Geometry info
-------------
(0) : error C7586: OpenGL does not allow reading writeonly variable 'iDsPerVertex'
So I have been creating a game, and I want to support OpenGL version 2.1 with shaders. I implemented what I could according to tutorials online, but when I run the game nothing shows up. Note: if the shaders are changed to the latest version I support, everything works fine. (I use VBOs.)
Here are the shaders:
Fragment shader:
uniform sampler2D texture1;
varying vec4 pass_Color;
varying vec2 pass_TextureCoord;
void main(void) {
gl_FragColor = pass_Color;
vec2 texcoord = vec2(pass_TextureCoord.xy);
vec4 color = texture2D(texture1, texcoord) * pass_Color ;
gl_FragColor = color;
}
Vertex shader:
attribute vec4 in_Position;
attribute vec4 in_Color;
attribute vec2 in_TextureCoord;
varying vec4 pass_Color;
varying vec2 pass_TextureCoord;
uniform vec4 cameraPos;
uniform mat4 projection;
void main(void) {
gl_Position = ( (vec4(cameraPos.x*projection[0][0],cameraPos.y*projection[1][1],cameraPos.z*projection[0][0],cameraPos.w*projection[0][0])) + (in_Position * projection)) ;
pass_Color = in_Color;
pass_TextureCoord = in_TextureCoord;
}
Note: in the vertex shader I calculate the position; this is correct for sure, because I use this exact line to calculate the position in the newer shader and it works great.
How I create the VBOs:
vboId = GL15.glGenBuffers();
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
GL15.glBufferData(GL15.GL_ARRAY_BUFFER, vboData, GL15.GL_STATIC_DRAW);
// Put the position coordinates in attribute list 0
GL20.glVertexAttribPointer(GL20.glGetAttribLocation(game.getResourceManager().getShaderProgramID(), "in_Position"), TexturedVertex.positionElementCount, GL11.GL_FLOAT,
false, TexturedVertex.stride, TexturedVertex.positionByteOffset);
// Put the color components in attribute list 1
GL20.glVertexAttribPointer(GL20.glGetAttribLocation(game.getResourceManager().getShaderProgramID(), "in_Color"), TexturedVertex.colorElementCount, GL11.GL_FLOAT,
false, TexturedVertex.stride, TexturedVertex.colorByteOffset);
// Put the texture coordinates in attribute list 2
GL20.glVertexAttribPointer(GL20.glGetAttribLocation(game.getResourceManager().getShaderProgramID(), "in_TextureCoord"), TexturedVertex.textureElementCount, GL11.GL_FLOAT,
false, TexturedVertex.stride, TexturedVertex.textureByteOffset);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
This is how I render (irrelevant parts left out):
GL20.glUseProgram(game.getResourceManager().getShaderProgramID());
//Send the camera location and the projection matrix
int loc3 = GL20.glGetUniformLocation(game.getResourceManager().getShaderProgramID(), "projection");
FloatBuffer buf2 = BufferUtils.createFloatBuffer(16);
GL11.glGetFloat(GL11.GL_PROJECTION_MATRIX,buf2);
GL20.glUniformMatrix4(loc3,false,buf2);
int loc4 = GL20.glGetUniformLocation(game.getResourceManager().getShaderProgramID(), "cameraPos");
GL20.glUniform4f(loc4, game.getGameCamera().getCameraLocation().x*-1*Constants.PHYS_PIXEL_TO_METER_RATIO, game.getGameCamera().getCameraLocation().y*-1*Constants.PHYS_PIXEL_TO_METER_RATIO, 0,1);
////////////////////////////RENDERING\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL20.glEnableVertexAttribArray(0);
GL20.glEnableVertexAttribArray(1);
GL20.glEnableVertexAttribArray(2);
// Bind to the index VBO that has all the information about the order of the vertices
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, VBOIndeciesID);
// Draw the vertices
GL11.glDrawElements(GL11.GL_TRIANGLES,6 , GL11.GL_UNSIGNED_INT, 0);
// Put everything back to default (deselect)
GL15.glBindBuffer(GL15.GL_ELEMENT_ARRAY_BUFFER, 0);
GL20.glDisableVertexAttribArray(0);
GL20.glDisableVertexAttribArray(1);
GL20.glDisableVertexAttribArray(2);
GL20.glUseProgram(0);
EDIT:
I check for errors on almost everything; I left it out because it gave me no output. This is my method for checking for an error:
private void exitOnGLError(String errorMessage) { //Method to check if any opengl errors occured
int errorValue = GL11.glGetError();
if (errorValue != GL11.GL_NO_ERROR) {
String errorString = GLU.gluErrorString(errorValue);
System.err.println("ERROR - " + errorMessage + ": " + errorString);
if (Display.isCreated()) Display.destroy();
System.exit(-1);
}
}
Linking shaders:
loadShader("res/shaders/vert21.glsl",GL20.GL_VERTEX_SHADER);
loadShader("res/shaders/frag21.glsl",GL20.GL_FRAGMENT_SHADER);
shaderProgramID = GL20.glCreateProgram(); //Create a new shader program
for (Integer id : shaders) {
GL20.glAttachShader(shaderProgramID, id); //attach all the custom shaders to the program
}
// Position information will be attribute 0
GL20.glBindAttribLocation(shaderProgramID, 0, "in_Position");
// Color information will be attribute 1
GL20.glBindAttribLocation(shaderProgramID, 1, "in_Color");
// Texture information will be attribute 2
GL20.glBindAttribLocation(shaderProgramID, 2, "in_TextureCoord");
GL20.glLinkProgram(shaderProgramID); //Link the program to lwjgl
GL20.glValidateProgram(shaderProgramID); //Compile and make sure program was setup correctly
GL20.glUseProgram(shaderProgramID); //Use the program so we can pick the texture unit before continuing
setTextureUnit0(shaderProgramID); //pick the unit and set it in the shader to use
Loading and compiling:
StringBuilder shaderSource = new StringBuilder();
int shaderID = 0;
try { //Read shader
BufferedReader reader = new BufferedReader(new InputStreamReader(ClassLoader.getSystemResourceAsStream(filename)));
String line;
while ((line = reader.readLine()) != null) {
shaderSource.append(line).append("\n");
}
reader.close();
} catch (IOException e) {
System.err.println("Could not read file.");
Logger.getGlobal().log(Level.WARNING, e.getMessage(), e);
System.exit(-1);
} catch (NullPointerException e) {
try {
BufferedReader reader = new BufferedReader(new InputStreamReader( new FileInputStream(filename)));
String line;
while ((line = reader.readLine()) != null) {
shaderSource.append(line).append("\n");
}
reader.close();
} catch (IOException e1) {
e1.printStackTrace(); //To change body of catch statement use File | Settings | File Templates.
}
}
shaderID = GL20.glCreateShader(type); //Create shader
GL20.glShaderSource(shaderID, shaderSource); //Link source
GL20.glCompileShader(shaderID); //Compile
//Check if compiled correctly
if (GL20.glGetShaderi(shaderID, GL20.GL_COMPILE_STATUS) == GL11.GL_FALSE) {
System.err.println(shaderID+" | Shader wasn't able to be compiled correctly.");
System.out.println(GL20.glGetShaderInfoLog(shaderID,GL20.glGetShaderi(shaderID,GL20.GL_INFO_LOG_LENGTH)));
}
this.exitOnGLError("loadShader");
I'm checking the logs, and everything compiles fine. The shader looks like this:
Vertex:
#version 330 core
layout(location = 0) in struct InData {
vec3 position;
vec4 color;
} inData;
out struct OutData {
vec3 position;
vec4 color;
} outData;
void main()
{
outData.position = inData.position;
outData.color = inData.color;
}
Fragment:
#version 330 core
in struct InData {
vec2 position;
vec4 color;
} inData;
out vec4 color;
void main(){
color = inData.color;
}
I'm preparing the shader like this:
public Shader(string src, ShaderType type)
{
shaderId = GL.CreateShader(type);
GL.ShaderSource(shaderId, GetShader(src));
GL.CompileShader(shaderId);
EventSystem.Log.Message(GL.GetShaderInfoLog(shaderId));
EventSystem.Log.Message("GLERROR: " + GL.GetError());
packs = new List<ShaderPack>();
}
public void Attach(ShaderPack pack)
{
packs.Add(pack);
GL.AttachShader(pack.ProgramID, shaderId);
EventSystem.Log.Message(GL.GetProgramInfoLog(pack.ProgramID));
EventSystem.Log.Message("GLERROR: " + GL.GetError());
}
Then I compile the shader:
public void Compile()
{
if(program >= 0)
GL.DeleteProgram(program);
program = GL.CreateProgram();
foreach (var s in shaders.Values)
s.Attach(this);
EventSystem.Log.Message(GL.GetProgramInfoLog(program));
EventSystem.Log.Message("GLERROR: " + GL.GetError());
}
And then I'm trying to use it:
mesh = new Mesh();
mesh.AddTriangle(
new Vector3(0, 0, 0), new Vector4(1, 0, 0, 1),
new Vector3(0, sizeY, 0), new Vector4(0, 1, 0, 1),
new Vector3(sizeX, sizeY, 0), new Vector4(0, 0, 1, 1));
mesh.RefreshBuffer();
shaderPack.Apply();
shaderPack.SetVertexAttribute<Mesh.MeshData1>("vertex", 0, mesh.meshData);
EventSystem.Log.Message("GLERROR: " + GL.GetError());
In Apply, GL.UseProgram is called and GetError returns "Invalid Operation".
UPDATE:
Okay I changed the code:
public void Compile()
{
if(program >= 0)
GL.DeleteProgram(program);
program = GL.CreateProgram();
foreach (var s in shaders.Values)
s.Attach(this);
// GL.LinkProgram(program);
//GL.ValidateProgram(program);
GL.ValidateProgram(program);
EventSystem.Log.Message("Validate: " + GL.GetProgramInfoLog(program) + " - " + GL.GetError());
}
public void Apply()
{
GL.UseProgram(program);
EventSystem.Log.Message("GLERROR (Apply): " + GL.GetError());
}
And the output is
[23:25:55][Log]: Validate: - NoError
[23:25:55][Log]: GLERROR (Apply): InvalidOperation
Edit: Okay, I changed the shaders:
#version 330 core
layout(location = 0) in struct InData {
vec3 position;
vec4 color;
} inData;
void main()
{
gl_Position = vec4(inData.position, 1);
}
...
#version 330 core
//in struct InData {
// vec2 position;
// vec4 color;
//} inData;
out vec4 color;
void main(){
color = vec4(1,0,0,1);
}
It compiles without errors, but I have a blank screen...
EDIT: Okay, I suspect the problem lies here:
public void VertexAttribute<T>(int loc, ShaderPack p, T[] dataArray) where T : struct
{
int buf;
GL.GenBuffers(1, out buf);
GL.BindBuffer(BufferTarget.ArrayBuffer, buf);
GL.BufferData<T>(BufferTarget.ArrayBuffer, (IntPtr)(dataArray.Length * Marshal.SizeOf(typeof(T))), dataArray, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer<T>(loc, 2, VertexAttribPointerType.Float, false, 0, ref dataArray[0]);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
}
I'm calling this with an array of the following type:
public struct MeshData1
{
public Vector3 vertex;
public Vector4 color;
public MeshData1(Vector3 vertex, Vector4 color)
{
this.vertex = vertex;
this.color = color;
}
}
And the input looks like this:
#version 330 core
layout(location = 0) in struct InData {
vec3 position;
vec4 color;
} inData;
void main()
{
gl_Position = vec4(inData.position, 1.0);
}
What am I doing wrong?
Two problems immediately come to mind:
1. You never linked the attached shader stages in your program object (most important).
2. The string output by glGetProgramInfoLog (...) is only generated/updated after linking or validating a GLSL program.
To fix this, you should make a call to glLinkProgram (...) after attaching your shaders, and also understand that up until you do this the program info log will be undefined.
glValidateProgram (...) is another way of updating the contents of the program info log. In addition to generating the info log, this function will also return whether your program is in a state suitable for execution or not. The result of this operation is stored in a per-program state called GL_VALIDATE_STATUS.
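A minimal sketch of that flow in plain GL calls (the OpenTK wrappers map onto these directly; vertexShader and fragmentShader stand in for the shader objects attached in Compile()):
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);                         // required before glUseProgram

GLint linked = GL_FALSE;
glGetProgramiv(program, GL_LINK_STATUS, &linked);

glValidateProgram(program);                     // optional; sets GL_VALIDATE_STATUS
GLint valid = GL_FALSE;
glGetProgramiv(program, GL_VALIDATE_STATUS, &valid);

// The info log is only generated/updated by linking or validating.
GLchar infoLog[1024];
glGetProgramInfoLog(program, sizeof(infoLog), NULL, infoLog);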