I have a total of 5 render targets. I use the same shader to write to the first 4, then in a separate pass write to the last one.
Before rendering to the first 4 targets I call:
GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3};
glDrawBuffers(4, drawBuffers);
However, when I call it for the second pass and only want to write to the last one, the 5th target, why does the following give strange results?
GLenum drawBuffers[] = { GL_COLOR_ATTACHMENT4 };
glDrawBuffers(1, drawBuffers);
Why do I have to instead use:
GLenum drawBuffers[] = { GL_NONE, GL_NONE, GL_NONE, GL_NONE, GL_COLOR_ATTACHMENT4 };
glDrawBuffers(5, drawBuffers);
Is this simply how glDrawBuffers() works, or is it being caused by something else?
EDIT: fixed code with regard to Jarrod's comment
Yes, this is simply how glDrawBuffers works. But there's a reason for that.
Fragment shaders write color outputs that map to certain "color numbers". These have no relation to GL_COLOR_ATTACHMENTi numbers. The entire point of glDrawBuffers is to map those fragment color numbers to actual buffers in the framebuffer: entry i of the array you pass routes fragment color number i to that attachment. So with a one-element array only fragment color 0 is routed (to GL_COLOR_ATTACHMENT4), and anything the shader writes to other color numbers goes nowhere, which would explain the strange results if your second-pass shader still writes to color number 4.
http://www.opengl.org/sdk/docs/man/xhtml/glDrawBuffers.xml
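For example, a minimal sketch of the second pass, under the assumption that its fragment shader declares a single output at location 0 (e.g. layout(location = 0) out vec4 outColor):
GLenum secondPassBuffers[] = { GL_COLOR_ATTACHMENT4 }; // fragment color 0 -> attachment 4
glDrawBuffers(1, secondPassBuffers);
If the second-pass shader instead keeps its output at location 4, then the five-element array padded with GL_NONE, exactly as in the question, is the correct form.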
The 2nd parameter must be of the type const GLenum*, i.e. a pointer "to an array of symbolic constants specifying the buffers into which fragment colors or data values will be written".
So just passing GL_COLOR_ATTACHMENT4 as the 2nd param is the wrong type. You need to pass a pointer to an array of GLenum.
I find that there's something wrong in glDrawBuffers(). For example:
GLenum tmp[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT2 };
glDrawBuffers(2, tmp);
In the shader:
gl_FragData[0] = ...;
gl_FragData[2] = ...;
Or you can use a layout(location = ...) qualifier to define the attachment outputs.
But sometimes, at least on my PC, it does not work. I mean GL_COLOR_ATTACHMENT2 does NOT get the expected output.
In this question about growing buffers, someone answers with the following code.
// Bind the old buffer to `GL_COPY_READ_BUFFER`
glBindBuffer (GL_COPY_READ_BUFFER, old_buffer);
// Allocate data for a new buffer
glGenBuffers (1, &new_buffer);
glBindBuffer (GL_COPY_WRITE_BUFFER, new_buffer);
glBufferData (GL_COPY_WRITE_BUFFER, ...);
// Copy `old_buffer_size`-bytes of data from `GL_COPY_READ_BUFFER`
// to `GL_COPY_WRITE_BUFFER` beginning at 0.
glCopyBufferSubData (GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, old_buffer_size);
This is my understanding of the above, in order to copy A to B:
Bind A
Generate B
Bind B
Write null contents to B at larger size
Copy A to B
My problem is that A is the original vertex buffer for the shader, but so is B (since changing the size is the goal). In my code (C# OpenTK) the shader tells me the ID of the buffer that corresponds to a named shader variable via GL.GetActiveAttrib, and I can't find out how to make it use a different buffer than the one it gives.
So I either have to reassign the shader to use B afterwards, or do a double copy:
Bind A
Generate B
Bind B
Write null contents to B at larger size
Copy A to B
Reassign shader to use B
Write null contents to A at larger size (size of B)
Copy B to A
Is it possible to make it use a specific buffer, or to avoid the double copy?
Once you have set the size of the buffer with glBufferData, this size becomes immutable. That's why a copy to another, bigger buffer B is needed.
The good news is that you can re-bind the buffer with a different target. This is what you can do in your buffer manager (it seems you wrongly call it a "shader"; a shader is a program on the GPU). There is no need for a second copy.
GL_COPY_READ_BUFFER and GL_COPY_WRITE_BUFFER targets are useful if you want to keep previous bindings. This is not your case, so the copy is simpler:
//The buffer must be bound before copying it
glBindBuffer(GL_ARRAY_BUFFER, oldBuf); //or GL_ELEMENT_ARRAY_BUFFER or whatever it was
//New buffer
glGenBuffers (1, &newBuf);
//GL_COPY_WRITE_BUFFER is needed because GL_ARRAY_BUFFER is in use by oldBuf
glBindBuffer (GL_COPY_WRITE_BUFFER, newBuf);
glBufferData (GL_COPY_WRITE_BUFFER, newSize, ...);
glCopyBufferSubData (GL_ARRAY_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, old_buffer_size);
Now the copy is done and buffer oldBuf can be safely deleted.
Two things left to do:
a) Tell the buffer manager to use 'newBuf' instead of 'oldBuf'
b) Before using this new buffer, re-bind it to the target you need
glBindBuffer(GL_ARRAY_BUFFER, newBuf);
Well, there's one more thing to do. When the VAO was bound and the "attribute - buffer to read from" association was established by glVertexAttribPointer, it referenced the oldBuf buffer. But now you want another buffer, so you must bind that VAO again, bind the new buffer, and call glVertexAttribPointer again; see the sketch below.
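A minimal sketch of that re-association, assuming a single attribute at location 0 with three tightly packed floats per vertex (adapt the locations, sizes and strides to your actual layout):
glBindVertexArray(vao);                  // the VAO that previously referenced oldBuf
glBindBuffer(GL_ARRAY_BUFFER, newBuf);   // glVertexAttribPointer captures the buffer bound here
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);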
A better solution may be to have a big buffer and use only part of it. Let's say you have a VBO for one million vertices: 1M x 3 floats x 4 bytes = 12 MB. That's a size any not-too-old GPU can handle.
If your data changes in size, but never grows beyond that 1M vertices, then the easiest way is to use glBufferSubData and upload the new data, even if it is just appended data. No new buffer, no copy, no VAO state change. A sketch follows.
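A sketch of that approach, with placeholder names (vbo, newByteCount, newData stand in for your own):
const GLsizeiptr capacity = 1000000 * 3 * sizeof(GLfloat); // room for 1M vec3 vertices, ~12 MB
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, capacity, NULL, GL_DYNAMIC_DRAW); // reserve storage once, no data yet
// ... later, whenever the vertex data changes:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, newByteCount, newData);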
I am using VTK to render a depth image and have set up the connections. However, when it comes to saving the output of the renderer as a PNG file I get the warning that the PNG writer only supports unsigned char and unsigned short inputs.
vtkSmartPointer<vtkPNGWriter> imageWriter = vtkSmartPointer<vtkPNGWriter>::New();
imageWriter->SetInputConnection(zFilter->GetOutputPort());
imageWriter->SetFileName( qPrintable(QString("outdepth_%1.png").arg(i)) );
imageWriter->Write();
This is my code (inside a loop); basically I need something like (unsigned short) zFilter->GetOutputPort() - which of course does not make any sense at all, it is just to clarify what should be cast.
You can use vtkImageCast to cast the image from the scalar type that zFilter produces to unsigned short.
For that purpose you can use vtkImageCast::SetOutputScalarTypeToUnsignedShort().
Your code will then look something like this:
vtkSmartPointer<vtkImageCast> cast = vtkSmartPointer<vtkImageCast>::New();
cast->SetInputConnection(zFilter->GetOutputPort());
cast->SetOutputScalarTypeToUnsignedShort();
vtkSmartPointer<vtkPNGWriter> imageWriter = vtkSmartPointer<vtkPNGWriter>::New();
imageWriter->SetInputConnection(cast->GetOutputPort());
imageWriter->SetFileName( qPrintable(QString("outdepth_%1.png").arg(i)) ); // file name as in your original code
imageWriter->Write();
What's wrong with this code for a 3.30 OpenGL and GLSL version?
const char *vertSrcs[2] = { "#define A_MACRO\n", vShaderSrc };
const char *fragSrcs[2] = { "#define A_MACRO\n", fShaderSrc };
glShaderSource(vId, 2, vertSrcs, NULL);
glShaderSource(fId, 2, fragSrcs, NULL);
I query the shader state with GL_COMPILE_STATUS after it is compiled, and get this error:
Vertex shader failed to compile with the following errors:
The purpose of this macro is to change a type qualifier on a color I pass from the vertex to the fragment shader. Maybe there is another way to do that using a uniform and the layout, but the question is why the shader would fail. I wonder if there is another OpenGL command that must specify the 2 char pointers.
By the way, this works:
...
glShaderSource(vId, 1, &vertSrcs[1], NULL);
...
EDIT: since I can't answer my own question, here is the solution I found.
Very strange problem: the string loader turned out to be the culprit. Without opening the file with ifstream using the std::ios::in | std::ios::binary flags, the string was loaded with some garbage at the end even though it was null-terminated, so concatenating the strings produced the error.
For some reason the GL compiler didn't complain before with the single-string version. Also, calling
inStream.flags(std::ios::in | std::ios::binary)
wasn't enough; the flags need to be specified when opening the file. I didn't find any documentation for this.
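For reference, a minimal sketch of loading the source that way (the file name is illustrative); the resulting std::string is null-terminated and safe to pass on to glShaderSource:
#include <fstream>
#include <string>
#include <iterator>

std::ifstream inStream("shader.vert", std::ios::in | std::ios::binary);
std::string shaderStr((std::istreambuf_iterator<char>(inStream)),
                      std::istreambuf_iterator<char>());
const char* vShaderSrc = shaderStr.c_str(); // valid while shaderStr is alive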
The very first nonempty line of a GLSL shader must be a #version preprocessor statement. Anything else is an error.
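Applied to the question's snippet, one hedged way to respect that rule and still inject the macro is to pass the #version line as its own first string (assuming vShaderSrc here holds the rest of the shader source, without a #version line of its own):
const char *vertSrcs[3] = { "#version 330\n", "#define A_MACRO\n", vShaderSrc };
glShaderSource(vId, 3, vertSrcs, NULL);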
(OS: Windows 7, Compiler: Visual Studio 2010 C++ compiler)
I've got a correctly working OpenGL program that draws some spheres and models, applies some shaders etc.. etc..
Now I thought it would be nice to add some text, so I added the following three lines to my draw method:
glColor3f(0.5f, 0.5f, 0.5f);
glRasterPos2f(0, 0);
glutBitmapString(GLUT_BITMAP_HELVETICA_12, (unsigned char*)"some text");
Now somehow this all makes my program get stuck in an infinite "access violation" loop, which I can't seem to fix. I even commented out all the other draw code, to output just the text, and it still gives the access violation error. I'm at a loss here because nothing seems to affect this. So does anybody have some pointers ;)) on how to fix this issue?
I could post all my draw code, but I even tried an empty project, so I'm pretty sure it's not the rest of the code.
Edit: I tried narrowing down the error even more, and it seems that glRasterPos2f is throwing the access violation (weirdly enough). It's not between any glBegin and glEnd calls, and there is no OpenGL error.
Edit2: After some advice I tried the following code. I got rid of the access violation, but still no text is displayed:
glColor3f(0.5f, 0.5f, 0.5f);
glRasterPos2f(0.0f, 0.0f);
std::string str("Hello World");
char* p = new char[strlen(str.c_str() + 1)];
strcpy(p, str.c_str());
glutBitmapString(GLUT_BITMAP_HELVETICA_12, (const unsigned char*)p);
glutBitmapString takes a const unsigned char*, not an unsigned char*. Would that help? I had problems with string casting as well.
And I found out that I wasn't terminating my string properly, because the char buffer should be 1 byte longer than the string. Here is a snippet that solved it for another method with the same parameter type:
string fullPath = TextureManager::s_sTEXTUREFOLDER + filename;
char *filePath = new char[strlen(fullPath.c_str())+1];
strcpy(filePath,fullPath.c_str());
And then you have your char*.
I recently got the same error while attempting to use glutBitmapString(). I was using VS2008. I set a breakpoint at the function call and stepped into it (using freeglut_font.c). Inside I noticed that the exception was being thrown because GLUT was not initialized. So inside my initialization function I added...
char* argv[] = {"some","stuff"}; int argc=2;
glutInit(&argc,argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowPosition(100,100);
glutInitWindowSize(800,600);
Where of course you can use whatever argc/argv you please. This, as well as what was suggested by Marnix, solved the exception errors for me.
Note that I don't actually create a window with glut.
Try putting the string in a temporary variable. The fact that you have to cast should raise a red flag.
glColor3f(0.5f, 0.5f, 0.5f);
glRasterPos2f(0, 0);
unsigned char s[] = "some text";
glutBitmapString(GLUT_BITMAP_HELVETICA_12, s);
If const doesn't work, then glutBitmapString() may be modifying the string.
I have a C++ problem here which I simply cannot understand.
I have 2 slightly different functions. Both of them should do the very same thing. But only one works properly.
Method 1: input of the method is 'const string samplerName = "test"'
void setUniformSampler(GLuint program, const string samplerName, GLuint sampler) {
GLint uniformLocation = glGetUniformLocation(program, samplerName.c_str()); // returns -1
if(uniformLocation >= 0) {
glUniform1i(uniformLocation, sampler);
} else {
throw exception(...);
}
}
Method 2:
void setUniformSampler(GLuint program, GLuint sampler) {
GLint uniformLocation = glGetUniformLocation(program, "test"); // returns 0
if(uniformLocation >= 0) {
glUniform1i(uniformLocation, sampler);
} else {
throw exception(...);
}
}
As you can see, glGetUniformLocation returns 2 different values. The correct return value would be "0", not "-1". So I wonder, what exactly is the difference between the two calls?
Quote: "c_str() generates a null-terminated sequence of characters (c-string) with the same content as the string object and returns it as a pointer to an array of characters." And that is precisely what glGetUniformLocation(...) needs as its second parameter. So why does only Method 2 above succeed? Is it a compiler problem?
I'm working with MS Visual Studio 2008 on Win7.
I've been searching for this bug for almost 2 days now. I really want to clarify this. It drove me crazy...
Thanks
Walter
EDIT:
This doesn't work either.
void setUniformSampler(GLuint program, const string samplerName, GLuint sampler) {
const GLchar* name = samplerName.c_str();
GLint uniformLocation = glGetUniformLocation(program, name); // still returns -1
if(uniformLocation >= 0) {
glUniform1i(uniformLocation, sampler);
} else {
throw exception(...);
}
}
Your parameter is const, and you can't call a non-const function on a const object. Maybe that's the problem? The function needs a pointer to a null-terminated string. Make sure that's what you're giving it.
Check the implementation and the type of the parameter in glGetUniformLocation(parameter). "test" is a constant literal whose lifetime and location never change during your executable's life, while c_str() is computed dynamically from a string object and dies when the string object dies.
In other words, you need to look inside glGetUniformLocation() to find the reason, which I guess is related to some CRT string functions.
You might be the victim of a mix-up between wide-character and narrow (i.e. 8-bit) character strings. If both your shader source and the uniform name are defined as static strings, then they will agree on the string representation. string::c_str might change this, as c_str always returns a string of char, i.e. it is not wchar-aware.
Technically it should make no difference, since a bug-free shader compiler uses an unambiguous internal representation. But there may be a bug where the difference between wide and narrow characters is interpreted as different identifiers.
What happens if you pass the shader source through string::c_str too? Also check the debugger's hex view of the string variable. If it looks like this:
00000000 74 65 73 74 0a |test.|
you have an 8-bit character string. If it looks like this:
00000000 74 00 65 00 73 00 74 00 0a 00 |t.e.s.t...|
then you have a wide string. Then compare this with the variable in which the shader source is supplied.
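If the debugger's hex view is hard to get at, here is a small sketch, purely illustrative, for dumping the bytes of the name that Method 1 actually receives (needs <cstdio>):
const char* p = samplerName.c_str();
for (size_t i = 0; i <= samplerName.size(); ++i)   // include the terminating '\0'
    printf("%02x ", (unsigned char)p[i]);
printf("\n");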