I am coding a level editor for a game I am developing. I use JOGL and I seem to have run into a problem. I am used to LWJGL's OpenGL calls, and adjusting to core OpenGL in JOGL is a little confusing since LWJGL seems to have simplified a lot of things.
My problem is that I created a model class that holds a VAO ID/name and a vertex count, a model loader that creates the model, and a renderer. The renderer is not batched at the moment; I will work on that later. The problem is that OpenGL throws a GL_INVALID_OPERATION error and I am not sure what is causing it. Everything else, including the basic triangle I drew to test the environment, works, so there seems to be a problem somewhere in my loader or renderer.
Here's the code:
Model:
public class JoglModel {

    private int vaoID;
    private int vertexCount;

    public JoglModel(int vertexCount, int vaoID) {
        this.vertexCount = vertexCount;
        this.vaoID = vaoID;
    }

    public int getVertexCount() {
        return vertexCount;
    }

    public int getVaoID() {
        return vaoID;
    }
}
Loader:
public class ModelLoader {

    private GL2 gl;
    private List<int[]> vaos = new ArrayList<int[]>();
    private List<int[]> vbos = new ArrayList<int[]>();

    public ModelLoader(GL2 gl) {
        this.gl = gl;
    }

    public JoglModel loadToVao(float[] positions) {
        int vaoID = createVAO();
        storeDataInAttributeList(0, positions);
        unbind();
        return new JoglModel(vaoID, positions.length / 3);
    }

    private int createVAO() {
        int[] vaoID = new int[1];
        gl.glGenVertexArrays(vaoID.length, vaoID, 0);
        vaos.add(vaoID);
        gl.glBindVertexArray(vaoID[0]);
        return vaoID[0];
    }

    private void storeDataInAttributeList(int attributeNumber, float[] data) {
        int[] vboID = new int[1];
        gl.glGenBuffers(vboID.length, vboID, 0);
        vbos.add(vboID);
        gl.glBindBuffer(gl.GL_ARRAY_BUFFER, vboID[0]);
        FloatBuffer floatBuffer = createFloatBuffer(data);
        gl.glBufferData(gl.GL_ARRAY_BUFFER, floatBuffer.remaining(), floatBuffer, gl.GL_STATIC_DRAW);
        gl.glVertexAttribPointer(attributeNumber, 3, gl.GL_FLOAT, false, 0, 0);
        gl.glBindBuffer(gl.GL_ARRAY_BUFFER, 0);
    }

    private FloatBuffer createFloatBuffer(float[] data) {
        FloatBuffer floatBuffer = FloatBuffer.allocate(data.length);
        floatBuffer.put(data);
        floatBuffer.flip();
        return floatBuffer;
    }

    private void unbind() {}

    public void clear() {
        for (int[] vao : vaos) {
            gl.glDeleteVertexArrays(vao.length, vao, 0);
        }
        for (int[] vbo : vbos) {
            gl.glDeleteBuffers(vbo.length, vbo, 0);
        }
        vaos.clear();
        vbos.clear();
    }
}
Renderer:
public class JoglRenderer {

    private GL2 gl;

    public JoglRenderer(GL2 gl) {
        this.gl = gl;
    }

    public void begin() {
        gl.glClearColor(1f, 0f, 0f, 1f);
        gl.glClear(gl.GL_CLEAR_BUFFER);
    }

    public void render(JoglModel joglModel) {
        gl.glBindVertexArray(joglModel.getVaoID());
        gl.glEnableVertexAttribArray(0);
        gl.glDrawArrays(gl.GL_TRIANGLES, 0, joglModel.getVertexCount());
        gl.glDisableVertexAttribArray(0);
        gl.glBindVertexArray(0);
        /*
        gl.glBegin(gl.GL_TRIANGLES);
        gl.glColor3f(1, 0, 0);
        gl.glVertex2f(-1, -1);
        gl.glColor3f(0, 1, 0);
        gl.glVertex2f(0, 1);
        gl.glColor3f(0, 0, 1);
        gl.glVertex2f(1, -1);
        gl.glEnd();
        */
    }

    public void checkError() {
        String errorString = "";
        int error = gl.glGetError();
        if (error != GL.GL_NO_ERROR) {
            switch (error) {
                case GL.GL_INVALID_ENUM:
                    errorString = "GL_INVALID_ENUM";
                    break;
                case GL.GL_INVALID_VALUE:
                    errorString = "GL_INVALID_VALUE";
                    break;
                case GL.GL_INVALID_OPERATION:
                    errorString = "GL_INVALID_OPERATION";
                    break;
                case GL.GL_INVALID_FRAMEBUFFER_OPERATION:
                    errorString = "GL_INVALID_FRAMEBUFFER_OPERATION";
                    break;
                case GL.GL_OUT_OF_MEMORY:
                    errorString = "GL_OUT_OF_MEMORY";
                    break;
                default:
                    errorString = "UNKNOWN";
                    break;
            }
        }
        System.out.println(errorString);
    }
}
The commented-out triangle part works just fine. There also seems to be an error in the clear-screen method, but that's not my concern right now. Can anyone point out where the problem could be?
Thanks
(EDIT)
So I figured out the OpenGL error: I was accidentally passing the vaoID as the vertex count and vice versa. I fixed that and the error is gone, but now nothing is being rendered. Any ideas?
I'll write a few considerations here, since comments are too short for that:
loadToVao could mislead you: you don't load anything into a VAO. The VAO is useful to remember which vertex attribute arrays are enabled, their layout/format and which VBO they refer to, so that you don't have to set them every frame. It can also store the bound element array buffer. So glEnableVertexAttribArray and glDisableVertexAttribArray shouldn't go in the render() function.
The renderer should always be there by default, so I'd suggest having a main class that initializes your renderer (the GLEventListener).
I'd not bind the VAO inside createVAO.
Do not store the GL object. Keep it transient (pass it as an argument every time) or get it from the GLContext. The first option may increase complexity (since every GL call needs the GL object from the class implementing GLEventListener), but it simplifies debugging (because you know exactly in which order the GL calls get executed).
If you need just one VAO, avoid creating a List for it; the same goes for the VBOs.
I suggest using static final int variables to hold the vertex attribute indices. It improves readability and avoids potential bugs.
Unless you really don't need direct buffers, use GLBuffers to allocate (direct) buffers (see the sketch after this list).
What is that gl.GL_FLOAT? I never saw that; use Float.BYTES or GLBuffers.SIZEOF_FLOAT instead.
As @BDL already said, look at glClear, and call checkError like here, passing a different string every time so that you can easily find out which call is the problematic one if something throws an error.
JOGL does have a GL_COLOR_BUFFER_BIT; just write it and trigger auto-completion, and your IDE should suggest the right location or automatically insert the right import if you set it up properly.
What also looks missing (maybe you just didn't post it) is glVertexAttribPointer.
If it still does not work, go back to the basic test triangle, be sure it works and then start building up from there. Move it out of the renderer into its own class, enrich it with more geometry, use indexed drawing, etc. Check that each step works; if it doesn't, the error lies in your last modifications.
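To make the buffer-related points above more concrete, here is a minimal, untested sketch of how loadToVao could look with those suggestions applied: a direct buffer allocated through GLBuffers, the glBufferData size given in bytes, a named constant for the attribute index, and the attribute enabled while the VAO is still bound so that render() only has to bind the VAO and draw. The POSITION_ATTRIB name is my own, and the imports assume a recent JOGL (older releases keep the core interfaces in javax.media.opengl):

import java.nio.FloatBuffer;
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;
import com.jogamp.opengl.util.GLBuffers;

public class ModelLoaderSketch {

    // Vertex attribute index as a named constant instead of a magic number.
    private static final int POSITION_ATTRIB = 0;

    public JoglModel loadToVao(GL2 gl, float[] positions) {
        int[] vao = new int[1];
        gl.glGenVertexArrays(1, vao, 0);
        gl.glBindVertexArray(vao[0]);

        int[] vbo = new int[1];
        gl.glGenBuffers(1, vbo, 0);
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo[0]);

        // Direct buffer; the size passed to glBufferData is in bytes, not floats.
        FloatBuffer data = GLBuffers.newDirectFloatBuffer(positions);
        gl.glBufferData(GL.GL_ARRAY_BUFFER,
                data.capacity() * GLBuffers.SIZEOF_FLOAT,
                data, GL.GL_STATIC_DRAW);

        gl.glVertexAttribPointer(POSITION_ATTRIB, 3, GL.GL_FLOAT, false, 0, 0);
        // Enable the attribute while the VAO is bound so the state is stored in it.
        gl.glEnableVertexAttribArray(POSITION_ATTRIB);

        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
        gl.glBindVertexArray(0);

        // Note the argument order: (vertexCount, vaoID).
        return new JoglModel(positions.length / 3, vao[0]);
    }
}

With the attribute state stored in the VAO like this, render() reduces to glBindVertexArray, glDrawArrays and glBindVertexArray(0).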
Related
Hey guys, should I call the DeviceContext functions like IASetVertexBuffers, IASetPrimitiveTopology, VSSetShader once at creation, like
void init() {
//create window and stuff
devicecontext->IASetVertexBuffers(...);
}
void draw() {
//draw
}
or in the draw loop, like
void init() {
//create window and stuff
}
void draw() {
devicecontext->IASetVertexBuffers(...);
//draw
}
and here is the code I'm actually using:
void ARenderer::Draw(AMesh * mesh, AShader* shader)
{
ARenderer::SetViewport(currentviewport);
ARenderer::ApplyShader(shader);
///Drawing
uint32_t stride = sizeof(AVertex);
uint32_t offset = 0;
dxmanager->DeviceContext->IASetVertexBuffers(0, 1, mesh->GetBuffer().GetAddressOf(), &stride, &offset);
dxmanager->DeviceContext->IASetPrimitiveTopology(static_cast<D3D11_PRIMITIVE_TOPOLOGY>(mesh->GetPrimitive()));
dxmanager->DeviceContext->Draw(mesh->GetVertexCount(), 0);
}
Chances are that you will want to draw more than one object or thing in your application, which means you will have to call these functions several times per frame anyway, so setting them only at initialization time is not an option.
It is safer to always set all the necessary states prior to a draw, until doing so manifests as a performance issue in your application, which usually never happens in small applications. Once you are done with features and correctness, you can try to be smarter about what you send, not before.
Originally I was using glDrawElementsInstancedBaseVertex to draw the scene meshes. All of the meshes' vertex attributes are interleaved in a single buffer object. In total there are only 30 unique meshes, so I've been calling draw 30 times with instance counts, etc., but now I want to batch the draw calls into one using glMultiDrawElementsIndirect. Since I have no experience with this function, I've been reading articles here and there to understand the implementation, with little success. (For testing purposes all meshes are instanced only once.)
Here is the command structure from the OpenGL reference page:
struct DrawElementsIndirectCommand
{
GLuint vertexCount;
GLuint instanceCount;
GLuint firstVertex;
GLuint baseVertex;
GLuint baseInstance;
};
DrawElementsIndirectCommand commands[30];
// Populate commands.
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh{ m_meshes[index] };
commands[index].vertexCount = mesh->elementCount;
commands[index].instanceCount = 1; // Just testing with 1 instance, ATM.
commands[index].firstVertex = mesh->elementOffset();
commands[index].baseVertex = mesh->verticeIndex();
commands[index].baseInstance = 0; // Shouldn't impact testing?
}
// Create and populate the GL_DRAW_INDIRECT_BUFFER buffer... bla bla
Then later down the line, after setup I do some drawing.
// Some prep before drawing like bind VAO, update buffers, etc.
// Draw?
if (RenderMode == MULTIDRAW)
{
// Bind, Draw, Unbind
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, m_indirectBuffer);
glMultiDrawElementsIndirect (GL_TRIANGLES, GL_UNSIGNED_INT, nullptr, 30, 0);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, 0);
}
else
{
for (size_t index { 0 }; index < 30; ++index)
{
const Mesh* mesh { m_meshes[index] };
glDrawElementsInstancedBaseVertex(
GL_TRIANGLES,
mesh->elementCount,
GL_UNSIGNED_INT,
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
1,
mesh->verticeIndex());
}
}
Now the glDrawElements... path still works fine like before when I switch to it. But glMultiDraw... gives indistinguishable meshes; when I set firstVertex to 0 for all commands, the meshes look almost correct (at least distinguishable), but still largely wrong in places. I feel I'm missing something important about indirect multi-drawing?
//Indirect data
commands[index].firstVertex = mesh->elementOffset();
//Direct draw call
reinterpret_cast<GLvoid*>(mesh->elementOffset()),
That's not how it works for indirect rendering. firstVertex (firstIndex in the spec) is not a byte offset; it is measured in indices. So you have to divide the byte offset by the size of the index type to compute firstVertex:
commands[index].firstVertex = mesh->elementOffset() / sizeof(GLuint);
The result of that should be a whole number. If it isn't, then you were doing unaligned reads, which probably hurt your performance. So fix that ;)
This question already has answers here: What is the proper way to modify OpenGL vertex buffer?
I've got a training app written in WinAPI.
So, I've got GL initialized there and I've got a node-based system that can be described by a couple of classes:
class mesh
{
GLuint vbo_index; //this is for having unique vbo
float *vertex_array;
float *normal_array;
unsigned int vertex_count;
etc.. //all those mesh things.
....
}
class node
{
bool is_mesh; //the node may or may not represent a mesh
mesh * mesh_ptr; //if it does then this pointer is a valid address
}
I've also got 2 global variables for keeping track of renderable meshes:
mesh **mesh_table;
unsigned int mesh_count;
Right now I'm experimenting with 2 objects. So I create 2 nodes of type mesh::cube with a customizable number of x, y and z segments. The expected behaviour of my app is to let the user click between the 2 nodes CUBE0, CUBE1 and show their customizable attributes: segments x, segments y, segments z. The user tweaks both objects' parameters and they are rendered on top of each other in wireframe mode, so we can see the change in their topology in real time.
When a node is created for the first time, if the node type is mesh, then the mesh object is generated, its mesh_ptr is written into the mesh_table and mesh_count is incremented. After that my OpenGL window class creates a unique vertex buffer object for the new mesh and stores its index in mesh_ptr->vbo_index:
void window_glview::add_mesh_to_GPU(mesh* mesh_data)
{
glGenBuffers(1,&mesh_data->vbo_index);
glBindBuffer(GL_ARRAY_BUFFER ,mesh_data->vbo_index);
glBufferData(GL_ARRAY_BUFFER ,mesh_data->vertex_count*3*4,mesh_data->vertex_array,GL_DYNAMIC_DRAW);
glVertexAttribPointer(5,3,GL_FLOAT,GL_FALSE,0,NULL);//set vertex attrib (0)
glEnableVertexAttribArray(5);
}
After that the user is able to tweak the parameters. Each time a parameter value changes, the object's mesh data is re-evaluated based on the new parameter values, while still being the same mesh instance; after that the VBO data is updated by
void window_glview::update_vbo(mesh *_mesh)
{
glBindBuffer(GL_ARRAY_BUFFER,_mesh->vbo_vertex);
glBufferData(GL_ARRAY_BUFFER,_mesh->vertex_count*12,_mesh->vertex_array,GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER,0);
}
and the whole scene is redrawn by
for (unsigned short i=0;i<mesh_count;i++)
draw_mesh(mesh_table[i],GL_QUADS,false);
SwapBuffers(hDC);
The draw function for a single mesh is
bool window_glview::draw_mesh(mesh* mesh_data,unsigned int GL_DRAW_METHOD,bool indices)
{
glUseProgram(id_program);
glBindBuffer(GL_ARRAY_BUFFER,mesh_data->vbo_index);
GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
glUniformMatrix4fv(id_matrix_loc,1,GL_TRUE,cam.matrixResult.get());
GLuint id_color_loc=glGetUniformLocation(id_program,"uColor");
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
glUniform3f(id_color_loc,mesh_color[0],mesh_color[1],mesh_color[2]);
glDrawArrays(GL_DRAW_METHOD,0,mesh_data->vertex_count);
glBindBuffer(GL_ARRAY_BUFFER,0);
glUseProgram(0);
return true;
}
The problem is that only the last object in the stack is drawn that way; the other object's points are all at 0 0 0, so in the viewport one cube is rendered with the proper parameters and the other cube appears as just a DOT.
QUESTION: Where did I go wrong?
You have a fundamental misunderstanding of what glBindBuffer(GL_ARRAY_BUFFER,mesh_data->vbo_vertex); does.
That sets the bound array buffer, which is actually only used by a handful of commands (mostly glVertexAttrib{I|L}Pointer (...)); binding the buffer by itself is not going to do anything useful.
What you need to do is something along the lines of this:
bool window_glview::draw_mesh(mesh* mesh_data,unsigned int GL_DRAW_METHOD,bool indices)
{
glUseProgram(id_program);
//
// Setup Vertex Pointers in addition to binding a VBO
//
glBindBuffer(GL_ARRAY_BUFFER,mesh_data->vbo_vertex);
glVertexAttribPointer(5,3,GL_FLOAT,GL_FALSE,0,NULL);//set vertex attrib (0)
glEnableVertexAttribArray(5);
GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
glUniformMatrix4fv(id_matrix_loc,1,GL_TRUE,cam.matrixResult.get());
GLuint id_color_loc=glGetUniformLocation(id_program,"uColor");
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
glUniform3f(id_color_loc,mesh_color[0],mesh_color[1],mesh_color[2]);
glDrawArrays(GL_DRAW_METHOD,0,mesh_data->vertex_count);
glBindBuffer(GL_ARRAY_BUFFER,0);
glUseProgram(0);
return true;
}
Now, if you really want to make this simple and be able to do this just by changing a single object binding, I would suggest you look into Vertex Array Objects. They will persistently store the vertex pointer state.
In your draw function, glBindBuffer(GL_ARRAY_BUFFER,mesh_data->vbo_index); doesn't actually do anything on its own; the information about the vertex attribute is not bound to the buffer at all. It is set in the glVertexAttribPointer(5,3,GL_FLOAT,GL_FALSE,0,NULL); call, which gets overwritten each time a new mesh is uploaded.
Either create and use a VAO, or move that call from add_mesh_to_GPU to draw_mesh.
For the VAO you would do:
void window_glview::add_mesh_to_GPU(mesh* mesh_data)
{
glGenVertexArrays(1, &mesh_data->vao_index);//new GLInt field
glBindVertexArray(mesh_data->vao_index);
glGenBuffers(1,&mesh_data->vbo_index);
glBindBuffer(GL_ARRAY_BUFFER ,mesh_data->vbo_index);
glBufferData(GL_ARRAY_BUFFER ,mesh_data->vertex_count*3*4,mesh_data->vertex_array,GL_DYNAMIC_DRAW);
glVertexAttribPointer(5,3,GL_FLOAT,GL_FALSE,0,NULL);//set vertex attrib (0)
glEnableVertexAttribArray(5);
glBindVertexArray(0);
}
bool window_glview::draw_mesh(mesh* mesh_data,unsigned int GL_DRAW_METHOD,bool indices)
{
glBindVertexArray(mesh_data->vao_index);
glUseProgram(id_program);
GLuint id_matrix_loc = glGetUniformLocation(id_program, "in_Matrix");
glUniformMatrix4fv(id_matrix_loc,1,GL_TRUE,cam.matrixResult.get());
GLuint id_color_loc=glGetUniformLocation(id_program,"uColor");
glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
glUniform3f(id_color_loc,mesh_color[0],mesh_color[1],mesh_color[2]);
glDrawArrays(GL_DRAW_METHOD,0,mesh_data->vertex_count);
glUseProgram(0);
glBindVertexArray(0);
return true;
}
I have a fairly simple DirectX 11 framework set up that I want to use for various 2D simulations. I am currently trying to implement the 2D wave equation on the GPU. It requires that I keep the simulation grid state at the 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
nextStep = 0;
mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
mGraphicsObj->SetZBufferState(false);
mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());
ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
mFrameArray[prevStep]->GetTextureResource() };
result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
if (!result)
return false;
// perform any extra input
I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
if (inputSystem->IsMouseLeftDown()) {
int x,y;
inputSystem->GetMousePos(x,y);
int width,height;
mGraphicsObj->GetScreenDimensions(width,height);
float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);
mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
return false;
}
mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();
prevStep = simStep;
simStep = nextStep;
ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();
// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
return false;
mGraphicsObj->SetZBufferState(true);
The problem is that nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation is actually run. I can provide the shader code if required, but I am certain it works since I've implemented this before on the CPU using the same algorithm. I'm just not certain how well D3D render targets work and whether I'm simply drawing wrong every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();
context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());
float color[4];
// Setup the color to clear the buffer to.
color[0] = clearRed;
color[1] = clearGreen;
color[2] = clearBlue;
color[3] = clearAlpha;
// Clear the back buffer.
context->ClearRenderTargetView(mRenderTargetView.get(), color);
// Clear the depth buffer.
context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}
void D3DFrameBuffer::EndRender() const {
pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the Pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists: nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
Okay, as I understand it, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create the textures (and their SRVs) with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render targets from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer. Sometimes we make obvious errors and the debug output can point to them.
Use a graphics debugger to review your textures after each draw call.
Edit 1:
As I see it, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. But what you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It's probably better to review your Begin()/End() design, to make the resource binding more clearly visible.
Happy coding! =)
I'm learning how to use the stencil buffer, but so far I have been unsuccessful at getting even a simple example to work. In fact, despite trying various combinations of parameters for glStencilOp and glStencilFunc, I have not been able to see any evidence that the stencil buffer is working at all. I'm starting to suspect my graphics driver (Mac Pro, Mac OS X 10.8.5) or JOGL (2.0.2) doesn't support it... or I'm missing something really basic.
Here's what I'm seeing:
I'm expecting to see the red diamond clipped by the green diamond. What am I doing wrong?
public class Test {
public static void main(String[] args) {
GLProfile glprofile = GLProfile.getDefault();
final GLCapabilities glcapabilities = new GLCapabilities(glprofile);
final GLCanvas glcanvas = new GLCanvas(glcapabilities);
final GLU glu = new GLU();
glcanvas.addGLEventListener(new GLEventListener() {
@Override
public void reshape(GLAutoDrawable glautodrawable, int x, int y, int width, int height) {}
@Override
public void init(GLAutoDrawable glautodrawable) {
GL2 gl = glautodrawable.getGL().getGL2();
glcapabilities.setStencilBits(8);
gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(45, 1, 1, 10000);
glu.gluLookAt(0, 0, 100, 0, 0, 0, 0, 1, 0);
gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
gl.glLoadIdentity();
}
@Override
public void dispose(GLAutoDrawable glautodrawable) {}
@Override
public void display(GLAutoDrawable glautodrawable) {
GL2 gl = glautodrawable.getGL().getGL2();
gl.glEnable(GL.GL_STENCIL_TEST);
gl.glClearStencil(0x0);
gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT | GL.GL_STENCIL_BUFFER_BIT);
gl.glStencilFunc(GL.GL_ALWAYS, 1, 1);
gl.glStencilOp(GL.GL_REPLACE, GL.GL_REPLACE, GL.GL_REPLACE);
gl.glStencilMask(0xFF);
//gl.glColorMask(false, false, false, false);
//gl.glDepthMask(false);
gl.glColor3f(0, 1, 0);
gl.glBegin(GL2.GL_QUADS);
gl.glVertex2f(-25.0f, 0.0f);
gl.glVertex2f(0.0f, 15.0f);
gl.glVertex2f(25.0f, 0.0f);
gl.glVertex2f(0.0f, -15.0f);
gl.glEnd();
gl.glStencilMask(0);
gl.glStencilFunc(GL2.GL_EQUAL, 1, 1);
gl.glStencilOp(GL2.GL_KEEP, GL2.GL_KEEP, GL2.GL_KEEP);
//gl.glColorMask(true, true, true, true);
//gl.glDepthMask(true);
gl.glColor3f(1, 0, 0);
gl.glBegin(GL2.GL_QUADS);
gl.glVertex2f(-20.0f, 0.0f);
gl.glVertex2f(0.0f, 20.0f);
gl.glVertex2f(20.0f, 0.0f);
gl.glVertex2f(0.0f, -20.0f);
gl.glEnd();
}
});
final JFrame jframe = new JFrame("One Triangle Swing GLCanvas");
jframe.addWindowListener(new WindowAdapter() {
@Override
public void windowClosing(WindowEvent windowevent) {
jframe.dispose();
System.exit(0);
}
});
jframe.getContentPane().add(glcanvas, BorderLayout.CENTER);
jframe.setSize(640, 480);
jframe.setVisible(true);
}
}
Zero298 has the right idea, though doesn't explain why what you tried in your code does not work. This becomes more apparent when you understand how framebuffer pixel formats work in OpenGL; I will touch on this a little bit below, but first, just to re-hash the proper solution:
public static void main(String[] args) {
GLProfile glprofile = GLProfile.getDefault ();
GLCapabilities glcapabilities = new GLCapabilities (glprofile);
// You must do this _BEFORE_ creating a render context
glcapabilities.setStencilBits (8);
final GLCanvas glcanvas = new GLCanvas (glcapabilities);
final GLU glu = new GLU ();
The important thing is that you do this before creating your render context ("canvas"). The stencil buffer is not something you can enable or disable whenever you need it -- you first have to select a pixel format that reserves storage for it. Since pixel formats are fixed from the time you create your render context onward, you need to do this before new GLCanvas (...).
You can actually use an FBO to do stencil operations in a render context that does not have a stencil buffer, but this is much more advanced than what you should be considering at the moment. It is something to consider if you ever want to do MSAA, though; FBOs are a much nicer way of changing pixel formats at run-time than creating and destroying your render context ("canvas").
You need a call to glStencilMask(); it's what controls whether the stencil buffer gets written or not. Set it to write, draw the stencil shape (in your case, the diamond), set glStencilMask() again to stop writing, and then draw what you want to get clipped. A rough sketch of that order is shown after the link below.
This has a good sample: Stencil Buffer explanation
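For reference, here is a minimal, untested sketch of that write-then-test order in JOGL, assuming the GL context was created with stencil bits (see the EDIT below). The drawMaskShape and drawClippedShape callbacks are hypothetical stand-ins for your two diamonds:

import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2;

public class StencilClipSketch {

    public static void drawClipped(GL2 gl, Runnable drawMaskShape, Runnable drawClippedShape) {
        gl.glEnable(GL.GL_STENCIL_TEST);
        gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT | GL.GL_STENCIL_BUFFER_BIT);

        // Pass 1: write the mask shape into the stencil buffer only.
        gl.glColorMask(false, false, false, false);
        gl.glDepthMask(false);
        gl.glStencilFunc(GL.GL_ALWAYS, 1, 0xFF);
        gl.glStencilOp(GL.GL_KEEP, GL.GL_KEEP, GL.GL_REPLACE);
        gl.glStencilMask(0xFF);                  // allow stencil writes
        drawMaskShape.run();                     // e.g. the green diamond

        // Pass 2: draw normally, but only where the stencil value equals 1.
        gl.glColorMask(true, true, true, true);
        gl.glDepthMask(true);
        gl.glStencilMask(0x00);                  // stop modifying the stencil
        gl.glStencilFunc(GL.GL_EQUAL, 1, 0xFF);
        drawClippedShape.run();                  // e.g. the red diamond

        gl.glDisable(GL.GL_STENCIL_TEST);
    }
}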
EDIT:
OK, I think I found the problem. You need to set up your capabilities at the top of the program.
final GLCapabilities glcapabilities = new GLCapabilities(glprofile);
glcapabilities.setStencilBits(8);
final GLCanvas glcanvas = new GLCanvas(glcapabilities);
The important part being:
glcapabilities.setStencilBits(8);
Thanks to: enabling stencil in jogl