OpenGL texture colors are wrong

I've made a simple program that creates an orthographic projection and puts a texture containing a PNG on a quad.
However, I can't figure out why some of the colors are displayed all jumbled.
The png looks like this (the white rectangle in the middle is transparent):
The quad in my OpenGL program looks like this:
Below is the code for initializing OpenGL as well as what goes on in the method called by the OpenGL thread.
I'm using JOGL.
public void init(GLAutoDrawable gLDrawable) {
    final GL2 gl = gLDrawable.getGL().getGL2(); // obtain the GL2 pipeline from the drawable
    gl.glGenTextures(1, textureId, 0);
    gl.glBindTexture(GL2.GL_TEXTURE_2D, textureId[0]);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_NEAREST);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_LINEAR);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_S, GL2.GL_REPEAT);
    gl.glTexParameterf(GL2.GL_TEXTURE_2D, GL2.GL_TEXTURE_WRAP_T, GL2.GL_REPEAT);
    BufferedImage image = null;
    try {
        image = ImageIO.read(new File("d:\\temp\\projects\\openglTest1\\texTest.png"));
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    DataBufferByte dataBufferByte = (DataBufferByte) image.getRaster().getDataBuffer();
    Buffer imageBuffer = ByteBuffer.wrap(dataBufferByte.getData());
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_RGBA, GL.GL_UNSIGNED_BYTE, imageBuffer);
    gl.glEnable(GL2.GL_TEXTURE_2D);
    gl.glBlendFunc(GL2.GL_ONE, GL2.GL_ONE_MINUS_SRC_ALPHA);
    gl.glEnable(GL2.GL_BLEND); // GL_BLEND is the valid enable cap; GL_BLEND_SRC is a query enum
    gl.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    gl.glClearDepth(1.0f);
    gl.glEnable(GL.GL_DEPTH_TEST);
    gl.glDepthFunc(GL.GL_LEQUAL);
    gl.glHint(GL2ES1.GL_PERSPECTIVE_CORRECTION_HINT, GL.GL_NICEST);
}
// this is called by the OpenGL thread
public void display(GLAutoDrawable gLDrawable) {
    final GL2 gl = gLDrawable.getGL().getGL2();
    gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT);
    gl.glEnableClientState(GLPointerFunc.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GLPointerFunc.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL2.GL_CCW);
    gl.glVertexPointer(3, GL.GL_FLOAT, 0, vertexBuffer);
    gl.glTexCoordPointer(2, GL.GL_FLOAT, 0, textureBuffer);
    gl.glDrawElements(GL.GL_TRIANGLES, indices.length, GL.GL_UNSIGNED_BYTE, indexBuffer);
    gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
    gl.glDisableClientState(GL2.GL_TEXTURE_COORD_ARRAY);
}
This is puzzling to me because, while I'm not an OpenGL expert, I tried to understand what all the above OpenGL commands do before using them. In fact, I've done the same thing on Android and everything is displayed fine, but when doing it in Java with JOGL I get the result described here. The only thing I'm doing differently is the way I load the PNG image. On Android there's a helper method:
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmapStatic, 0);
while with JOGL I'm doing my own loading via:
try {
    image = ImageIO.read(new File("d:\\temp\\projects\\openglTest1\\texTest.png"));
} catch (IOException e1) {
    e1.printStackTrace();
}
DataBufferByte dataBufferByte = (DataBufferByte) image.getRaster().getDataBuffer();
Buffer imageBuffer = ByteBuffer.wrap(dataBufferByte.getData());
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_RGBA, GL.GL_UNSIGNED_BYTE, imageBuffer);
as detailed above.
== UPDATE ==
As per jcadam's comment, I've tried setting the format of the pixel data to GL_BGRA like so:
gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL2.GL_RGBA, image.getWidth(), image.getHeight(), 0, GL2.GL_BGRA, GL.GL_UNSIGNED_BYTE, imageBuffer);
The colors are still jumbled, but it's a different jumble this time:
How can I find out what particular format my png image is in?
== UPDATE 2 - solution implementation ==
Ok, first, I want to thank jcadam, rotoglup and Tim for pointing me in the right direction.
In short, the issue was that the order in which Java stores the pixel bytes when decoding an image is not always the order OpenGL expects. More precisely, if the image has no alpha channel the byte order happens to be usable, but if it does have an alpha channel the bytes come out as ABGR and some colors end up jumbled.
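For anyone hitting the same thing: a quick way to see what byte order ImageIO actually gave you is to check the BufferedImage type. This is a minimal sketch (it only covers the common types):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class CheckImageLayout {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File(args[0]));
        switch (img.getType()) {
            case BufferedImage.TYPE_4BYTE_ABGR: // bytes stored as A, B, G, R
                System.out.println("4-byte ABGR: reorder before uploading as GL_RGBA");
                break;
            case BufferedImage.TYPE_3BYTE_BGR:  // bytes stored as B, G, R
                System.out.println("3-byte BGR: upload with format GL_BGR");
                break;
            case BufferedImage.TYPE_INT_ARGB:   // packed ints, not bytes
                System.out.println("int ARGB: the data buffer is not a DataBufferByte");
                break;
            default:
                System.out.println("other type: " + img.getType());
        }
    }
}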
Now, I started off by making my own manual implementation, which works fine for 32-bit PNGs and 24-bit JPEGs:
public void texImage2D(File imageLocation, GL gl) {
    BufferedImage initialImage = null;
    try {
        initialImage = ImageIO.read(imageLocation);
    } catch (IOException e1) {
        throw new RuntimeException(e1.getMessage(), e1);
    }
    int imgHeight = initialImage.getHeight(null);
    int imgWidth = initialImage.getWidth(null);
    ColorModel cm = initialImage.getColorModel();
    boolean hasAlpha = cm.hasAlpha();
    Buffer buffer = null;
    int openGlInternalFormat = -1;
    int openGlImageFormat = -1;
    if (!hasAlpha) {
        DataBufferByte dataBufferByte = (DataBufferByte) initialImage.getRaster().getDataBuffer();
        buffer = ByteBuffer.wrap(dataBufferByte.getData());
        openGlInternalFormat = GL2.GL_RGB;
        openGlImageFormat = GL2.GL_BGR;
    } else {
        openGlInternalFormat = GL2.GL_RGBA;
        openGlImageFormat = GL2.GL_RGBA;
        WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_BYTE, imgWidth, imgHeight, 4, null);
        ComponentColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[] { 8, 8, 8, 8 },
                true, false,
                ComponentColorModel.TRANSLUCENT,
                DataBuffer.TYPE_BYTE);
        BufferedImage bufImg = new BufferedImage(colorModel, raster, false, null);
        Graphics2D g = bufImg.createGraphics();
        g.drawImage(initialImage, null, null);
        g.dispose();
        DataBufferByte imgBuf = (DataBufferByte) raster.getDataBuffer();
        byte[] bytes = imgBuf.getData();
        buffer = ByteBuffer.wrap(bytes);
    }
    gl.glTexImage2D(GL.GL_TEXTURE_2D, 0, openGlInternalFormat, imgWidth, imgHeight, 0, openGlImageFormat, GL.GL_UNSIGNED_BYTE, buffer);
}
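Calling the helper then looks like this (same hard-coded path as above):

texImage2D(new File("d:\\temp\\projects\\openglTest1\\texTest.png"), gl);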
However, I later found out that JOGL has its own helper tools for this, and that is in fact what I ended up using:
//this code should be called in init(), to load the texture:
InputStream stream = new FileInputStream("d:\\temp\\projects\\openglTest1\\texTest.png");
TextureData data = TextureIO.newTextureData(gl.getGLProfile(),stream, false, "png");
Texture myTexture = TextureIO.newTexture(data);
//this code should be called in the draw/display method, before the vertices drawing call
myTexture.enable(gl);
myTexture.bind(gl);
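If you go this route, the Texture object also hands you the correct texture coordinates and the cleanup calls. A short sketch (this is JOGL's own Texture API, but the exact usage here is my own):

// the loaded image may be stored flipped or padded, so ask for the real coords
TextureCoords tc = myTexture.getImageTexCoords();
// use tc.left(), tc.right(), tc.top(), tc.bottom() when filling the texcoord buffer

// when you are done drawing with it for this frame:
myTexture.disable(gl);

// when the texture is no longer needed at all:
myTexture.destroy(gl);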

It looks like ABGR to me. If you just look at the colors:
PNG red (A=1, B=0, G=0, R=1) shows up as OpenGL red (R=1, G=0, B=0, A=1)
PNG blue-green (A=1, B=1, G=1, R=0) shows up as OpenGL white (R=1, G=1, B=1, A=0)
PNG blue (A=1, B=1, G=0, R=0) shows up as OpenGL yellow (R=1, G=1, B=0, A=0)
PNG clear (A=0, B=?, G=?, R=?) could be OpenGL blue-green (R=0, G=?, B=?, A=?)
If OpenGL transparency is disabled, the alpha channel wouldn't matter.
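If you would rather fix the bytes yourself than change the upload format, the reorder is a four-byte swap per pixel. A minimal sketch, assuming the ABGR layout described above:

// Reorder Java's ABGR bytes into the RGBA order OpenGL expects.
static byte[] abgrToRgba(byte[] abgr) {
    byte[] rgba = new byte[abgr.length];
    for (int i = 0; i < abgr.length; i += 4) {
        rgba[i]     = abgr[i + 3]; // R
        rgba[i + 1] = abgr[i + 2]; // G
        rgba[i + 2] = abgr[i + 1]; // B
        rgba[i + 3] = abgr[i];     // A
    }
    return rgba;
}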

Hmm... It looks like a pixel format problem. You could get more specific and try GL_RGBA8, GL_RGBA16, etc. Is this an 8-bit PNG rather than 24 or 32? Is there not an alpha channel (in which case use GL_RGB rather than GL_RGBA)?

Just out of a quick search (I don't have any actual experience with Java ImageIO), it seems that Java has a native ARGB byte ordering, you may take a look at this source code for inspiration.

Related

How do you upload texture data to a Sparse Texture using TexSubImage in OpenGL?

I am following apitest on github, and am seeing some very strange behavior in my renderer.
It seems like the Virtual Pages are not receiving the correct image data.
The original image is 500x311:
When I render this image using a sparse texture, I must resize the backing store to 512x384 (to be a multiple of the page size), and my result is:
As you can see, it looks like a portion of the subimage (a sub-sub-image) was loaded to each individual virtual page.
To test this, I cropped the image to the size of just one virtual page (256x128); here is the result:
As expected, the single virtual page was filled with the exact, correct, cropped image.
Lastly, I increased the crop size to two virtual pages' worth, 256x256, one on top of the other; here is the result:
This suggests that calling glTexSubImage2D with more texel data than one virtual page's worth causes errors.
Does care need to be taken when passing data to glTexSubImage2D that is larger than the virtual page size? I see no logic for this in apitest, so I think this could be a driver issue, or I am missing something major.
Here is some code:
I stored the texture in a texture array and, to simplify, turned the array into just a plain 2D texture; both produce the exact same result. Here is the texture memory allocation:
_check_gl_error();
glGenTextures(1, &mTexId);
glBindTexture(GL_TEXTURE_2D, mTexId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_SPARSE_ARB, GL_TRUE);
// TODO: This could be done once per internal format. For now, just do it every time.
GLint indexCount = 0,
      xSize = 0,
      ySize = 0,
      zSize = 0;
GLint bestIndex = -1,
      bestXSize = 0,
      bestYSize = 0;
glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_NUM_VIRTUAL_PAGE_SIZES_ARB, 1, &indexCount);
if (indexCount == 0) {
    fprintf(stdout, "No Virtual Page Sizes for given format");
    fflush(stdout);
}
_check_gl_error();
for (GLint i = 0; i < indexCount; ++i) {
    glTexParameteri(GL_TEXTURE_2D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, i);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_X_ARB, 1, &xSize);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_Y_ARB, 1, &ySize);
    glGetInternalformativ(GL_TEXTURE_2D, internalformat, GL_VIRTUAL_PAGE_SIZE_Z_ARB, 1, &zSize);
    // For our purposes, the "best" format is the one that winds up with Z=1 and the largest x and y sizes.
    if (zSize == 1) {
        if (xSize >= bestXSize && ySize >= bestYSize) {
            bestIndex = i;
            bestXSize = xSize;
            bestYSize = ySize;
        }
    }
}
_check_gl_error();
mXTileSize = bestXSize;
glTexParameteri(GL_TEXTURE_2D, GL_VIRTUAL_PAGE_SIZE_INDEX_ARB, bestIndex);
_check_gl_error();
// Need to ensure that the texture is a multiple of the tile size.
physicalWidth = roundUpToMultiple(width, bestXSize);
physicalHeight = roundUpToMultiple(height, bestYSize);
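// For reference, roundUpToMultiple is not shown here; a typical definition
// (my assumption, not necessarily apitest's code) would be:
//   static GLsizei roundUpToMultiple(GLsizei x, GLsizei multiple) {
//       return ((x + multiple - 1) / multiple) * multiple;
//   }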
// We've set all the necessary parameters, now it's time to create the sparse texture.
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, physicalWidth, physicalHeight);
_check_gl_error();
for (GLsizei i = 0; i < slices; ++i) {
    mFreeList.push(i);
}
_check_gl_error();
mHandle = glGetTextureHandleARB(mTexId);
_check_gl_error();
glMakeTextureHandleResidentARB(mHandle);
_check_gl_error();
mWidth = physicalWidth;
mHeight = physicalHeight;
mLevels = levels;
Here is what happens after the allocation:
glTextureSubImage2DEXT(mTexId, GL_TEXTURE_2D, level, 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, data);
I have tried making width and height the physical width/height of the backing store, and also the width/height of the incoming image content. Neither produces the desired result. I exclude mip levels for now; when I was using mip levels and the texture array I was getting different results but similar behavior.
Also, the image is loaded with SOIL, and before I implemented sparse textures that worked very well (before sparse I implemented bindless).

What's wrong with this simple OpenGL/JOGL stencil test?

I'm learning how to use a stencil buffer, but so far have been unsuccessful at getting even a simple example to work. In fact, despite trying various combinations of parameters for glStencilOp and glStencilFunc, I have not been able to see any evidence that the stencil buffer is working at all. I'm starting to suspect my graphics driver (Mac Pro, Mac OS X 10.8.5) or JOGL (2.0.2) doesn't support it... or I'm missing something really basic.
Here's what I'm seeing:
I'm expecting to see the red diamond clipped by the green diamond. What am I doing wrong?
public class Test {
    public static void main(String[] args) {
        GLProfile glprofile = GLProfile.getDefault();
        final GLCapabilities glcapabilities = new GLCapabilities(glprofile);
        final GLCanvas glcanvas = new GLCanvas(glcapabilities);
        final GLU glu = new GLU();
        glcanvas.addGLEventListener(new GLEventListener() {
            @Override
            public void reshape(GLAutoDrawable glautodrawable, int x, int y, int width, int height) {}

            @Override
            public void init(GLAutoDrawable glautodrawable) {
                GL2 gl = glautodrawable.getGL().getGL2();
                glcapabilities.setStencilBits(8);
                gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
                gl.glLoadIdentity();
                glu.gluPerspective(45, 1, 1, 10000);
                glu.gluLookAt(0, 0, 100, 0, 0, 0, 0, 1, 0);
                gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
                gl.glLoadIdentity();
            }

            @Override
            public void dispose(GLAutoDrawable glautodrawable) {}

            @Override
            public void display(GLAutoDrawable glautodrawable) {
                GL2 gl = glautodrawable.getGL().getGL2();
                gl.glEnable(GL.GL_STENCIL_TEST);
                gl.glClearStencil(0x0);
                gl.glClear(GL.GL_COLOR_BUFFER_BIT | GL.GL_DEPTH_BUFFER_BIT | GL.GL_STENCIL_BUFFER_BIT);
                gl.glStencilFunc(GL.GL_ALWAYS, 1, 1);
                gl.glStencilOp(GL.GL_REPLACE, GL.GL_REPLACE, GL.GL_REPLACE);
                gl.glStencilMask(0xFF);
                //gl.glColorMask(false, false, false, false);
                //gl.glDepthMask(false);
                gl.glColor3f(0, 1, 0);
                gl.glBegin(GL2.GL_QUADS);
                gl.glVertex2f(-25.0f, 0.0f);
                gl.glVertex2f(0.0f, 15.0f);
                gl.glVertex2f(25.0f, 0.0f);
                gl.glVertex2f(0.0f, -15.0f);
                gl.glEnd();
                gl.glStencilMask(0);
                gl.glStencilFunc(GL2.GL_EQUAL, 1, 1);
                gl.glStencilOp(GL2.GL_KEEP, GL2.GL_KEEP, GL2.GL_KEEP);
                //gl.glColorMask(true, true, true, true);
                //gl.glDepthMask(true);
                gl.glColor3f(1, 0, 0);
                gl.glBegin(GL2.GL_QUADS);
                gl.glVertex2f(-20.0f, 0.0f);
                gl.glVertex2f(0.0f, 20.0f);
                gl.glVertex2f(20.0f, 0.0f);
                gl.glVertex2f(0.0f, -20.0f);
                gl.glEnd();
            }
        });
        final JFrame jframe = new JFrame("One Triangle Swing GLCanvas");
        jframe.addWindowListener(new WindowAdapter() {
            @Override
            public void windowClosing(WindowEvent windowevent) {
                jframe.dispose();
                System.exit(0);
            }
        });
        jframe.getContentPane().add(glcanvas, BorderLayout.CENTER);
        jframe.setSize(640, 480);
        jframe.setVisible(true);
    }
}
Zero298 has the right idea, though fails to explain why what you tried in your code does not work. This becomes more apparent when you understand how framebuffer pixel formats work in OpenGL; I will touch on this a little bit below, but first just to re-hash the proper solution:
public static void main(String[] args) {
    GLProfile glprofile = GLProfile.getDefault();
    GLCapabilities glcapabilities = new GLCapabilities(glprofile);

    // You must do this _BEFORE_ creating a render context
    glcapabilities.setStencilBits(8);

    final GLCanvas glcanvas = new GLCanvas(glcapabilities);
    final GLU glu = new GLU();
The important thing is that you do this before creating your render context ("canvas"). The stencil buffer is not something you can enable or disable whenever you need it -- you first have to select a pixel format that reserves storage for it. Since pixel formats are fixed from the time you create your render context onward, you need to do this before new GLCanvas (...).
You can actually use an FBO to do stencil operations in a render context that does not have a stencil buffer, but this is much more advanced than you should be considering at the moment. Something to consider if you ever want to do MSAA though, FBOs are a much nicer way of changing pixel formats at run-time than creating and destroying your render context ("canvas").
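For reference, the FBO route looks roughly like this. This is only a sketch, assuming a current GL2 context in gl and a known width and height; you do not need it for this question:

// create an FBO with an 8-bit stencil renderbuffer attached
int[] fbo = new int[1], stencilRb = new int[1];
gl.glGenFramebuffers(1, fbo, 0);
gl.glBindFramebuffer(GL.GL_FRAMEBUFFER, fbo[0]);
gl.glGenRenderbuffers(1, stencilRb, 0);
gl.glBindRenderbuffer(GL.GL_RENDERBUFFER, stencilRb[0]);
gl.glRenderbufferStorage(GL.GL_RENDERBUFFER, GL.GL_STENCIL_INDEX8, width, height);
gl.glFramebufferRenderbuffer(GL.GL_FRAMEBUFFER, GL.GL_STENCIL_ATTACHMENT,
                             GL.GL_RENDERBUFFER, stencilRb[0]);
// a color attachment is still required, and many drivers prefer a packed
// depth-stencil format, so always verify completeness:
if (gl.glCheckFramebufferStatus(GL.GL_FRAMEBUFFER) != GL.GL_FRAMEBUFFER_COMPLETE) {
    throw new IllegalStateException("FBO incomplete");
}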
You need a call to glStencilMask(); it's what controls what gets written or not. Set it to write or not write, draw the stencil shape (in your case, the diamond), set glStencilMask() again, and then draw what you want to get clipped.
This has a good sample: Stencil Buffer explanation
EDIT:
OK, I think I found the problem. You need to set your capabilities up at the top of the program.
final GLCapabilities glcapabilities = new GLCapabilities(glprofile);
glcapabilities.setStencilBits(8);
final GLCanvas glcanvas = new GLCanvas(glcapabilities);
The important part being:
glcapabilities.setStencilBits(8);
Thanks to: enabling stencil in jogl

Ambiguous results with Frame Buffers in libgdx

I am getting the following weird results with the FrameBuffer class in libgdx.
Here is the code that is producing this result:
// This is the rendering code
@Override
public void render(float delta) {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
    stage.act();
    stage.draw();

    fbo.begin();
    batch.begin();
    batch.draw(heart, 0, 0);
    batch.end();
    fbo.end();

    test = new Image(fbo.getColorBufferTexture());
    test.setPosition(256, 256);
    stage.addActor(test);
}
// This is the initialization code
@Override
public void show() {
    stage = new Stage(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
    atlas = Assets.getAtlas();
    batch = new SpriteBatch();
    background = new Image(atlas.findRegion("background"));
    background.setFillParent(true);
    heart = atlas.findRegion("fluttering");
    fbo = new FrameBuffer(Pixmap.Format.RGBA8888, heart.getRegionWidth(), heart.getRegionHeight(), false);
    stage.addActor(background);
    Image temp = new Image(new TextureRegion(heart));
    stage.addActor(temp);
}
Why is the heart I drew on the frame buffer flipped and smaller than the original, even though the frame buffer's width and height are the same as the image's (71 x 72)?
Your SpriteBatch is using the wrong projection matrix. Since you are rendering to a custom sized FrameBuffer you will have to manually set one.
projectionMatrix = new Matrix4();
projectionMatrix.setToOrtho2D(0, 0, heart.getRegionWidth(), heart.getRegionHeight());
batch.setProjectionMatrix(projectionMatrix);
To solve this, the frame buffer has to have a width and height equal to that of the stage, like this:
fbo = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
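As for the vertical flip: FrameBuffer color textures in libgdx come out upside down relative to screen coordinates. One common fix (a sketch, not something the answers above covered) is to wrap the texture in a flipped TextureRegion before handing it to the Image:

TextureRegion region = new TextureRegion(fbo.getColorBufferTexture());
region.flip(false, true); // flip on the y axis only
test = new Image(region);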

cocos2d and glReadPixels don't work?

I have a problem with cocos2d and glReadPixels: it doesn't work correctly.
I found code on the web for pixel-perfect collision and modified it for my app, but with animation (or faster animation) it doesn't work.
This is the code:
-(BOOL) isCollisionBetweenSpriteA:(CCSprite*)spr1 spriteB:(CCSprite*)spr2 pixelPerfect:(BOOL)pp
{
    BOOL isCollision = NO;
    CGRect intersection = CGRectIntersection([spr1 boundingBox], [spr2 boundingBox]);

    // Look for simple bounding box collision
    if (!CGRectIsEmpty(intersection))
    {
        // If we're not checking for pixel perfect collisions, return true
        if (!pp) { return YES; }

        // Get intersection info
        unsigned int x = intersection.origin.x;
        unsigned int y = intersection.origin.y;
        unsigned int w = intersection.size.width;
        unsigned int h = intersection.size.height;
        unsigned int numPixels = w * h;
        //NSLog(@"\nintersection = (%u,%u,%u,%u), area = %u", x, y, w, h, numPixels);

        // Draw into the RenderTexture
        [_rt beginWithClear:0 g:0 b:0 a:0];
        // Render both sprites: first one in RED and second one in GREEN
        glColorMask(1, 0, 0, 1);
        [spr1 visit];
        glColorMask(0, 1, 0, 1);
        [spr2 visit];
        glColorMask(1, 1, 1, 1);

        // Get color values of intersection area
        ccColor4B *buffer = malloc(sizeof(ccColor4B) * numPixels);
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        /******* All this is for testing purposes *********/
        // Draw the intersection rectangle in BLUE (testing purposes)
        /**************************************************/

        [_rt end];

        // Read buffer
        // (the loop as originally posted used q < 1, which checks only the first pixel)
        unsigned int step = 1;
        for (unsigned int q = 0; q < numPixels; q += step)
        {
            ccColor4B color = buffer[q];
            if (color.r > 0 && color.g > 0)
            {
                isCollision = YES;
                break;
            }
        }

        // Free buffer memory
        free(buffer);
    }
    return isCollision;
}
Where is the problem? I've tried but found nothing.
Thank you very much.
Regards.
If you are using iOS6, have a look at this post for a solution:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithBool:YES], kEAGLDrawablePropertyRetainedBacking,
                                kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
                                nil];
The explanation is that iOS 6 fixes some bugs in the iOS OpenGL implementation, so that the GL buffer is (correctly) cleared each time it is presented to the screen. Here is what Apple writes about this:
Important: You must call glReadPixels before calling EAGLContext/-presentRenderbuffer: to get defined results unless you're using a retained back buffer.
The correct solution would be calling glReadPixels before the render buffer is presented to the screen; after that, it is invalidated.
The solution above is just a workaround to make the image sort of "sticky".
Be aware that it can impact your app's rendering performance. The point is that if you are using cocos2d, you cannot easily call glReadPixels before the render buffer is presented.
Hope it helps.

OpenTK: Using different colors with a VBO

Situation: I am drawing with OpenGL in C# with the library OpenTK.
.
Problem: I cannot choose which one of my buffers/sets of vertices to draw.
.
Setup-Function:
var vertices = new Vertex[..];

// Create the vertices
foreach( .. )
{
    Byte4 color = new Byte4();
    color.R = 255;
    color.G = 0;
    color.B = 0;
    color.A = 100;

    Vertex vertex;
    vertex.Position = new Vector3(.....);
    vertex.Color = color;

    vertices[index] = vertex;
}
Generate / bind buffers.
vbo_size = vertices.Length;
GL.GenBuffers(1, out vbo_id);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_id);
GL.BufferData<Vertex>(BufferTarget.ArrayBuffer, (IntPtr)(vbo_size * Vertex.SizeInBytes), vertices, BufferUsageHint.StaticDraw);
GL.InterleavedArrays(InterleavedArrayFormat.C4ubV3f, 0, IntPtr.Zero);
* Vertex.SizeInBytes is 16 if this matters.
.
Render-code:
GL.Enable(EnableCap.DepthTest);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
..
GL.Enable(EnableCap.ColorArray);
GL.DrawArrays(BeginMode.Points, 0, vbo_size);
GL.Disable(EnableCap.ColorArray);
..
glControl1.SwapBuffers();
.
What I'd like to do:
In the setup code I create my vertices (Vertex includes position and color). I create one set right now, but I would like to create one more (just the same code with different color values). I did this, and of course it is fine to create it and bind it to a secondary buffer (vbo_id/vbo_secondary_id). But how do I draw it?
Something like this is what I am looking for:
RenderNormalColors()
{
    GL.UseVboId(vbo_id);
    GL.DrawArrays(BeginMode.Points, 0, vbo_size);
}
RenderAlternativeColors()
{
    GL.UseVboId(vbo_id_secondary);
    GL.DrawArrays(BeginMode.Points, 0, vbo_size);
}
GL.DrawArrays seems to take everything, without any control over what to draw.
Everything in the vertices/arrays is identical apart from the colors. I just need to render the same objects - thousands of points - with another "colorscheme".
Any help would be appreciated.
So I fixed it. Kind of how I originally thought the solution would be.
I created one array for the vertices (positions) and two separate arrays with colors (C# "Color").
vertices = new Vector3[evaluations.Count];
colors = new int[evaluations.Count];
altcolors = new int[evaluations.Count];
Then I bound them to different buffers.
vbo_size = vertices.Length; // Necessary for rendering later on

GL.GenBuffers(1, out vbo_id);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_id);
GL.BufferData(BufferTarget.ArrayBuffer,
              new IntPtr(vertices.Length * BlittableValueType.StrideOf(vertices)), // StrideOf gives the byte size of one element
              vertices, BufferUsageHint.StaticDraw);

GL.GenBuffers(1, out vbo_color_id);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_color_id);
GL.BufferData(BufferTarget.ArrayBuffer,
              new IntPtr(colors.Length * BlittableValueType.StrideOf(colors)), // use the colors' own stride, not the vertices'
              colors, BufferUsageHint.StaticDraw);

GL.GenBuffers(1, out vbo_color_id_alt);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_color_id_alt);
GL.BufferData(BufferTarget.ArrayBuffer,
              new IntPtr(altcolors.Length * BlittableValueType.StrideOf(altcolors)),
              altcolors, BufferUsageHint.StaticDraw);
Notice the "vbo_color_id" and "vbo_color_id_alt". These are used in the Render()
// selected_vbo is either vbo_color_id or vbo_color_id_alt
GL.BindBuffer(BufferTarget.ArrayBuffer, selected_vbo);
GL.ColorPointer(4, ColorPointerType.UnsignedByte, sizeof(int), IntPtr.Zero);
GL.EnableClientState(ArrayCap.ColorArray);
GL.EnableClientState(ArrayCap.VertexArray);

GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_id);
GL.VertexPointer(3, VertexPointerType.Float, Vector3.SizeInBytes, new IntPtr(0));

GL.DrawArrays(BeginMode.Points, 0, vbo_size);
I used this to be able to select points in a point cloud. Every point gets, in the alternative colorscheme, a unique color (stored in a dictionary that maps the color to the point's id/index).
When I click the mouse, it retrieves the current pixel and checks the list. If it finds a color that is present in the set, it knows which point was clicked.
This is pretty good since I did not have to use raycasting or octrees or similar checks. Worth noticing, though, is that this makes it impossible to find anything behind the points currently shown on the screen.
I render the alternative colors and pick the pixel-under-mouse color, but I do not call SwapBuffers(), so it never shows on the screen. Then I render again with the correct colors.
Pretty nifty.
public void RenderAlternativeColorsAndPick(int x, int y)
{
    GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_color_id_alt);
    GL.ColorPointer(4, ColorPointerType.UnsignedByte, sizeof(int), IntPtr.Zero);
    GL.EnableClientState(ArrayCap.ColorArray);
    GL.EnableClientState(ArrayCap.VertexArray);

    GL.BindBuffer(BufferTarget.ArrayBuffer, vbo_id);
    GL.VertexPointer(3, VertexPointerType.Float, Vector3.SizeInBytes, new IntPtr(0));

    GL.DrawArrays(BeginMode.Points, 0, vbo_size);

    // Pseudo code, sorry
    GL.GetPixelColor(x, y)
    SelectedPoint = dictionary<color,int>.findValuebyKey(thePixelsColor)
}
Hope this helps someone in the future.