Can anybody explain these WebGL snippets? - glsl

I am learning WebGL and I am thoroughly confused at this point.
I am going through this website, and the comments written alongside the code only half explain things to a beginner like me.
For example:
var canvas = document.getElementById('canvas');
var gl = getWebGLContext(canvas);
if (!gl) {
    return;
}
// Setup GLSL
var program = createProgramFromScripts(gl, ["2d-vertex-shader", "2d-fragment-shader"]);
gl.useProgram(program);
// Look up where the vertex data needs to go
var positionLocation = gl.getAttribLocation(program, 'a_position');
// Create a buffer and put a single CLIPSPACE rectangle in it.
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
    -1.0, -1.0,
     1.0, -1.0,
    -1.0,  1.0,
    -1.0,  1.0,
     1.0, -1.0,
     1.0,  1.0]), gl.STATIC_DRAW);
gl.enableVertexAttribArray(positionLocation);
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
// Draw
gl.drawArrays(gl.TRIANGLES, 0, 6);
In the above snippet, the line
var positionLocation = gl.getAttribLocation(program, 'a_position');
indicates that it looks up where the vertex data needs to go, but I didn't find anything specific about that position in the vertex shader:
attribute vec2 a_position;
void main() {
    gl_Position = vec4(a_position, 0, 1);
}
How can we tell where the position is?
Also, why are we using a Float32Array at all? Is there any scenario where we would use it in a real application? I am totally confused by these shaders.
I also read GLSL Essentials to get some shader knowledge, but I am still confused by these things. Can somebody shed some light on this?

The Float32Array contains all the vertices that will go through the shading pipeline.
In your vertex shader you assign gl_Position to be a 4-dimensional vector whose x and y come from the vertices you passed in. So a_position receives the values from your array, and the vertex shader is run once for every single vertex.
So this shader hardly does anything. In a real application you would do several transformations, lighting operations, etc. here.
If you run this program you should see two triangles being drawn (one rectangle). That is because the array contains six 2D positions (12 floats) that make up two triangles.
Check out more information on the OpenGL pipeline here.
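As a rough illustration of that "real application" case (a sketch only; the u_matrix uniform below is a made-up name, not something from the sample you are following), a vertex shader that applies a 2D transform before writing gl_Position could look like this:
attribute vec2 a_position;
uniform mat3 u_matrix;   // hypothetical 2D transform (translate/rotate/scale)

void main() {
    // transform the 2D position, then promote it to a 4D clip-space coordinate
    vec2 transformed = (u_matrix * vec3(a_position, 1.0)).xy;
    gl_Position = vec4(transformed, 0, 1);
}
On the JavaScript side you would look up u_matrix with gl.getUniformLocation and upload it with gl.uniformMatrix3fv, just like a_position is looked up with gl.getAttribLocation.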

Related

Translating a 3d model to 2d using assimp

I'm using C++ to translate a 3D model, entered using command line arguments, into a 2D picture with Assimp. However, I'm not sure of the best way to go about it. I have the basic hard-coded version that creates a set object, but I need to redo it using vectors and loops. What's the best way to go about it?
void createSimpleQuad(Mesh &m) {
    // Clear out vertices and elements
    m.vertices.clear();
    m.indices.clear();

    // Create four corners
    Vertex upperLeft, upperRight;
    Vertex lowerLeft, lowerRight;
    Vertex upperMiddle;

    // Set positions of vertices
    // Note: glm::vec3(x, y, z)
    upperLeft.position = glm::vec3(-0.5, 0.5, 0.0);
    upperRight.position = glm::vec3(0.5, 0.5, 0.0);
    lowerLeft.position = glm::vec3(-0.5, -0.5, 0.0);
    lowerRight.position = glm::vec3(0.5, -0.5, 0.0);
    upperMiddle.position = glm::vec3(-0.9, 0.5, 0.0);

    // Set vertex colors (red, green, blue, white)
    // Note: glm::vec4(red, green, blue, alpha)
    upperLeft.color = glm::vec4(1.0, 0.0, 0.0, 1.0);
    upperRight.color = glm::vec4(0.0, 1.0, 0.0, 1.0);
    lowerLeft.color = glm::vec4(0.0, 0.0, 1.0, 1.0);
    lowerRight.color = glm::vec4(1.0, 1.0, 1.0, 1.0);
    upperMiddle.color = glm::vec4(0.5, 0.15, 0.979797979, 1.0);

    // Add to mesh's list of vertices
    m.vertices.push_back(upperLeft);
    m.vertices.push_back(upperRight);
    m.vertices.push_back(lowerLeft);
    m.vertices.push_back(lowerRight);
    m.vertices.push_back(upperMiddle);

    // Add indices for three triangles
    m.indices.push_back(0);
    m.indices.push_back(3);
    m.indices.push_back(1);
    m.indices.push_back(0);
    m.indices.push_back(2);
    m.indices.push_back(3);
    m.indices.push_back(0);
    m.indices.push_back(2);
    m.indices.push_back(4);
}
If you want to generate a 2D picture out of a 3D model, you need to:
Import the model
Render it via a common render lib into a texture, or manually by using our viewer and taking a snapshot
At the moment there is no post-process in Assimp that generates a 2D view automatically.
But if you want to do this with your own render code, it is not that hard. After importing your model you have to:
Get the bounding box of your imported asset; check the OpenGL samples in the Assimp repo for some tips
Calculate the diameter of this bounding box.
Create a camera; for OpenGL you can use glm to calculate the view matrix (see the sketch after this list)
Place the asset at the world-coordinate origin (0|0|0)
Move your camera back by the diameter and let it look at (0|0|0)
Render the view into a 2D texture, or just take a screenshot
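A rough sketch of the bounding-box and camera steps above, assuming glm and assuming the asset's min/max corners have already been gathered from the imported scene (the function and variable names are made up for illustration; here the camera looks at the box center instead of moving the asset to the origin, which amounts to the same view):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeViewMatrix(const glm::vec3 &bbMin, const glm::vec3 &bbMax)
{
    glm::vec3 center = 0.5f * (bbMin + bbMax);    // where the asset currently sits
    float diameter = glm::length(bbMax - bbMin);  // rough overall size of the asset

    // back the camera away from the center by the diameter and look at the center
    glm::vec3 eye = center + glm::vec3(0.0f, 0.0f, diameter);
    return glm::lookAt(eye, center, glm::vec3(0.0f, 1.0f, 0.0f));
}
You would then render with this view matrix (plus a projection matrix) into an FBO-backed texture, or simply read the framebuffer back as a screenshot.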

shapes skewed when rotated, using openGL, glm math, orthographic projection

For practice I am setting up a 2d/orthographic rendering pipeline in openGL to be used for a simple game, but I am having issues related to the coordinate system.
In short, rotations distort 2d shapes, and I cannot seem to figure why. I am also not entirely sure that my coordinate system is sound.
First I looked for previous answers, but the most relevant one ("2D opengl rotation causes sprite distortion") indicates that the problem was an incorrect ordering of transformations; for now, though, I am using just a view matrix and a projection matrix, multiplied in the correct order in the vertex shader:
gl_Position = projection * view * model * vec4(a_position, 1.0); // (the model matrix is just the identity matrix)
To summarize my setup so far:
- I am successfully uploading a quad that should stretch across the whole screen:
GLfloat vertices[] = {
    -wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top left
    -wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom left
     wf, -hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // bottom right
     wf,  hf, 0.0f, 0.0, 0.0, 1.0, 1.0, // top right
};
GLuint indices[] = {
    0, 1, 2, // first Triangle
    2, 3, 0, // second Triangle
};
wf and hf are 1, and I am trying to use a -1 to 1 coordinate system so I don't need to scale by the resolution in shaders (though I am not sure that this is correct to do.)
My viewport and orthographic matrix:
glViewport(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
...
glm::mat4 mat_ident(1.0f);
glm::mat4 mat_projection = glm::ortho(-1.0f, 1.0f, -1.0f, 1.0f, -1.0f, 1.0f);
... though this clearly does not factor in the screen width and height. I have seen others use width and height instead of 1s, but this seems to break the system or display nothing.
I rotate with a static method that modifies a struct containing a glm::quaternion (dividing the time by 1000 to get seconds):
main_cam.rotate((GLfloat)curr_time / TIME_UNIT_TO_SECONDS, 0.0f, 0.0f, 1.0f);
// which does: glm::angleAxis(angle, glm::vec3(x, y, z) * orientation)
Lastly, I pass the matrix as a uniform:
glUniformMatrix4fv(MAT_LOC, 1, GL_FALSE, glm::value_ptr(mat_projection * FreeCamera_calc_view_matrix(&main_cam) * mat_ident));
...and multiply in the vertex shader
gl_Position = u_matrix * vec4(a_position, 1.0);
v_position = a_position.xyz;
The full-screen quad rotates on its center (0, 0 as I wanted), but its length and width distort, which means that I didn't set something correctly.
My best guess is that I haven't created the right ortho matrix, but admittedly I have had trouble finding anything else on stack overflow or elsewhere that might help debug. Most answers suggest that the matrix multiplication order is wrong, but that is not the case here.
A secondary question: should I not set my coordinates to -1/1 in the context of a 2D game? I did so in order to make writing shaders easier. I am also concerned about character/object movement once I add model matrices.
What might be causing the issue? If I need to multiply the arguments to glm::ortho by width and height, then how do I transform coordinates so that v_position (my "in"/"varying" interpolated version of the position attribute) works in -1 to 1 as it should in a shader? What are the implications of choosing a particular coordinate system when it comes to ease of placing entities? The game will use sprites and textures, so I was considering a pixel coordinate system, but that quickly became very challenging to reason about on the shader side. I would much rather have THIS working.
Thank you for your help.
EDIT: Is it possible that my varying/interpolated v_position should be set to the calculated gl_Position value instead of the attribute position?
Try accounting for the aspect ratio of the window you are displaying on by scaling the first two parameters of glm::ortho accordingly:
GLfloat aspectRatio = (GLfloat)SCREEN_WIDTH / (GLfloat)SCREEN_HEIGHT; // cast to avoid integer division
glm::mat4 mat_projection = glm::ortho(-aspectRatio, aspectRatio, -1.0f, 1.0f, -1.0f, 1.0f);

glVertexAttrib4fv won't pass to shader input at location 0

I'm trying to learn OpenGL and Rust at the same time. I'm using the OpenGL Superbible Sixth Edition, and got stuck in chapter 3 which introduces the function glVertexAttrib4fv to offset the position of a triangle. It worked fine when I did it in C++, but when I tried to translate it to Rust, the triangle disappeared. I've tried to reduce the example as much as possible to the following code (cargo dependencies are glutin = "*" and gl = "*"):
main.rs
extern crate glutin;
extern crate gl;
use std::io::Read;
fn main() {
    unsafe {
        let win = glutin::Window::new().unwrap();
        win.make_current().unwrap();
        gl::load_with(|s| win.get_proc_address(s));
        let program = build_shader_program();
        gl::UseProgram(program);
        let mut vao = std::mem::uninitialized();
        gl::GenVertexArrays(1, &mut vao);
        gl::BindVertexArray(vao);
        let red = [1.0, 0.0, 0.0, 1.0];
        let mut running = true;
        while running {
            for event in win.poll_events() {
                if let glutin::Event::Closed = event {
                    running = false;
                }
            }
            win.swap_buffers().unwrap();
            gl::ClearBufferfv(gl::COLOR, 0, &red[0]);
            let attrib = [0.5, 0.0, 0.0, 0.0];
            panic_if_error("before VertexAttrib4fv");
            gl::VertexAttrib4fv(0, &attrib[0]);
            panic_if_error("after VertexAttrib4fv");
            gl::DrawArrays(gl::TRIANGLES, 0, 3);
        }
    }
}
fn panic_if_error(message: &str) {
    unsafe {
        match gl::GetError() {
            gl::NO_ERROR => (),
            _ => panic!("{}", message),
        }
    }
}
fn load_file_as_cstring(path: &str) -> std::ffi::CString {
    let mut contents = Vec::new();
    let mut file = std::fs::File::open(path).unwrap();
    file.read_to_end(&mut contents).unwrap();
    std::ffi::CString::new(contents).unwrap()
}
fn load_and_compile_shader(path: &str, shader_type: u32) -> u32 {
    let contents = load_file_as_cstring(path);
    unsafe {
        let shader_id = gl::CreateShader(shader_type);
        let source_ptr = contents.as_ptr();
        gl::ShaderSource(shader_id, 1, &source_ptr, std::ptr::null());
        gl::CompileShader(shader_id);
        let mut result = std::mem::uninitialized();
        gl::GetShaderiv(shader_id, gl::COMPILE_STATUS, &mut result);
        assert_eq!(result, gl::TRUE as i32);
        shader_id
    }
}
fn build_shader_program() -> u32 {
    let vert = load_and_compile_shader("a.vert", gl::VERTEX_SHADER);
    let frag = load_and_compile_shader("a.frag", gl::FRAGMENT_SHADER);
    unsafe {
        let program_id = gl::CreateProgram();
        gl::AttachShader(program_id, vert);
        gl::AttachShader(program_id, frag);
        gl::LinkProgram(program_id);
        let mut result = std::mem::uninitialized();
        gl::GetProgramiv(program_id, gl::LINK_STATUS, &mut result);
        assert_eq!(result, gl::TRUE as i32);
        program_id
    }
}
a.frag
#version 430 core
out vec4 color;
void main() {
    color = vec4(1.0, 1.0, 1.0, 1.0);
}
a.vert
#version 430 core
layout (location = 0) in vec4 offset;
void main() {
    const vec4 vertices[3] = vec4[3](vec4( 0.25, -0.25, 0.5, 1.0),
                                     vec4(-0.25, -0.25, 0.5, 1.0),
                                     vec4( 0.25,  0.25, 0.5, 1.0));
    gl_Position = vertices[gl_VertexID];             // LINE 1
    // gl_Position = vertices[gl_VertexID] + offset; // LINE 2
}
This code, as is, produces a white triangle in the middle of a red window.
Now, my expectation is that when I comment out LINE 1 in the vertex shader, and uncomment LINE 2, the triangle should move a quarter of a screen to the right, due to this code in "main.rs":
let attrib = [0.5, 0.0, 0.0, 0.0];
panic_if_error("before VertexAttrib4fv");
gl::VertexAttrib4fv(0, &attrib[0]);
panic_if_error("after VertexAttrib4fv");
But instead, the triangle disappears altogether. The panic_if_error call before and after gl::VertexAttrib4fv ensures that gl::GetError returns gl::NO_ERROR.
Question: Does anybody know why this is happening?
Other things of note. While I was searching for the answer to this, I came upon this question, where the user is having a similar problem (except in C++, where I had no problem). Anyway, one of the comments there incidentally led me to try changing the location from 0 to 1, as in this:
layout (location = 1) in vec4 offset;
for the vertex shader, and this for the call to gl::VertexAttrib4fv:
gl::VertexAttrib4fv(1, &attrib[0]);
Well, that worked, but I have no idea why, and would still like to know what the problem is with using location 0 there (since that's what the book shows, and it worked fine in C++).
You need to make sure that you have a Core Profile context. If you do not specify this, you may be creating a Compatibility Profile context. In the Compatibility Profile, vertex attribute 0 has a special meaning. From the OpenGL 3.2 Compatibility Profile spec:
Setting generic vertex attribute zero specifies a vertex; the four vertex coordinates are taken from the values of attribute zero. A Vertex2, Vertex3, or Vertex4 command is completely equivalent to the corresponding VertexAttrib* command with an index of zero. Setting any other generic vertex attribute updates the current values of the attribute. There are no current values for vertex attribute zero.
In other words, vertex attribute 0 is an alias for the fixed function vertex position in the compatibility profile.
The above does not apply in the Core Profile. Vertex attribute 0 has no special meaning there and can be used like any other vertex attribute.
Based on what you already found, you need to use the with_gl_profile method with argument Core to specify that you want to use the core profile when creating the window.
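A minimal sketch of what that could look like with glutin's window builder (treat the exact calls as an assumption to check against the glutin version you are using, since the builder API has changed between releases):
let win = glutin::WindowBuilder::new()
    .with_gl(glutin::GlRequest::Specific(glutin::Api::OpenGl, (4, 3)))
    .with_gl_profile(glutin::GlProfile::Core)
    .build()
    .unwrap();
With a 4.3 Core Profile context requested to match the #version 430 core shaders, vertex attribute 0 behaves like any other attribute and the offset input at location 0 should receive the value set by gl::VertexAttrib4fv.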

How Can I use glClipPlane more than 6 times in OPENGL?

I have a sphere. I would like to clip it with some planes, like in the picture below. I need more than 10 clipping planes, but the maximum glClipPlane limit is 6. How can I solve this problem?
My sample code is below:
double[] eqn = { 0.0, 1.0, 0.0, 0.72};
double[] eqn2 = { -1.0, 0.0, -0.5, 0.80 };
double[] eqnK = { 0.0, 0.0, 1.0, 0.40 };
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE0, eqn);
Gl.glEnable(Gl.GL_CLIP_PLANE0);
/* */
Gl.glClipPlane(Gl.GL_CLIP_PLANE1, eqn2);
Gl.glEnable(Gl.GL_CLIP_PLANE1);
Gl.glClipPlane(Gl.GL_CLIP_PLANE2, eqnK);
Gl.glEnable(Gl.GL_CLIP_PLANE2);
//// draw sphere
Gl.glColor3f(0.5f, .5f, 0.5f);
Glu.gluSphere(quadratic, 0.8f, 50, 50);
Glu.gluDeleteQuadric(quadratic);
Gl.glDisable(Gl.GL_CLIP_PLANE0);
Gl.glDisable(Gl.GL_CLIP_PLANE1);
Gl.glDisable(Gl.GL_CLIP_PLANE2);
You should consider multi-pass rendering and the stencil buffer.
Say you need 10 user clip planes and you are limited to 6: you can set up the first 6, render the scene into the stencil buffer, and then do a second pass with the remaining 4 clip planes. You would then use the stencil buffer to reject parts of the screen that were clipped in the prior pass. This way you get the effect of 10 user clip planes even though the implementation only supports 6.
// In this example you want 10 clip planes but you can only do 6 per-pass,
// so you need 1 extra pass.
const int num_extra_clip_passes = 1;
glClear (GL_STENCIL_BUFFER_BIT);
// Disable color and depth writes for the extra clipping passes
glDepthMask (GL_FALSE);
glColorMask (GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// Increment the stencil buffer value by 1 for every part of the sphere
// that is not clipped.
glStencilOp (GL_KEEP, GL_KEEP, GL_INCR);
glStencilFunc (GL_ALWAYS, 1, 0xFFFF);
// Setup Clip Planes: 0 through 5
// Draw Sphere
// Reject any part of the sphere that did not pass _all_ of the clipping passes
glStencilFunc (GL_EQUAL, num_extra_clip_passes, 0xFFFF);
// Re-enable color and depth writes
glDepthMask (GL_TRUE);
glColorMask (GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// Setup Leftover Clip Planes
// DrawSphere
It is not perfect, it is quite fill-rate intensive, and it limits you to a total of 1536 clip planes (given an 8-bit stencil buffer and 6 planes per pass), but it will get the job done without resorting to features present only in GLSL 130+ (namely gl_ClipDistance[]).
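For reference, the gl_ClipDistance route mentioned above would look roughly like this in a GLSL 1.30 vertex shader (a sketch with made-up uniform names, not code from this answer; each plane also needs glEnable(GL_CLIP_DISTANCE0 + i) on the CPU side, and the usable count is capped by GL_MAX_CLIP_DISTANCES, which is only guaranteed to be at least 8):
#version 130
uniform mat4 u_modelView;
uniform mat4 u_projection;
uniform vec4 u_plane0;   // one (a, b, c, d) plane equation, given in eye space
in vec4 a_position;

void main() {
    vec4 eyePos = u_modelView * a_position;
    gl_Position = u_projection * eyePos;
    gl_ClipDistance[0] = dot(eyePos, u_plane0);   // negative distance = vertex is clipped
}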
You can also just reuse Gl.glEnable(Gl.GL_CLIP_PLANE1), because you disable it again later ...

How to draw a filled envelop like a cone on OpenGL (using GLUT)?

I am using freeglut for OpenGL rendering...
I need to draw an envelope, looking like a cone (2D), that has to be filled with some color and have some transparency applied.
Is the freeglut toolkit equipped with built-in functionality to draw such filled geometries (or is there some trick)?
Or is there some other API that has built-in support for filled geometries?
Edit1:
Just to clarify the 2D cone thing... the envelope is the graphical interpretation of the coverage area of an aircraft during interception (of an enemy aircraft)... it resembles a sector of a circle; I should have mentioned a sector instead.
glutSolidCone does not help me, as I want to draw a filled sector of a circle... which I have already done; what remains is to fill it with some color...
How do I fill geometries with color in OpenGL?
Edit2:
All the answers posted to this question can work for my problem in one way or another.
But I would definitely want to know how to fill a geometry with some color.
Say I want to draw an envelope which is a parabola... in that case there would be no default GLUT function to actually draw a filled parabola (or is there one?).
So to generalise this question: how do I draw a custom geometry in some solid color?
Edit3:
The answer that mstrobl posted works for GL_TRIANGLES, but for code such as this:
glBegin(GL_LINE_STRIP);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glEnd();
which draws a square... only a wireframe square is drawn, and I need to fill it with blue.
Is there any way to do it?
If I issue drawing commands for a closed curve, like a pie, and I need to fill it with a color, is there a way to make that possible?
I don't know how it is possible for GL_TRIANGLES... but how do I do it for any closed curve?
On Edit3: The way I understand your question is that you want to have OpenGL draw borders and anything between them should be filled with colors.
The idea you had was right, but a line strip is just that - a strip of lines, and it does not have any area.
You can, however, have the lines connect to each other to define a polygon. That will fill out the area of the polygon on a per-vertex basis. Adapting your code:
glBegin(GL_POLYGON);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 0.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(200.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 200.0, 0.0);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(0.0, 0.0, 0.0);
glEnd();
Please note however, that drawing a polygon this way has two limitations:
The polygon must be convex.
This is a slow operation.
But I assume you just want to get the job done, and this will do it. For the future you might consider just triangulating your polygon.
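As a rough sketch of that triangulated route (my own example, not part of the original answer), the same filled square can be drawn as a triangle fan; for a convex shape like this, the fan is already a valid triangulation:
glColor3f(0.0, 0.0, 1.0);
glBegin(GL_TRIANGLE_FAN);
glVertex3f(0.0, 0.0, 0.0);      /* first vertex is shared by every triangle in the fan */
glVertex3f(200.0, 0.0, 0.0);
glVertex3f(200.0, 200.0, 0.0);
glVertex3f(0.0, 200.0, 0.0);
glEnd();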
I'm not sure what you mean by "an envelope", but a cone is a primitive that GLUT has:
glutSolidCone(radius, height, number_of_slices, number_of_stacks)
The easiest way to fill it with color is to draw it with color. Since you want to make it somewhat transparent, you need an alpha value too:
glColor4f(float red, float green, float blue, float alpha)
// rgb and alpha run from 0.0f to 1.0f; in the example here alpha of 1.0 will
// mean no transparency, 0.0 total transparency. Call before drawing.
To render translucently, blending has to be enabled. And you must set the blending function to use. What you want to do will probably be achieved with the following. If you want to learn more, drop me a comment and I will look for some good pointers. But here goes your setup:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Call that before doing any drawing operations, possibly at program initialization. :)
Since you reclarified your question to ask for a pie: there's an easy way to draw that too using OpenGL primitives.
You'd draw a solid sphere using glutSolidSphere(). However, since you only want to draw part of it, you just clip the unwanted parts away:
void glClipPlane(GLenum plane, const GLdouble * equation);
With plane being GL_CLIP_PLANE0 to GL_CLIP_PLANEn, and equation being a plane equation in normal form (a*x + b*y + c*z + d = 0 means equation would hold the values { a, b, c, d }). Please note that those are doubles and not floats.
I remember there was a subroutine for that. But it's not too hard to do yourself either.
But I don't understand the 2D thing. A cone in 2D? Isn't that just a triangle?
Anyway, here's an algorithm for drawing a cone in OpenGL:
First take a circle and subdivide it evenly so that you get a nice number of edges.
Now pick the center of the circle and make triangles from the edges to the center. Then select a point above the circle and make triangles from the edges to that point.
The size, shape and orientation depend on the values you use to generate the circle and the two points. Every step is rather simple and shouldn't cause you trouble.
First just subdivide a scalar range. Start from the range [0, 2]. Take the midpoint ((start + end) / 2) and split the range with it. Store the values as pairs; for instance, subdividing once should give you [(0, 1), (1, 2)]. Do this recursively a couple of times, then calculate where those points land on the circle. Simple trigonometry; just remember to multiply the values by π before proceeding. After this you have a certain number of edges: 2^n, where n is the number of subdivisions. Then you can simply turn them into triangles by giving each edge one more vertex point, so the number of triangles ends up being 2^(n+1). (The counts are useful to know if you are doing this with fixed-size arrays.)
Edit: What you really want is a pie. (Sorry for the pun.)
It's equally simple to render. You can again use just triangles. Just select the scalar range [-0.25, 0.25], subdivide, project onto the circle, and generate one set of triangles.
The scalar-to-circle projection is simply x = cos(v*pi)*r, y = sin(v*pi)*r, where (x, y) is the resulting vertex, r is the radius, and the trigonometric functions work in radians, not degrees (if yours work in degrees, replace pi with 180).
Use vertex buffers or lists to render it yourself.
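A quick immediate-mode sketch of that pie (my own illustration, not code from the answer; the step count and radius are arbitrary, and cosf/sinf need <math.h>):
const int steps = 32;              /* subdivisions along the arc */
const float radius = 0.8f;
const float pi = 3.14159265f;

glColor4f(0.0f, 0.0f, 1.0f, 0.5f); /* translucent blue; requires blending to be enabled */
glBegin(GL_TRIANGLE_FAN);
glVertex2f(0.0f, 0.0f);            /* center of the pie */
for (int i = 0; i <= steps; ++i) {
    /* walk the scalar range [-0.25, 0.25] and project it onto the circle */
    float v = -0.25f + 0.5f * (float)i / (float)steps;
    glVertex2f(cosf(v * pi) * radius, sinf(v * pi) * radius);
}
glEnd();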
Edit: About the coloring question: use glColor4f. If you want some parts of the geometry to differ in color, you can assign a color to each vertex in the vertex buffer itself. I don't know all the API calls for that off the top of my head, but the OpenGL API reference is quite understandable.
On the edit on colors:
OpenGL is actually a state machine. This means that the current material and/or color is used when drawing. Since you probably won't be using materials, ignore that for now. You want colors.
glColor3f(float r, float g, float b) // draw with r/g/b color and alpha of 1
glColor4f(float r, float g, float b, float alpha)
This will affect the colors of any vertices you draw, of any geometry you render (be it GLU's or your own) after the glColorXX call has been executed. If you draw a face with vertices and change the color in between the glVertex3f/glVertex2f calls, the colors are interpolated.
Try this:
glBegin(GL_TRIANGLES);
glColor3f(0.0, 0.0, 1.0);
glVertex3f(-3.0, 0.0, 0.0);
glColor3f(0.0, 1.0, 0.0);
glVertex3f(0.0, 3.0, 0.0);
glColor3f(1.0, 0.0, 0.0);
glVertex3f(3.0, 0.0, 0.0);
glEnd();
But I pointed at glColor4f already, so I assume you want to set the colors on a per-vertex basis. And you want to render using vertex arrays.
Just like you can supply a list of vertices, you can also supply a list of colors: all you need to do is enable the color array and tell OpenGL where the list resides. Of course, it needs to have the same layout as the vertex list (same order).
If you had
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices_);
glDisableClientState(GL_VERTEX_ARRAY);
you should add colors this way. They need not be float; in fact, you tell it what format it should be. For a color list with 1 byte per channel and 4 channels (R, G, B and A) use this:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices_);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors_);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
EDIT: Forgot to add that you then have to tell OpenGL which elements to draw by calling glDrawElements.
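For example (a sketch, assuming vertices_ and colors_ are laid out as above and that a hypothetical indices_ array lists three indices per triangle, num_indices in total):
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices_);
glColorPointer(4, GL_UNSIGNED_BYTE, 0, colors_);

/* draw the triangles described by the index list */
glDrawElements(GL_TRIANGLES, num_indices, GL_UNSIGNED_INT, indices_);

glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);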