I'm trying to show a texture (yes, it is a pot) with OpenGL 2.1 and GLSL 120, but I'm not sure how to do it; all I can get is a black quad. I've been following these tutorials: A Textured Cube, OpenGL - Textures, and what I have understood is that I need to:
Specify the texture coordinates to attach to each vertex (in my case there are 6 vertices, a quad without indexing)
Load the texture and bind it to a texture unit (the default is 0)
Call glDrawArrays
Inside the shaders I need to:
Receive the texture coordinates in an attribute in the vertex shader and pass them to the fragment shader through a varying variable
In the fragment shader, use a sampler object to sample the texture at the position specified by the varying variable.
Is it all correct?
Here is how I create the texture VBO and load the texture:
void Application::onStart(){
unsigned int format;
SDL_Surface* img;
float quadCoords[] = {
-0.5f, -0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
0.5f, 0.5f, 0.0f,
-0.5f, 0.5f, 0.0f,
-0.5f, -0.5f, 0.0f};
const float texCoords[] = {
0.0f, 0.0f,
1.0f, 0.0f,
1.0f, 1.0f,
1.0f, 1.0f,
0.0f, 1.0f,
0.0f, 0.0f};
//shader loading omitted ...
sprogram.bind(); // call glUseProgram(programId)
//set the sampler value to 0 -> use texture unit 0
sprogram.loadValue(sprogram.getUniformLocation(SAMPLER), 0);
//quad
glGenBuffers(1, &quadBuffer);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*18, quadCoords, GL_STATIC_DRAW);
//texture
glGenBuffers(1, &textureBuffer);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*12, texCoords, GL_STATIC_DRAW);
//load texture
img = IMG_Load("resources/images/crate.jpg");
if(img == nullptr)
throw std::runtime_error(SDL_GetError());
glGenTextures(1, &this->texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, this->texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img->w, img->h, 0, GL_RGBA, GL_UNSIGNED_BYTE, img->pixels);
SDL_FreeSurface(img);
}
rendering phase:
glClear(GL_COLOR_BUFFER_BIT);
glEnableVertexAttribArray(COORDS);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(COORDS, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(TEX_COORDS);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(TEX_COORDS, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
//draw the vertices
glDrawArrays(GL_TRIANGLES, 0, 6);
vertex shader:
#version 120
attribute vec3 coord;
attribute vec2 texCoord;
varying vec2 UV;
void main(){
gl_Position = vec4(coord.x, coord.y, coord.z, 1.0);
UV = texCoord;
}
fragment shader:
#version 120
uniform sampler2D tex;
varying vec2 UV;
void main(){
gl_FragColor.rgb = texture2D(tex, UV).rgb;
gl_FragColor.a = 1.0;
}
I know that the tutorials use out instead of varying, so I tried to "convert" the code. There is also this tutorial: Simple Texture - LightHouse, which explains the built-in gl_MultiTexCoord0 attribute and gl_TexCoord array, but that is almost the same thing I'm doing. I want to know if I'm doing it all right, and if not, I would like to know how to show a simple 2D texture on the screen with OpenGL 2.1 and GLSL 120.
Do you have a particular reason to use OpenGL 2.1 with GLSL version 1.20? If not, stick to OpenGL 3.0, because it's easier to understand, IMHO.
My guess is you have two big problems:
First of all, the black quad: if it occupies your whole app window, then what you are seeing is the background color, which means nothing is being drawn at all.
I think (from testing this) OpenGL has a default program that will activate even if you have already set up a vertex array/buffer object on the GPU; it should render as a white quad in your window... So that might be your first problem. I don't know whether OpenGL 2.1 has vertex array objects, but OpenGL 3.0 does, and you should definitely make use of them!
Second: you don't use your shader program in the rendering phase.
Call this function before drawing your quad:
glUseProgram(myProgram); // The myProgram variable is your compiled shader program
If by any chance you would like me to explain how to draw your quad using OpenGL 3.0+, let me know :) It is not far from what you already wrote in your code.
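In short, the 3.0+ version would look roughly like the sketch below (assuming your attribute locations are COORDS = 0 and TEX_COORDS = 1 and the program is already compiled; untested, just to show the idea):
// setup, done once
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, quadBuffer);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, textureBuffer);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
glEnableVertexAttribArray(1);
// rendering phase
glUseProgram(myProgram); // your compiled shader program
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 6);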
I'm starting to understand that a vertex shader handles transformations of my texture, while the fragment shader handles individual pixels. But this vector math is confusing.
What I'm trying to do is render a sprite from a sprite sheet. I can render a whole image just fine, but now I'm actually trying to write my own shader.
I think it's more efficient to have the graphics card do the heavy lifting. That being said,
Currently I draw whole images like so:
In my init step,
void TextureRenderer::initRenderData()
{
// Configure VAO/VBO
game_uint VBO;
game_float vertices[] = {
// Pos // Tex
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f
};
glGenVertexArrays(1, &this->quadVAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(this->quadVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(game_float), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
Then, when it's time to draw any texture:
void TextureRenderer::DrawTexture(Texture2D &texture, vec2 position, vec2 size, game_float rotate, vec3 color)
{
// Prepare transformations
this->shader->Use();
glm::mat4 model;
model = glm::translate(model, vec3(position, 0.0f)); // First translate (transformations are: scale happens first, then rotation, and finally translation; reversed order)
model = glm::translate(model, vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // Move origin of rotation to center of quad
model = glm::rotate(model, rotate, vec3(0.0f, 0.0f, 1.0f)); // Then rotate
model = glm::translate(model, vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // Move origin back
model = glm::scale(model, vec3(size, 1.0f)); // Last scale
this->shader->SetMatrix4("model", model);
// Render textured quad
this->shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0);
texture.Bind();
glBindVertexArray(this->quadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
}
TextureShader.vs:
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
void main()
{
TexCoords = vertex.zw;
gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
Fragment Shader:
#version 330 core
in vec2 TexCoords; //Position
out vec4 color;
uniform sampler2D image;
uniform vec3 spriteColor;
void main()
{
color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now that works all fine and dandy (assuming proper OpenGL setup, etc.).
But I'd like to apply this to a sprite sheet shader and have the GPU handle the math to draw it.
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,game_float spriteHeight,game_float spriteWidth,int frameIndex)
{
shader->Use(); // Load a different shader here.
shader->SetInteger("frameindex", frameIndex);
shader->SetVector2f("position", position);
shader->SetFloat("spriteHeight", spriteHeight);
shader->SetFloat("spriteWidth", spriteWidth);
shader->SetMatrix4("model", model);
shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0); // Select texture unit 0
texture.Bind(); //Bind my texture.
glBindVertexArray(this->quadVAO); //Bind the fullscreen quad
glDrawArrays(GL_TRIANGLES, 0, 6); //Draw
glBindVertexArray(0); //Unbind the quad.
}
I assume:
Inside the vertex shader, I manipulate the VAO quad to the position it has on the canvas, then set the color of the pixels in the fragment shader to that specific region.
How would that be done?
Or would it be better for me to pre-calculate a VAO array for each sprite in a sprite class? Then each draw call would be:
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,Sprite s)
Where the sprite has these vertices stored.
I've seen:
Techniques for drawing spritesheets in OpenGL with shaders
Somewhat similar, but I'd like to have the GPU handle the math altogether.
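For clarity, this is roughly what I imagine the vertex shader doing. It is an untested sketch: frameindex, spriteWidth and spriteHeight are the uniforms my drawSprite already sets, while sheetSize (the sheet's dimensions in pixels) is a hypothetical extra uniform I would have to add:
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
uniform int frameindex;
uniform float spriteWidth;
uniform float spriteHeight;
uniform vec2 sheetSize; // hypothetical: sprite sheet size in pixels
void main()
{
    int columns = int(sheetSize.x / spriteWidth); // frames per row in the sheet
    vec2 frameOrigin = vec2(float(frameindex % columns) * spriteWidth,
                            float(frameindex / columns) * spriteHeight);
    // scale the quad's 0..1 texCoords into the frame's sub-rectangle of the sheet
    TexCoords = (frameOrigin + vertex.zw * vec2(spriteWidth, spriteHeight)) / sheetSize;
    gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}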
I'm trying to create a 2D game using OpenGL. My helper libraries are GLFW, GLEW, SOIL (for images) and glm (for maths).
I've managed to draw a rectangle with a very simple texture on it. But when I try to move this rectangle, it frequently has a little flicker (the interval between flickers is almost always the same), and the faster I move it, the more visible it becomes.
Another problem: I'm working on a laptop, and it renders fine with my integrated graphics (fine as in it works, though it still stutters), but when I execute my program with my Nvidia graphics card it just shows my clear color and nothing else, which is extremely odd. My sprite translation happens in the runCallback code (called in the main loop) and is just a multiplication of matrices; the resulting matrix is then fetched and used in the draw code. In the drawCallback there is just the DrawSprite function being called. Also, I should note I'm using OpenGL 3.3.
I'm sorry in advance for the rather large amount of code, but after extensive testing and trying a multitude of things, I have no idea where my mistake lies...
If you would like to help me but need any more information, I will provide it.
UPDATE:
The problem with the Nvidia graphics card has been resolved; it was due to a wrong uniform parameter. But the stutter remains.
IMAGE LOADING CODE
SD_Texture::SD_Texture(const char * fileName)
{
texture = new GLuint;
unsigned char* data = SOIL_load_image(fileName, &width, &height, 0, SOIL_LOAD_RGB);
glGenTextures(1, (GLuint*)texture);
glBindTexture(GL_TEXTURE_2D, *((GLuint*)(texture)));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
SOIL_free_image_data(data);
}
void SD_Texture::Bind()
{
glBindTexture(GL_TEXTURE_2D, *(GLuint*)texture);
}
VAO SETUP CODE
void SD_Window::SetupVAO()
{
GLfloat vertices[] =
{
-0.5f, 0.5f, 0.0f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.0f, 0.0f, 0.0f,
0.5f, -0.5, 0.0f, 1.0, 0.0f,
-0.5f, 0.5f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.0f, 1.0f, 1.0f
};
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(GL_FLOAT), (GLvoid*)nullptr);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(GL_FLOAT), (GLvoid*)(3 * sizeof(GLfloat)));
glBindVertexArray(0);
}
DRAW CODE
void SD_Window::DrawSprite(SD_Sprite * sprite)
{
glActiveTexture(GL_TEXTURE0);
sprite->GetTexture()->Bind();
glUniformMatrix4fv(transformUniform, 1, GL_FALSE, glm::value_ptr(ortho * sprite->GetTransform()));
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
}
MAIN LOOP CODE
void SD_Window::TakeControl(void (*runCallback)(float delta), void (*drawCallback)())
{
double currentTime;
double oldTime = 0.0f;
while (!ShouldClose())
{
currentTime = glfwGetTime();
glClear(GL_COLOR_BUFFER_BIT);
drawCallback();
glfwSwapBuffers(window);
runCallback(currentTime - oldTime);
oldTime = currentTime;
glfwPollEvents();
}
glfwDestroyWindow(window);
glfwTerminate();
}
VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec2 texCoord;
out vec2 sCoord;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(position, 1.0f);
sCoord = vec2(texCoord.x, 1.0f - texCoord.y);
}
FRAGMENT SHADER
#version 330 core
in vec2 sCoord;
out vec4 color;
uniform sampler2D sTexture;
void main()
{
color = texture(sTexture, sCoord);
}
When sending two textures to my GLSL shader, only one actually arrives. What is strange is that the first texture I bind is used for both texture slots in my shader. This leads me to believe the way I am passing my textures in OpenGL is wrong. However, I am unable to track down the problem.
Here is the code where I configure the textures for use in my shader.
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo2);
glPushAttrib(GL_VIEWPORT_BIT | GL_ENABLE_BIT);
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Get uniforms
GLuint pass_3O = glGetUniformLocation(blend_shader, "org");
GLuint pass_3B = glGetUniformLocation(blend_shader, "blur");
// Activate shaders
glUseProgram(blend_shader);
// Bind first texture
glActiveTexture(GL_TEXTURE0 );
glBindTexture(GL_TEXTURE_2D, init_texture);
// Bind the second texture
glActiveTexture(GL_TEXTURE1 );
glBindTexture(GL_TEXTURE_2D, third_texture);
// Assign index to 2d images
glUniform1i(pass_3O, 0);
glUniform1f(pass_3B, 1);
The code above is passing in two textures. The first is a 2D image from the first rendering pass of the 3D scene. The second (third_texture) is that same texture with two levels of blur added. This final stage blends them together for a poor man's bloom.
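(The eventual blend I am after is just an additive mix of the two samples in the fragment shader, something like gl_FragColor = orig_clr + blur_clr; for now I am only trying to confirm that each sampler sees the right texture.)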
Here is the code where I am drawing both textures to the quad.
// Draw to quad
glBegin(GL_QUADS);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
glVertex3f(-w_width/2.0, -w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 1.0f);
glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 1.0f);
glVertex3f(-w_width/2.0, w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 1.0f);
glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 1.0f);
glVertex3f(w_width/2.0, w_height/2.0, 0.5f);
glMultiTexCoord2f(GL_TEXTURE0, 1.0f, 0.0f);
glMultiTexCoord2f(GL_TEXTURE1, 1.0f, 0.0f);
glVertex3f(w_width/2.0, -w_height/2.0, 0.5f);
glEnd();
glFlush();
glPopAttrib();
// Unbind textures
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, 0);
// Disable blend shader
glUseProgram(0);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT,0);
And here is the shader I am using to render the final image.
Vert
#version 120
void main()
{
gl_TexCoord[0] = gl_MultiTexCoord0;
gl_TexCoord[1] = gl_MultiTexCoord1;
gl_Position = ftransform();
}
Frag
#version 120
uniform sampler2D org;
uniform sampler2D blur;
void main()
{
vec4 orig_clr = texture2D( org, gl_TexCoord[0].st);
vec4 blur_clr = texture2D( blur, gl_TexCoord[1].st );
//gl_FragColor = orig_clr;
gl_FragColor = blur_clr;
}
If I switch between the last two lines in the fragment shader, I get the exact same results. The only way to change which texture gets rendered is to change the order in which I bind them.
For example, the following finally passes me the blurred image. Once again, I am only getting one of the two images.
glActiveTexture(GL_TEXTURE0 );
glBindTexture(GL_TEXTURE_2D, third_texture);
glActiveTexture(GL_TEXTURE1 );
glBindTexture(GL_TEXTURE_2D, init_texture);
Any thoughts on what I am overlooking?
Look at this code:
glUniform1i(pass_3O, 0);
glUniform1f(pass_3B, 1);
You have a small typo here: it should be glUniform1i instead of glUniform1f in the second call. The type must match that of the shader variable, so this call should just result in an error, leaving the uniform initialized at 0, which completely explains your results.
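With that fixed, the two calls would be:
glUniform1i(pass_3O, 0); // sampler "org" reads from texture unit 0
glUniform1i(pass_3B, 1); // sampler "blur" reads from texture unit 1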
I am building a simple 3D game for practice, and I am having trouble passing normals to my shader when using indexed rendering. For each face of a polygon, at each vertex there would be the same normal value. For a cube with 8 vertices, there would be 6 * 6 = 36 normals (since each surface renders with two triangles). With indexed drawing I can only pass 8, one for each vertex. This does not allow me to pass surface normals, only averaged vertex normals.
How could I pass 36 different normals to the 36 different indexes? Using glDrawArrays is apparently slow so I have elected not to use it.
Here is my shader:
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 vertNormal;
smooth out vec4 colour;
uniform vec4 baseColour;
uniform mat4 modelToCameraTrans;
uniform mat3 modelToCameraLight;
uniform vec3 sunPos;
layout(std140) uniform Projection {
mat4 cameraToWindowTrans;
};
void main() {
gl_Position = cameraToWindowTrans * modelToCameraTrans * vec4(position, 1.0f);
vec3 dirToLight = normalize((modelToCameraLight * position) - sunPos);
vec3 camSpaceNorm = normalize(modelToCameraLight * vertNormal);
float angle = clamp(dot(camSpaceNorm, dirToLight), 0.0f, 1.0f);
colour = baseColour * angle * 0.07;
}
And here is the code I am currently using to bind to the VAO:
glGenVertexArrays(1, &vertexArray);
glBindVertexArray(vertexArray);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, polygonBuffer);
// The position input to the shader is index 0
glEnableVertexAttribArray(POSITION_ATTRIB);
glVertexAttribPointer(POSITION_ATTRIB, 3, GL_FLOAT, GL_FALSE, 0, 0);
// Add the vertex normal to the shader
glBindBuffer(GL_ARRAY_BUFFER, vertexNormBuffer);
glEnableVertexAttribArray(VERTEX_NORMAL_ATTRIB);
glVertexAttribPointer(VERTEX_NORMAL_ATTRIB, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindVertexArray(0);
This is my renderer:
glBindVertexArray(vertexArray);
glDrawElements(GL_TRIANGLES, polygonVertexCount, GL_UNSIGNED_SHORT, 0);
glBindVertexArray(0);
For a cube with 8 vertices, there would be 6 * 6 = 36 normals (since each surface renders with two triangles).
Correct me if I'm wrong, but I only see 6 normals, one per face. All vertices on that face will use the same normal.
With indexed drawing I can only pass 8, one for each vertex. This does not allow me to pass surface normals, only averaged vertex normals.
Here's where your reasoning fails. You do not just pass vertex locations to the shader; you pass a whole bunch of vertex attributes that together make a vertex unique.
So if you use the same vertex location 6 times (which you'll often do), but with a different normal each time (actually two triangles will share the same data), you actually have to send all the data for that vertex 6 times, including duplicates of the location.
That being said, you don't need 36, you need 4 * 6 = 24 unique attribute sets. You have to send the vertices of each face separately because the normals differ, and per face you have to distinguish between 4 positions. You could also see it as 8 * 3, because you have 8 positions that each have to be replicated to handle 3 different normals.
So you'll end up with something like:
GLfloat positions[] = { 0.0f, 0.0f, 0.0f,
                        0.0f, 0.0f, 0.0f,
                        0.0f, 0.0f, 0.0f,
                        1.0f, 0.0f, 0.0f,
                        ... };
GLfloat normals[] = { 1.0f, 0.0f, 0.0f,
                      0.0f, 1.0f, 0.0f,
                      0.0f, 0.0f, 1.0f,
                      1.0f, 0.0f, 0.0f,
                      ... };
Note that while there is repetition within positions and within normals, the combination of the two at any given index is unique.
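To go with that layout, the index buffer can still be built per face, so indexed drawing keeps paying off. A sketch (not from the original post), assuming the 24 vertices are stored face by face, 4 per face:
GLushort indices[36]; // 6 faces * 2 triangles * 3 indices
for (int face = 0; face < 6; ++face) {
    GLushort base = (GLushort)(face * 4); // first vertex of this face
    GLushort* f = &indices[face * 6];
    f[0] = base;     f[1] = base + 1; f[2] = base + 2; // first triangle
    f[3] = base + 2; f[4] = base + 3; f[5] = base;     // second triangle
}
// upload to GL_ELEMENT_ARRAY_BUFFER and draw with
// glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);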
I am trying to get my very simple shader working with my current OpenGL setup. I am using a shader manager, and upon loading the shaders, all the output says they have loaded correctly.
Here is my data:
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
Here is where I set up my buffer:
// This will identify our vertex buffer
GLuint vertexbuffer;
// Generate 1 buffer, put the resulting identifier in vertexbuffer
glGenBuffers(1, &vertexbuffer);
// The following commands will talk about our 'vertexbuffer' buffer
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
// Give our vertices to OpenGL.
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
Here is the declaration of my shader, using a tested and working shader loader:
Shader shader;
shader.loadFromFile(VERTEX_SHADER, resourcePath() + "tri.vert");
shader.loadFromFile(FRAGMENT_SHADER, resourcePath() + "tri.frag");
shader.createAndLinkProgram();
shader.use();
shader.addAttribute("position");
shader.unUse();
The attributes for this shader are then stored in a map. This is the rendering call:
shader.use();
glEnableVertexAttribArray(shader["position"]);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
shader["position"], // attribute 0. No particular reason for 0, but must match the layout in the shader.
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // Starting from vertex 0; 3 vertices total -> 1 triangle
glDisableVertexAttribArray(shader["position"]);
shader.unUse();
I then swap the buffers. I tried with old fixed-pipeline GL_TRIANGLES drawing and it worked fine.
Here is my vertex shader:
#version 330
layout(location = 0) in vec3 position;
void main()
{
gl_Position.xyz = position;
}
Here is my fragment shader:
out vec3 color;
main()
{
color = vec3(1,0,0);
}
It is supposed to simply draw a red triangle. When I draw using immediate mode, it renders fine. I am running Xcode on Mac OS X 10.7.4.
gl_Position.xyz = position;
And what is the W coordinate? It's kind of important not to leave it undefined. If you want it to be 1.0, you need to set it to be that:
gl_Position = vec4(position, 1.0);
gl_Position must be provided with x, y, z, and w coordinates.
In GLSL versions above 1.50 you have to do the matrix manipulation in the shader yourself, i.e. you have to handle the projection, model, and view matrices using a math library like GLM (http://glm.g-truc.net/).
Also, in your vertex data the z coordinate is 0.0, which coincides with the center of the scene; provide some negative value for the Z coordinate and check the results.
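A rough sketch of what that matrix setup can look like with GLM (mvpLocation is a placeholder for a uniform location queried with glGetUniformLocation, not something from the original code):
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

glm::mat4 projection = glm::perspective(glm::radians(45.0f), 4.0f / 3.0f, 0.1f, 100.0f);
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),  // camera position
                             glm::vec3(0.0f, 0.0f, 0.0f),  // look-at target
                             glm::vec3(0.0f, 1.0f, 0.0f)); // up vector
glm::mat4 model = glm::mat4(1.0f);
glm::mat4 mvp = projection * view * model;
glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvp));
// and in the vertex shader: gl_Position = mvp * vec4(position, 1.0);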