OpenGL using shaders on textures - C++

I have two images, and with the help of the instructions here:
http://en.wikibooks.org/wiki/OpenGL_Programming/Intermediate/Textures
I was able to store them in two separate textures and upload them into video memory:
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
Now, how would I access these textures from shaders in order to multiply them?
For example, I found this example of multiplication using shaders:
http://www.opengl.org/wiki/Texture_Combiners
//Vertex shader
#version 110
attribute vec4 InVertex;
attribute vec2 InTexCoord0;
attribute vec2 InTexCoord1;
uniform mat4 ProjectionModelviewMatrix;
varying vec2 TexCoord0;
varying vec2 TexCoord1; //Or just use TexCoord0
//------------------------
void main()
{
gl_Position = ProjectionModelviewMatrix * InVertex;
TexCoord0 = InTexCoord0;
TexCoord1 = InTexCoord1;
}
//------------------------
//Fragment shader
#version 110
uniform sampler2D Texture0;
uniform sampler2D Texture1;
//------------------------
varying vec2 TexCoord0;
varying vec2 TexCoord1; //Or just use TexCoord0
//------------------------
void main()
{
vec4 texel = texture2D(Texture0, TexCoord0);
texel *= texture2D(Texture1, TexCoord1);
gl_FragColor = texel;
}
But how would I supply the textures I've uploaded with vertex data, so that I can use this fragment shader to accomplish the multiplication?
All I did was call gluBuild2DMipmaps, and now I don't know how to apply the vertex/fragment shaders to my textures.

Assume you have a quad where the first three values of each vertex are its position and the last two are its texture coordinates:
-1.0f,-1.0f, 1.0f, 0.0f, 0.0f,
1.0f,-1.0f, 1.0f, 1.0f, 0.0f,
-1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
1.0f,-1.0f, 1.0f, 1.0f, 0.0f,
1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
-1.0f, 1.0f, 1.0f, 0.0f, 1.0f,
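If that quad data isn't in a buffer object yet, it has to be uploaded first. A minimal sketch, assuming the 30 values above live in a GLfloat array named quadVertices (an illustrative name):
// Upload the interleaved quad data (position + texcoord) into a VBO.
GLuint quadVBO;
glGenBuffers(1, &quadVBO);
glBindBuffer(GL_ARRAY_BUFFER, quadVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadVertices), quadVertices, GL_STATIC_DRAW);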
You have to submit the different uniforms and attributes to your hardware.
First of all (after the MVP matrix and so on), the vertex positions and texture coordinates:
glEnableVertexAttribArray(VAA_Position);
glVertexAttribPointer(VAA_Position, 3, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (const GLvoid*)0);
glEnableVertexAttribArray(VAA_TexCoord);
glVertexAttribPointer(VAA_TexCoord, 2, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (const GLvoid*)(3 * sizeof(GLfloat)));
(where the attribute locations come from, e.g., VAA_Position = glGetAttribLocation(aProgram, "InVertex"); and VAA_TexCoord = glGetAttribLocation(aProgram, "InTexCoord0");)
Last but not least, the important texture:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, aTexture);
Don't forget: it's up to you how you combine the different textures in the fragment shader.
Edit:
Sorry, I forgot the sampler uniform:
glUniform1i(glGetUniformLocation(aProgramID, "Texture0"), 0);
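Since the fragment shader above samples two textures, the same has to be done for the second texture on unit 1. A minimal sketch (textureB is an illustrative name for the second texture object):
glActiveTexture(GL_TEXTURE1);                                  // switch to texture unit 1
glBindTexture(GL_TEXTURE_2D, textureB);                        // bind the second texture there
glUniform1i(glGetUniformLocation(aProgramID, "Texture1"), 1);  // point the second sampler at unit 1
With both samplers set, drawing the quad runs the fragment shader, which multiplies the two texels per pixel.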

Related

OpenGL sampler2D array

I have an array of sampler2D that looks like so:
uniform sampler2D u_Textures[2];. I want to be able to render more textures (in this case 2) in the same draw call. In my fragment shader, if I set the color output value to something like red, it does show me 2 red squares, which leads me to believe I did something wrong when binding the textures.
My code:
Texture tex1("Texture/lin.png");
tex1.Bind(0);
Texture tex2("Texture/font8.png");
tex2.Bind(1);
auto loc = glGetUniformLocation(sh.getRendererID(), "u_Textures");
GLint samplers[2] = { 0, 1 };
glUniform1iv(loc, 2, samplers);
void Texture::Bind(unsigned int slot) const {
glActiveTexture(slot + GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_RendererID);
}
And this is my fragment shader:
#version 450 core
layout(location = 0) out vec4 color;
in vec2 v_TexCoord;
in float v_texIndex;
uniform sampler2D u_Textures[2];
void main()
{
int index = int(v_texIndex);
vec4 texColor = texture(u_Textures[index], v_TexCoord);
if (texColor.a == 0.0) {
// this line fills the a = 0.0f pixels with the color red
color = vec4(1.0f, 0.0f, 0.0f, 1.0f);
}
else color = texColor;
}
Also, it only draws the tex2 texture on the screen.
These are my vertex attributes:
float pos[24 * 2] = {
// position x y z, texture coordinates u v, texture index
-2.0f, -1.5f, 0.0f, 0.0f, 0.0f, 0.0f,
-1.5f, -1.5f, 0.0f, 1.0f, 0.0f, 0.0f,
-1.5f, -2.0f, 0.0f, 1.0f, 1.0f, 0.0f, // right side bottom
-2.0f, -2.0f, 0.0f, 0.0f, 1.0f, 0.0f,
0.5f, 0.5f, 0.0f, 0.0f, 0.0f, 1.0f,
1.5f, 0.5f, 0.0f, 1.0f, 0.0f, 1.0f,
1.5f, 1.5f, 0.0f, 1.0f, 1.0f, 1.0f,
0.5f, 1.5f, 0.0f, 0.0f, 1.0f, 1.0f
};
No matter how I change the texture index, it only draws 1 of those 2 textures.
You cannot use a fragment shader input variable to index an array of texture samplers. You have to use a sampler2DArray (GL_TEXTURE_2D_ARRAY) instead of an array of sampler2D (GL_TEXTURE_2D).
int index = int(v_texIndex);
vec4 texColor = texture(u_Textures[index], v_TexCoord);
is undefined behavior because v_texIndex is a fragment shader input variable and therefore not a dynamically uniform expression. See GLSL 4.60 Specification - 4.1.7. Opaque Types
[...] Texture-combined sampler types are opaque types, [...]. When aggregated into arrays within a shader, they can only be indexed with a dynamically uniform integral expression, otherwise results are undefined.
Example using sampler2DArray:
#version 450 core
layout(location = 0) out vec4 color;
in vec2 v_TexCoord;
in float v_texIndex;
uniform sampler2DArray u_Textures;
void main()
{
color = texture(u_Textures, vec3(v_TexCoord.xy, v_texIndex));
}
texture is overloaded for all sampler types. The texture coordinates and the texture layer need not be dynamically uniform, but the index into a sampler array has to be.
To be clear, the problem isn't the array of samplers itself (sampler2D u_Textures[2]; is fine); the problem is the indexing: v_texIndex (in float v_texIndex;) is not dynamically uniform. It works when the index is dynamically uniform (e.g. uniform float v_texIndex; would work). Also, the specification only says that the result is undefined, so there may be some systems where it happens to work.
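For reference, a minimal sketch of creating and filling the corresponding GL_TEXTURE_2D_ARRAY on the C++ side (not part of the original answer; pixels0, pixels1, width and height are placeholder names for the two decoded RGBA images, which are assumed to have the same size):
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
// Allocate a 2-layer array texture, then upload one image per layer.
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8, width, height, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 0, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels0); // layer 0
glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, 1, width, height, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixels1); // layer 1
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// With the shader program bound, bind the array to unit 0 and set the single sampler uniform.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glUniform1i(glGetUniformLocation(sh.getRendererID(), "u_Textures"), 0);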

How to combine texture and lighting in OpenGL

I'm trying to combine texture and lighting on a pyramid in OpenGL. I basically started by merging two separate pieces of code, and now I'm making changes to smooth out the merge. However, I am having two issues.
I need to remove the object color and replace it with texture, but I'm not sure how to approach that issue with this code since object color is deeply ingrained in the code.
I'm not sure how to list the coordinates for position, normals, and texture. Their current arrangement seems to be causing a lot of issues with the output.
For issue one, I have tried replacing pyramidColor and objectColor with texture, but it seemed to create more issues.
For issue two, I have tried rearranging the list order as position, texture, and normals, which helped for a few of the triangles. However, it still isn't right.
/*Header Inclusions*/
#include <iostream>
#include <GL/glew.h>
#include <GL/freeglut.h>
//GLM Math Header Inclusions
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>
//SOIL image loader Inclusion
#include "SOIL2/SOIL2.h"
using namespace std; //Standard namespace
#define WINDOW_TITLE "Pyramid" //Window title Macro
/*Shader program Macro*/
#ifndef GLSL
#define GLSL(Version, Source) "#version " #Version "\n" #Source
#endif
/*Variable declarations for shader, window size initialization, buffer and array objects */
GLint pyramidShaderProgram, lampShaderProgram, WindowWidth = 800, WindowHeight = 600;
GLuint VBO, PyramidVAO, LightVAO, texture;
//Subject position and scale
glm::vec3 pyramidPosition(0.0f, 0.0f, 0.0f);
glm::vec3 pyramidScale(2.0f);
//pyramid and light color
glm::vec3 objectColor(1.0f, 1.0f, 1.0f);
glm::vec3 lightColor(1.0f, 1.0f, 1.0f);
//Light position and scale
glm::vec3 lightPosition(0.5f, 0.5f, -3.0f);
glm::vec3 lightScale(0.3f);
//Camera position
glm::vec3 cameraPosition(0.0f, 0.0f, -6.0f);
//Camera rotation
float cameraRotation = glm::radians(-25.0f);
/*Function prototypes*/
void UResizeWindow(int, int);
void URenderGraphics(void);
void UCreateShader(void);
void UCreateBuffers(void);
void UGenerateTexture(void);
/*Pyramid Vertex Shader Source Code*/
const GLchar * pyramidVertexShaderSource = GLSL(330,
layout (location = 0) in vec3 position; //Vertex data from Vertex Attrib Pointer 0
layout (location = 1) in vec3 normal; //VAP position 1 for normals
layout (location = 2) in vec2 textureCoordinate;
out vec3 FragmentPos; //For outgoing color / pixels to fragment shader
out vec3 Normal; //For outgoing normals to fragment shader
out vec2 mobileTextureCoordinate;
//Global variables for the transform matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main(){
gl_Position = projection * view * model * vec4(position, 1.0f); //transforms vertices to clip coordinates
FragmentPos = vec3(model * vec4(position, 1.0f)); //Gets fragment / pixel position in world space only (exclude view and projection)
Normal = mat3(transpose(inverse(model))) * normal; //get normal vectors in world space only and exclude normal translation properties
mobileTextureCoordinate = vec2(textureCoordinate.x, 1 - textureCoordinate.y); //flips the texture vertically
}
);
/*Pyramid Fragment Shader Source Code*/
const GLchar * pyramidFragmentShaderSource = GLSL(330,
in vec3 FragmentPos; //For incoming fragment position
in vec3 Normal; //For incoming normals
in vec2 mobileTextureCoordinate;
out vec4 pyramidColor; //For outgoing pyramid color to the GPU
out vec4 gpuTexture; //Variable to pass color data to the GPU
//Uniform / Global variables for object color, light color, light position, and camera/view position
uniform vec3 objectColor;
uniform vec3 lightColor;
uniform vec3 lightPos;
uniform vec3 viewPosition;
uniform sampler2D uTexture; //Useful when working with multiple textures
void main(){
/*Phong lighting model calculations to generate ambient, diffuse, and specular components*/
//Calculate Ambient Lighting
float ambientStrength = 0.1f; //Set ambient or global lighting strength
vec3 ambient = ambientStrength * lightColor; //Generate ambient light color
//Calculate Diffuse Lighting
vec3 norm = normalize(Normal); //Normalize vectors to 1 unit
vec3 lightDirection = normalize(lightPos - FragmentPos); //Calculate distance (light direction) between light source and fragments/pixels on
float impact = max(dot(norm, lightDirection), 0.0); //Calculate diffuse impact by generating dot product of normal and light
vec3 diffuse = impact * lightColor; //Generate diffuse light color
//Calculate Specular lighting
float specularIntensity = 0.8f; //Set specular light strength
float highlightSize = 128.0f; //Set specular highlight size
vec3 viewDir = normalize(viewPosition - FragmentPos); //Calculate view direction
vec3 reflectDir = reflect(-lightDirection, norm); //Calculate reflection vector
//Calculate specular component
float specularComponent = pow(max(dot(viewDir, reflectDir), 0.0), highlightSize);
vec3 specular = specularIntensity * specularComponent * lightColor;
//Calculate phong result
vec3 phong = (ambient + diffuse + specular) * objectColor;
pyramidColor = vec4(phong, 1.0f); //Send lighting results to GPU
gpuTexture = texture(uTexture, mobileTextureCoordinate);
}
);
/*Lamp Shader Source Code*/
const GLchar * lampVertexShaderSource = GLSL(330,
layout (location = 0) in vec3 position; //VAP position 0 for vertex position data
//Uniform / Global variables for the transform matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view *model * vec4(position, 1.0f); //Transforms vertices into clip coordinates
}
);
/*Fragment Shader Source Code*/
const GLchar * lampFragmentShaderSource = GLSL(330,
out vec4 color; //For outgoing lamp color (smaller pyramid) to the GPU
void main()
{
color = vec4(1.0f); //Set color to white (1.0f, 1.0f, 1.0f) with alpha 1.0
}
);
/*Main Program*/
int main(int argc, char* argv[])
{
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DEPTH | GLUT_DOUBLE | GLUT_RGBA);
glutInitWindowSize(WindowWidth, WindowHeight);
glutCreateWindow(WINDOW_TITLE);
glutReshapeFunc(UResizeWindow);
glewExperimental = GL_TRUE;
if (glewInit() != GLEW_OK)
{
std::cout<< "Failed to initialize GLEW" << std::endl;
return -1;
}
UCreateShader();
UCreateBuffers();
UGenerateTexture();
glClearColor(0.0f, 0.0f, 0.0f, 1.0f); //Set background color
glutDisplayFunc(URenderGraphics);
glutMainLoop();
//Destroys Buffer objects once used
glDeleteVertexArrays(1, &PyramidVAO);
glDeleteVertexArrays(1, &LightVAO);
glDeleteBuffers(1, &VBO);
return 0;
}
/*Resizes the window*/
void UResizeWindow(int w, int h)
{
WindowWidth = w;
WindowHeight = h;
glViewport(0, 0, WindowWidth, WindowHeight);
}
/*Renders graphics*/
void URenderGraphics(void)
{
glEnable(GL_DEPTH_TEST); //Enable z-depth
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Clears the screen
GLint modelLoc, viewLoc, projLoc, objectColorLoc, lightColorLoc, lightPositionLoc, viewPositionLoc;
glm::mat4 model;
glm::mat4 view;
glm::mat4 projection;
/*********Use the pyramid Shader to activate the pyramid Vertex Array Object for rendering and transforming*********/
glUseProgram(pyramidShaderProgram);
glBindVertexArray(PyramidVAO);
//Transform the pyramid
model = glm::translate(model, pyramidPosition);
model = glm::scale(model, pyramidScale);
//Transform the camera
view = glm::translate(view, cameraPosition);
view = glm::rotate(view, cameraRotation, glm::vec3(0.0f, 1.0f, 0.0f));
//Set the camera projection to perspective
projection = glm::perspective(45.0f,(GLfloat)WindowWidth / (GLfloat)WindowHeight, 0.1f, 100.0f);
//Reference matrix uniforms from the pyramid Shader program
modelLoc = glGetUniformLocation(pyramidShaderProgram, "model");
viewLoc = glGetUniformLocation(pyramidShaderProgram, "view");
projLoc = glGetUniformLocation(pyramidShaderProgram, "projection");
//Pass matrix data to the pyramid Shader program's matrix uniforms
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
//Reference matrix uniforms from the pyramid Shader program for the pyramid color, light color, light position, and camera position
objectColorLoc = glGetUniformLocation(pyramidShaderProgram, "objectColor");
lightColorLoc = glGetUniformLocation(pyramidShaderProgram, "lightColor");
lightPositionLoc = glGetUniformLocation(pyramidShaderProgram, "lightPos");
viewPositionLoc = glGetUniformLocation(pyramidShaderProgram, "viewPosition");
//Pass color, light, and camera data to the pyramid Shader programs corresponding uniforms
glUniform3f(objectColorLoc, objectColor.r, objectColor.g, objectColor.b);
glUniform3f(lightColorLoc, lightColor.r, lightColor.g, lightColor.b);
glUniform3f(lightPositionLoc, lightPosition.x, lightPosition.y, lightPosition.z);
glUniform3f(viewPositionLoc, cameraPosition.x, cameraPosition.y, cameraPosition.z);
glDrawArrays(GL_TRIANGLES, 0, 18); //Draw the primitives / pyramid
glBindVertexArray(0); //Deactivate the Pyramid Vertex Array Object
/***************Use the Lamp Shader and activate the Lamp Vertex Array Object for rendering and transforming ************/
glUseProgram(lampShaderProgram);
glBindVertexArray(LightVAO);
//Transform the smaller pyramid used as a visual cue for the light source
model = glm::translate(model, lightPosition);
model = glm::scale(model, lightScale);
//Reference matrix uniforms from the Lamp Shader program
modelLoc = glGetUniformLocation(lampShaderProgram, "model");
viewLoc = glGetUniformLocation(lampShaderProgram, "view");
projLoc = glGetUniformLocation(lampShaderProgram, "projection");
//Pass matrix uniforms from the Lamp Shader Program
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
glUniformMatrix4fv(viewLoc, 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(projection));
glBindTexture(GL_TEXTURE_2D, texture);
//Draws the triangles
glDrawArrays(GL_TRIANGLES, 0, 18);
glBindVertexArray(0); //Deactivate the Lamp Vertex Array Object
glutPostRedisplay();
glutSwapBuffers(); //Flips the back buffer with the front buffer every frame. Similar to GL Flush
}
/*Create the Shader program*/
void UCreateShader()
{
//Pyramid Vertex shader
GLint pyramidVertexShader = glCreateShader(GL_VERTEX_SHADER); //Creates the Vertex shader
glShaderSource(pyramidVertexShader, 1, &pyramidVertexShaderSource, NULL); //Attaches the Vertex shader to the source code
glCompileShader(pyramidVertexShader); //Compiles the Vertex shader
//Pyramid Fragment Shader
GLint pyramidFragmentShader = glCreateShader(GL_FRAGMENT_SHADER); //Creates the Fragment Shader
glShaderSource(pyramidFragmentShader, 1, &pyramidFragmentShaderSource, NULL); //Attaches the Fragment shader to the source code
glCompileShader(pyramidFragmentShader); //Compiles the Fragment Shader
//Pyramid Shader program
pyramidShaderProgram = glCreateProgram(); //Creates the Shader program and returns an id
glAttachShader(pyramidShaderProgram, pyramidVertexShader); //Attaches Vertex shader to the Shader program
glAttachShader(pyramidShaderProgram, pyramidFragmentShader); //Attaches Fragment shader to the Shader program
glLinkProgram(pyramidShaderProgram); //Link Vertex and Fragment shaders to the Shader program
//Delete the Vertex and Fragment shaders once linked
glDeleteShader(pyramidVertexShader);
glDeleteShader(pyramidFragmentShader);
//Lamp Vertex shader
GLint lampVertexShader = glCreateShader(GL_VERTEX_SHADER); //Creates the Vertex shader
glShaderSource(lampVertexShader, 1, &lampVertexShaderSource, NULL); //Attaches the Vertex shader to the source code
glCompileShader(lampVertexShader); //Compiles the Vertex shader
//Lamp Fragment shader
GLint lampFragmentShader = glCreateShader(GL_FRAGMENT_SHADER); //Creates the Fragment shader
glShaderSource(lampFragmentShader, 1, &lampFragmentShaderSource, NULL); //Attaches the Fragment shader to the source code
glCompileShader(lampFragmentShader); //Compiles the Fragment shader
//Lamp Shader Program
lampShaderProgram = glCreateProgram(); //Creates the Shader program and returns an id
glAttachShader(lampShaderProgram, lampVertexShader); //Attach Vertex shader to the Shader program
glAttachShader(lampShaderProgram, lampFragmentShader); //Attach Fragment shader to the Shader program
glLinkProgram(lampShaderProgram); //Link Vertex and Fragment shaders to the Shader program
//Delete the lamp shaders once linked
glDeleteShader(lampVertexShader);
glDeleteShader(lampFragmentShader);
}
/*Creates the Buffer and Array Objects*/
void UCreateBuffers()
{
//Position, normal, and texture coordinate data for 18 vertices (6 triangles)
GLfloat vertices[] = {
//Positions //Normals //Texture Coordinates
//Back Face //Negative Z Normals
0.0f, 0.5f, 0.0f, 0.0f, 0.0f, -1.0f, 0.5f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f, -1.0f, 1.0f, 0.0f,
//Front Face //Positive Z Normals
0.0f, 0.5f, 0.0f, 0.0f, 0.0f, 1.0f, 0.5f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f,
//Left Face //Negative X Normals
0.0f, 0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.5f, 1.0f,
-0.5f, -0.5f, -0.5f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.5f, -1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
//Right Face //Positive X Normals
0.0f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, 0.5f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f, 0.0f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f,
//Bottom Face //Negative Y Normals
-0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 0.0f, -1.0f, 0.0f, 1.0f, 0.0f,
};
//Generate buffer ids
glGenVertexArrays(1, &PyramidVAO);
glGenBuffers(1, &VBO);
//Activate the PyramidVAO before binding and setting VBOs and VAPs
glBindVertexArray(PyramidVAO);
//Activate the VBO
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); //Copy vertices to VBO
//Set attribute pointer 0 to hold position data
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0); //Enables vertex attribute
//Set attribute pointer 1 to hold Normal data
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(1);
//Set attribute pointer 2 to hold Texture coordinate data
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat)));
glEnableVertexAttribArray(2);
glBindVertexArray(0); //Unbind the pyramid VAO
//Generate buffer ids for lamp (smaller pyramid)
glGenVertexArrays(1, &LightVAO); //Vertex Array for pyramid vertex copies to serve as light source
//Activate the Vertex Array Object before binding and setting any VBOs and Vertex Attribute Pointers
glBindVertexArray(LightVAO);
//Referencing the same VBO for its vertices
glBindBuffer(GL_ARRAY_BUFFER, VBO);
//Set attribute pointer to 0 to hold Position data (used for the lamp)
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(GLfloat), (GLvoid*)0);
glEnableVertexAttribArray(0);
glBindVertexArray(0);
}
/*Generate and load the texture*/
void UGenerateTexture(){
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
int width, height;
unsigned char* image = SOIL_load_image("brick.jpg", &width, &height, 0, SOIL_LOAD_RGB); //Loads texture file
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, image);
glGenerateMipmap(GL_TEXTURE_2D);
SOIL_free_image_data(image);
glBindTexture(GL_TEXTURE_2D, 0); //Unbind the texture
}
Expected results: A brick textured pyramid with lighting.
Actual results: A bunch of assorted triangles.
I see the following issues with your code:
In the fragment shader:
Remove the objectColor uniform and the gpuTexture output.
Replace the last three lines of main() with:
//Calculate phong result
vec3 objectColor = texture(uTexture, mobileTextureCoordinate).xyz;
vec3 phong = (ambient + diffuse) * objectColor + specular;
pyramidColor = vec4(phong, 1.0f); //Send lighting results to GPU
In your rendering code:
Replace all mentions of objectColor with texture setup:
uTextureLoc = glGetUniformLocation(pyramidShaderProgram, "uTexture");
glUniform1i(uTextureLoc, 0); // texture unit 0
Bind the texture before you call glDrawArrays of the textured pyramid:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glDrawArrays(GL_TRIANGLES, 0, 18);
(Right now you bind it before drawing the LightVAO, which doesn't use the texture.)
All your glVertexAttribPointer calls have an incorrect stride of 6 * sizeof(GLfloat), but the buffer you provide has eight floats per vertex, so it should be 8 * sizeof(GLfloat). Remember that this parameter is the number of bytes the GL has to advance to fetch the next vertex. Other than that, your VAO setup is all right.
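For reference, the corrected attribute setup for the 8-floats-per-vertex layout could look like this (only the stride changes; the offsets were already right):
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)0);                     //position
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(3 * sizeof(GLfloat))); //normal
glEnableVertexAttribArray(1);
glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, 8 * sizeof(GLfloat), (GLvoid*)(6 * sizeof(GLfloat))); //texture coordinate
glEnableVertexAttribArray(2);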

OpenGL spritesheet shader

I'm starting to understand that a vertex shader handles transformations of my textured quad, while the fragment shader handles individual pixels. But this vector math is confusing.
What I'm trying to do is render a sprite from a sprite sheet. I can render a whole image just fine, but now I'm actually trying to write my own shader.
I think it's more efficient to have the graphics card do the heavy lifting. That being said:
Currently I draw whole images like so:
In my init step,
void TextureRenderer::initRenderData()
{
// Configure VAO/VBO
game_uint VBO;
game_float vertices[] = {
// Pos // Tex
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 0.0f, 1.0f, 0.0f
};
glGenVertexArrays(1, &this->quadVAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBindVertexArray(this->quadVAO);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 4 * sizeof(game_float), (GLvoid*)0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
}
Then when its time to draw any texture:
void TextureRenderer::DrawTexture(Texture2D &texture, vec2 position, vec2 size, game_float rotate, vec3 color)
{
// Prepare transformations
this->shader->Use();
glm::mat4 model;
model = glm::translate(model, vec3(position, 0.0f)); // First translate (transformations are: scale happens first, then rotation, and then finally translation; reversed order)
model = glm::translate(model, vec3(0.5f * size.x, 0.5f * size.y, 0.0f)); // Move origin of rotation to center of quad
model = glm::rotate(model, rotate, vec3(0.0f, 0.0f, 1.0f)); // Then rotate
model = glm::translate(model, vec3(-0.5f * size.x, -0.5f * size.y, 0.0f)); // Move origin back
model = glm::scale(model, vec3(size, 1.0f)); // Last scale
this->shader->SetMatrix4("model", model);
// Render textured quad
this->shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0);
texture.Bind();
glBindVertexArray(this->quadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glBindVertexArray(0);
}
TextureShader.vs:
#version 330 core
layout (location = 0) in vec4 vertex; // <vec2 position, vec2 texCoords>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
void main()
{
TexCoords = vertex.zw;
gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
Fragment Shader:
#version 330 core
in vec2 TexCoords; //Position
out vec4 color;
uniform sampler2D image;
uniform vec3 spriteColor;
void main()
{
color = vec4(spriteColor, 1.0) * texture(image, TexCoords);
}
Now that all works fine and dandy (assuming a proper OpenGL setup, etc.).
But I'd like to apply this to a sprite sheet shader, and have the GPU handle the math to draw it.
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,game_float spriteHeight,game_float spriteWidth,int frameIndex)
{
shader->Use();//Load a different shader here.
shader->SetInteger("frameindex", frameIndex);
shader->SetVector2f("position", position);
shader->SetFloat("spriteHeight", spriteHeight);
shader->SetFloat("spriteWidth", spriteWidth);
shader->SetMatrix4("model", model);
shader->SetVector3f("spriteColor", color);
glActiveTexture(GL_TEXTURE0); //Select texture unit 0
texture.Bind(); //Bind my texture.
glBindVertexArray(this->quadVAO); //Bind the fullscreen quad
glDrawArrays(GL_TRIANGLES, 0, 6); //Draw
glBindVertexArray(0); //Unbind the quad.
}
I assume:
Inside the vertex shader, I manipulate the VAO quad to the position it has on the canvas, and then in the fragment shader I set the color of the pixels from that specific region of the sheet.
How would that be done?
Or would I be better off pre-calculating a VAO array for each sprite in a sprite class? Then each draw call would be:
void SpriteRenderer::drawSprite(Texture2D &texture, vec2 position,Sprite s)
Where the sprite has these vertices stored.
I've seen:
Techniques for drawing spritesheets in OpenGL with shaders
It's somewhat similar, but I'd like to have the GPU handle the math altogether.
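One way to do the frame math on the GPU is to remap the quad's 0..1 texture coordinates into the chosen frame's sub-rectangle in the vertex shader. A minimal sketch, not a definitive implementation: it reuses the frameindex, spriteWidth and spriteHeight uniforms set in drawSprite above, assumes the frames are laid out left to right in a single row, and assumes a hypothetical sheetSize uniform holding the sheet's dimensions in pixels:
#version 330 core
layout (location = 0) in vec4 vertex;   // <vec2 position, vec2 texCoords in 0..1>
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 projection;
uniform int frameindex;                 // which frame of the sheet to show
uniform float spriteWidth;              // frame size in pixels
uniform float spriteHeight;
uniform vec2 sheetSize;                 // assumed: sheet dimensions in pixels
void main()
{
    vec2 frameScale = vec2(spriteWidth, spriteHeight) / sheetSize;   // one frame in UV space
    vec2 frameOffset = vec2(float(frameindex) * frameScale.x, 0.0);  // shift to the selected frame
    TexCoords = frameOffset + vertex.zw * frameScale;                // remap the quad's UVs into that frame
    gl_Position = projection * model * vec4(vertex.xy, 0.0, 1.0);
}
The existing fragment shader can stay as it is, since it simply samples whatever TexCoords it receives.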

OpenGL: Why does this code draw nothing, with an interleaved VBO and GLSL?

I've looked at many papers and blogs, and I finally arrived at this code to generate a quad.
The program is stored in gProgram.
vert shader:---------------------
#version 330
layout(location = 0) in vec3 attrib_position;
layout(location = 6) in vec2 attrib_texcoord;
out Varing
{
vec4 pos;
vec2 texcoord;
} VS_StateOutput;
uniform mat4 gtransform;
void main(void)
{
VS_StateOutput.pos = gtransform * vec4(attrib_position,1.0f);
VS_StateOutput.texcoord = attrib_texcoord;
}
frag shader:--------------------
#version 330
uniform sampler2D texUnit;
in Varing
{
vec4 pos;
vec2 texcoord;
} PS_StateInput;
layout(location = 0) out vec4 color0;
void main(void)
{
color0 = texture(texUnit, PS_StateInput.texcoord);
}
Here is my vertex data, stored in gVertexVBO:
float data[] =
{
-1.0f, 1.0f, 0.0f, 0.0f, 0.0f,
1.0f, 1.0f, 0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 0.0f, 1.0f, 1.0f,
-1.0f, -1.0f, 0.0f, 0.0f, 1.0f
};
and the index data, stored in gIndexVBO:
unsigned short idata[] =
{
0, 3, 2, 0, 2, 1
};
In the drawing section, I do the following (PS: gtransform is set to an identity matrix):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(gProgram);
glProgramUniformMatrix4fv(gProgram, gtLoc, 1, GL_FALSE, matrix);
glProgramUniform1uiv(gProgram, texLoc, 1, &colorTex);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, gIndexVBO);
glBindBuffer(GL_ARRAY_BUFFER, gVertexVBO);
char* offset = NULL;
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(float)*5, offset);
glEnableVertexAttribArray(0);
glVertexAttribPointer(6, 2, GL_FLOAT, GL_FALSE, sizeof(float)*5, offset+sizeof(float)*3);
glEnableVertexAttribArray(6);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, NULL);
glDisableVertexAttribArray(0);
glDisableVertexAttribArray(6);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
I get nothing but the cleared back color. I think the error is located within this code, but I can't figure out where it is; I would appreciate any help.
If you need to check the code in the initialization section, I will post it in a reply. Thanks!
You're not supplying the built-in gl_Position in the vertex shader.
uniform mat4 gtransform;
void main(void)
{
vec4 outPosition = gtransform * vec4(attrib_position,1.0f);
gl_Position = outPosition;
// if you need the position in the fragment shader
VS_StateOutput.pos = outPosition;
VS_StateOutput.texcoord = attrib_texcoord;
}
OpenGL won't know about your VS_StateOutput block and won't have any idea where to put the new vertices. See here for a quick reference (page 7).
This link gives more detail on the built-in variables.
EDIT:
Next I would change the fragment shader main to:
void main(void)
{
color0 = vec4(1.0); //texture(texUnit, PS_StateInput.texcoord);
}
to rule out any texture issues. Sometimes texture problems result in a blank black output. If you get a white colour, you then know it's a texture problem and can look there instead.
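If the texture does turn out to be the problem, one more thing worth checking (not from the original answer): a sampler uniform expects the index of a texture unit, set with glProgramUniform1i / glUniform1i, rather than the texture object name. A minimal sketch using the names from the question, assuming texLoc is the location of the texUnit sampler:
glActiveTexture(GL_TEXTURE0);               // work with texture unit 0
glBindTexture(GL_TEXTURE_2D, colorTex);     // bind the color texture to that unit
glProgramUniform1i(gProgram, texLoc, 0);    // the sampler reads from unit 0 (not the texture ID)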

Opengl deferred lighting shader

I just started learning OpenGL 3.1 and I'm trying to implement deferred shading in my engine (framework?). I wrote simple shaders for the first stage, the lighting stage, and the deferred stage.
The lighting stage takes the diffuse color from the deferred texture and saves it in the lighting texture. The deferred stage draws the lighting texture. There is a bug in the lighting shader and the scene looks very strange. It looks like this, and it should look like this. Lighting stage vertex shader:
#version 150
in vec4 vertex;
out vec2 position;
void main(void)
{
gl_Position = vertex*2-1;
gl_Position.z = 0.0;
position.xy = vertex.xy;
}
Lighting stage fragment shader:
#version 150
in vec2 position;
uniform sampler2D diffuseTexture;
uniform sampler2D positionTexture;
out vec4 lightingOutput;
void main()
{
vec4 diffuse = texture(diffuseTexture, vec2(position.x, position.y));
vec4 position = texture(positionTexture, position.xy);
vec4 ambient = vec4(0.05, 0.05, 0.05, 1.0) * diffuse;
lightingOutput = diffuse;
}
That's what I render in lighting stage:
static const GLfloat _vertices[] =
{
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
1.0f, 1.0f, 0.0f,
};
And that's how I render it:
glUseProgram( programID[2] );
glEnableVertexAttribArray(vertexID[1]);
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribPointer(
vertexID[1],
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, positionTexture);
glUniform1i(positionTextureID[1], 1);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTexture);
glUniform1i(diffuseTextureID[1], 0);
glDrawArrays( GL_TRIANGLES, 0, 6 );
glDisableVertexAttribArray(vertexID[1]);
If you need all the code, it's here: www.dropbox.com/s/hvfe4v4pb1pfxb3/code.zip.
How do I fix this strange problem?
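For comparison (not an answer from the thread), a typical full-screen lighting pass feeds the quad's clip-space positions straight through and derives the texture coordinates by remapping them from -1..1 to 0..1. A minimal sketch of such a vertex shader, assuming the quad vertices are already in the -1..1 range as in _vertices above:
#version 150
in vec4 vertex;                              // full-screen quad, already in clip space (-1..1)
out vec2 position;                           // consumed as texture coordinates by the fragment shader
void main(void)
{
    gl_Position = vec4(vertex.xy, 0.0, 1.0); // no extra *2-1 remapping needed
    position = vertex.xy * 0.5 + 0.5;        // clip space -> 0..1 texture coordinates
}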