I want to load the Crysis Nanosuit model using Assimp in OpenGL/GLSL. The model has several meshes, as described by the Assimp node tree, and each mesh is associated with one or more textures (diffuse, specular, etc.). How do I render the textures over the model while still using a single draw call?
I have been able to load models without textures so far. Here is how I did it: I used the node tree to find out how many meshes are present in the model and stacked them into a single buffer as an array of structures. This buffer (the VBO) holds float values for position, normal, texcoord, ambient color, diffuse color, specular color and shininess. Since the meshes are stacked, the index array is offset for each mesh accordingly. Finally, with a single draw call, the model renders successfully. Here is the entire code; the relevant parts are as follows:
struct Vertex
{
    glm::vec3 position;
    glm::vec3 normal;
    glm::vec2 texcoord;
    glm::vec3 colorambient;
    glm::vec3 colordiffuse;
    glm::vec3 colorspecular;
    float shininess;
};
// Creating a nodestack of all the meshes
void modelloader::NodeTreeTraversal(aiNode *node)
{
    if(node->mNumChildren==0)
        nodestack.push_back(node);
    else
        for(unsigned int i=0; i<node->mNumChildren; i++)
            this->NodeTreeTraversal(node->mChildren[i]);
}
// Look into assimp data structures for data and populate them into opengl's vbo's and ebo.
void modelloader::ProcessMeshes()
{
    // currently this method loads vertex positions, normals, textures;
    // also loads material info such as ambient, diffuse and specular colors with shininess as 16.0f
    Vertex vertex;
    unsigned int offset_faces=0;
    for(unsigned int i=0; i<this->nodestack.size(); i++)
    {
        aiNode *node = nodestack[i];
        for(unsigned int j=0; j<node->mNumMeshes; j++)
        {
            aiMesh *mesh = this->scene->mMeshes[node->mMeshes[j]];
            aiColor4D ambient;
            aiColor4D diffuse;
            aiColor4D specular;
            if(this->scene->HasMaterials()) {
                aiMaterial *mtl = scene->mMaterials[mesh->mMaterialIndex];
                aiGetMaterialColor(mtl, AI_MATKEY_COLOR_AMBIENT, &ambient);
                aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse);
                aiGetMaterialColor(mtl, AI_MATKEY_COLOR_SPECULAR, &specular);
            }
            // load all mesh data
            for(unsigned int k=0; k<mesh->mNumVertices; k++)
            {
                // positions and normals
                vertex.position = glm::vec3(mesh->mVertices[k].x, mesh->mVertices[k].y, mesh->mVertices[k].z); // load positions
                vertex.normal = glm::vec3(mesh->mNormals[k].x, mesh->mNormals[k].y, mesh->mNormals[k].z); // load normals
                // load textures
                if(this->scene->HasTextures())
                    vertex.texcoord = glm::vec2(mesh->mTextureCoords[0][k].x, mesh->mTextureCoords[0][k].y);
                else
                    vertex.texcoord = glm::vec2(0.0f, 0.0f);
                // load materials
                vertex.colorambient = glm::vec3(ambient.r, ambient.g, ambient.b);
                vertex.colordiffuse = glm::vec3(diffuse.r, diffuse.g, diffuse.b);
                vertex.colorspecular = glm::vec3(specular.r, specular.g, specular.b);
                vertex.shininess = 16.0f;
                // push back all the data for each vertex
                meshdata.push_back(vertex);
            }
            // create index data
            for(unsigned int l=0; l<mesh->mNumFaces; l++) {
                this->indices.push_back(mesh->mFaces[l].mIndices[0]+offset_faces);
                this->indices.push_back(mesh->mFaces[l].mIndices[1]+offset_faces);
                this->indices.push_back(mesh->mFaces[l].mIndices[2]+offset_faces);
            }
            offset_faces = offset_faces+mesh->mNumVertices;
        }
    }
    this->MeshData = &meshdata[0].position.x;
    this->MeshDataSize = meshdata.size() * 18 * sizeof(float);
    this->Indices = indices.data();
    this->IndicesSize = indices.size()*sizeof(unsigned int);
}
// draw call
void modelloader::RenderModel()
{
    glBindVertexArray(this->VAO);
    glDrawElements(GL_TRIANGLES, this->IndicesSize/sizeof(unsigned int), GL_UNSIGNED_INT, 0);
    glBindVertexArray(0);
}
Here is the output image.
Now, when loading textures (separate image files for each part of the body), I activated all the textures at once, and each texture was stretched over the entire body. How do I do it properly?
My preliminary thoughts are: activate all the texture files, add an attribute called "mesh_number" to the VBO, and in the fragment shader use the texture corresponding to that "mesh_number". I don't know if this will work. How is it usually done? Do you have any code samples?
This problem gets solved when a separate draw call is issued for each mesh in the model, as done here.
1) But aren't draw calls expensive? Shouldn't I be drawing the entire model in one go? 2) Or should I create a single image file with all the body-part textures collaged into one, much like a sprite sheet?
You need to activate each texture via:
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, <your texture id>);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, <your texture id>);
That way all of the textures are available to your draw call.
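As a concrete, hedged sketch of how this can be combined with the "mesh_number" idea from the question: the names program, meshDiffuseTextures and diffuse_maps below are placeholders, not taken from the posted code. Note that in GLSL 3.30 an array of sampler2D may only be indexed with a constant expression, so for a true per-vertex index in a single draw call an array texture (GL_TEXTURE_2D_ARRAY) or a texture atlas is the usual route:
// Sketch only: bind each mesh's diffuse map to its own texture unit and
// point the shader's sampler array at those units. `program` and
// `meshDiffuseTextures` are hypothetical names, not from the posted code.
glUseProgram(program);
for (unsigned int i = 0; i < meshDiffuseTextures.size(); ++i)
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, meshDiffuseTextures[i]);
    std::string name = "diffuse_maps[" + std::to_string(i) + "]";
    glUniform1i(glGetUniformLocation(program, name.c_str()), (GLint)i);
}

// Caveat: in GLSL 3.30 "uniform sampler2D diffuse_maps[N];" may only be
// indexed with a constant expression, so a per-vertex "mesh_number" lookup
// in one draw call is usually done with an array texture instead:
const char* fragmentSketch = R"(
    #version 330 core
    uniform sampler2DArray diffuse_array;   // one layer per mesh
    in vec2 TexCoord;
    flat in float mesh_number;              // forwarded vertex attribute
    out vec4 FragColor;
    void main()
    {
        FragColor = texture(diffuse_array, vec3(TexCoord, mesh_number));
    }
)";
All layers of a GL_TEXTURE_2D_ARRAY must share the same size and format, which is why a stitched atlas (the sprite-sheet idea from the question) is the other common single-draw-call option.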
Related
I am trying to create a scene in OpenGL to simulate the earth from space. I have two spheres right now: one for the earth, and another, slightly bigger one for the clouds. The earth and cloud sphere objects have their own shader programs to keep things simple. The earth shader program takes 4 textures (day, night, specmap and normalmap) and the cloud shader program takes 2 textures (cloudmap and normalmap). I have an object class with a render function, and in that function I use this logic:
//bind the current object's texture
for (GLuint i = 0; i < texIDs.size(); i++){
glActiveTexture(GL_TEXTURE0 + i);
if (cubemap)
glBindTexture(GL_TEXTURE_CUBE_MAP, texIDs[i]);
else
glBindTexture(GL_TEXTURE_2D, texIDs[i]);
}
if (samplers.size()){
for (GLuint i = 0; i < samplers.size(); i++){
glUniform1i(glGetUniformLocation(program, samplers[i]), i);
}
}
It starts from the 0th texture unit and binds N textures to N texture units, starting from GL_TEXTURE0. Then it binds the samplers from 0 to N in the shader program. The samplers are provided by me while loading the textures:
void Object::loadTexture(const char* filename, const GLchar* sampler){
int texID;
texID = SOIL_load_OGL_texture(filename, SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS | SOIL_FLAG_TEXTURE_REPEATS);
if(texID == 0){
cerr << "SOIL error: " << SOIL_last_result();
}
cout << filename << " Tex ID: " << texID << endl;
texIDs.push_back(texID);
samplers.push_back(sampler);
//glBindTexture(GL_TEXTURE_2D, texID);
}
When I do this, all the textures on the first sphere (earth) load successfully, but on the second sphere I get no textures, just a black sphere. My question is: how should I manage multiple textures and samplers when each object uses a different shader program?
From what I see, you are binding every texture to a separate texture unit.
That is wrong:
what if you have 100 objects and each has 4 textures?
I strongly doubt that you have 400 texture units at your disposal.
A texture ID (name) is not a texture unit.
I render space bodies like this:
The first pass renders the astro body geometry.
I have specific texture units reserved for specific tasks:
// texture units:
// 0 - texture0 map 2D rgba (surface)
// 1 - texture1 map 2D rgba (clouds blend)
// 2 - normal map 2D xyz (normal/bump mapping)
// 3 - specular map 2D i (reflection shininess)
// 4 - light map 2D rgb rgb (night lights)
// 5 - environment/skybox cube map 3D rgb
see the shader in that link (it was written for the solar system visualization too)...
You bind only the textures for a single body before each render of it
(after you bind the shader).
Do not change the texture unit meanings (how would the shader know which texture is what if you did?).
The second render pass adds the atmospheres.
No textures are used;
it is just a single transparent quad covering the whole screen.
Here are some insights into your task.
[edit1] example of multitexturing
// init shader once per render all geometries
GLint prog_id; // shader program ID;
GLint txrskybox; // global skybox environment cube map
GLint id;
glUseProgram(prog_id);
id=glGetUniformLocation(prog_id,"txr_texture0"); glUniform1i(id,0); //uniform sampler2D txr_texture0;
id=glGetUniformLocation(prog_id,"txr_texture1"); glUniform1i(id,1); //uniform sampler2D txr_texture1;
id=glGetUniformLocation(prog_id,"txr_normal"); glUniform1i(id,2); //uniform sampler2D txr_normal;
id=glGetUniformLocation(prog_id,"txr_specular"); glUniform1i(id,3); //uniform sampler2D txr_specular;
id=glGetUniformLocation(prog_id,"txr_light"); glUniform1i(id,4); //uniform sampler2D txr_light;
id=glGetUniformLocation(prog_id,"txr_skybox"); glUniform1i(id,5); //uniform samplerCube txr_skybox;
// add here all uniforms you need ...
glActiveTexture(GL_TEXTURE0+5); glEnable(GL_TEXTURE_CUBE_MAP); glBindTexture(GL_TEXTURE_CUBE_MAP,txrskybox);
for (i=0;i<all_objects;i++)
{
// add here all uniforms you need ...
// pass textures once per any object render
// obj::(GLint) txr0,txr1,txrnor,txrspec,txrlight; // object local textures
glActiveTexture(GL_TEXTURE0+0); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr0);
glActiveTexture(GL_TEXTURE0+1); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txr1);
glActiveTexture(GL_TEXTURE0+2); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrnor);
glActiveTexture(GL_TEXTURE0+3); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrspec);
glActiveTexture(GL_TEXTURE0+4); glEnable(GL_TEXTURE_2D); glBindTexture(GL_TEXTURE_2D,obj[i].txrlight);
// here render the geometry of obj[i]
}
// unbind textures and shaders
glActiveTexture(GL_TEXTURE0+5); glBindTexture(GL_TEXTURE_CUBE_MAP,0); glDisable(GL_TEXTURE_CUBE_MAP);
glActiveTexture(GL_TEXTURE0+4); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+3); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+2); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+1); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D);
glActiveTexture(GL_TEXTURE0+0); glBindTexture(GL_TEXTURE_2D,0); glDisable(GL_TEXTURE_2D); // unit0 at last so it stays active ...
glUseProgram(0);
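Regarding the original problem of two different shader programs (earth and clouds): glUniform1i only affects the program that is currently bound with glUseProgram, so the sampler uniforms have to be set for each program separately (once after linking is enough), and the right textures bound before each draw. A minimal sketch of that flow follows; earthProgram, cloudProgram, the texture vectors and the uniform names are hypothetical placeholders, not from the posted code:
// One-time setup, per program: map sampler uniforms to fixed texture units.
glUseProgram(earthProgram);
glUniform1i(glGetUniformLocation(earthProgram, "dayTex"),    0);
glUniform1i(glGetUniformLocation(earthProgram, "nightTex"),  1);
glUniform1i(glGetUniformLocation(earthProgram, "specMap"),   2);
glUniform1i(glGetUniformLocation(earthProgram, "normalMap"), 3);

glUseProgram(cloudProgram);
glUniform1i(glGetUniformLocation(cloudProgram, "cloudMap"),  0);
glUniform1i(glGetUniformLocation(cloudProgram, "normalMap"), 1);

// Every frame: bind the program first, then the textures for that object.
glUseProgram(earthProgram);
for (GLuint i = 0; i < earthTextures.size(); ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, earthTextures[i]);
}
// ... draw the earth sphere ...

glUseProgram(cloudProgram);
for (GLuint i = 0; i < cloudTextures.size(); ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, cloudTextures[i]);
}
// ... draw the cloud sphere ...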
I am trying to integrate the Assimp loader into my framework. Everything renders fine, but in this spider model I'm rendering, the fangs are not drawn as expected (see the following picture).
Below is the relevant code snippet:
//Storing the Indices
for (unsigned int t = 0; t < mesh->mNumFaces; ++t) {
aiFace* face = &mesh->mFaces[t];
memcpy(&faceArray[index], face->mIndices, 3*sizeof(unsigned int));
index += 3;
}
//Storing the Vertices
for (unsigned int t = 0; t < mesh->mNumVertices; ++t) {
aiVector3D vertex ;
if (mesh->HasPositions()) {
vertex = mesh->mVertices[t];
memcpy(&vertexArray[index], &vertex,3*sizeof(float));
}
index += 3;
}
//Render module
void model::helperDraw(GLuint vertexBufferID, GLuint indexBufferID, GLuint textureID)
{
GLint indexSize;
glBindBuffer(GL_ARRAY_BUFFER,vertexBufferID);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,indexBufferID);
glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &indexSize);
glBindTexture( GL_TEXTURE_2D, textureID);
glDrawElements(GL_TRIANGLES, indexSize/sizeof(GLuint), GL_UNSIGNED_INT, 0);
}
What could be wrong with my code?
There is nothing obviously wrong with your code. One possible cause for these rendering artefacts is that the OBJ model you load has some faces that are triangles and some faces that are not. You are rendering everything as GL_TRIANGLES, but the OBJ format can specify faces as quads, triangle strips, triangles and even more exotic things like patches.
Assimp has a mesh triangulation facility that can make your life a lot easier when dealing with these multi-format mesh files, such as the OBJ. Try passing the flag aiProcess_Triangulate to the load method of the importer or even to the post-processing method if you do post-processing as a separate step. This is likely to fix the issue.
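For reference, a minimal sketch of passing that flag when importing (the file name is a placeholder):
#include <assimp/Importer.hpp>
#include <assimp/scene.h>
#include <assimp/postprocess.h>

Assimp::Importer importer;
const aiScene* scene = importer.ReadFile("spider.obj",
    aiProcess_Triangulate |           // split quads/polygons into triangles
    aiProcess_JoinIdenticalVertices); // optional: deduplicate vertices

if (!scene || (scene->mFlags & AI_SCENE_FLAGS_INCOMPLETE) || !scene->mRootNode) {
    // import failed: inspect importer.GetErrorString()
}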
I'm attempting to create omnidirectional/point lighting in OpenGL 3.3. I've searched around on the internet and on this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to
Generate a framebuffer using depth component
Generate a cubemap and bind it to said framebuffer
Draw to the individual faces of the cubemap, as referenced by the GL_TEXTURE_CUBE_MAP_* enums
Draw the scene normally, and compare the depth value of the fragments against those in the cubemap
Now, I've read that it is better to use distances from the light to the fragment, rather than to store the fragment depth, as it allows for easier cubemap look up (something about not needing to check each individual texture?)
My current issue is that the light that comes out is actually in a sphere, and does not generate shadows. Another issue is that the framebuffer complains of not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.
Here is my framebuffer and cube map initialization:
framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
for(int i = 0; i < 6; i++){
glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i , 0,GL_DEPTH_COMPONENT16, 800, 800, 0,GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glDrawBuffer(GL_NONE);
Shadow Vertex Shader
void main(){
gl_Position = depthMVP * M* vec4(position,1);
pos =(M * vec4(position,1)).xyz;
}
Shadow Fragment Shader
void main(){
fragmentDepth = distance(lightPos, pos);
}
Vertex Shader (unrelated bits cut out)
uniform mat4 depthMVP;
void main() {
PositionWorldSpace = (M * vec4(position,1.0)).xyz;
gl_Position = MVP * vec4(position, 1.0 );
ShadowCoord = depthMVP * M* vec4(position, 1.0);
}
Fragment Shader (unrelated code cut)
uniform samplerCube shadowMap;
void main(){
float bias = 0.005;
float visibility = 1;
if(texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace)-bias)
visibility = 0.1;
}
Now, as you are probably thinking: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction.
Well they are defined like so:
glm::mat4 depthMVP = depthProjectionMatrix* ??? *i->getModelMatrix();
The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix, however I am unsure if that is what it is supposed to be.
Then the draw code is done for the sides of the cubemap like so:
for(int loop = 0; loop < 6; loop++){
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X+loop, shadowTexture,0);
glClear( GL_DEPTH_BUFFER_BIT);
for(auto i: models){
glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
glm::mat4 depthMVP = depthProjectionMatrix*???*i->getModelMatrix();
glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"),1, GL_FALSE, glm::value_ptr(depthMVP));
glBindVertexArray(i->vao);
glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT,0);
}
}
Finally the scene gets drawn normally (I'll spare you the details). Before the calls to draw onto the cubemap I set the framebuffer to the one that I generated earlier, and change the viewport to 800 by 800. I change the framebuffer back to 0 and reset the viewport to 800 by 600 before I do normal drawing. Any help on this subject will be greatly appreciated.
Update 1
After some tweaking and bug fixing, this is the result I get. I fixed an error with the depthMVP not working; what I am drawing here is the distance that is stored in the cubemap.
http://imgur.com/JekOMvf
Basically, what happens is that it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side; however, I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in each cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.
Update 2
I've gone ahead and created these matrices, however I am unsure of how valid they are (they were ripped from a website for DX cubemaps, so I inverted the Z coordinate).
case 1://Negative X
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0),glm::vec3(0,-1,0));
break;
case 3://Negative Y
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,-1,0),glm::vec3(0,0,-1));
break;
case 5://Negative Z
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,-1),glm::vec3(0,-1,0));
break;
case 0://Positive X
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0),glm::vec3(0,-1,0));
break;
case 2://Positive Y
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,1,0),glm::vec3(0,0,1));
break;
case 4://Positive Z
sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,1),glm::vec3(0,-1,0));
break;
The question still stands: what am I supposed to translate the view portion of depthMVP by, given that these are 6 individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows): http://i.imgur.com/HsOSG5v.png
As you can see, the shadows seem fine, but the positioning is obviously an issue. The view matrix I used to generate this was just an inverse translation of the camera's position (as the lookAt() function would do).
Update 3
Code, as it currently stands:
Shadow Vertex
void main(){
gl_Position = depthMVP * vec4(position,1);
pos =(M * vec4(position,1)).xyz;
}
Shadow Fragment
void main(){
fragmentDepth = distance(lightPos, pos);
}
Main Vertex
void main(){
PositionWorldSpace = (M*vec4(position, 1)).xyz;
ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}
Main Frag
void main(){
float texDist = texture(shadowMap, ShadowCoord.xyz/ShadowCoord.w).x;
float dist = distance(lightPos, PositionWorldSpace);
if(texDist < dist)
visibility = 0.1;
outColor = vec3(texDist);//This is to visualize the depth maps
}
The perspective matrix being used
glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);
Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner. It seems like it is normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that does not have a projection, but this is due to the frustum and I think will be easy to fix (like offsetting the cameras back .5 into the center).
If you leave the fragment depth for OpenGL to determine, you can take advantage of hardware hierarchical Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension), it prevents a hardware optimization called hierarchical Z. Hi-Z, for short, is a technique where rasterization of some primitives can be skipped on the basis that the depth values for the entire primitive lie behind the values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.
If, instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth, you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.
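In practice that means the shadow pass writes nothing from the fragment shader at all and the depth attachment is filled by the fixed-function depth writes. A rough sketch of what the shadow-pass shaders reduce to (not your exact shaders; only depthMVP is taken from the question):
// Depth-only shadow pass: no color output and no gl_FragDepth writes,
// so hierarchical-Z stays enabled. Shown here as GLSL source strings.
const char* shadowVS = R"(
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 depthMVP;               // light projection * light view * model
    void main() { gl_Position = depthMVP * vec4(position, 1.0); }
)";
const char* shadowFS = R"(
    #version 330 core
    void main() { }                      // depth is written automatically
)";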
Then, in your fragment shader where you sample your depth cube map, you would convert the distance values into depth values by using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):
float VectorToDepthValue(vec3 Vec)
{
vec3 AbsVec = abs(Vec);
float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
const float f = 2048.0;
const float n = 1.0;
float NormZComp = (f+n) / (f-n) - (2*f*n)/(f-n)/LocalZcomp;
return (NormZComp + 1.0) * 0.5;
}
Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap
So applying that extra bit of code to your shader, it would work out to something like this:
void main () {
float shadowDepth = texture(shadowMap, ShadowCoord.xyz/ShadowCoord.w).x;
float testDepth = VectorToDepthValue(lightPos - PositionWorldSpace);
if (shadowDepth < testDepth)
visibility = 0.1;
}
I suspect I'm not correctly rendering particle positions to my FBO, or correctly sampling those positions when rendering, though that may not be the actual problem with my code, admittedly.
I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/
A brief overview of the code:
Initialization:
Create an array of random particle positions in x,y,z
Create an array of texture sampling locations (e.g. for 2 particles, first particle at 0,0, next at 0.5,0)
Create a Frame Buffer Object and two particle position textures (one for input, one for output)
Create a full-screen quad (-1,-1 to 1,1)
Particle simulation:
Render a full-screen quad using the particle program (bind frame buffer, set viewport to the dimensions of my particle positions texture, bind input texture, and draw a quad from -1,-1 to 1,1). Input and output textures are swapped each frame.
Particle fragment shader samples the particle texture at the current fragment position (gl_FragCoord.xy), makes some modifications, and writes out the modified position
Particle rendering:
Draw using the vertex buffer of texture sampling locations
Vertex shader uses the sampling location to sample the particle position texture, then transforms the position using the view projection matrix
Draw the particle using a sprite texture (gl.POINTS)
Questions:
Am I correctly setting the viewport for the FBO in the particle simulation step? I.e. am I correctly rendering a full-screen quad?
// 6 2D corners = 12 floats
var vertexBuffer = new Float32Array(12);
// -1,-1 to 1,1 screen quad
vertexBuffer[0] = -1;
vertexBuffer[1] = -1;
vertexBuffer[2] = -1;
vertexBuffer[3] = 1;
vertexBuffer[4] = 1;
vertexBuffer[5] = 1;
vertexBuffer[6] = -1;
vertexBuffer[7] = -1;
vertexBuffer[8] = 1;
vertexBuffer[9] = 1;
vertexBuffer[10] = 1;
vertexBuffer[11] = -1;
// Create GL buffers with this data
g.particleSystem.vertexObject = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, g.particleSystem.vertexObject);
gl.bufferData(gl.ARRAY_BUFFER, vertexBuffer, gl.STATIC_DRAW);
...
gl.viewport(0, 0,
g.particleSystem.particleFBO.width,
g.particleSystem.particleFBO.height);
...
// Set the quad as vertex buffer
gl.bindBuffer(gl.ARRAY_BUFFER, g.screenQuad.vertexObject);
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0);
// Draw!
gl.drawArrays(gl.TRIANGLES, 0, 6);
Am I correctly setting the texture coordinates to sample the particle positions?
for(var i=0; i<numParticles; i++)
{
// Coordinates of particle within texture (normalized)
var texCoordX = Math.floor(i % texSize.width) / texSize.width;
var texCoordY = Math.floor(i / texSize.width) / texSize.height;
particleIndices[ pclIdx ] = texCoordX;
particleIndices[ pclIdx + 1 ] = texCoordY;
particleIndices[ pclIdx + 2 ] = 1; // not used in shader
}
The relevant shaders:
Particle simulation fragment shader:
precision mediump float;
uniform sampler2D mParticleTex;
void main()
{
// Current pixel is the particle's position on the texture
vec2 particleSampleCoords = gl_FragCoord.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
// Move the particle up
particlePos.y += 0.1;
if(particlePos.y > 2.0)
{
// Reset
particlePos.y = -2.0;
}
// Write particle out to texture
gl_FragColor = particlePos;
}
Particle rendering vertex shader:
attribute vec4 vPosition;
uniform mat4 u_modelViewProjMatrix;
uniform sampler2D mParticleTex;
void main()
{
vec2 particleSampleCoords = vPosition.xy;
vec4 particlePos = texture2D(mParticleTex, particleSampleCoords);
gl_Position = u_modelViewProjMatrix * particlePos;
gl_PointSize = 10.0;
}
Let me know if there's a better way to go about debugging this, if nothing else. I'm using webgl-debug to find gl errors and logging what I can to the console.
Your quad is facing away from the view, so I tried adding gl.disable(gl.CULL_FACE); still no result.
Then I noticed that while resizing the window panel with the canvas, it actually shows one black, square-shaped particle. So it seems that the rendering loop is not good.
If you look at the console log, it fails to load the particle image, and it also says that the FBO size is 512x1, which is not good.
Some function declarations do not exist, such as getTexSize. (?!)
The code needs tidying and grouping, and always check the console if you're already using it.
Hope this helps a bit.
Found the problem.
gl_FragCoord ranges from [0,0] to [screenwidth, screenheight]; I was wrongly thinking it went from [0,0] to [1,1].
I had to pass shader variables for the width and height, then normalize the sample coordinates before sampling from the texture.
I am working on an OpenGL 2D game with sprite graphics. I was recently advised to use OpenGL ES calls, as ES is a subset of OpenGL and would let me port the game more easily to mobile platforms. The majority of the code is just calls to a draw_img function, which is defined as follows:
void draw_img(float x, float y, float w, float h, GLuint tex,float r=1,float g=1, float b=1) {
glColor3f(r,g,b);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f);
glVertex2f( x, y);
glTexCoord2f(1.0f, 0.0f);
glVertex2f(w+x, y);
glTexCoord2f(1.0f, 1.0f);
glVertex2f( w+x, h+y);
glTexCoord2f(0.0f, 1.0f);
glVertex2f( x, h+y);
glEnd();
}
What do I need to change to make this OpenGL ES compatible? Also, the reason I am using fixed-function rather than shaders is that I am developing on a machine which doesn't support GLSL.
In OpenGL ES 1.1 use the glVertexPointer(), glColorPointer(), glTexCoordPointer() and glDrawArrays() functions to draw a quad. In contrast to your OpenGL implementation, you will have to describe the structures (vectors, colors, texture coordinates) that your quad consists of instead of just using the built-in glTexCoord2f, glVertex2f and glColor3f methods.
Here is some example code that should do what you want. (I have used the argument names you used in your function definition, so it should be simple to port your code from the example.)
First, you need to define a structure for one vertex of your quad. This will hold the quad vertex positions, colors and texture coordinates.
// Define a simple 2D vector
typedef struct Vec2 {
float x,y;
} Vec2;
// Define a simple 4-byte color
typedef struct Color4B {
GLubyte r,g,b,a;
} Color4B;
// Define a suitable quad vertex with a color and tex coords.
typedef struct QuadVertex {
Vec2 vect; // 8 bytes
Color4B color; // 4 bytes
Vec2 texCoords; // 8 bytes
} QuadVertex;
Then, you should define a structure describing the whole quad consisting of four vertices:
// Define a quad structure
typedef struct Quad {
QuadVertex tl;
QuadVertex bl;
QuadVertex tr;
QuadVertex br;
} Quad;
Now, instantiate your quad and assign quad vertex information (positions, colors, texture coordinates):
Quad quad;
quad.bl.vect = (Vec2){x,y};
quad.br.vect = (Vec2){w+x,y};
quad.tr.vect = (Vec2){w+x,h+y};
quad.tl.vect = (Vec2){x,h+y};
quad.tl.color = quad.tr.color = quad.bl.color = quad.br.color
= (Color4B){(GLubyte)(r*255), (GLubyte)(g*255), (GLubyte)(b*255), 255};
quad.tl.texCoords = (Vec2){0,0};
quad.tr.texCoords = (Vec2){1,0};
quad.br.texCoords = (Vec2){1,1};
quad.bl.texCoords = (Vec2){0,1};
Now tell OpenGL how to draw the quad. The calls to gl...Pointer provide OpenGL with the right offsets and sizes to your vertex structure's values, so it can later use that information for drawing the quad.
// "Explain" the quad structure to OpenGL ES
#define kQuadSize sizeof(quad.bl)
long offset = (long)&quad;
// vertex
int diff = offsetof(QuadVertex, vect);
glVertexPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
// color
diff = offsetof(QuadVertex, color);
glColorPointer(4, GL_UNSIGNED_BYTE, kQuadSize, (void*)(offset + diff));
// texCoods
diff = offsetof(QuadVertex, texCoords);
glTexCoordPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
Finally, assign the texture and draw the quad. glDrawArrays tells OpenGL to use the previously defined offsets together with the values contained in your Quad object to draw the shape defined by 4 vertices.
glBindTexture(GL_TEXTURE_2D, tex);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindTexture(GL_TEXTURE_2D, 0);
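One detail the snippet above leaves out: in the ES 1.1 fixed pipeline, the client-side arrays used by the gl...Pointer calls must be enabled before glDrawArrays, otherwise nothing is drawn. A minimal sketch (to be placed around the pointer setup and draw shown above):
// Enable the fixed-pipeline array state before drawing (and disable it
// afterwards if later draws do not use all of these arrays).
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);

// ... gl...Pointer setup and glDrawArrays as shown above ...

glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);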
Please also note that it is perfectly OK to use OpenGL ES 1 if you don't need shaders. The main difference between ES1 and ES2 is that, in ES2, there is no fixed pipeline, so you would need to implement a matrix stack plus shaders for the basic rendering on your own. If you are fine with the functionality offered by the fixed pipeline, just use OpenGL ES 1.