How can I add automatic texture coordinates in OpenGL? - c++

I created a Bézier surface in OpenGL this way:
GLfloat punktyWSP[5][5][3] = {
{ {0,0,4}, {1,0,4},{2,0,4},{3,0,4},{4,1,4}},
{ {0,0,3}, {1,1,3},{2,1,3},{3,1,3},{4,1,3} },
{ {0,1,2}, {1,2,2},{2,6,2},{3,2,2},{4,1,2} },
{ {0,0,1}, {1,1,1},{2,1,1},{3,1,1},{4,1,1} },
{ {0,0,0}, {1,0,0},{2,0,0},{3,0,0},{4,1,0} }
};
glMap2f(GL_MAP2_VERTEX_3, 0, 1, 3, 5, 0, 1, 15, 5, &punktyWSP[0][0][0]);
glEnable(GL_MAP2_VERTEX_3);
glMapGrid2f(u, 0, 1, v, 0, 1);
glShadeModel(GL_FLAT);
glEnable (GL_AUTO_NORMAL);
glEvalMesh2(GL_FILL, 0, u, 0, v);
Now I want to texture it.
Is there any way to add automatic texture coordinates to my surface, the way normals are generated with glEnable(GL_AUTO_NORMAL)?
If there is no such function, do you have any idea how to add coordinates to my surface? Maybe glEnable(GL_MAP2_TEXTURE_COORD_2)?
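For reference, a minimal sketch of what that approach might look like: a texture-coordinate evaluator defined next to the vertex evaluator above (untested; textureID stands for a texture object created and filled elsewhere):
GLfloat texPts[2][2][2] = { // a 2x2 grid of (s,t) control points is enough for linear coordinates
{ {0.0f, 0.0f}, {0.0f, 1.0f} },
{ {1.0f, 0.0f}, {1.0f, 1.0f} }
};
glMap2f(GL_MAP2_TEXTURE_COORD_2, 0, 1, 2, 2, 0, 1, 4, 2, &texPts[0][0][0]);
glEnable(GL_MAP2_TEXTURE_COORD_2);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureID); // assumption: a texture created elsewhere
glMapGrid2f(u, 0, 1, v, 0, 1);
glEvalMesh2(GL_FILL, 0, u, 0, v); // the evaluator now generates texture coordinates along with vertices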

Related

Map two colours to two other colours in GLSL

I'm trying to take a noise pattern, which consists of black and white (and grey where there is a smooth transition between the two), and map it to two different colours, but I'm having trouble figuring out how to do this.
I can easily replace the white or black with a simple if statement, but the gradient areas where the white and black are mixed are still a mix of white and black, which makes sense. So I need to actually map the colours to the new colours, but I have no idea how I'm supposed to go about this.
There are a couple of easy ways.
The inflexible way: use mix
gl_FragColor = mix(color0, color1, noise)
The more flexible way: use a ramp texture
float u = (noise * (rampTextureWidth - 1.0) + 0.5) / rampTextureWidth;
gl_FragColor = texture2D(rampTexture, vec2(u, 0.5));
Using a ramp texture handles any number of colors, whereas mix only handles 2.
const vs = `
attribute vec4 position;
attribute float noise;
uniform mat4 u_matrix;
varying float v_noise;
void main() {
gl_Position = u_matrix * position;
v_noise = noise;
}
`;
const fs = `
precision highp float;
varying float v_noise;
uniform sampler2D rampTexture;
uniform float rampTextureWidth;
void main() {
float u = (v_noise * (rampTextureWidth - 1.0) + 0.5) / rampTextureWidth;
gl_FragColor = texture2D(rampTexture, vec2(u, 0.5));
}
`;
"use strict";
const m4 = twgl.m4;
const gl = document.querySelector("canvas").getContext("webgl");
// compiles shaders, links program, looks up locations
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
/*
6------7
/| /|
/ | / |
2------3 |
| | | |
| 4---|--5
| / | /
|/ |/
0------1
*/
const arrays = {
position: [
-1, -1, -1,
1, -1, -1,
-1, 1, -1,
1, 1, -1,
-1, -1, 1,
1, -1, 1,
-1, 1, 1,
1, 1, 1,
],
noise: {
numComponents: 1,
data: [
1, 0.5, 0.2, 0.3, 0.9, 0.1, 0.7, 1,
],
},
indices: [
0, 2, 1, 1, 2, 3,
1, 3, 5, 5, 3, 7,
5, 7, 4, 4, 7, 6,
4, 6, 0, 0, 6, 2,
2, 6, 3, 6, 7, 3,
0, 1, 4, 4, 1, 5,
],
};
const bufferInfo = twgl.createBufferInfoFromArrays(gl, arrays);
const red = [255, 0, 0, 255];
const yellow = [255, 255, 0, 255];
const blue = [ 0, 0, 255, 255];
const green = [ 0, 255, 0, 255];
const cyan = [ 0, 255, 255, 255];
const magenta = [255, 0, 255, 255];
function makeTexture(gl, name, colors) {
const width = colors.length / 4;
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
width, 1, 0,
gl.RGBA, gl.UNSIGNED_BYTE,
new Uint8Array(colors));
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
return {
name,
texture,
width,
};
}
const textures = [
makeTexture(gl, 'one color',
[...red]),
makeTexture(gl, 'two colors',
[...red, ...yellow]),
makeTexture(gl, 'three colors',
[...blue, ...red, ...yellow]),
makeTexture(gl, 'six colors',
[...green, ...red, ...blue, ...yellow, ...cyan, ...magenta]),
];
const infoElem = document.querySelector('#info');
function render(time) {
time *= 0.001;
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
// draw cube
const fov = 30 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.5;
const zFar = 40;
const projection = m4.perspective(fov, aspect, zNear, zFar);
const eye = [1, 4, -7];
const target = [0, 0, 0];
const up = [0, 1, 0];
const camera = m4.lookAt(eye, target, up);
const view = m4.inverse(camera);
const viewProjection = m4.multiply(projection, view);
const world = m4.rotationY(time);
gl.useProgram(programInfo.program);
const tex = textures[time / 2 % textures.length | 0];
infoElem.textContent = tex.name;
// calls gl.bindBuffer, gl.enableVertexAttribArray, gl.vertexAttribPointer
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
// calls gl.uniformXXX, gl.activeTexture, gl.bindTexture
twgl.setUniformsAndBindTextures(programInfo, {
u_matrix: m4.multiply(viewProjection, world),
rampTexture: tex.texture,
rampTextureWidth: tex.width,
});
// calls gl.drawArrays or gl.drawElements
twgl.drawBufferInfo(gl, bufferInfo);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
<script src="https://twgljs.org/dist/4.x/twgl-full.min.js"></script>
<canvas></canvas>
<div id="info"></div>

glClear() keeps the screen black

I'm developing a 2D game called Spaceland and I've run into a problem with clearing the screen. Whenever I call glClear(GL_COLOR_BUFFER_BIT) every frame, it keeps my screen black until I stop calling it. I have tested this by assigning glClear() to a key: when I hold it down the screen turns black, and when it's not pressed, the quad that is spreading across the screen just grows until I clear again.
I am using glClearColor(0, 0, 0, 1) when I create the window. I have tried turning glfwSwapInterval() off and on.
create() function in my Window class:
public void create(boolean vsync) {
GLFWErrorCallback.createPrint(System.err).set();
GLFWVidMode vid = glfwGetVideoMode(glfwGetPrimaryMonitor());
keys = new boolean[GLFW_KEY_LAST];
for (int i = 0; i < GLFW_KEY_LAST; i ++) {
keys[i] = false;
}
glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);
ID = glfwCreateWindow(vid.width(), vid.height(), TITLE, glfwGetPrimaryMonitor(), 0);
if (ID == 0)
throw new IllegalStateException("Error whilst creating window: '" + TITLE + "'");
glfwMakeContextCurrent(ID);
createCapabilities();
glClearColor(0, 0, 0, 1);
camera = new Camera(getWidth(), getHeight());
glfwSwapInterval(vsync ? 1 : 0);
}
Sprite Class:
public class Sprite {
private VertexArray vao;
private VertexBuffer
pVbo,
iVbo;
private int vertexCount;
private float scale;
private Vector3f position;
private Vector3f rotation;
private Matrix4f tMatrix;
public Sprite(float[] pos, int[] indices) {
vertexCount = indices.length;
position = new Vector3f(0, 0, 0);
rotation = new Vector3f(0, 0, 0);
scale = 0.1f;
tMatrix = MatrixHelper.createTransformationMatrix(position, rotation, scale);
vao = new VertexArray();
pVbo = new VertexBuffer(false);
iVbo = new VertexBuffer(true);
vao.bind();
pVbo.bind();
pVbo.add(pos);
vao.add();
pVbo.unbind();
iVbo.bind();
iVbo.add(indices);
iVbo.unbind();
vao.unbind();
}
public void setPosition(float x, float y, float z) {
position.x = x;
position.y = y;
position.z = z;
}
public void setRotation(Vector3f rot) {
rotation = rot;
}
public void render(int renderType) {
MatrixHelper.setTMatrixPosition(tMatrix, position);
setPosition(getPosition().x + 0.0001f, 0, 0);
System.out.println(tMatrix);
Spaceland.shader.bind();
Spaceland.shader.editValue("transformation", tMatrix);
vao.bind();
glEnableVertexAttribArray(0);
iVbo.bind();
glDrawElements(renderType, vertexCount, GL_UNSIGNED_INT, 0);
iVbo.unbind();
glDisableVertexAttribArray(0);
vao.unbind();
Spaceland.shader.unbind();
}
public Vector3f getPosition() {
return position;
}
}
I don't think you need to see my Camera class or MatrixHelper class, as the problem occurred before I implemented them.
Main class (ignore rose[] and roseI[]; it's just a cool pattern I made as a test):
public class Spaceland {
public static Window window;
public static Sprite sprite;
public static Shader shader;
public static float[] rose = {
-0.45f, 0f,
0.45f, 0f,
0f, 0.45f,
0f, -0.45f,
-0.4f, -0.2f,
-0.4f, 0.2f,
0.4f, -0.2f,
0.4f, 0.2f,
-0.2f, -0.4f,
-0.2f, 0.4f,
0.2f, -0.4f,
0.2f, 0.4f
};
public static int[] roseI = {
0, 1, 0, 2, 0, 3, 0, 4, 0, 5, 0, 6, 0, 7, 0, 8, 0, 9, 0, 10, 0, 11,
1, 2, 1, 3, 1, 4, 1, 5, 1, 6, 1, 7, 1, 8, 1, 9, 1, 10, 1, 11,
2, 3, 2, 4, 2, 5, 2, 6, 2, 7, 2, 8, 2, 9, 2, 10, 2, 11,
3, 4, 3, 5, 3, 6, 3, 7, 3, 8, 3, 9, 3, 10, 3, 11,
4, 5, 4, 6, 4, 7, 4, 8, 4, 9, 4, 10, 4, 11,
5, 6, 5, 7, 5, 8, 5, 9, 5, 10, 5, 11,
6, 7, 6, 8, 6, 9, 6, 10, 6, 11,
7, 8, 7, 9, 7, 10, 7, 11,
8, 9, 8, 10, 8, 11,
9, 10, 9, 11,
10, 11,
};
public static float[] quad = {
0.5f, 0.5f,
0.5f, -0.5f,
-0.5f, 0.5f,
-0.5f, -0.5f
};
public static int[] quadI = {
2, 0, 3,
0, 1, 3
};
public static void main(String[] args) {
init();
}
public static void loop() {
while (!window.isCloseRequested()) {
update();
render();
}
destroy(0);
}
public static void init() {
if (!glfwInit())
throw new IllegalStateException("Error whilst initialising GLFW");
window = new Window("Spaceland");
window.create(true);
shader = new Shader("src/main/java/com/spaceland/graphics/fragment.fs", "src/main/java/com/spaceland/graphics/vertex.vs");
sprite = new Sprite(quad, quadI);
loop();
}
public static void render() {
window.render();
sprite.render(GL11.GL_TRIANGLES);
}
public static void update() {
window.update();
if (window.isDown(GLFW_KEY_SPACE)) {
glClear(GL_COLOR_BUFFER_BIT);
}
}
public static void destroy(int error) {
window.destroy();
glfwTerminate();
glfwSetErrorCallback(null).free();
shader.destroy();
VertexBuffer.deleteAll();
VertexArray.destroyAll();
System.exit(error);
}
}
Please tell me if you need to see the Shader class, shader vs and fs files, or anything else.
Thanks!
glClear affects the output buffers, so it is part of rendering. If you want to clear as part of your rendering, put glClear inside your render function.
You have it inside update. I suspect that whoever is calling render and update (LWJGL, presumably?) doesn't guarantee any particular ordering between them, so each time you're asked to update you're stomping on top of the last thing you rendered.
Update: adjusts internal state, usually partly as a function of time.
Render: captures the current state visually.
It is not very clear in my question, but the issue was that I cleared the screen, swapped buffers, and then rendered, which doesn't work.
glClear(...);
glfwSwapBuffers(...);
...render...
This is how it was originally, and it doesn't work.
glClear(...);
...render...
glfwSwapBuffers(...);
This is how I do it now, and it works fine.

Do I need Bind Pose Bone Transformation for my mesh Animation?

I have a Hand mesh which I want to animate.
I have the Skeleton which can be hierarchically animated.
My mesh is also weighted in Blender, so each vertex is affected by up to 4 associated bones.
When I apply the animation of my skeleton to the mesh, the hierarchy is applied correctly (so the hierarchy of the mesh matches the hierarchy of the skeleton).
So far so good. Now the question:
The fingers look stretched (as if they had been smashed by a heavy door). Why?
Note: I didn't apply the bind-pose bone transformation matrix explicitly, but I read about it and I believe its functionality is already there, in the hierarchical transformation I have for my skeleton.
If you need more clarification of the steps, please ask.
vector<glm::mat4> Posture1Hand::HierarchyApplied(HandSkltn HNDSKs){
vector <glm::mat4> Matrices;
Matrices.resize(HNDSKs.GetLimbNum());
//non Hierarchical Matrices
for (unsigned int i = 0; i < Matrices.size(); i++){
Matrices[i] = newPose[i].getModelMatSkltn(HNDSKs.GetLimb(i).getLwCenter());
}
for (unsigned int i = 0; i < Matrices.size(); i++){
vector<Limb*>childeren = HNDSKs.GetLimb(i).getChildren();
for (unsigned int j = 0; j < childeren.size(); j++){
Matrices[childeren[j]->getId()] = Matrices[i] * Matrices[childeren[j]->getId()];
}
}
return Matrices;
}
Here is my getModelMatSkltn method.
inline glm::mat4 getModelMatSkltn(const glm::vec3& RotationCentre) const{ // applies the rotation to the whole hierarchy
glm::mat4 posMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
posMatrix = glm::translate(posMatrix, newPos);
glm::mat4 trMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
glm::mat4 OriginTranslate = glm::translate(trMatrix, -RotationCentre);
glm::mat4 InverseTranslate = glm::translate(trMatrix, RotationCentre);
glm::mat4 rotXMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotXMatrix = glm::rotate(rotXMatrix, glm::radians(newRot.x), glm::vec3(1, 0, 0));
glm::mat4 rotYMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotYMatrix = glm::rotate(rotYMatrix, glm::radians(newRot.y), glm::vec3(0, 1, 0));
glm::mat4 rotZMatrix = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
rotZMatrix = glm::rotate(rotZMatrix, glm::radians(newRot.z), glm::vec3(0, 0, 1));
glm::mat4 scaleMatric = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
scaleMatric = glm::scale(scaleMatric, newScale);
glm::mat4 rotMatrix = rotZMatrix*rotYMatrix*rotXMatrix;
rotMatrix = InverseTranslate*rotMatrix*OriginTranslate;
return posMatrix*rotMatrix*scaleMatric;
}
and this is how I send 20 transformation matrices (because of the 20 joints in the hand) to the GPU:
void GLShader::Update(const vector<glm::mat4>& trMat, const GLCamera& camera){
vector<glm::mat4> MVP; MVP.resize(trMat.size());
for (unsigned int i = 0; i < trMat.size(); i++){
MVP[i] = camera.getViewProjection()* trMat[i];
}
glUniformMatrix4fv(newUniform[TRANSFORM_U], trMat.size(), GL_FALSE, &MVP[0][0][0]); // upload trMat.size() 4x4 float matrices
}
I guess one should be familiar with how the vertex position is calculated in the shader in order to answer the question, so I'm including part of my vertex shader too.
attribute vec3 position;
attribute vec2 texCoord;
attribute vec4 weight;
attribute vec4 weightInd;
uniform mat4 transform[20]; // uniform array for the 20 joints in my skeleton
void main(){
mat4 WMat = mat4(0.0); // weighted skinning matrix; must start at zero before accumulating
float w;
int Index;
for (int i=0; i<4; i++){
Index=int(weightInd[i]);
w=weight[i];
WMat += w*transform[Index];
}
gl_Position= WMat*vec4(position, 1.0);
}

Incorrectly rendering an Icosahedron in OpenGL

I'm trying to draw an icosahedron in OpenGL with C++. I keep getting close but end up with some missing faces. I have found 3 different sets of vertex/index data on multiple sites, most often the data listed below:
float X = 0.525731112119133606f;
float Z = 0.850650808352039932f;
float temppts[12][3] = { { -X, 0.0f, Z }, { X, 0.0f, Z }, { -X, 0.0f, -Z }, { X, 0.0f, -Z },
{ 0.0f, Z, X }, { 0.0f, Z, -X }, { 0.0f, -Z, X }, { 0.0f, -Z, -X },
{ Z, X, 0.0f }, { -Z, X, 0.0f }, { Z, -X, 0.0f }, { -Z, -X, 0.0f } };
GLushort tempindicies[60] =
{ 1, 4, 0, 4, 9, 0, 4, 5, 9, 8, 5, 4, 1, 8, 4,
1, 10, 8, 10, 3, 8, 8, 3, 5, 3, 2, 5, 3, 7, 2,
3, 10, 7, 10, 6, 7, 6, 11, 7, 6, 0, 11, 6, 1, 0,
10, 1, 6, 11, 0, 9, 2, 11, 9, 5, 2, 9, 11, 2, 7};
This code is adapted from a book, and multiple sites show it working, though they draw in immediate mode and I'm using a VBO/IBO. Can anyone point me to some working vertex/index data, or tell me what is going wrong when transferring this to buffer objects? All three data sets produce differently incorrect icosahedra, each with different faces missing.
I have checked over my bufferData calls many times and tried several drawing modes (TRIANGLES, TRIANGLE_STRIP, ...), and I am convinced the index data is wrong somehow.
I used the mesh coordinates (vertices) and the triangle connectivity from Platonic Solids (scroll down to icosahedron). I've pasted a screen shot from that file below. When calling glDrawElements I used GL_TRIANGLES.
[Screenshot: the resulting icosahedron]
Another thing to watch out for is back-face culling. Initially, switch off back-face culling:
glDisable(GL_CULL_FACE);
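For reference, here is a minimal sketch of how the buffer upload and draw call could look with the data from the question (untested; assumes a compatibility-profile context with client-side vertex arrays and the temppts/tempindicies arrays shown above):
GLuint vbo = 0, ibo = 0;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(temppts), temppts, GL_STATIC_DRAW); // 12 vertices * 3 floats
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(tempindicies), tempindicies, GL_STATIC_DRAW); // 60 indices
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0); // positions come from the bound GL_ARRAY_BUFFER
glDisable(GL_CULL_FACE); // rule out winding-order issues first
glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_SHORT, (void*)0);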

Algorithm for a geodesic sphere

I have to make a sphere out of smaller, uniformly distributed balls. I think the optimal way is to build a triangle-based geodesic sphere and use its vertices as the centres of my balls, but I am failing to write an algorithm that generates the vertices.
An answer in C++ or pseudo-code would be best.
Example of a geodesic sphere: http://i.stack.imgur.com/iNQfP.png
Using the link #Muckle_ewe gave me, I was able to code the following algorithm:
Outside main():
class Vector3d { // a pretty standard 3D vector class; norm() is assumed to return the normalized (unit-length) vector
public:
double x, y, z;
...
}
void subdivide(const Vector3d &v1, const Vector3d &v2, const Vector3d &v3, vector<Vector3d> &sphere_points, const unsigned int depth) {
if(depth == 0) {
sphere_points.push_back(v1);
sphere_points.push_back(v2);
sphere_points.push_back(v3);
return;
}
const Vector3d v12 = (v1 + v2).norm();
const Vector3d v23 = (v2 + v3).norm();
const Vector3d v31 = (v3 + v1).norm();
subdivide(v1, v12, v31, sphere_points, depth - 1);
subdivide(v2, v23, v12, sphere_points, depth - 1);
subdivide(v3, v31, v23, sphere_points, depth - 1);
subdivide(v12, v23, v31, sphere_points, depth - 1);
}
void initialize_sphere(vector<Vector3d> &sphere_points, const unsigned int depth) {
const double X = 0.525731112119133606;
const double Z = 0.850650808352039932;
const Vector3d vdata[12] = {
{-X, 0.0, Z}, { X, 0.0, Z }, { -X, 0.0, -Z }, { X, 0.0, -Z },
{ 0.0, Z, X }, { 0.0, Z, -X }, { 0.0, -Z, X }, { 0.0, -Z, -X },
{ Z, X, 0.0 }, { -Z, X, 0.0 }, { Z, -X, 0.0 }, { -Z, -X, 0.0 }
};
int tindices[20][3] = {
{0, 4, 1}, { 0, 9, 4 }, { 9, 5, 4 }, { 4, 5, 8 }, { 4, 8, 1 },
{ 8, 10, 1 }, { 8, 3, 10 }, { 5, 3, 8 }, { 5, 2, 3 }, { 2, 7, 3 },
{ 7, 10, 3 }, { 7, 6, 10 }, { 7, 11, 6 }, { 11, 0, 6 }, { 0, 1, 6 },
{ 6, 1, 10 }, { 9, 0, 11 }, { 9, 11, 2 }, { 9, 2, 5 }, { 7, 2, 11 }
};
for(int i = 0; i < 20; i++)
subdivide(vdata[tindices[i][0]], vdata[tindices[i][1]], vdata[tindices[i][2]], sphere_points, depth);
}
Then in the main():
vector<Vector3d> sphere_points;
initialize_sphere(sphere_points, DEPTH); // where DEPTH should be the subdivision depth
for(const Vector3d &point : sphere_points)
const Vector3d point_tmp = point * RADIUS + CENTER; // for the sphere I want to draw, I iterate over all the precomputed points and, with a linear transform, move them to the sphere's CENTER and scale them by the chosen RADIUS
You only need to call initialize_sphere() once and can reuse the result for every sphere you want to draw.
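For instance, a minimal reuse sketch (drawBall is a hypothetical helper that renders one small ball at a given centre; the CENTER_*/RADIUS_* values are placeholders):
vector<Vector3d> sphere_points;
initialize_sphere(sphere_points, DEPTH); // computed once
for (const Vector3d &point : sphere_points) {
drawBall(point * RADIUS_A + CENTER_A, ballRadius); // first big sphere
drawBall(point * RADIUS_B + CENTER_B, ballRadius); // second big sphere reuses the same points
}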
I've done this before for a graphics project; the algorithm I used is detailed on this website:
http://www.opengl.org.ru/docs/pg/0208.html
Just ignore any OpenGL drawing calls and only code up the parts that deal with creating the actual vertices.
There are well-known algorithms for triangulating surfaces. You should be able to use the GNU Triangulated Surface Library to generate a suitable mesh if you don't want to code one of them up yourself.
It depends on the number of triangles you want the sphere to have; you can make the resolution as fine as you like.
First focus on creating a dome; you can double it later by taking the negative of the coordinates of your upper dome. You will generate the sphere by interlocking rows of triangles.
Your triangles are equilateral, so decide on a length.
Divide 2πr by the number of triangles you want on the bottom row of the dome.
This will be the length of each side of each triangle.
Next you need to create a concentric circle that intersects the surface of the sphere.
Between this circle and the base of the dome will be your first row.
You will need to find the angle by which each triangle is tilted. (I will post later when I figure that out.)
Repeat the process for each concentric circle (each one generating a row) until the height of a row times the number of rows approximately equals the 2πr you started with.
I will try to program it later if I get a chance. You could also try posting on the Math forum.