Choosing textures in OpenGL

Hi guys and girls, the problem I have is that I have successfully loaded 3 BMP textures (or at least I hope I have) using
char* textureFilenames[textureCount] = {"cement.bmp","hedge.bmp","sky.bmp"};
and I'm currently applying a texture like this:
glTexCoord2f(0.0,0.0);
glVertex3f(-150.0, 0.0, -150.0);
glTexCoord2f(1.0,0.0);
glVertex3f(-150.0, 0.0, 150.0);
glTexCoord2f(1.0,1.0);
glVertex3f(150.0, 0.0, 150.0);
glTexCoord2f(0.0,1.0);
glVertex3f(150.0, 0.0, -150.0);
However, it currently only ever picks up sky.bmp. Is there any way I can select one of the others?

OpenGL is a state machine. The current texture is part of the OpenGL state. The last texture you bind with glBindTexture() will be used until you bind another.
glBindTexture(GL_TEXTURE_2D, cement_texture_id);
// ... following geometry will use the cement texture
glBindTexture(GL_TEXTURE_2D, hedge_texture_id);
// ... hedge texture
glBindTexture(GL_TEXTURE_2D, sky_texture_id);
// ... sky texture
The "OpenGL RedBook" has a chapter on texture mapping that covers the basics.

Your mistake lies in a misunderstanding of OpenGL. OpenGL is not a scene graph! It's best to think of OpenGL as a set of drawing tools used to paint on a canvas called the framebuffer.
So in using OpenGL you must put your mind in a state similar to drawing a picture with pencils, eraser, brush and paint. First you prepare your tools: textures are like "sheets of colour", meshes of vertices are like a delicate "brush".
Like an artist, the very first step is to prepare your tools. You prepare your geometry (i.e. the meshes), loading it into fast memory with glBufferData if you use Vertex Buffer Objects, and your paint and dye, the textures. This is what you do in the "init" phase (I prefer to do this on demand, so that users don't see a "loading" screen).
First you load all your objects (geometry in VBOs, textures, etc.); you do this exactly once per required object, i.e. once an object is prepared (i.e. complete) you don't have to re-upload it.
Then, in every drawing iteration, for each object you want to draw you bind the needed OpenGL objects to their targets and then perform the drawing calls, which are executed using the currently bound objects.
In other words, something like this (please use common sense to fill in the missing functions):
struct texture;   // some structure holding texture information, details don't matter here
struct geometry;  // structure holding object geometry and cross references

texture  *textures;
geometry *geometries;

texture * load_texture(char const *texture_name)
{
    texture *tex;
    if( texture_already_loaded(textures, texture_name) )
        tex = get_texture(texture_name);
    else
        tex = load_texture_data(textures, texture_name);
    return tex;
}

geometry * load_geometry(char const *geometry_name)
{
    geometry *geom;
    if( geometry_already_loaded(geometries, geometry_name) )
        geom = get_geometry(geometry_name);
    else
        geom = load_geometry_data(geometries, geometry_name);
    if( geom->texture_name )
        geom->texture = load_texture(geom->texture_name);
    return geom;
}
void onetime_initialization()
{
    for(geometry_list_entry *geom = geometry_name_list; geom; geom = geom->next)
        geom->geometry = load_geometry(geom->name);
}
void drawGL()
{
    glViewport(...);
    glClearColor(...);
    glClear(...);

    glMatrixMode(GL_PROJECTION);
    // ...
    glMatrixMode(GL_MODELVIEW);
    // ...

    for(geometry_list_entry *geom = geometry_name_list; geom; geom = geom->next)
    {
        glMatrixMode(GL_MODELVIEW); // this is not redundant!
        glPushMatrix();
        apply_geometry_transformation(geom->transformation); // calls the proper glTranslate, glRotate, glLoadMatrix, glMultMatrix, etc.

        glBindTexture(GL_TEXTURE_2D, geom->texture->ID);
        draw_geometry(geom);

        glMatrixMode(GL_MODELVIEW); // this is not redundant!
        glPopMatrix();
    }
    // ...
    swapBuffers();
}
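As an illustration, apply_geometry_transformation could boil down to something like this (a sketch; the transformation struct and its fields are hypothetical):
void apply_geometry_transformation(transformation const *t)
{
    /* order matters: the transformation specified last is applied to the vertices first */
    glTranslatef(t->pos[0], t->pos[1], t->pos[2]);
    glRotatef(t->angle_deg, t->axis[0], t->axis[1], t->axis[2]);
    glScalef(t->scale, t->scale, t->scale);
}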

Related

OpenGL/GLSL/LWJGL: either 2D or 3D, not both?

So I'm trying to render a basic overlay onto my 3D scene, and currently I can have either the 3D scene or the 2D overlay; I can't work out how to get both.
In my main method, where render is called, I moved specific render functions to manager classes, so in the main render I call:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_COLOR_MATERIAL);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-aspect, aspect, -1, 1, -10, 10);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
material.setColour(new Vector3f(1,1,1));
sLight.getPointLight().setPosition(camera.getPosition());
sLight.setDirection(camera.getForward());
DayCycle.getInstance().update(Time.getDelta());
shader.updateUniforms(transform.getTransformation(), transform.getProjectedTransformation(), material);
//material is a wrapper class for textures and specular value etc
//transform is a matrix wrapper for getting projected transformations, taking the camera position when its created
WorldManager.renderAll(true); //true denotes yes to wireframe mode
InterfaceManager.renderAll();
glfwSwapBuffers(window);
glfwPollEvents();
If I comment out WorldManager.renderAll(), I get the little 2D square in the right part of the screen; if I don't comment it out, I get the world render but no little square.
WorldManager.renderAll()
public static void renderAll(boolean wireframeMode)
{
    RendererUtils.setWireframeMode(wireframeMode);
    for (String s : chunks.keySet())
    {
        Chunk actingChunk = chunks.get(s);
        Transform transform = new Transform();
        Shader shader = PhongShader.getInstance();

        transform.setTranslation(new Vector3f(actingChunk.getLocation().getX() * (Chunk.ChunkSize), 0.0f, actingChunk.getLocation().getY() * (Chunk.ChunkSize)));
        transform.setScale(1.0f, 50f, 1.0f);

        shader.updateUniforms(transform.getTransformation(), transform.getProjectedTransformation(), actingChunk.getMaterial());
        shader.bind();
        actingChunk.getMesh().draw();
        //transform.setRotation(new Vector3f(0,0,0));
    }
}
InterfaceManager.renderAll()
public static void renderAll()
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, 0, height, -10, 10);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);
    RendererUtils.setWireframeMode(false);

    for (Interface i : interfaces)
    {
        Transform transform = new Transform();
        transform.setTranslation(new Vector3f(0,0,0));
        InterfaceShader.getInstance().updateUniforms(transform.getProjectedTransformation());
        InterfaceShader.getInstance().bind();
        i.getMesh().draw();
    }

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
}
When I have WorldManager.renderAll() uncommented, I get a nice sea of triangles (as it's meant to look) but no 2D square.
With it commented out, I get a nice little square where it's meant to be and nothing else.
Shaders are here: https://pastebin.com/xWaWhQHy because I felt this post was getting too long to have them inlined.
What's my problem? I can't figure out where it is.
Edit: If I've missed any pertinent code, tell me and I'll upload it to a pastebin.
Edit 2: Updated my code here to reflect that I'd removed a shader in InterfaceManager to actually get a square to draw at all: https://pastebin.com/pHHDsCvF for the shader code.
Edit 3: I've determined it's something to do with my interface shaders; if I use PhongShader instead of InterfaceShader then it works exactly how I wanted it to.
I suggest modifying the code this way:
WorldManager.renderAll(true); //true denotes yes to wireframe mode
glClear(GL_DEPTH_BUFFER_BIT);
InterfaceManager.renderAll();
This way the depth buffer is cleared before the 2D interface is rendered.
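In C-style pseudocode the overall frame then reads (function names are placeholders):
render3DScene();              // depth test on, fills the depth buffer
glClear(GL_DEPTH_BUFFER_BIT); // throw the scene's depth values away
render2DOverlay();            // the overlay now passes the depth test everywhere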
The problem was that I was still applying transformations to the vertices after passing them to the shader.
By editing out the transformation (and later scrapping the entire vertex shader) in the InterfaceShader instance, the little squares appeared in the right place.

Multiple images of same mesh without duplicate triangle transfers

I take multiple images of the same mesh using OpenGL, GLEW and GLFW. The mesh (triangles) doesn't change in each shot, only the ModelViewMatrix does.
Here's the important code of my mainloop:
for (int i = 0; i < number_of_images; i++) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    /* set GL_MODELVIEW matrix depending on i */

    glBegin(GL_TRIANGLES);
    for (Triangle &t : mesh) {
        for (Point &p : t) {
            glVertex3f(p.x, p.y, p.z);
        }
    }
    glEnd(); // note: every glBegin needs a matching glEnd

    glReadPixels(/* ... */); // get picture and store it somewhere
    glfwSwapBuffers();
}
As you can see, I set/transfer the triangle vertices for each shot I want to take. Is there a solution in which I only need to transfer them once? My mesh is quite large, so this transfer takes quite some time.
In the year 2016 you must not use glBegin/glEnd. No way. Use Vertex Array Objects instead, and use custom vertex and/or geometry shaders to reposition and modify your vertex data. Using these techniques you upload your data to the GPU once, and then you can draw the same mesh with various transformations.
Here is an outline of how your code might look:
// 1. Initialization.

// Object handles:
GLuint vao;
GLuint verticesVbo;

// Generate and bind vertex array object.
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

// Generate a buffer object.
glGenBuffers(1, &verticesVbo);

// Enable vertex attribute number 0, which
// corresponds to vertex coordinates in older OpenGL versions.
const GLuint ATTRIBINDEX_VERTEX = 0;
glEnableVertexAttribArray(ATTRIBINDEX_VERTEX);

// Bind buffer object.
glBindBuffer(GL_ARRAY_BUFFER, verticesVbo);

// Mesh geometry. In your actual code you probably will generate
// or load these data instead of hard-coding.
// This is an example of a single triangle.
GLfloat vertices[] = {
    0.0f, 0.0f, -9.0f,
    0.0f, 0.1f, -9.0f,
    1.0f, 1.0f, -9.0f
};

// Determine vertex data format.
glVertexAttribPointer(ATTRIBINDEX_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, 0);

// Pass actual data to the GPU.
glBufferData(GL_ARRAY_BUFFER, sizeof(GLfloat)*3*3, vertices, GL_STATIC_DRAW);

// Initialization complete - unbinding objects.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);

// 2. Draw calls.
while(/* draw calls are needed */) {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindVertexArray(vao);
    // Set transformation matrix and/or other
    // transformation parameters here using glUniform* calls.
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glBindVertexArray(0); // Unbinding just as an example, in case some other code binds something else later.
}
And a vertex shader may look like this:
#version 330 core // layout(location = ...) requires GLSL 3.30+ (or ARB_explicit_attrib_location)
layout(location=0) in vec3 vertex_pos;
uniform mat4 viewProjectionMatrix; // Assuming you set this before glDrawArrays.
void main(void) {
    gl_Position = viewProjectionMatrix * vec4(vertex_pos, 1.0);
}
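For the glUniform* call referenced in the draw-loop comment above, a sketch (assuming prog is your linked program object and mvp is a column-major float[16]; both names are placeholders):
GLint loc = glGetUniformLocation(prog, "viewProjectionMatrix");
glUseProgram(prog);
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp); // GL_FALSE: the matrix is already column-major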
Also take a look at this page for a good modern accelerated graphics book.
@BDL already commented that you should abandon the immediate mode drawing calls (glBegin … glEnd) and switch to vertex array drawing (glDrawElements, glDrawArrays) that fetches its data from Vertex Buffer Objects (VBOs). @Sergey mentioned Vertex Array Objects in his answer, but those are actually state containers for VBOs.
A very important thing you have to understand – and the way you asked your question suggests you're not aware of this yet – is that OpenGL does not deal with "meshes", "scenes" or the like. OpenGL is just a drawing API. It draws points… lines… and triangles… one at a time… with no connection between them whatsoever. That's it. So when you show multiple views of the "same" thing, you must draw it several times. There's no way around this.
Most recent versions of OpenGL support multiple viewport rendering, but it still takes a geometry shader to multiply the geometry into several pieces to be drawn.
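For completeness, the application side of that looks roughly like this (requires GL 4.1+ or ARB_viewport_array; width and height are placeholders, and the geometry shader still has to write gl_ViewportIndex per primitive):
// Two side-by-side viewports; the geometry shader picks one per primitive.
glViewportIndexedf(0, 0.0f, 0.0f, width / 2.0f, (GLfloat)height);
glViewportIndexedf(1, width / 2.0f, 0.0f, width / 2.0f, (GLfloat)height);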

HLSL - Sampling a render target texture always returns black

Okay, first of all, I'm really new to DirectX 11 and this is actually my first project using it. I'm also relatively new to computer graphics in general, so I might have some concepts wrong, although for this particular case I don't think so. My code is based on the RasterTek tutorials.
In trying to implement a shader effect, I need to render the scene to a 2D texture and then perform a Gaussian blur on the resulting image.
That part seems to be working fine: when using the Visual Studio graphics debugger, the output is what I expect.
However, after having done all the post processing, I render a quad to the backbuffer using a simple shader that takes the final output of the blur as a resource. This always gives me a black screen. When I debug my pixel shader with the VS graphics debugger, the Sample(texture, uv) method always returns (0,0,0,1) when sampling that texture.
The pixel shader works fine if I use a different texture as the resource, like some normal map or whatever; just not when using any of the render targets from the previous passes.
The behaviour is particularly weird because the actual blur shader works fine when using any of the render targets as a resource.
I know I cannot use a render target as both input and output, but I think I have that covered since I call OMSetRenderTargets so that I can render to the backbuffer.
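For reference, explicitly unbinding a shader resource so that its texture can be rebound as a render target looks roughly like this (a sketch; slot 0 is an assumption):
// Replace the SRV in pixel-shader slot 0 with null before reusing its texture as a render target.
ID3D11ShaderResourceView* nullSRV[1] = { nullptr };
deviceContext->PSSetShaderResources(0, 1, nullSRV);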
Here's the step by step of my implementation:
Set Render Targets
Clear them
Clear Depth buffer
Render scene to texture
Turn off Z buffer
Render to quad
Perform horizontal blur
Perform vertical blur
Set back buffer as render target
Clear back buffer
Render final output to quad
Turn z buffer on
Present back buffer
Here is the shader for the quad:
Texture2D shaderTexture : register(t0);
SamplerState SampleType : register(s0);

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 tex : TEXCOORD0;
};

float4 main(PixelInputType input) : SV_TARGET
{
    return shaderTexture.Sample(SampleType, input.tex);
}
Here's the relevant C++ code.
This is how I set the render targets:
void DeferredBuffers::SetRenderTargets(ID3D11DeviceContext* deviceContext, bool activeRTs[BUFFER_COUNT]){
    vector<ID3D11RenderTargetView*> rts = vector<ID3D11RenderTargetView*>();
    for (int i = 0; i < BUFFER_COUNT; ++i){
        if (activeRTs[i]){
            rts.push_back(m_renderTargetViewArray[i]);
        }
    }
    deviceContext->OMSetRenderTargets(rts.size(), &rts[0], m_depthStencilView);
    // Set the viewport.
    deviceContext->RSSetViewports(1, &m_viewport);
}
I use a ping-pong approach with the render targets for the blur.
I render the scene to mainTarget and the depth information to depthMap. The first pass performs a horizontal blur onto a third target (horizontalBlurred), and then I use that one as input for the vertical blur, which renders back to mainTarget and to finalTarget. It's a loop because on the vertical pass I'm supposed to blend the PS output with what's on finalTarget. I left that code (and some other stuff) out as it's not relevant.
m_FullScreenWindow is the quad.
bool activeRenderTargets[4] = { true, true, false, false };

// Set the render buffers to be the render target.
m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);

// Clear the render buffers.
m_ShaderManager->getDeferredBuffers()->ClearRenderTargets(m_D3D->GetDeviceContext(), 0.25f, 0.0f, 0.0f, 1.0f);
m_ShaderManager->getDeferredBuffers()->ClearDepthStencil(m_D3D->GetDeviceContext());

// Render the scene to the render buffers.
RenderSceneToTexture();

// Get the matrices.
m_D3D->GetWorldMatrix(worldMatrix);
m_Camera->GetBaseViewMatrix(baseViewMatrix);
m_D3D->GetOrthoMatrix(projectionMatrix);

// Turn off the Z buffer to begin all 2D rendering.
m_D3D->TurnZBufferOff();

// Put the full screen ortho window vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_FullScreenWindow->Render(m_D3D->GetDeviceContext());

ID3D11ShaderResourceView* mainTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(0);
ID3D11ShaderResourceView* horizontalBlurred = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(2);
ID3D11ShaderResourceView* depthMap = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(1);
ID3D11ShaderResourceView* finalTarget = m_ShaderManager->getDeferredBuffers()->GetShaderResourceView(3);

activeRenderTargets[1] = false; // depth map is never a render target again

for (int i = 0; i < numBlurs; ++i){
    activeRenderTargets[0] = false; // main target is a resource in this pass
    activeRenderTargets[2] = true;  // horizontal blurred target
    activeRenderTargets[3] = false; // unbind final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_HorizontalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, mainTarget, depthMap);

    activeRenderTargets[0] = true;  // rendering to main target
    activeRenderTargets[2] = false; // horizontal blurred is a resource
    activeRenderTargets[3] = true;  // rendering to final target
    m_ShaderManager->getDeferredBuffers()->SetRenderTargets(m_D3D->GetDeviceContext(), activeRenderTargets);
    m_ShaderManager->RenderScreenSpaceSSS_VerticalBlur(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, horizontalBlurred, depthMap);
}
m_D3D->SetBackBufferRenderTarget();
m_D3D->BeginScene(0.0f, 0.0f, 0.5f, 1.0f);
// Reset the viewport back to the original.
m_D3D->ResetViewport();
m_ShaderManager->RenderTextureShader(m_D3D->GetDeviceContext(), m_FullScreenWindow->GetIndexCount(), worldMatrix, baseViewMatrix, projectionMatrix, depthMap);
m_D3D->TurnZBufferOn();
m_D3D->EndScene();
And, finally, here are 3 screenshots from my graphics log.
They show rendering the scene onto mainTarget, a vertical pass which takes the horizontalBlurred resource as input, and finally rendering onto the backbuffer, which is what's failing. You can see the resource bound to the shader and how the output is just a black screen. I purposely set the background to red to find out whether it was sampling with wrong coordinates, but no.
So, has anyone ever experienced something like this? What could be the cause of this bug?
Thanks in advance for any help!
EDIT: The Render_SOMETHING_SOMETHING_shader methods handle binding all the resources, setting the shaders, draw calls, etc. If necessary I can post them here, but I don't think they're that relevant.

3D model looks transparent-like

I'm trying to draw Wavefront OBJ files using OpenGL, but it seems there is a depth-buffer problem.
Source:
// Default constructor
Engine::Engine()
{
    initialize();
    loadModel();
    start();
}

// Initialize OpenGL
void Engine::initialize()
{
    // Enable depth test
    glEnable(GL_DEPTH_TEST);
    // Enable depth write
    glDepthMask(GL_TRUE);
}

void Engine::start()
{
    // Main loop
    while(isOpen())
    {
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw 3D model to screen
        draw();
    }
}
Things to check:
Make sure the depth test is enabled: glEnable(GL_DEPTH_TEST)
Make sure depth writes are enabled: glDepthMask(GL_TRUE)
Make sure your context actually has a depth buffer. Note that glGetIntegerv returns the value through a pointer, so the check looks like the sketch below.
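A minimal version of that check:
GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);
assert(depthBits != 0); // fires if the context was created without a depth buffer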
Did you activate the depth test?
glEnable(GL_DEPTH_TEST);
Request a GL context with a depth buffer and glEnable(GL_DEPTH_TEST).
Try this:
mainWindow.create
(
sf::VideoMode
(
settings.getWidth(),
settings.getHeight()
),
"",
sf::Style::Resize,
sf::ContextSettings( 16, 0, 0, 2, 0 )
);
Are you using vertex shaders? This would happen if, at the exit of the vertex shader, gl_Position.z is accidentally set to 0. Or to any value between -1 and 1, I believe.
It would also happen if all vertices had an equal z value before the input stage, albeit in that case there must be something wrong with your transformation matrices. Or you may simply be doing something very exotic with your transformation matrices and the model is fine. Have you set up both MODELVIEW and PROJECTION and multiplied them accordingly, either in the vertex shader or in the fixed-function pipeline (FFP)?
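If you are on the fixed-function pipeline, a minimal sanity-check setup looks like this (all values are placeholders):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, (double)width / height, 0.1, 100.0); // near plane must be > 0
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, 0.0, 5.0,  // eye position
          0.0, 0.0, 0.0,  // look-at point
          0.0, 1.0, 0.0); // up vector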

Untextured Quads appear dark

I just started working with OpenGL, but I ran into a problem after implementing a font system.
My plan is to simply visualize several pathfinding algorithms.
Currently OpenGL gets set up like this (OnSize gets called once manually on window creation):
void GLWindow::OnSize(GLsizei width, GLsizei height)
{
    // set size
    glViewport(0, 0, width, height);

    // orthographic projection
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, height, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);

    m_uiWidth = width;
    m_uiHeight = height;
}
void GLWindow::InitGL()
{
    // enable 2D texturing
    glEnable(GL_TEXTURE_2D);
    // choose a smooth shading model
    glShadeModel(GL_SMOOTH);
    // set the clear color to black
    glClearColor(0.0, 0.0, 0.0, 0.0);

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.0f);
}
In theory I don't need blending, because I will only use untextured quads to visualize obstacles and lines etc. to draw paths. So everything will be untextured, except the fonts.
The font class has a push and a pop function that look like this (if I remember right, my font system is based on a NeHe tutorial that I was following quite a while ago):
inline void GLFont::pushScreenMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    GLint viewport[4];
    glGetIntegerv(GL_VIEWPORT, viewport);
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(viewport[0], viewport[2], viewport[1], viewport[3], -1.0, 1.0);
    glPopAttrib();
}

inline void GLFont::popProjectionMatrix()
{
    glPushAttrib(GL_TRANSFORM_BIT);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glPopAttrib();
}
So, the problem:
If I don't draw text, I can see the quads I want to draw, but they are quite dark, so there must be something wrong with my general OpenGL matrix properties.
If I draw text (so the font-related push and pop functions get called), I can't see any quads.
The question:
How do I solve this problem? Some background information on why this happens would also be nice, because I am still a beginner/student who just started.
If your quads are untextured while texturing is still enabled, you will get effectively unpredictable results. What will probably happen is that the previously bound texture is still applied, and since you specify no texture coordinates, the colour at texel (0,0) modulates your quads, which could be what is making them dark or invisible.
Really, you need to disable texturing before trying to draw untextured quads, using glDisable(GL_TEXTURE_2D). Again, if you don't, it'll just use the previous texture and texture coordinates, which, without seeing your draw() loop, I'm assuming to be undefined.
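A sketch of that ordering (the texture ID and draw helpers are placeholders):
glDisable(GL_TEXTURE_2D);  // untextured quads: don't let a stale texture modulate them
drawObstacleQuads();
glEnable(GL_TEXTURE_2D);   // fonts need texturing again
glBindTexture(GL_TEXTURE_2D, fontTextureID);
drawText();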