I've been trying to do simple texture mapping with SOIL, and I've been getting bizarre output.
The output ONLY displays when a PNG texture is loaded (via SOIL_load_OGL_texture).
Other textures appear greyish or white.
The vertices are passed as:
struct Vertex {
    float position[3];
    float texturepos[2];
};

Vertex vertices[] = { // w and h are the width and height of the square.
    0, 0, 0, 0, 0,
    w, 0, 0, 1, 0,
    0, h, 0, 0, 1,
    w, 0, 0, 1, 0,
    w, h, 0, 1, 1,
    0, h, 0, 0, 1
};
vertex shader:
attribute vec2 texpos;
attribute vec3 position;
uniform mat4 transform;
varying vec2 pixel_texcoord;
void main(void) {
    gl_Position = transform * vec4(position, 1.0);
    pixel_texcoord = texpos;
}
fragment shader:
varying vec2 pixel_texcoord;
uniform sampler2D texture;
void main(void) {
    gl_FragColor = texture2D(texture, pixel_texcoord);
}
All of the uniforms and attributes are validated.
The texture I'm trying to render:
(It's 128x128, a power of two.)
Output [with normal shaders]:
However, I think the problem lies entirely in something really bizarre that happened when I tried to debug it.
I changed the fragment shader to:
varying vec2 pixel_texcoord;
uniform sampler2D texture;
void main(void) {
    gl_FragColor = vec4(pixel_texcoord.x, 0, pixel_texcoord.y, 1);
}
And got this result:
Something is very wrong with the texture coordinates, as according to the shader, Y is now X, and X no longer exists.
Can anyone explain this?
If my texture coordinates are correctly positioned, then I'll start looking at another image library.
[EDIT] I tried loading an image through raw GIMP-generated data, and it had the same problem.
It's as if the texture coordinates are one-dimensional.
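For reference, that raw upload looked roughly like this (a sketch, assuming GIMP's "C source" export, which provides a struct with width, height and pixel_data members; the names come from that export format, not from my actual code):

// Sketch only: gimp_image comes from GIMP's "C source" export header.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             gimp_image.width, gimp_image.height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, gimp_image.pixel_data);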
Found the problem! Thanks to starmole's advice, I took another look at the glVertexAttribPointer calls, which were formatted like this:
glVertexAttribPointer(attribute_vertex,3,GL_FLOAT,GL_FALSE,sizeof(Vertex),0);
glVertexAttribPointer(attribute_texture_coordinate,2,GL_FLOAT,GL_FALSE,sizeof(Vertex),(void*) (sizeof(GLfloat) * 2));
The 2 in (void*)(sizeof(GLfloat) * 2) should have been a 3, since there are 3 vertex position components before the texture coordinates.
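For reference, the corrected calls look like this (the offsetof form is an equivalent sketch that keeps the offsets tied to the struct layout; it needs <cstddef>):

// Positions: 3 floats at the start of each Vertex
glVertexAttribPointer(attribute_vertex, 3, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*) offsetof(Vertex, position));
// Texture coordinates: 2 floats following the 3 position floats (sizeof(GLfloat) * 3)
glVertexAttribPointer(attribute_texture_coordinate, 2, GL_FLOAT, GL_FALSE,
                      sizeof(Vertex), (void*) offsetof(Vertex, texturepos));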
Everything works perfectly now.
It's amazing how such a small typo can break it so badly.
I've encoded some data into a 44487x1 luminance texture:
Now I would like to "scrub" this data across my shader, so that a slice of the texture equal in width to the pixel width of my canvas is displayed. So if the canvas is 500px wide, then 500 pixels from the texture will be shown. The texture is then translated by some offset value so that different values within the texture can be displayed.
//vertex shader
export const vs = GLSL`
#version 300 es
in vec4 position;
void main() {
    gl_Position = position;
}
`;
//fragment shader
#version 300 es
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
uniform float u_time;
uniform sampler2D u_texture_7; // data texture
out vec4 fragColor;
void main(){
    // data texture dimensions
    vec2 dims = vec2(44487., 1.0);
    // amount by which to translate the data texture
    vec2 offset = vec2(u_time * .5, 0.);
    // canvas coords
    vec2 uv = gl_FragCoord.xy / u_resolution.xy;
    // texture aspect ratio, w/h
    float textureAspect = 44487. / 1.;
    vec3 col = vec3(0.);
    // the texture is 44487x wider than the uv range, I guess?
    vec2 textCoords = vec2((uv.x / textureAspect) + offset.x, uv.y);
    // get texture values
    vec3 text = texture(u_texture_7, textCoords).rgb;
    // output
    fragColor = vec4(text, 1.);
}
However, this doesn't seem to work. All I get is a black screen. Is using a wide texture like this a good way to go about getting the array values into the shader? The texture is very small in size, but I'm wondering if the dimensions might still be causing an issue.
As an alternative to providing one large texture, could I provide a smaller texture and update its values via JS instead?
After trying several different approaches, the workaround I ended up using was to draw the 44487x1 image to a separate 2D canvas and perform the transformations of the texture in the 2D canvas rather than in the shader. The canvas is then sent to the shader as a texture.
Might not be the most efficient solution, but it avoids having to mess around with the texture too much in the shader.
I am trying to learn how to use shaders and GLSL. One of my shaders works, but it distorts the texture of the sprite it's applied to. I'm doing all of this in SFML.
Distorted texture on left, actual texture on right:
When I started, the texture was being rendered upside down, but subtracting the y component of the coordinate from 1 fixed that issue. The line that is causing the distortion is
vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy);
where sourceSize is a uniform passing in a resolution as a vec2. I've been passing various values into it and getting different distorted versions of the texture. Is there some ratio or other value I could pass in to avoid this distortion?
Texture Size in Pixels: 512x512
Passed in values for the above image: 512x512
Shader
uniform sampler2D source;
uniform vec2 sourceSize;
uniform float time;
void main( void )
{
    vec2 texCoord = (gl_FragCoord.xy / sourceSize.xy); // Gets the pixel position in a range of 0.0 to 1.0
    texCoord = vec2(texCoord.x, 1.0 - texCoord.y);     // Inverts the y coordinate
    vec4 Color = texture2D(source, texCoord);          // Gets the current texture colour
    gl_FragColor = Color;                              // Output
}
Found a solution. Posting it here in case others need the help.
Changing
vec4 Color = texture2D(source, texCoord); // Gets the current texture colour
To
vec4 Color = texture2D(source, gl_TexCoord[0].xy); // Gets the current texture colour
Will fix the distortion effect.
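For reference, the whole fragment shader then reduces to something like this (a sketch; it relies on SFML's default vertex processing filling in gl_TexCoord[0], so the sourceSize uniform and the manual y-flip are no longer needed):

uniform sampler2D source;
void main( void )
{
    // gl_TexCoord[0].xy already holds the sprite's texture coordinates
    gl_FragColor = texture2D(source, gl_TexCoord[0].xy);
}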
I have a 2d texture that I loaded with
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, gs.width(), gs.height(), 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, gs.buffer());
where gs is an object with methods that return the proper types.
In the fragment shader I sample from the texture and attempt to use that as the alpha channel for the resultant color. If I use the sampled value for other channels in the output texture it produces what I would expect. Any value that I use for the alpha channel appears to be ignored, because it always draws Color.
I am clearing the screen using:
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
Can anyone suggest what I might be doing wrong? I am getting an OpenGL 4.0 context with 8 red, 8 green, 8 blue, and 8 alpha bits.
Vertex Shader:
#version 150
in vec2 position;
in vec3 color;
in vec2 texcoord;
out vec3 Color;
out vec2 Texcoord;
void main()
{
    Texcoord = texcoord;
    Color = color;
    gl_Position = vec4(position, 0.0, 1.0);
}
Fragment Shader:
#version 150
in vec3 Color;
in vec2 Texcoord;
out vec4 outColor;
uniform sampler2D tex;
void main()
{
    float t = texture(tex, Texcoord);
    outColor = vec4(Color, t);
}
Frankly, I am surprised this actually works. texture (...) returns a vec4 (unless you are using a shadow/integer sampler, which you are not). You really ought to be swizzling that texture down to just a single component if you intend to store it in a float.
I am guessing you want the alpha component of your texture, but who honestly knows -- try this instead:
float t = texture (tex, Texcoord).a; // Get the alpha channel of your texture
A half-way decent GLSL compiler would warn or error on what you are trying to do right now. I suspect yours does as well, but you are not checking the shader info log when you compile your shader.
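A minimal sketch of that check (assuming shader is the GLuint returned by glCreateShader, with <vector> and <iostream> included):

GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE) {
    GLint len = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    std::vector<char> log(len > 0 ? len : 1);
    glGetShaderInfoLog(shader, (GLsizei)log.size(), NULL, log.data());
    std::cerr << "Shader compile log:\n" << log.data() << std::endl;
}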
Update:
The original answer did not even begin to address the madness you are doing with your GL_DEPTH_COMPONENT internal format texture. I completely missed that because the code did not fit on screen.
Why are you using gs.rgba() to pass data to a texture whose internal and pixel transfer format is exactly 1 component? Also, if you intend to use a depth texture in your shader then the reason it is always returning a=1.0 is actually very simple:
Beginning with GLSL 1.30, when sampled using texture (...), depth textures are automatically set up to return the following vec4:
vec4 (r, r, r, 1.0).
The RGB components are replaced with the value of R (the floating-point depth), and A is replaced with a constant value of 1.0.
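So if the goal is to use the depth value as the output alpha, read it from the red channel; a minimal sketch of the relevant fragment shader lines (only the swizzle changes):

float t = texture(tex, Texcoord).r; // the depth value, replicated into r/g/b
outColor = vec4(Color, t);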
Your issue is that you're only passing in a vec3 when you need a vec4. RGBA - 4 components, not just three.
I have an OpenGL program in which I want to texture a sphere with a bitmap of the Earth. I prepared the mesh in Blender and exported it to an OBJ file. The program loads the appropriate mesh data (vertices, UVs and normals) and the bitmap properly; I have checked this by texturing a cube with a bone bitmap.
My program does texture the sphere, but incorrectly (or at least not in the way I expect). Each triangle of the sphere contains a deformed copy of the bitmap. I've checked the bitmap and the UVs, and they seem to be OK. I've tried many bitmap sizes (powers of 2, multiples of 2, etc.).
Here's the texture:
Screenshot of my program (it looks as if it ignores my UV coords):
I mapped the UVs in Blender this way:
Code setting up the texture after loading it (apart from the code adding the texture coordinates to a VBO, which I think is OK):
GLuint texID;
glGenTextures(1,&texID);
glBindTexture(GL_TEXTURE_2D,texID);
glTexImage2D(GL_TEXTURE_2D,0,GL_RGB,width,height,0,GL_BGR,GL_UNSIGNED_BYTE,(GLvoid*)&data[0]);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_WRAP_T,GL_CLAMP);
Is any extra code needed to map this texture properly?
[Edit]
Initializing the textures (the code presented earlier is in the LoadTextureBMP_custom() function):
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);
    GLuint TBO_ID;
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);
    return true;
}
My main loop:
bool Program::MainLoop()
{
    bool done = false;
    mat4 projectionMatrix;
    mat4 viewMatrix;
    mat4 modelMatrix;
    mat4 MVP;
    Camera camera;
    shader.SetShader(true);
    while(!done)
    {
        if( (glfwGetKey(GLFW_KEY_ESC)))
            done = true;
        if(!glfwGetWindowParam(GLFW_OPENED))
            done = true;
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // Matrix transformations start here
        camera.UpdateCamera();
        modelMatrix = mat4(1.0f);
        viewMatrix = camera.GetViewMatrix();
        projectionMatrix = camera.GetProjectionMatrix();
        MVP = projectionMatrix*viewMatrix*modelMatrix;
        // End of transformations
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, textureID);
        shader.SetShaderParameters(MVP);
        SetOpenGLScene(width, height);
        glEnableVertexAttribArray(0); // Make the vertex shader attribute available => vertexPosition_modelspace
        glBindBuffer(GL_ARRAY_BUFFER, VBO_ID);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(1);
        glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
        glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glDrawArrays(GL_TRIANGLES, 0, vert.size());
        glDisableVertexAttribArray(0);
        glDisableVertexAttribArray(1);
        glfwSwapBuffers();
    }
    shader.SetShader(false);
    return true;
}
VS:
#version 330
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
    vec4 v = vec4(vertexPosition, 1.0f);
    gl_Position = MVP*v;
    UV = vertexUV;
}
FS:
#version 330
in vec2 UV;
out vec4 color;
uniform sampler2D texSampler; // Texture handle
void main()
{
    color = texture(texSampler, UV);
}
I haven't done any professional GL programming, but I've been working with 3D software quite a lot.
- your UVs are most likely bad
- your texture is a bad fit for projecting onto a sphere
- since the UVs seem bad, you might want to check your normals as well
- consider an icosphere instead of a regular UV sphere to make more efficient use of polygons
You are currently using a flat texture with flat mapping, which may give you very ugly results, since you will have very low resolution along the "outer" perimeter and most likely a nasty seam artifact where the two projections meet if you, say, rotate the planet.
Note that you don't have to have any particular UV map, it just needs to be consistent with the geometry, which it doesn't look like it is right now. The spherical mapping will take care of the rest. You could probably get away with a cylindrical map as well, since most Earth textures are in a suitable projection.
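For what it's worth, a spherical (equirectangular) mapping can also be computed procedurally from the unit-sphere position instead of using stored UVs; a minimal fragment-shader sketch, assuming the model-space position arrives from the vertex shader as in vec3 spherePos (an assumed name, not from the code above):

// Sketch only: spherePos is an assumed input carrying the unit-sphere position.
vec3 p = normalize(spherePos);
vec2 uv = vec2(atan(p.z, p.x) / (2.0 * 3.14159265) + 0.5,  // longitude -> u
               asin(p.y) / 3.14159265 + 0.5);              // latitude  -> v
color = texture(texSampler, uv);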
Finally, I've got the answer. The error was here:
bool Program::InitTextures(string texturePath)
{
    textureID = LoadTextureBMP_custom(texturePath);
    // GLuint TBO_ID; _ERROR_
    glGenBuffers(1, &TBO_ID);
    glBindBuffer(GL_ARRAY_BUFFER, TBO_ID);
    glBufferData(GL_ARRAY_BUFFER, uv.size()*sizeof(vec2), &uv[0], GL_STATIC_DRAW);
    return true;
}
Here is the relevant part of the Program class declaration:
class Program
{
private:
    Shader shader;
    GLuint textureID;
    GLuint VAO_ID;
    GLuint VBO_ID;
    GLuint TBO_ID; // Member shadowed by the local variable declaration in InitTextures()
    ...
};
I erroneously declared a local TBO_ID that shadowed the TBO_ID at class scope. The UVs were generated with poor precision and the seams are horrible, but they weren't the problem.
I have to admit the information I supplied was too sparse for anyone to help with. I should have posted all of the Program class code. Thanks to everybody who tried.
I can't seem to find any information on the web about fixing shadow casting by objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test", rather than a "per-vertex" one, so I could just stop a fragment from appearing in the shadow map if the corresponding texel is transparent? In theory, it could also make shadow mapping more accurate.
EDIT
Well, maybe the idea above was a terrible one, but all I want is to tell the shaders that if a texel has alpha < 1, there's no need to shadow things behind it. I guess the depth texture requires only vertex information, which is why every shadow mapping tutorial has a minimal vertex shader and an empty fragment shader, and nothing happens when I try to do something in the fragment shader.
Anyway, what is the main idea behind fixing shadow casting by partly transparent objects?
EDIT2
I've modified my shaders, and now every fragment is discarded if at least one has transparency o_O. So those objects now don't cast any shadows at all (but opaque ones do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
    TC = in_TextureCoord;
    gl_Position = orthoView * in_Position;
}
//Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
    vec4 texel = texture2D(texture, TC);
    if (texel.a < 0.4)
        discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your fragment shader, simply add a test to see whether the texture is transparent, and if so discard that fragment.