DirectX11 / OpenGL only renders half of the texture

This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX10.
This is how it looks in DirectX11 and OpenGL.
I don't know how this can happen. I am using the same high-level code for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also using another texture.
Changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
out vec4 color; // explicit fragment output (gl_FragColor/texture2D are not available in the core profile)
void main()
{
    color = texture(Diffuse, UV);
    //color = vec4(Color, 1); // debug: show the interpolated vertex color instead
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
mat4 MVP = Projection * View * World;
gl_Position = MVP * vec4(vertexPosition,1);
UV = vertexUV;
Color = vertexColor;
}

Quickly said, it looks like you are using back-face culling (which is good) and the other half of your model has the wrong winding order. You can confirm that this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real fix (if this is indeed the problem) is to have correct face winding, which is usually counter-clockwise. How to do that depends on where the model comes from: if you generate it yourself, correct the winding in your model generation routine; model files created by 3D modeling software usually already have correct face winding.
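As a quick sanity check, here is a minimal sketch of the relevant OpenGL state calls (generic GL, not taken from the asker's code); if the missing half shows up once culling is disabled, the winding is the culprit:
// Diagnostic: render once with culling disabled; if the missing half appears, the winding is wrong.
glDisable(GL_CULL_FACE);
// After fixing the winding in the mesh data, the usual culling setup is:
glEnable(GL_CULL_FACE);
glFrontFace(GL_CCW);   // counter-clockwise triangles are front faces (the GL default)
glCullFace(GL_BACK);   // discard back faces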

This is just a guess, but are you telling the system the correct amount of data to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all of the data: if you have colors, normals, texture coordinates and positions interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and similar calls take the number of vertices as their size argument. The argument is named count, but it's not obvious that it's a vertex count, not a polygon count.
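To illustrate the difference, here is a hypothetical interleaved layout (illustrative names, not the asker's actual data; buffer binding and attribute setup omitted):
/* hypothetical interleaved vertex: position (3 floats) + UV (2 floats) */
typedef struct { float pos[3]; float uv[2]; } Vertex;
Vertex vertices[100];        /* 100 vertices             */
unsigned int indices[150];   /* 50 triangles * 3 indices */
/* glBufferData wants sizes in BYTES, covering every interleaved attribute... */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
/* ...while glDrawElements wants the number of INDICES, not triangles and not bytes */
glDrawElements(GL_TRIANGLES, 150, GL_UNSIGNED_INT, 0);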

I found the error.
The reason is that I forgot to set the texture SamplerState to Wrap/Repeat.
It was set to Clamp, so the UV coordinates were clamped to 1.
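For reference, a minimal sketch of that setting on both APIs (assuming an existing texture/device/context; variable names like textureId and samplerState are illustrative):
// OpenGL: set the wrap mode on the texture object
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
// Direct3D 11: create a sampler state that uses WRAP addressing instead of CLAMP
D3D11_SAMPLER_DESC samplerDesc = {};
samplerDesc.Filter   = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MaxLOD   = D3D11_FLOAT32_MAX;
ID3D11SamplerState* samplerState = nullptr;
device->CreateSamplerState(&samplerDesc, &samplerState);
context->PSSetSamplers(0, 1, &samplerState); // bind to the slot the HLSL sampler uses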

A few things that you could try:
Is depth testing enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if the depth test is disabled. Enable it just in case; a minimal sketch follows.
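A minimal sketch of that, assuming the context was created with a depth buffer:
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);  // keep the fragment closest to the camera (the default comparison)
// ...and clear depth together with color every frame:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);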
Is lighting enabled? If so, turn it off. There seem to be some flashes of white in the rotating image, which could be caused by incorrect normals.
HTH

Related

Why is basic specular shading fluid, and not jagged?

Simple question: I just got my first specular shader working, and looking over the math, I can't help thinking that the angle between each edge should cause the "specularity" to spike and become jagged. But it's entirely fluid/spherical.
The idea is to calculate the angle off the vertex normal, but there are only so many of those, and still the "specular shade" turns out perfectly even.
I can't see how the GPU knows the angle of the fragment based off the vertex normal alone.
edit:
vert shader
#version 400 core
layout ( location = 0 ) in vec3 vertex_position;
layout ( location = 2 ) in vec2 tex_cord;
layout ( location = 3 ) in vec3 vertex_normal;
uniform mat4 transform; //identity matrix
uniform mat3 lmodelmat; //inverse rotation
out vec2 UV;
out vec3 normal;
void main()
{
UV=tex_cord;
normal=normalize(vertex_normal*lmodelmat); //normalize to keep brightness
gl_Position=transform*vec4(vertex_position,1.0);
}
and frag
#version 400 core
in vec2 UV;
in vec3 normal;
uniform sampler2D mysampler;
uniform vec3 lightpos; //lights direction
out vec4 frag_colour;
in vec3 vert2cam; //specular test
void main()
{
//skip invis frags
vec4 alphatest=texture(mysampler,UV);
if(alphatest.a<0.00001)discard;
//diffuse'ing fragment
float diffuse=max(0.1,dot(normal,lightpos));
//specular'izing fragment
vec3 lpnorm=normalize(lightpos); //vector from fragment to light
vec3 reflection=normalize(reflect(-lpnorm,normal)); //reflection vector
float specularity=max(0,dot(lpnorm,reflection));
specularity=pow(specularity,50);
frag_colour=alphatest*diffuse+specularity;
}
Answer: Interpolation
For the renderer, this comes out as a smoothly averaged curve rather than a jagged edge (which is what flat shading would give you).
Without code etc. it is hard to answer your question precisely, but assuming a simple vertex shader -> fragment shader pipeline: the vertex shader will be run for each vertex. It will typically set parameters marked 'varying' (e.g. texture coordinates).
Every 3 vertices will be grouped to form a polygon, and the fragment shader is run to determine the color of each point within that polygon. The 'varying' parameters set by the vertex shader will be interpolated based on the distance of the fragment from the polygon's three edges (see: Barycentric interpolation).
Hence, for example:
gl_FragColor = texture2D(myUniformSampler, vec2(myTextureCoord.s, myTextureCoord.t));
will sample the texture correctly for each pixel. Assuming you're using per-fragment lighting, the normals are being interpolated for each fragment from the values you set in your vertex shader. If you set the same normal for every vertex of a face, you'll get a different (flat-shaded) effect.
Edit (Based on the code you added):
out vec2 UV;
out vec3 normal;
out vec3 color;
are set per vertex in your vertex shader. Every three vertices define a polygon. The fragment shader is then run for each point (e.g. pixel) within that polygon to determine its color, etc.
The values of these parameters:
in vec3 color; /// <<-- You don't seem to be actually using this
in vec2 UV;
in vec3 normal;
in the fragment shader are interpolated based on where the point being 'drawn' lies within the polygon relative to each vertex (see: Barycentric interpolation). Hence the normal varies smoothly between the vertices output by your vertex shader.
If, for a given polygon defined by three vertices, you set the normals to all face in the same direction, you will get a different (flat-shaded) effect.
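As an aside (illustrative GLSL, not part of the asker's code), you can turn this interpolation off per variable with the flat qualifier, which makes the difference easy to see side by side:
// vertex shader (sketch)
#version 400 core
layout (location = 0) in vec3 vertex_position;
layout (location = 3) in vec3 vertex_normal;
uniform mat4 transform;
out vec3 normal_smooth;      // default: interpolated per fragment -> smooth shading
flat out vec3 normal_flat;   // 'flat': no interpolation, one value per triangle -> faceted look
void main()
{
    normal_smooth = vertex_normal;
    normal_flat   = vertex_normal;
    gl_Position   = transform * vec4(vertex_position, 1.0);
}
// fragment shader (sketch)
#version 400 core
in vec3 normal_smooth;
flat in vec3 normal_flat;
uniform vec3 lightpos;
out vec4 frag_colour;
void main()
{
    // swap normal_smooth for normal_flat to see the jagged, flat-shaded version
    float diffuse = max(0.1, dot(normalize(normal_smooth), normalize(lightpos)));
    frag_colour = vec4(vec3(diffuse), 1.0);
}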

LibGDX Overlapping 2D Shadows

I'm working on shadows for a 2D overhead game. Right now, the shadows are just sprites with the color (0,0,0,0.1) drawn on a layer above the tiles.
The problem: When many entities or trees get clumped together, the shadows overlap, forming unnatural-looking dark areas.
I've tried drawing the shadows to a framebuffer and using a simple shader to prevent overlapping, but that led to other problems, including layering issues.
Is it possible to enable a certain blend function for the shadows that prevents "stacking", or is there a better way to do this with a shader?
If you don't want to deal with sorting issues, I think you could do this with a shader. But every object will have to be either affected by shadow or not. So tall trees could be marked as not shadow receiving, while the ground, grass, and characters would be shadow receiving.
First make a frame buffer with clear color white. Draw all your shadows on it as pure black.
Then make a shadow mapping shader to draw everything in your world. This relies on you not needing all four channels of the sprite's color, because we need one of those channels to mark each sprite as shadow receiving or not. For example, if you aren't using RGB to tint your sprites, we could use the R channel. Or if you aren't fading them in and out, we could use A. I'll assume the latter here:
Vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform mat4 u_projTrans;
void main()
{
v_texCoords = a_texCoord0;
v_color = a_color;
v_color.a = v_color.a * (255.0/254.0); //this is a correction due to color float precision (see SpriteBatch's default shader)
vec4 screenPosition = u_projTrans * a_position; //mat4 * vec4 yields a vec4
v_texCoordsShadowmap = (screenPosition.xy * 0.5) + 0.5; //assumes w == 1 (orthographic SpriteBatch projection)
gl_Position = screenPosition;
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoords;
varying vec2 v_texCoordsShadowmap;
varying vec4 v_color;
uniform sampler2D u_texture;
uniform sampler2D u_textureShadowmap;
void main()
{
vec4 textureColor = texture2D(u_texture, v_texCoords);
float shadowColor = texture2D(u_textureShadowmap, v_texCoordsShadowmap).r;
shadowColor = mix(shadowColor, 1.0, v_color.a);
textureColor.rgb *= shadowColor * v_color.rgb;
gl_FragColor = textureColor;
}
These are completely untested and probably have bugs. Make sure you assign the frame buffer's color texture to "u_textureShadowmap". And for all your sprites, set their color's alpha based on how much shadow you want them to have cast on them, which will generally always be 0 or 0.1 (based on the brightness you were using before).
1. Draw your shadows to an FBO with blending disabled.
2. Draw the background, e.g. grass.
3. Draw the shadow texture from the FBO.
4. Draw all other sprites.

Is it faster to use texelFetch when rendering fonts?

I am writing some font drawing shaders in OpenGL 3.3. I will render my font into a texture atlas and then generate some display lists for some text I want to draw. I would like the rendering of text to consume the least amount of resources (CPU, GPU memory, GPU time). How can I accomplish this?
Looking at Freetype-gl, I noticed that the author generates 6 indices and 4 vertices per character.
Since I am using OpenGL 3.3, I have some additional freedom. My plan was to generate 1 vertex per character plus one integer "code" per character. The character code can be used in texelFetch operations to retrieve texture coördinates and character size information. A geometry shader turns the size information and vertex into a triangle strip.
Is texelFetch going to be slower than sending more vertices/texture coördinates? Is this worth doing, or is there a reason why it's not done in the font libraries I looked at?
Final code:
Vertex shader:
#version 330
uniform sampler2D font_atlas;
uniform sampler1D code_to_texture;
uniform mat4 projection;
uniform vec2 vertex_offset; // in view space.
uniform vec4 color;
uniform float gamma;
in vec2 vertex; // vertex in view space of each character adjusted for kerning, etc.
in int code;
out vec4 v_uv;
void main()
{
v_uv = texelFetch(
code_to_texture,
code,
0);
gl_Position = projection * vec4(vertex_offset + vertex, 0.0, 1.0);
}
Geometry shader:
#version 330
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
uniform sampler2D font_atlas;
uniform mat4 projection;
in vec4 v_uv[];
out vec2 g_uv;
void main()
{
vec4 pos = gl_in[0].gl_Position;
vec4 uv = v_uv[0];
vec2 size = vec2(textureSize(font_atlas, 0)) * (uv.zw - uv.xy);
vec2 pos_opposite = pos.xy + (mat2(projection) * size);
gl_Position = vec4(pos.xy, 0, 1);
g_uv = uv.xy;
EmitVertex();
gl_Position = vec4(pos.x, pos_opposite.y, 0, 1);
g_uv = uv.xw;
EmitVertex();
gl_Position = vec4(pos_opposite.x, pos.y, 0, 1);
g_uv = uv.zy;
EmitVertex();
gl_Position = vec4(pos_opposite.xy, 0, 1);
g_uv = uv.zw;
EmitVertex();
EndPrimitive();
}
Fragment shader:
#version 330
uniform sampler2D font_atlas;
uniform vec4 color;
uniform float gamma;
in vec2 g_uv;
layout (location = 0) out vec4 fragment_color;
void main()
{
float a = texture(font_atlas, g_uv).r;
fragment_color.rgb = color.rgb;
fragment_color.a = color.a * pow(a, 1.0 / gamma);
}
I wouldn't expect there to be a significant performance difference between your proposed method and storing the quad vertex positions and texture coordinates in a vertex buffer. On the one hand, your method requires a smaller vertex buffer and less work for the CPU. On the other hand, the texelFetch calls will be at more-or-less random locations and won't make the best use of the cache. This last point may not be very significant, as I guess that texture won't be very large. Also, the execution model of geometry shaders means they can quickly become the bottleneck of the pipeline.
To answer "is this worth doing?" - I suspect not for performance reasons. Unfortunately you can't tell until you implement it and measure the performance. I think it's quite a cool idea though, so I don't think you'd be wasting your time trying it out.
Maybe you can use an atomic counter to handle the current position in the text.
There is an interesting paper on memory bandwidth and GPU performance.
You can cache the result in an FBO.
For really fast rendering, as you said, you may build a geometry shader that takes points as input and outputs quads, and sample a texture to get the additional per-glyph info.
This effectively appears to be the best solution.
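For completeness, here is a sketch of the per-glyph lookup texture such a scheme (and the code_to_texture sampler1D in the question's final shaders) could sample from; one RGBA texel per character code holds the atlas rectangle, and NUM_GLYPHS/glyph_rects are illustrative names:
#define NUM_GLYPHS 96  /* e.g. printable ASCII */
/* one RGBA32F texel per character code: (u0, v0, u1, v1) in atlas texture coordinates */
GLfloat glyph_rects[4 * NUM_GLYPHS];   /* filled in from the font-atlas packing step */
GLuint code_to_texture;
glGenTextures(1, &code_to_texture);
glBindTexture(GL_TEXTURE_1D, code_to_texture);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA32F, NUM_GLYPHS, 0, GL_RGBA, GL_FLOAT, glyph_rects);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  /* texelFetch does no filtering, */
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  /* but the texture must still be complete */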

Point Sprites for particle system

Are point sprites the best choice to build a particle system?
Are point sprites present in the newer versions of OpenGL and drivers of the latest graphics cards? Or should I do it using vbo and glsl?
Point sprites are indeed well suited for particle systems. But they don't have anything to do with VBOs and GLSL, meaning they are a completely orthogonal feature. No matter if you use point sprites or not, you always have to use VBOs for uploading the geometry, be they just points, pre-made sprites or whatever, and you always have to put this geometry through a set of shaders (in modern OpenGL of course).
That being said, point sprites are very well supported in modern OpenGL, just not as automatically as with the old fixed-function approach. What is not supported are the point attenuation features that let you scale a point's size based on its distance to the camera; you have to do this manually inside the vertex shader. In the same way, you have to do the texturing of the point manually in an appropriate fragment shader, using the special input variable gl_PointCoord (which says where within the [0,1]-square of the whole point the current fragment is). For example, a basic point sprite pipeline could look this way:
...
glPointSize(whatever); //specify size of points in pixels
glDrawArrays(GL_POINTS, 0, count); //draw the points
vertex shader:
uniform mat4 mvp;
layout(location = 0) in vec4 position;
void main()
{
gl_Position = mvp * position;
}
fragment shader:
uniform sampler2D tex;
layout(location = 0) out vec4 color;
void main()
{
color = texture(tex, gl_PointCoord);
}
And that's all. Of course those shaders just do the most basic drawing of textured sprites, but are a starting point for further features. For example to compute the sprite's size based on its distance to the camera (maybe in order to give it a fixed world-space size), you have to glEnable(GL_PROGRAM_POINT_SIZE) and write to the special output variable gl_PointSize in the vertex shader:
uniform mat4 modelview;
uniform mat4 projection;
uniform vec2 screenSize;
uniform float spriteSize;
layout(location = 0) in vec4 position;
void main()
{
vec4 eyePos = modelview * position;
vec4 projVoxel = projection * vec4(spriteSize,spriteSize,eyePos.z,eyePos.w);
vec2 projSize = screenSize * projVoxel.xy / projVoxel.w;
gl_PointSize = 0.25 * (projSize.x+projSize.y);
gl_Position = projection * eyePos;
}
This would make all point sprites have the same world-space size (and thus a different screen-space size in pixels).
But point sprites, while still being perfectly supported in modern OpenGL, have their disadvantages. One of the biggest is their clipping behaviour. Points are clipped at their center coordinate (because clipping is done before rasterization and thus before the point gets "enlarged"). So if the center of the point is outside of the screen, the rest of it that might still reach into the viewing area is not shown; at worst, once the point is half-way off the screen, it will suddenly disappear. This is however only noticeable (or annoying) if the point sprites are too large. If they are very small particles that don't cover much more than a few pixels each anyway, then this won't be much of a problem, and I would still regard particle systems as the canonical use case for point sprites; just don't use them for large billboards.
But if this is a problem, then modern OpenGL offers many other ways to implement point sprites, apart from the naive way of pre-building all the sprites as individual quads on the CPU. You can still render them just as a buffer full of points (and thus in the way they are likely to come out of your GPU-based particle engine). To actually generate the quad geometry then, you can use the geometry shader, which lets you generate a quad from a single point. First you do only the modelview transformation inside the vertex shader:
uniform mat4 modelview;
layout(location = 0) in vec4 position;
void main()
{
gl_Position = modelview * position;
}
Then the geometry shader does the rest of the work. It combines the point position with the 4 corners of a generic [0,1]-quad and completes the transformation into clip-space:
const vec2 corners[4] = vec2[](
    vec2(0.0, 1.0), vec2(0.0, 0.0), vec2(1.0, 1.0), vec2(1.0, 0.0));
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 projection;
uniform float spriteSize;
out vec2 texCoord;
void main()
{
for(int i=0; i<4; ++i)
{
vec4 eyePos = gl_in[0].gl_Position; //start with point position
eyePos.xy += spriteSize * (corners[i] - vec2(0.5)); //add corner position
gl_Position = projection * eyePos; //complete transformation
texCoord = corners[i]; //use corner as texCoord
EmitVertex();
}
}
In the fragment shader you would then of course use the custom texCoord varying instead of gl_PointCoord for texturing, since we're no longer drawing actual points.
Or another possibility (and maybe faster, since I remember geometry shaders having a reputation for being slow) would be to use instanced rendering. This way you have an additional VBO containing the vertices of just a single generic 2D quad (i.e. the [0,1]-square) and your good old VBO containing just the point positions. What you then do is draw this single quad multiple times (instanced), while sourcing the individual instances' positions from the point VBO:
glVertexAttribPointer(0, ...points...);
glVertexAttribPointer(1, ...quad...);
glVertexAttribDivisor(0, 1); //advance only once per instance
...
glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, count); //draw #count quads
And in the vertex shader you then assemble the per-point position with the actual corner/quad-position (which is also the texture coordinate of that vertex):
uniform mat4 modelview;
uniform mat4 projection;
uniform float spriteSize;
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 corner;
out vec2 texCoord;
void main()
{
vec4 eyePos = modelview * position; //transform to eye-space
eyePos.xy += spriteSize * (corner - vec2(0.5)); //add corner position
gl_Position = projection * eyePos; //complete transformation
texCoord = corner;
}
This achieves the same as the geometry shader based approach: properly clipped point sprites with a consistent world-space size. If you actually want to mimic the screen-space pixel size of actual point sprites, you need to put some more computational effort into it. But this is left as an exercise, and would be quite the opposite of the world-to-screen transformation from the point sprite shader.

OpenGL 3.2 : cast right shadows by transparent textures

I can't seem to find any information on the Web about fixing shadow casting by objects whose textures have alpha != 1.
Is there any way to implement something like a "per-fragment depth test", not a "per-vertex" one, so I could just discard a fragment from the shadow map if the corresponding texel has transparency? Also, in theory, it could make shadow mapping more accurate.
EDIT
Well, maybe that was a terrible idea I gave above, but all I want is to tell the shaders that if a texel has alpha < 1, there's no need to shadow things behind that texel. I guess the depth texture requires only vertex information; that's why every tutorial about shadow mapping has a minimized vertex shader and an empty fragment shader, and nothing happens when I try to do something in the fragment shader.
Anyway, what is the main idea behind fixing shadow casting by partly-transparent objects?
EDIT2
I've modified my shaders and now they discard every fragment if even one has transparency o_O. So those objects now don't cast any shadows at all (but opaque ones do)... Please have a look at the shaders:
// Vertex Shader
uniform mat4 orthoView;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 TC;
void main(void) {
TC = in_TextureCoord;
gl_Position = orthoView * in_Position;
}
//Fragment Shader
uniform sampler2D texture;
in vec2 TC;
void main(void) {
vec4 texel = texture2D(texture, TC);
if (texel.a < 0.4)
discard;
}
And it's strange because I use the same trick with the same textures in my other shaders and it works... any ideas?
If you use discard in the fragment shader, then no depth information will be recorded for that fragment. So in your shadow-map fragment shader, simply add a test to see whether the texture is transparent and, if so, discard that fragment.