In a vertex shader I give gl_PointSize a value bigger than 1, say 15.
In the fragment shader I would like to pick out individual pixels inside that 15x15 square:
vec2 sprite = gl_PointCoord;
if (sprite.s == 9.0 / 15.0) discard;
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
But that does not work when the size is not a power of 2.
(If the size is 16, then (sprite.s == a/16.) with a in 1..16 works perfectly.)
Is there a way to achieve this when the size is not a power of 2?
Edit: I know the solution with a texture of size PointSize * PointSize:
gl_FragColor = texture2D(tex, gl_PointCoord);
but that does not fit dynamic changes.
Edit 26 July:
First, I do not understand why it is easier to read back a float texture using WebGL2 rather than WebGL1. For my part I create ext = gl.getExtension("OES_texture_float"); and gl.readPixels uses the same syntax.
Then, it is certain that I did not understand everything, but I tried the suggested values s = 0.25 and s = 0.75 for a correctly centered 2x2 point, and that does not seem to work.
On the other hand, the values 0.5 and 1.0 give me a correct display (see fiddle 1).
(fiddle 1) https://jsfiddle.net/3u26rpf0/274/
In fact, to accurately display a vertex of any size (say SIZE) I use the following formula:
float size = 13.0;
float nby = floor(size / 2.0);
float nbx = floor((size - 1.0) / 2.0);
//
// <nby> pixels CENTER <nbx> pixels
//
// if size is odd  nbx == nby
// if size is even nby == nbx + 1
vec2 off = 2. * vec2(nbx, nby) / canvasSize;
vec2 p = -1. + (2. * (a_position.xy * size) + 1.) / canvasSize + off;
gl_Position = vec4(p, 0.0, 1.0);
gl_PointSize = size;
https://jsfiddle.net/3u26rpf0/275/
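A quick C++ sanity check of the odd/even split above (mirroring the two floor() lines):
#include <cmath>
#include <cstdio>
// For odd sizes nbx == nby; for even sizes nby == nbx + 1,
// matching the comment in the shader snippet above.
int main() {
    for (int size = 12; size <= 15; ++size) {
        float nby = std::floor(size / 2.0f);
        float nbx = std::floor((size - 1.0f) / 2.0f);
        std::printf("size %d: nbx = %g, nby = %g\n", size, nbx, nby);
    }
    return 0;
}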
Checking for exact values with floating point numbers is not generally a good idea. Check for a range instead:
sprite.s > ??? && sprite.s < ???
Or better yet, consider using a mask texture or something more flexible than a hard-coded if statement.
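For instance, a plain C++ mirror of the shader math (the column index and size are the hypothetical values from the question):
#include <cstdio>
// Range test: does gl_PointCoord.s fall inside column a of a size-wide point?
// Fragment centers land at (a + 0.5) / size, which an equality test against
// a / size will miss for most sizes.
bool inColumn(float s, int a, float size) {
    return s >= a / size && s < (a + 1) / size;
}
int main() {
    float s = (9.0f + 0.5f) / 15.0f;  // fragment center in column 9 of 15
    std::printf("equality: %d, range: %d\n",
                (int)(s == 9.0f / 15.0f), (int)inColumn(s, 9, 15.0f));
    return 0;
}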
Otherwise, in WebGL pixels are referred to by their centers. So if you draw a 2x2 point on a pixel boundary then these should be the .s values for gl_PointCoord:
+-----+-----+
| .25 | .75 |
| | |
+-----+-----+
| .25 | .75 |
| | |
+-----+-----+
If you draw it off a pixel boundary then it depends:
++=====++=====++======++
|| || || ||
|| +------+------+ ||
|| | | | ||
++==| | |===++
|| | | | ||
|| +------+------+ ||
|| | | | ||
++==| | |===++
|| | | | ||
|| +------+------+ ||
|| || || ||
++=====++=====++======++
It will still only draw 4 pixels (the 4 closest to where the point lies), but it will choose different gl_PointCoord values as though it could draw on fractional pixels. If we offset gl_Position so our point is over by .25 pixels, it still draws the exact same 4 pixels as when it was pixel aligned, since an offset of .25 is not enough to move it onto a different set of pixels. But we can guess it's going to offset gl_PointCoord by the equivalent of -.25 pixels; for our 2x2 point that's an offset of -.125 in gl_PointCoord units, so (.25 - .125) = .125 and (.75 - .125) = .625.
We can test what WebGL is using by writing gl_PointCoord into a floating point texture, using WebGL2 (since it's easier to read the float pixels back in WebGL2):
function main() {
const gl = document.createElement("canvas").getContext("webgl2");
if (!gl) {
return alert("need WebGL2");
}
const ext = gl.getExtension("EXT_color_buffer_float");
if (!ext) {
return alert("need EXT_color_buffer_float");
}
const vs = `
uniform vec4 position;
void main() {
gl_PointSize = 2.0;
gl_Position = position;
}
`;
const fs = `
precision mediump float;
void main() {
gl_FragColor = vec4(gl_PointCoord.xy, 0, 1);
}
`;
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
const width = 2;
const height = 2;
// creates a 2x2 float texture and attaches it to a framebuffer
const fbi = twgl.createFramebufferInfo(gl, [
{ internalFormat: gl.RGBA32F, minMag: gl.NEAREST, },
], width, height);
// binds the framebuffer and set the viewport
twgl.bindFramebufferInfo(gl, fbi);
gl.useProgram(programInfo.program);
test([0, 0, 0, 1]);
test([.25, .25, 0, 1]);
function test(position) {
twgl.setUniforms(programInfo, {position});
gl.drawArrays(gl.POINTS, 0, 1);
const pixels = new Float32Array(width * height * 4);
gl.readPixels(0, 0, 2, 2, gl.RGBA, gl.FLOAT, pixels);
console.log('gl_PointCoord.s at position:', position.join(', '));
for (let y = 0; y < height; ++y) {
const s = [];
for (let x = 0; x < width; ++x) {
s.push(pixels[(y * width + x) * 4]);  // row-major: y * width + x
}
console.log(`y${y}:`, s.join(', '));
}
}
}
main();
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
The formula for what gl_PointCoord will be is in the spec, section 3.3.
Following that, for a 2-pixel-wide point drawn .25 pixels off of a pixel boundary (drawing a 2x2 at .25,.25, slightly off center):
// first pixel
// this value is constant for all pixels. It is the unmodified
// **WINDOW** coordinate of the **vertex** (not the pixel)
xw = 1.25
// this is the integer pixel coordinate
xf = 0
// gl_PointSize
size = 2
s = 1 / 2 + (xf + 1 / 2 - xw) / size
s = .5 + (0 + .5 - 1.25) / 2
s = .5 + (-.75) / 2
s = .5 + (-.375)
s = .125
which is the value I get from running the sample above.
xw is the window x coordinate for the vertex. In other words xw is based on what we set gl_Position to so
xw = (gl_Position.x / gl_Position.w * .5 + .5) * canvas.width
Or more specifically,
xw = (gl_Position.x / gl_Position.w * .5 + .5) * viewportWidth + viewportX
Where viewportX and viewportWidth are set with gl.viewport(x, y, width, height) and default to the same size as the canvas.
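Putting the last few formulas together in a small C++ sketch (assuming gl_Position.w == 1 and a viewport that starts at 0 and matches the canvas):
#include <cstdio>
// gl_PointCoord.s per the spec: s = 1/2 + (xf + 1/2 - xw) / size,
// with xw derived from clip space as in the formulas above.
float pointCoordS(float clipX, float viewportWidth, float xf, float size) {
    float xw = (clipX * 0.5f + 0.5f) * viewportWidth;  // window-space vertex x
    return 0.5f + (xf + 0.5f - xw) / size;
}
int main() {
    // The worked example: a 2x2 point at clip-space x = .25 on a 2-pixel-wide
    // target gives xw = 1.25, and the first pixel (xf = 0) gets s = .125.
    std::printf("s = %f\n", pointCoordS(0.25f, 2.0f, 0.0f, 2.0f));
    return 0;
}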
I use a shader that has a rotation-over-time option, and it worked great for years.
But after updating Unity (2017.2 to 2018.2) I get this error: "Shader error in 'Custom/NewSurfaceShader': Too many texture interpolators would be used for ForwardBase pass (11 out of max 10)"
and the material using this shader turned white.
I don't know what to do; I looked online but everyone seems to have a different problem.
Here is my code:
Shader "Custom/NewSurfaceShader" {
Properties{
//Tint
_Color("Color", Color) = (1,1,1,1)
//Textures and Alphas
_TexOne("Texture One (RGB)", 2D) = "white" {}
_TexTwo("Texture Two (RGB)", 2D) = "white" {}
_AlphaTexOne("Alpha One (A)", 2D) = "white" {}
_AlphaTexTwo("Alpha Two(A)", 2D) = "white" {}
_AlphaTexThree("Alpha Two(A)", 2D) = "white" {}
_Brightness("Brightness", Range(0,5)) = 1
_AlphaWeakness("Alpha Weakness", Range(0,10)) = 1
_ScrollSpeed1X("Scroll Speed Texture One X", Range(-10,10)) = 0
_ScrollSpeed1Y("Scroll Speed Texture One Y", Range(-10,10)) = 0
_ScrollSpeed2X("Scroll Speed Texture Two X", Range(-10,10)) = 0
_ScrollSpeed2Y("Scroll Speed Texture Two Y", Range(-10,10)) = 0
_ScrollSpeedAlpha1X("Scroll Speed Alpha One X", Range(-10,10)) = 0
_ScrollSpeedAlpha1Y("Scroll Speed Alpha One Y", Range(-10,10)) = 0
_ScrollSpeedAlpha2X("Scroll Speed Alpha Two X", Range(-10,10)) = 0
_ScrollSpeedAlpha2Y("Scroll Speed Alpha Two Y", Range(-10,10)) = 0
_RotationSpeed1("Rotation Speed Texture 1", Float) = 0.0
_RotationCenter1("Rotation Center Texture 1", Range(0,1)) = 0.5
_RotationSpeed2("Rotation Speed Texture 2", Float) = 0.0
_RotationCenter2("Rotation Center Texture 2", Range(0,1)) = 0.5
_Speed("Wave Speed", Range(-80, 80)) = 5
_Freq("Frequency", Range(0, 5)) = 2
_Amp("Amplitude", Range(-1, 1)) = 1
}
SubShader{
//Default Queues - Background, Geometry, AlphaTest, Transparent, and Overlay
Tags{ "Queue" = "Transparent" "IgnoreProjector" = "True" "RenderType" = "Transparent" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha:fade vertex:vert
//sampler2D _Color;
sampler2D _TexOne;
sampler2D _TexTwo;
sampler2D _AlphaTexOne;
sampler2D _AlphaTexTwo;
sampler2D _AlphaTexThree;
fixed4 _Color;
float _ScrollSpeed1X;
float _ScrollSpeed1Y;
float _ScrollSpeed2X;
float _ScrollSpeed2Y;
float _ScrollSpeedAlpha1X;
float _ScrollSpeedAlpha1Y;
float _ScrollSpeedAlpha2X;
float _ScrollSpeedAlpha2Y;
float _RotationSpeed1;
float _RotationCenter1;
float _RotationSpeed2;
float _RotationCenter2;
float _Brightness;
float _AlphaWeakness;
float _RotationSpeed;
float _Speed;
float _Freq;
float _Amp;
float _OffsetVal;
struct Input {
float2 uv_TexOne;
float2 uv_TexTwo;
float2 uv_AlphaTexOne;
float2 uv_AlphaTexTwo;
float2 uv_AlphaTexThree;
};
void vert(inout appdata_full v) {
float time = _Time * _Speed;
// float waveValueA = sin(time + v.vertex.x * _Freq) * _Amp;
// v.vertex.xyz = float3(v.vertex.x, v.vertex.y + waveValueA, v.vertex.z);
// v.normal = normalize(float3(v.normal.x + waveValueA, v.normal.y, v.normal.z));
}
// This is the only code you need to touch
void surf(Input IN, inout SurfaceOutput o) {
//Rotation
float sinX, cosX, sinY;
float2x2 rotationMatrix;
sinX = sin(_RotationSpeed1 * _Time);
cosX = cos(_RotationSpeed1 * _Time);
sinY = sin(_RotationSpeed1 * _Time);
rotationMatrix = float2x2(cosX, -sinX, sinY, cosX);
//Center the rotation point and apply rotation
IN.uv_TexOne.xy -= _RotationCenter1;
IN.uv_TexOne.xy = mul(IN.uv_TexOne.xy, rotationMatrix);
IN.uv_TexOne.xy += _RotationCenter1;
sinX = sin(_RotationSpeed2 * _Time);
cosX = cos(_RotationSpeed2 * _Time);
sinY = sin(_RotationSpeed2 * _Time);
rotationMatrix = float2x2(cosX, -sinX, sinY, cosX);
//Center the rotation point and apply rotation
IN.uv_TexTwo.xy -= _RotationCenter2;
IN.uv_TexTwo.xy = mul(IN.uv_TexTwo.xy, rotationMatrix);
IN.uv_TexTwo.xy += _RotationCenter2;
//Scrolling
IN.uv_TexOne.x -= _ScrollSpeed1X * _Time;
IN.uv_TexOne.y -= _ScrollSpeed1Y * _Time;
IN.uv_TexTwo.x -= _ScrollSpeed2X * _Time;
IN.uv_TexTwo.y -= _ScrollSpeed2Y * _Time;
IN.uv_AlphaTexOne.x -= _ScrollSpeedAlpha1X * _Time;
IN.uv_AlphaTexOne.y -= _ScrollSpeedAlpha1Y * _Time;
IN.uv_AlphaTexTwo.x -= _ScrollSpeedAlpha2X * _Time;
IN.uv_AlphaTexTwo.y -= _ScrollSpeedAlpha2Y * _Time;
//Textures
fixed4 c1 = tex2D(_TexOne, IN.uv_TexOne) * (_Color * _Brightness); // This is your color texture
fixed4 c2 = tex2D(_TexTwo, IN.uv_TexTwo) * (_Color * _Brightness); // This is your color texture
//Alphas
fixed4 a1 = tex2D(_AlphaTexOne, IN.uv_AlphaTexOne); // This is your alpha texture
fixed4 a2 = tex2D(_AlphaTexTwo, IN.uv_AlphaTexTwo); // This is your alpha texture
fixed4 a3 = tex2D(_AlphaTexThree, IN.uv_AlphaTexThree); // This is your alpha texture
//Assignment
o.Albedo = (c1.rgb * c2.rgb * 2); // Setting your color from the one texture
o.Alpha = ((a1.a * a2.a * 2) * a3.a * 2) *_AlphaWeakness; // Setting your alpha from the other texture
}
ENDCG
}
}
Straightforward solution: target your shader for a newer platform (3.5 or higher) by adding #pragma target 3.5 after CGPROGRAM:
CGPROGRAM
#pragma surface surf Lambert alpha:fade vertex:vert
#pragma target 3.5
This is because in shader model 3.0 a maximum of 10 interpolators is available, i.e. your Input structure may have at most 10 float fields. Your structure has exactly 10 (each float2 is 2), but don't forget that the engine may add some internal interpolations behind the scenes that do not come from your input data. That is the case here, and as a result you end up with 11 interpolators.
If you target older platforms, you will need to think about how to optimize your shader, as there are too many fields in the Input structure. For example, do you really need 3 alpha channels? Do you use them all? Maybe remove uv_AlphaTexThree? See the sketch below.
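As a quick illustration of the budget, here is a plain C++ stand-in for the Input struct above (float2 is defined locally; in Cg it is built in). Five float2 UV sets already use all 10 floats available under shader model 3.0, so a single extra engine-internal interpolator overflows it:
struct float2 { float x, y; };
struct Input {               // mirrors the shader's Input struct
    float2 uv_TexOne;        // running float count: 2
    float2 uv_TexTwo;        // 4
    float2 uv_AlphaTexOne;   // 6
    float2 uv_AlphaTexTwo;   // 8
    float2 uv_AlphaTexThree; // 10 <- dropping this frees 2 floats of headroom
};
static_assert(sizeof(Input) == 10 * sizeof(float), "10 floats in use");
int main() { return 0; }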
I am creating a UV sphere (similar to an Earth globe divided into lines of latitude). I am doing this by:
Calculating all of the vertices around each parallel latitude circle (e.g. 72 points per circle)
Using GL_TRIANGLE_STRIP to fill in each "slice" between each of the latitude circles.
Unfortunately I keep seeing dots on my otherwise perfect sphere.
What would cause this and how do I get rid of it?
void CSphere2::AddVertices( void )
{
#define SPHERE2_RES 72
// Create sphere using horizontal slices/circles
int nPointsPerCircle = SPHERE2_RES;
int nStackedCircles = SPHERE2_RES;
GLfloat r = m_Size;
GLfloat yAngle = - (PI / 2.0f); // Start at -90deg and work up to +90deg (south to north pole)
GLfloat yAngleStep = PI / nStackedCircles;
// Sweep angle is zero initially for pointing towards me (-Z direction)
GLfloat horizSweepAngle = 0;
GLfloat horizSweepStep = ( 2 * PI ) / nPointsPerCircle;
// Each time we have a slice, the top and bottom radii vary..
GLfloat sweepRadiusTop;
GLfloat sweepRadiusBottom;
GLfloat xBottomPoint;
GLfloat zBottomPoint;
GLfloat xTopPoint;
GLfloat zTopPoint;
for( int c = 0; c < nStackedCircles; c ++ )
{
// Draw a circle - note that this always uses two circles - a top and bottom circle.
GLfloat yBottomCircle;
GLfloat yTopCircle;
yTopCircle = r * sin( yAngle + yAngleStep );
yBottomCircle = r * sin( yAngle );
std::vector<GLfloat> vBottom_x;
std::vector<GLfloat> vBottom_z;
std::vector<GLfloat> vTop_x;
std::vector<GLfloat> vTop_z;
sweepRadiusTop = r * cos( yAngle + yAngleStep );
sweepRadiusBottom = r * cos( yAngle );
// Add 1 face - a triangle strip per slice..
AddFace();
m_Faces[ c ].m_DrawType = GL_TRIANGLE_STRIP;
// Now work out the position of the points around each circle - bottom points will always be the
// same as the last top circle points.. but I'm not going to try optimising yet..
for( int s = 0; s < nPointsPerCircle; s ++ )
{
GLfloat xBottomPoint = sweepRadiusBottom * sin( horizSweepAngle );
GLfloat zBottomPoint = sweepRadiusBottom * cos( horizSweepAngle );
GLfloat xTopPoint = sweepRadiusTop * sin( horizSweepAngle + horizSweepStep );
GLfloat zTopPoint = sweepRadiusTop * cos( horizSweepAngle + horizSweepStep );
vBottom_x.push_back( xBottomPoint );
vBottom_z.push_back( zBottomPoint );
vTop_x.push_back( xTopPoint );
vTop_z.push_back( zTopPoint );
horizSweepAngle += horizSweepStep;
}
// OPTIMISE THIS!!
for( int s = 1; s <= nPointsPerCircle + 1; s ++ )
{
if( s == nPointsPerCircle + 1 )
{
// Join the last bottom point with the very first top point - go one more to fully close and leave no vertical gap
xTopPoint = vTop_x[ 1 ];
zTopPoint = vTop_z[ 1 ];
xBottomPoint = vBottom_x[ 0 ];
zBottomPoint = vBottom_z[ 0 ];
}
else
if( s == nPointsPerCircle )
{
// Join the last bottom point with the very first top point
xTopPoint = vTop_x[ 0 ];
zTopPoint = vTop_z[ 0 ];
xBottomPoint = vBottom_x[ s - 1 ];
zBottomPoint = vBottom_z[ s - 1 ];
}
else
{
xTopPoint = vTop_x[ s ];
zTopPoint = vTop_z[ s ];
xBottomPoint = vBottom_x[ s - 1 ];
zBottomPoint = vBottom_z[ s - 1 ];
}
// Calculate and add the Normal for each vertex.. Normal for a point on surface of a Sphere2 should be the unit vector going from centre
// of the Sphere2 to the surface (x,y,z).
//
// If centre of Sphere2 is 0,0,0 then N = | {x,y,z} - {0,0,0} | = | {x,y,z} |
glm::vec3 vNormalBottom = glm::vec3( xBottomPoint, yBottomCircle, zBottomPoint );
vNormalBottom = glm::normalize( vNormalBottom );
glm::vec3 vNormalTop = glm::vec3( xTopPoint, yTopCircle, zTopPoint );
vNormalTop = glm::normalize( vNormalTop );
// Add bottom of slice vertex..
m_Faces[ c ].AddVertexWithNormal( xBottomPoint, yBottomCircle, zBottomPoint, vNormalBottom.x, vNormalBottom.y, vNormalBottom.z );
// Add top of slice vertex, next step position..
m_Faces[ c ].AddVertexWithNormal( xTopPoint, yTopCircle, zTopPoint, vNormalTop.x, vNormalTop.y, vNormalTop.z );
}
int nVertexCount = m_Faces[ c ].m_Vertices.size();
m_Faces[ c ].m_SideCount = nVertexCount;
// Face colouring colours the vertices so they need to be created first..
m_Faces[ c ].SetRGB( m_RGBA.r, m_RGBA.g, m_RGBA.b );
yAngle += yAngleStep;
}
}
void CSphere2::Create( GLfloat fSize )
{
m_Size = fSize;
// Must add vertices first..
AddVertices();
glGenBuffers( 1, &m_VBO );
glBindBuffer( GL_ARRAY_BUFFER, m_VBO );
int nFaces = m_Faces.size();
int nVertexCount = 0;
for( int f = 0; f < nFaces; f ++ )
{
nVertexCount += m_Faces[ f ].m_Vertices.size();
m_Faces[ f ].m_SideCount = nVertexCount;
}
// Define the size of the buffer..
glBufferData( GL_ARRAY_BUFFER, sizeof( COLVERTEX ) * nVertexCount, NULL, GL_STATIC_DRAW );
int nOffset = 0;
for( int f = 0; f < nFaces; f ++ )
{
// Copy in each vertice's data..
for( int v = 0; v < (int) m_Faces[ f ].m_Vertices.size(); v ++ )
{
glBufferSubData( GL_ARRAY_BUFFER, nOffset, sizeof( COLVERTEX ), &m_Faces[ f ].m_Vertices[ v ].m_VertexData );
nOffset += sizeof( COLVERTEX );
}
}
glBindBuffer( GL_ARRAY_BUFFER, 0 );
}
I had the same problem with other examples that I'd copied from elsewhere so I sat down, did the math myself and I still have the same problem.
Vertex shader:
char *vs3DShader =
"#version 140\n"
"#extension GL_ARB_explicit_attrib_location : enable\n"
"layout (location = 0) in vec3 Position;"
"layout (location = 1) in vec4 color;"
"layout (location = 2) in vec3 aNormal;"
"out vec4 frag_color;"
"out vec3 Normal;"
"out vec3 FragPos;"
"uniform mat4 model;"
"uniform mat4 view;"
"uniform mat4 projection;"
"void main()"
"{"
" FragPos = vec3(model * vec4(Position, 1.0));"
" gl_Position = projection * view * vec4(FragPos, 1.0);"
// Rotate normals with respect to current Model matrix (object rotation).
" Normal = mat3( transpose( inverse( model ) ) ) * aNormal; "
" // Pass vertex color to fragment shader.. \n"
" frag_color = color;"
"}"
;
Fragment shader:
char *fs3DShader =
"#version 140\n"
"in vec4 frag_color;"
"in vec3 Normal;"
"in vec3 FragPos;"
"out vec4 FragColor;"
"uniform vec3 lightPos; "
"uniform vec3 lightColor; "
"void main()"
"{"
" // ambient\n"
" float ambientStrength = 0.1;"
" vec3 ambient = ambientStrength * lightColor;"
" // diffuse \n"
" vec3 norm = normalize(Normal);"
" vec3 lightDir = normalize(lightPos - FragPos);"
" float diff = max(dot(norm, lightDir), 0.0);"
" vec3 diffuse = diff * lightColor;"
" vec3 result = (ambient + diffuse) * frag_color;"
" FragColor = vec4(result, 1.0);"
"}"
;
Am I missing some sort of smoothing option? I have tried moving my viewpoint to both sides of the sphere and the dots appear all around - so it isn't where the triangle strip band "closes" that's the problem - it's all over the sphere.
See bright dots below:
Update: I just wanted to prove that the wrapping back to zero degrees isn't the problem. Below is an image where each circle is swept through only 90 degrees (a quarter). The dots still appear in the mid regions.
Floating point accuracy is not infinite; when working with transcendental numbers you will inevitably accumulate errors.
Here is an example program that does the same loop that your program does, except it just prints out the final angle:
#include <cmath>
#include <cstdio>
int main() {
const int N = 72;
const float step = std::atan(1.0f) * 8 / N;
float x = 0.0f;
for (int i = 0; i < N; i++) {
x += step;
}
std::printf("x - 2pi = %f\n", x - 8 * std::atan(1.0f));
return 0;
}
On my system, it prints out -0.000001. Close to zero, but not zero.
If you want two points in your mesh to line up, don't give them different values. Otherwise you get small seams like this.
A typical approach to this problem is to just generate a circle like this:
#include <cmath>
#include <cstdio>
#include <vector>
struct vec2 { float x, y; };
int main() {
const int N = 72;
const float step = std::atan(1.0f) * 8 / N;
std::vector<vec2> circle;
for (int i = 0; i < N; i++) {
float a = i * step;
circle.push_back({ std::cos(a), std::sin(a) });
}
return 0;
}
At every point in the circle, circle[i], the next point is now just circle[(i+1)%N]. This ensures that the point after circle[N-1] will always be exactly the same as circle[0].
I found a couple of problems with the vertex calculation in the question. Because I was recalculating both the bottom and top vertices on every sweep around a horizontal slice, rounding/precision error crept in. A point on the top of the current slice should be identical to the bottom point on the next slice up, but since I recalculated it after incrementing the angle (the accumulation Dietrich Epp describes), the values came out slightly different. My solution was to re-use the previous top circle's vertices as the bottom vertices of the next slice up.
I also hadn't calculated the x/z positions for the top and bottom circles using the same sweep angle - I'd incremented the angle between them, which I shouldn't have done.
So fundamentally the problem was caused by two overlapping vertices that should have had identical coordinates but were ever so slightly different.
Here's the working solution:
void CSphere2::AddVertices( void )
{
#define SPHERE2_RES 72
// Create sphere using horizontal slices/circles
int nPointsPerCircle = SPHERE2_RES;
int nStackedCircles = SPHERE2_RES;
GLfloat r = m_Size;
GLfloat yAngle = - (PI / 2.0f); // Start at -90deg and work up to +90deg (south to north pole)
GLfloat yAngleStep = PI / nStackedCircles;
// Sweep angle is zero initially for pointing towards me (-Z direction)
GLfloat horizSweepAngle = 0;
GLfloat horizSweepStep = ( 2 * PI ) / nPointsPerCircle;
// Each time we have a slice, the top and bottom radii vary..
GLfloat sweepRadiusTop;
GLfloat sweepRadiusBottom;
GLfloat xBottomPoint;
GLfloat zBottomPoint;
GLfloat xTopPoint;
GLfloat zTopPoint;
std::vector<GLfloat> vCircle_x;
std::vector<GLfloat> vCircle_z;
std::vector<GLfloat> vLastCircle_x;
std::vector<GLfloat> vLastCircle_z;
int nFace = 0;
for( int c = 0; c <= nStackedCircles + 1; c ++ )
{
// Draw a circle - note that this always uses two circles - a top and bottom circle.
GLfloat yBottomCircle;
GLfloat yTopCircle;
yTopCircle = r * sin( yAngle + yAngleStep );
yBottomCircle = r * sin( yAngle );
sweepRadiusTop = r * cos( yAngle );
GLfloat xCirclePoint;
GLfloat zCirclePoint;
horizSweepAngle = 0;
vCircle_x.clear();
vCircle_z.clear();
// Now work out the position of the points around each circle - bottom points will always be the
// same as the last top circle points..
for( int s = 0; s < nPointsPerCircle; s ++ )
{
zCirclePoint = sweepRadiusTop * sin( horizSweepAngle );
xCirclePoint = sweepRadiusTop * cos( horizSweepAngle );
vCircle_x.push_back( xCirclePoint );
vCircle_z.push_back( zCirclePoint );
horizSweepAngle += horizSweepStep;
}
if( c == 0 )
{
// First time around there is no last circle, so just use the same points..
vLastCircle_x = vCircle_x;
vLastCircle_z = vCircle_z;
// And don't add vertices until next time..
continue;
}
// Add 1 face - a triangle strip per slice..
AddFace();
m_Faces[ nFace ].m_DrawType = GL_TRIANGLE_STRIP;
for( int s = 1; s <= nPointsPerCircle + 1; s ++ )
{
if( s == nPointsPerCircle + 1 )
{
// Join the last bottom point with the very first top point
xTopPoint = vCircle_x[ 1 ];
zTopPoint = vCircle_z[ 1 ];
xBottomPoint = vLastCircle_x[ 0 ];
zBottomPoint = vLastCircle_z[ 0 ];
}
else
if( s == nPointsPerCircle )
{
// Join the last bottom point with the very first top point
xTopPoint = vCircle_x[ 0 ];
zTopPoint = vCircle_z[ 0 ];
xBottomPoint = vLastCircle_x[ s - 1 ];
zBottomPoint = vLastCircle_z[ s - 1 ];
}
else
{
xTopPoint = vCircle_x[ s ];
zTopPoint = vCircle_z[ s ];
xBottomPoint = vLastCircle_x[ s - 1 ];
zBottomPoint = vLastCircle_z[ s - 1 ];
}
// Calculate and add the Normal for each vertex.. Normal for a point on surface of a Sphere2 should be the unit vector going from centre
// of the Sphere2 to the surface (x,y,z).
//
// If centre of Sphere2 is 0,0,0 then N = | {x,y,z} - {0,0,0} | = | {x,y,z} |
glm::vec3 vNormalBottom = glm::vec3( xBottomPoint, yBottomCircle, zBottomPoint );
vNormalBottom = glm::normalize( vNormalBottom );
glm::vec3 vNormalTop = glm::vec3( xTopPoint, yTopCircle, zTopPoint );
vNormalTop = glm::normalize( vNormalTop );
// Add bottom of slice vertex..
m_Faces[ nFace ].AddVertexWithNormal( xBottomPoint, yBottomCircle, zBottomPoint, vNormalBottom.x, vNormalBottom.y, vNormalBottom.z );
// Add top of slice vertex, next step position..
m_Faces[ nFace ].AddVertexWithNormal( xTopPoint, yTopCircle, zTopPoint, vNormalTop.x, vNormalTop.y, vNormalTop.z );
}
// Now copy the current circle x/y positions as the last circle positions (bottom circle)..
vLastCircle_x = vCircle_x;
vLastCircle_z = vCircle_z;
int nVertexCount = m_Faces[ nFace ].m_Vertices.size();
m_Faces[ nFace ].m_SideCount = nVertexCount;
// Face colouring colours the vertices so they need to be created first..
m_Faces[ nFace ].SetRGB( m_RGBA.r, m_RGBA.g, m_RGBA.b );
yAngle += yAngleStep;
nFace ++;
}
}
I'm encountering a problem trying to replicate the OpenGL behaviour in an environment without OpenGL.
Basically I need to create an SVG file from a list of lines my program creates. These lines are created using an orthographic projection.
I'm sure that these lines are calculated correctly, because if I use them in an OpenGL context with an orthographic projection and save the result into an image, the image is correct.
The problem arises when I use exactly the same lines without OpenGL.
I've replicated the OpenGL projection and view matrices and I process every line point like this:
3D_output_point = projection_matrix * view_matrix * 3D_input_point
and then I calculate its screen (SVG file) position like this:
2D_point_x = (windowWidth / 2) * 3D_point_x + (windowWidth / 2)
2D_point_y = (windowHeight / 2) * 3D_point_y + (windowHeight / 2)
I calculate the orthographic projection matrix like this:
float range = 700.0f;
float l, t, r, b, n, f;
l = -range;
r = range;
b = -range;
t = range;
n = -6000;
f = 8000;
matProj.SetValore(0, 0, 2.0f / (r - l));
matProj.SetValore(0, 1, 0.0f);
matProj.SetValore(0, 2, 0.0f);
matProj.SetValore(0, 3, 0.0f);
matProj.SetValore(1, 0, 0.0f);
matProj.SetValore(1, 1, 2.0f / (t - b));
matProj.SetValore(1, 2, 0.0f);
matProj.SetValore(1, 3, 0.0f);
matProj.SetValore(2, 0, 0.0f);
matProj.SetValore(2, 1, 0.0f);
matProj.SetValore(2, 2, (-1.0f) / (f - n));
matProj.SetValore(2, 3, 0.0f);
matProj.SetValore(3, 0, -(r + l) / (r - l));
matProj.SetValore(3, 1, -(t + b) / (t - b));
matProj.SetValore(3, 2, -n / (f - n));
matProj.SetValore(3, 3, 1.0f);
and the view matrix this way:
CVettore position, lookAt, up;
position.AssegnaCoordinate(rtRay->m_pCam->Vp.x, rtRay->m_pCam->Vp.y, rtRay->m_pCam->Vp.z);
lookAt.AssegnaCoordinate(rtRay->m_pCam->Lp.x, rtRay->m_pCam->Lp.y, rtRay->m_pCam->Lp.z);
up.AssegnaCoordinate(rtRay->m_pCam->Up.x, rtRay->m_pCam->Up.y, rtRay->m_pCam->Up.z);
up[0] = -up[0];
up[1] = -up[1];
up[2] = -up[2];
CVettore zAxis, xAxis, yAxis;
float length, result1, result2, result3;
// zAxis = normal(lookAt - position)
zAxis[0] = lookAt[0] - position[0];
zAxis[1] = lookAt[1] - position[1];
zAxis[2] = lookAt[2] - position[2];
length = sqrt((zAxis[0] * zAxis[0]) + (zAxis[1] * zAxis[1]) + (zAxis[2] * zAxis[2]));
zAxis[0] = zAxis[0] / length;
zAxis[1] = zAxis[1] / length;
zAxis[2] = zAxis[2] / length;
// xAxis = normal(cross(up, zAxis))
xAxis[0] = (up[1] * zAxis[2]) - (up[2] * zAxis[1]);
xAxis[1] = (up[2] * zAxis[0]) - (up[0] * zAxis[2]);
xAxis[2] = (up[0] * zAxis[1]) - (up[1] * zAxis[0]);
length = sqrt((xAxis[0] * xAxis[0]) + (xAxis[1] * xAxis[1]) + (xAxis[2] * xAxis[2]));
xAxis[0] = xAxis[0] / length;
xAxis[1] = xAxis[1] / length;
xAxis[2] = xAxis[2] / length;
// yAxis = cross(zAxis, xAxis)
yAxis[0] = (zAxis[1] * xAxis[2]) - (zAxis[2] * xAxis[1]);
yAxis[1] = (zAxis[2] * xAxis[0]) - (zAxis[0] * xAxis[2]);
yAxis[2] = (zAxis[0] * xAxis[1]) - (zAxis[1] * xAxis[0]);
// -dot(xAxis, position)
result1 = ((xAxis[0] * position[0]) + (xAxis[1] * position[1]) + (xAxis[2] * position[2])) * -1.0f;
// -dot(yaxis, eye)
result2 = ((yAxis[0] * position[0]) + (yAxis[1] * position[1]) + (yAxis[2] * position[2])) * -1.0f;
// -dot(zaxis, eye)
result3 = ((zAxis[0] * position[0]) + (zAxis[1] * position[1]) + (zAxis[2] * position[2])) * -1.0f;
// Set the computed values in the view matrix.
matView.SetValore(0, 0, xAxis[0]);
matView.SetValore(0, 1, yAxis[0]);
matView.SetValore(0, 2, zAxis[0]);
matView.SetValore(0, 3, 0.0f);
matView.SetValore(1, 0, xAxis[1]);
matView.SetValore(1, 1, yAxis[1]);
matView.SetValore(1, 2, zAxis[1]);
matView.SetValore(1, 3, 0.0f);
matView.SetValore(2, 0, xAxis[2]);
matView.SetValore(2, 1, yAxis[2]);
matView.SetValore(2, 2, zAxis[2]);
matView.SetValore(2, 3, 0.0f);
matView.SetValore(3, 0, result1);
matView.SetValore(3, 1, result2);
matView.SetValore(3, 2, result3);
matView.SetValore(3, 3, 1.0f);
The results I get from OpenGL and from the SVG output are quite different, but in two days I couldn't come up with a solution.
This is the OpenGL output
And this is my SVG output
As you can see, its rotation isn't correct.
Any idea why? The line points are the same and the matrices too, hopefully.
Passing the matrices I was creating didn't work. I mean, the matrices were wrong, I think, because OpenGL didn't show anything.
So I tried doing the opposite: I created the matrices in OpenGL and used them with my code. The result is better, but not perfect yet.
Now I think I'm doing something wrong when mapping the 3D points into 2D screen points, because the points I get are inverted in Y and I still have some lines that don't match perfectly.
This is what I get using the OpenGL matrices and my previous approach to map 3D points to 2D screen space (this is the SVG, not OpenGL render):
Ok this is the content of the view matrix I get from OpenGL:
This is the projection matrix I get from OpenGL:
And this is the result I get with those matrices and by changing my 2D point Y coordinate calculation like bofjas said:
It looks like some rotations are missing. My camera has a rotation of 30° on both the X and Y axes, and it looks like they're not computed correctly.
Now I'm using the same matrices OpenGL does. So I think that I'm doing some wrong calculations when I map the 3D point into 2D screen coordinates.
Rather than debugging your own code, you can use transform feedback to compute the projections of your lines using the OpenGL pipeline. Rather than rasterizing them on the screen, you can capture them in a memory buffer and save them directly to the SVG afterwards. Setting this up is a bit involved and depends on the exact setup of your OpenGL code path, but it might be a simpler solution; a sketch of the capture side follows.
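A minimal sketch in C++ (assuming an existing GL 3.x context and loader, a program prog whose vertex shader transforms your lines, and a hypothetical lineCount):
#include <GL/glew.h>  // or whichever GL loader you already use
// Capture clip-space line endpoints with transform feedback instead of
// rasterizing them.
void captureLines(GLuint prog, int lineCount) {
    const char* varyings[] = { "gl_Position" };
    glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
    glLinkProgram(prog);  // varyings take effect at link time
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, buf);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER,
                 lineCount * 2 * 4 * sizeof(float), nullptr, GL_STATIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, buf);
    glUseProgram(prog);
    glEnable(GL_RASTERIZER_DISCARD);  // skip rasterization, keep the vertices
    glBeginTransformFeedback(GL_LINES);
    glDrawArrays(GL_LINES, 0, lineCount * 2);
    glEndTransformFeedback();
    glDisable(GL_RASTERIZER_DISCARD);
    // Map the captured vec4 positions and convert them to SVG coordinates.
    const float* pos = static_cast<const float*>(
        glMapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, GL_READ_ONLY));
    (void)pos;  // ... write pos[0 .. lineCount*2*4 - 1] to the SVG here ...
    glUnmapBuffer(GL_TRANSFORM_FEEDBACK_BUFFER);
}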
As for your own code, it looks like you either mixed up x and y coordinates somewhere, or row-major and column-major matrices. One frequent culprit, the Y flip that SVG's top-left origin requires, is sketched below.
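SVG's origin is at the top-left with y growing downwards, while NDC y grows upwards, so the mapping from the question needs a flip. A small C++ sketch (the window size names are placeholders):
// Map an NDC point to SVG pixel coordinates, flipping Y for SVG's
// top-left origin.
struct Point2D { float x, y; };
Point2D ndcToSvg(float ndcX, float ndcY, float windowWidth, float windowHeight) {
    Point2D p;
    p.x = (windowWidth / 2.0f) * ndcX + (windowWidth / 2.0f);
    p.y = (windowHeight / 2.0f) * (-ndcY) + (windowHeight / 2.0f);  // flipped
    return p;
}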
I've solved this problem in a really simple way. Since drawing with OpenGL works, I've just created the matrices in OpenGL and then retrieved them with glGet(). Using those matrices, everything is OK.
You're looking for a specialized version of orthographic (oblique) projections called isometric projections. The math is really simple if you want to know what's inside the matrix. Have a look on Wikipedia
OpenGL loads matrices in column-major order (the opposite of C++). For example, this matrix:
[1 ,2 ,3 ,4 ,
5 ,6 ,7 ,8 ,
9 ,10,11,12,
13,14,15,16]
loads this way in memory:
|_1 _|
|_5 _|
|_9 _|
|_13_|
|_2 _|
.
.
.
So I suppose you should transpose those matrices from OpenGL (if you're doing your math row-major), as in the sketch below.
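A tiny C++ sketch of that transpose for a flat float[16] (desktop OpenGL 1.3+ also accepts row-major data directly via glLoadTransposeMatrixf):
// Transpose a row-major 4x4 into the column-major layout OpenGL expects.
void transpose4x4(const float in[16], float out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c)
            out[c * 4 + r] = in[r * 4 + c];
}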
I am reading the depth buffer of a scene; however, as I rotate the camera I notice that towards the edges of the screen the depth is returned as closer to the camera. I think the angle of incidence has an effect on the depth buffer; however, as I am drawing a quad to the framebuffer, I do not want this to happen (this is not actually the case of course, but it sums up what I need).
I linearize the depth with the following:
float linearize(float depth) {
float zNear = 0.1;
float zFar = 40.0;
return (2.0 * zNear) / (zFar + zNear - depth * (zFar - zNear));
}
I figured the following would correct for this, but it's not quite right yet. 45.0 is half the camera's vertical field of view, and side is the distance from the center of the screen.
const float angleVert = 45.0 / 180.0 * 3.17;
float sideAdjust(vec2 coord, float depth) {
float angA = cos(angleVert);
float side = (coord.y - 0.5);
if (side < 0.0) side = -side;
side *= 2.0;
float depthAdj = angA * side;
return depth / depthAdj;
}
To show my problem, here is a drawing of the depth results for a flat surface in front of the camera:
c
/ | \
/ | \
/ | \
closer further closer
That's what I have. What I need:
c
| | |
| | |
| | |
even even even
One idea of how to do it would be to find the position P in eye space. Consider P a vector from the origin to the point. Project P onto the eye direction vector (which in eye space is always (0,0,-1)). The length of the projected vector is what you need; a minimal sketch follows.
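A small C++ sketch of that projection, assuming you have already reconstructed the eye-space position P (e.g. by unprojecting the fragment with the inverse projection matrix):
struct Vec3 { float x, y, z; };
// dot(P, (0,0,-1)) == -P.z, so the view-aligned distance is just the negated
// eye-space z, independent of how far off-axis the fragment lies.
float viewAlignedDepth(const Vec3& P) {
    return -P.z;
}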
I'm trying to create a sub-cursor for terrain mapping.
How it looks with my basic code (old image, but the rotation is the same):
image http://www.sdilej.eu/pics/274a90360f9c46e2eaf94e095e0b6223.png
This is when I tested changing the glRotate ax value to my own numbers:
image2 http://www.sdilej.eu/pics/146bda9dc51708da54b9249706f874fc.png
What I want:
image3 http://www.sdilej.eu/pics/69721aa237608b423b635945d430e561.png
My code:
void renderDisk(float x1, float y1, float z1, float x2, float y2, float z2, float radius, int subdivisions, GLUquadricObj* quadric)
{
float vx = x2 - x1;
float vy = y2 - y1;
float vz = z2 - z1;
//handle the degenerate case of z1 == z2 with an approximation
if( vz == 0.0f )
vz = .0001f;
float v = sqrt( vx*vx + vy*vy + vz*vz );
float ax = 57.2957795f * acos( vz/v );
if(vz < 0.0f)
ax = -ax;
float rx = -vy * vz;
float ry = vx * vz;
glPushMatrix();
glTranslatef(x1, y1, z1);
glRotatef(ax, rx, ry, 0.0);
gluQuadricOrientation(quadric, GLU_OUTSIDE);
gluDisk(quadric, radius - 0.25, radius + 5.0, subdivisions, 5);
glPopMatrix();
}
void renderDisk_convenient(float x, float y, float z, float radius, int subdivisions)
{
// Mouse opacity
glColor4f( 0.0f, 7.5f, 0.0f, 0.5f );
GLUquadricObj* quadric = gluNewQuadric();
gluQuadricDrawStyle(quadric, GLU_LINE);
gluQuadricNormals(quadric, GLU_SMOOTH);
gluQuadricTexture(quadric, GL_TRUE);
renderDisk(x, y, z, x, y, z, radius, subdivisions, quadric);
gluDeleteQuadric(quadric);
}
renderDisk_convenient(posX, posY, posZ, radius, 20);
This is a simple one: in your call to renderDisk() you supply bad arguments. It looks like you copied the function from some tutorial without understanding how it works. The first three parameters control the center position, and the other three control rotation via a second position which the disk always faces. If the two positions are equal (which is your case), this line is executed:
//handle the degenerate case of z1 == z2 with an approximation
if( vz == 0.0f )
vz = .0001f;
And setting z to nonzero makes the disc perpendicular to the XZ plane, which is also the horizontal plane of your terrain. So, to make it work, you need to modify your function like this:
void renderDisk_convenient(float x, float y, float z, float radius, int subdivisions)
{
// Mouse opacity
glColor4f( 0.0f, 7.5f, 0.0f, 0.5f );
GLUquadricObj* quadric = gluNewQuadric();
gluQuadricDrawStyle(quadric, GLU_LINE);
gluQuadricNormals(quadric, GLU_SMOOTH);
gluQuadricTexture(quadric, GL_TRUE);
float upX = 0, upY = 1, upZ = 0; // up vector (does not need to be normalized)
renderDisk(x, y, z, x + upX, y + upY, z + upZ, radius, subdivisions, quadric);
gluDeleteQuadric(quadric);
}
This should turn the disc into the XZ plane, which is fine if the terrain is flat. Elsewhere, though, you actually need to modify the normal direction (the (upX, upY, upZ) vector). If your terrain is generated from a heightmap, then the normal can be calculated using code such as this:
const char *p_s_heightmap16 = "ps_height_1k.png";
const float f_terrain_height = 50; // terrain is 50 units high
const float f_terrain_scale = 1000; // the longer edge of terrain is 1000 units long
TBmp *p_heightmap;
if(!(p_heightmap = p_LoadHeightmap_HiLo(p_s_heightmap16))) {
fprintf(stderr, "error: failed to load heightmap (%s)\n", p_s_heightmap16);
return false;
}
// load heightmap
TBmp *p_normalmap = TBmp::p_Alloc(p_heightmap->n_width, p_heightmap->n_height);
// alloc normalmap
const float f_width_scale = f_terrain_scale / max(p_heightmap->n_width, p_heightmap->n_height);
// calculate the scaling factor
for(int y = 0, hl = p_normalmap->n_height, hh = p_heightmap->n_height; y < hl; ++ y) {
for(int x = 0, wl = p_normalmap->n_width, wh = p_heightmap->n_width; x < wl; ++ x) {
Vector3f v_normal(0, 0, 0);
{
Vector3f v_pos[9];
for(int yy = -1; yy < 2; ++ yy) {
for(int xx = -1; xx < 2; ++ xx) {
int sx = xx + x;
int sy = yy + y;
float f_height;
if(sx >= 0 && sy >= 0 && sx < wh && sy < hh)
f_height = ((const uint16_t*)p_heightmap->p_buffer)[sx + sy * wh] / 65535.0f * f_terrain_height;
else
f_height = 0;
v_pos[(xx + 1) + 3 * (yy + 1)] = Vector3f(xx * f_width_scale, f_height, yy * f_width_scale);
}
}
// read nine-neighbourhood
/*
0 1 2
+----------+----------+
|\ | /|
| \ | / |
| \ | / |
| \ | / |
3|_________\|/_________|5
| 4/|\ |
| / | \ |
| / | \ |
| / | \ |
|/ | \|
+----------+----------+
6 7 8
*/
const int p_indices[] = {
0, 1, //4,
1, 2, //4,
2, 5, //4,
5, 8, //4,
8, 7, //4,
7, 6, //4,
6, 3, //4,
3, 0 //, 4
};
for(int i = 0; i < 8; ++ i) {
Vector3f a = v_pos[p_indices[i * 2]];
Vector3f b = v_pos[p_indices[i * 2 + 1]];
Vector3f c = v_pos[4];
// triangle
Vector3f v_tri_normal = (a - c).v_Cross(b - c);
v_tri_normal.Normalize();
// calculate normals
v_normal += v_tri_normal;
}
v_normal.Normalize();
}
// calculate normal from the heightmap (by averaging the normals of eight triangles that share the current point)
uint32_t n_normalmap =
0xff000000U |
(max(0, min(255, int(v_normal.z * 127 + 128))) << 16) |
(max(0, min(255, int(v_normal.y * 127 + 128))) << 8) |
max(0, min(255, int(-v_normal.x * 127 + 128)));
// calculate normalmap color
p_normalmap->p_buffer[x + wl * y] = n_normalmap;
// use the lightmap bitmap to store the results
}
}
(note this contains some structures and functions that are not included here so you won't be able to use this code directly, but the basic concept is there)
Once you have the normals, you need to sample the normal under location (x, z) and use that in your function. This will still make the disc intersect the terrain where a steep slope meets a flat surface (where the second derivative is high). To cope with that, you can either lift the cursor up a bit (along the normal) or disable depth testing.
If your terrain is polygonal, you could use vertex normals just as well: take the triangle below (x, y, z) and interpolate its vertex normals to get the normal for the disc, as in the sketch below.
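A small glm-based sketch of that interpolation (the barycentric weights u, v, w, with u + v + w == 1, are assumed to come from your point-in-triangle test):
#include <glm/glm.hpp>
// Interpolate the triangle's three vertex normals with barycentric weights
// and renormalize, giving a smooth normal for orienting the disc.
glm::vec3 diskNormal(const glm::vec3& n0, const glm::vec3& n1,
                     const glm::vec3& n2, float u, float v, float w) {
    return glm::normalize(u * n0 + v * n1 + w * n2);
}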
I hope this helps, feel free to comment if you need further advice ...