DirectX FVF-like GLSL Shaders

Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples that show these concepts.

Okay. If you're having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture onto the pixels and light them with one directional light (according to the material properties and the light direction).
Vertex shader:
#version 330
// Vertex input layout
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Outputs (names and types must match the fragment shader inputs)
out vec3 position; // world-space position, used for lighting
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
void main()
{
// transform position: model -> world -> view -> clip space
vec4 pos = vec4(inPosition, 1.0);
pos = mtxWorld * pos;
position = pos.xyz; // keep the world-space position for lighting
pos = mtxView * pos;
pos = mtxProj * pos;
gl_Position = pos;
// rotate the normal into world space (assumes no non-uniform scaling in mtxWorld)
normal = mat3(mtxWorld) * inNormal;
// just pass the other attributes through
tangent = inTangent;
bitangent = inBitangent;
texcoord = inTexcoord;
vertexColor = inVertexCol;
}
And the fragment shader:
#version 330
// Input
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka; // ambient coefficient
float Kd; // diffuse coefficient
float Ks; // specular coefficient
float A; // shininess
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
vec3 V = normalize( eye - pos );
vec3 R = reflect( lit, nor);
float Ia = material.Ka;
float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
float Is = material.Ks * pow( clamp(dot(R,V), 0.0f, 1.0f), material.A );
return Ia + Id + Is;
}
void main()
{
vec3 nnormal = normalize(normal);
vec3 ntangent = normalize(tangent);
vec3 nbitangent = normalize(bitangent);
vec4 outColor = texture(sampler0, texcoord); // texture mapping
outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
outColor.w = 1.0f;
fragColor = outColor;
}
If you don't want texturing, just skip the texture fetch and assign vertexColor to outColor instead.
If you don't need lighting, just drop the call to Lit().
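For example, the untextured, still-lit variant of main() could be as simple as this (just a sketch; it reuses the inputs, uniforms and the Lit() helper declared above):
void main()
{
    vec3 nnormal = normalize(normal);
    vec4 outColor = vertexColor; // vertex color instead of a texture fetch
    outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
    outColor.w = 1.0;
    fragColor = outColor;
}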
Edit:
For 2D objects you can still use the same program, but much of its functionality will be redundant. You can strip out:
camera
light
material
all vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), and all code related to the unneeded attributes
inPosition can be vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels (see the sketch just below). See my answer here about how to transform those pixel positions to screen-space positions. You can do it either in C/C++ code or in GLSL/HLSL.
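A minimal sketch of that matrix-free 2D case (viewportSize is a hypothetical uniform holding the window size in pixels; this is just one way to do the mapping):
#version 330
in vec2 inPosition; // position in pixels, origin at the top-left corner
in vec2 inTexcoord;
out vec2 texcoord;
uniform vec2 viewportSize;
void main()
{
    vec2 ndc = inPosition / viewportSize * 2.0 - 1.0; // [0..size] -> [-1..+1]
    ndc.y = -ndc.y; // flip y so that +y points down, as in pixel coordinates
    gl_Position = vec4(ndc, 0.0, 1.0);
    texcoord = inTexcoord;
}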
Hope it helps somehow.

Intro
You haven't specified which OpenGL/GLSL version you are targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is fully customizable vertex input. I'm not quite sure it is a good idea to reintroduce constraints like a fixed vertex format. What for?.. (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need to have a vertex shader for each vertex format you have, or somehow generate vertex shaders on the fly. The same goes for the other shader stages.
For example, for an x, y, color, tu, tv input you will have a vertex shader such as:
in vec2 inPosition;
in vec4 inCol;
in vec2 inTexcoord;
void main()
{
...
}
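One possible completion, assuming the positions already come in as clip-space coordinates (the pre-transformed case; for the other formats you would multiply by your matrices first, as described below):
in vec2 inPosition;
in vec4 inCol;
in vec2 inTexcoord;
out vec4 color;
out vec2 texcoord;
void main()
{
    gl_Position = vec4(inPosition, 0.0, 1.0); // already transformed, just forward it
    color = inCol;
    texcoord = inTexcoord;
}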
As you don't have the fixed-function transform, lighting and material functionality in OpenGL 3, you must implement it yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material shader you must pass the material properties as input
Typically, in a shader, you do this with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
mat4 mtxView;
mat4 mtxProj;
vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
vec3 lightDirection;
};
struct Material
{
float Ka;
float Kd;
float Ks;
float A;
};
layout(std140)
uniform MaterialBuffer
{
Material material;
};
Probably, you can somehow combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
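For instance, a rough sketch of that kind of branching in a fragment shader (useTexture and useLighting are hypothetical uniform flags; the other names reuse the declarations from the example fragment shader earlier on this page):
uniform bool useTexture;
uniform bool useLighting;
void main()
{
    vec4 outColor = useTexture ? texture(sampler0, texcoord) : vertexColor;
    if (useLighting)
        outColor.rgb *= Lit( material, position, normalize(normal), lightDirection, cameraPosition );
    outColor.a = 1.0;
    fragColor = outColor;
}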
Another way
You can stick to the modern approach and just allow the user to declare whatever vertex format they want (the format they use in their shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract the vertex layout in an API-independent way.
The main ideas are:
the user passes your function an array of structures that describes the format in an API-independent way (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format you just use this info
P.S. If you need ideas on how to properly implement lights and materials in GLSL (I mean the algorithms here), you'd be better off picking up a book or some online tutorials than asking here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!

Related

How to implement a Color Matrix Filter in a GLSL shader

I would like to implement a Color Matrix Filter in a GLSL shader but couldn't find any documentation regarding this matter. I'm totally new to the world of shaders (never coded one myself), so please forgive me if my explanation/vocabulary doesn't make much sense.
Information I could gather so far:
A color matrix is composed of 5 columns (RGBA + offset) and 4 rows
The values in the first four columns are multiplied with the source red, green, blue, and alpha values respectively. The fifth column value is added (offset)
I believe the largest matrices in GLSL are 4×4 mat4 matrices (excluding the 'offset' column)
The only mat4 I've seen implemented in a shader looks like this:
colorMatrix = (GPUMatrix4x4){{0.3588, 0.7044, 0.1368, 0.0},
{0.2990, 0.5870, 0.1140, 0.0},
{0.2392, 0.4696, 0.0912 ,0.0},
{0,0,0,1.0}
};
Question:
How can I implement one? As stated above, I've never coded a GLSL shader before and unfortunately I'm unable to provide an MCVE. I would love to see an example so I can learn from it.
Thank you
EDIT:
I'm working with Processing and this is the only example I've found of vertex and fragment shaders for color rendering:
colorvert.glsl:
uniform mat4 transform;
attribute vec4 position;
attribute vec4 color;
varying vec4 vertColor;
void main() {
gl_Position = transform * position;
vertColor = color;
}
colorfrag.glsl:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
varying vec4 vertColor;
void main() {
gl_FragColor = vertColor;
}
For starters I would try:
Vertex:
#version 410 core
layout(location = 0) in vec3 in_vertex;
layout(location = 3) in vec4 in_color;
out vec4 color;
void main()
{
const mat4x4 m=mat4x4 // RGBA matrix
(
0.3588, 0.7044, 0.1368, 0.0,
0.2990, 0.5870, 0.1140, 0.0,
0.2392, 0.4696, 0.0912 ,0.0,
0.0 , 0.0 , 0.0 ,1.0
);
const vec4 o=vec4(0.0,0.0,0.0,0.0); // offset
color = (m * in_color) + o; // transformation
gl_Position = vec4(in_vertex,1.0);
}
Fragment:
#version 410 core
in vec4 color;
out vec4 out_color;
void main()
{
out_color=color;
}
Just change the #version, layout and input attributes/uniforms to meet your needs (currently it uses the default nVidia attribute locations for the fixed pipeline).
Now, to convert an image for example, just render a textured quad covering the <-1,+1> vertex coordinate range in x,y.
If your matrices or colors change per fragment (for example as a result of some procedurally generated stuff), then just move the transformation to the fragment shader instead.
You can also change the const to a uniform (and move it above main) so you can pass custom parameters at run time ...
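For example, combining the last two points, a fragment shader with the matrix and offset as uniforms might look like this (colorMatrix and colorOffset are hypothetical names, set from the application):
#version 410 core
in vec4 color;
out vec4 out_color;
uniform mat4 colorMatrix; // the 4x4 RGBA mixing matrix
uniform vec4 colorOffset; // the fifth "offset" column
void main()
{
    out_color = (colorMatrix * color) + colorOffset;
}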
In case you need a GLSL start example see:
complete GL+GLSL+VAO/VBO C++ example

OpenGL Simple Shading, Artifacts

I've been trying to implement a simple light / shading system, a simple Phong lighting system without specular lights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem of the texture mipmaps, but disabling them didn't work. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue or has an idea on how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
// Pass uv coordinates
uv = vuv;
// Adjust normals
normal = nm * vnormal;
// Calculation of vertex in camera space
pos = (mv * vec4(vpos, 1.0)).xyz;
// Vector vertex -> light in camera space
light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;
// Pass position after projection transformation
gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
vec3 n = normalize(normal);
// Ambient
color = 0.05 * texture(tex, uv).rgb;
// Diffuse lights
for (int i = 0; i < 3; i++) {
vec3 l = normalize(light[i]);
float cosTheta = clamp(dot(n, l), 0.0, 1.0);
float dist = length(light[i]);
color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(dist, 2.0);
}
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, the dist = length(light[i]); pow(dist, 2.0) pattern is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i], light[i]) instead.
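For example, the loop from the question could then read (just a sketch based on the shader above):
for (int i = 0; i < 3; i++) {
    vec3 l = normalize(light[i]);
    float cosTheta = clamp(dot(n, l), 0.0, 1.0);
    float distSq = dot(light[i], light[i]); // squared distance, no square root needed
    color += 0.6 * texture(tex, uv).rgb * cosTheta / distSq;
}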
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
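If you go the dithering route, one common minimal trick (my own sketch, not part of the original answer) is to add noise smaller than one 8-bit quantization step to the final color, at the end of main() in the fragment shader above:
float rand(vec2 co) // a widely used pseudo-random hash
{
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
// then, as the last line of main():
color += (rand(gl_FragCoord.xy) - 0.5) / 255.0; // less than one LSB of an 8-bit channel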

GLSL: Rotating Normal

There is a scene with some objects and a terrain. When I try to rotate an object, its normals stay the same, meaning that the dark side of the object stays the dark side of the object.
My specular lighting is working.
Vertex Shader:
uniform vec3 lightPos;
uniform sampler2D Texture;
varying vec2 TexCoord;
varying vec3 position;
varying vec3 vertex;
varying mat3 nMat;
varying mat3 vMatrix;
varying vec3 normal;
varying vec3 oneNormal;
varying vec3 lightPos2;
void main()
{
gl_Position=gl_ModelViewProjectionMatrix*gl_Vertex;
position=vec3(gl_ModelViewMatrix*gl_Vertex);
vMatrix = mat3(gl_ModelViewMatrix);
lightPos2 = vec3(gl_ModelViewMatrix*vec4(lightPos,1.0));
vertex = vec3(gl_Vertex);
nMat = gl_NormalMatrix;
normal=gl_NormalMatrix*gl_Normal;
oneNormal = gl_Normal;
TexCoord=gl_MultiTexCoord0.xy;
}
Fragment Shader:
varying vec3 position;
varying vec3 normal;
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform vec3 lightPos;
varying vec3 vertex;
uniform vec3 lambient;
uniform vec3 ldiffuse;
uniform vec3 lspecular;
uniform float shininess;
varying mat3 nMat;
varying mat3 vMatrix;
varying vec3 oneNormal;
varying vec3 lightPos2;
void main()
{
float dist=length(vertex-lightPos);
float att=1.0/(1.0+0.1*dist+0.01*dist*dist);
vec4 TexColor = texture2D(Texture, TexCoord);
vec3 ambient=TexColor.rgb*lambient; //the ambient light
//=== Diffuse ===//
vec3 surf2light=normalize(position-lightPos2);
float dcont=max(0.0,
dot( normalize(nMat*(-normal)), nMat*surf2light) );
vec3 diffuse=dcont*(TexColor.rgb*ldiffuse);
//=== Specular ===//
vec3 surf2view = normalize(lightPos-position);
surf2light=nMat*normalize(-vertex);
vec3 reflection=reflect(-surf2view,normalize(normal));
float scont=pow(max(0.0,dot(surf2light,reflection)),shininess);
vec3 specular=scont*lspecular;
gl_FragColor=vec4((ambient+diffuse+specular)*att,1.0);
}
You are transforming the normals. But at least for part of the lighting calculations, you're using a light source position that is also transformed into the same coordinate space. If you transform both the normals and the light source, the direction of the normals relative to the light source will be the same again.
Extracting the key lines from the vertex shader:
position=vec3(gl_ModelViewMatrix*gl_Vertex);
lightPos2 = vec3(gl_ModelViewMatrix*vec4(lightPos,1.0));
normal=gl_NormalMatrix*gl_Normal;
gl_NormalMatrix corresponds to gl_ModelViewMatrix, with the necessary adjustments to transform direction vectors instead of points. Therefore, the modelview transformation has been applied to all of position, lightPos2 and normal.
Then, all three of these values (with interpolation applied) are passed into the fragment shader, where you have this calculation for the diffuse lighting term:
vec3 surf2light=normalize(position-lightPos2);
float dcont=max(0.0,
dot( normalize(nMat*(-normal)), nMat*surf2light) );
vec3 diffuse=dcont*(TexColor.rgb*ldiffuse);
Now, there's a couple of issues here:
You're applying nMat, which is the normal matrix, to both vectors again, even though they were already transformed. It does no harm, since the dot product does not change if you apply the same rotation to both vectors, but it does not make sense.
All the values used (position, lightPos2, normal) were transformed by the modelview matrix. So their relative positions/directions are exactly the same as they were originally. The result of the calculation will be the same as it would be if it were applied to the corresponding untransformed vectors.
You need to decide which coordinate system you want to use for the lighting calculations. There are at least a couple of options. The easiest one for you is probably to use eye coordinate space. To do this, you can apply the view transformation to the light position before passing it into the shader, and then use this light position directly, without any additional transformations, in the shader code.
Since you say that the specular lighting works as desired, and you're using the uniform lightPos there, it might be as simple as using that variable instead of lightPos2 in the diffuse calculation.
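For illustration, a minimal sketch of the diffuse term done consistently in eye space; lightPosEye is a hypothetical uniform holding the light position already multiplied by the view matrix (not the full model-view matrix) on the CPU side:
uniform vec3 lightPosEye; // light position pre-transformed into eye space on the CPU
// ... then inside main(), using the eye-space position and normal from the vertex shader:
vec3 n = normalize(normal);
vec3 surf2light = normalize(lightPosEye - position); // both vectors are in eye space
float dcont = max(0.0, dot(n, surf2light)); // no extra nMat multiplications needed
vec3 diffuse = dcont * (TexColor.rgb * ldiffuse);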

OpenGL point light moving when camera rotates

I have a point light in my scene. I thought it worked correctly until I tested it with the camera looking at the lit object from different angles and found that the light area moves on the mesh (in my case a simple plane). I'm using a typical ADS Phong lighting approach. I transform the light position into camera space on the client side and then transform the interpolated vertex with the model-view matrix in the vertex shader.
My vertex shader looks like this:
#version 420
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uvs;
layout(location = 2) in vec3 normal;
uniform mat4 MVP_MATRIX;
uniform mat4 MODEL_VIEW_MATRIX;
uniform mat4 VIEW_MATRIX;
uniform mat3 NORMAL_MATRIX;
uniform vec4 DIFFUSE_COLOR;
//======= OUTS ============//
out smooth vec2 uvsOut;
out flat vec4 diffuseOut;
out vec3 Position;
out smooth vec3 Normal;
out gl_PerVertex
{
vec4 gl_Position;
};
void main()
{
uvsOut = uvs;
diffuseOut = DIFFUSE_COLOR;
Normal = normal;
Position = vec3(MODEL_VIEW_MATRIX * position);
gl_Position = MVP_MATRIX * position;
}
The fragment shader:
//==================== Uniforms ===============================
struct LightInfo{
vec4 Lp;///light position
vec3 Li;///light intensity
vec3 Lc;///light color
int Lt;///light type
};
const int MAX_LIGHTS=5;
uniform LightInfo lights[1];
// material props:
uniform vec3 KD;
uniform vec3 KA;
uniform vec3 KS;
uniform float SHININESS;
uniform int num_lights;
////ADS lighting method :
vec3 pointlightType( int lightIndex,vec3 position , vec3 normal) {
vec3 n = normalize(normal);
vec4 lMVPos = lights[0].Lp ; //
vec3 s = normalize(vec3(lMVPos.xyz) - position); //surf to light
vec3 v = normalize(vec3(-position)); //
vec3 r = normalize(- reflect(s , n));
vec3 h = normalize(v+s);
float sDotN = max( 0.0 , dot(s, n) );
vec3 diff = KD * lights[0].Lc * sDotN ;
diff = clamp(diff ,0.0 ,1.0);
vec3 spec = vec3(0,0,0);
if (sDotN > 0.0) {
spec = KS * pow( max( 0.0 ,dot(n,h) ) , SHININESS);
spec = clamp(spec ,0.0 ,1.0);
}
return lights[0].Li * ( spec+diff);
}
I have studied a lot of tutorials but none of them gives a thorough explanation of the whole process when it comes to transform spaces. I suspect it has something to do with the camera space I transform the light and vertex positions into. In my case the view matrix is created with glm::lookAt(), which always negates the "eye" vector, so the view matrix in my shaders has a negated translation part. Is it supposed to be like that? Can someone give a detailed explanation of how it is done the right way in the programmable pipeline? My shaders are implemented based on the book "OpenGL 4.0 Shading Language Cookbook". The author seems to use camera space as well, but it doesn't work right, unless that is the way it should work ...
I just moved the calculations into world space. Now the point light stays on the spot. But how do I achieve the same using camera space?
I nailed down the bug and it was a pretty stupid one, but it may be helpful to others who are too "math friendly". My light position in the shaders is defined as a vec3; on the client side it is represented as a vec4. I was effectively setting the .w component of that vec4 to zero each time before transforming it with the view matrix. Because of that, I believe, the light position vector wasn't getting transformed correctly, and all the light position problems in the shader stem from this. The solution is to keep the w component of the light position vector always equal to 1.
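To illustrate the difference (GLSL notation, but the same math applies to the glm vectors on the client side; lightPosition is a hypothetical variable here):
vec4 asPoint = VIEW_MATRIX * vec4(lightPosition, 1.0); // w = 1: a point, the translation is applied
vec4 asDirection = VIEW_MATRIX * vec4(lightPosition, 0.0); // w = 0: a direction, the translation is dropped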

Tangent Space Normal Mapping - shader sanity check

I'm getting some pretty freaky results from my tangent space normal mapping shader :). In the scene I show here, the teapot and checkered walls are being shaded with my ordinary Phong-Blinn shader (obviously the teapot's backface culling gives it a slightly ephemeral look and feel :-) ). I've tried to add normal mapping to the sphere, with psychedelic results:
The light is coming from the right (just about visible as a black blob). The normal map I'm using on the sphere looks like this:
I'm using AssImp to process the input models, so it calculates the tangents and binormals for each vertex automatically for me.
The pixel and vertex shaders are below. I'm not too sure what's going wrong, but it wouldn't surprise me if the tangent basis matrix were somehow wrong. I assume I have to compute things in eye space and then transform the eye and light vectors into tangent space, and that this is the correct way to go about it. Note that the light position comes into the shader already in view space.
// Vertex Shader
#version 420
// Uniform Buffer Structures
// Camera.
layout (std140) uniform Camera
{
mat4 Camera_Projection;
mat4 Camera_View;
};
// Matrices per model.
layout (std140) uniform Model
{
mat4 Model_ViewModelSpace;
mat4 Model_ViewModelSpaceInverseTranspose;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position; // Already in view space.
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Streams (per vertex)
layout(location = 0) in vec3 attrib_Position;
layout(location = 1) in vec3 attrib_Normal;
layout(location = 2) in vec3 attrib_Tangent;
layout(location = 3) in vec3 attrib_BiNormal;
layout(location = 4) in vec2 attrib_Texture;
// Output streams (per vertex)
out vec3 attrib_Fragment_Normal;
out vec4 attrib_Fragment_Position;
out vec3 attrib_Fragment_Light;
out vec3 attrib_Fragment_Eye;
// Shared.
out vec2 varying_TextureCoord;
// Main
void main()
{
// Compute normal.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Generate matrix for tangent basis.
mat3 tangentBasis = mat3( attrib_Tangent,
attrib_BiNormal,
attrib_Normal);
// Light vector.
attrib_Fragment_Light = tangentBasis * normalize(Light_Position - position.xyz);
// Eye vector.
attrib_Fragment_Eye = tangentBasis * normalize(-position.xyz);
// Return position.
gl_Position = Camera_Projection * position;
}
... and the pixel shader looks like this:
// Pixel Shader
#version 420
// Samplers
uniform sampler2D Map_Normal;
// Global Uniforms
// Material.
layout (std140) uniform Material
{
vec4 Material_Ambient_Colour;
vec4 Material_Diffuse_Colour;
vec4 Material_Specular_Colour;
vec4 Material_Emissive_Colour;
float Material_Shininess;
float Material_Strength;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position;
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Input streams (per vertex)
in vec3 attrib_Fragment_Normal;
in vec4 attrib_Fragment_Position; // matches the vec4 output from the vertex shader
in vec3 attrib_Fragment_Light;
in vec3 attrib_Fragment_Eye;
// Shared.
in vec2 varying_TextureCoord;
// Result
out vec4 Out_Colour;
// Main
void main(void)
{
// Compute normals.
vec3 N = normalize(texture(Map_Normal, varying_TextureCoord).xyz * 2.0 - 1.0);
vec3 L = normalize(attrib_Fragment_Light);
vec3 V = normalize(attrib_Fragment_Eye);
vec3 R = normalize(-reflect(L, N));
// Compute products.
float NdotL = max(0.0, dot(N, L));
float RdotV = max(0.0, dot(R, V));
// Compute final colours.
vec4 ambient = Light_Ambient_Colour * Material_Ambient_Colour;
vec4 diffuse = Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * (pow(RdotV, Material_Shininess) * Material_Strength);
// Final colour.
Out_Colour = ambient + diffuse + specular;
}
Edit: a 3D Studio render of the scene (to show the UVs are OK on the sphere):
I think your shaders are okay, but your texture coordinates on the sphere are totally off. It's as if they got distorted towards the poles along the longitude.