How to make a billboard spherical - OpenGL

Following this tutorial here,
I have managed to create a cylindrical billboard (it utilizes a geometry shader which takes points and produces quads). The problem is that when I move the camera (using gluLookAt) so that it is higher than the billboard, the billboard does not rotate to truly face the camera; it keeps behaving like a cylindrical billboard.
How do I make it spherical?
If anyone is interested, here is the slightly modified geometry shader code:
#version 330
//based on a great tutorial at http://ogldev.atspace.co.uk/www/tutorial27/tutorial27.html
layout (points) in;
layout (triangle_strip) out;
layout (max_vertices = 4) out;
uniform mat4 mvp;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
    vec3 pos = gl_in[0].gl_Position.xyz;
    pos /= gl_in[0].gl_Position.w; //normalized device coordinates
    vec3 toCamera = normalize(cameraPos - pos);
    vec3 up = vec3(0,1,0);
    vec3 right = normalize(cross(up, toCamera)); //right-handed coordinate system
    //vec3 right = cross(toCamera, up); //left-handed coordinate system

    // bottom left corner
    pos -= (right*0.5);
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(0,0);
    EmitVertex();

    // top left corner
    pos.y += 1.0;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(0,1);
    EmitVertex();

    // bottom right corner
    pos.y -= 1.0;
    pos += right;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(1,0);
    EmitVertex();

    // top right corner
    pos.y += 1.0;
    gl_Position = mvp*vec4(pos,1.0);
    texCoord = vec2(1,1);
    EmitVertex();
}
EDIT:
As I said before, I have tried the approach of setting the upper left 3×3 submatrix to identity. I might have explained the behaviour poorly, but this gif should show it better:
In the picture above, the camera is rotated around the scene and the billboard (red) rotates with it, using the identity-submatrix approach.
The billboard, however, should not move through the surface (white); it should maintain its position correctly and always stay on one side of the surface, which does not happen.

An alternative way to create billboards is to throw the geometry shader away and do it manually like this:
Vector3 DiffCamera = Billboard.position - Camera.position;
Vector3 UpVector = new Vector3(0.0f, 1.0f, 0.0f);
Vector3 CrossA = DiffCamera.cross(UpVector).normalize(); // (Step A)
Vector3 CrossB = DiffCamera.cross(CrossA).normalize(); // (Step B)
// now you can use CrossA and CrossB and the billboard position to calculate the positions of the edges of the billboard-rectangle
// like this
Vector3 Pos1 = Billboard.position + CrossA + CrossB;
Vector3 Pos2 = Billboard.position - CrossA + CrossB;
Vector3 Pos3 = Billboard.position + CrossA - CrossB;
Vector3 Pos4 = Billboard.position - CrossA - CrossB;
In step A we calculate the cross product because we want the horizontally aligned direction of the billboard.
In step B we do it for the vertical direction.
Do this for every billboard in the scene.
Or better, as a geometry shader (just a sketch):
vec3 pos = gl_in[0].gl_Position.xyz;
pos /= gl_in[0].gl_Position.w; //normalized device coordinates
vec3 toCamera = normalize(cameraPos - pos);
vec3 up = vec3(0,1,0);
vec3 CrossA = normalize(cross(up, toCamera));
vec3 CrossB = normalize(cross(CrossA, toCamera));
// set coordinates of the 4 points
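For completeness, the missing corner emission could look roughly like this (only a sketch; the half-size of 0.5 and the mvp/texCoord names from the question's shader are assumed, and depending on which way CrossB ends up pointing the v texture coordinate may need flipping):
// corner (-CrossA, -CrossB)
gl_Position = mvp * vec4(pos - 0.5*CrossA - 0.5*CrossB, 1.0);
texCoord = vec2(0, 0);
EmitVertex();
// corner (-CrossA, +CrossB)
gl_Position = mvp * vec4(pos - 0.5*CrossA + 0.5*CrossB, 1.0);
texCoord = vec2(0, 1);
EmitVertex();
// corner (+CrossA, -CrossB)
gl_Position = mvp * vec4(pos + 0.5*CrossA - 0.5*CrossB, 1.0);
texCoord = vec2(1, 0);
EmitVertex();
// corner (+CrossA, +CrossB)
gl_Position = mvp * vec4(pos + 0.5*CrossA + 0.5*CrossB, 1.0);
texCoord = vec2(1, 1);
EmitVertex();
EndPrimitive();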

Just reset the top left 3×3 subpart of the modelview matrix to identity, leaving the 4th column and row as they are, i.e.:
1 0 0 …
0 1 0 …
0 0 1 …
… … … …
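If you would rather do this on the GPU, the same reset can be sketched in a vertex shader. This is only a sketch: the uniform names are assumptions, and it presumes the billboard geometry is modelled around its own origin, so that only the translation column of the modelview matrix matters.
#version 330
uniform mat4 modelview;   // assumed name
uniform mat4 projection;  // assumed name
in vec3 position;
void main()
{
    mat4 billboardMV = modelview;
    // overwrite the upper left 3x3 (rotation/scale) with identity,
    // keeping the 4th column (translation) and the 4th row untouched
    billboardMV[0] = vec4(1.0, 0.0, 0.0, billboardMV[0].w);
    billboardMV[1] = vec4(0.0, 1.0, 0.0, billboardMV[1].w);
    billboardMV[2] = vec4(0.0, 0.0, 1.0, billboardMV[2].w);
    gl_Position = projection * billboardMV * vec4(position, 1.0);
}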
UPDATE: World space axis following billboards
The key insight into efficiently implementing aligned billboards is to realize
how they work in view space. By definition the normal vector of a billboard in
view space is Z = (0, 0, 1). This leaves only one free parameter, namely the
rotation of the billboard around this axis. In a view aligned billboard the
billboard right and up axes are merely forced to be view X and Y. This is what
setting the upper left 3×3 of the modelview matrix does.
Now, when we want the billboard to be aligned to a certain axis within the scene
yet still face the viewer, the only parameter we can vary is the billboard's
rotation. For this we do the following:
In world space we choose an axis that should be the up axis of the billboard.
Note that if the viewing axis is parallel to the billboard up axis, the following
steps become singular, i.e. the rotation of the billboard is undefined. You have
to deal with this in some way, which I leave open here.
We bring this chosen axis into view space. An axis is the same kind of thing as
a normal, i.e. a direction, so we transform it the same way as we do with
normals: by the inverse transpose of the modelview matrix. Note that since we
defined the axis in world space, we actually need to use the inverse transpose
of the world-to-view transformation matrix.
The transformed major axis of the billboard is now in view space. The next step is
to orthogonalize it against the viewing direction; for this you use the Gram-Schmidt
method. Now we have the Z and the Y columns of the billboard transform. What remains is
the X column, which we get by taking the cross product of the Y and Z columns.
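A minimal GLSL sketch of those steps (function and parameter names are mine, not from the answer; the inverse transpose is computed in the shader only for illustration, normally you would precompute it on the CPU):
mat3 axisBillboardRotation(mat4 worldToView, vec3 worldUpAxis)
{
    // transform the chosen axis like a normal: inverse transpose of the world-to-view matrix
    vec3 axisView = normalize(mat3(transpose(inverse(worldToView))) * worldUpAxis);
    // the billboard normal in view space is the view Z axis by definition
    vec3 Z = vec3(0.0, 0.0, 1.0);
    // Gram-Schmidt: remove the component of the axis that points along Z
    // (this degenerates when the axis is parallel to the viewing direction, as noted above)
    vec3 Y = normalize(axisView - dot(axisView, Z) * Z);
    // remaining axis; order chosen so that X, Y, Z form a right-handed frame
    vec3 X = cross(Y, Z);
    return mat3(X, Y, Z);   // columns are the billboard's view-space axes
}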

In case anyone wonders how I solved this.
I based my solution on Quonux's answer; the only problem with it was that the billboard would rotate very fast when the camera is right above it (when the up vector is almost parallel to the camera look vector). This strange behaviour is a result of using a cross product to find the right vector: when the camera hovers over the top of the billboard, the cross product changes its sign, and so does the right vector's direction. That explains the rotation that happens.
So all I needed was to find the right vector in some other way.
As I knew the camera's rotation angles (both horizontal and vertical), I decided to use them to find the right vector:
rotatedRight = Vector4.Transform(unRotatedRight, Matrix4.CreateRotationY((-alpha)));
and the geometry shader:
...
uniform vec3 rotRight;
uniform vec3 cameraPos;
out vec2 texCoord;
void main(){
    vec3 pos = gl_in[0].gl_Position.xyz;
    pos /= gl_in[0].gl_Position.w; //normalized device coordinates
    vec3 toCamera = normalize(cameraPos - pos);
    vec3 CrossA = rotRight;
    ... (Continues as Quonux's code)

Related

How to keep Opengl Scatter Instances size unchanged?

Here is my question; I will list the details to make it clear:
I am writing a program drawing squares in 2D using instancing.
My camera direction is (0,0,-1), camera up is (0,1,0), camera position is (0,0,3), and the camera position changes when I press some keys.
What I want is that, when I zoom in (the camera moves closer to the square), the square's size (on the screen) won't change. So in my shader:
#version 330 core
layout(location = 0) in vec2 squareVertices;
layout(location = 1) in vec4 xysc;
out vec4 particlecolor;
uniform mat4 VP;
void main()
{
    float particleSize = xysc.z;
    float color = xysc.w;
    gl_Position = VP * vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize, squareVertices.y*particleSize, 0, 0);
    particlecolor = vec4(1.0f * color, 1.0f * (1-color), 0.0f, 0.5f);
}
Please notice that, in order to keep the squares' size unchanged, what I do is:
1. transform the center of the square first
VP * vec4(xysc.x, xysc.y, 2.0, 1.0)
2. then compute one of the four corners (x,y,z,1) of the square
+ vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0);
instead of:
gl_Position = VP* (vec4(xysc.x, xysc.y, 2.0, 1.0) + vec4(squareVertices.x*particleSize,squareVertices.y*particleSize,0,0));
However, when I move the camera closer to the z=0 plane, the squares' size grows unexpectedly. Where is the problem? I can provide demo code if necessary.
Sounds like you use a perspective projection, and the formula you use in steps 1 and 2 won't work, because VP * vec4 will in the general case result in a vec4(x,y,z,w) with w != 1. Adding a vec4(a,b,0,0) to that will just get you vec3((x+a)/w, (y+b)/w, z/w) after the perspective divide, while you seem to want vec3(x/w + a, y/w + b, z/w). So the correct approach is to scale a and b by w and add that before the divide: vec4(x + a*w, y + b*w, z, w).
Note that when you move your camera closer to the geometry, the effective w value will approach zero, so (x+a)/w will be greater than x/w + a, resulting in your geometry getting bigger.
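Applied to the vertex shader from the question, that fix could look roughly like this (a sketch only, keeping the question's variable names):
vec4 centerClip = VP * vec4(xysc.x, xysc.y, 2.0, 1.0);
// scale the corner offset by the clip-space w so that the later perspective
// divide cancels it and the square keeps a constant size on screen
gl_Position = centerClip + vec4(squareVertices.x * particleSize * centerClip.w,
                                squareVertices.y * particleSize * centerClip.w,
                                0.0, 0.0);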

gl_PointSize Corresponding to World Space Size

If you want to render impostor geometry (say, a sphere), then the standard practice is to draw it using two triangles (say, by passing one vertex and making a triangle strip with a geometry shader).
This is nice because it allows the extent of the billboard to be set fairly simply: you compute the actual world space positions directly.
Geometry shaders can alternatively output point primitives, and I don't see a reason why they shouldn't. The only issue is finding some way to scale gl_PointSize so that you get that effect.
The only precedents I could find were this question (whose answer I am not sure is correct) and this question (which is unanswered).
It's worth noting that it's fairly simple to scale the point correctly with distance (by doing gl_PointSize = constant/length(gl_Position)), but this isn't controllable; you can't say, for example: I want this point to look like it is two world units across.
So: anyone know how to do this?
A straightforward idea is to transform points at the top and bottom of the particle into screen space and find the distance. This cancels very nicely, and it's pretty simple to work with just the y coordinate.
The billboard is screen-aligned, and view matrices generally don't scale, so the particle size in world space is the same as in eye space. That just leaves the projection to get to NDC, the divide by w, and scaling by the viewport size.
A typical projection matrix, P, might look something like this...
[ +1.2990 +0.0000 +0.0000 +0.0000 ]
[ +0.0000 +1.7321 +0.0000 +0.0000 ]
[ +0.0000 +0.0000 -1.0002 -0.0020 ]
[ +0.0000 +0.0000 -1.0000 +0.0000 ]
Starting with y_eye, a y coordinate in eye space, the image space coordinate y_image is obtained in pixels as
y_image = vpHeight/2 * (P[1][1] * y_eye / w_clip + 1)
Plugging in the radius above/below the billboard (y_eye ± radius) and subtracting, the constant terms cancel and this reduces to
pixelSize = vpHeight * P[1][1] * radius / w_clip
For a perspective projection, P[1][1] = 1 / tan(fov_y / 2). w_clip is gl_Position.w, which is also -z_eye (from the -1 in the perspective matrix). To guarantee your point covers every pixel you want, this may need an additional small constant.
Side note: A sphere on a billboard will look OK in the middle of the screen. If you have a large field of view perspective projection, a true sphere should warp as it approaches the edges of the screen. You could implicitly raycast the virtual sphere for each pixel in the billboard to get a correct result, but the billboard boundary will need to be adjusted accordingly. Quick google results: 1 2 3 4
[EDIT]
Well, since I bothered to test this I'll throw my shaders here too...
Vertex:
#version 150
in vec4 osVert;
uniform mat4 projectionMat;
uniform mat4 modelviewMat;
uniform vec2 viewportSize;
flat out vec2 centre;
flat out float radiusPixels;
const float radius = 1.0;
void main()
{
    gl_Position = projectionMat * modelviewMat * osVert;
    centre = (0.5 * gl_Position.xy/gl_Position.w + 0.5) * viewportSize;
    gl_PointSize = viewportSize.y * projectionMat[1][1] * radius / gl_Position.w;
    radiusPixels = gl_PointSize / 2.0;
}
Fragment:
#version 150
flat in vec2 centre;
flat in float radiusPixels;
out vec4 fragColour;
void main()
{
    vec2 coord = (gl_FragCoord.xy - centre) / radiusPixels;
    float l = length(coord);
    if (l > 1.0)
        discard;
    vec3 pos = vec3(coord, sqrt(1.0-l*l));
    fragColour = vec4(vec3(pos.z), 1.0);
}
(Note the visible gap at the bottom right is incorrect as described above)

Sprite rotation over sphere

I have a 2D mode which displays moving sprites over the world. Each sprite has a rotation.
When I try to implement the same thing in a 3D world, over a sphere, I run into a problem calculating the sprite rotation so that it looks like the sprite is moving in its direction of travel. I'm aware that the sprite is only a billboard and the rotation will be 2D only, so it will not be 100% rotated toward the direction, but I'd at least like to make it look reasonable to the eye.
I've tried to take the vector to the north (of the world) into account in my rotation, but there are still a lot of cases, as we move the camera around the sphere, where the sprite arrow does not point in the direction of movement.
Can anyone direct me to a solution?
-------- ADDITION -----------
More explanation: I have a 2D world (x, y). In this world I have a point that moves in a given direction (an angle is saved in the object). The rotations are calculated in the fragment shader, of course.
In the 3D world, I'm converting this (x, y) to an (x, y, z) by a simple sphere formula.
My sphere (world) origin is (0,0,0) with radius 1.
The angle (saved in the point for the direction of movement) is used in 2D for rotating the texture as well (as shown above in the first image). The problem is the rotation of the texture in 3D. The rotation should take into account the point's direction angle and the camera.
-------- ADDITION -----------
My fragment shader for 2D, in case it helps, and a few more pictures of what I wish for:
varying vec2 TextureCoord;
varying vec2 TextureSize;
uniform sampler2D sampler;
varying float angle;
uniform vec4 uColor;
void main()
{
    vec2 calcedCoord = gl_PointCoord;
    float c = cos(angle);
    float s = sin(angle);
    vec2 trans = vec2(-0.5, -0.5);
    mat2 rot = mat2(c, s, -s, c);
    calcedCoord = calcedCoord + trans;
    calcedCoord = rot * calcedCoord;
    calcedCoord = calcedCoord - trans;
    vec2 realTexCoord = TextureCoord + (calcedCoord * TextureSize);
    vec4 fragColor = texture2D(sampler, realTexCoord);
    gl_FragColor = fragColor * uColor;
}
After struggling a lot with this issue, I came to this solution.
Instead of attaching the direction angle to each sprite as an attribute, I send the next sprite location instead, and calculate the 2D angle in the vertex shader as follows:
varying float angle;
attribute vec3 nextPointAtt;
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    vec4 nextPnt = gl_ModelViewProjectionMatrix * vec4(nextPointAtt, gl_Vertex.w);
    vec2 ver = gl_Position.xy / gl_Position.w;
    vec2 nextVer = nextPnt.xy / nextPnt.w;
    vec2 d = nextVer - ver;
    angle = atan(d.y, d.x);
}
The angle will be used in the fragment shader (Look at my question for the fragment shader code).

How to set a specific eye point using perspective view with shaders

These days I am reading the Learning Modern 3D Graphics Programming book by Jason L. McKesson. Basically it is a book about OpenGL 3.3, and I am now at chapter 4, which is about the orthographic and perspective views.
At the end of the chapter, under the "Further Study" section, he suggests trying a few things, like implementing a variable eye point (he used (0, 0, 0) in camera space at the beginning for simplicity) and an arbitrary perspective plane location.
He says I am going to need to offset the X, Y camera-space positions of the vertices by E_x and E_y respectively.
I cannot understand this passage; how am I supposed to use a variable eye point by modifying only the X, Y offsets?
Edit: could it be something like this?
#version 330
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 color;
smooth out vec4 theColor;
uniform vec2 offset;
uniform vec2 E;
uniform float zNear;
uniform float zFar;
uniform float frustumScale;
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale + vec4(E.x, E.y, 0.0, 0.0);
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = cameraPos.z / (-E.z);
    gl_Position = clipPos;
    theColor = color;
}
Edit2: thanks Boris, your picture helped a lot :) especially because:
it makes clear what you previously stated regarding thinking of E as the projection plane position and not the eye point position
it underlines that the size of the projection plane must always be [-1, 1], a passage that I read in the book without fully understanding what it meant
Just out of curiosity, why do you mention multiplying after subtracting? Is it for the same reason the book gives, that is, aspect ratio? Everything logically pushes me to do exactly the opposite, that is, first the translation (-2) and then the multiplication (/5)... Or maybe by the term "scaling" the book refers to the reshape function?
Here, we are interested in computing a transformation from Camera Coordinates (CC) to Normalized Device Coordinates (NDC).
Think of E as the position of the projection plane in Camera Coordinates, instead of the position of the eye point according to the projection plane. In Camera Coordinates, the eye point is by definition located at the origin, at least in my interpretation of what "Camera Coordinate" means: a coordinate frame centered from where you look at the scene. (You can mathematically define a perspective transformation centered from anywhere, but this means your input space is not the camera space, imho. This is what the World->Camera transformation is for, as you will see in chapter 6)
Summary:
you are in camera space, hence your eye point is located at (0,0,0)
you are looking toward the negative Z-axis
your projection plane is parallel to the xOy plane, with a size of [-1,1] in both directions
This is the picture here (each tick is 0.5 unit):
In this picture, you can see that the projection plane (bottom side of the gray trapezoid) is centered in (0,0,-1), with a size of [-1,1] in both X and Y direction.
Now, what is asked is, instead of choosing (0,0,-1) for the center of this plane, to choose an arbitrary (E.x, E.y, E.z) position (assuming E.z is negative). But the plane still has to be parallel to the xOy plane and keep the same size.
You can see that the dimension E.xy plays a very different role than E.z, which is why E.xy will be involved in a subtraction, while E.z will be involved in a division. This is easy to see with an example:
assume zNear = -E.z (not necessarily the case, but you can in fact always change frustumScale to have an equivalent perspective satisfying this)
consider the point E (which is the center of the projection plane).
What is its coordinate in NDC space? It is (0,0,-1) by definition. What you've done is subtract E.xy, but divide by -E.z.
Your code got this idea, but still some things are wrong:
First, you defined uniform vec2 E; instead of uniform vec3 E; (just a typo, not a big deal)
The line clipPos.xy = ... ; is about vec2 arithmetic. Hence, you can only multiply by scalar values (i.e., a float), or add/subtract vec2 values. Hence, vec4(E.x, E.y, 0.0, 0.0) is of the incorrect type; you should use E.xy instead, which has the correct type vec2.
You should in fact subtract E.xy instead of adding it. This is easy to see in my example above.
Finally, things are more subtle ;-)
I made a picture to illustrate the modifications:
Each tick is 1 unit in this picture. Top left is your Camera Coordinate Space, with zNear, zFar, and two possible projection planes displayed. In blue is the one used in the explanation and shader here, and the red one is the one you now want to use. The colored areas correspond to what should be visible on your final screen, e.g. what should be in the cube [-1,1]^3 in NDC Space. Hence, if you use the blue projection plane, you want to obtain the space in the top right, and if you use the red projection plane, you want to obtain the space at the bottom. To do this, you can observe that you need to perform the scaling and translation in NDC space, e.g. after the perspective division! (I think what is written in the book is either incorrect, or interprets the question differently.)
Hence you want to do, in Euclidean coordinates (i.e., not homogeneous coordinates, e.g. without the W coordinate):
clipPosEuclideanRed.xy = clipPosEuclideanBlue.xy * (-E.z) - E.xy;
clipPosEuclideanRed.z = clipPosEuclideanBlue.z;
However, because you are in homogeneous coordinates, these values are in fact:
clipPosEuclidean.xyz = clipPos.xyz / clipPos.w; // with clipPos.w = -cameraPos.z;
Hence, you have to compensate by writing:
clipPosRed.xy = clipPosBlue.xy * (-E.z) - E.xy * (-cameraPos.z);
clipPosRed.z = clipPosBlue.z;
So my solution to this problem would be to add only one line:
void main()
{
    vec4 cameraPos = position + vec4(offset.x, offset.y, 0.0, 0.0);
    vec4 clipPos;
    clipPos.xy = cameraPos.xy * frustumScale;
    // only add this line
    clipPos.xy = - clipPos.xy * E.z + E.xy * cameraPos.z;
    clipPos.z = cameraPos.z * (zNear + zFar) / (zNear - zFar);
    clipPos.z += 2 * zNear * zFar / (zNear - zFar);
    clipPos.w = -cameraPos.z;
    gl_Position = clipPos;
    theColor = color;
}

OpenGL, target spot-light "following me around the room"!

I'm implementing a target spotlight. I have the light cone, fall-off and all of that down and working OK. The problem is that as I rotate the camera around some point in space, the lighting seems to follow it, i.e. regardless of where the camera is, the light is always at the same angle relative to the camera.
Here's what I'm doing in my vertex shader:
void main()
{
    // Compute vertex normal in eye space.
    attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
    // Compute position in eye space.
    vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
    // Compute vector between light and vertex.
    attrib_Fragment_Light = Light_Position - position.xyz;
    // Compute spot-light cone direction vector.
    attrib_Fragment_Light_Direction = normalize(Light_LookAt - Light_Position);
    // Compute vector from eye to vertex.
    attrib_Fragment_Eye = -position.xyz;
    // Output texture coord.
    attrib_Fragment_Texture = attrib_Texture;
    // Return position.
    gl_Position = Camera_Projection * position;
}
I have a target spotlight defined by Light_Position and Light_LookAt (look-at being the point in space the spotlight is looking at of course). Both position and lookAt are already in eye space. I computed eye space CPU-side by subtracting the camera position from them both.
In the vertex shader I then go on to make a light-cone vector from the light position to the light lookAt point, which informs the pixel shader where the main axis of the light cone is.
At this point I'm wondering if I have to transform the vector as well and if so by what? I've tried the inverse transpose of the view matrix, with no luck.
Can anyone take me through this?
Here's the pixel shader for completeness:
void main(void)
{
    // Compute N dot L.
    vec3 N = normalize(attrib_Fragment_Normal);
    vec3 L = normalize(attrib_Fragment_Light);
    vec3 E = normalize(attrib_Fragment_Eye);
    vec3 H = normalize(L + E);
    float NdotL = clamp(dot(L,N), 0.0, 1.0);
    float NdotH = clamp(dot(N,H), 0.0, 1.0);
    // Compute ambient term.
    vec4 ambient = Material_Ambient_Colour * Light_Ambient_Colour;
    // Diffuse.
    vec4 diffuse = texture2D(Map_Diffuse, attrib_Fragment_Texture) * Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
    // Specular.
    float specularIntensity = pow(NdotH, Material_Shininess) * Material_Strength;
    vec4 specular = Light_Specular_Colour * Material_Specular_Colour * specularIntensity;
    // Light attenuation (so we don't have to use 1 - x, we step between Max and Min).
    float d = length(-attrib_Fragment_Light);
    float attenuation = smoothstep(Light_Attenuation_Max,
                                   Light_Attenuation_Min,
                                   d);
    // Adjust attenuation based on light cone.
    vec3 S = normalize(attrib_Fragment_Light_Direction);
    float LdotS = dot(-L, S);
    float CosI = Light_Cone_Min - Light_Cone_Max;
    attenuation *= clamp((LdotS - Light_Cone_Max) / CosI, 0.0, 1.0);
    // Final colour.
    Out_Colour = (ambient + diffuse + specular) * Light_Intensity * attenuation;
}
Thanks for the responses below. I still can't work this out. I'm now transforming the light into eye-space CPU-side. So no transforms of the light should be necessary, but it still doesn't work.
// Compute eye-space light position.
Math::Vector3d eyeSpacePosition = MyCamera->ViewMatrix() * MyLightPosition;
MyShaderVariables->Set(MyLightPositionIndex, eyeSpacePosition);
// Compute eye-space light direction vector.
Math::Vector3d eyeSpaceDirection = Math::Unit(MyLightLookAt - MyLightPosition);
MyCamera->ViewMatrixInverseTranspose().TransformNormal(eyeSpaceDirection);
MyShaderVariables->Set(MyLightDirectionIndex, eyeSpaceDirection);
... and in the vertex shader, I'm doing this (below). As far as I can see, light is in eye space, vertex is transformed into eye space, lighting vector (attrib_Fragment_Light) is in eye space. Yet the vector never changes. Forgive me for being a bit thick!
// Transform normal from model space, through world space and into eye space (world * view * normal = eye).
attrib_Fragment_Normal = (Model_WorldViewInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Transform vertex into eye space (world * view * vertex = eye)
vec4 position = Model_WorldView * vec4(attrib_Position, 1.0);
// Compute vector from eye space vertex to light (which has already been put into eye space).
attrib_Fragment_Light = Light_Position - position.xyz;
// Compute vector from the vertex to the eye (which is now at the origin).
attrib_Fragment_Eye = -position.xyz;
// Output texture coord.
attrib_Fragment_Texture = attrib_Texture;
It looks here like you're subtracting Light_Position, which I assume you want to be a world space coordinate (since you seem dismayed that it's currently in eye space), from position, which is an eye space vector.
// Compute vector between light and vertex.
attrib_Fragment_Light = Light_Position - position.xyz;
If you want to subtract two vectors, they must both be in the same coordinate space. If you want to do your lighting computations in world space, then you should use a world space position vector, not a view space position vector.
That means multiplying the attrib_Position variable with the Model matrix, not the ModelView matrix, and using this vector as the basis for your light computation.
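For example (a sketch only; Model_World is an assumed uniform holding just the model-to-world transform, and Light_Position is assumed to be a world-space value here):
// build the light vector in world space from a world-space vertex position
vec4 worldPosition = Model_World * vec4(attrib_Position, 1.0);
attrib_Fragment_Light = Light_Position - worldPosition.xyz;
The normal and eye vectors then have to be expressed in world space too, so that all lighting terms live in the same coordinate space.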
You can't compute eye position by just subtracting the camera position, you have to multiply by the modelview matrix.
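A sketch of that, staying with eye-space lighting (uniform names are assumptions; the idea is to upload the world-space light values and the view matrix instead of pre-subtracted positions):
uniform mat4 View;                   // assumed: world -> eye transform
uniform vec3 Light_Position_World;   // assumed: light position kept in world space
uniform vec3 Light_LookAt_World;
// positions transform as points (w = 1); directions use only the rotation part
vec3 lightPosEye = (View * vec4(Light_Position_World, 1.0)).xyz;
vec3 lightDirEye = normalize(mat3(View) * (Light_LookAt_World - Light_Position_World));
Transformed this way, the light stays fixed in the scene while the camera moves around it.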