Calculating normals for a height map - c++

I have a small problem calculating normals for my heightmap. It shows strange behavior: at the higher and lower points the normals are fine, but in the middle they seem wrong. The scene is lit by a point light.
UNFIXED SOURCE REMOVED
EDIT:
Tried 2 new approaches:
This is a per-face normal. It looks fine, but you can see the individual faces.
Position normal = crossP(vectorize(pOL, pUR), vectorize(pOR, pUL));
I also tried to do it per vertex this way, but again with strange output.
This is the suggestion Nico made:
It also looks rather odd. Maybe there is a mistake in how I calculate the helper points.
UNFIXED SOURCE REMOVED
EDIT 2:
Definition of my points:
OL, OR, UL, UR are the corner vertices of the plane that is to be drawn.
              postVertPosZ1  postVertPosZ2
preVertPosX1  pOL            pOR            postVertPosX1
preVertPosX2  pUL            pUR            postVertPosX2
              preVertPosZ1   preVertPosZ2
EDIT3:
I have solved it now. It was a stupid mistake:
I forgot to multiply the y value of the helper vertices by the height multiplier, and I had to change some values.
It is beautiful now.

There are lots of ways to solve this problem; I haven't encountered yours before. I suggest using central differences to estimate the partial derivatives of the height field, then using the cross product to get the normal:
Each vertex normal can be calculated from its four neighbors. You don't need the plane plus its neighbors:
  T
L O R
  B
O is the vertex for which you want to calculate the normal. The other vertices (top, right, bottom, left) are its neighbors. Then we want to calculate the central differences in the horizontal and vertical direction:
             /           2           \
horizontal = | height(R) - height(L) |
             \           0           /

             /           0           \
vertical   = | height(B) - height(T) |
             \           2           /
The normal is the cross product of these tangents:
normal = normalize(cross(vertical, horizontal))
                   / height(L) - height(R) \
       = normalize |           2           |
                   \ height(T) - height(B) /
Note that these calculations assume that your x-axis is aligned to the right and the z-axis down.
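For reference, a minimal C++ sketch of that central-difference scheme (the Vec3 type and the height(x, z) lookup are placeholders, not code from the question, and height() is assumed to already include the height multiplier):
#include <cmath>

struct Vec3 { float x, y, z; };

float height(int x, int z); // placeholder: your height map lookup, height multiplier applied

Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Per-vertex normal at grid cell (x, z) from its four neighbors,
// with the x-axis pointing right and the z-axis pointing down.
Vec3 vertexNormal(int x, int z)
{
    Vec3 horizontal = { 2.0f, height(x + 1, z) - height(x - 1, z), 0.0f };
    Vec3 vertical   = { 0.0f, height(x, z + 1) - height(x, z - 1), 2.0f };
    return normalize(cross(vertical, horizontal));
}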

Related

Axis rotation with an oriented bounding box

I'm trying to put an oriented bounding box around the Stanford rabbit for a project. To do this, I create a covariance matrix from the vertices and use the eigenvector columns as the new axis vectors for the OBB.
To draw the OBB, I take the cross product of the vector columns with the x, y and z axes to find the vector perpendicular to both, then I use the dot product to find the angle between them.
//rv,uv,fv are the normalised column vectors from the eigenvector matrix.
// Calculate cross product for normal
crossv1x[0] = xaxis[1]*rv[2] - xaxis[2]*rv[1];
crossv1x[1] = xaxis[2]*rv[0] - xaxis[0]*rv[2];
crossv1x[2] = xaxis[0]*rv[1] - xaxis[1]*rv[0];
// Calculate cross product for normal
crossv2y[0] = yaxis[1]*uv[2] - yaxis[2]*uv[1];
crossv2y[1] = yaxis[2]*uv[0] - yaxis[0]*uv[2];
crossv2y[2] = yaxis[0]*uv[1] - yaxis[1]*uv[0];
// Calculate cross product for normal
crossv3z[0] = zaxis[1]*fv[2] - zaxis[2]*fv[1];
crossv3z[1] = zaxis[2]*fv[0] - zaxis[0]*fv[2];
crossv3z[2] = zaxis[0]*fv[1] - zaxis[1]*fv[0];
//dot product:
thetaX = dot(xaxis,rv,1)*180/PI;
thetaY = dot(yaxis,uv,1)*180/PI;
thetaZ = dot(zaxis,fv,1)*180/PI;
I then apply a rotation around the cross-product vector with an angle determined by the dot product (glRotatef(angle, cross[0], cross[1], cross[2]) for each axis). I then draw an axis-aligned bounding box, then do the inverse rotation back to the original position.
glRotatef(thetaY,crossv2y[0],crossv2y[1],crossv2y[2]);
glRotatef(thetaZ,crossv3z[0],crossv3z[1],crossv3z[2]);
glRotatef(thetaX,crossv1x[0],crossv1x[1],crossv1x[2]);
glTranslatef(-meanX, -meanY, -meanZ);
glColor3f(1.0f,0.0f,0.0f);
AOBB(1); //Creates an axis aligned box.
glRotatef(-thetaX,crossv1x[0],crossv1x[1],crossv1x[2]);
glRotatef(-thetaZ,crossv3z[0],crossv3z[1],crossv3z[2]);
glRotatef(-thetaY,crossv2y[0],crossv2y[1],crossv2y[2]);
As you can see below, the box does not fit exactly onto the rabbit, nor does it align with the axes I have drawn... Am I missing something here? I've fried my brain trying to find the solution, but to no avail...
"To draw the OBB, I take the cross product of the vector columns with the x, y and z axes to find the vector perpendicular to both, then I use the dot product to find the angle between them."
Those "vector columns?" Those are actually the columns of the rotation matrix you want to generate.
So instead of using these vectors to compute rotation angles, just build a matrix with those (normalized) vectors as the columns. Upload it with glMultMatrix, and you should be fine.
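A minimal sketch of that suggestion, reusing the names from the question (rv, uv, fv as the normalized eigenvector columns, meanX/meanY/meanZ as the centroid, and AOBB as the question's box-drawing call); the column-major layout below is what glMultMatrixf expects:
GLfloat m[16] = {
    rv[0], rv[1], rv[2], 0.0f,   // column 0: box x axis
    uv[0], uv[1], uv[2], 0.0f,   // column 1: box y axis
    fv[0], fv[1], fv[2], 0.0f,   // column 2: box z axis
    0.0f,  0.0f,  0.0f,  1.0f    // column 3: no extra translation
};

glPushMatrix();
glTranslatef(meanX, meanY, meanZ); // move to the model's centroid
glMultMatrixf(m);                  // orient the box along the eigenvectors
AOBB(1);                           // draw the axis-aligned box in local space
glPopMatrix();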

GeoDjango: Determine the area of a polygon

In my model I have a polygon field defined via
polygon = models.PolygonField(srid=4326, geography=True, null=True, blank=True)
When I want to determine the area of the polygon, I call
area_square_degrees = object.polygon.area
But how can I convert the result in square degrees into m2 with GeoDjango?
This answer does not work, since area does not have a method sq_m. Is there any built-in conversion?
You need to transform your data to the correct spatial reference system.
area_square_local_units = object.polygon.transform(srid, clone=False).area
In the UK you might use the British National Grid SRID of 27700 which uses meters.
area_square_meters = object.polygon.transform(27700, clone=False).area
You may or may not want to clone the geometry depending on whether or not you need to do anything else with it in its untransformed state.
Docs are here https://docs.djangoproject.com/en/1.8/ref/contrib/gis/geos/
I have struggled a lot with this, since I couldn't find a clean solution. The trick is that you have to use the PostGIS capabilities (and thus this only works with PostGIS):
from django.contrib.gis.db.models.functions import Area
loc_obj = Location.objects.annotate(area_=Area("poly")).get(pk=??)
# put the primary key of the object
print(loc_obj.area_) # an area measure object; the result should be in square meters, but you can convert to the unit you want, e.g. .sq_mi for square miles
The models.py:
class Location(models.Model):
    poly = gis_models.PolygonField(srid=4326, geography=True)
I think it's the best way to do it if you have to deal with geographic coordinates instead of projections. It handles the curvature of the earth, and the result is precise even for large distances/areas.
I needed an application to get the area of polygons around the globe, and if I used an invalid country/region projection I got the error OGRException: OGR failure.
I ended up using an OpenLayers implementation
with the 4326 projection (the default projection) to avoid worrying about every country/region-specific projection.
Here is my code:
import math
def getCoordsM2(coordinates):
    d2r = 0.017453292519943295  # degrees to radians
    area = 0.0
    for coord in range(0, len(coordinates)):
        point_1 = coordinates[coord]
        point_2 = coordinates[(coord + 1) % len(coordinates)]
        area += ((point_2[0] - point_1[0]) * d2r) * \
                (2 + math.sin(point_1[1] * d2r) + math.sin(point_2[1] * d2r))
    area = area * 6378137.0 * 6378137.0 / 2.0
    return math.fabs(area)
def getGeometryM2(geometry):
    area = 0.0
    if geometry.num_coords > 2:
        # Outer ring
        area += getCoordsM2(geometry.coords[0])
        # Inner rings
        for counter, coordinates in enumerate(geometry.coords):
            if counter > 0:
                area -= getCoordsM2(coordinates)
    return area
Simply pass your geometry to the getGeometryM2 function and you are done!
I use this function in my GeoDjango model as a property.
Hope it helps!
If it's the earth's surface area that you are talking about, 1 square degree covers about 12,365.1613 square km, so multiply your square-degree value by that figure and then by 10^6 to convert to square meters.
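For example, a polygon of 0.5 square degrees would be roughly 0.5 × 12,365.1613 ≈ 6,182.6 km², i.e. about 6.2 × 10^9 m² (the per-square-degree figure is itself only an approximation, since the size of a degree varies with latitude).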

Affine transformation implementation

I am trying to implement affine transformation on two images.
First I find the matching pairs in both images. One of them is a zoomed image and the other is a reference image. The pairs gave me coefficients as:
    | 1  0 |        | x |        |  1  |
A = |      |    X = |   |    B = |     |
    | 0  0 |        | y |        | 221 |
The equation formed is X' = AX + B;
x_co_efficients[2] = (((x_new_Cordinate[2]-x_new_Cordinate[0])*(xCordinate[1]- xCordinate[0])) - ((x_new_Cordinate[1]-x_new_Cordinate[0])*(xCordinate[2] - xCordinate[0])))/
(((xCordinate[1]-xCordinate[0])*(yCordinate[2]-yCordinate[0])) - ((xCordinate[2]-xCordinate[0])*(yCordinate[1]-yCordinate[0])));
x_co_efficients[1] = ((x_new_Cordinate[1]-x_new_Cordinate[0]) - (yCordinate[1]-yCordinate[0])*(x_co_efficients[2]))/(xCordinate[1]-xCordinate[0]);
x_co_efficients[0] = x_new_Cordinate[0] - (((x_co_efficients[1])*(xCordinate[0])) + ((x_co_efficients[2])*(yCordinate[0])));
y_co_efficients[2] = (((y_new_Cordinate[2]-y_new_Cordinate[0])*(xCordinate[1]- xCordinate[0])) - ((y_new_Cordinate[1]-y_new_Cordinate[0])*(xCordinate[2] - xCordinate[0])))/
(((xCordinate[1]-xCordinate[0])*(yCordinate[2]-yCordinate[0])) - ((xCordinate[2]-xCordinate[0])*(yCordinate[1]-yCordinate[0])));
y_co_efficients[1] = ((y_new_Cordinate[1]-y_new_Cordinate[0]) - (yCordinate[1]-yCordinate[0])*(y_co_efficients[2]))/(xCordinate[1]-xCordinate[0]);
y_co_efficients[0] = y_new_Cordinate[0] - (((y_co_efficients[1])*(xCordinate[0])) + ((y_co_efficients[2])*(yCordinate[0])));
These are the equations I used to find the coefficients from the matching pairs. They work fine for identical images, and for the zoomed image they give me the coefficients above. Now the problem is that I have a 24-bit binary image, and I want to apply the affine transformation to it with respect to the reference. When I try to find the new coordinates for that image and move its current values to those coordinates, I get a very distorted image, which should not happen if the transformation is right.
Can someone please have a look at the equations and also explain a little how to apply them to the second image?
My code is in C++. Thank you.
My reference image is above, and my comparison image is below.
The result I am getting is a distorted image with lines only.
Edit 1
I have now changed the solving method to matrices. I am now getting the right output, but the image I get after registration looks like this.
Also, I had to apply a limit of 0 to 320*240 on the new coordinates to get the pixel values. Now my result is somewhat like this.
EDIT 2
I have changed the code and am getting this result without any black pixels. There is a little tilting, though the zoom effect in the given image has been removed.
Your transformation matrix A is problematic: it destroys the y coordinate and assigns 221 to all y coordinates.
Make the element at (2,2) in A equal to 1 and the problem should be solved.
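As a rough illustration of applying the transform to the image (not code from the question): one way to avoid holes is to loop over the output pixels and sample the source image, rather than pushing source pixels to new coordinates. This sketch assumes the coefficients map a reference pixel (x, y) to a comparison-image pixel, and output()/comparison() are placeholder pixel accessors for a 320x240 image:
for (int y = 0; y < 240; ++y) {
    for (int x = 0; x < 320; ++x) {
        // Evaluate x' = a0 + a1*x + a2*y (and likewise for y') at the output pixel.
        int xs = (int)(x_co_efficients[0] + x_co_efficients[1] * x + x_co_efficients[2] * y);
        int ys = (int)(y_co_efficients[0] + y_co_efficients[1] * x + y_co_efficients[2] * y);
        if (xs >= 0 && xs < 320 && ys >= 0 && ys < 240)
            output(x, y) = comparison(xs, ys); // placeholder pixel accessors
    }
}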

SetViewBox moving the paper

I am using the setViewBox() function in Raphael 2. The width and height are multiplied by a value like 1.2, 1.3, etc. This changes the magnification/zooming properly, but with the x and y that I have given as 0,0, the paper displays its contents with some offset. If I modify the x and y to some positive value after rendering (using Firebug!!), the top left of the paper moves back up to its right position. I want to know how this value should be calculated; I have no idea how x and y affect the viewbox. If anybody can give me any pointers, it would be a real help.
I have tried giving the difference between the width/height divided by 2. I must also mention that I am not rendering an image but various Raphael shapes, e.g. rects, paths, text, etc., in my paper.
Looking forward to some help!
Kavita
This is an example showing how to calculate the setViewBox values. I included jQuery (to get my SVG container X and Y: $("#"+map_name).offset().left and $("#"+map_name).offset().top), and after that I calculated how much zoom I need:
var original_width = 777;
var original_height = 667;
var zoom_width = map_width*100/original_width/100;
var zoom_height = map_height*100/original_height/100;
if (zoom_width <= zoom_height)
    zoom = zoom_width;
else
    zoom = zoom_height;
rsr.setViewBox($("#"+map_name).offset().left, $("#"+map_name).offset().top, (map_width/zoom), (map_height/zoom));
Did you set the center of your scaling to 0,0, like:
element.scale(1.2,1.2,0,0);
This can scale your element without moving the coordinates of the top left corner.

Rotating from center rather than from left, middle

I'm working on a system for skeletal animation, and each bone's angle is based on its parent. I have to rotate each bone around the end of the parent joint for that angle to be accurate, as illustrated in the first part of this illustration:
What I need to do is the second part of the illustration, because my drawing API only supports rotating around the center of the bitmap.
Thanks
Combine the rotation with a translation. Rotate the figure about the center, then move it to where it should be.
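A minimal C++ sketch of that idea (the Point type and function name are placeholders, not part of the original answer): translate so the pivot sits at the origin, rotate, then translate back.
#include <cmath>

struct Point { float x, y; };

// Rotate p by 'angle' radians around an arbitrary pivot by combining
// a translation to the origin, a rotation, and a translation back.
Point rotateAroundPivot(Point p, Point pivot, float angle)
{
    float s = std::sin(angle);
    float c = std::cos(angle);
    float dx = p.x - pivot.x;            // translate so the pivot is at the origin
    float dy = p.y - pivot.y;
    return { pivot.x + dx * c - dy * s,  // rotate, then translate back
             pivot.y + dx * s + dy * c };
}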
One option is to introduce extra blank pixels into your bitmap. If you can only rotate around the center of the bitmap, consider what happens if you double the width of your bitmap and then translate the image you want to rotate so that it's flush up against the right.
For example, suppose your image is
+-------+
X image |
+-------+
where the X is the point you want to rotate around. Now, construct this image:
+-------+-------+
| blank X image |
+-------+-------+
If you rotate around the center of this image, notice that you're rotating right on top of the X, which is what you wanted to do in the first place. The resulting rotated image looks like this:
+---+
| b |
| l |
| a |
| n |
| k |
+-X-+
| i |
| m |
| a |
| g |
| e |
+---+
Now, you just extract the bottom half of the image and you've got your original image, rotated 90 degrees around the indicated X point.
Hope this helps!