Math: Average out lines in polar coordinate system (C++ OpenCV)

I am using OpenCV for line detection with HoughLines. Then I look for their intersections.
This is the end result:
http://i.imgur.com/PaGw8RI.png
(green dots being the intersections and red lines being the raw lines after houghlines operation)
As you can see, a lot of lines are detected, and to compute the intersections every line is compared with every other line, which greatly increases the processing time.
I am looking to optimize the number of lines by averaging out similar lines after the initial HoughLines operation.
The problem is that HoughLines outputs the data in polar coordinate system and so far I was not able to find any similar code or a mathematical equation to do this.
Any help would be much appreciated.
Edit: added the (R, Phi) values, sorted according to the line each belongs to.
R         Phi
-11.000   3.124
15.000    0.000
13.000    0.000
22.000    0.000
-18.000   3.124
-9.000    3.107
-10.000   3.089
-7.000    3.089
19.000    0.017
-6.000    3.107
-4.000    3.072
-14.000   3.107
27.000    0.017
172.000   1.553
165.000   1.553
173.000   1.536
170.000   1.571
166.000   1.536
-163.000  3.107
169.000   0.017
172.000   0.035
-165.000  3.124
-159.000  3.124
165.000   0.000
167.000   0.000
167.000   0.035
-155.000  3.107
313.000   1.571
319.000   1.536
312.000   1.588
315.000   1.553
317.000   1.553
24.000    1.536
26.000    1.518
22.000    1.553
An average or delta would work fine, I guess, but I need to understand why the negative values have a different theta. In practice the difference appears to be Pi, so for every negative value I could go with abs(r) and Pi - Phi. However, I need to know whether this is a 100% foolproof solution.
EDIT 2: After testing, I am sure that I was not exactly right here... the lines simply switched places...

From what I gather from the documentation, the HoughLines function returns a parametrization of the lines in polar coordinates: the (r, phi) tuple describes the shortest distance of the line to the origin, and the angle between the x axis and the line from the origin to the point of shortest distance. The sketch in the function documentation illustrates the situation (there is a right angle between the blue and red lines).
Now, if two lines are similar, they may be tilted slightly against each other, and their distance of closest approach to the origin will vary slightly. So you can just coalesce detected lines that differ by less than some DeltaR and some DeltaPhi (simultaneously), e.g. by taking the means of their parameters. You can also perform a weighted average if you have some measure of how trustworthy each detected line is. How big the tolerances should be depends very much on your coordinate system and your application. Of course, as albemala said, raising the threshold of the algorithm so that it produces fewer false lines in the first place will help you get more precise results.
Edit: The above assumes that there are no ambiguities in the parameters. As this is not the case here, you have to fold r to [0,∞) and phi to [0,2π). If r is negative flip its sign and add π to phi, then add a multiple of 2π to phi so that the result is in the range 0 <= phi < 2π. You will always have a branch cut somewhere, where phi jumps by 2π. This complicates the comparison of angles and their average:
You have to compare angles "modulo 2π", i.e. for two angles phi1, phi2 take the smaller of abs(phi1-phi2) and abs(abs(phi1-phi2)-2π) as their difference.
After you have identified the lines that should be coalesced, you have to bring their phi values numerically close together for the average, e.g. by adding 2π to the values smaller than π. You can then fold the averaged value back into the range [0,2π).
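To make this concrete, here is a minimal C++ sketch of the normalization, comparison and averaging steps described above. The tolerances deltaR and deltaPhi and the greedy grouping strategy are placeholder choices of mine, not from the answer, and would need tuning for your image:

#include <algorithm>
#include <cmath>
#include <vector>

static const float PI = 3.14159265358979f;
static const float TWO_PI = 2.0f * PI;

struct PolarLine { float r, phi; };

// Fold (r, phi) so that r >= 0 and 0 <= phi < 2*pi.
PolarLine normalizeLine(PolarLine l) {
    if (l.r < 0) { l.r = -l.r; l.phi += PI; }
    l.phi = std::fmod(l.phi, TWO_PI);
    if (l.phi < 0) l.phi += TWO_PI;
    return l;
}

// Angle difference "modulo 2*pi": the smaller of the two arcs.
float angleDiff(float a, float b) {
    float d = std::fabs(a - b);
    return std::min(d, TWO_PI - d);
}

// Greedily merge lines that agree within (deltaR, deltaPhi); each group
// is replaced by the mean of its members.
std::vector<PolarLine> coalesce(std::vector<PolarLine> lines,
                                float deltaR, float deltaPhi) {
    for (auto& l : lines) l = normalizeLine(l);
    std::vector<bool> used(lines.size(), false);
    std::vector<PolarLine> out;
    for (size_t i = 0; i < lines.size(); ++i) {
        if (used[i]) continue;
        float sumR = lines[i].r, sumPhi = lines[i].phi;
        int n = 1;
        for (size_t j = i + 1; j < lines.size(); ++j) {
            if (used[j]) continue;
            if (std::fabs(lines[i].r - lines[j].r) < deltaR &&
                angleDiff(lines[i].phi, lines[j].phi) < deltaPhi) {
                used[j] = true;
                float phi = lines[j].phi;
                // Bring phi numerically close across the branch cut.
                if (std::fabs(phi - lines[i].phi) > PI)
                    phi += (phi < lines[i].phi) ? TWO_PI : -TWO_PI;
                sumR += lines[j].r;
                sumPhi += phi;
                ++n;
            }
        }
        out.push_back({sumR / n, std::fmod(sumPhi / n + TWO_PI, TWO_PI)});
    }
    return out;
}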
Edit 2: Values of phi less than π combined with a negative r can only be caused by lines going through sectors 2 (x<0, y>0), 3 (x<0, y<0) and 4 (x>0, y<0) of the coordinate system. These lines will never be visible in sector 1 (x>0, y>0). Therefore, if you only consider lines through this sector, phi - π will be in the range [0,2π) and you can save the folding step. You still have to cope with the branch cut at phi = 0 when averaging and measuring angle differences, though.

Related

Projectile Motion in 3D: Calculating Time of Collision

I'm writing a function that takes an object with a trajectory (including starting position, starting velocity, and acceleration, all represented as Vector3s) in 3D space and, if it hits another object, returns the point and time of the collision. I'm using kinematic equations with a timestep to detect possible collisions, and I can get the point of collision that way, but once I have that I want to find the exact time at which that collision would occur. I thought of rearranging a kinematic equation to solve for time and plugging in what I already have, but I can't figure out how to use all three axes of motion to do this, since my other values are Vec3s and time is just a scalar. I've thought about doing the calculation on one axis only, but I'm not sure whether that would lead to an accurate result.
Would it be accurate to calculate based on just one axis, or is there a way to incorporate all three into the calculation? The formula I'm using to solve for time is:
t = (v_init +/- Sqrt((v_init)^2 - (accel * disp * 4 * .5)))/accel;
Where v_init is initial velocity, disp is total displacement, and accel is acceleration. I'm basing this off of the kinematic equation:
d = v*t + .5*a*t^2
Let me write out the general case. The component-wise equations of motion are
x(t) = x0 + v_x t + 0.5 a_x t^2
y(t) = y0 + v_y t + 0.5 a_y t^2
z(t) = z0 + v_z t + 0.5 a_z t^2
where (x0, y0, z0)^t is the initial position, (v_x, v_y, v_z)^t is the initial velocity vector, and (a_x, a_y, a_z)^t is the acceleration vector. The third component of the latter may also include the acceleration of gravity.
I assume that the collision plane is horizontal, thus having the equation z = k. Solve the equation
z(t) = k
for t to find the time t_c at which the projectile hits the plane. Then compute the collision coordinates x(t_c) and y(t_c) from the formulas above by substituting t_c for t.
If the plane has the general equation
a x + b y + c z + d = 0
I suggest putting the frame of reference on the plane, with the xy plane lying in the collision plane, and then applying the procedure above.
You may also solve the nonlinear system
x = x0 + v_x t + 0.5 a_x t^2
y = y0 + v_y t + 0.5 a_y t^2
z = z0 + v_z t + 0.5 a_z t^2
a x + b y + c z + d = 0
taking the solution with t > 0 (I dropped the dependency on t for x, y and z).
To solve it in C++, you may look for a math library, such as Eigen, which has a module for nonlinear systems.
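For the horizontal-plane case z = k described above, the impact time comes straight from the quadratic formula. Here is a minimal C++ sketch under that assumption (the function name and the choice of returning the smallest non-negative root are mine):

#include <cmath>
#include <optional>
#include <utility>

// Solve z0 + vz*t + 0.5*az*t^2 = k for the smallest t >= 0.
// Returns std::nullopt if the projectile never reaches the plane.
std::optional<double> timeToPlane(double z0, double vz, double az, double k) {
    const double a = 0.5 * az, b = vz, c = z0 - k;
    if (std::abs(a) < 1e-12) {                 // no vertical acceleration: linear case
        if (std::abs(b) < 1e-12) return std::nullopt;
        double t = -c / b;
        if (t >= 0) return t;
        return std::nullopt;
    }
    double disc = b * b - 4 * a * c;
    if (disc < 0) return std::nullopt;         // plane is never reached
    double s = std::sqrt(disc);
    double t1 = (-b - s) / (2 * a), t2 = (-b + s) / (2 * a);
    if (t1 > t2) std::swap(t1, t2);
    if (t1 >= 0) return t1;
    if (t2 >= 0) return t2;
    return std::nullopt;
}

Once t_c is known, the collision point follows by evaluating x(t_c) and y(t_c) with the same motion law.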

Foucault Pendulum simulation

Program Foucault
  IMPLICIT NONE
  REAL, DIMENSION(:), ALLOCATABLE :: t, x, y
  REAL, PARAMETER :: pi = 3.14159265358979323846, g = 9.81
  REAL :: L, vitessea, lat, h, omega, beta
  INTEGER :: i, zeta

  zeta = 1000
  Allocate(x(zeta), y(zeta), t(zeta))
  L = 67.
  lat = 49/180*pi
  omega = sqrt(g/L)
  h = 0.01

  Do i = 1, zeta
    IF (i==1 .OR. i==2) THEN
      t(1) = 0.0
      t(2) = 0.0
      x(1) = 0.1
      x(2) = 1
      y(1) = 0.0
      y(2) = 0.0
    ELSE
      t(i+1) = real(i)*h
      x(i+1) = (-omega**2*x(i) + 2.0*((y(i)-y(i-1))/h)*latang(lat))*h**2 + 2.0*x(i) - x(i-1)
      y(i+1) = (-omega**2*y(i) - 2.0*((x(i)-x(i-1))/h)*latang(lat))*h**2 + 2.0*y(i) - y(i-1)
    END IF
    WRITE(40,*) t(i), x(i)
    WRITE(60,*) t(i), y(i)
    WRITE(50,*) x(i), y(i)
  END DO

Contains

  REAL Function latang(alpha)
    REAL, INTENT(IN) :: alpha
    REAL :: sol
    latang = 2*pi*sin(alpha)/86400
  END FUNCTION

End Program Foucault
I'm trying to simulate the original Foucault pendulum in Paris. My code seems to be working, but so far I could only get the right-hand graphic below, the "flower" evolution. I changed my parameters repeatedly to try to get the left-hand graphic, but I couldn't.
I took the parameters of the Foucault pendulum installed in Paris: L = 67, Earth's angular velocity = 2*pi/86400, and latitude 49/180*pi.
My initial conditions are as written in the code. I tried a wide range of parameters, varying all of my initial conditions, the latitude and the angular velocity, but I could never get the desired left-hand results.
I used the Foucault differential equations below and coded them with the finite difference method (simpler than Runge-Kutta), replacing the second-order derivative by its central finite difference and the first-order one by its backward finite difference. I then built my loop by isolating x(i+1) and y(i+1) in both equations.
My code is very sensitive to parameters such as h (the integration step), Earth's angular velocity and the latitude (which is normal). I tried a wide range of parameters, from a big h step to a small one, from minimal to high latitudes, different initial conditions, etc., but I could never get the left-hand graphic, which is the one I need.
What can be changed to get the left one?
I was able to get the two charts by speeding up the Earth's rotation 120-fold and letting the simulation run for 32 swings of the pendulum. I also noticed that Euler integration added energy to the system, making for bad results, so I reverted to a standard RK4 implementation.
Here is the code I used to solve this ODE:
program FoucaultOde
  implicit none
  integer, parameter :: sp = kind(1.0), dp = kind(1d0)
  ! Constants
  real, parameter :: g = 9.80665, pi = 3.1415926536
  ! Variables
  real, allocatable :: y(:,:), yp(:), k0(:), k1(:), k2(:), k3(:)
  real :: lat, omega, h, L, earth, period
  real :: t0, x0, y0, vx0, vy0
  integer :: i, zeta, f1, swings
  ! Code starts here
  swings = 32
  zeta = 400*swings
  L = 67
  lat = 49*pi/180
  period = 24*60*60                     ! period = 86400
  earth = (2*pi*sin(lat)/period)*120    ! 120x multiplier for rotation
  omega = sqrt(g/L)
  allocate(y(5,zeta))
  allocate(yp(5), k0(5), k1(5), k2(5), k3(5))
  ! make pendulum complete 'swings' cycles in 'zeta' steps
  h = swings*2*pi/(omega*zeta)
  t0 = 0
  x0 = 0.5                              ! Initial displacement
  y0 = 0
  vx0 = 0
  vy0 = 0
  ! Initial conditions in the state vector Y
  Y(:,1) = [t0, x0, y0, vx0, vy0]
  do i = 2, zeta
    ! Euler method (single step)
    ! Yp = ode(Y(:,i-1))
    ! Runge-Kutta method (four steps)
    k0 = ode(Y(:,i-1))
    k1 = ode(Y(:,i-1) + h/2*k0)
    k2 = ode(Y(:,i-1) + h/2*k1)
    k3 = ode(Y(:,i-1) + h*k2)
    Yp = (k0 + 2*k1 + 2*k2 + k3)/6
    ! Take a step
    Y(:,i) = Y(:,i-1) + h*Yp
  end do
  open(newunit=f1, file='results.csv', status='replace', pad='no')
  ! write header
  write (f1, '(a15,a,a15,a,a15,a,a15,a,a15)') 't', ',', 'x', ',', 'y', ',', 'vx', ',', 'vy'
  ! write rows of data, comma-separated
  ! (bare 'g' edit descriptors are non-standard, so widths are given here)
  do i = 1, zeta
    write (f1, '(g15.7,a,g15.7,a,g15.7,a,g15.7,a,g15.7)') &
      y(1,i), ',', y(2,i), ',', y(3,i), ',', y(4,i), ',', y(5,i)
  end do
  close(f1)
contains
  function ode(Y) result(Yp)
    real, intent(in) :: Y(5)
    real :: Yp(5), t, px, py, vx, vy, ax, ay
    ! Read state vector Y to component values
    t  = Y(1)
    px = Y(2)
    py = Y(3)
    vx = Y(4)
    vy = Y(5)
    ! Reference paper:
    ! http://www.legi.grenoble-inp.fr/people/Achim.Wirth/final_version.pdf
    ax = -(omega**2)*px + 2*vy*earth   ! (equation 53)
    ay = -(omega**2)*py - 2*vx*earth   ! (equation 54)
    ! State vector rate. Note, rate of time is always 1.0
    Yp = [1.0, vx, vy, ax, ay]
  end function
end program FoucaultOde
The resulting file results.csv looks like this for me (for checking)
t, x, y, vx, vy
.000000 , 5.000000 , .000000 , .000000 , .000000
.4105792E-01, 4.999383 , .1112020E-06, -.3004657E-01, .8124921E-05
.8211584E-01, 4.997533 , .8895339E-06, -.6008571E-01, .3249567E-04
.1231738 , 4.994450 , .3001796E-05, -.9011002E-01, .7310022E-04
.1642317 , 4.990134 , .7114130E-05, -.1201121 , .1299185E-03
.2052896 , 4.984587 , .1389169E-04, -.1500844 , .2029225E-03
.2463475 , 4.977810 , .2399832E-04, -.1800197 , .2920761E-03
.2874054 , 4.969805 , .3809619E-04, -.2099106 , .3973353E-03
...
from which I plotted the 2nd and 3rd columns in one chart, and the 4th and 5th for the second chart.
There is one thing that may be wrong depending on how you manage different step sizes, and an observation on the physics of the real-world example. With the initialization of the arrays, you imply an initial velocity of about 0.9/0.01=90 [m/s] in x direction away from the center. To get compatible results for different step sizes, you would need to adapt the calculation of x(2). However, in the graphs the plot starts from a point with zero velocity. This you can implement to first order by setting x(2)=x(1)=1. As the used integration method is also first order, this is sufficient.
For the second point, note that one can write the system using complex coordinates z=x+iy as
z'' = -w^2*z - 2*i*E*z', E = Omega*sin(theta)
This is a linear ODE with constant coefficients, the solution of it is
z(t) = exp(-i*E*t) * (A*cos(w1*t)+B*sin(w1*t)), w1 = sqrt(w^2+E^2)
This describes a pendulum motion of frequency w1 whose plane rotates with frequency E clockwise. The grand rotation has period T=2*pi/E, during which w1*T/(2*pi)=w1/E pendulum swings occur.
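For completeness (a standard derivation, not spelled out in the original answer): substituting $z = e^{\lambda t}$ into the ODE gives the characteristic equation
$$\lambda^2 + 2iE\lambda + w^2 = 0 \;\Rightarrow\; \lambda = i\left(-E \pm \sqrt{E^2 + w^2}\right) = i(-E \pm w_1),$$
which yields exactly the solution $z(t) = e^{-iEt}\bigl(A\cos(w_1 t) + B\sin(w_1 t)\bigr)$ stated above.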
Now insert your numbers, w=sqrt(g/L)=0.383 and E=2*pi*sin(49°)/86400=5.49e-05, so that essentially w1=w. The number of pendulum cycles per full rotation is w/E=6972, so that you can expect a densely filled circle in the plot. Or a very narrow double wedge if only a few cycles are plotted. As each cycle takes 2*pi/w=16.4 [s], and the integration goes 1000 steps of step size 0.01, in the plot as it is you can expect a swing forth and part of the swing back.
To be more realistic, set the initial velocity to zero, that is, the pendulum is taken to its start position and then let go. Also increase the time to 30 [s] to have more than one pendulum cycle in the plot.
From this we can see that the solutions converge and, with some imagination, that they converge linearly.
To get a plot like in the cited images, one needs a much smaller fraction of w/E, counting the swings, it has to be around 15. Note that you can not get this ratio anywhere on earth with a realistically scaled pendulum. So set w=pi, E=pi/16 and integrate over 15 time units using the first order method.
This deteriorates really fast, even for the smallest step size with 40 points in a pendulum cycle.
For a better result, increase the local truncation order to the next higher by using the central difference in the first derivative approximation.
z(i+1) - 2*z(i) + z(i-1) = -w^2*z(i)*dt^2 - i*E*(z(i+1)-z(i-1))*dt
z(i+1) = ( 2*z(i) - z(i-1) - w^2*z(i)*dt^2 + i*E*z(i-1)*dt ) / (1+i*E*dt)
The division by the complex number can also be easily carried out in the real components of the trajectory,
! x(i+1)-2*x(i)+x(i-1) = h^2*(-omega**2*x(i)) + h*earth*(y(i+1)-y(i-1))
! y(i+1)-2*y(i)+y(i-1) = h^2*(-omega**2*y(i)) - h*earth*(x(i+1)-x(i-1))
t(i) = t(i-1) + h
cx = (2-(h*omega)**2)*x(i) - x(i-1) - h*earth*y(i-1)
cy = (2-(h*omega)**2)*y(i) - y(i-1) + h*earth*x(i-1)
den = 1+(h*earth)**2
x(i+1) = (cx + h*earth*cy)/den
y(i+1) = (cy - h*earth*cx)/den
Now, to respect the increased order, the initial points also need one more order of accuracy. Using again zero initial speed, the second-order Taylor expansion gives
z(2) = z(1) - 0.5*w^2*z(1)*dt^2
All the step sizes that gave deviating and structurally deteriorating results with the first-order method now give visually identical, structurally stable results with this second-order method.
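To illustrate, here is a compact C++ sketch of this second-order scheme in complex coordinates z = x + iy, using the test parameters w = pi, E = pi/16 suggested above (the step count and output format are arbitrary choices of mine):

#include <complex>
#include <cstdio>
#include <vector>

int main() {
    using cd = std::complex<double>;
    const double pi = 3.141592653589793;
    const double w = pi, E = pi / 16;   // test parameters from the text
    const double T = 15.0;              // integrate over 15 time units
    const int    n = 1500;              // number of steps (arbitrary)
    const double dt = T / n;
    const cd I(0.0, 1.0);

    std::vector<cd> z(n + 1);
    z[0] = cd(1.0, 0.0);                          // displaced, at rest
    z[1] = z[0] - 0.5 * w * w * z[0] * dt * dt;   // second-order first step

    // z(i+1) = (2 z(i) - z(i-1) - w^2 dt^2 z(i) + i E dt z(i-1)) / (1 + i E dt)
    for (int i = 1; i < n; ++i)
        z[i + 1] = (2.0 * z[i] - z[i - 1] - w * w * dt * dt * z[i]
                    + I * E * dt * z[i - 1]) / (1.0 + I * E * dt);

    for (int i = 0; i <= n; ++i)                  // x,y pairs for plotting
        std::printf("%g,%g\n", z[i].real(), z[i].imag());
}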

Find (x,y) subpixel coordinates of a maximum using discrete quadratic interpolation

I have to find the subpixel (x, y) coordinates of the maximum value given a set of discrete points. In my case, I run the cv::matchTemplate function, which slides a model window along an image and returns a score value for each pixel position. The result is an image with the score for each position, together with the location (x0, y0) of the maximum value. These are the values around the maximum found:
      x_1    x0     x1
y_1 | 0.91 | 0.89 | 0.90 |
y0  | 0.92 | 0.99 | 0.89 |
y1  | 0.95 | 0.95 | 0.90 |
I would like to use quadratic interpolation to find the subpixel coordinates of the interpolated maximum value, using just the nearest neighbors.
In the 1D case, I use this formula (assuming x0 is the origin):
interpolated_x = (x_1-x1)/(2.*(x_1-2.*x0+x1));
For example:
x_1 x0 x1
|0.92 | 0.99 | 0.89|
you get interpolated_x = -0.08823, which is correctly slightly to the left of x0.
Is there some C++ code for the 2d case?
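A common approach (one option, not necessarily the best) is to apply the 1D formula independently along each axis using the 4-neighborhood of the maximum; a full quadratic fit over all 9 neighbors is the more rigorous alternative. A minimal OpenCV-flavored sketch under that assumption (the function name is mine):

#include <opencv2/core.hpp>

// Sub-pixel peak location from the neighborhood of the integer maximum,
// applying the 1D parabola formula along each axis separately.
// Assumes a CV_32F score image with a strict maximum at maxLoc,
// and that maxLoc is not on the image border.
cv::Point2f subPixelPeak(const cv::Mat& score, cv::Point maxLoc) {
    const int x0 = maxLoc.x, y0 = maxLoc.y;
    const float l = score.at<float>(y0, x0 - 1);   // left
    const float r = score.at<float>(y0, x0 + 1);   // right
    const float u = score.at<float>(y0 - 1, x0);   // up
    const float d = score.at<float>(y0 + 1, x0);   // down
    const float c = score.at<float>(y0, x0);       // center (maximum)
    // 1D formula from the question, applied per axis.
    const float dx = (l - r) / (2.f * (l - 2.f * c + r));
    const float dy = (u - d) / (2.f * (u - 2.f * c + d));
    return cv::Point2f(x0 + dx, y0 + dy);
}

With the cv::matchTemplate result, you would obtain maxLoc from cv::minMaxLoc and then call subPixelPeak(result, maxLoc).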

Why my Gradient is wrong (Coursera, Logistic Regression, Julia)?

I'm trying to do Logistic Regression from Coursera in Julia, but it doesn't work.
The Julia code to calculate the Gradient:
sigmoid(z) = 1 / (1 + e ^ -z)
hypotesis(theta, x) = sigmoid(scalar(theta' * x))
function gradient(theta, x, y)
    (m, n) = size(x)
    h = [hypotesis(theta, x[i,:]') for i in 1:m]
    g = Array(Float64, n, 1)
    for j in 1:n
        g[j] = sum([(h[i] - y[i]) * x[i, j] for i in 1:m])
    end
    g
end
If this gradient is used, it produces wrong results. I can't figure out why; the code looks right.
In the full Julia script, the optimal theta is calculated both with my gradient descent implementation and with the built-in Optim package, and the results are different.
The gradient is correct (up to a scalar multiple, as roygvib points out). The problem is with the gradient descent.
If you look at the values of the cost function during your gradient descent, you will see a lot of NaN, which probably come from the exponential: lowering the step size (e.g., to 1e-5) will avoid the overflow, but you will have to increase the number of iterations a lot (perhaps to 10_000_000).
A better (faster) solution would be to let the step size vary. For instance, one could multiply the step size by 1.1 if the cost function improves after a step (the optimum still looks far away in this direction: we can go faster), and divide it by 2 if it does not (we went too fast and ended up past the minimum). One could also do a line search in the direction of the gradient to find the best step size (but this is time-consuming and can be replaced by approximations, e.g., Armijo's rule).
Rescaling the predictive variables also helps.
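A sketch of that adaptive step-size rule in C++ (the 1D quadratic test function is a stand-in of mine, not the course's cost function; this variant also rejects steps that worsen the cost):

#include <cstdio>
#include <functional>

// Gradient descent with an adaptive step size: grow alpha by 1.1 when
// the cost improves, halve it (and retry from the old point) when not.
double adaptiveDescent(const std::function<double(double)>& cost,
                       const std::function<double(double)>& grad,
                       double x, double alpha, int iters) {
    double c = cost(x);
    for (int i = 0; i < iters; ++i) {
        double xNew = x - alpha * grad(x);
        double cNew = cost(xNew);
        if (cNew < c) { x = xNew; c = cNew; alpha *= 1.1; }
        else          { alpha /= 2.0; }   // went too fast: shrink
    }
    return x;
}

int main() {
    // Stand-in problem: minimize (x - 3)^2, minimum at x = 3.
    auto cost = [](double x) { return (x - 3) * (x - 3); };
    auto grad = [](double x) { return 2 * (x - 3); };
    std::printf("min at x = %g\n", adaptiveDescent(cost, grad, 0.0, 0.1, 200));
}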
I tried comparing gradient() in the OP's code with the numerical derivative of cost_j() (which is the objective function of the minimization) using the following routine:
function grad_num( theta, x, y )
    g = zeros( 3 )
    eps = 1.0e-6
    disp = zeros( 3 )
    for k = 1:3
        disp[:] = theta[:]
        disp[ k ] = theta[ k ] + eps
        plus = cost_j( disp, x, y )
        disp[ k ] = theta[ k ] - eps
        minus = cost_j( disp, x, y )
        g[ k ] = ( plus - minus ) / ( 2.0 * eps )
    end
    return g
end
But the gradient values obtained from the two routines do not seem to agree very well (at least for the initial stage of the minimization)... So I manually derived the gradient of cost_j( theta, x, y ), from which it seems that the division by m is missing:
#/ OP's code
# g[j] = sum( [ (h[i] - y[i]) * x[i, j] for i in 1:m ] )
#/ modified code
g[j] = sum( [ (h[i] - y[i]) * x[i, j] for i in 1:m ] ) / m
Because I am not very sure whether the above code and expression are really correct, could you check them yourself?
But in fact, regardless of whether I use the original or the corrected gradient, the program converges to the same minimum value (0.2034977016, almost the same as that obtained from Optim), because the two gradients differ only by a multiplicative factor! Because the convergence was very slow, I also modified the step size alpha adaptively, following the suggestion by Vincent (here I used more moderate values for acceleration/deceleration):
function gradient_descent(x, y, theta, alpha, n_iterations)
    ...
    c = cost_j( theta, x, y )
    for i = 1:n_iterations
        c_prev = c
        c = cost_j( theta, x, y )
        if c - c_prev < 0.0
            alpha *= 1.01
        else
            alpha /= 1.05
        end
        theta[:] = theta - alpha * gradient(theta, x, y)
    end
    ...
end
and called this routine as
optimal_theta = gradient_descent( x, y, [0 0 0]', 1.5e-3, 10^7 )[ 1 ]
The variation of cost_j versus iteration steps is plotted below.

What does the resultant matrix of Homography denote?

I have two frames of shaky video. I applied a homography to all the inlier points. The resulting matrices that I get for different frames look like this:
 0.2711  -0.0036   0.853
-0.0002   0.2719  -0.2247
 0.0000  -0.0000   0.2704

and

 0.4787  -0.0061   0.5514
 0.0007   0.4798  -0.0799
 0.0000  -0.0000   0.4797
What are those similar values on the diagonal, and how can I retrieve the translation component from these matrices?
Start with the following observation: a homography matrix is only defined up to scale. This means that if you divide or multiply all the matrix coefficients by the same number, you obtain a matrix that represents the same geometrical transformation. This is because, in order to apply the homography to a point at coordinates (x, y), you multiply its matrix H on the right by the column vector [x, y, 1]' (here I use the apostrophe symbol to denote transposition), and then divide the result H * x = [u, v, w]' by the third component w. Therefore, if instead of H you use the scaled matrix (s * H), you end up with [s*u, s*v, s*w]', which represents the same 2D point.
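As a minimal C++ illustration of this (the function name is mine): applying H and s*H to the same point gives the same 2D result, because the scale cancels in the division by w.

#include <array>

// Apply a 3x3 homography H to the point (x, y): compute H * [x, y, 1]'
// and divide by the third component w.
std::array<double, 2> applyHomography(const double H[3][3], double x, double y) {
    double u = H[0][0]*x + H[0][1]*y + H[0][2];
    double v = H[1][0]*x + H[1][1]*y + H[1][2];
    double w = H[2][0]*x + H[2][1]*y + H[2][2];
    return { u / w, v / w };   // scaling H by s cancels out here
}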
So, to understand what is going on with your matrices, start by dividing both of them by their bottom-right component:
octave:1> a = [
> 0.2711 -0.0036 0.853
> -0.0002 0.2719 -0.2247
> 0.0000 -0.0000 0.2704
> ];
octave:2> b=[
> 0.4787 -0.0061 0.5514
> 0.0007 0.4798 -0.0799
> 0.0000 -0.0000 0.4797];
octave:3> a/a(3,3)
ans =
1.00259 -0.01331 3.15459
-0.00074 1.00555 -0.83099
0.00000 -0.00000 1.00000
octave:4> b/b(3,3)
ans =
0.99792 -0.01272 1.14947
0.00146 1.00021 -0.16656
0.00000 -0.00000 1.00000
Now suppose, for a moment, that the third-column elements in both matrices were [0, 0, 1]'. Then the effect of applying either matrix to any point (x, y) would be to move it by approximately 1/100 of a unit (say, a pixel), i.e. basically not changing it by much.
Plugging back the actual values of the third column shows that both matrices essentially translate the whole image by a constant amount.
So, in conclusion, having near-equal values on the diagonal, and very small values at indices (1,2) and (2,1), means that these homographies are both (essentially) pure translations.
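A small C++ sketch of that normalization step, assuming the homographies are stored as CV_64F cv::Mat (the function name is mine):

#include <opencv2/core.hpp>

// Normalize H by its bottom-right element and read off the translation.
// Meaningful here because, after normalization, the rest of H is close
// to the identity (an essentially pure translation).
cv::Point2d translationFromHomography(const cv::Mat& H) {
    cv::Mat Hn = H / H.at<double>(2, 2);   // remove the arbitrary scale
    return { Hn.at<double>(0, 2),          // Tx
             Hn.at<double>(1, 2) };        // Ty
}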
Various transformations involve elementary operations such as addition, multiplication, division, and addition of a constant. Only the first two can be modeled by regular matrix multiplication. Note that addition of a constant and, in the case of a homography, division cannot be represented by matrix multiplication in 2D. Adding a third coordinate (that is, converting points to a homogeneous representation) solves this problem. For example, if you want to add the constant 5 to x, you can do it like this:
| 1 0 5 |   | x |   | x+5 |
| 0 1 0 | * | y | = |  y  |
            | 1 |
Note that the matrix is 2x3, not 2x2, and the coordinates have three numbers even though they represent 2D points. Also, the last transition converts back from the homogeneous to the Euclidean representation. Two things are thus achieved: first, all operations (multiplication, division, addition of variables and addition of constants) can be represented by matrix multiplication; second, we can chain multiple operations (by multiplying their matrices) and still have only a single matrix as the result.
OK, now let's explain homography. A homography is best considered in the context of the whole family of transformations, moving from simple ones to complex ones. In other words, it is easier to understand the meaning of the homography coefficients by comparing them to the coefficients of the simpler Euclidean, similarity and affine transforms. The Euclidean transformation is the simplest and represents a rigid rotation and translation in space (note that the matrix is 2x3). For the 2D case:
cos(a) -sin(a) Tx
sin(a) cos(a) Ty
Similarity adds scaling to the rotation coefficients. So now the matrix looks like this:
scl*cos(a)  -scl*sin(a)  Tx
scl*sin(a)   scl*cos(a)  Ty
An affine transformation adds shearing, so the rotation coefficients become unrestricted:
a11 a12 Tx
a21 a22 Ty
A homography adds another row that divides the output x and y (recall how division appears in the transition from homogeneous to Euclidean coordinates, explained above), and thus introduces projectivity: a non-uniform scaling that is a function of the point coordinates. This is best understood by looking at the transition to Euclidean coordinates:
a11 a12 Tx    x   a11*x + a12*y + Tx    (a11*x + a12*y + Tx)/(a31*x + a32*y + a33)
a21 a22 Ty  * y = a21*x + a22*y + Ty -> (a21*x + a22*y + Ty)/(a31*x + a32*y + a33)
a31 a32 a33   1   a31*x + a32*y + a33
Thus a homography has an extra row compared to other transformations such as affine or similarity. This extra row allows objects to be scaled depending on their coordinates, which is how projectivity is created.
Finally, speaking of your numbers:
0.4787 -0.0061 0.5514
0.0007 0.4798 -0.0799
0.0000 -0.0000 0.4797
This is not a homography! Just look at the last row: the first two coefficients are 0, so there is no projectivity. Since a11 = a22 (and a12 ≈ -a21 ≈ 0), this is not even a general affine transformation; it is rather a similarity transform. The translation is
Tx = 0.5514/0.4797 and Ty = -0.0799/0.4797