MPC State Space MIMO Scaling - state

I am trying to simulate the state-space model of the Citation aircraft from Example 2.7, page 64, of the book "Predictive Control with Constraints" (Jan Maciejowski, 2002). A, B, C, D are the state-space matrices of the model.
I have just one input and three outputs. The input is in radians. The first output is in radians, the second output is in meters, and the third output is in meters/second.
But in the book's plots the units change: the input is shown in degrees, the first output is shown in degrees, and the remaining outputs keep their original units.
My question:
Do A, B, C, D need to be changed to the new scale?
For example, since the input unit changes from radians to degrees, does the B matrix need to be rescaled before using it, i.e.
B = rad2deg(B)? Or, for the first output, do I change the C matrix with C(1,:) = rad2deg(C(1,:))?
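For reference, here is the standard unit-scaling identity for a linear state-space model (this is general linear-systems algebra, a sketch rather than anything taken from the book): if the plant input u (radians) relates to the rescaled input \tilde{u} (degrees) by u = S_u \tilde{u}, and the rescaled outputs are \tilde{y} = S_y y, then substituting into \dot{x} = Ax + Bu, y = Cx + Du gives

\dot{x} = A x + (B S_u)\,\tilde{u}, \qquad \tilde{y} = (S_y C)\,x + (S_y D S_u)\,\tilde{u}

Here S_u = \pi/180 and S_y = \mathrm{diag}(180/\pi,\ 1,\ 1), so A is unchanged and the first row of C is indeed rad2deg(C(1,:)), but B picks up the factor \pi/180, i.e. deg2rad(B) rather than rad2deg(B). Alternatively, leave A, B, C, D untouched and convert only the signals you plot.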


Backpropagation for a 2-Dimensional Neural Network in C++

I am learning about two-dimensional neural networks, so I am facing many obstacles, but I believe it is worth it and I am really enjoying the learning process.
Here's my plan: to make a 2-D NN recognize images of digits. The images are 5-by-3 grids and I prepared 10 images, from zero to nine. For example, this would be the number 7:
Number 7 has indexes 0, 1, 2, 5, 8, 11, 14 as 1s (or, equivalently, 3, 4, 6, 7, 9, 10, 12, 13 as 0s), and so on. Therefore, my input layer will be a 5-by-3 neuron layer and I will feed it only zeros or ones (nothing in between; which indexes are set depends on which image I am feeding the layer).
My output layer, however, will be a one-dimensional layer of 10 neurons. Depending on which digit is recognized, one neuron should fire a value of one and the rest should be zeros (should not fire).
I am done implementing everything, but I have a problem in the computation and would really appreciate any help. I am getting an extremely high error rate and extremely low (negative) output values on all output neurons, and the values (error and output) do not change even on the 10,000th pass.
I would love to go further and post my backpropagation methods, since I believe the problem is there. However, to break the work down, I would love to hear some comments first; I want to know whether my design is reasonable.
Does my plan make sense?
All the posts speak about ranges (0→1, -1→+1, 0.01→0.5, etc.). Will it work with plain {0 or 1} values on the output layer rather than a range? If so, how can I control that?
I am using the hyperbolic tangent as my transfer function. Does it make a difference compared to the sigmoid or other functions?
Any ideas/comments/guidance are appreciated, and thanks in advance.
Well, from the description given above, I think that the design and approach you've taken are correct! With respect to the choice of activation function, remember that those functions help pick out the neurons with the largest activation, and that their algebraic properties, such as an easy derivative, help with the definition of backpropagation. Taking this into account, you should not worry about your choice of activation function.
The ranges that you mention above correspond to scaling of the input; it is better to have your input images in the range 0 to 1. This scales the error surface and helps with the speed and convergence of the optimization process. Because your input set is composed of images, and each image is composed of pixels, the minimum and maximum values a pixel can attain are 0 and 255, respectively. To scale your input in this example, divide each value by 255.
Now, with respect to the training problems: have you tried checking whether your gradient calculation routine is correct, i.e. by evaluating the cost function J numerically? If not, try generating a toy vector theta that contains all the weight matrices involved in your neural network, and evaluate the gradient at each point using the definition of the gradient. Sorry for the MATLAB example, but it should be easy to port to C++:
numgrad = zeros(size(theta));  % numerical gradient, same shape as theta
perturb = zeros(size(theta));
e = 1e-4;
for p = 1:numel(theta)
    % Set perturbation vector
    perturb(p) = e;
    loss1 = J(theta - perturb);
    loss2 = J(theta + perturb);
    % Compute numerical gradient (central difference)
    numgrad(p) = (loss2 - loss1) / (2*e);
    perturb(p) = 0;
end
After evaluating the function, compare the numerical gradient with the gradient calculated by backpropagation. If the difference between the two is less than about 3e-9 for each component, then your implementation is most likely correct.
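Since the question is about C++, here is one possible port of the same central-difference check (a minimal sketch; numericalGradient and the toy cost J are illustrative names of mine, and in practice J would evaluate your network's cost for a flattened weight vector theta):

#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Central-difference numerical gradient of a cost function J at theta:
// numgrad[p] = (J(theta + e*u_p) - J(theta - e*u_p)) / (2*e)
std::vector<double> numericalGradient(
        const std::function<double(const std::vector<double>&)>& J,
        std::vector<double> theta, double e = 1e-4)
{
    std::vector<double> numgrad(theta.size(), 0.0);
    for (std::size_t p = 0; p < theta.size(); ++p) {
        const double saved = theta[p];
        theta[p] = saved + e;
        const double loss2 = J(theta);
        theta[p] = saved - e;
        const double loss1 = J(theta);
        theta[p] = saved;                         // restore the weight
        numgrad[p] = (loss2 - loss1) / (2.0 * e);
    }
    return numgrad;
}

int main()
{
    // Toy check: J(theta) = 0.5*||theta||^2 has exact gradient theta itself.
    auto J = [](const std::vector<double>& t) {
        double s = 0.0;
        for (double x : t) s += 0.5 * x * x;
        return s;
    };
    const std::vector<double> theta = {0.3, -1.2, 2.0};
    const std::vector<double> g = numericalGradient(J, theta);
    for (std::size_t i = 0; i < g.size(); ++i)
        std::printf("numgrad[%zu] = % .6f (exact % .6f)\n", i, g[i], theta[i]);
}

Compare the vector it returns against your backpropagated gradients component by component.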
I recommend checking out the UFLDL tutorials offered by the Stanford Artificial Intelligence Laboratory; there you can find a lot of information related to neural networks and their paradigms. It's worth taking a look!
http://ufldl.stanford.edu/wiki/index.php/Main_Page
http://ufldl.stanford.edu/tutorial/

What is a safety wall and how do I use it?

I've Googled and found zero answers for "safety wall", so I'm pretty sure that's not the correct term. I'll explain myself:
From what I've read, I'm talking about taking a two-dimensional array and placing it inside a larger array with one extra cell added on each side, to make sure I stay safe and never step outside the limits I've created.
What is the right term for this technique and how would I use it?
As others have said, the term you want to search for is "sentinel" or "sentinel control".
You can use a sentinel when you don't know the size or limits of your input. For example, suppose you are writing a program that calculates the average grade of a class, but you don't know how many students are in the class; or you are filling an array whose limits you don't know. A sentinel value handles this job.
Let's look at this example:
int grade;
int totalgrade = 0;
int studentCount = 0;

std::cin >> grade;
while (grade != -1)   // loop until the user enters -1 (the sentinel)
{
    totalgrade = totalgrade + grade;
    studentCount++;
    std::cin >> grade;
}
So if you don't know how many values will be entered by the user, you can use a sentinel for the job. You can also read more about sentinel values.
These are usually referred to as "ghost cells". They are often used in numerical simulations or image processing where you are applying a kernel (such as a smoothing or difference operator) to an array; they allow you to apply the kernel without special-casing the edges.
For example, suppose you want to smooth out an image - you could use a kernel like:
0.0 0.1 0.0
0.1 0.6 0.1
0.0 0.1 0.0
You apply this by taking the source image and, for every pixel, computing the value of the destination pixel by centering the kernel on the source pixel and adding up the weighted contributions of the nine covered pixels (0.6 times the value of the source pixel, plus 0.1 times the value of each of the pixels above, below, and to the sides). Do this for every pixel and you'll end up with a smoothed version of your original image.
This works well, but the question is: what do you do at the border cells? Rather than having complicated if/then logic for the border cases (which can be tricky and can degrade performance), you can just add one layer of ghost cells on each side.
Of course, you have to pick values for the ghost cells before you run your algorithm, and how you pick them depends on the algorithm. You might choose to set them all to zero, but in the case of the smoothing kernel this will darken your image at its borders, so that's probably not what you want. A better plan is to fill the ghost cells with the value of the nearest non-ghost cell.
You also need to figure out how many ghost cells you need, which depends on the size of your kernel. For a 3x3 kernel like above, you need 1 layer of ghost cells (to take care of the part of the kernel that might "hang off" the edge). More complicated kernels might require more (a 5x5 kernel would require 2 layers, etc).
You can google "ghost cell computation" to find out more (add 'computation' or you'll get a lot of biology results!)
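To make the ghost-cell idea concrete, here is a minimal C++ sketch under stated assumptions (row-major float image, one ghost layer filled by nearest-pixel replication; the function name smooth is just illustrative):

#include <algorithm>
#include <cstddef>
#include <vector>

// Smooth an H x W image with the 3x3 kernel above, using a one-cell ghost
// layer so the inner loop needs no edge special-casing.
std::vector<float> smooth(const std::vector<float>& img, int H, int W)
{
    const int GW = W + 2, GH = H + 2;             // padded ("ghost") dimensions
    std::vector<float> pad(static_cast<std::size_t>(GW) * GH);

    // Fill the padded array; clamping copies the nearest real pixel into ghosts.
    for (int y = 0; y < GH; ++y)
        for (int x = 0; x < GW; ++x) {
            const int sy = std::clamp(y - 1, 0, H - 1);
            const int sx = std::clamp(x - 1, 0, W - 1);
            pad[y * GW + x] = img[sy * W + sx];
        }

    // Apply the kernel everywhere, including at the borders, with no if/then logic.
    std::vector<float> out(static_cast<std::size_t>(W) * H);
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            const int c = (y + 1) * GW + (x + 1); // centre in the padded array
            out[y * W + x] = 0.6f * pad[c]
                           + 0.1f * (pad[c - 1] + pad[c + 1]        // left, right
                                   + pad[c - GW] + pad[c + GW]);    // above, below
        }
    return out;
}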

Fortran beginner - writing a variable to an output file

I am starting to work with a Fortran CFD program and want to update the variables that it writes to an output file.
I want to output several columns: the I and J coordinates (IL and JL), water surface elevation (SURFEL), bottom elevation of the coordinate (BELV), depth of water (HP) and finally, and this is where I have the question, the maximum water surface elevation of the coordinate during the simulation (SURFELMAX). L refers to a specific I,J coordinate; LA is the last coordinate in the simulation.
So far I have:
DO L=2,LA
  SURFEL=BELV(L)+HP(L)
  IF (SURFEL.GT.SURFELMAX) THEN
    SURFELMAX=SURFEL
  ELSE IF (SURFELMAX.GT.SURFEL) THEN
    SURFELMAX=SURFELMAX
    WRITE(10,200) IL(L),JL(L),SURFEL,SURFELMAX
  ENDIF
ENDDO
Everything works OK apart from SURFELMAX: the highest recorded surface elevation that occurred at any coordinate in the whole domain is written for every coordinate, i.e. the column is filled with a single value, the highest experienced anywhere in the domain during the simulation.
Would I need to first allocate an array for SURFELMAX and have SURFEL checked against it each time to see if it has increased? If so, could somebody point me in the right direction?
If I understand the requirements correctly, then you want to calculate SURFELMAX before you start writing out. This could simply be:
SURFELMAX = MAXVAL(BELV(2:LA)+HP(2:LA))
WRITE(10,200) (IL(L), JL(L), BELV(L)+HP(L), SURFELMAX, L=2,LA)
(or even as a single line).
It appears I didn't understand correctly; I'll try again - keeping the above as a warning to others.
It seems that you do indeed want SURFELMAX(2:LA) where each element is the highest in a given cell to date.
do L=2, LA
  SURFELMAX(L) = MAX(SURFELMAX(L), BELV(L)+HP(L)) ! Store the historical maximum
  WRITE (10,200) IL(L), JL(L), BELV(L)+HP(L), SURFELMAX(L)
end do
where, initially, SURFELMAX has been set to a sufficiently small value. You could also explicitly calculate SURFEL if that is needed.
If this is time dependent, then you will have to define a 2-D array SURFELMAX of size (1:LA, 1:T) (T = number of time steps, LA = number of active coordinates).
Then increment the time step (say the iterator is called I_T) outside of the loop through the domain.
Finally, assign the maximum value at each coordinate to SURFELMAX(L, I_T).

Determine difference in stops between images with no EXIF data

I have a set of images of the same scene shot with different exposures. These images have no EXIF data, so there is no way to extract useful info like f-stop, shutter speed, etc.
What I'm trying to do is determine the difference in stops between the images, i.e. Image1 is +1.3 stops relative to Image0.
My current approach is to first calculate luminance from the image's RGB values using the equation
L = 0.2126 * R + 0.7152 * G + 0.0722 * B
I've seen different coefficients used in this equation, but generally they should not affect the end result L too much.
After that I derive the log-average luminance of the image:
exp(avg of log(luminance of image))
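For concreteness, this is roughly what I'm computing (a C++ sketch; the epsilon guard against log(0) is an addition of mine):

#include <cmath>
#include <vector>

struct RGB { double r, g, b; };

// Log-average luminance: exp of the mean of the log-luminances.
double logAverageLuminance(const std::vector<RGB>& pixels)
{
    const double eps = 1e-6;                      // guards log(0) on black pixels
    double sumLog = 0.0;
    for (const RGB& p : pixels) {
        const double L = 0.2126 * p.r + 0.7152 * p.g + 0.0722 * p.b;
        sumLog += std::log(L + eps);
    }
    return std::exp(sumLog / pixels.size());
}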
But somehow the log-average luminance doesn't seem to give much indication of the exposure difference between the images.
Any ideas on how to determine exposure difference?
Edit: this is in C/C++.
You generally have to solve two problems:
1. Linearize your image data
(In case it's not obvious what is meant: two times more light collected by your pixel shall result in two times the intensity value in your linearized image.)
Your image input might be (sufficiently) linearized already, in which case you can skip to part 2. If your content came from a camera and it's a JPEG, then this will almost certainly not be the case.
The real 'solution' to this problem is finding the camera response function, which you want to invert and apply to your image data to get linear intensity values. This is by no means a trivial task. The EMoR model is widely used in all sorts of software (Photoshop, PTGui, Photomatix, etc.) to describe camera response functions. Some open source software solving this problem (but using a different model iirc) is PFScalibrate.
Having said that, you may get away with a simple inverse-gamma application. A rough guesstimate of the right gamma value might be found by doing this (a code sketch follows the list):
capture an evenly lit, static scene with two exposure times e and e/2
apply a couple of inverse gamma transforms (e.g. for 1.8 to 2.4 in 0.1 steps) on both images
multiply all the short exposure images with 2.0 and subtract them from the respective long exposure images
pick the gamma that led to the smallest overall difference
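Here is a sketch of that gamma search in C++ (my own illustrative code, assuming two pixel-aligned 8-bit images of the same static scene taken at exposures e and e/2):

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Try inverse gammas from 1.8 to 2.4 in 0.1 steps; after linearization the
// long exposure should be ~2x the short one, so return the gamma that
// minimizes the total residual |linear(long) - 2*linear(short)|.
double estimateGamma(const std::vector<std::uint8_t>& longExp,
                     const std::vector<std::uint8_t>& shortExp)
{
    double bestGamma = 1.8;
    double bestDiff = std::numeric_limits<double>::max();
    for (double gamma = 1.8; gamma <= 2.4 + 1e-9; gamma += 0.1) {
        double diff = 0.0;
        for (std::size_t i = 0; i < longExp.size() && i < shortExp.size(); ++i) {
            const double lin1 = std::pow(longExp[i] / 255.0, gamma);  // inverse gamma
            const double lin2 = std::pow(shortExp[i] / 255.0, gamma);
            diff += std::fabs(lin1 - 2.0 * lin2);
        }
        if (diff < bestDiff) {
            bestDiff = diff;
            bestGamma = gamma;
        }
    }
    return bestGamma;
}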
2. Find the actual difference in irradiation in stops, i.e. log2(scale factor)
Presuming the scene was static (no moving objects or camera), this is relatively easy:
sum1 = sum2 = 0
foreach pixel pair (p1, p2) from the two images:
    if p1 or p2 is close to 0 or 255:
        skip this pair
    sum1 += p1 and sum2 += p2
return log2(sum1 / sum2)
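A direct C++ translation of that pseudocode might look like this (a sketch, assuming two pixel-aligned 8-bit grayscale images; the saturation thresholds 5 and 250 are arbitrary choices for "close to 0 or 255"):

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Exposure difference in stops between two linearized images: log2 of the
// ratio of summed intensities, skipping nearly black or saturated pairs.
double stopDifference(const std::vector<std::uint8_t>& img1,
                      const std::vector<std::uint8_t>& img2)
{
    double sum1 = 0.0, sum2 = 0.0;
    for (std::size_t i = 0; i < img1.size() && i < img2.size(); ++i) {
        const std::uint8_t p1 = img1[i], p2 = img2[i];
        if (p1 < 5 || p1 > 250 || p2 < 5 || p2 > 250)
            continue;                             // skip this pair
        sum1 += p1;
        sum2 += p2;
    }
    return std::log2(sum1 / sum2);
}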
On large images this will work just as well, and a lot faster, if you sub-sample the images.
If the camera was static but the scene was not (moving objects), this starts to work less well. I produced acceptable results in this case by simply repeating the above procedure several times, using the output of the previous run as an estimate for the correct scale factor, and then discarding pixel pairs whose quotient is too far from the current estimate. So basically, replace the above if line with the following:
if <see above> or if abs(log2(p1/p2) - estimate) > 0.5:
I'd stop the repetition after a fixed number of iterations or if two consecutive estimates are sufficiently close to each other.
EDIT: A note about conversion to luminance
You don't need to do that at all (as Tony D already mentioned), and if you insist, then do it after the linearization step (as Mark Ransom noted). In a perfect setting (static scene, no noise, no de-mosaicing, no quantization) every channel of every pixel would have the same ratio p1/p2 (if neither is saturated), so the relative weighting of the different channels is irrelevant. You may sum over all pixels/channels (weighting R, G and B equally) or maybe only use the green channel.

Designing a grid overlay based on longitudes and latitudes

I'm trying to figure out the best way to approach the following:
Say I have a flat representation of the earth. I would like to create a grid that overlays this, with each square on the grid corresponding to about 3 square kilometers. Each square would have a unique region ID. This grid would just be stored in a database table with a region ID and probably the long/lat coordinates of the four corners of the region, right? Any suggestions on how to generate this table easily? I know I would first need to find the width and height of this "flattened earth" in km, calculate the number of regions, and then somehow assign the long/lat pairs to each intersection of the vertical/horizontal lines; however, this sounds like a lot of manual work.
Secondly, once I have that grid table created, I need to design a function that takes a long/lat pair and determines which logical "region" it is in. I'm not sure how to go about this.
Any help would be appreciated.
Thanks.
Assume the Earth is a sphere with radius R = 6371 km.
Start at (lat, long) = (0, 0) deg. Around the equator, 3km corresponds to a change in longitude of
dlong = 3 / (2 * pi * R) * 360
= 0.0269796482 degrees
If we walk around the equator and put a marker every 3km, there will be about (2 * pi * R) / 3 = 13343.3912 of them. "About" because it's your decision how to handle the extra 0.3912.
From (0, 0), we walk north 3 km to (lat, long) = (0.0269796482, 0). We will walk around the Earth again on a path that is locally parallel to the first path we walked. Because it is a little closer to the North Pole, the radius of this circle is a bit smaller than that of the first circle we walked. Let's use lower-case r for this radius (note the latitude must be converted to radians before taking the cosine):
r = R * cos(lat * pi / 180)
  = 6371 * cos(0.0269796482 * pi / 180)
  = 6370.99929 km
We calculate dlong again using the smaller radius:
dlong = 3 / (2 * pi * r) * 360
      = 0.0269796512 deg
We put down the second set of flags. This time there are about (2 * pi * r) / 3 = 13343.3897 of them, a hair fewer than the 13343.3912 at the equator. One row up the shortfall is negligible, but it grows steadily as the rows approach the pole: at 60 degrees latitude, r = R/2, so only about 6671.7 flags fit in the row, half as many as at the equator.
How do we draw a ribbon of squares when the top line has fewer corners than the bottom one? If we keep reusing the equator's corner spacing, the regions start off as pretty good squares but get squeezed into increasingly distorted shapes as we move towards the poles.
We need a different strategy that gives us the same number of corners above and below. If the lower boundary (SW-SE) is 3 km long, then the top should be a little shorter, to make a ribbon of trapeziums.
There are many ways to craft a compromise that approximates your ideal square grid. This Wikipedia article on map projections that preserve a metric property links to several dozen such strategies.
The specifics of your app may allow you to simplify things considerably, especially if you don't really need to map the entire globe.
Microsoft has been investing in spatial data types in their SQL Server 2008 offering, which could help you out here: it has data types to represent your flattened-earth regions, operators to determine when a set of coordinates is inside a geometry, and so on. Even if you choose not to use it, consider checking out the following links. The second one in particular has a lot of good background information on the problem and a discussion of some of the industry-standard data formats for spatial data.
http://www.microsoft.com/sqlserver/2008/en/us/spatial-data.aspx
http://jasonfollas.com/blog/archive/2008/03/14/sql-server-2008-spatial-data-part-1.aspx
First, Paul is right. Unfortunately the earth is round which really complicates the heck out of this stuff.
I created a grid similar to this for a topographical mapping server many years ago. I just recorded the coordinates of the upper-left corner of each region. I also used UTM coordinates instead of lat/long. If you know that each region covers 3 square kilometers, then, since UTM is based on meters, it is straightforward to do a range query to discover the right region.
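To illustrate that kind of lookup, here is a minimal sketch (my own hypothetical code, assuming metre-based projected coordinates such as UTM, a grid anchored at a known south-west origin, and row-major cell numbering; a 3 km^2 square cell has a side of sqrt(3) km, roughly 1732 m):

#include <cstdint>

// Map projected (metre-based) coordinates to a region ID on a fixed grid of
// cellSize x cellSize metre squares; cols is the number of grid columns.
std::int64_t regionId(double x, double y,
                      double originX, double originY,
                      double cellSize, std::int64_t cols)
{
    const std::int64_t col = static_cast<std::int64_t>((x - originX) / cellSize);
    const std::int64_t row = static_cast<std::int64_t>((y - originY) / cellSize);
    return row * cols + col;                      // row-major cell numbering
}

The inverse (region ID back to corner coordinates) is the same arithmetic run backwards, so the corner table can be generated in one loop rather than by hand.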
You do realize that, because the earth is a sphere, "3 square km" is going to be a different number of degrees near the poles than near the equator, right? And that at the top and bottom of the map your grid squares will actually represent pie-shaped parts of the world, right?
I've done something similar with my database - I've broken it up into quad cells. So what I did was divide the earth into four quarters: (-180,-90)-(0,0), (-180,0)-(0,90), and so on. As I added point entities to my database, if a "cell" got more than X entries, I split the cell into 4. That means that in areas of the world with lots of point entities I have a lot of quad cells, but in other parts of the world I have very few.
My database for the quad tree looks like:
\d areaids;
Table "public.areaids"
Column | Type | Modifiers
--------------+-----------------------------+-----------
areaid | integer | not null
supercededon | timestamp without time zone |
supercedes | integer |
numpoints | integer | not null
rectangle | geometry |
Indexes:
"areaids_pk" PRIMARY KEY, btree (areaid)
"areaids_rect_idx" gist (rectangle)
Check constraints:
"enforce_dims_rectangle" CHECK (ndims(rectangle) = 2)
"enforce_geotype_rectangle" CHECK (geometrytype(rectangle) = 'POLYGON'::text OR rectangle IS NULL)
"enforce_srid_rectangle" CHECK (srid(rectangle) = 4326)
I'm using PostGIS to help find points in a cell. If I look at a cell, I can tell if it's been split because supercededon is not null. I can find its children by looking for rows whose supercedes equals its id. And I can dig down from top to bottom until I find the ones that cover the area I'm concerned about by looking for rows with supercededon null and whose rectangle overlaps my area of interest (using the PostGIS '&&' operator).
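A minimal in-memory sketch of that splitting rule (illustrative names of mine; the answer above keeps the cells in PostGIS rather than in memory):

#include <cstddef>
#include <memory>
#include <vector>

struct Point { double lon, lat; };

// One quad cell covering [minLon,maxLon) x [minLat,maxLat). When a leaf
// collects more than kCapacity points it splits into four children and
// pushes its points down, mirroring the supercedes/supercededon scheme.
struct QuadCell {
    double minLon, minLat, maxLon, maxLat;
    std::vector<Point> points;
    std::unique_ptr<QuadCell> child[4];
    static constexpr std::size_t kCapacity = 100;

    bool contains(const Point& p) const {
        return p.lon >= minLon && p.lon < maxLon &&
               p.lat >= minLat && p.lat < maxLat;
    }

    void insert(const Point& p) {
        if (child[0]) {                           // already split: descend
            for (auto& c : child)
                if (c->contains(p)) { c->insert(p); return; }
            return;                               // outside this cell entirely
        }
        points.push_back(p);
        if (points.size() > kCapacity) split();
    }

    void split() {
        const double midLon = (minLon + maxLon) / 2.0;
        const double midLat = (minLat + maxLat) / 2.0;
        child[0].reset(new QuadCell{minLon, minLat, midLon, midLat});
        child[1].reset(new QuadCell{midLon, minLat, maxLon, midLat});
        child[2].reset(new QuadCell{minLon, midLat, midLon, maxLat});
        child[3].reset(new QuadCell{midLon, midLat, maxLon, maxLat});
        for (const Point& q : points)             // push existing points down
            for (auto& c : child)
                if (c->contains(q)) { c->insert(q); break; }
        points.clear();
    }
};

A root cell of QuadCell root{-180.0, -90.0, 180.0, 90.0}; covers the whole earth; starting from the four quarters described above is equivalent to splitting that root once.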
There's no way you'll be able to do this with rectangular cells, but I've just finished an R package dggridR which would make this easy to do using a grid of hexagonal cells. However, the 3km cell requirement might yield so many cells as to overload your machine.
You can use R to generate the grid:
install.packages('devtools')
install.packages('rgdal')
library(devtools)
devtools::install_github('r-barnes/dggridR')
library(dggridR)
library(rgdal)
#Construct a discrete global grid (geodesic) with cells of ~3 km^2
dggs <- dgconstruct(area=3, metric=TRUE, resround='nearest')
#Get a hexagonal grid for the whole earth based on this dggs
grid <- dgearthgrid(dggs,frame=FALSE)
#Save the grid
writeOGR(grid, "grid_3km_cells.kml", "cells", "KML")
The KML file then contains the ids and edge vertex coordinates of every cell.
My package is based on Kevin Sahr's DGGRID which can generate this same grid to KML directly, though you'll need to figure out how to compile it yourself.