Convolutional network filter always negative - C++

I asked a question about a network I've been building last week, and I iterated on the suggestions, which led me to finding a few problems. I've come back to this project and fixed up all the issues and learnt a lot more about CNNs in the process. Now I'm stuck on an issue where all of my weights move to massively negative values, which, coupled with the ReLU, ends in the output image always being completely black (making it impossible for the classifier to do its job).
I'm training on two labelled images, a circle and a cross (the images themselves are not reproduced here).
These are passed into a two-layer network: a single-filter 3*3 convolutional layer and a classifier (which gets 100% on its own).
On the first iteration the output from the conv layer looks reasonable (output images omitted, in the same order as above).
The filter is 3*3*3, due to the images being RGB. The weights are all random numbers between 0.0f and 1.0f. On the next iteration the images are completely black; printing the filters shows that they are now in the range of -49678.5f (the highest I can see) to -61932.3f.
This issue is in turn due to the gradients passed back from the logistic regression/linear layer being crazy high for the cross (label 0, prediction 0). For the circle (label 1, prediction 0) the values are roughly between -12 and -5, but for the cross they are all positive, in the high-1000s to high-2000s range.
The code which sends these back looks something like this (some parts omitted):
void LinearClassifier::Train(float* x, float output, float y)
{
    float h = output - y;
    float average = 0.0f;
    for (int i = 1; i < m_NumberOfWeights; ++i)
    {
        float error = h * x[i-1];
        m_pGradients[i-1] = error;
        average += error;
    }
    average /= static_cast<float>(m_NumberOfWeights - 1);
    for (int theta = 1; theta < m_NumberOfWeights; ++theta)
    {
        m_pWeights[theta] = m_pWeights[theta] - learningRate * m_pGradients[theta-1];
    }
    // Bias
    m_pWeights[0] -= learningRate * average;
}
This is passed back to the single convolution layer:
// This code is in three nested for loops (for layer, for outWidth, for outHeight)
float gradient = 0.0f;
// ReLU derivative
if (m_pOutputBuffer[outputIndex] > 0.0f)
{
    gradient = outputGradients[outputIndex];
}
for (int z = 0; z < m_InputDepth; ++z)
{
    for (int u = 0; u < m_FilterSize; ++u)
    {
        for (int v = 0; v < m_FilterSize; ++v)
        {
            int x = outX + u - 1;
            int y = outY + v - 1;
            int inputIndex  = x + y * m_OutputWidth + z * m_OutputWidth * m_OutputHeight;
            int kernelIndex = u + v * m_FilterSize + z * m_FilterSize * m_FilterSize;
            m_pGradients[inputIndex] += m_Filters[layer][kernelIndex] * gradient;
            m_GradientSum[layer][kernelIndex] += input[inputIndex] * gradient;
        }
    }
}
This code is run by passing in each image one at a time. The gradients are obviously going in the right direction, but how do I stop the huge gradients from throwing off the prediction function?

ReLU activations are notorious for doing this. You usually have to use a low learning rate. The reasoning is that when the ReLU returns positive numbers it can continue to learn freely, but if a unit gets into a position where the signal coming into it is always negative, it can become a "dead" neuron and never activate again.
Initializing your weights is also more delicate with ReLU. It appears that you are initializing to the range 0 to 1, which creates a huge bias. Two tips here: use a range centered around 0, and a range that is much smaller. A normal distribution with mean 0 and standard deviation 0.02 usually works well.
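For illustration, a minimal sketch of that kind of initialization in C++ (the function shape and raw-pointer layout are assumptions made to match the style of the code above, not the poster's actual API):
#include <random>

// Sketch: fill a filter's weights from N(0, 0.02) instead of the
// uniform [0, 1] range, keeping the initial weights small and
// centered around zero.
void InitFilterWeights(float* weights, int count)
{
    std::random_device rd;
    std::mt19937 gen(rd());
    std::normal_distribution<float> dist(0.0f, 0.02f); // mean 0, std 0.02
    for (int i = 0; i < count; ++i)
        weights[i] = dist(gen);
}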

I fixed it by downscaling the gradients in the CNN layer, but now I'm confused as to why this works/is needed, so if anyone has any intuition as to why this works, that'd be great.
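"Downscaling the gradients" is essentially ad-hoc gradient clipping: it bounds the size of the weight update when the error signal blows up, which is why training stops diverging. A minimal sketch of the more standard norm-based variant (illustrative names; assumes raw float arrays like the code above):
#include <cmath>

// Sketch: if the L2 norm of the gradient vector exceeds maxNorm, scale
// the whole vector down so its norm equals maxNorm. The direction of
// the update is preserved; only its magnitude is capped.
void ClipGradients(float* grads, int count, float maxNorm)
{
    float sumSq = 0.0f;
    for (int i = 0; i < count; ++i)
        sumSq += grads[i] * grads[i];
    float norm = std::sqrt(sumSq);
    if (norm > maxNorm)
    {
        float scale = maxNorm / norm;
        for (int i = 0; i < count; ++i)
            grads[i] *= scale;
    }
}
Normalizing the input pixels (e.g. dividing by 255) attacks the same problem at the source, since the raw channel values multiply directly into the gradients.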

Related

OpenCV least square (solve) solution accuracy

I am using the OpenCV method solve (https://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#solve) in C++ to fit a curve (degree 3: ax^3 + bx^2 + cx + d) through a set of points. I am solving A * x = B, where A contains the powers of the points' x-coordinates (so x^3, x^2, x^1, 1), B contains the y-coordinates of the points, and x (a matrix) contains the parameters a, b, c and d.
I am using the flag DECOMP_QR on cv::solve to fit the curve.
The problem I am facing is that the set of points does not necessarily follow a single mathematical function (e.g. the function changes its equation; see the picture referenced below, not reproduced here). So, in order to fit an accurate curve, I need to split the set of points where the curvature changes. In the case of that picture, I would split the regression at the index where the curve starts, so I need to detect where the curvature changes.
So, if I don't split, I'll get the yellow curve as a result, which is inaccurate. What I want is the blue curve.
Finding curvature changes:
To find out where the curvature changes, I want to use the solution accuracy.
So basically:
int splitIndex = 0;
for (int pointIndex = 0; pointIndex < numberOfPoints; pointIndex += 5) {
    cv::Range rowR = Range(0, pointIndex); // Selected rows to index
    cv::Range colR = Range(0, 3);          // Grade: 3 (x^3)
    cv::Mat x;
    bool res = cv::solve(A(rowR, colR), B(rowR, Range(0, 1)), x, DECOMP_QR);
    if (res == true) {
        // Check for accuracy
        if (accuracy too bad) {
            splitIndex = pointIndex;
            return splitIndex;
        }
    }
}
My questions are:
- Is there a way of getting the accuracy / standard deviation from the solve command (efficiently and fast, because of a real-time application with only around 1 ms of compute time left)?
- Is this a good way of finding the curvature change, or does anyone know a better way?
Thanks :)
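For the first question, one possibility (a sketch, not from the original post) is to measure the fit quality yourself: cv::solve with DECOMP_QR only returns a success flag, but the least-squares residual can be computed directly from A, x and B. Here threshold is a hypothetical tuning parameter, and A, B and x are assumed to share the same float depth:
#include <cmath>

// Residual of the fit: ||A*x - B||_2 / sqrt(n) is an RMS error per
// point that can be compared against a threshold.
cv::Mat residual = A(rowR, colR) * x - B(rowR, cv::Range(0, 1));
double rms = cv::norm(residual, cv::NORM_L2) / std::sqrt((double)residual.rows);
if (rms > threshold) {
    splitIndex = pointIndex;
}
This costs one matrix-vector product per candidate split, which is cheap relative to the QR solve itself.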

Fluid simulation boundaries and advect

This fluid simulation is based on a paper by Stam. On page 7, he describes the basic idea behind advection:
"Start with two grids: one that contains the density values from the previous time step and one that will contain the new values. For each grid cell of the latter we trace the cell's center position backwards through the velocity field. We then linearly interpolate from the grid of previous density values and assign this value to the current grid cell."
The advect code follows. The two density grids are d and d0, u and v are the velocity components, dt is the time step, N (a global) is the grid size, and b can be ignored:
void advect(int b, vfloat &d, const vfloat &d0, const vfloat &u, const vfloat &v, float dt, std::vector<bool> &bound)
{
    float dt0 = dt * N;
    for (int i = 1; i <= N; i++)
    {
        for (int j = 1; j <= N; j++)
        {
            // Trace this cell's center backwards through the velocity field
            float x = i - dt0 * u[IX(i,j)];
            float y = j - dt0 * v[IX(i,j)];
            if (x < 0.5) x = 0.5;
            if (x > N + 0.5) x = N + 0.5;
            int i0 = (int)x; int i1 = i0 + 1;
            if (y < 0.5) y = 0.5;
            if (y > N + 0.5) y = N + 0.5;
            int j0 = (int)y; int j1 = j0 + 1;
            // Bilinear interpolation weights
            float s1 = x - i0; float s0 = 1 - s1;
            float t1 = y - j0; float t0 = 1 - t1;
            d[IX(i,j)] = s0 * (t0 * d0[IX(i0,j0)] + t1 * d0[IX(i0,j1)]) +
                         s1 * (t0 * d0[IX(i1,j0)] + t1 * d0[IX(i1,j1)]);
        }
    }
    set_bnd(b, d, bound);
}
This method is concise and works well enough, but implementing object boundaries is tricky for me to figure out because values are traced backwards and interpolated. My current solution is simply to push density out of boundaries if there is an empty space (or spaces) next to them, but that is inaccurate and causes density to build up, especially on corners and in areas with diagonal velocity. It is only visually plausible, and I'm looking for "correctness" now.
Relevant parts of my boundary code:
void set_bnd(const int b, vfloat &x, std::vector<bool> &bound)
{
    //...
    for (int i = 1; i <= N; i++)
    {
        for (int j = 1; j <= N; j++)
        {
            if (bound[IX(i,j)])
            {
                //...
                else if (b == 0)
                {
                    // Distribute density from the bound cell to the surrounding fluid cells
                    int nearby_count = !bound[IX(i+1,j)] + !bound[IX(i-1,j)] + !bound[IX(i,j+1)] + !bound[IX(i,j-1)];
                    if (!nearby_count) x[IX(i,j)] = 0;
                    else
                        x[IX(i,j)] = ((bound[IX(i+1,j)] ? 0 : x[IX(i+1,j)]) +
                                      (bound[IX(i-1,j)] ? 0 : x[IX(i-1,j)]) +
                                      (bound[IX(i,j+1)] ? 0 : x[IX(i,j+1)]) +
                                      (bound[IX(i,j-1)] ? 0 : x[IX(i,j-1)])) / nearby_count;
                        // (the original divided by "surround", which is not defined in
                        // this excerpt; nearby_count appears to be what was intended)
                }
            }
        }
    }
}
bound is a vector of bools with rows and columns 0 to N+1. Boundary objects are set up before the main loop by setting cell coordinates in bound to 1.
The paper vaguely states: "Then we simply have to add some code to the set_bnd() routine to fill in values for the occupied cells from the values of their direct neighbors", which is sort of what I'm doing. I am looking for a way to implement boundaries more accurately, that is, having non-fluid solid boundaries and maybe eventually supporting boundaries for multiple fluids. Visual quality is much more important than physics correctness.
Your answer comes from physics rather than simulation. Since you're dealing with boundaries, your velocity field needs to meet the Prandtl no-slip boundary condition, which says simply that the velocity at the boundary must be zero. See https://en.wikipedia.org/wiki/Boundary_layer for (a lot) more information. If your velocity field does not meet this criterion, you'll have the difficulties you describe, including advecting mass back across a boundary, which is a pretty basic violation of the model.
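To make that concrete, here is a minimal sketch (not the poster's code; it assumes the same N, IX() and vfloat conventions as the question) that zeroes the velocity components inside marked boundary cells before advection:
// Sketch: enforce the no-slip condition by zeroing the velocity in
// boundary cells, so nothing is advected into or out of them.
void enforce_no_slip(vfloat &u, vfloat &v, const std::vector<bool> &bound)
{
    for (int i = 1; i <= N; i++)
        for (int j = 1; j <= N; j++)
            if (bound[IX(i,j)])
            {
                u[IX(i,j)] = 0.0f;
                v[IX(i,j)] = 0.0f;
            }
}
(Stam's own set_bnd achieves this at the domain walls by mirroring the normal component with a negative sign so that it cancels at the wall face; the same idea can be applied around internal objects.)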
You should also be aware that this advection code does not conserve density (by design) and that the conservation law is fixed up at the end. You'll need to pay attention to that step, since the Hodge decomposition of the vector field also has applicable boundary conditions.
You may be interested in "The Art of Fluid Animation" by Jos Stam (Sept. 2015); around page 69 he discusses boundary conditions in some detail.
Perhaps also of interest: https://software.intel.com/en-us/articles/fluid-simulation-for-video-games-part-1/.
"The Perfect Storm" was a while ago, so now your fluid sim has to be either very big, very fast, or very detailed, preferably all three. Some might use a GPU if their use case allows.
Maybe it helps.

Subsampling an array of numbers

I have a series of 100 integer values which I need to reduce/subsample to 77 values for the purpose of fitting into a predefined space on screen. This gives a fraction of 77/100 values-per-pixel - not very neat.
Assuming the 77 is fixed and cannot be changed, what are some typical techniques for subsampling 100 numbers down to 77? I get a sense that it will be a jagged mapping, by which I mean the first new value is the average of [0, 1], then the next value is [3], then the average of [4, 5], etc. But how do I approach getting the pattern for this mapping?
I am working in C++, although I'm more interested in the technique than implementation.
Thanks in advance.
Whether you downsample or oversample, you are trying to reconstruct a signal at non-sampled points in time, so you have to make some assumptions.
The sampling theorem tells you that if you sample a signal knowing that it has no frequency components above half the sampling frequency, you can continuously and completely recover the signal over the whole timing period. The signal can be reconstructed using sinc() functions (that is, sin(x)/x).
sinc() (more precisely, sin(M_PI*x/Sampling_period)/(M_PI*x/Sampling_period)) is a function with the following properties:
Its value is 1 for x == 0.0 and 0 for x == k*Sampling_period with k == +-1, +-2, ...
It has no frequency component above half of the sampling frequency derived from Sampling_period.
So if you take the functions F_k(x) = Y[k]*sinc(x/Sampling_period - k), where each term is the sinc function that equals the sample value at position k and is 0 at every other sampling point, and sum over all k in your sample, you get the best continuous function that has no components at frequencies above half the sampling frequency and passes through all of your samples.
Having said this, you can resample this function at whatever positions you like, getting the best possible resampling of your data.
This is, by far, a complicated way of resampling data (it also has the problem of not being causal, so it cannot be implemented in real time), and several methods have been used in the past to simplify the interpolation. You have to construct a sinc function for each sample point and add them all together, then resample the resulting function at the new sampling points and give that as the result.
Below is an example of the interpolation method just described. It accepts some input data (in_sz samples) and outputs data interpolated with this method. (I assumed the endpoints coincide, which maps N+1 samples onto M+1 samples; this is the reason for the somewhat intricate (in_sz - 1)/(out_sz - 1) calculations in the code. Change it to in_sz/out_sz if you want a plain N samples -> M samples conversion.)
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* normalized sinc function */
double sinc(double x)
{
    x *= M_PI;
    if (x == 0.0) return 1.0;
    return sin(x)/x;
} /* sinc */

/* interpolate a function made of in samples at point x */
double sinc_approx(double in[], size_t in_sz, double x)
{
    int i;
    double res = 0.0;
    for (i = 0; i < in_sz; i++)
        res += in[i] * sinc(x - i);
    return res;
} /* sinc_approx */

/* do the actual resampling. Change (in_sz - 1)/(out_sz - 1) if you
 * don't want the initial and final samples to coincide, as is done here.
 */
void resample_sinc(
    double in[],
    size_t in_sz,
    double out[],
    size_t out_sz)
{
    int i;
    double dx = (double) (in_sz-1) / (out_sz-1);
    for (i = 0; i < out_sz; i++)
        out[i] = sinc_approx(in, in_sz, i*dx);
}

/* test case */
int main()
{
    double in[] = {
        0.0, 1.0, 0.5, 0.2, 0.1, 0.0,
    };
    const size_t in_sz = sizeof in / sizeof in[0];
    const size_t out_sz = 5;
    double out[out_sz];
    int i;
    for (i = 0; i < in_sz; i++)
        printf("in[%d] = %.6f\n", i, in[i]);
    resample_sinc(in, in_sz, out, out_sz);
    for (i = 0; i < out_sz; i++)
        printf("out[%.6f] = %.6f\n", (double) i * (in_sz-1)/(out_sz-1), out[i]);
    return EXIT_SUCCESS;
} /* main */
There are different ways of interpolating (see Wikipedia). The linear one would be something like this:
#include <array>

std::array<int, 77> sampling(const std::array<int, 100>& a)
{
    std::array<int, 77> res;
    for (int i = 0; i != 76; ++i) {
        int index = i * 99 / 76;
        int p = i * 99 % 76;
        res[i] = ((p * a[index + 1]) + ((76 - p) * a[index])) / 76;
    }
    res[76] = a[99]; // done outside the loop to avoid out-of-bounds access (0 * a[100])
    return res;
}
Create 77 new pixels based on the weighted average of their positions.
As a toy example, think about the 3 pixel case which you want to subsample to 2.
Original (denote as multidimensional array original with RGB as [0, 1, 2]):
|----|----|----|
Subsample (denote as multidimensional array subsample with RGB as [0, 1, 2]):
|------|------|
Here, it is intuitive to see that the first subsample seems like 2/3 of the first original pixel and 1/3 of the next.
For the first subsample pixel, subsample[0], you make it the RGB average of the original pixels that overlap it, in this case original[0] and original[1], but in a weighted fashion.
subsample[0][0] = original[0][0] * 2/3 + original[1][0] * 1/3 # for red
subsample[0][1] = original[0][1] * 2/3 + original[1][1] * 1/3 # for green
subsample[0][2] = original[0][2] * 2/3 + original[1][2] * 1/3 # for blue
In this example original[1][2] is the green component of the second original pixel.
Keep in mind that for a different subsampling ratio you'll have to determine the set of original cells that contribute to each subsample, and then normalize to find the relative weight of each.
There are much more complex graphics techniques, but this one is simple and works.
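For reference, here is a sketch of this idea applied to the 1-D case from the question (plain doubles instead of RGB channels; the function name is illustrative):
#include <vector>

// Area-weighted downsample of src (size n) into m values, m < n.
// Each output cell covers the input interval [i*n/m, (i+1)*n/m) and
// averages the input values weighted by how much of each lies inside.
std::vector<double> downsample(const std::vector<double>& src, int m)
{
    int n = (int)src.size();
    std::vector<double> dst(m, 0.0);
    double ratio = (double)n / m;             // input cells per output cell
    for (int i = 0; i < m; ++i)
    {
        double start = i * ratio, end = (i + 1) * ratio;
        for (int j = (int)start; j < n && j < end; ++j)
        {
            // overlap of input cell [j, j+1) with [start, end)
            double lo = start > j ? start : (double)j;
            double hi = end < j + 1 ? end : (double)(j + 1);
            dst[i] += src[j] * (hi - lo);
        }
        dst[i] /= ratio;                      // the weights sum to ratio
    }
    return dst;
}
Called with m = 77 on the 100-element input, each output value becomes the overlap-weighted average of the one or two input cells it spans, exactly the 2/3-1/3 pattern of the toy example above.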
Everything depends on what you wish to do with the data - how do you want to visualize it.
A very simple approach would be to render to a 100-wide image, and then smooth scale the image down to a narrower size. Whatever graphics/development framework you're using will surely support such an operation.
Say, though, that your goal might be to retain certain qualities of the data, such as minima and maxima. In such a case, for each bin, you could draw a line of darker color up to the minimum value and then continue with a lighter color up to the maximum. Or, instead of just putting a pixel at the average value, you could draw a line from the minimum to the maximum.
Finally, you might wish to render as if you had only 77 values: then the goal is to somehow transform the 100 values down to 77, which implies some kind of interpolation. Linear or quadratic interpolation is easy but adds distortion to the signal. Ideally, you'd probably want to throw a sinc interpolator at the problem. A good list of them can be found here. For theoretical background, look here.

Improving C++ algorithm for finding all points within a sphere of radius r

Language/Compiler: C++ (Visual Studio 2013)
Experience: ~2 months
I am working in a rectangular grid in 3D space (size: xdim by ydim by zdim), where "xgrid", "ygrid", and "zgrid" are 3D arrays of the x-, y-, and z-coordinates, respectively. Now, I am interested in finding all points that lie within a sphere of radius "r" centered about the point "(vi,vj,vk)". I want to store the index locations of these points in the vectors "xidx, yidx, zidx". For a single point this algorithm works and is fast enough, but when I iterate over many points within the 3D space I run into very long run times.
Does anyone have any suggestions on how I can improve the implementation of this algorithm in C++? After running some profiling software I found online (Very Sleepy, Luke Stackwalker), it seems that the std::vector::size and std::vector::operator[] member functions are bogging down my code. Any help is greatly appreciated.
Note: Since I do not know a priori how many voxels are within the sphere, I set the length of the vectors xidx, yidx, zidx to be larger than necessary and then erase all the excess elements at the end of the function.
void find_nv(int vi, int vj, int vk, vector<double> &xidx, vector<double> &yidx, vector<double> &zidx,
             double*** &xgrid, double*** &ygrid, double*** &zgrid, int r,
             double xdim, double ydim, double zdim, double pdim)
{
    double xcor, ycor, zcor, xval, yval, zval;
    vector<double> xyz(3);
    xyz[0] = xgrid[vi][vj][vk];
    xyz[1] = ygrid[vi][vj][vk];
    xyz[2] = zgrid[vi][vj][vk];
    int counter = 0;
    // Confine the loop to the bounding box of the sphere
    int istart = vi - r;
    int iend = vi + r;
    int jstart = vj - r;
    int jend = vj + r;
    int kstart = vk - r;
    int kend = vk + r;
    if (istart < 0) istart = 0;
    if (iend > xdim - 1) iend = xdim - 1;
    if (jstart < 0) jstart = 0;
    if (jend > ydim - 1) jend = ydim - 1;
    if (kstart < 0) kstart = 0;
    if (kend > zdim - 1) kend = zdim - 1;
    //-----------------------------------------------------------
    // Begin iterating through all points in the bounding box
    // (note: the loops must start at kstart/jstart/istart, not 0,
    // otherwise the lower bounds computed above are never used)
    //-----------------------------------------------------------
    for (int k = kstart; k <= kend; ++k)
    {
        for (int j = jstart; j <= jend; ++j)
        {
            for (int i = istart; i <= iend; ++i)
            {
                if (i == vi && j == vj && k == vk)
                    continue;
                xcor = pow((xgrid[i][j][k] - xyz[0]), 2);
                ycor = pow((ygrid[i][j][k] - xyz[1]), 2);
                zcor = pow((zgrid[i][j][k] - xyz[2]), 2);
                double rsqr = pow(r, 2);
                double sphere = xcor + ycor + zcor;
                if (sphere <= rsqr)
                {
                    xidx[counter] = i;
                    yidx[counter] = j;
                    zidx[counter] = k;
                    counter = counter + 1;
                }
            }
        }
    }
    // erase all appended zeros that are not voxels within the sphere
    xidx.erase(xidx.begin() + counter, xidx.end());
    yidx.erase(yidx.begin() + counter, yidx.end());
    zidx.erase(zidx.begin() + counter, zidx.end());
}
You already appear to have used my favourite trick for this sort of thing, getting rid of the relatively expensive square root functions and just working with the squared values of the radius and center-to-point distance.
One other possibility which may speed things up (a) is to replace all the:
xyzzy = pow (plugh, 2)
calls with the simpler:
xyzzy = plugh * plugh
You may find the removal of the function call could speed things up, however marginally.
Another possibility, if you can establish the maximum size of the target array, is to use a real array rather than a vector. I know they make the vector code as insanely optimal as possible, but it still won't match a fixed-size array for performance (since it has to do everything the fixed-size array does plus handle possible expansion).
Again, this may only offer very marginal improvement at the cost of more memory usage but trading space for time is a classic optimisation strategy.
Other than that, ensure you're using the compiler optimisations wisely. The default build in most cases has a low level of optimisation to make debugging easier. Ramp that up for production code.
(a) As with all optimisations, you should measure, not guess! These suggestions are exactly that: suggestions. They may or may not improve the situation, so it's up to you to test them.
One of your biggest problems, and one that is probably preventing the compiler from making a lot of optimisations, is that you are not using the regular nature of your grid.
If you are really using a regular grid then
xgrid[i][j][k] = x_0 + i * dxi + j * dxj + k * dxk
ygrid[i][j][k] = y_0 + i * dyi + j * dyj + k * dyk
zgrid[i][j][k] = z_0 + i * dzi + j * dzj + k * dzk
If your grid is axis aligned then
xgrid[i][j][k] = x_0 + i * dxi
ygrid[i][j][k] = y_0 + j * dyj
zgrid[i][j][k] = z_0 + k * dzk
Replacing these inside your core loop should result in significant speedups.
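As an illustration, here is a sketch of the core loop under the axis-aligned assumption (dxi, dyj, dzk are the assumed grid spacings, r is assumed to be in the same world units, and istart..kend are the clamped bounds already computed in the question):
// Sketch: with a regular axis-aligned grid the squared distance can be
// computed from the indices alone, so the xgrid/ygrid/zgrid arrays are
// never read inside the loop.
double rsqr = (double)r * r;
for (int k = kstart; k <= kend; ++k)
{
    double dz = (k - vk) * dzk, dz2 = dz * dz;
    for (int j = jstart; j <= jend; ++j)
    {
        double dy = (j - vj) * dyj, dyz2 = dy * dy + dz2;
        for (int i = istart; i <= iend; ++i)
        {
            double dx = (i - vi) * dxi;
            if (dx * dx + dyz2 <= rsqr)
            {
                // record (i, j, k) as inside the sphere
            }
        }
    }
}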
You could do two things: reduce the number of points you are testing for inclusion, and simplify the problem to multiple 2D tests.
If you take the sphere and look at it down the z axis, you have all the points from y-r to y+r in the sphere. Using each of these y values, you can slice the sphere into circles that contain all the points in the x/z plane, limited to the circle's radius at that specific y you are testing. Calculating the radius of each circle is a simple solve-the-base-of-a-right-triangle problem.
Right now you are testing all the points in a cube, but the outer ranges of the sphere exclude most points. The idea behind the above algorithm is that you can limit the points tested at each level of the sphere to the square containing the circle at that height.
(A simple hand-drawn graphic showing the sphere from the side view accompanied the original answer.)
We are looking at the slice of the sphere that has the radius ab. Since you know the lengths ac and bc of the right triangle, you can calculate ab using the Pythagorean theorem. Now you have a simple circle that you can test the points in; then move down a level, reduce the length ac, recalculate ab, and repeat.
Once you have that, you can do a little more optimization. Firstly, you do not need to test every point against the circle; you only need to test one quarter of the points. If you test the points in the upper-left quadrant of the circle (the slice of the sphere), then the points in the other three quadrants are just mirror images of the same point, offset to the right, to the bottom, or diagonally from the point determined to be in the first quadrant.
Finally, you only need to do the circle slices for the top half of the sphere, because the bottom half is just a mirror of the top half. In the end you test only a small fraction of the points for containment in the sphere. This should be a huge performance boost.
I hope that makes sense; I am not at a machine right now where I can provide a sample.
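In that spirit, here is a sketch of the slicing idea (an illustrative integer-lattice version, not the answerer's code; clamping against the grid edges is omitted for brevity):
#include <cmath>

// Sketch: for each vertical offset dy, the slice through the sphere is
// a circle whose squared radius follows from the Pythagorean theorem:
// slice_rsqr = r*r - dy*dy. Only points inside the bounding square of
// that circle are then enumerated; no per-point distance test is needed.
int rsqr = r * r;
for (int dy = -r; dy <= r; ++dy)
{
    int slice_rsqr = rsqr - dy * dy;
    int zmax = (int)std::sqrt((double)slice_rsqr);
    for (int dz = -zmax; dz <= zmax; ++dz)
    {
        int xmax = (int)std::sqrt((double)(slice_rsqr - dz * dz));
        for (int dx = -xmax; dx <= xmax; ++dx)
        {
            // (vi + dx, vj + dy, vk + dz) lies inside or on the sphere
        }
    }
}
The quadrant/half mirroring the answer describes would enumerate only dx >= 0, dz >= 0 and dy >= 0 and emit the mirrored points directly.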
A simple thing here would be a 3D flood fill from the center of the sphere, rather than iterating over the enclosing cube, as you need to visit fewer points. Moreover, you should implement the iterative version of the flood fill to get more efficiency.
See: Flood Fill.
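For illustration, a sketch of the iterative (queue-based) variant, assuming integer grid dimensions and the axis-aligned spacings dxi, dyj, dzk from the earlier sketch:
#include <array>
#include <queue>
#include <vector>

// Sketch: iterative 3-D flood fill outward from the center voxel.
// Only voxels inside the sphere (plus a one-voxel shell, each tested
// exactly once) are ever visited.
void find_nv_flood(int vi, int vj, int vk, double r,
                   int xdim, int ydim, int zdim,
                   double dxi, double dyj, double dzk,
                   std::vector<std::array<int,3>> &inside)
{
    auto flat = [&](int i, int j, int k) {
        return (size_t)i + (size_t)j * xdim + (size_t)k * xdim * ydim;
    };
    std::vector<bool> visited((size_t)xdim * ydim * zdim, false);
    std::queue<std::array<int,3>> q;
    q.push({vi, vj, vk});
    visited[flat(vi, vj, vk)] = true;
    const int step[6][3] = {{1,0,0},{-1,0,0},{0,1,0},{0,-1,0},{0,0,1},{0,0,-1}};
    while (!q.empty())
    {
        std::array<int,3> p = q.front(); q.pop();
        inside.push_back(p);   // p is inside the sphere (includes the center)
        for (const auto &s : step)
        {
            int i = p[0] + s[0], j = p[1] + s[1], k = p[2] + s[2];
            if (i < 0 || j < 0 || k < 0 || i >= xdim || j >= ydim || k >= zdim)
                continue;
            if (visited[flat(i, j, k)]) continue;
            visited[flat(i, j, k)] = true;
            double dx = (i - vi) * dxi, dy = (j - vj) * dyj, dz = (k - vk) * dzk;
            if (dx*dx + dy*dy + dz*dz <= r*r)
                q.push({i, j, k});
        }
    }
}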

super mysterious logic error for a planar fitting image processing algorithm

So I have this image-processing program where I am using a linear regression algorithm to find a plane that best fits all of the points (x, y, z), z being the pixel color intensity (0-255).
Simply speaking, I have this picture of ? x ? dimensions. I run this algorithm and I get three float values: A, B, C.
Then I go to every pixel in the program and offset the pixel value by mod_val, where
mod_val = (-A * x - B * y) / C
A, B, C are constants, while x, y is the pixel location in the x,y plane.
When the dimensions of the picture are divisible by 100 it is perfect, but when they are not, the picture fractures. The picture itself is the same as the original, but there is a diagonal line with color contrast that goes across the picture. The program is supposed to make the pixel color uniform from the center.
I tried running the picture with mod_val = 0 for dimensions not divisible by 100, and it copies the picture perfectly. So I doubt there is a problem with storing and writing the read data in terms of alignment. (FYI, this picture is a greyscale 8-bit .bmp.)
I have tried changing the A, B, C values, but the diagonal remains the same; only the color of the image fragments within the diagonals changes.
When I run a 1400 x 1100 picture it works perfectly with the mod_val equation written above, which is the most baffling part.
I spent a lot of time looking for rounding errors; virtually all the values are floats. The dimensions I used for the breaking picture are 1490 x 1170.
Here is a fragment of the code where I think the error is occurring:
int img_row = row_length;
int img_col = col_length;
int i = 0;
float *pfAmultX = new float[img_row];
for (int x = 0; x < img_row; x++)
{
    pfAmultX[x] = (A * x) / C;
}
for (int y = 0; y < img_col; y++)
{
    float BmultY = B * y / C;
    for (int x = 0; x < img_row; x++, i++)
    {
        modify_val = pfAmultX[x] + BmultY;
        int temp = (int) data.data[i];
        data.data[i] += (unsigned char) modify_val;
        if (temp >= 250) {
            data.data[i] = 255;
        }
        else if (temp < 0) {
            data.data[i] = 0;
        }
    }
}
delete[] pfAmultX;
The img_row and img_col values are correct according to the VS debugger.
Any help would be greatly appreciated. I've been trying to find this bug for many hours now, and my boss is telling me that I can't go home until I find it...
(Before/after screenshots accompanied the original post: the 1400 x 1100 pair works, while the 1490 x 1170 pair demonstrates the problem.)
UPDATE:
After extensive testing, I have boiled the problem down to something with the x coordinate. When I use large A or B values, or both (the C value is always ~0.999), the 1400 x 1100 image does not create diagonals. However, for the other image, large B values do not create diagonals, but a fairly small, average A value does. What's more, when I test a picture where x is divisible by 100 but y is only divisible by 10, the result is correct.
Well, in the end I found the solution. It was a problem due to the padding in the bitmap. When the x dimension was not divisible by 4, the format would use padding, which would throw off all of the x coordinates. This also meant that the row value I read from the BMP header matched the image width but not the actual stored row size. I had to compute the padded row size as 4 * ((row_value_from_bmp_header + 3) / 4).
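For anyone hitting the same thing, the standard BMP rule is that each stored scanline is padded up to a multiple of 4 bytes. A sketch for the 8-bit (1 byte per pixel) case, with illustrative variable names:
// Each BMP scanline is rounded up to a multiple of 4 bytes.
int stride = 4 * ((width + 3) / 4);
// Pixel (x, y) is then at data[y * stride + x], not data[y * width + x].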