Google Earth Engine: How to use a shapefile as the area of interest when calculating NDVI with cloud masking and land masking

I am trying to calculate the annual NDVI with cloud masking and land masking for Australia.
I am getting this error:
Image (Error)
reduce.mean: Error in map(ID=0): Image.bitwiseAnd: Bitwise operands must be integer only.
My code is here:
https://code.earthengine.google.com/?scriptPath=users%2Ftafzilamouly%2Ffpc%3AEXPLORE
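In case a concrete reference helps, here is a minimal sketch in the Earth Engine Python API (not the linked script); the asset path, the Landsat 8 Collection 1 Surface Reflectance collection, the date range, and the band names are all assumptions. The quoted error usually means bitwiseAnd() was applied to a band that had already been cast to float, so the sketch selects the integer pixel_qa band before any floating-point math and computes NDVI only after masking.

import ee
ee.Initialize()

# Hypothetical table asset created by uploading the shapefile to Earth Engine.
aoi = ee.FeatureCollection('users/your_username/australia_boundary')

def mask_l8_sr(image):
    # Keep the QA band as an integer band; bitwiseAnd() rejects float operands.
    qa = image.select('pixel_qa')
    cloud_shadow = 1 << 3
    cloud = 1 << 5
    mask = (qa.bitwiseAnd(cloud_shadow).eq(0)
              .And(qa.bitwiseAnd(cloud).eq(0)))
    return image.updateMask(mask)

def add_ndvi(image):
    # NDVI from NIR (B5) and red (B4); this produces a float band, so it runs
    # only after the bitwise cloud mask has been applied.
    return image.normalizedDifference(['B5', 'B4']).rename('NDVI')

collection = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
              .filterDate('2019-01-01', '2019-12-31')
              .filterBounds(aoi)
              .map(mask_l8_sr)
              .map(add_ndvi))

# Annual mean NDVI, clipped to the shapefile geometry as the area of interest
# (which also acts as a simple land mask if the shapefile covers land only).
annual_ndvi = collection.mean().clip(aoi.geometry())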

Related

C++ OpenCV: Masking the frequency domain image

I have obtained the DFT output of an image, but when I try to save the real part of the image I get a completely black one. When I display the real part it looks like a good spectral image. I want to know: can I use different processing techniques like line finders, circle finders, or count-non-zero functions on a frequency-domain image?
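On the black-image part, the raw DFT output is floating point with a very large dynamic range, so saving it directly produces a near-black file; log-scaling and normalizing to 0-255 first makes the spectrum visible. A small sketch in Python (the filenames are placeholders):

import cv2
import numpy as np

# Read a grayscale image and compute its complex DFT (2 channels: real, imaginary).
img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)

# Magnitude spectrum, compressed with a log and stretched to the 8-bit range.
magnitude = cv2.magnitude(dft[:, :, 0], dft[:, :, 1])
spectrum = np.log1p(magnitude)
spectrum = cv2.normalize(spectrum, None, 0, 255, cv2.NORM_MINMAX)

# The saved file now shows the spectral structure instead of appearing black.
cv2.imwrite('spectrum.png', spectrum.astype(np.uint8))

Functions such as countNonZero or the Hough line/circle detectors operate on ordinary single-channel arrays, so they can be run on a normalized spectrum like this; whether their output is meaningful depends on what structure the spectrum actually contains.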

Training dataset generator in OpenCV

I'm working on my bachelor thesis, called "Traffic sign detection in image and video", and I'm using a neural network called YOLO (You Only Look Once). I think its name is pretty self-explanatory, but the paper may be found here.
This network learns from non-cropped annotated images (regular networks usually train on cropped images). To train this network, I need a dataset of non-cropped, annotated European traffic signs. I wasn't able to find one, even here, so I decided to generate my own dataset.
First, I load many road images taken from a static camera on a car.
I've got a few TRANSPARENT traffic (stop) signs like this
Then I perform a few operations to make the traffic sign look "real" and copy it to random positions (where traffic signs are usually located). The size of the traffic sign is adjusted according to its position in the image: the closer to the middle of the image the sign is, the smaller it is.
The operations I'm performing on the traffic sign are (a rough code sketch follows this list):
Blur the sign with a kernel of random size, from 1x1 to 31x31.
Rotate the image left/right around the X axis by a random angle from 0 to 20 degrees.
Rotate the image left/right around the Z axis by a random angle from 0 to 30 degrees.
Increase or decrease the luminance by adding/subtracting a random value from 0 to 50.
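A rough sketch (not the author's generator) of these augmentations, assuming a BGRA sign image with an alpha channel so it can later be composited onto road frames; the kernel sizes, angles, and brightness range follow the list above, and the X-axis tilt would need a perspective warp rather than the 2D rotation shown here:

import cv2
import numpy as np
import random

# Transparent sign loaded with its alpha channel (BGRA).
sign = cv2.imread('sign.png', cv2.IMREAD_UNCHANGED)

# Blur with a random odd kernel size between 1x1 and 31x31.
k = random.randrange(1, 32, 2)
sign = cv2.GaussianBlur(sign, (k, k), 0)

# In-plane (Z axis) rotation by a random angle up to 30 degrees.
h, w = sign.shape[:2]
angle = random.uniform(-30, 30)
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
sign = cv2.warpAffine(sign, M, (w, h), borderMode=cv2.BORDER_CONSTANT, borderValue=0)

# Brightness shift of the colour channels by a random value in [-50, 50].
shift = random.randint(-50, 50)
bgr = sign[:, :, :3].astype(np.int16) + shift
sign[:, :, :3] = np.clip(bgr, 0, 255).astype(np.uint8)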
Here you may see a few result examples (the better ones, I guess): click.
Here is the source code: click.
Question:
Is there anything I could do to make the signs look more realistic and let the neural network train better?
If this question would suit a different kind of site better, please let me know.

Finding regions of higher numbers in a matrix

I am working on a project to detect certain objects in an aerial image, and as part of this I am trying to utilize elevation data for the image. I am working with Digital Elevation Models (DEMs), basically a matrix of elevation values. When I am trying to detect trees, for example, I want to search for tree-shaped regions that are higher than their surrounding terrain. Here is an example of a tree in a DEM heatmap:
https://i.stack.imgur.com/pIvlv.png
I want to be able to find small regions like that that are higher than their surroundings.
I am using OpenCV and GDAL for my actual image processing. Does either of those already contain techniques for what I'm trying to accomplish? If not, can you point me in the right direction? One idea I've had is to go through each pixel and calculate the rate of change relative to its surrounding pixels, with the hope that pixels with high rates of change/steep slopes would mark the edge of a raised area.
Note that the elevations will change from image to image, and this needs to work with any elevation. So the ground might be around 10 meters in one image but 20 meters in another.
Supposing you can put the DEM information into a 2D Mat where each "pixel" holds the elevation value, you can find local maxima by applying dilate and then subtracting the result from the original image.
There's a related post with code examples at: http://answers.opencv.org/question/28035/find-local-maximum-in-1d-2d-mat/
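A brief sketch of that dilation trick, assuming the DEM has already been read into a floating-point array (for example with GDAL's ReadAsArray()); the file name, window size, and 2 m prominence threshold are placeholders:

import cv2
import numpy as np

# Hypothetical pre-extracted DEM as a 2D float array of elevations in meters.
dem = np.load('dem.npy').astype(np.float32)

# Dilate with a window roughly the size of the features being searched for;
# a pixel equal to its local dilation is a local maximum.
kernel = np.ones((15, 15), np.uint8)
dilated = cv2.dilate(dem, kernel)
local_max = (dem == dilated)

# Morphological opening estimates the surrounding terrain, so the test is
# relative to local ground height rather than absolute elevation.
background = cv2.morphologyEx(dem, cv2.MORPH_OPEN, kernel)
prominence = dem - background
tree_candidates = local_max & (prominence > 2.0)  # e.g. at least 2 m above terrain

Because the comparison is against the local neighborhood, the same threshold works whether the ground sits at around 10 meters in one image or 20 meters in another.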

Calculate body volume using Kinect

I need an algorithm for calculating the body volume (in cubic meters) using Kinect.
I know I can extract the point cloud and the depth frame (isolating the body by using some methods of the skeleton NUI), but I don't know how to calculate the volume value from this matrix.
Would exporting a volume block be of any help?
If you need to compute the body volume precisely, you can use the algorithm for generating avatars from Kinect for monitoring obesity, as demonstrated in this video, which shows an example of computing the volume of pregnant women using Kinect. Watch the demo video.
The algorithm is described in detail in the technical paper: A. Barmpoutis, 'Tensor Body: Real-time Reconstruction of the Human Body and Avatar Synthesis from RGB-D', IEEE Transactions on Cybernetics, Special Issue on Computer Vision for RGB-D Sensors: Kinect and Its Applications, October 2013, Vol. 43(5), pp. 1347-1356. Read the PDF.
If you have depth and can determine distance via the Kinect sensor, and hence height, then you have the x and y dimensions in centimeters (via the depth distance per z delta), plus a rough per-pixel/ray, point-cloud-based approximation of the body's depth in centimeters using z/2. Keep in mind that the human anterior and posterior are asymmetrical (hence the "rough" z/2 approximation, multiplying by 2).
If you can formalize a model of the human form, you can create a fitting algorithm that gives a better approximate volume based on the given sensor information.
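As a very rough illustration of that "front half times two" idea (these names are not from the Kinect SDK), assuming a metric depth map in meters, a binary body mask from the skeleton/NUI segmentation, and the sensor's focal length in pixels:

import numpy as np

def approximate_body_volume(depth_m, body_mask, focal_px=525.0):
    """Estimate body volume in cubic meters from a single frontal depth frame."""
    body_depths = depth_m[body_mask]
    if body_depths.size == 0:
        return 0.0

    # Reference plane roughly at the back of the body for this crude model.
    back_plane = body_depths.max()

    # Physical area covered by one pixel grows with distance: (d / f)^2.
    pixel_area = (body_depths / focal_px) ** 2

    # Front-half thickness per pixel, doubled to account for the back half.
    thickness = 2.0 * (back_plane - body_depths)

    return float(np.sum(pixel_area * thickness))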

Computer vision cross-correlation with spatially constant increase in brightness

If I'm taking the correlation between two images as described in the attached formula:
The formula is taken from the following online computer vision textbook: Szeliski, page 386.
This function does not seem like it would ever be reliable, since if one of your images is brighter than the other, the correlation would be higher than if the images were identical. For instance, take a look at these examples printed on a whiteboard:
As you can see, the brighter image has a better correlation with the first image than an identical copy of the first image does. What am I doing wrong?
I guess what you're looking for is the normalized cross-correlation, where the mean intensity is subtracted from the values, which are then divided by the standard deviation of the intensity.
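A quick sketch of that normalization with plain NumPy, for two same-sized grayscale patches; after subtracting the mean and dividing by the standard deviation, a uniform brightness increase no longer inflates the score:

import numpy as np

def ncc(patch_a, patch_b):
    # Normalized cross-correlation: invariant to brightness offset and gain.
    a = patch_a.astype(np.float64)
    b = patch_b.astype(np.float64)
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))  # 1.0 for identical patches, lower otherwise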