In GCP face detection, where should we assume the axis for measuring the roll angle to be? - google-cloud-platform

I need to rotate images of faces, and I want to understand the output of GCP face detection. For the roll angle, where is the axis supposed to be? At the mouth center? At the nose? At the bottom left of the image file?
The definition of roll angle is here: http://www.conitec.net/beta/aentity-pan.htm
GCP's explanation of the output seems to be on this outdated page: https://developers.google.com/vision/face-detection-concepts
There it seems the z-axis is located at the lower left corner, which cannot be correct if we want to be precise: to reproduce the image with that rotation, the axis would have to pass through the head.
Update: I have been told that the rotation is with respect to the center of the image. Can anyone confirm this?

Cloud Vision API is the GCP service for face detection (note that this feature doesn't support facial recognition). You can refer to this similar SO thread, which contains additional information about the Google Cloud Vision API and may help you better understand the feature.
As mentioned, in case this feature doesn't cover your current needs, you can use the Send Feedback button, located at the lower left and upper right corners of the service's public documentation, or take a look at the Issue Tracker tool, to raise a Vision API feature request and notify Google about the desired functionality.
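In the meantime, the rotation itself is easy to express once you pick a pivot. Here is a minimal C++ sketch of levelling a face by its reported roll angle; it assumes you rotate about the centre of the detected face (e.g. the midpoint of the eye landmarks the API returns), since the documentation does not state the rotation origin.

#include <cmath>

struct Point { double x, y; };

constexpr double kPi = 3.141592653589793;

// Rotate p about pivot by `degrees`. In image coordinates (y pointing down)
// a positive angle here appears as a clockwise turn on screen.
Point rotateAbout(Point p, Point pivot, double degrees) {
    const double rad = degrees * kPi / 180.0;
    const double c = std::cos(rad), s = std::sin(rad);
    const double dx = p.x - pivot.x, dy = p.y - pivot.y;
    return { pivot.x + c * dx - s * dy,
             pivot.y + s * dx + c * dy };
}

// To level a face reported with roll angle r, map each pixel through
// rotateAbout(p, faceCenter, -r); flip the sign if your imaging library
// uses the opposite rotation convention.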

Related

Where can I get the pre-trained model used by expo Face Detector in React Native?

The documentation page of expo Face Detector states that it uses the Google Mobile Vision framework (now ML Kit) to detect faces in images. I would like to obtain this model to use in a Python program to extract face landmarks from some local images that I have. These face landmarks will later be used to train another model.
However, after searching around, I was not able to find this exact model. The closest one I found is Face Detection by MediaPipe (https://google.github.io/mediapipe/solutions/face_detection). This model returns 6 key points (right eye, left eye, nose tip, mouth center, right ear tragion, and left ear tragion).
For the later part of my application, I require the positions of the left cheek, right cheek, left mouth tip, and right mouth tip (all of which are returned by expo Face Detector). Thus I am searching for a pre-trained model that returns all these features. Could anyone point me to any resources where this might be available?

Get point cloud of object in specific position on the image

I have an RGBD camera and am using ROS and PCL for further point cloud processing.
Question:
Is it possible to get the point cloud of the object if I know the position relative to the camera?
For example, I know that the object will be in the centre of the image, so I don't want to capture all the noise around it; I want only the point cloud of the object (and possibly points around it, but not the surrounding environment).
I'm new to PCL and ROS, so any comments and advice will be useful!
Yes. If it's filtering point clouds you need, take a look at https://pcl-tutorials.readthedocs.io/en/latest/passthrough.html#passthrough .
Points are usually reported in metres, so it's not too hard to allow only a certain region through.
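For example, here is a minimal sketch of cropping to a box in front of the camera with PCL's PassThrough filter; the field limits are assumptions for a camera whose z axis points away from the lens.

#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

// Keep only points 0.3-1.5 m in front of the camera and within 25 cm of
// the optical axis; adjust the limits to your object's expected position.
pcl::PointCloud<pcl::PointXYZ>::Ptr
cropCenter(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::PointCloud<pcl::PointXYZ>::Ptr zPass(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::PassThrough<pcl::PointXYZ> pass;

    pass.setInputCloud(cloud);
    pass.setFilterFieldName("z");        // depth, metres
    pass.setFilterLimits(0.3f, 1.5f);
    pass.filter(*zPass);

    pass.setInputCloud(zPass);
    pass.setFilterFieldName("x");        // horizontal offset from the axis
    pass.setFilterLimits(-0.25f, 0.25f);
    pass.filter(*out);

    return out;
}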

PCL, SACSegmentation detecting spheres

I'm trying to find spheres in a point cloud with pcl::SACSegmentation using RANSAC. The cloud was scanned with an accurate terrestrial scanner from one station, and the point spacing is about 1 cm. The best results so far are shown in the image below. As you can see, the cloud contains two spheres (r = 7.25 cm) and a steel beam to which the balls are attached. I am able to find three sphere candidates whose inlier points are extracted from the cloud in the image (you can see two circle shapes on the beam near the spheres).
(Images: input point cloud; extracted inlier points.)
So it seems that I am close. Still, the fitted sphere centers are too far (~10 cm) from the truth. Any suggestions on how I could improve this? I have been tweaking the model parameters for quite some time. Here are the parameters for the aforementioned results:
seg.setOptimizeCoefficients(true);        // refine coefficients after RANSAC
seg.setModelType(pcl::SACMODEL_SPHERE);
seg.setMethodType(pcl::SAC_RANSAC);
seg.setMaxIterations(500000);
seg.setDistanceThreshold(0.0020);         // 2 mm inlier threshold
seg.setProbability(0.99900);
seg.setRadiusLimits(0.06, 0.08);          // expected radius 6-8 cm
seg.setInputCloud(cloud);
I also tried to improve the results by including point normals in the model, with no better results. There are still a couple more parameters to adjust, so there might be combinations I have not tried.
I'll happily give you more information if needed.
Thanks,
naikh0u
After some investigation I have come to the conclusion that I can't find spheres with SACSegmentation in a cloud that contains a lot of other points that don't belong to any sphere shape. As in my case, the beam is too much for the algorithm.
Thus, I have to choose the points that show some potential of being part of a sphere. I also think I need to separate the points belonging to different spheres. I tested this and saw that my code works pretty well if the input cloud contains only the points of a single sphere, with some "natural" noise.
Some have solved this problem by first extracting all points belonging to planes and then searching for spheres. Others have used the colors of the target (in the case of a camera) to extract only the needed points.
Deleting plane points should work for my example cloud, but my application may involve more complex shapes too, so that may be too simplistic.
Finally, I got satisfying results by clustering the cloud with pcl::EuclideanClusterExtraction and feeding the clusters to the sphere search one by one.
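For reference, a minimal sketch of that cluster-then-fit pipeline; the cluster tolerance and size limits are assumptions for a cloud with ~1 cm point spacing.

#include <vector>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>

void findSpheres(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    tree->setInputCloud(cloud);

    // Split the scene into connected blobs so each sphere is fitted alone.
    std::vector<pcl::PointIndices> clusters;
    pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
    ec.setClusterTolerance(0.02);   // 2 cm gap ends a cluster (assumption)
    ec.setMinClusterSize(50);       // drop tiny noise blobs
    ec.setSearchMethod(tree);
    ec.setInputCloud(cloud);
    ec.extract(clusters);

    pcl::SACSegmentation<pcl::PointXYZ> seg;
    seg.setOptimizeCoefficients(true);
    seg.setModelType(pcl::SACMODEL_SPHERE);
    seg.setMethodType(pcl::SAC_RANSAC);
    seg.setDistanceThreshold(0.002);
    seg.setRadiusLimits(0.06, 0.08);

    for (const auto& c : clusters) {
        pcl::PointIndices::Ptr idx(new pcl::PointIndices(c));
        pcl::PointIndices inliers;
        pcl::ModelCoefficients coeffs;
        seg.setInputCloud(cloud);
        seg.setIndices(idx);        // fit only within this cluster
        seg.segment(inliers, coeffs);
        // On success coeffs.values holds {cx, cy, cz, r}.
    }
}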

Programming a ZED camera in C++

The UAV flies in an environment and is equipped with a dual-lens camera (ZED stereo camera) at the front. This camera is responsible for sensing the environment: it offers depth information and generates a point cloud of the surroundings. The generated point cloud has the ZED camera itself as its origin.
There is no obstacle detection in the SDK, but you can loop over the point cloud looking for points at less than 5 meters, for example. If you detect a group of points, you can consider this an obstacle positioned in the ZED reference frame.
With this program (main.cpp) you can obtain the 3D coordinates of the point cloud in real time while the UAV is flying (the origin is in the middle of the left lens; x is the horizontal axis, y points vertically downward, and z is the distance between the camera and the point). This program was modified by my partner, so it is a little different from the original one offered on the official website.
Continue to program in C++ and comment each line.
Tasks are:
The field of view of this camera is 110 degrees, but I will not use all of it, so define a 30-degree field of view for the camera. When the camera detects a group of points within this 30-degree field and closer than 5 m, the program should give an alarm and return a command to the control system to make the UAV hover (see the sketch below). You should continue to program main.cpp.
In the end you should give me two programs: one with the alarm, the other with the command to the control system. I do not need any report.
I will run this program on the JETSON TX1 platform under Ubuntu. Later it may involve some debugging, but no problem, I will try my best to debug; if there is something I cannot deal with, I will ask you for help.
This topic has a set of tasks; this is the first one. If you work well, we can talk about the others.
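A minimal geometry sketch of the 30-degree / 5 m check; it deliberately avoids ZED SDK calls (which differ between SDK versions) and assumes the input is the XYZ cloud already retrieved in main.cpp, in metres, with z pointing away from the camera.

#include <cmath>
#include <vector>

struct PointXYZ { float x, y, z; };

constexpr float kPi = 3.14159265f;

// Returns true when at least minHits points fall inside a 30-degree cone
// around the optical axis and closer than 5 m; minHits filters lone noise.
bool obstacleAhead(const std::vector<PointXYZ>& cloud, std::size_t minHits = 50)
{
    const float halfAngle = 15.0f * kPi / 180.0f;
    const float maxRange = 5.0f;
    std::size_t hits = 0;

    for (const auto& p : cloud) {
        if (!(p.z > 0.0f)) continue;  // skip invalid or behind-camera points
        const float range = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
        const float offAxis = std::atan2(std::sqrt(p.x * p.x + p.y * p.y), p.z);
        if (range < maxRange && offAxis < halfAngle && ++hits >= minHits)
            return true;  // raise the alarm / send the hover command here
    }
    return false;
}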
What I can offer you are:
globaldefined.hpp
Camera.hpp
Mat.hpp
Main.cpp
All the information about this camera is under: https://www.stereolabs.com/zed/specs/
Suppose UAV data:
Length:1500mm
Width:1500mm
Height:500mm
Speed:1000mm/s

How to determine sunset/sunrise including terrain shadows

In Google Earth you can use the "Sunlight" layer to view shadows cast by the terrain at any given DateTime: http://i.stack.imgur.com/YFGMj.png
However, I have not been able to find any way to access the sunlight/luminosity/shadow/etc values from the API.
I'm looking for a way to supply lat, long and DateTime to determine if an area is in sunlight, taking terrain shadows into account. There are countless services that will provide simple sunrise and sunset times, but these do not consider terrain. This can be done manually with Google Earth, but I'm looking for a programmatic method.
Thanks for any thoughts, ideas, leads...
I realise that this is an old question, but it surfaced in a Google search I just did, and I liked the focus.
Since you're looking for a programmatic way of determining if a point on earth given by a longitude and latitude tuple is exposed to sun at a given time, I can't help you right now. However, I'm in a position to set up such an API quite easily if it turns out this is a feature many people need. At suncurves.com we calculate sunrise and sunset times accounting for terrain. The solution we've set up so far is a web interface where a user can search for an address, or drag and drop an icon on a map, to get sunrise and sunset times through the year for that exact spot, accounting for terrain. We want to create an API for our data, but we do not have a clear specification of its scope yet. What you ask for requires that we:
1. Calculate the apparent horizon from the viewing point of the longitude and latitude. This means scanning the terrain data in a search radius of 30-50 km around your point.
2. Calculate the sun's position at the specified time.
3. Determine if the sun is under or over the horizon as given by the terrain surrounding your point, accounting for atmospheric refraction.
Here's an example from Chamonix, France, where the common flat-terrain versions of sunrise and sunset times are pretty worthless.
http://suncurves.com/v/7/
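To make step 3 concrete, here is a minimal C++ sketch of the terrain-aware daylight test. The horizon profile and the sun-position inputs are assumptions: the former would come from scanning terrain data as in step 1, the latter from any solar ephemeris.

#include <cmath>
#include <vector>

// Terrain skyline: elevation angle (degrees) of the horizon, sampled every
// stepDeg of azimuth starting at north.
struct HorizonProfile {
    double stepDeg;
    std::vector<double> elevDeg;

    double elevationAt(double azimuthDeg) const {
        const int n = static_cast<int>(elevDeg.size());
        int i = static_cast<int>(std::lround(azimuthDeg / stepDeg)) % n;
        if (i < 0) i += n;
        return elevDeg[i];  // nearest sample; interpolation omitted
    }
};

// Atmospheric refraction lifts the apparent sun roughly 0.57 degrees
// near the horizon.
constexpr double kRefractionDeg = 0.57;

// The point is sunlit when the refraction-corrected sun clears the terrain
// skyline in the direction where the sun currently stands.
bool isSunlit(double sunElevDeg, double sunAzimuthDeg, const HorizonProfile& h)
{
    return sunElevDeg + kRefractionDeg > h.elevationAt(sunAzimuthDeg);
}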
I am not sure about determining whether an AOI is in the sun or shade at a certain time; however, you can set the sun to be on or off in the API by using
GESun.setVisibility
Edit:
Using the GE plugin, create a LookAt with your desired AOI lat/long where the view is directly above, looking straight down. Depending on the size of your actual AOI, I would keep the view as low to the ground as possible.
Then capture a screenshot/image. I do not think this is possible through GE itself (if anyone knows a way, I would like to find out), so maybe use JavaScript to take it - I found this Q on SO that provides some insight.
Take one screenshot with GESun.setVisibility set ON and then another with it OFF.
Compare the two images for darkness/lightness to determine whether your AOI is in the shade (see the sketch below). You might find it better to surround your AOI with a polygon of some sort to help your program distinguish it from the rest of the image, depending on the height the LookAt was taken from, and so on.
I do not have any ideas on how to compare the images, but yet another search on SO resulted in this (I would presume finding the values of COLOR_BLACK in PHP ImageMagick) and this (the color-buckets idea).
Depending on your method of choice, it might help to convert your images to black and white before doing the comparison.
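If it helps, here is a minimal sketch of that brightness comparison in C++, using the public-domain stb_image loader (an assumption; any image library will do). The threshold is a placeholder to tune empirically.

#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

// Mean Rec. 601 luma of an image file, or -1.0 if it cannot be loaded.
double meanLuma(const char* path) {
    int w, h, n;
    unsigned char* px = stbi_load(path, &w, &h, &n, 3);  // force RGB
    if (!px) return -1.0;
    double sum = 0.0;
    for (int i = 0; i < w * h; ++i) {
        const unsigned char* p = px + 3 * i;
        sum += 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2];
    }
    stbi_image_free(px);
    return sum / (w * h);
}

// With the sun ON, a shaded AOI renders much darker than with the sun OFF.
bool likelyInShade(double lumaSunOn, double lumaSunOff, double threshold = 10.0) {
    return (lumaSunOff - lumaSunOn) > threshold;
}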