Do you know the base of amCharts' logarithmic scale? - amcharts4

I am porting a chart from Dundas to amCharts. This chart uses a logarithmic scale based on the natural log. It does not seem you can change the base of the scale in amCharts like you can in Dundas. Does anyone know what the base of the logarithmic scale is in amCharts? Is it the natural log (base e, ~2.718) or the base-10 log?

Related

Programmatic zoom and pan in chartjs with a logarithmic scale

In chartjs (v3) I'm programmatically zooming and panning by setting the min and max values for a series.
This works great, except when the series has a logarithmic scale: this method doesn't seem to work and produces some strange results.
Any ideas on how I can achieve programmatic panning and zooming with logarithmic scales? Simply adding/subtracting from the min/max values doesn't work correctly.
EDIT: I see that the zoom plugin API has a zoomScale() function, but setting the min and max has the same effect... Should min and max be calculated differently for logarithmic scales?
EDIT2: I'm trying to call the pan() function, which accepts Scale[] as a parameter, but I'm struggling to work out how to pass one of my scales. Any ideas?
For zooming and panning, I'm using the zoom plugin: https://github.com/chartjs/chartjs-plugin-zoom
You can also find a specific sample for a log axis: https://www.chartjs.org/chartjs-plugin-zoom/latest/samples/wheel/log.html
For anyone with a similar problem, I've got it working ...
I simply need to call the pan function like this:
mychart.pan(10, [mychart.scales['yMyScale']], 'none');
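For reference, the reason plain addition fails: on a logarithmic axis a pan is linear in log-space, so the visible min and max should be multiplied by a common factor rather than offset by a constant. A minimal Python sketch of the arithmetic (the function name and the 10% pan fraction are purely illustrative):

def pan_log_range(vmin, vmax, fraction):
    # shift a [vmin, vmax] range on a log axis by `fraction` of its
    # visible (log-space) width: scale both endpoints by the same factor
    factor = (vmax / vmin) ** fraction
    return vmin * factor, vmax * factor

# panning right by 10% of the visible width
new_min, new_max = pan_log_range(1.0, 1000.0, 0.10)
print(new_min, new_max)  # ~1.995, ~1995.3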

Dynamic vehicle modeling and simulation start condition

In one of my previous questions about Webots I asked which vehicle model is implemented. Apparently Ackermann vehicle dynamics are used. Could anyone give me a reference that explains this model?
My second question is somewhat more practical. I would like to start the simulation with the vehicle having a predefined velocity. How can I do that? I do not know of any field name that allows me to do it.
Can I change the maximum acceleration and minimum deceleration? Currently I am using only setCruisingSpeed to send velocity commands, even when I have to brake. I believe that time0to100 is used to calculate the maximum acceleration (which is applied uniformly); is it the same for deceleration?
Thanks
The documentation about the Webots Ackermann vehicle can be found here: https://www.cyberbotics.com/doc/automobile/ackermannvehicle
Additionally, you can easily find information about the Ackermann steering geometry on Google, e.g.:
https://en.wikipedia.org/wiki/Ackermann_steering_geometry
About the initial speed, the simplest solution is to make the vehicle drive at the desired speed and then save the simulation. However, this is not recommended: starting a simulation in Webots with a non-zero speed can lead to physics instabilities.
About the maximum acceleration and deceleration: if you are only using cruising control (and not throttle/torque control), then yes, time0to100 is used to compute the maximum uniform acceleration/deceleration.
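If it helps to sanity-check values, a sketch of the arithmetic, assuming the uniform bound is simply (100 km/h) / time0to100 and, per the answer above, that the same bound applies to deceleration:

KMH_100_IN_MS = 100 / 3.6  # 100 km/h expressed in m/s (~27.78)

def max_uniform_acceleration(time0to100_s):
    # uniform acceleration bound implied by a 0-100 km/h time
    # (assumed to apply symmetrically to braking under cruising control)
    return KMH_100_IN_MS / time0to100_s

print(max_uniform_acceleration(10.0))  # ~2.78 m/s^2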

How to classify edge images belonging to two highly variable but distinguishable classes according to the "curviness" of their edges (contours)

Introduction
I am working on a project in OpenCV in C++, and I am trying to classify a small image (a frame from a video) containing many edges into one of two groups. I would also like to retain information about how close the image is to class A or class B, because this does not seem to be a simple binary problem: sometimes the image is a mixture of classes A and B.
Class A can be roughly identified by the presence of curvy/smooth edges, similar to arches or parts of elliptic structures, which are often oriented toward some kind of center (like in a tunnel).
The other class, class B, is usually very chaotic, and the edges in such an image are definitely less curvy; they are often winding and usually lack any kind of "center of attention".
Classes:
The images from both classes are in following links:
Group A
Group B
Previous approaches and ideas
I was trying to separate each sufficiently long contour and then calculate some kind of curvature/curviness coefficient: basically, I downsampled the contour (by 10), then traversed along the new contour and calculated the average absolute angle between the two segments created by each triple of consecutive points. Based on this value I determined whether the current contour is "curvy" or not (a sketch of this calculation follows the list below). From that I calculated a few features:
total length of curvy contours / total length of all contours
number of curvy contours / number of all contours
etc.
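For concreteness, a minimal Python/NumPy sketch of the curvature coefficient described above (the downsampling step of 10 follows the question; the contour is assumed to be in OpenCV's findContours format, shape (N, 1, 2)):

import numpy as np

def curviness(contour, step=10):
    # average absolute turning angle along a downsampled contour:
    # keep every `step`-th point, then average the absolute angle
    # between the two segments formed by 3 consecutive points
    pts = contour.reshape(-1, 2)[::step].astype(float)
    if len(pts) < 3:
        return 0.0
    v1 = pts[1:-1] - pts[:-2]  # segment from point i to i+1
    v2 = pts[2:] - pts[1:-1]   # segment from point i+1 to i+2
    d = np.arctan2(v2[:, 1], v2[:, 0]) - np.arctan2(v1[:, 1], v1[:, 0])
    d = np.abs(np.arctan2(np.sin(d), np.cos(d)))  # wrap to [-pi, pi]
    return float(d.mean())  # higher value = "curvier" contour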
However, the curvature calculation itself does not seem to work very robustly (in one frame a contour is considered curvy; in the next, the same contour with a slightly changed shape is not, and so on), and the threshold that determines which contour is curvy (based on the "average curviness of a single contour") is difficult to set properly. Also, this approach in no way takes into account the specific "shape" of class A, so the classification results are very poor.
I was thinking about some kind of ellipse fitting, but as you can see, class A is more like a group of arches than an actual ellipse or circle.
I was reading about ways of comparing edge maps, such as Hausdorff matching, but they do not seem very helpful in my case. It is also important that the algorithm stays simple, because it has to run in real time and is only one part of a bigger piece of software.
Finally, my question is:
Do you have ideas for any other, better features and calculations that I could use to describe and then classify such edges/images? Is there a robust way to describe my classes?

What does the class_weight parameter do in scikit-learn SGD?

I am a frequent user of scikit-learn, and I want some insight into the class_weight parameter with SGD.
I was able to trace it down to this function call:
# pos_weight and neg_weight carry the per-class weights derived from class_weight
plain_sgd(coef, intercept, est.loss_function,
          penalty_type, alpha, C, est.l1_ratio,
          dataset, n_iter, int(est.fit_intercept),
          int(est.verbose), int(est.shuffle), est.random_state,
          pos_weight, neg_weight,
          learning_rate_type, est.eta0,
          est.power_t, est.t_, intercept_decay)
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/linear_model/stochastic_gradient.py
After this it goes into sgd_fast, and I am not very good with Cython. Can you give some clarity on these questions?
I have a class bias in the dev set: the positive class has around 15k samples and the negative class around 36k. Will class_weight resolve this problem, or would undersampling be a better idea? I am getting better numbers, but it's hard to explain why.
If yes, how does it actually do it? I mean, is it applied to the feature penalization, or is it a weight in the optimization function? How can I explain this to a layman?
class_weight can indeed help increase the ROC AUC or f1-score of a classification model trained on imbalanced data.
You can try class_weight="auto" to select weights that are inversely proportional to class frequencies. You can also pass your own weights as a Python dictionary with class labels as keys and weights as values.
Tuning the weights can be achieved via grid search with cross-validation.
Internally, this is done by deriving sample_weight from the class_weight (depending on the class label of each sample). Sample weights are then used to scale the contribution of individual samples to the loss function used to train the linear classification model with stochastic gradient descent.
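In other words, the mechanism is roughly the following (the labels and weight values here are made up purely for illustration):

import numpy as np

class_weight = {0: 1.0, 1: 2.4}  # illustrative per-class weights
y = np.array([0, 1, 1, 0, 1])    # illustrative class labels
# each sample receives the weight of its class; these per-sample weights
# then scale each sample's contribution to the SGD loss
sample_weight = np.array([class_weight[c] for c in y])
print(sample_weight)  # [1.  2.4 2.4 1.  2.4]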
The feature penalization is controlled independently via the penalty and alpha hyperparameters. sample_weight / class_weight have no impact on it.
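A sketch of both suggestions, passing an explicit dictionary and tuning it by grid search with cross-validation (the 15k/36k counts come from the question; the 0/1 labels, f1 scoring, and loss name are assumptions: current scikit-learn spells the inverse-frequency string "balanced" and the log loss "log_loss", while older versions used "auto" and "log"):

from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# inverse-frequency dictionary for ~15k positives vs ~36k negatives
manual_weights = {0: 1.0, 1: 36000.0 / 15000.0}

grid = GridSearchCV(
    SGDClassifier(loss="log_loss", random_state=0),
    param_grid={"class_weight": [None, "balanced", manual_weights]},
    scoring="f1",
    cv=5,
)
# grid.fit(X, y)  # X, y: your training data
# grid.best_params_ then shows which weighting worked best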

Target Detection - Algorithm suggestions

I am trying to do image detection in C++. I have two images:
Image Scene: 1024x786
Person: 36x49
And I need to identify this particular person in the scene. I've tried to use correlation, but the image is too noisy, so it doesn't give correct/accurate results.
I've been thinking/researching methods that would best solve this task and these seem the most logical:
Gaussian filters
Convolution
FFT
Basically, I would like to remove the noise from the images, so that I can then use correlation to find the person more effectively.
I understand that an FFT will be hard to implement and/or may be slow especially with the size of the image I'm using.
Could anyone offer any pointers to solving this? What would the best technique/algorithm be?
In Andrew Ng's Machine Learning class we did this exact problem using neural networks and a sliding window:
train a neural network to recognize the particular feature you're looking for, using tagged data that says what each image contains, with a 36x49 window (or whatever other size you want).
for recognizing a new image, take the 36x49 rectangle and slide it across the image, testing at each location. When you move to a new location, move the window right by a certain number of pixels, call it jump_size (say 5 pixels). When you reach the right-hand side of the image, go back to 0 and increment the y of your window by jump_size (a sketch of this loop follows below).
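A sketch of that scanning loop (the image is assumed to be a NumPy array, and classify stands in for the trained network's prediction on a single patch):

def slide_windows(image, classify, win_w=36, win_h=49, jump_size=5):
    # test every window position: move right by jump_size pixels, and on
    # reaching the right edge go back to x = 0 and step y down by jump_size
    h, w = image.shape[:2]
    for y in range(0, h - win_h + 1, jump_size):
        for x in range(0, w - win_w + 1, jump_size):
            patch = image[y:y + win_h, x:x + win_w]
            yield x, y, classify(patch)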
Neural networks are good for this because the noise isn't a huge issue: you don't need to remove it. They are also good because they can recognize images similar to ones seen before but slightly different (the face is at a different angle, the lighting is slightly different, etc.).
Of course, the downside is that you need the training data to do it. If you don't have a set of pre-tagged images then you might be out of luck - although if you have a Facebook account you can probably write a script to pull all of yours and your friends' tagged photos and use that.
An FFT only makes sense once you have already sorted the image with a kd-tree or a hierarchical tree. I would suggest mapping the image's 2D RGB values to a 1D curve and reducing some complexity before a frequency analysis.
I do not have an exact algorithm to propose, because I have found that target detection methods depend greatly on the specific situation. Instead, I have some tips and advice. Here is what I would suggest: find a specific characteristic of your target and design your code around it.
For example, if you have access to the color image, use the fact that Wally doesn't have much green or blue in him. Subtract the average of the blue and green channels from the red channel and you'll have a much better starting point. (Apply the same operation to both the image and the target.) This will not work, though, if the noise is color-dependent (i.e., different in each channel).
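A sketch of that channel trick (OpenCV's BGR channel order is assumed; apply it to both the scene and the target):

import numpy as np

def red_emphasis(bgr_image):
    # subtract the average of the blue and green channels from the red
    # channel, so red-heavy regions (like Wally's stripes) stand out
    b, g, r = [bgr_image[:, :, i].astype(np.float32) for i in range(3)]
    return r - (b + g) / 2.0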
You could then use correlation on the transformed images with better results. The downside of correlation is that it only works with an exact cut-out of the first image... not very useful if you need to find the target in order to find the target! Instead, I suppose that an averaged version of your target (a combination of many Wally pictures) would work up to a point.
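Continuing the sketch above, the correlation itself can be done with OpenCV's template matching (TM_CCOEFF_NORMED is one normalized-correlation variant; the file names are placeholders and red_emphasis is the helper sketched above):

import cv2

scene_t = red_emphasis(cv2.imread("scene.png"))
target_t = red_emphasis(cv2.imread("wally.png"))
res = cv2.matchTemplate(scene_t, target_t, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)  # best-match score and position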
My final advice: in my personal experience of working with noisy images, spectral analysis is usually a good idea, because noise tends to contaminate only one particular scale (which would hopefully be a different scale from Wally's!). In addition, correlation is mathematically equivalent to comparing the spectral characteristics of your image and the target.