Statsmodels - compare results of models side by side

Is there a way to compare the results of 2 different statsmodels models and show them side by side in a single summary table?
Thank you very much!
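For reference, one way this is commonly done is with statsmodels' summary_col. A minimal sketch, using made-up example data and model names as placeholders rather than your actual models:

import numpy as np
import statsmodels.api as sm
from statsmodels.iolib.summary2 import summary_col

# hypothetical example data; replace with your own models
x = np.random.rand(100)
X1 = sm.add_constant(x)
X2 = sm.add_constant(np.column_stack([x, x ** 2]))
y = 1 + 2 * x + np.random.randn(100) * 0.1

res1 = sm.OLS(y, X1).fit()
res2 = sm.OLS(y, X2).fit()

# side-by-side comparison table of both fitted models
print(summary_col([res1, res2], stars=True, model_names=['Model 1', 'Model 2']))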

Related

AWS QuickSight calculated field gives incorrect result for simple division

I have a dataset with the fields targeted and opens, and I need to add a calculated field opens per targeted, which essentially means doing a simple division of those 2 values.
My calculated field is as follows:
{opens}/{targeted}
but when I display a simple table with the values, they are completely incorrect.
If I try any other operator, like + or *, the calculations are correct.
I'm completely out of ideas on how to debug this. I've simplified the dataset to just the targeted and opens columns; it can't get any simpler.
I had the same problem and fixed it by wrapping the columns with the sum() function, like this:
sum({opens})/sum({targeted})
I think you need to make AWS understand that you are working with floating-point numbers.
1.0*{opens}/{targeted}
If that still doesn't work, also try
(1.0*{opens})/({targeted}*1.0)
It should give you the desired output (not tested, let me know if it doesn't work).

Why is my dictionary of trained models not predicting accurately?

I have a lot of data on brands and items from those brands. What I'm doing is a GridSearch on a bunch of regression models and adding the best estimator to a dictionary, so it looks like this:
{'Kellogs': {'Fruit Loops': MLPRegressor(<best parameters>)}}
Then it predicts that 1 box costs $2.
Then I pickle it for use in a Django application. The problem is that when I train it locally it runs SO accurately that it's hard to deny that it's right, but when I import 'trained_models.p' and call it like so - trained_models[brand][cereal].predict(np.array(1).reshape(-1, 1)) - it predicts $12.
What am I doing wrong?
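To make the setup concrete, here is a minimal sketch of the workflow described above; the data, parameter grid, and file name are hypothetical placeholders, not the asker's actual code:

import pickle
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

# hypothetical training data: quantity -> price
X = np.array([1, 2, 3, 4, 5]).reshape(-1, 1)
y = np.array([2.0, 4.1, 5.9, 8.2, 9.8])

# grid search and keep the best estimator
search = GridSearchCV(MLPRegressor(max_iter=2000),
                      {'hidden_layer_sizes': [(10,), (50,)]}, cv=2)
search.fit(X, y)

# nested dict keyed by brand and item, holding the best estimator
trained_models = {'Kellogs': {'Fruit Loops': search.best_estimator_}}

# pickle for later use (e.g. in a Django application)
with open('trained_models.p', 'wb') as f:
    pickle.dump(trained_models, f)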
All of the regressors contained in the dictionary need to be fit again in order to predict most accurately.
It usually doesn't take too long.
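A minimal sketch of that suggestion, assuming the pickle file and the training data X, y from the hypothetical sketch above are available where the models are loaded:

import pickle
import numpy as np

with open('trained_models.p', 'rb') as f:
    trained_models = pickle.load(f)

# re-fit the loaded estimator on the training data before predicting
model = trained_models['Kellogs']['Fruit Loops']
model.fit(X, y)
print(model.predict(np.array(1).reshape(-1, 1)))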

Python: Trying to create a bar plot from data in a multi-index pivot table

I've seen some answers that are tangentially related to my question but they're sufficiently different that I'm still not sure how to go about this. I have a pivot table in Python organized as follows:
and I would like to create a bar plot where on the x-axis I have 6 sections, one for each of the 6 departments (A, B, C, etc.), and for each department two bars that show the value in the "Perc Admitted" column for Males and Females, colored differently for clarity. I've been trying to use Seaborn's barplot for this, but cannot seem to get it to come together, i.e. trying for example
sns.barplot(data = admit_data, x = 'Dept', y = 'Perc Accepted', hue = 'Gender')
gets me a "ValueError: Could not interpret input 'Dept'" error. Still learning Python and not sure how best to do this; any suggestions would be greatly appreciated. I'm also open to using matplotlib.pyplot or another library if that provides an elegant solution. Thank you!
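For what it's worth, seaborn generally wants 'Dept' and 'Gender' as ordinary columns rather than index levels, so one common fix is to reset the index first. A sketch with a small hypothetical stand-in for the pivot table (the real column names and values may differ):

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# hypothetical stand-in for the pivot table described above
admit_data = pd.DataFrame(
    {'Perc Accepted': [62, 82, 63, 68, 37, 34]},
    index=pd.MultiIndex.from_product([['A', 'B', 'C'], ['Male', 'Female']],
                                     names=['Dept', 'Gender']))

# turn the multi-index (Dept, Gender) back into regular columns
plot_data = admit_data.reset_index()

sns.barplot(data=plot_data, x='Dept', y='Perc Accepted', hue='Gender')
plt.show()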

SegNet results on the train set (tested via test_segmentation.py)

I run SegNet on my own dataset (following the SegNet tutorial), and I see great results via test_segmentation.py.
My problem is that I want to see the real net output and not test_segmentation's own colorization (via classes).
For example, if I have trained the net with 2 classes, then after training I want to see not only 2 colors (as we see with the classes), but the net's real-valued segmentation (e.g. [0.22, 0.19, 0.3, ...]), lighter and darker as the net sees it.
I hope that I explained myself well. Thanks for helping.
You could use a Python script to achieve what you want. Take a look at this script.
The command out = out['argmax'] extracts the raw output, so you can get a segmentation map with 'lighter and darker' values, as you wanted.
When you say the 'real' net color segmentation, I will assume that you mean the probability maps. Effectively, the last layer will have one map for every class, and if you check the predict function in inference.py, they take the argmax, i.e. the channel (which represents the class) with the highest probability. If you want to get these maps, you just have to get the data without computing the argmax; something like:
predicted = net.blobs['prob'].data
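Building on that, a rough sketch of how one might save each class's probability map as a grayscale image; net is the loaded Caffe net from the answer above, the blob name and shape (1, num_classes, height, width) are assumptions, and scipy.misc.toimage is deprecated in newer SciPy versions:

import scipy.misc

prob = net.blobs['prob'].data  # assumed shape: (1, num_classes, height, width)

for c in range(prob.shape[1]):
    # each channel is the per-pixel probability of class c, in [0, 1]
    scipy.misc.toimage(prob[0, c], cmin=0.0, cmax=1.0).save('class_%d.png' % c)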
I solved it. The solution is to set cmin and cmax to range from 0 to 1 in the scipy saving method, for example: scipy.misc.toimage(output, cmin=0.0, cmax=1.0).save('/path/.../image.png')

AWS Machine Learning issue

I use AWS Machine Learning to predict if a tweet message is positive or negative.
I have a CSV file with about 1000 tweets (2 columns: "message" TEXT and "is_positive" BINARY).
If the message contains some words that I've defined on my side, "is_positive" is set to 0 (else 1).
My issue is that evaluations always return 1 (even if I try a message with a "bad" word).
How can I have more relevant results?
Thanks for your help!
Navigate to your datasource and select your ML model. Clicking on the attributes will give you an idea of how "statistically relevant" the columns in your training data are. Your result is most probably due to your training data. Since the entire tweet message is in one column, the model is most likely looking for a correlation across all the words in the sample tweets. A better approach may be to use a "sentiment" library, of which there are publicly available versions, which would shift your model to look at each word in the tweet rather than the tweet as a whole, as yours currently does.
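As one example of such a publicly available sentiment library (a sketch only, separate from the AWS model): NLTK ships a VADER analyzer that scores each message word by word; mapping its compound score to the 0/1 label used in the CSV is an assumption for illustration.

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download('vader_lexicon')  # one-time download of the word-level lexicon
analyzer = SentimentIntensityAnalyzer()

scores = analyzer.polarity_scores("This is a terrible, awful tweet")
# 'compound' is in [-1, 1]; map it to the 0/1 label used in the CSV
is_positive = 1 if scores['compound'] >= 0 else 0
print(scores, is_positive)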