Epoch time in M language - Power BI

How can I convert current datetime to epoch/unix time in seconds?
The functions listed below do not offer this, as far as I can see:
DateTime functions
In C# it would be done this way:
var epoch = (DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)).TotalSeconds;

You can do something quite similar in M:
epoch = Duration.TotalSeconds(DateTimeZone.UtcNow() - #datetimezone(1970, 1, 1, 0, 0, 0, 0, 0))

Related

How can I make something happen x percent of the time?

I have to write a piece of code in the form of c*b, where c and b are random numbers and the product is smaller than INT_MAX. But b or c has to be equal to 0 10% of the time, and I don't know how to do that.
srand ( time(NULL) );
int product = b*c;
c = rand() % 10000;
b = rand() % INT_MAX/c;
b*c < INT_MAX;
cout<<""<<endl;
cout << "What is " << c << "x" << b << "?"<<endl;
cin >> guess;
You can use std::piecewise_constant_distribution
std::random_device rd;
std::mt19937 gen(rd());
double interval[] = {0, 0, 1, Max};
double weights[] = { .10, 0, .9};
std::piecewise_constant_distribution<> dist(std::begin(interval),
std::end(interval),
weights);
dist(gen);
An int is always less than or equal to INT_MAX, so you can simply multiply a random boolean variable that is true with 90% probability by the product of two uniformly distributed integers:
std::random_device rd;
std::mt19937 generator(rd());
std::uniform_int_distribution<int> uniform;
std::bernoulli_distribution bernoulli(0.9); // 90% 1 ; 10% 0
const int product = bernoulli(generator) * uniform(generator) * uniform(generator);
If you had a specific limit in mind, say N for the individual numbers and M for the product of the two numbers, you can do:
std::default_random_engine generator;
std::uniform_int_distribution<int> uniform(0,N);
std::bernoulli_distribution bernoulli(0.9); // 90% 1 ; 10% 0
int product;
do { product = bernoulli(generator) * uniform(generator) * uniform(generator); }
while(!(product<M));
Edit: std::piecewise_constant_distribution is more elegant; I didn't know about it until I read the other answer.
If you want a portable solution that does not depend on the standard C++ library, and which is also faster and perhaps simpler to understand, you can use the following snippet. The variable random_sequence is a pre-generated array of random numbers in which 0 occurs 10% of the time. The variables runs and len are used to index into this array as an endless sequence. This is, however, a simple solution, since the pattern repeats after 90 runs. If you don't care about the pattern repeating, this method will work fine.
int runs = 0;
int len = 90; //The length of the random sequence.
int random_sequence[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
int coefficient = random_sequence[runs % len];
runs++;
Then, for whatever variable you want to be 0 10% of the time, you do it like this:
float b = coefficient * rand();
or
float c = coefficient * rand();
If you want each variable to be 0 10% of the time individually, then it's like this:
float b = coefficient * rand();
coefficient = random_sequence[runs % len];
float c = coefficient * rand();
And if you want them to be 0 10% of the time jointly, then the random_sequence array must be like this:
int len = 80;
int random_sequence[] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1 };
And use
float b = coefficient * rand();
float c = coefficient * rand();
I gave it a shot here: http://codepad.org/DLbVfNVQ. The average value is somewhere in the neighborhood of 0.4. -CR

Python: How to make values of a list in a list of lists zeros

I would like to set all the values in the first list inside the list of lists named "child_Before" (shown below) to zero. The piece of code I wrote to accomplish this task is shown after the list:
child_Before = [[9, 12, 7, 3, 13, 14, 10, 5, 4, 11, 8, 6, 2],
[1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1],
[[1, 0], [1, 1]]]
for elem in range(len(child_Before[0])):
    child_Before[0][elem] = 0
Below is the expected result:
child_After = [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1],
[[1, 0], [1, 1]]]
However, I think there should be a more nimble way to accomplish this exercise. Hence, I welcome your help. Thank you in advance.
Just to add a creative answer:
import numpy as np
child_Before[0] = (np.array(child_Before[0]) & 0).tolist()
This is bad practice though, since I'm using bitwise operations in a scenario where they are not intuitive, and I think there is a slight chance I'm making two passes. On the bright side, the & that produces all the zeros is a single vectorized operation.
Just create a list of 0s with the same length as the original list.
# Answer to this question - make first list in the list to be all 0
child_Before[0] = [0] * len(child_Before[0])
As for your answer, I can correct it to make all the elements in the lists of this list zero:
# Make all elements 0
for child in range(len(child_Before)):
    child_Before[child] = [0] * len(child_Before[child])
Use list comprehension:
child_after = [[i if n != 0 else 0 for i in j] for n, j in enumerate(child_Before)]

Extract Optimal Features from Recursive Feature Elimination (RFE)

I have a dataset consisting of categorical and numerical data with 124 features. In order to reduce its dimensionality I want to remove irrelevant features. However, to run the dataset against a feature selection algorithm, I one-hot encoded it with get_dummies, which increased the number of features to 391.
In[16]:
X_train.columns
Out[16]:
Index([u'port_7', u'port_9', u'port_13', u'port_17', u'port_19', u'port_21',
...
u'os_cpes.1_2', u'os_cpes.1_1'], dtype='object', length=391)
With the resulting data I can run recursive feature elimination with cross-validation, as per the scikit-learn RFECV example, which produces a plot of cross-validated score vs. the number of features selected.
Given that the optimal number of features identified was 8, how do I identify the feature names? I am assuming that I can extract them into a new DataFrame for use in a classification algorithm?
[EDIT]
I have achieved this as follows, with help from this post:
def column_index(df, query_cols):
    cols = df.columns.values
    sidx = np.argsort(cols)
    return sidx[np.searchsorted(cols, query_cols, sorter=sidx)]

feature_index = []
features = []
column_index(X_dev_train, X_dev_train.columns.values)

for num, i in enumerate(rfecv.get_support(), start=0):
    if i == True:
        feature_index.append(str(num))

for num, i in enumerate(X_dev_train.columns.values, start=0):
    if str(num) in feature_index:
        features.append(X_dev_train.columns.values[num])

print("Features Selected: {}\n".format(len(feature_index)))
print("Features Indexes: \n{}\n".format(feature_index))
print("Feature Names: \n{}".format(features))
which produces:
Features Selected: 8
Features Indexes:
['5', '6', '20', '26', '27', '28', '67', '98']
Feature Names:
['port_21', 'port_22', 'port_199', 'port_512', 'port_513', 'port_514', 'port_3306', 'port_32768']
Given that one-hot encoding introduces multicollinearity, I don't think the target column selection is ideal, because the features it has chosen are non-encoded continuous data features. I have tried re-adding the target column unencoded, but RFE throws the following error because the data is categorical:
ValueError: could not convert string to float: Wireless Access Point
Do I need to group multiple one hot encoded feature columns to act as the target?
[EDIT 2]
If I simply LabelEncode the target column, I can use this target as 'y' (see the example again). However, the output determines only a single feature (the target column) as optimal. I think this might be because of the one-hot encoding; should I be looking at producing a dense array, and if so, can it be run against RFE?
Thanks,
Adam
You can do this:
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
rfe = RFE(model, 5)
rfe = rfe.fit(X, y)
print(rfe.support_)
print(rfe.ranking_)
f = rfe.get_support(1)  # the most important features
X = df[df.columns[f]]   # final features
Then you can use X as the input to your neural network or any other algorithm.
Answering my own question, I figured out the issue was related to the way I had one-hot encoded the data. Initially, I ran one hot encoding against all categorical columns as follows:
ohe_df = pd.get_dummies(df[df.columns]) # One-hot encode all columns
This introduced a large number of additional features. Taking a different approach, with some help from here, I have modified the encoding to encode multiple columns on a per-column/feature basis as follows:
cf_df = df.select_dtypes(include=[object]) # Get categorical features
nf_df = df.select_dtypes(exclude=[object]) # Get numerical features
ohe_df = nf_df.copy()
for feature in cf_df:
    ohe_df[feature] = ohe_df.loc[:, (feature)].str.get_dummies().values.tolist()
Producing:
ohe_df.head(2) # Only showing a subset of the data
+---+---------------------------------------------------+-----------------+-----------------+-----------------------------------+---------------------------------------------------+
| | os_name | os_family | os_type | os_vendor | os_cpes.0 |
+---+---------------------------------------------------+-----------------+-----------------+-----------------------------------+---------------------------------------------------+
| 0 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... | [0, 1, 0, 0, 0] | [1, 0, 0, 0, 0] | [0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0] | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, ... |
| 1 | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... | [0, 0, 0, 1, 0] | [0, 0, 0, 1, 0] | [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0] | [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... |
+---+---------------------------------------------------+-----------------+-----------------+-----------------------------------+---------------------------------------------------+
Unfortunately, although this was what I was searching for, it didn't execute against RFECV. Next I thought perhaps I could take a slice of all the new features and pass them in as the target, but this resulted in an error. Finally, I realised I would have to iterate through all target values and take the top outputs from each. The code ended up looking something like this:
for num, feature in enumerate(features, start=0):
    X = X_dev_train
    y = X_dev_train[feature]
    # Create the RFE object and compute a cross-validated score.
    svc = SVC(kernel="linear")
    # The "accuracy" scoring is proportional to the number of correct classifications
    # step is the number of features to remove at each iteration
    rfecv = RFECV(estimator=svc, step=1, cv=StratifiedKFold(kfold), scoring='accuracy')
    try:
        rfecv.fit(X, y)
        print("Number of observations in each fold: {}".format(len(X)/kfold))
        print("Optimal number of features : {}".format(rfecv.n_features_))
        g_scores = rfecv.grid_scores_
        indices = np.argsort(g_scores)[::-1]
        print('Printing RFECV results:')
        for num2, f in enumerate(range(X.shape[1]), start=0):
            if g_scores[indices[f]] > 0.80:
                if num2 < 10:
                    print("{}. Number of features: {} Grid_Score: {:0.3f}".format(f + 1, indices[f]+1, g_scores[indices[f]]))
        print "\nTop features sorted by rank:"
        results = sorted(zip(map(lambda x: round(x, 4), rfecv.ranking_), X.columns.values))
        for num3, i in enumerate(results, start=0):
            if num3 < 10:
                print i
        # Plot number of features VS. cross-validation scores
        plt.rc("figure", figsize=(8, 5))
        plt.figure()
        plt.xlabel("Number of features selected")
        plt.ylabel("CV score (of correct classifications)")
        plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
        plt.show()
    except ValueError:
        pass
I'm sure this could be cleaner, maybe even plotted in one graph, but it works for me.
Cheers,

Find all keys with the same max value in Python

Hi, I have a dictionary like the one below:
b = {'tat': 0, 'del': 4, 'galadriel': 0, 'sire': 0, 'caulimovirus': 4, 'retrofit': 0, 'tork': 0, 'caulimoviridae_dom2': 0, 'reina': 4, 'oryco': 2, 'cavemovirus': 1, 'soymovrius': 0, 'badnavirus': 0, 'crm': 0, 'athila': 0}
I want to find all keys with the maximum value, as a list. However,
max(b, key=b.get)
only gives the first such key, 'del'.
How should I find all the keys with the maximum value? Like below:
new_list = ['del', 'caulimovirus', 'reina']
maxv = max(b.values())
new_list = [k for k, v in b.items() if v == maxv]

Getting the number of trailing 1 bits

Are there any efficient bitwise operations I can do to get the number of set bits that an integer ends with? For example, 11₁₀ = 1011₂ has two trailing 1 bits, and 8₁₀ = 1000₂ has zero trailing 1 bits.
Is there a better algorithm for this than a linear search? I'm implementing a randomized skip list and using random numbers to determine the maximum level of an element when inserting it. I am dealing with 32 bit integers in C++.
Edit: assembler is out of the question; I'm interested in a pure C++ solution.
Calculate ~i & (i + 1) and use the result as a lookup in a table with 32 entries. 1 means zero 1s, 2 means one 1, 4 means two 1s, and so on, except that 0 means 32 1s.
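Spelled out, a minimal sketch of that idea: ~i & (i + 1) isolates the lowest 0 bit of i, and the position of that bit equals the number of trailing 1s. Mapping the isolated power of two onto a 32-entry table can be done with the standard de Bruijn multiply from the Bit Twiddling Hacks page (the constant and table below are taken from there; the function name is just for illustration):
#include <stdint.h>

int trailing_ones(uint32_t i) {
    uint32_t lowest_zero = ~i & (i + 1);   // isolate the lowest 0 bit of i as a power of two
    if (lowest_zero == 0) return 32;       // i was 0xFFFFFFFF: all 32 bits are trailing 1s
    // de Bruijn lookup: table[((1u << k) * 0x077CB531u) >> 27] == k
    static const int table[32] = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };
    return table[(lowest_zero * 0x077CB531u) >> 27];
}
The direct reading of the answer, a table keyed on the power-of-two value itself (1 means zero 1s, 2 means one 1, 4 means two 1s, and 0 means 32 1s), works just as well; the multiply merely packs those 33 cases into 32 consecutive slots plus the explicit zero check.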
Taking the answer from Ignacio Vazquez-Abrams and completing it with the count rather than a table:
b = ~i & (i+1); // this gives a 1 to the left of the trailing 1's
b--; // this gets us just the trailing 1's that need counting
b = (b & 0x55555555) + ((b>>1) & 0x55555555); // 2 bit sums of 1 bit numbers
b = (b & 0x33333333) + ((b>>2) & 0x33333333); // 4 bit sums of 2 bit numbers
b = (b & 0x0f0f0f0f) + ((b>>4) & 0x0f0f0f0f); // 8 bit sums of 4 bit numbers
b = (b & 0x00ff00ff) + ((b>>8) & 0x00ff00ff); // 16 bit sums of 8 bit numbers
b = (b & 0x0000ffff) + ((b>>16) & 0x0000ffff); // sum of 16 bit numbers
At the end, b will contain the count of the trailing 1s (the masks, adds, and shifts perform a population count of that trailing-ones mask).
Unless I goofed of course. Test before use.
The Bit Twiddling Hacks page has a number of algorithms for counting trailing zeros. Any of them can be adapted by simply inverting your number first, and there are probably clever ways to alter the algorithms in place without doing that as well. On a modern CPU with cheap floating-point operations, the best is probably this:
unsigned int v=~input; // find the number of trailing ones in input
int r; // the result goes here
float f = (float)(v & -v); // cast the least significant bit in v to a float
r = (*(uint32_t *)&f >> 23) - 0x7f;
if(r==-127) r=32;
GCC has __builtin_ctz and other compilers have their own intrinsics. Just protect it with an #ifdef:
#ifdef __GNUC__
int trailingones( uint32_t in ) {
    return ~in == 0 ? 32 : __builtin_ctz( ~in );
}
#else
// portable implementation
#endif
On x86, this builtin will compile to one very fast instruction. Other platforms might be somewhat slower, but most have some kind of bit-counting functionality that will beat what you can do with pure C operators.
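For the #else branch, any of the portable routines from the other answers here would do; as a stand-in, here is a minimal (linear, so slower) sketch under the same name. This is an assumption for illustration, not part of the original answer:
#include <stdint.h>

// Hypothetical portable fallback: test and clear one trailing bit at a time.
int trailingones( uint32_t in ) {
    int count = 0;
    while ( in & 1u ) {   // the low bit is 1, so it is part of the trailing run
        ++count;
        in >>= 1;
    }
    return count;
}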
There may be better answers available, particularly if assembler isn't out of the question, but one viable solution would be to use a lookup table. It would have 256 entries, each returning the number of contiguous trailing 1 bits. Apply it to the lowest byte. If it's 8, apply to the next and keep count.
Implementing Steven Sudit's idea...
uint32_t n; // input value
uint8_t o = 0; // number of trailing one bits in n
uint8_t trailing_ones[256] = {
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 7,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 6,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 5,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 4,
0, 1, 0, 2, 0, 1, 0, 3, 0, 1, 0, 2, 0, 1, 0, 8};
uint8_t t;
do {
    t = trailing_ones[n & 255];
    o += t;
} while (t == 8 && (n >>= 8));
This runs 1 (best case) to 4 (worst case) iterations, about 1.004 on average, each iteration costing one lookup, one comparison, and three arithmetic operations, minus one arithmetic operation on the last pass.
This code counts the number of trailing zero bits, taken from here (there's also a version that depends on the IEEE 32 bit floating point representation, but I wouldn't trust it, and the modulus/division approaches look really slick - also worth a try):
int CountTrailingZeroBits(unsigned int v) // 32 bit
{
    unsigned int c = 32; // c will be the number of zero bits on the right
    static const unsigned int B[] = {0x55555555, 0x33333333, 0x0F0F0F0F, 0x00FF00FF, 0x0000FFFF};
    static const unsigned int S[] = {1, 2, 4, 8, 16}; // Our Magic Binary Numbers

    for (int i = 4; i >= 0; --i) // unroll for more speed
    {
        if (v & B[i])
        {
            v <<= S[i];
            c -= S[i];
        }
    }
    if (v)
    {
        c--;
    }
    return c;
}
and then to count trailing ones:
int CountTrailingOneBits(unsigned int v)
{
    return CountTrailingZeroBits(~v);
}
http://graphics.stanford.edu/~seander/bithacks.html might give you some inspiration.
Implementation based on Ignacio Vazquez-Abrams's answer:
uint8_t trailing_ones(uint32_t i) {
    return log2(~i & (i + 1));
}
Implementation of log2() is left as an exercise for the reader (see here)
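For completeness, a minimal integer log2 sketch that avoids floating point, assuming its argument is a nonzero power of two, which ~i & (i + 1) always is except when i has all 32 bits set (the name ilog2 is just for illustration):
#include <stdint.h>

// Position of the single set bit in a nonzero power of two:
// count how many right shifts it takes to reach 1.
static uint8_t ilog2(uint32_t x) {
    uint8_t r = 0;
    while (x >>= 1) ++r;
    return r;
}
The all-ones input still needs a special case, since ~i & (i + 1) is then 0.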
Taking #phkahler's answer, you can define the following preprocessor macro:
#define trailing_ones(x) __builtin_ctz(~(x) & ((x) + 1))
Since ~x & (x + 1) yields a single 1 bit just above the trailing ones, counting its trailing zeros gives the number of trailing ones (as with the function above, x equal to all ones needs a special case, because __builtin_ctz(0) is undefined).
Blazingly fast ways to find the number of trailing 0's are given in Hacker's Delight.
You could complement your integer (or more generally, word) to find the number of trailing 1's.
I have this sample for you:
#include <stdio.h>
#include <stdbool.h>
// Counts trailing bits: trailing 1s when zero is false,
// trailing 0s when zero is true (the word is complemented first).
int trailbits ( unsigned int bits, bool zero )
{
    unsigned int compbits = zero ? ~bits : bits;
    int trail = 0;
    while ( compbits & 0x01 )
    {
        trail++;
        compbits >>= 1;
    }
    return trail;
}
void PrintBits ( unsigned int bits )
{
    unsigned int pbit = 0x80000000;
    for ( int len = 0; len < 32; len++ )
    {
        printf ( "%c ", pbit & bits ? '1' : '0' );
        pbit = pbit >> 1;
    }
    printf ( "\n" );
}
int main(void)
{
    unsigned int forbyte = 0x0CC00990;
    PrintBits ( forbyte );
    printf ( "Trailing ones is %d\n", trailbits ( forbyte, false ));
    printf ( "Trailing zeros is %d\n", trailbits ( forbyte, true ));
    return 0;
}