What is the average of probabilities combination rule in Weka's Vote classifier? What is the code inside distributionForInstanceAverage doing?
It just returns the mean of the probability distributions produced by each of the base classifiers (either learnt within Vote or built outside and loaded by Vote).
It first sums the distributions of any models learnt within Vote, then those of any loaded models, and finally divides each element of the array by the total number of models.
protected double[] distributionForInstanceAverage(Instance instance) throws Exception {
    // Init probs array with the first classifier used by the model (learnt or loaded)
    double[] probs = (m_Classifiers.length > 0)
        ? getClassifier(0).distributionForInstance(instance)
        : m_preBuiltClassifiers.get(0).distributionForInstance(instance);

    // Add the distributions of any classifiers built within the Vote classifier
    for (int i = 1; i < m_Classifiers.length; i++) {
        double[] dist = getClassifier(i).distributionForInstance(instance);
        for (int j = 0; j < dist.length; j++) {
            probs[j] += dist[j];
        }
    }

    // Add the distributions of any classifiers built outside of Vote (loaded in)
    int index = (m_Classifiers.length > 0) ? 0 : 1;
    for (int i = index; i < m_preBuiltClassifiers.size(); i++) {
        double[] dist = m_preBuiltClassifiers.get(i).distributionForInstance(instance);
        for (int j = 0; j < dist.length; j++) {
            probs[j] += dist[j];
        }
    }

    // Divide each probability by the total number of classifiers used in Vote (to get the mean)
    for (int j = 0; j < probs.length; j++) {
        probs[j] /= (double) (m_Classifiers.length + m_preBuiltClassifiers.size());
    }
    return probs;
}
Given an m x n integer grid, return the size (i.e., the side length k) of the largest magic square that can be found within this grid.
The question can be found here on LeetCode.
I first wanted to see if a naive brute-force approach would pass, so I came up with the following algorithm:
Iterate over all values of k (from min(rows, cols) of the matrix down to 1).
For each value of k, check whether a magic square of dimensions k x k exists by examining every possible k x k submatrix, and return k as soon as one is found. Checking all submatrices for a given k is O(rows*cols*k^2).
So that makes the overall complexity O(k^3*rows*cols). (Please correct me if I am wrong.)
I have attached my code in C++ below.
class Solution {
public:
    int largestMagicSquare(vector<vector<int>>& grid) {
        int rows = grid.size(), cols = grid[0].size();
        for (int k = min(rows, cols); k >= 2; k--) { // iterate over all values of k
            for (int i = 0; i < rows - k + 1; i++) {
                for (int j = 0; j < cols - k + 1; j++) {
                    int startX = i, startY = j, endX = i + k - 1, endY = j + k - 1;
                    int diagSum = 0, antiDiagSum = 0;
                    bool valid = true;
                    // calculate sum of the diagonal
                    for (int count = 0; count < k; count++) {
                        diagSum += grid[startX][startY];
                        startX++, startY++;
                    }
                    // this is the sum that must be the same across all rows, cols, diag and antidiag
                    int sum = diagSum;
                    // calculate sum of the antidiagonal
                    for (int count = 0; count < k; count++) {
                        antiDiagSum += grid[endX][endY];
                        endX--, endY--;
                    }
                    if (antiDiagSum != sum) continue;
                    // calculate the sum of each row
                    for (int r = i; r <= i + k - 1; r++) {
                        int rowSum = 0;
                        for (int c = j; c <= j + k - 1; c++) {
                            rowSum += grid[r][c];
                        }
                        if (rowSum != sum) {
                            valid = false;
                            break;
                        }
                    }
                    if (!valid) continue;
                    // calculate the sum of each column
                    for (int c = j; c <= j + k - 1; c++) {
                        int colSum = 0;
                        for (int r = i; r <= i + k - 1; r++) {
                            colSum += grid[r][c];
                        }
                        if (colSum != sum) {
                            valid = false;
                            break;
                        }
                    }
                    if (!valid) continue;
                    return k;
                }
            }
        }
        return 1;
    }
};
I thought I would optimize the solution once this works (maybe binary search over the values of k). However, my code fails on a really large test case (a 50x50 matrix) after passing 74/80 test cases on LeetCode.
I tried to find the source(s) of the failure, but I am not really sure where the error is.
Any help would be appreciated. Thanks!
Please let me know if further clarification about the code is needed.
The calculation of antiDiagSum is wrong: it actually sums the values on the same diagonal as diagSum, just in reverse order. To traverse the opposite diagonal, you need to increment the Y coordinate and decrement the X coordinate (or vice versa), but your code decrements both of them.
It is probably easiest if you fix this by calculating both diagonal sums in the same loop:
for (int count = 0; count < k; count++) {
    diagSum += grid[startX][startY];
    antiDiagSum += grid[endX][startY];
    startX++, startY++, endX--;
}
How to calculate the time complexity of this function step by step?
This function converts an adjacency list to a matrix, manipulates the matrix, and then converts the matrix back to a list.
Graphe *Graphe::grapheInverse( void ){
    Graphe *r = new Graphe( _adjacences.size() );
    std::vector<vector<int> > matrix( _adjacences.size(), vector<int>( _adjacences.size() ) );
    std::vector<vector<int> > liste( matrix.size() );

    for (unsigned i = 0; i < _adjacences.size(); i++)
        for (auto j : *_adjacences[i])
            matrix[i][j] = 1;

    for (int i = 0; i < matrix.size(); i++) {
        for (int j = 0; j < matrix[i].size(); j++) {
            if (matrix[i][j] == 1)
                matrix[i][j] = 0;
            else
                matrix[i][j] = 1;
            if (i == j)
                matrix[i][j] = 0;
        }
    }

    for (int i = 0; i < matrix.size(); i++){
        for (int j = 0; j < matrix[i].size(); j++){
            if (matrix[i][j] == 1){
                liste[i].push_back(j);
            }
        }
    }

    for (int i = 0; i < liste.size(); i++) {
        for (int j = 0; j < liste[i].size(); j++) {
            r->ajouterArcs( i, liste[i][j] );
        }
    }

    return r;
}
Note that all of the following applies to big-O time complexity.
Calculating the time complexity involves looking at how many times you iterate over the data. A single for loop is N, as you touch each element once. A nested for loop (for i in data, for j in data) is N^2, because you touch each element once for every element there is.
A for loop next to a for loop (for i in data do X; for i in data do Y) touches the data N + N times. This is still considered N time complexity, because as N approaches a very large number, 2N doesn't make much difference. The same goes for nested loops: N^2 + N^2 = 2N^2 -> essentially, you ignore any constant multipliers and go by the times you touch N, so 2N^2 becomes N^2.
Applied to your function: each of the four loop blocks is at most a nested pass over the n x n matrix (where n = _adjacences.size()), and the blocks run one after another rather than nesting further, so the overall complexity is O(n^2).
To reiterate, this is specifically for big-O time complexity.
I'm working on a GECODE solver to implement a Matrix Generation problem. I have figured out all the constraints I require except for one:
Given a Matrix[M, N], all column vectors must be pairwise distinct.
This is the code I would like to write:
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        if (i != j)
        {
            notsame(*this, m.col(i), m.col(j));
        }
    }
}
But I can't figure out how to express that with the provided primitive constraints. I know distinct() exists; however, I can't figure out how to operate over columns in a matrix, instead of over elements of the column matrix itself. What would be the best way to express this constraint over matrices?
I have come up with an implementation that seems to work.
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        if (i != j)
        {
            // Every column should not be equal to any other column:
            // compare the two columns element by element
            BoolVarArgs equalities;
            for (int r = 0; r < M; r++)
            {
                equalities << expr(*this, m(i, r) == m(j, r));
            }
            // the conjunction must be false, i.e. at least one element differs
            rel(*this, BOT_AND, equalities, 0);
        }
    }
}
I'm experimenting with 2D arrays in C++, working on a project that makes a 4x4 2D array holding student grades, partially filled from a text file. Only 3 of the 4 columns are filled. I want to fill the last column of each row with the average of the grades in that row's first three columns.
The problem is I can't figure out exactly how to fill the last column.
This is my code for calculating the average:
const int SIZE = 4;
const int ROWS = 4;
const int COLS = 4;
int total = 0;
for (int i = 0; i < ROWS; i++)
{
    total = 0;
    for (int j = 0; j < COLS - 1; j++)
    {
        total += studentGrades[i][j];
        average = total / (COLS - 1);
        studentGrades[0][3] = average;
        studentGrades[1][3] = average;
        studentGrades[2][3] = average;
        studentGrades[3][3] = average;
    }
}
It seems like I'm close because I'm getting reasonable results, but the last column isn't displaying the right values, and I feel like there's a more efficient way to fill the last column than manually inserting into each index.
You are assigning the last computed average to all rows every time. This means that at the end, the last column of all 4 rows will hold the average of row 4. Also consider changing your variables (studentGrades and total) to a floating-point type for more accuracy.
const int SIZE = 4;
const int ROWS = 4;
const int COLS = 4;
for (int i = 0; i < ROWS; i++)
{
    int total = 0;
    for (int j = 0; j < COLS - 1; j++)
        total += studentGrades[i][j];
    studentGrades[i][COLS - 1] = total / (COLS - 1);
}
You could also make use of the standard library:
#include <numeric>
// ...
constexpr int Rows = 4, Cols = 4, NGrades = Cols - 1;
for (int i = 0; i < Rows; i++)
    studentGrades[i][NGrades] =
        std::accumulate(studentGrades[i], studentGrades[i] + NGrades, 0) / NGrades;
As in my first solution, consider using floating point types. To enable float arithmetic change the 0 of std::accumulate to 0.0 or 0.0f.
Here is an explanation of std::accumulate.
The logic is wrong: you can only calculate the average and fill the last column after you have totaled the other columns, and you can only fill one row at a time, instead of trying to do all four rows together. This is the correct loop:
for (int i = 0; i < ROWS; i++)
{
    total = 0;
    for (int j = 0; j < COLS - 1; j++)
    {
        total += studentGrades[i][j];
    }
    average = total / (COLS - 1);
    studentGrades[i][3] = average;
}
It's just a matter of doing things in the right order and at the right time.
Plus you should pay attention to the integer division problem that Yksisarvinen pointed out.
I wrote a simple N-body code using a leapfrog algorithm. Now I am trying to create a close-encounter condition: I want the code to let me know whenever two particles come closer than a certain distance. This is what I came up with; it seems to me that it should work, but it doesn't. It's in C++.
In the code, num is the number of particles and r[num][3] is a global 2D array that holds the 3D position of each particle (x, y, z coordinates).
I am writing the function check_collisions, which is executed in the main function inside a time loop that evolves the system.
What I was trying to do is store the distance between every pair of particles and compare it to some threshold (in this case rad[i]+rad[j], where rad[num] is a global array of the radius of each particle). If the distance between two particles is less than rad[i]+rad[j], I want the variable dummy to increase. Then I want to do some stuff to those particles, leaving the others intact.
The problem is that dummy stays 0 no matter what. I independently checked that two particles in one of my trials are actually close to each other for several timesteps, but the dummy variable stays at zero.
Here is the function:
int check_collisions(int num, double dt)
{
    double rji[3];
    double r2, dis;
    int dummy = 0;
    double rad_sum[num][num];
    double coll_dis[num][num];

    for (int l = 0; l < num; l++)
    {
        for (int m = 0; m < num; m++)
        {
            coll_dis[l][m] = 10000;
            rad_sum[l][m] = 0;
        }
    }

    for (int i = 0; i < num; i++)
    {
        for (int j = i+1; j < num; j++)
        {
            for (int k = 0; k < 3; k++)
            {
                rji[k] = r[j][k] - r[i][k];
            }
            for (int k = 0; k < 3; k++)
            {
                r2 += rji[k] * rji[k];
            }
            dis = sqrt(r2);
            coll_dis[i][j] = dis;
            rad_sum[i][j] = rad[i] + rad[j];
        } // end for j
    } // end for i

    for (int i = 0; i < num; i++)
    {
        for (int j = i+1; j < num; j++)
        {
            if (coll_dis[i][j] <= rad_sum[i][j])
            {
                dummy++;
            }
            if (dummy != 0)
            {
                do {
                    // some stuff involving dt
                } while (coll_dis[i][j] <= rad_sum[i][j]);
            }
        }
    }
}