My code for the Graham scan is not working; it is supposed to compute the perimeter of the convex hull. It reads an input of n points, whose coordinates can have decimals. The algorithm returns a value higher than the actual perimeter.
I am using what I understood from:
http://en.wikipedia.org/wiki/Graham_scan
#include <iostream>
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>
using namespace std;
#define PI 3.14159265
int nodes;
double xmin=10000000, ymin=10000000, totes=0;
struct ppoint
{
    double x, y, angle;
    void anglemake()
    {
        angle=atan2(y-ymin, x-xmin)*180/PI;
        if(angle<0)
        {
            angle=360+angle;
        }
    }
} np;
The point structure, with a function that computes the angle between the point and the point with the lowest y (and then lowest x) coordinate.
vector<ppoint> ch, clist;
bool hp(ppoint i, ppoint j)
{
    return i.angle<j.angle;
}
double cp(ppoint a, ppoint b, ppoint c)
{
    return ((b.x-a.x)*(c.y-a.y))-((b.y-a.y)*(c.x-a.x));
}
The z-cross product function
double dist(ppoint i, ppoint j)
{
    double vd, hd;
    vd=(i.y-j.y)*(i.y-j.y);
    hd=(i.x-j.x)*(i.x-j.x);
    return sqrt(vd+hd);
}
Distance generator
int main()
{
    scanf("%d", &nodes);
    for(int i=0; i<nodes; i++)
    {
        scanf("%lf%lf", &np.x, &np.y);
        if(np.y<ymin || (np.y==ymin && np.x<xmin))
        {
            ymin=np.y;
            xmin=np.x;
        }
        ch.push_back(np);
    }
Gets the points
    for(int i=0; i<nodes; i++)
    {
        ch[i].anglemake();
    }
    sort(ch.begin(), ch.end(), hp);
    clist.push_back(ch[0]);
    clist.push_back(ch[1]);
    ch.push_back(ch[0]);
Sorts and starts Graham Scan
    for(int i=2; i<=nodes; i++)
    {
        while(cp(clist[clist.size()-2], clist[clist.size()-1], ch[i])<0)
        {
            clist.pop_back();
        }
        clist.push_back(ch[i]);
    }
Graham scan
    for(int i=0; i<nodes; i++)
    {
        totes+=dist(clist[i], clist[i+1]);
    }
Gets length of the perimeter
    printf("%.2lf\n", totes);
    return 0;
}
Just out of interest, print out the values of nodes and clist.size() before the distance summing.
At a glance, clist can have nodes+1 items only if pop_back never happens, and if it does happen you have undefined behaviour.
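For example, something as simple as this (just a sketch) placed right before that final loop would show the mismatch:
printf("nodes = %d, clist.size() = %u\n", nodes, (unsigned)clist.size());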
I think the problem is here:
for(int i=0; i<nodes; i++)
{
    totes+=dist(clist[i], clist[i+1]);
}
clist will only contain the points that remain on the hull, not nodes + 1 (the number of points you loaded plus one). Storing that number in the first place is a fault IMHO: it starts as the number of points, then you add one to close the loop, and then you remove points again to make the hull convex. Just use container.size() and everything is clear.
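For illustration, a minimal sketch of the final loop driven by clist.size() (assuming, as in your code, that the starting point was pushed again at the end, so the polygon in clist is already closed):
for(size_t i = 0; i + 1 < clist.size(); i++)
{
    totes += dist(clist[i], clist[i+1]);   // walk consecutive hull points; the last entry repeats the first
}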
One more note: use a checked C++ standard library implementation for debugging; it would have warned you about undefined behaviour such as accessing a vector beyond its range (for example, GCC's libstdc++ has such a debug mode, enabled with -D_GLIBCXX_DEBUG). C++ is a language that allows you to shoot yourself in the foot in too many ways, all in the name of performance. This is nice and well, except when debugging, which is when you want the best diagnostics available.
I have been trying to do a graph search for a problem from HackerRank. Finally, I have come up with:
#include <cstdio>
#include <list>
using namespace std;
void bfs(list<int> adjacencyList[], int start, int countVertices) {
    // initialize distance[]
    int distance[countVertices];
    for(int i=0; i < countVertices; i++) {
        distance[i] = -1;
    }
    list<int>::iterator itr;
    int lev = 0;
    distance[start-1] = lev; // distance for the start vertex is 0
                             // using start - 1 since distance is an array, which is 0-indexed
    list<int> VertexQueue;
    VertexQueue.push_back(start);
    while(!VertexQueue.empty()) {
        int neighbour = VertexQueue.front();
        itr = adjacencyList[neighbour].begin();
        while(itr != adjacencyList[neighbour].end()) {
            int vertexInd = (*itr) - 1;
            if(distance[vertexInd] == -1) { // a distance of -1 implies that the vertex is unexplored
                distance[vertexInd] = (lev + 1) * 6;
                VertexQueue.push_back(*itr);
            }
            itr++;
        }
        VertexQueue.pop_front();
        lev++;
    }
    // print the result
    for(int k=0; k < countVertices; k++) {
        if (k==start-1) continue; // skip the start node
        printf("%d ", distance[k]);
    }
}
int main() {
    int countVertices, countEdges, start, T, v1, v2;
    scanf("%d", &T);
    for(int i=0; i<T; i++) {
        scanf("%d%d", &countVertices, &countEdges);
        list<int> adjacencyList[countVertices];
        // input edges in graph
        for(int j=0; j<countEdges; j++) {
            scanf("%d%d", &v1, &v2);
            adjacencyList[v1].push_back(v2);
            adjacencyList[v2].push_back(v1); // since the graph is undirected
        }
        scanf("%d", &start);
        bfs(adjacencyList, start, countVertices);
        printf("\n");
    }
    return 0;
}
However, this results in a 'Segmentation Fault' and I cannot figure out where I am going wrong.
Also, I have come across segmentation faults a lot of times but have no idea how to debug them. It would be great if someone could give me an idea of how to do that.
scanf("%d%d", &countVertices,&countEdges);
list<int> adjacencyList[countVertices];
The code above appears wrong. If your vertex indices start at 1, either make adjacencyList of size countVertices + 1 or decrease v1 and v2 before putting them in the list.
You can also use a map (or an unordered_map) from vertex to list, which will not segfault.
Also note that VLAs are not part of standard C++, so avoid them even if your compiler supports them as an extension.
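As an illustration of the first suggestion, a minimal sketch (assuming the vertex labels in the input are 1-based) that sizes the adjacency structure at countVertices + 1 and also avoids the non-standard VLA by using a vector:
// needs #include <vector>
vector<list<int>> adjacencyList(countVertices + 1);
for(int j=0; j<countEdges; j++) {
    scanf("%d%d", &v1, &v2);
    adjacencyList[v1].push_back(v2);   // index v1 is now always in range
    adjacencyList[v2].push_back(v1);   // since the graph is undirected
}
scanf("%d", &start);
bfs(adjacencyList.data(), start, countVertices);   // .data() yields the list<int>* the existing bfs expects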
The classic coin change problem is well described here: http://www.algorithmist.com/index.php/Coin_Change
Here I want to not only know how many combinations there are, but also print all of them. I'm using the same DP algorithm as in that link, but instead of recording how many combinations there are in the DP table (DP[i][j] = count), I store the combinations themselves in the table, so I'm using a 3D vector for the DP table.
I tried to improve my implementation by noticing that when looking up the table, only information from the last row is needed, so I don't really need to keep the entire table.
However, my improved DP solution still seems quite slow, so I'm wondering whether there is a problem in my implementation below or whether it can be optimized further. Thanks!
You can run the code directly:
#include <iostream>
#include <stdlib.h>
#include <iomanip>
#include <cmath>
#include <vector>
#include <algorithm>
using namespace std;
int main(int argc, const char * argv[]) {
    int total = 10; //total amount
    //available coin values, always include 0 coin value
    vector<int> values = {0, 5, 2, 1};
    sort(values.begin(), values.end()); //I want smaller coins used first in the result
    vector<vector<vector<int>>> empty(total+1); //just for clearing purpose
    vector<vector<vector<int>>> lastRow(total+1);
    vector<vector<vector<int>>> curRow(total+1);
    for(int i=0; i<values.size(); i++) {
        for(int curSum=0; curSum<=total; curSum++){
            if(curSum==0) {
                //there's one combination using no coins
                curRow[curSum].push_back(vector<int> {});
            }else if(i==0) {
                //zero combination because can't use coin with value zero
            }else if(values[i]>curSum){
                //can't use current coin cause it's too big,
                //so total combination for current sum is the same without using it
                curRow[curSum] = lastRow[curSum];
            }else{
                //not using current coin
                curRow[curSum] = lastRow[curSum];
                vector<vector<int>> useCurCoin = curRow[curSum-values[i]];
                //using current coin
                for(int k=0; k<useCurCoin.size(); k++){
                    useCurCoin[k].push_back(values[i]);
                    curRow[curSum].push_back(useCurCoin[k]);
                }
            }
        }
        lastRow = curRow;
        curRow = empty;
    }
    cout<<"Total number of combinations: "<<lastRow.back().size()<<endl;
    for (int i=0; i<lastRow.back().size(); i++) {
        for (int j=0; j<lastRow.back()[i].size(); j++) {
            if(j!=0)
                cout<<" ";
            cout<<lastRow.back()[i][j];
        }
        cout<<endl;
    }
    return 0;
}
It seems that you copy too many vectors; at the very least, the last else branch can be rewritten as
// not using current coin
curRow[curSum] = lastRow[curSum];
const vector<vector<int>>& useCurCoin = curRow[curSum - values[i]]; // one less copy here
// using current coin
for(int k = 0; k != useCurCoin.size(); k++){
    curRow[curSum].push_back(useCurCoin[k]);
    curRow[curSum].back().push_back(values[i]); // one less copy here too.
}
Even though curRow = empty; is readable, it may cause allocations.
It is better to create a function:
void Clean(vector<vector<vector<int>>>& vecs)
{
    for (auto& v : vecs) {
        v.clear();
    }
}
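For example, at the end of each outer loop iteration (just a sketch of how the helper slots in):
lastRow = curRow;   // as in the original
Clean(curRow);      // empties each inner vector but keeps its allocated capacity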
In this problem, you first have to write two functions:
new_array (char** a, int n, int m): create a two-dimensional matrix of characters of size m*n.
del_array (char** a, int n, int m): delete a two-dimensional matrix of characters of size m*n.
After that, you use the two functions above to perform the following task:
You are given a big image of size M*N and some small images of size m*n. Each image is represented by a matrix of characters of its size. Your task is to find the number of positions at which each small image occurs in the big image.
Input file: image.inp.
The first line of the input file contains two positive integers M and N which are respectively the height and the width of the big image.
Each of lines 2...M+1 consists of N characters (a...z, A...Z) which describe a row of the big image.
After that, there are some small images which you must find in the big image. Each small image is written in the same format as the big image. Finally, when a line with m = 0 and n = 0 appears, you have to end the search.
Output file: image.out.
For each small image in the input file, you must write a number giving the number of positions at which that small image occurs in the big image.
Example (the contents of image.inp and the corresponding image.out):
image.inp:
4 4
Aaaa
Aaaa
Aaab
Aaaa
2 2
Aa
Aa
2 2
aa
ab
0 0
image.out:
3
1
I did this:
file header: image.h:
#ifndef _IMAGE_H_
#define _IMAGE_H_
using namespace std;
void new_array(char**, int, int);
void del_array(char**, int);
bool small_image(char**, char**, int, int, int, int);
int count_small_image(char**, char**, int, int, int, int);
#endif
file image.cpp:
#include <iostream>
#include "image.h"
#include <fstream>
using namespace std;
void new_array(char** a, int n, int m)
{
    ifstream inStream;
    inStream.open("image.inp");
    a=new char* [m];
    for(int i=0; i<m; i++)
    {
        a[i]=new char[n];
        for(int j=0; j<n; j++)
            inStream>>a[i][j];
    }
}
void del_array(char** a, int m)
{
    for(int i=0; i<m; i++)
    {
        delete [] a[i];
    }
    delete [] a;
}
bool small_image(char** a, char** b, int i, int j, int p, int q)
{
    for(int u=i; u<i+p; u++)
    {
        for(int v=j; v<j+q; v++)
        {
            if(a[u][v]!=b[u-i][v-j]) return false;
        }
    }
    return true;
}
int count_small_image(char** a, char** b, int m, int n, int p, int q)
{
    int COUNT=0;
    for(int i=0; i<m; i++)
        for(int j=0; j<n; j++)
        {
            if(a[i][j]==b[0][0])
            {
                if((m-i+1)>=p && (n-j+1)>=q)
                {
                    if(small_image(a,b,i,j,p,q)==false) break;
                    else COUNT++;
                }
            }
        }
    return COUNT;
}
file main_count_small_image.cpp:
#include <iostream>
#include "image.h"
#include <fstream>
using namespace std;
int main()
{
    ifstream inStream;
    inStream.open("image.inp");
    ofstream outStream;
    outStream.open("image.out");
    int m, n, p, q;
    char** a;
    char** b;
    inStream>>n>>m;
    new_array(a,n,m);
    inStream>>q>>p;
    new_array(b,q,p);
    int c;
    c=count_small_image(a,b,m,n,p,q);
    outStream<<c;
    del_array(a,m);
    del_array(b,p);
    return 0;
    getchar();
}
But, I get:
[error]: has stopped working ...
This is a simple bit of code best stepped through with a debugger. The OP will learn a lot more tracing the execution flow than they will from being handed a canned answer.
Brute force works, but a previous question has an answer suggesting better approaches. See How to detect occurrencies of a small image in a larger image? .
The new_array function is implemented incorrectly. Its inability to return the built array has been covered already, so I'm skipping that. Nowhere in the specification does it say that new_array should read the data from the file. Further, reopening the file means the new stream starts at the beginning and would have to reread m and n before getting to the image data; this is not taken into account.
The lack of descriptive variable names makes this program difficult to read and is a disincentive to assisting the OP, as is the lack of rational indentation and brace use. By its appearance, the program seems to ask the reader not to render assistance.
In count_small_image, given the call
count_small_image(a,b,m,n,p,q);
the two for loops set small_image up for out-of-range array access. I believe that is what this check is trying to prevent:
if((m-i+1)>=p && (n-j+1)>=q)
Maybe it does, but it's a convoluted and clumsy way to do it. Remember: code not written has no bugs. Instead, try something along the lines of
for(int m = 0; m < largeMaxM - smallMaxM; m++)
{
    for(int n = 0; n < largeMaxN - smallMaxN; n++)
where smallMaxM and smallMaxN are the m and n bounds of the small image and largeMaxM and largeMaxN are the m and n bounds of the large image.
small_image is also overly complicated. Reworking it so that it iterates over the small image eliminates the cruft, and descriptive variable names make the function much more readable.
bool small_image(char** a, char** b, int offsetM, int offsetN, int maxM, int maxN)
{
    for(int m = 0; m < maxM; m++)
    {
        for(int n = 0; n < maxN; n++)
        {
            if(a[m+offsetM][n+offsetN]!=b[m][n]) return false;
        }
    }
    return true;
}
I'm operating on a tablet without a compiler, so forgive me if I'm off by one.
You've been told wrong (or you've misunderstood what you were told). Rewrite your code like this
char** new_array(int n, int m)
{
    char** a;
    ...
    return a;
}
int main()
{
    ...
    char** a = new_array(n, m);
    etc.
You should read up on how functions can return values (including pointers), and also on how pointers can be used to implement arrays.
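To make that concrete, here is a minimal sketch of an allocation-only new_array in the style this answer suggests (the file reading stays outside it, as the other answer notes); this is an illustration, not the full program:
char** new_array(int n, int m)          // m rows of n characters, no file I/O here
{
    char** a = new char*[m];
    for(int i = 0; i < m; i++)
        a[i] = new char[n];
    return a;
}
// possible usage in main, reading through the stream that is already open there:
char** a = new_array(n, m);
for(int i = 0; i < m; i++)
    for(int j = 0; j < n; j++)
        inStream >> a[i][j];
// use a, then release it with the existing del_array(a, m);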
Input: a set of points
Output: the perimeter of the convex hull made from these points
I don't know why, but I'm still getting a bad perimeter on some inputs (I don't know which ones).
Can you please tell me if there is something wrong in my algorithm (or its implementation)?
#include <iostream>
#include <vector>
#include <algorithm>
#include <cmath>
#include <iomanip>
using namespace std;
struct Point{
    int x;
    int y;
    bool operator<(const Point &p) const
    {
        return (x<p.x || (x==p.x && y<p.y));
    }
};
long long cross(Point A, Point B, Point C)
{
    return (B.x-A.x)*(C.y-A.y)-(B.y-A.y)*(C.x-A.x);
}
vector<Point> ConvexHull(vector<Point> P) //Andrew's monotone chain
{
    vector<Point> H; //hull
    H.resize(2*P.size());
    int k=0;
    if(P.size()<3) return H;
    sort(P.begin(), P.end());
    //lower
    for(int i=0; i<P.size(); i++)
    {
        while(k>=2 && cross(H[k-2],H[k-1],P[i])<=0)
            k--;
        H[k]=P[i];
        k++;
    }
    int t=k+1;
    //upper
    for(int i=P.size()-2; i>=0; i--)
    {
        while(k>=t && cross(H[k-2],H[k-1],P[i])<=0)
            k--;
        H[k]=P[i];
        k++;
    }
    H.resize(k);
    return H;
}
double perimeter(vector<Point> P)
{
    double r=0;
    for(int i=1; i<P.size(); i++)
        r+=sqrt(pow(P[i].x-P[i-1].x,2)+pow(P[i].y-P[i-1].y,2));
    return r;
}
int main(){
    int N;
    cin>>N;
    vector<Point> P;
    P.resize(N);
    for(int i=0; i<N; i++)
        cin>>P[i].x>>P[i].y;
    vector<Point> H;
    H=ConvexHull(P);
    cout<<setprecision(9)<<perimeter(H)<<endl;
    //system("pause");
    return 0;
}
Assuming the algorithm is correct, I imagine you are getting an integer overflow: although cross returns long long, the subtractions and multiplications inside it are done in (typically 32-bit) int, and only the final result is widened.
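If that is the case, a minimal sketch of a guard (assuming the individual coordinates themselves fit in int) is to widen the operands before multiplying:
long long cross(Point A, Point B, Point C)
{
    long long bx = (long long)B.x - A.x;   // widen before subtracting and multiplying
    long long by = (long long)B.y - A.y;
    long long cx = (long long)C.x - A.x;
    long long cy = (long long)C.y - A.y;
    return bx*cy - by*cx;                  // products are now computed in long long
}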
Shouldn't you add the code below after the for loop in the perimeter function?
r += sqrt(pow(P[P.size() - 1].x-P[0].x,2)+pow(P[P.size() - 1].y-P[0].y,2));
You want to add the distance between the first and the last point in the convex hull.
I have an algorithm that picks out and sums up specific elements of a two-dimensional array, which I implemented with the following nested loop:
#include <iostream>
#include <vector>
#include <algorithm>
#include <iterator>
#include <cmath>
#include <cstdlib>   // rand, srand
#include <ctime>     // time
using namespace std;
int main(int argc, char* argv[])
{
    double data[2000][200];
    double result[200];
    int len[200];
    vector< vector<int> > index;
    srand(time(NULL));
    // initialize data here
    for (int i=0; i<2000; i++)
        for (int j=0; j<200; j++)
            data[i][j] = rand();
    // each element of index holds some indices of elements in data; the elements of index may have different lengths
    // len[i] tells the size of the vector at index[i]
    for (int i=0; i<200; i++)
    {
        vector<int> c;
        len[i] = (int)(rand()%100 + 1);
        c.reserve(len[i]);
        for (int j=0; j<len[i]; j++)
        {
            int coord = (int)(rand()%(200*2000));
            c.push_back(coord);
        }
        index.push_back(c);
    }
    for (int i=0; i<200; i++)
    {
        double acc=0.0;
        for (int j=0; j<index[i].size(); j++)
            acc += *(&data[0][0] + (int)(index[i][j]));
        result[i] = acc;
    }
    return 0;
}
Since this algorithm will be applied to a big array, the nested loop might take quite a long time to execute. I am wondering whether an STL algorithm would help here, but the STL is still rather abstract to me and I don't know how to use it inside a nested loop. Any suggestion or idea is more than welcome.
Following other posts and information I found online, I am trying to use for_each to solve the issue:
double sum=0.0;
void sumElements(const int &n)
{
    sum += *(&data[0][0] + n);
}
void addVector(const vector<int>& coords)
{
    for_each( coords.begin(), coords.end(), sumElements );
}
for_each( index.begin(), index.end(), addVector );
But this code has two issues. First, it doesn't compile: void sumElements(const int &n) produces many error messages. Second, even if it worked, it wouldn't store the results in the right place. With the outer for_each, my intention is to enumerate the entries of index, compute the corresponding sum for each, and store that sum in the corresponding element of the result array.
First off, STL is not going to give you magic performance benefits.
There already is std::accumulate, which is easier than building your own. Probably not faster, though. Similarly, there's std::generate_n, which calls a generator (such as &rand) N times.
You fully populate c before you call index.push_back(c);. It may be cheaper to push an empty vector and then work through std::vector<int>& c = index.back();.
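For illustration, a minimal sketch of the summing loop written with std::accumulate, assuming C++11 and the same data/index/result variables as in the question (whether it is actually faster than the hand-written loop would need measuring):
#include <numeric>   // std::accumulate
for (size_t i = 0; i < index.size(); ++i)
{
    result[i] = std::accumulate(index[i].begin(), index[i].end(), 0.0,
                                [&](double acc, int coord)
                                { return acc + *(&data[0][0] + coord); });
}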