For my application, I need to use a hash map, so I have written a test program in which I store some instances of a base class in a boost::unordered_map. But I want to reach the instances by calling special functions which return a derived class of the base, and I use those functions' parameters as the hash key of the unordered_map. If no instance is found for certain parameters, then one is created and stored in the map. The purpose of the program may not be clear, but here is the code.
#include <boost/unordered_map.hpp>
#include <iostream>
using namespace std;
using namespace boost;
typedef unsigned char BYT;
typedef unsigned long long ULL;
class BaseClass
{
public:
int sign;
size_t HASHCODE;
BaseClass(){}
};
class ClassA : public BaseClass
{
public:
int AParam1;
int AParam2;
ClassA(int s1, int s2) : AParam1(s1), AParam2(s2)
{
sign = AParam1;
}
};
struct HashKey
{
ULL * hasharray;
size_t hashNum;
size_t HASHCODE;
HashKey(ULL * ULLarray, size_t Hashnum) : hasharray(ULLarray), hashNum(Hashnum), HASHCODE(0)
{ }
bool operator == (const HashKey & hk ) const
{
bool deg = (hashNum == hk.hashNum);
if (deg)
{
for (int i = 0; i< hashNum;i++)
if(hasharray[i] != hk.hasharray[i]) return false;
}
return deg;
}
};
struct ihash : std::unary_function<HashKey, std::size_t>
{
std::size_t operator()(HashKey const & x) const
{
std::size_t seed = 0;
if (x.hashNum == 1)
seed = x.hasharray[0];
else
{
int amount = x.hashNum * 8;
const std::size_t fnv_prime = 16777619u;
BYT * byt = (BYT*)x.hasharray;
for (int i = 0; i< amount;i++)
{
seed ^= byt[0];
seed *= fnv_prime;
}
}
return seed;
}
};
typedef std::pair<HashKey,BaseClass*> HashPair;
unordered_map<HashKey,BaseClass*,ihash> UMAP;
typedef unordered_map<HashKey,BaseClass*,ihash>::iterator iter;
BaseClass * & FindClass(ULL* byt, int Num, size_t & HCode)
{
HashKey hk(byt,Num);
HashPair hp(hk,0);
std::pair<iter,bool> xx = UMAP.insert(hp);
// if (xx.second) UMAP.rehash((UMAP.size() + 1) / UMAP.max_load_factor() + 1);
if (!xx.first->second) HCode = UMAP.hash_function()(hk);
return xx.first->second;
}
template <typename T, class A,class B>
T* GetClass(size_t& hashcode ,A a, B b)
{
ULL byt[3] = {a,b,hashcode};
BaseClass *& cls = FindClass(byt, 3, hashcode);
if(! cls){ cls = new T(a,b); cls->HASHCODE = hashcode;}
return static_cast<T*>(cls);
}
ClassA * findA(int Period1, int Period2)
{
size_t classID = 100;
return GetClass<ClassA>(classID,Period1,Period2);
}
int main(int argc, char* argv[])
{
int limit = 1000;
int modnum = 40;
int result = 0;
for(int i = 0 ; i < limit; i++ )
{
result += findA( rand() % modnum ,4)->sign ;
}
cout << UMAP.size() << "," << UMAP.bucket_count() << "," << result << endl;
int x = 0;
for(iter it = UMAP.begin(); it != UMAP.end(); it++)
{
cout << ++x << "," << it->second->HASHCODE << "," << it->second->sign << endl ;
delete it->second;
}
return 0;
}
The problem is, I expect the size of UMAP to be equal to modnum; however, it is always greater than modnum, which means there is more than one instance with the same parameters and HASHCODE.
What is the solution to my problem? Please help.
Thanks
Here are a couple of design problems:
struct HashKey
{
ULL * hasharray;
...
Your key type stores a pointer to some array. But this pointer is initialized with the address of a local object:
BaseClass * & FindClass(ULL* byt, int Num, size_t & HCode)
{
HashKey hk(byt,Num); // <-- !!!
HashPair hp(hk,0);
std::pair<iter,bool> xx = UMAP.insert(hp);
if (!xx.first->second) HCode = UMAP.hash_function()(hk);
return xx.first->second;
}
template <typename T, class A,class B>
T* GetClass(size_t& hashcode ,A a, B b)
{
ULL byt[3] = {a,b,hashcode}; // <-- !!!
BaseClass *& cls = FindClass(byt, 3, hashcode);
if(! cls){ cls = new T(a,b); cls->HASHCODE = hashcode;}
return static_cast<T*>(cls);
}
This makes the map store a HashKey object with a dangling pointer. Also you are returning a reference to a member of a function local object called xx in FindClass. The use of this reference invokes undefined behaviour.
Consider renaming the map's key type. The hash code itself shouldn't be a key. And as your operator== for HashKey suggests, you don't want the actual key to be the hash code but the sequence of integers of variable length. Also, consider storing the sequence inside of the key type instead of a pointer, for example, as a vector. In addition, avoid returning references to function local objects.
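For illustration, here is a minimal sketch of what that could look like, keeping the names and typedefs (ULL, BYT) from the question. The hash is still FNV-style, but note it now mixes in every byte rather than only byt[0]:
#include <vector>

struct HashKey
{
    std::vector<ULL> hasharray;   // the key owns its data, so nothing can dangle

    HashKey(const ULL* data, size_t num) : hasharray(data, data + num) {}

    bool operator==(const HashKey& hk) const
    {
        return hasharray == hk.hasharray;   // compares length and every element
    }
};

struct ihash
{
    std::size_t operator()(const HashKey& x) const
    {
        const std::size_t fnv_prime = 16777619u;
        std::size_t seed = 0;
        const BYT* byt = reinterpret_cast<const BYT*>(x.hasharray.data());
        for (std::size_t i = 0; i < x.hasharray.size() * sizeof(ULL); ++i)
        {
            seed ^= byt[i];
            seed *= fnv_prime;
        }
        return seed;
    }
};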
Using unordered_map does not guarantee that you do not get hash collisions, which is what you describe here:
"there are more than one instance that has the same parameters and HASHCODE"
You can tune your hashing algorithm to minimize this, but in the (inevitable) collision case, the hash container extends the list of objects in the bucket corresponding to that hashcode. Equality comparison is then used to resolve the collision to a specific matching object. This may be where your problem lies - perhaps your operator== does not properly disambiguate similar but not identical objects.
You cannot expect one object per bucket, or the container would grow unbounded in large collection size cases.
By the way, if you are using a newer compiler, you may find it supports std::unordered_map, so you can use that (the standard library version) instead of the Boost version.
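For example, only the include and the container's namespace change (assuming a C++11 compiler; the same HashKey and ihash types can be reused):
#include <unordered_map>

std::unordered_map<HashKey, BaseClass*, ihash> UMAP;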
I have an algorithm which generates combinations from entries of a container and I want to find the combination which minimizes a cost function:
struct Vec { double x; double y; };
double cost( Vec a, Vec b ) {
double dx = a.x - b.x;
double dy = a.y - b.y;
return dx*dx + dy*dy;
}
pair<Vec,Vec> get_pair_with_minimum_cost ( vector<Vec> inp, double (*cost_fun)(Vec,Vec) )
{
pair<Vec,Vec> result;
double min_cost = FLT_MAX;
size_t sz = inp.size();
for(size_t i=0; i<sz; i++) {
for (size_t j=i; j<sz; j++) {
double cost = cost_fun(inp[i], inp[j]);
if (cost < min_cost) {
min_cost = cost;
result = make_pair(inp[i], inp[j]);
}
}
}
return result;
}
vector <Vec> inp = {....};
auto best_pair = get_pair_with_minimum_cost ( inp, cost );
Unfortunately, get_pair_with_minimum_cost() does 2 jobs:
generates the combinations
gets the minimum element
I could break them in two functions, like:
the generator:
template <class Func>
void generate_all_combinations_of( vector<Vec> inp, Func fun )
{
size_t sz = inp.size();
for(size_t i=0; i<sz; i++) {
for (size_t j=i; j<sz; j++) {
fun(make_pair(inp[i], inp[j]));
}
}
}
and then use std::min_element on the output of the generator, i.e.
vector<Vec> inp = {....};
vector<pair<Vec,Vec>> all_combinations;
generate_all_combinations_of(inp, [&](pair<Vec,Vec> o){ all_combinations.push_back(o); } );
auto best_pair = *min_element(all_combinations.begin(), all_combinations.end(),
    [](const pair<Vec,Vec>& a, const pair<Vec,Vec>& b){ return cost(a.first, a.second) < cost(b.first, b.second); });
but I do not want to pay the cost of creating an extra container with temporary data (all_combinations).
Questions:
Can I rewrite generate_all_combinations_of() such that it uses yield or the new std::ranges, in such a way that I can combine it with STL algorithms such as find_if, any_of, min_element, or even adjacent_pair?
The great thing about this 'generator' function is that it is easy to read, so I would like to keep it as readable as possible.
NB: some of these algorithms need to break the loop.
What is the official name for combining entries this way?
Is this the combination pattern used in 'bubble sort'?
Here's how I would write the function in C++20, using range views and algorithms so that there isn't a separate container that stores the intermediate results:
double get_minimum_cost(auto const & inp)
{
namespace rs = std::ranges;
namespace rv = std::ranges::views;
// for each i compute the minimum cost for all j's
auto min_cost_from_i = [&](auto i)
{
auto costs_from_i = rv::iota(i + 1, inp.size())
| rv::transform([&](auto j)
{
return cost(inp[i], inp[j]);
});
return *rs::min_element(costs_from_i);
};
// compute min costs for all i's
auto all_costs = rv::iota(0u, inp.size())
| rv::transform(min_cost_from_i);
return *rs::min_element(all_costs);
}
Here's a demo.
Note that the solution doesn't compare the cost of an element with itself, since the cost function example you showed would give a trivial result of 0 for that case. For a cost function that doesn't return 0 there, you can adapt the solution to generate a range from i instead of i + 1. Also, if the cost function is not symmetric, make that range start from 0 instead of i.
Also, this function has UB if you call it with an empty range, so you should check for that as well.
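Either variation below would replace the iota line inside min_cost_from_i (a sketch of the two adaptations just described; use one or the other):
// Variation 1: include j == i, for a cost where cost(x, x) is meaningful
auto costs_from_i = rv::iota(i, inp.size())
                  | rv::transform([&](auto j) { return cost(inp[i], inp[j]); });

// Variation 2: non-symmetric cost, so start from 0 and consider every j
auto costs_from_i = rv::iota(0u, inp.size())
                  | rv::transform([&](auto j) { return cost(inp[i], inp[j]); });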
There is http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2168r0.pdf, whose development I would follow.
If you are using MSVC and can use its experimental/generator header (I am not sure if other compilers support it yet), you can use:
#include <experimental/generator>
#include <algorithm>
#include <cstddef>
#include <iostream>

std::experimental::generator<std::size_t> Generate(std::size_t const end){
    for(std::size_t i = 0; i < end; ++i)
        co_yield i;
}

int main(){
    auto vals = Generate(22);
    auto const result = *std::min_element(std::begin(vals), std::end(vals));
    std::cout << '\n' << " " << result;
}
Here you would need to modify the Generate function to yield a pair, or to yield the cost directly (my recommendation would be to keep things simple and yield the cost). Then use vals to find the minimum cost.
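A minimal sketch of that idea, assuming MSVC's experimental/generator and the Vec/cost definitions from the question (the function names here are just for the sketch). The generator yields each pair's cost and a plain loop tracks the minimum, since the generator is single-pass:
#include <experimental/generator>
#include <algorithm>
#include <limits>
#include <vector>

// Yields the cost of every unordered pair (i, j) with i < j.
std::experimental::generator<double> PairCosts(std::vector<Vec> const& inp)
{
    for (std::size_t i = 0; i < inp.size(); ++i)
        for (std::size_t j = i + 1; j < inp.size(); ++j)
            co_yield cost(inp[i], inp[j]);
}

double GetMinimumCost(std::vector<Vec> const& inp)
{
    double min_cost = std::numeric_limits<double>::max();
    for (double c : PairCosts(inp))          // single pass over the generated costs
        min_cost = std::min(min_cost, c);
    return min_cost;
}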
Ranges
Based on what I can find about the Ranges proposal, it works on the basis of std::begin and std::end, both of which experimental::generator provides, so it should probably work.
Here's how I would write the function in C++17, using the min_element algorithm, with no need for a separate container that stores the intermediate results. I know you were looking for a C++20 solution, but this code does work fine under C++20, and perhaps it gives you some ideas about adapting functions to ranges when the range isn't just one of the ranges supplied by C++20's ranges library.
// TwoContainerRanger is an iterable container where the iterator consists
// of two indices that match the given filter, and whose iterators, when
// dereferenced, return the result of calling func with the
// elements of the two containers, at those two indices.
// filter can be nullptr.
template <typename Container1, typename Container2, typename Func>
struct TwoContainerRanger {
Container1 &c1;
Container2 &c2;
const Func &fun;
bool (*restriction)(size_t i1, size_t i2);
TwoContainerRanger(Container1 &container1, Container2 &container2,
bool (*filter)(size_t i1, size_t i2), const Func &func)
: c1(container1), c2(container2), fun(func), restriction(filter) {}
struct Iterator {
const TwoContainerRanger *gen;
size_t index1, index2;
auto &operator++() {
do {
if (++index1 == gen->c1.size()) {
if (++index2 == gen->c2.size()) {
// we leave both indices pointing to the end
// to indicate that we have reached the end.
return *this;
} else {
index1 = 0u;
}
}
} while (gen->restriction && gen->restriction(index1, index2) == false);
return *this;
}
bool operator==(const Iterator &other) const = default;
bool operator!=(const Iterator &other) const = default;
auto operator*() const {
return gen->fun(gen->c1[index1], gen->c2[index2]);
}
};
Iterator begin() {
Iterator b{this, size_t(0) - 1, 0u};
return ++b; // automatically applies the restriction
}
Iterator end() { return Iterator{this, c1.size(), c2.size()}; }
};
Calling it looks like this:
int main() {
std::array<Vec, 5> ar = {Vec{0, 0}, Vec{1, 1}, Vec{3, 3}, Vec{7, 7},
Vec{3.1, 3.1}};
TwoContainerRanger tcr{ar, ar, Triangle, cost};
auto result = std::min_element(tcr.begin(), tcr.end());
std::cout << "Min was at (" << result.index1 << "," << result.index2
<< "); cost was " << *result << '\n';
}
Here's a demo.
I'm trying to make a light-weight layer on top of a contiguous array of arbitrary structs (let's call the struct DataItem), which will handle common operations like file I/O, rendering on screen/GUI (like an Excel table), searching and sorting by different properties, etc.
But I want my class Table and the user-defined struct/class DataItem to be completely independent of each other (i.e. both can compile without knowing each other's header file). I think it cannot be like template<class T> class Table{ std::vector<T> data; }; because then the user would be obligated to implement functionality like DataItem::toString(int icolumn), and I don't want to put that constraint on the DataItem struct.
My current implementation relies on pointer arithmetic, a switch, and can handle only a few types of data members (bool, int, float, double). I wonder if, e.g. using templates, this could be improved (to make it more generic, safer, etc.) without considerably increasing complexity and performance cost.
I want to use it like this:
#include "Table.h"
#include "GUI.h"
#include "Vec3d.h"
// example of user defined DataItem struct
struct TestStruct{
int inum = 115;
double dnum = 11.1154546;
double fvoid= 0.0;
float fnum = 11.115;
Vec3d dvec = (Vec3d){ 1.1545, 2.166, 3.1545};
};
int main(){
// ==== Initialize test data
Table* tab1 = new Table();
tab1->n = 120;
TestStruct* tab_data = new TestStruct[tab1->n];
for(int i=0; i<tab1->n; i++){
tab_data[i].inum = i;
tab_data[i].fnum = i*0.1;
tab_data[i].dnum = i*0.01;
}
// ==== Bind selected properties/members of TestStruct as columns in the table
tab1->bind(tab_data, sizeof(*tab_data) );
// This is actually quite complicated =>
// I would be happy if it could be automatized by some template magic ;-)
tab1->addColum( &(tab_data->inum), 1, DataType::Int );
tab1->addColum( &(tab_data->fnum), 1, DataType::Float );
tab1->addColum( &(tab_data->dnum), 1, DataType::Double );
tab1->addColum( &(tab_data->dvec), 3, DataType::Double );
// ==== Visualize the table Table in GUI
gui.addPanel( new TableView( tab1, "tab1", 150.0, 250.0, 0, 0, 5, 3 ) );
gui.run();
}
My current implementation looks like this:
enum class DataType{ Bool, Int, Float, Double, String };
struct Atribute{
int offset; // offset of data member from address of struct instance [bytes]
int nsub; // number of sub units. e.g. 3 for Vec3
DataType type; // type for conversion
Atribute() = default;
Atribute(int offset_,int nsub_,DataType type_):offset(offset_),nsub(nsub_),type(type_){};
};
class Table{ public:
int n; // number of items/lines in table
int itemsize = 0; // number of bytes per item
char* data = 0; // pointer to data buffer with structs; type is erased to make it generic
std::unordered_map<std::string,int> name2column;
std::vector <Atribute> columns;
void bind(void* data_, int itemsize_){
data=(char*)data_;
itemsize=itemsize_;
}
int addColum(void* ptr, int nsub, DataType type){
// determine offset of address of given data-member with respect to address of enclosing struct
int offset = ((char*)ptr)-((char*)data);
columns.push_back( Atribute( offset, nsub, type ) );
return columns.size()-1;
}
char* toStr(int i, int j, char* s){
const Atribute& kind = columns[j];
void* off = data+itemsize*i+kind.offset; // address of j-th member of i-th instance in data array
// I don't like this switch,
// but still it seems simpler and more efficient than alternative solutions using
// templates/lambda function or function pointers
switch(kind.type){
case DataType::Bool :{ bool* arr=(bool *)off; for(int i=0; i<kind.nsub; i++){ s+=sprintf(s,"%c ", arr[i]?'T':'F' ); }} break;
case DataType::Int :{ int* arr=(int *)off; for(int i=0; i<kind.nsub; i++){ s+=sprintf(s,"%i ", arr[i] ); }} break;
case DataType::Float :{ float* arr=(float *)off; for(int i=0; i<kind.nsub; i++){ s+=sprintf(s,"%g ", arr[i] ); }} break;
case DataType::Double :{ double* arr=(double*)off; for(int i=0; i<kind.nsub; i++){ s+=sprintf(s,"%g ", arr[i] ); }} break;
case DataType::String :{ char* arr=(char *)off; for(int i=0; i<kind.nsub; i++){ s+=sprintf(s,"%s ", arr[i] ); }} break;
}
return s;
}
};
// .... Omitted most of TableView GUI ....
void TableView::render(){
Draw ::setRGB( textColor );
char stmp[1024];
for(int i=i0; i<imax;i++){
int ch0 = 0;
for(int j=j0; j<jmax;j++){
int nch = table->toStr(i,j,stmp)-stmp; // HERE!!! I call Table::toStr()
Draw2D::drawText( stmp, nch, {xmin+ch0*fontSizeDef, ymax-(i-i0+1)*fontSizeDef*2}, 0.0, GUI_fontTex, fontSizeDef );
ch0+=nchs[j];
}
}
}
One way of solving this type of problem is by providing a "traits" class which tells one class how to deal with another class without having to modify the second class. This pattern is used extensively in the standard library.
Your code could be written as:
#include <iostream>
#include <string>
#include <vector>
#include <array>
template <typename T>
struct TableTraits;
template <typename T>
class Table
{
public:
void setData( const std::vector<T>& value )
{
data = value;
}
std::string toString( size_t row, size_t column )
{
return TableTraits<T>::toString( data[ row ], column );
}
void print()
{
for ( size_t row = 0; row < data.size(); row++ )
{
for ( size_t column = 0; column < TableTraits<T>::columns; column++ )
{
std::cout << toString( row, column ) << ", ";
}
std::cout << "\n";
}
}
private:
std::vector<T> data;
};
struct TestStruct
{
int inum = 115;
double dnum = 11.1154546;
double fvoid = 0.0;
float fnum = 11.115f;
std::array<double, 3> dvec = { 1.1545, 2.166, 3.1545 };
};
template <typename T>
std::string stringConvert( const T& value )
{
return std::to_string( value );
}
template <typename T, size_t N>
std::string stringConvert( const std::array<T, N>& value )
{
std::string result;
for ( auto& v : value )
{
result += stringConvert( v ) + "; ";
}
return result;
}
template <>
struct TableTraits<TestStruct>
{
static const size_t columns = 5;
static std::string toString( const TestStruct& row, size_t column )
{
switch ( column )
{
case 0:
return stringConvert( row.inum );
case 1:
return stringConvert( row.dnum );
case 2:
return stringConvert( row.fvoid );
case 3:
return stringConvert( row.fnum );
case 4:
return stringConvert( row.dvec );
default:
throw std::invalid_argument( "column out of range" );
}
}
};
int main()
{
std::vector<TestStruct> data( 10 );
Table<TestStruct> table;
table.setData( data );
table.print();
}
The exact details of the traits class could be different if this example doesn't exactly meet your needs.
You might also find it useful to have the traits methods and constants non-static so that you can pass a traits object into your table to allow customisation per table instance.
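A sketch of that variation, building on the Table above (the traits' toString becomes a non-static member and the traits object is stored per table instance):
template <typename T>
class Table
{
public:
    explicit Table( TableTraits<T> t ) : traits( t ) {}

    std::string toString( size_t row, size_t column )
    {
        return traits.toString( data[ row ], column );   // member call instead of static
    }

    // ... setData, print, etc. as before ...

private:
    TableTraits<T> traits;   // per-instance traits object
    std::vector<T> data;
};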
You might also want to allow use of a custom traits class in your table, something like this:
template <typename T, typename Traits = TableTraits<T>>
class Table
{
...
std::string toString( size_t row, size_t column )
{
return Traits::toString( data[ row ], column );
}
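Usage would then look something like this (MyTraits is a hypothetical user-supplied class exposing the same columns constant and toString function as TableTraits<TestStruct>):
Table<TestStruct, MyTraits> customTable;   // uses MyTraits for columns/toString
Table<TestStruct> defaultTable;            // falls back to TableTraits<TestStruct>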
It looks to me like you are trying to use dynamic (run-time) polymorphism with C-like structures and C-like polymorphism in C++. Templates are useful for static polymorphism. The proper direction is to use OOP (especially class polymorphism), defining the concepts as classes: table, cell, column, row, cell-value, etc.; a minimal sketch follows the links below. As examples, you can check on GitHub:
OpenXLSX
Spreadsheets-Lower
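For a rough idea of the shape such a design can take (a sketch only, not code from those projects):
#include <memory>
#include <string>
#include <vector>

struct Cell                                    // cell-value interface
{
    virtual ~Cell() = default;
    virtual std::string toString() const = 0;  // each cell knows how to render itself
};

struct IntCell : Cell
{
    int value = 0;
    std::string toString() const override { return std::to_string(value); }
};

struct Row
{
    std::vector<std::unique_ptr<Cell>> cells;
};

struct Table
{
    std::vector<Row> rows;
};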
Hey, I have a table of teams with their names and the points they have, and I'm trying to figure out how to display the last 3 teams, the ones with the least amount of points in the table.
My code displays all the teams, and I want it to display only the last 3 in the table, but I don't know what way to go about it.
These are my accessors:
string GetName()
int GetPoints()
int lowest = 1000;
for (int i = 0; i < numTeams; i++)
{
if (league[i].GetPoints() < lowest)
{
lowest = league[i].GetPoints();
}
}
for (int i = 0; i < numTeams; i++)
{
if (league[i].GetPoints() == lowest)
{
cout << "\tThe lowest goals against is: " << league[i].GetName() << endl;
}
}
Actually, you don't need the variable lowest if you sort the data before printing.
#include <algorithm>
// Sort using a Lambda expression.
std::sort(std::begin(league), std::end(league), [](const League &a, const League &b) {
return a.GetPoints() < b.GetPoints();
});
int last = 3;
for (int i = 0; i < last; i++)
{
cout << "\tThe lowest goals against is: " << league[i].GetName() << endl;
}
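If you would rather not reorder the whole container, std::partial_sort can order just the first three entries. A sketch, assuming league is a plain array of numTeams elements and League is the element type, as in the lambda above:
std::partial_sort(league, league + 3, league + numTeams,
                  [](const League &a, const League &b) { return a.GetPoints() < b.GetPoints(); });
// league[0], league[1], league[2] now hold the three teams with the fewest points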
You could probably start by sorting your array:
#include <algorithm>
#include <array>

std::array<int, 20> foo;   // give the array a size, e.g. one entry per team
std::sort(foo.begin(), foo.end());
and then iterate from your last element back to your last - 3 (you can use reverse iterators):
for (std::vector<int>::reverse_iterator it = v.rbegin(); it != v.rbegin() + 3; it++) {
    // Do something
}
or by using auto:
for (auto it = v.rbegin(); it != v.rbegin() + 3; ++it) {
    // Do something
}
In my example I've created a test class (TestTeam) to implement several important methods for the objects in your task.
I use std::sort to sort the container of objects; by default std::sort compares objects with the less-than (<) operation, so I have overridden operator < for the TestTeam object:
bool operator < ( const TestTeam& r) const
{
return GetPoints() < r.GetPoints();
}
We could also pass another compare function or a lambda as the third parameter, as shown in the other answers:
std::sort(VecTeam.begin(), VecTeam.end(), [](const TestTeam& l, const TestTeam& r)
{
return l.GetPoints() < r.GetPoints();
});
And an example where we use a global function to compare:
bool CompareTestTeamLess(const TestTeam& l, const TestTeam& r)
{
return l.GetPoints() < r.GetPoints();
}
//...
// some code
//...
// In main() we use global method to sort
std::sort(VecTeam.begin(), VecTeam.end(), ::CompareTestTeamLess);
You can try my code with a vector as the container:
#include <iostream>
#include <algorithm>
#include <vector>
#include <string>
#include <cstdint>
// Test class for example
class TestTeam
{
public:
TestTeam(int16_t p, const std::string& name = "Empty name"):mPoints(p), mName(name)
{
};
int16_t GetPoints() const {return mPoints;}
const std::string& GetName() const {return mName;}
void SetName( const std::string& name ) {mName=name;}
bool operator < ( const TestTeam& r) const
{
return GetPoints() < r.GetPoints();
}
private:
int16_t mPoints;
std::string mName;
};
int main(int argc, const char * argv[])
{
const uint32_t COUNT_LOWEST_ELEMENTS_TO_FIND = 3;
// Fill container by test data with a help of non-explicit constructor to solve your task
std::vector<TestTeam> VecTeam {3,5,8,9,11,2,14,7};
// Here you can do others manipulations with team data ...
//Sort vector by GetPoints overloaded in less operator. After sort first three elements will be with lowest points in container
std::sort(VecTeam.begin(), VecTeam.end());
//Print results as points - name
std::for_each( VecTeam.begin(), VecTeam.begin() + COUNT_LOWEST_ELEMENTS_TO_FIND, [] (TestTeam el)
{
std::cout << el.GetPoints() << " - " << el.GetName() << std::endl;
} );
}
I made the test class TestTeam only to implement test logic for your objects.
If you launch the program, you get results like the following:
2 - Empty name
3 - Empty name
5 - Empty name
Program ended with exit code: 0
I have the following comparator for string objects
struct Comparator{
int x;
bool operator() (string a, string b) {
int i = 1;
if(a < b) {
i = -1;
}
i*= x;
if(i==-1) {
return true;
}
return false;
}
};
As you can see, it has a parameter x. When it is 1, the comparison of strings is normal, and when it is -1, the comparison is inverted.
When using it in an algorithm like sort for vector elements, I can pass an object instance of this comparator with the right x, but when I want to give this comparator to a template class like set, I need to give a class and not an object instance. So the only solution I see is to make x static. Is there a better solution?
Here is an example of a main where I would like to use this comparator:
int main(int argc, char** argv)
{
vector<string> vec;
vec.push_back("c");
vec.push_back("a");
vec.push_back("b");
Comparator comp;
comp.x = 1; // for normal comparison
comp.x = -1; // for inverse comparison
sort(vec.begin(),vec.end(), comp); // here I can give the functor instance
for(vector<string>::iterator it = vec.begin() ; it != vec.end(); it++)
{
cout << *it << endl;
}
set<string, Comparator> ss; // but here I must give the name of the functor class
ss.insert("c");
ss.insert("a");
ss.insert("b");
for(set<string>::iterator it = ss.begin() ; it != ss.end(); it++)
{
cout << *it << endl;
}
return 0;
}
All the relevant constructors of set also take an instance of the comparator class.
set<string, Comparator> ss(Comparator(-1));
Now you only need a constructor for Comparator that initializes its member x with an appropriate value.
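A sketch of what that could look like, based on the main() from the question (the inverted case here compares b < a rather than negating a < b, so it is still a strict weak ordering):
struct Comparator
{
    int x;
    explicit Comparator(int x_ = 1) : x(x_) {}   // 1: normal, -1: inverted
    bool operator()(const string& a, const string& b) const
    {
        return x >= 0 ? a < b : b < a;
    }
};

// in main():
set<string, Comparator> ss(Comparator(-1));      // inverted ordering for this set
sort(vec.begin(), vec.end(), Comparator(1));     // the same type still works with sort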
That said, the standard library already comes with a comparator class for this purpose:
set<string, std::greater<std::string> > ss;
That will not work: negating a < b is not the same as b < a (they differ when the strings are equal), so simply flipping the result of a < b does not give a valid ordering.
You might utilize std::string::compare:
bool operator() (const std::string& a, const std::string& b) {
int result = a.compare(b) * x;
return result < 0;
}
I am trying to initialize a 2D concurrent_hash_map, a container available in the Intel TBB library. Compilation passes and there is no error at runtime. However, not all initialized values are available in the container, leading to incorrect behavior.
The hash map is defined as
template<typename K>
struct MyHashCompare {
static size_t hash(const K& key) { return boost::hash_value(key); }
static bool equal(const K& key1, const K& key2) { return (key1 == key2); }
};
typedef concurrent_hash_map<int, int, MyHashCompare<int> > ColMap;
typedef concurrent_hash_map<int, ColMap, MyHashCompare<int> > RowMap;
The function object is defined as follows. Could the reason for the incorrect behavior originate here?
class ColumnInit {
RowMap *const pMyEdgeMap;
public:
void operator()(const blocked_range<size_t>& r) const {
RowMap* pEdgeMap = pMyEdgeMap;
RowMap::accessor accessX;
ColMap::accessor accessY;
for(size_t n1 = r.begin(); n1 != r.end(); n1++)
{
pEdgeMap->insert(accessX, n1);
for(int n2 = 1; n2 <= 64; n2++)
{
int diff = abs((int)n1 - n2);
if ((diff == 8) || (diff == 1))
{
assert((accessX->second).insert(accessY, n2));
accessY->second = -1;
}
else
{
assert((accessX->second).insert(accessY, n2));
accessY->second = 0;
}
}
}
}
ColumnInit(RowMap* pEdgeMap): pMyEdgeMap(pEdgeMap)
{
}
};
The function object is invoked from a call to parallel_for as follows:
parallel_for(blocked_range<size_t>(1,64,16), ColumnInit((RowMap*)&mEdges), simple_partitioner());
Any suggestions or feedback would be great.
Thanks.
If you intend to create a 64x64 table, use blocked_range(1,65,16) as the first argument to parallel_for. The reason is that a blocked_range represents a half-open interval, which includes the lower bound but excludes the upper bound.
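That is, only the range bounds in the call change (everything else from the question stays the same):
parallel_for(blocked_range<size_t>(1, 65, 16), ColumnInit((RowMap*)&mEdges), simple_partitioner());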