C++: creating multiple data structures using a for loop - c++

I have a program where I use records of the form:
// declaring a struct for each record
struct record
{
int number; // number of record
vector<int> content; // content of record
};
Within main I then declare each record:
record batch_1; // stores integers from 1 - 64
record batch_2; // stores integers from 65 - 128
Where each batch stores 64 integers from a list of numbers (in this instance a list of 128 total numbers). I would like to make this program open-ended, so that it can handle any list size (with the constraint that it is a multiple of 64). Therefore, if the list size were 256 I would need four records (batch_1 - batch_4). I am not sure how I can create N-many records, but I am looking for something like this (which is clearly not the solution):
//creating the batch records
for (int i = 1; i <= (list_size / 64); i++)
{
record batch_[i]; // each batch stores 64 integers
}
How can this be done, and will the scope of something declared within the for loop extend beyond the loop itself? I imagine an array would satisfy the scope requirement, but I am not sure how to implement it.

As many suggested in the comments, why not use the resizable container provided by the C++ Standard Library: std::vector?
So, instead of having this:
record batch_1; // stores integers from 1 - 64
record batch_2; // stores integers from 65 - 128
.
.
record batch_n // Stores integers x - y
Replace with:
std::vector<record> batches;
//And to create the batch records
for (int i = 1; i <= (list_size / 64); i++) {
record r;
r.number = i;
r.content = ....;
batches.push_back(r);
// You could also declare a constructor for your record struct to facilitate instantiating it.
}

Why don't you try this:
// code
vector<record> v(list_size / 64);
// additional code goes here
Now you can access your data as follows:
(v[i].content).at(j);

Related

Broadcast STL Map using MPI

I have a variable looks like this
map< string, vector<double> > a_data;
long story short, a_data can be filled only by node 0. Hence, broadcasting it using MPI_Bcast() is necessary.
As we know, we can only use primitive data types. So how should I broadcast an STL datatype like map using MPI_Bcast()?
One approach that you can do is to:
first broadcast the number of keys to every process, so that every process knows how many keys it will have to process;
broadcast an array that has coded the size of each of those keys;
broadcast another array that has coded the size of each array of values;
create a loop to iterate over the keys;
broadcast first the key string (as an array of chars);
broadcast next the values as an array of doubles.
So in pseudo-code it would look like:
// number_of_keys <- get number of keys from a_data;
// MPI_Bcast() number_of_keys;
// int key_sizes[number_of_keys];
// int value_sizes[number_of_keys];
//
// if(node == 0){ // the root process
// for every key in a_data do
// key_sizes[i] = the size of the key;
// value_sizes[i] = size of the vector of values associated to key
// }
//
// MPI_Bcast() the array key_sizes
// MPI_Bcast() the array value_sizes
//
// for(int i = 0; i < number_of_keys; i++){
// key <- get key in position 0 from a_data
// values <- get the values associated with the key
//
// MPI_Bcast() the key and use the size stored on key_sizes[i]
// MPI_Bcast() the values and use the size stored on value_sizes[i]
//
// // Non root processes
// if(node != 0){
// add key to the a_data of the process
// add the values to the corresponded key
// }
// }
You just need to adapt the code to C++ (of which I am not an expert), so you might have to adjust it a bit, but the big picture is there. After getting this approach working you can optimize further by reducing the number of broadcasts needed. That can be done by packing more information per broadcast. For instance, you can broadcast first the number of items, then the sizes of the keys and values, and finally the keys and the values together. For the latter you would need to create your custom MPI Datatype, similar to the example showcased here.

Writing 2-D array int[n][m] to HDF5 file using Visual C++

I'm just getting started with HDF5 and would appreciate some advice on the following.
I have a 2-d array: data[][] passed into a method. The method looks like:
void WriteData( int data[48][100], int sizes[48])
The size of the data is not actually 48 x 100 but rather 48 x sizes[i]. I.e. each row could be a different length! In one simple case I'm dealing with, all rows are the same size (but not 100), so you can say that the array is 48 X sizes[0].
How best to write this to HDF5?
I have some working code where I loop over the 48 rows and create a new dataset for each row.
Something like:
for (int i = 0; i < 48; i++)
{
hsize_t dsSize[2];
dsSize[0] = 48;
dsSize[1] = sizes[0]; // use sizes[i] in most general case
// Create the Data Space
DataSpace dataSpace = DataSpace(2, dsSize);
DataSet dataSet = group.createDataSet(dataSetName, intDataType, dataSpace);
dataSet.write(data[i], intDataType);
}
Is there a way to write the data all at once in one DataSet? Perhaps one solution for the simpler case of all rows the same length, and another for the ragged rows?
I've tried a few things to no avail. I called dataSet.write(data, intDataType), i.e. I threw the whole array at it. I seemed to get garbage in the file, I suspect because the array the data is stored in is actually 48x100 and I only need a small part of that.
It occurred to me that I could maybe use double pointers (int**) or vector<vector<int>>, but I'm stuck on that. As far as I can tell, write() needs a void* pointer. Also, I'd like the file to "look correct", i.e. one giant row with all rows of data is not desirable. If I must go that route, someone would need to suggest a slick way to store the info that would allow me to read the data back in from the file (perhaps store the row lengths as attributes?).
Perhaps my real problem is finding C++ examples of non-trivial use cases.
Any help is much appreciated.
Dave
Here is how you can do it using variable-length types, if your data is a vector of vectors (which seems to make sense for your use case):
void WriteData(const std::vector< std::vector<int> >& data)
{
hsize_t dim(data.size());
H5::DataSpace dspace(1, &dim);
H5::VarLenType dtype(H5::PredType::NATIVE_INT);
H5::DataSet dset(group.createDataSet(dataSetName, dtype, dspace));
std::vector<hvl_t> vl(dim); // a std::vector rather than a variable-length array, which is not standard C++
for (hsize_t i = 0; i < dim; ++i)
{
vl[i].len = data[i].size();
vl[i].p = const_cast<int*>(data[i].data()); // hvl_t::p is a void*, hence the cast away from const
}
dset.write(vl.data(), dtype);
}

implementing LRU in C++

I am trying to implement LRU page replacement. I was able to get the FIFO algorithm to work, but I am not sure how to keep track of the least recently used page.
I am reading in a file. It's structured so that the first number is the pid (1) and the second number is the ref (45), and so forth, like:
1 45
1 46
1 45
1 44
2 76
2 75
2 77
2 77
So, I am using a class array and parsing the file line by line; if a line is not in the array, I put the pid and ref there at that index. If the array is full, then I go back to the beginning and start all over.
class pagetable
{
public:
int pid;
int ref;
int faults;
pagetable();
};
pagetable* page = new pagetable[frames];
I am prompting for the number of frames.
I am prompting for the file name and storing it in
ifstream inputStream;
Then I can call my LFU function and grab each pid and ref to check.
int runsimLFU(ifstream &inputStream, pagetable* page, int frames ){
int i =0;
int j=0;
bool flag = false;
int cnt=0;
int index = 0;
int value = 0;
while(1){
inputStream >> pid;
inputStream >> ref;
page[count].pid = pid;
page[count].ref = ref;
pagefaults++;
Something like this, so I can keep grabbing each line of the file.
This is how I am searching the array:
bool findinarray(pagetable* page, int frames, int pid, int ref)
{
for(int i=0; i < frames; i++) {
if(page[i].pid == pid && page[i].ref == ref)
{
return true;
}
}
return false;
}
Two questions
1) I am unsure how to keep track of the LRU. I would imagine a second array and a counter variable, but that's as far as I can see.
2) Once I know the LRU, and the incoming pid/ref is not in the array, do I put it into the array at the LRU index?
Thank you
In general, you have two competing needs for an LRU:
quickly find an entry - suggesting an array lookup, hash table, or binary map as an index, and
quickly prepend/append/remove an entry - suggesting a linked-list
If you address either requirement separately, you end up with brute-force inefficiencies. You could coordinate two containers yourself, ideally wrapping them into an LRU class to provide some encapsulation and improve reliability. Alternatively, Boost.MultiIndex containers address such requirements: www.boost.org/libs/multi_index/

Optimizating my code simulating a database (2)

Some days ago I asked a question and got some really useful answers. I will give a summary for those of you who didn't read it, and then explain my new doubts and where I have problems now.
Explanation
I have been working on a program, simulating a small database, that first reads information from .txt files and stores it in the computer memory; then I can make queries against normal tables and/or transposed tables. The problem is that the performance is not good enough yet. It works slower than I expect. I have improved it, but I think I should improve it further. There are specific points where my program's performance is poor.
Current problem
The first problem that I have now (where my program is slower) is loading: for example, a table with 100,000 columns & 100 rows takes 0.325 min (I've improved this thanks to your help), whereas one with 100,000 rows & 100 columns takes 1.61198 min (the same as before). But on the other hand, access time to some data is better in the second case (in one particular example, 47 seconds vs. 6079 seconds in the first case). Any idea why?
Explanation
Now let me remind you how my code works (with an attached summary of my code).
First of all I have a .txt file simulating a database table with random strings separated by "|". Here you have an example of a table (with 7 rows and 5 columns). I also have the transposed table.
NormalTable.txt
42sKuG^uM|24465\lHXP|2996fQo\kN|293cvByiV|14772cjZ`SN|
28704HxDYjzC|6869xXj\nIe|27530EymcTU|9041ByZM]I|24371fZKbNk|
24085cLKeIW|16945TuuU\Nc|16542M[Uz\|13978qMdbyF|6271ait^h|
13291_rBZS|4032aFqa|13967r^\\`T|27754k]dOTdh|24947]v_uzg|
1656nn_FQf|4042OAegZq|24022nIGz|4735Syi]\|18128klBfynQ|
6618t\SjC|20601S\EEp|11009FqZN|20486rYVPR|7449SqGC|
14799yNvcl|23623MTetGw|6192n]YU\Qe|20329QzNZO_|23845byiP|
TransposedTable.txt (This is new from the previous post)
42sKuG^uM|28704HxDYjzC|24085cLKeIW|13291_rBZS|1656nn_FQf|6618t\SjC|14799yNvcl|
24465\lHXP|6869xXj\nIe|16945TuuU\Nc|4032aFqa|4042OAegZq|20601S\EEp|23623MTetGw|
2996fQo\kN|27530EymcTU|16542M[Uz\|13967r^\\`T|24022nIGz|11009FqZN|6192n]YU\Qe|
293cvByiV|9041ByZM]I|13978qMdbyF|27754k]dOTdh|4735Syi]\|20486rYVPR|20329QzNZO_|
14772cjZ`SN|24371fZKbNk|6271ait^h|24947]v_uzg|18128klBfynQ|7449SqGC|23845byiP|
Explanation
This information in a .txt file is read by my program and stored in the computer memory. Then, when making queries, I access this information stored in memory. Loading the data into memory can be a slow process, but accessing the data later will be faster, which is what really matters to me.
Here is the part of the code that reads this information from a file and stores it in the computer memory.
Code that reads data from the Table.txt file and store it in the computer memory
int h;
do
{
cout<< "Do you want to query the normal table or the transposed table? (1- Normal table/ 2- Transposed table):" ;
cin>>h;
}while(h!=1 && h!=2);
string ruta_base("C:\\Users\\Raul Velez\\Desktop\\Tables\\");
if(h==1)
{
ruta_base +="NormalTable.txt"; // Folder where my "Table.txt" is found
}
if(h==2)
{
ruta_base +="TransposedTable.txt";
}
string temp; // Variable where every row from the Table.txt file will be firstly stored
vector<string> buffer; // Variable where every different row will be stored after separating the different elements by tokens.
vector<ElementSet> RowsCols; // Variable with a class that I have created, that simulated a vector and every vector element is a row of my table
ifstream ifs(ruta_base.c_str());
while(getline( ifs, temp )) // We will read and store line per line until the end of the ".txt" file.
{
size_t tokenPosition = temp.find("|"); // When we find the symbol "|" we identify a new element, so we split the string temp into tokens that will be stored in vector<string> buffer
// --- NEW PART ------------------------------------
const char* p = temp.c_str();
char* p1 = strdup(p);
char* pch = strtok(p1, "|");
while(pch)
{
buffer.push_back(string(pch));
pch = strtok(NULL,"|");
}
free(p1);
ElementSet sss(0,buffer);
buffer.clear();
RowsCols.push_back(sss); // We store all the elements of every row (stores as vector<string> buffer) in a different position in "RowsCols"
// --- NEW PART END ------------------------------------
}
Table TablesStorage(RowsCols); // After every loop we will store the information about every .txt file in the vector<Table> TablesDescriptor
vector<Table> TablesDescriptor;
TablesDescriptor.push_back(TablesStorage); // In the vector<Table> TablesDescriptor will be stores all the different tables with all its information
DataBase database(1, TablesDescriptor);
Information already given in the previous post
After this comes the access-to-information part. Let's suppose that I want to make a query, and I ask for input: the row "n", the number of consecutive tuples "numTuples", and the columns "y". (The columns are defined by a decimal number "y" that is transformed into binary and tells us the columns to be queried; for example, if I ask for columns 54 (00110110 in binary) I am asking for columns 2, 3, 5 and 6.) Then I access the required information in the computer memory and store it in a vector shownVector. Here I show you that part of the code.
Problem
In the branch if(h == 2), where data from the transposed table is accessed, performance is poorer. Why?
Code that accesses the required information from my input
int n, numTuples;
unsigned long long int y;
cout<< "Write the ID of the row you want to get more information: " ;
cin>>n; // We get the row to be represented -> "n"
cout<< "Write the number of followed tuples to be queried: " ;
cin>>numTuples; // We get the number of followed tuples to be queried-> "numTuples"
cout<<"Write the ID of the 'columns' you want to get more information: ";
cin>>y; // We get the "columns" to be represented ' "y"
unsigned int r; // Auxiliar variable for the columns path
int t=0; // Auxiliar variable for the tuples path
int idTable;
vector<int> columnsToBeQueried; // Here we will store the columns to be queried get from the bitset<500> binarynumber, after comparing with a mask
vector<string> shownVector; // Vector to store the final information from the query
bitset<5000> mask;
mask=0x1;
clock_t t1, t2;
t1=clock(); // Start of the query time
bitset<5000> binaryNumber = Utilities().getDecToBin(y); // We get the columns -> change number from decimal to binary. Max number of columns: 5000
// We see which columns will be queried
for(r=0;r<binaryNumber.size();r++) //
{
if(binaryNumber.test(r) & mask.test(r)) // if both of them are bit "1"
{
columnsToBeQueried.push_back(r);
}
mask=mask<<1;
}
do
{
for(int z=0;z<columnsToBeQueried.size();z++)
{
ElementSet selectedElementSet;
int i;
i=columnsToBeQueried.at(z);
Table& selectedTable = database.getPointer().at(0); // It simulates a vector with pointers to the different tables that compose the database, but our example database only has one table, so don't worry
if(h == 1)
{
selectedElementSet=selectedTable.getRowsCols().at(n);
shownVector.push_back(selectedElementSet.getElements().at(i)); // We save in the vector shownVector the element "i" of the row "n"
}
if(h == 2)
{
selectedElementSet=selectedTable.getRowsCols().at(i);
shownVector.push_back(selectedElementSet.getElements().at(n)); // We save in the vector shownVector the element "n" of the row "i"
}
n=n+1;
t++;
}
}while(t<numTuples);
t2=clock(); // End of the query time
showVector().finalVector(shownVector);
float diff ((float)t2-(float)t1);
float microseconds = diff / CLOCKS_PER_SEC*1000000;
cout<<"Time: "<<microseconds<<endl;
Class definitions
Here I attached some of the class definitions so that you can compile the code, and understand better how it works:
class ElementSet
{
private:
int id;
vector<string> elements;
public:
ElementSet();
ElementSet(int, vector<string>&);
const int& getId();
void setId(int);
const vector<string>& getElements();
void setElements(vector<string>);
};
class Table
{
private:
vector<ElementSet> RowsCols;
public:
Table();
Table(vector<ElementSet>&);
const vector<ElementSet>& getRowsCols();
void setRowsCols(vector<ElementSet>);
};
class DataBase
{
private:
int id;
vector<Table> pointer;
public:
DataBase();
DataBase(int, vector<Table>&);
const int& getId();
void setId(int);
const vector<Table>& getPointer();
void setPointer(vector<Table>);
};
class Utilities
{
public:
Utilities();
static bitset<500> getDecToBin(unsigned long long int);
};
Summary of my problems
Why is the loading of the data different depending on the table format?
Why does the access time also depend on the table format (and why is its performance the opposite of the loading performance)?
Thank you very much for all your help!!! :)
One thing I see that may explain both your problems is that you are doing many allocations, a lot of which appear to be temporary. For example, in your loading you:
Allocate a temporary string per row
Allocate a temporary string per column
Copy the row to a temporary ElementSet
Copy that to a RowSet
Copy the RowSet to a Table
Copy the Table to a TableDescriptor
Copy the TableDescriptor to a Database
As far as I can tell, each of these copies is a complete new copy of the object. If you only had a few 100 or 1000 records that might be fine but in your case you have 10 million records so the copies will be time consuming.
Your loading times may differ due to the number of allocations done in the loading loop per row and per column. Memory fragmentation may also contribute at some point (when dealing with a large number of small allocations the default memory handler sometimes takes a long time to allocate new memory). Even if you removed all your unnecessary allocations I would still expect the 100-column case to be slightly slower than the 100,000-column case due to how you are loading and parsing by line.
Your information access times may be different as you are creating a full copy of a row in selectedElementSet. When you have 100 columns this will be fast but when you have 100,000 columns it will be slow.
A few specific suggestions to improving your code:
Reduce the number of allocations and copies you make. The ideal case would be to make one allocation for reading the file and then another allocation per record when stored.
If you're going to store the data in a Database then put it there from the beginning. Don't make half-a-dozen complete copies of your data to go from a temporary object to the Database.
Make use of references to the data instead of actual copies when possible.
When profiling make sure you get times when running a new instance of the program. Memory use and fragmentation may have a significant impact if you test both cases in the same instance and the order in which you do the tests will matter.
Edit: Code Suggestion
To hopefully improve your speed in the search loop try something like:
for(int z=0;z<columnsToBeQueried.size();z++)
{
int i;
i=columnsToBeQueried.at(z);
Table& selectedTable = database.getPointer().at(0);
if(h == 1)
{
ElementSet& selectedElementSet = selectedTable.getRowsCols().at(n);
shownVector.push_back(selectedElementSet.getElements().at(i));
}
else if(h == 2)
{
ElementSet& selectedElementSet = selectedTable.getRowsCols().at(i);
shownVector.push_back(selectedElementSet.getElements().at(n));
}
n=n+1;
t++;
}
I've just changed selectedElementSet to use a reference, which should completely eliminate the row copies taking place and, in theory, should have a noticeable impact on performance. For an even greater performance gain you can change shownVector to hold references/pointers to avoid yet another copy.
Edit: Answer Comment
You asked where you were making copies. The following lines in your original code:
ElementSet selectedElementSet;
selectedElementSet = selectedTable.getRowsCols().at(n);
creates a copy of the vector<string> elements member in ElementSet. In the 100,000 column case this will be a vector containing 100,000 strings so the copy will be relatively expensive time wise. Since you don't actually need to create a new copy changing selectedElementSet to be a reference, like in my example code above, will eliminate this copy.

Optimizating my code simulating a database

I have been working on a program simulating a small database where I can make queries. After writing the code I executed it, but the performance is quite bad; it runs really slowly. I have tried to improve it, but I started with C++ on my own a few months ago, so my knowledge is still quite limited. I would like to find a way to improve the performance.
Let me explain how my code works. Here I have attached a summarized example.
First of all I have a .txt file simulating a database table with random strings separated by "|". Here you have an example of a table (with 5 rows and 5 columns).
Table.txt
0|42sKuG^uM|24465\lHXP|2996fQo\kN|293cvByiV
1|14772cjZ`SN|28704HxDYjzC|6869xXj\nIe|27530EymcTU
2|9041ByZM]I|24371fZKbNk|24085cLKeIW|16945TuuU\Nc
3|16542M[Uz\|13978qMdbyF|6271ait^h|13291_rBZS
4|4032aFqa|13967r^\\`T|27754k]dOTdh|24947]v_uzg
This information in a .txt file is read by my program and stored in the computer memory. Then, when making queries, I access this information stored in memory. Loading the data into memory can be a slow process, but accessing the data later will be faster, which is what really matters to me.
Here is the part of the code that reads this information from a file and stores it in the computer memory.
Code that reads data from the Table.txt file and store it in the computer memory
string ruta_base("C:\\a\\Table.txt"); // Folder where my "Table.txt" is found
string temp; // Variable where every row from the Table.txt file will be firstly stored
vector<string> buffer; // Variable where every different row will be stored after separating the different elements by tokens.
vector<ElementSet> RowsCols; // Variable with a class that I have created, that simulated a vector and every vector element is a row of my table
ifstream ifs(ruta_base.c_str());
while(getline( ifs, temp )) // We will read and store line per line until the end of the ".txt" file.
{
size_t tokenPosition = temp.find("|"); // When we find the symbol "|" we identify a new element, so we split the string temp into tokens that will be stored in vector<string> buffer
while (tokenPosition != string::npos)
{
string element;
tokenPosition = temp.find("|");
element = temp.substr(0, tokenPosition);
buffer.push_back(element);
temp.erase(0, tokenPosition+1);
}
ElementSet ss(0,buffer);
buffer.clear();
RowsCols.push_back(ss); // We store all the elements of every row (stores as vector<string> buffer) in a different position in "RowsCols"
}
vector<Table> TablesDescriptor;
Table TablesStorage(RowsCols);
TablesDescriptor.push_back(TablesStorage);
DataBase database(1, TablesDescriptor);
After this comes the IMPORTANT PART. Let's suppose that I want to make a query, and I ask for input: the row "n", the number of consecutive tuples "numTuples", and the columns "y". (The columns are defined by a decimal number "y" that is transformed into binary and tells us the columns to be queried; for example, if I ask for columns 54 (00110110 in binary) I am asking for columns 2, 3, 5 and 6.) Then I access the required information in the computer memory and store it in a vector shownVector. Here I show you that part of the code.
Code that access to the required information upon my input
int n, numTuples;
unsigned long long int y;
clock_t t1, t2;
cout<< "Write the ID of the row you want to get more information: " ;
cin>>n; // We get the row to be represented -> "n"
cout<< "Write the number of followed tuples to be queried: " ;
cin>>numTuples; // We get the number of followed tuples to be queried-> "numTuples"
cout<<"Write the ID of the 'columns' you want to get more information: ";
cin>>y; // We get the "columns" to be represented ' "y"
unsigned int r; // Auxiliar variable for the columns path
int t=0; // Auxiliar variable for the tuples path
int idTable;
vector<int> columnsToBeQueried; // Here we will store the columns to be queried get from the bitset<500> binarynumber, after comparing with a mask
vector<string> shownVector; // Vector to store the final information from the query
bitset<500> mask;
mask=0x1;
t1=clock(); // Start of the query time
bitset<500> binaryNumber = Utilities().getDecToBin(y); // We get the columns -> change the number from decimal to binary. Max number of columns: 500
// We see which columns will be queried
for(r=0;r<binaryNumber.size();r++) //
{
if(binaryNumber.test(r) & mask.test(r)) // if both of them are bit "1"
{
columnsToBeQueried.push_back(r);
}
mask=mask<<1;
}
do
{
for(int z=0;z<columnsToBeQueried.size();z++)
{
int i;
i=columnsToBeQueried.at(z);
vector<int> colTab;
colTab.push_back(1); // Don't really worry about this
//idTable = colTab.at(i); // We identify in which table (with the id) is column_i
// In this simple example we only have one table, so don't worry about this
const Table& selectedTable = database.getPointer().at(0); // It simulates a vector with pointers to the different tables that compose the database, but our example database only has one table, so don't worry
ElementSet selectedElementSet;
selectedElementSet=selectedTable.getRowsCols().at(n);
shownVector.push_back(selectedElementSet.getElements().at(i)); // We save in the vector shownVector the element "i" of the row "n"
}
n=n+1;
t++;
}while(t<numTuples);
t2=clock(); // End of the query time
float diff ((float)t2-(float)t1);
float microseconds = diff / CLOCKS_PER_SEC*1000000;
cout<<"The query time is: "<<microseconds<<" microseconds."<<endl;
Class definitions
Here I attached some of the class definitions so that you can compile the code, and understand better how it works:
class ElementSet
{
private:
int id;
vector<string> elements;
public:
ElementSet();
ElementSet(int, vector<string>);
const int& getId();
void setId(int);
const vector<string>& getElements();
void setElements(vector<string>);
};
class Table
{
private:
vector<ElementSet> RowsCols;
public:
Table();
Table(vector<ElementSet>);
const vector<ElementSet>& getRowsCols();
void setRowsCols(vector<ElementSet>);
};
class DataBase
{
private:
int id;
vector<Table> pointer;
public:
DataBase();
DataBase(int, vector<Table>);
const int& getId();
void setId(int);
const vector<Table>& getPointer();
void setPointer(vector<Table>);
};
class Utilities
{
public:
Utilities();
static bitset<500> getDecToBin(unsigned long long int);
};
So the problem that I get is that my query time is very different depending on the table size (a table with 100 rows and 100 columns behaves nothing like a table with 10000 rows and 1000 columns). This means my code's performance is very low for big tables, which is what really matters to me... Do you have any idea how I could optimize my code?
Thank you very much for all your help!!! :)
Whenever you have performance problems, the first thing you want to do is profile your code. Here is a list of free tools that can do that on Windows, and here for Linux. Profile your code, identify the bottlenecks, and then come back and ask a specific question.
Also, like I said in my comment, can't you just use SQLite? It supports in-memory databases, making it suitable for testing, and it is lightweight and fast.
One obvious issue is that your get-functions return vectors by value. Do you need to have a fresh copy each time? Probably not.
If you try to return a const reference instead, you can avoid a lot of copies:
const vector<Table>& getPointer();
and similar for the nested get's.
I have not worked through it, but you may want to analyse the complexity of your algorithm.
The reference says that accessing an item takes constant time, but when you create nested loops, the complexity of your program increases:
for (i=0;i<1000; ++i) // O(i)
for (j=0;j<1000; ++j) // O(j)
myAction(); // Constant in your case
The program complexity is O(i*j), so how big can i and j be?
And what if myAction is not constant in time?
No need to reinvent the wheel: use the FirebirdSQL embedded database instead. Combined with the IBPP C++ interface, it gives you a great foundation for any future needs.
http://www.firebirdsql.org/
http://www.ibpp.org/
Though I advise you to please use a profiler to find out which parts of your code are worth optimizing, here is how I would write your program:
Read the entire text file into one string (or better, memory-map the file.) Scan the string once to find all | and \n (newline) characters. The result of this scan is an array of byte offsets into the string.
When the user then queries item M of row N, retrieve it with code something like this:
char* begin = text+offset[N*items+M]+1;
char* end = text+offset[N*items+M+1];
If you know the number of records and fields before the data is read, the array of byte offsets can be a std::vector. If you don't know and must infer from the data, it should be a std::deque. This is to minimize costly memory allocation and deallocation, which I imagine is the bottleneck in such a program.