I am trying to write a triple vector to a file and then be able to read it back into the data structure afterward. When I try to read the file back after it's been saved, the first fifty values come out correct but the rest of the values are garbage. I'd be really appreciative if someone could help me out here. Thanks a lot!
File declaration:
fstream memory_file("C:\\Users\\Amichai\\Pictures\\output.txt", ios::in | ios::out);
Save function:
void save_training_data(fstream &memory_file, vector<vector<vector<long double> > > &training_data)
{
    int sizeI = training_data.size();
    memory_file.write((const char *)&sizeI, sizeof(int));
    for (int i=0; i < sizeI; i++)
    {
        int sizeJ = training_data[i].size();
        memory_file.write((const char *)&sizeJ, sizeof(int));
        for (int j=0; j < sizeJ; j++)
        {
            int sizeK = training_data[i][j].size();
            memory_file.write((const char *)&sizeK, sizeof(int));
            for (int k = 0; k < sizeK; k++)
            {
                int temp;
                temp = (int)training_data[i][j][k];
                memory_file.write((const char *)&temp, sizeof(int));
            }
        }
    }
}
Read function:
void upload_memory(fstream &memory_file, vector<vector<vector<long double> > > &training_data)
{
    memory_file.seekg(ios::beg);
    int temp=0;
    int sizeK, sizeJ, sizeI;
    memory_file.read((char*)&sizeI, sizeof(int));
    training_data.resize(sizeI);
    for (int i=0; i < sizeI; i++)
    {
        memory_file.read((char*)&sizeJ, sizeof(int));
        training_data[i].resize(sizeJ);
        for (int j=0; j < sizeJ; j++)
        {
            memory_file.read((char*)&sizeK, sizeof(int));
            training_data[i][j].resize(sizeK);
            for (int k = 0; k < sizeK; k++)
            {
                memory_file.read((char*)&temp, sizeof(int));
                training_data[i][j][k]=temp;
            }
        }
    }
}
Since you're writing binary data (and apparently working under Windows) you really need to specify ios::binary when you open the fstream.
The problem is that you're writing the numerical values in binary form to a file that the stream is treating as text. Either open the file in binary mode (ios::binary) or convert the numbers to strings before writing them.
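For example, the open call from the question would just gain the extra flag:
fstream memory_file("C:\\Users\\Amichai\\Pictures\\output.txt", ios::in | ios::out | ios::binary);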
Check out the Boost.Serialization library at www.boost.org. It knows how to read and write STL collections to/from files. I don't know if it can handle nested containers, though.
You may also want to use Boost.MultiArray for your 3-dimensional data. If you're going to do matrix math on your data, then you might want to use Boost.uBLAS.
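If Boost.Serialization does handle the nesting (the std::vector support header applies recursively, as far as I know), a rough, untested sketch of saving the structure could look like this; the file name and the choice of binary archive are mine:
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/vector.hpp>   // teaches the archive about std::vector
#include <fstream>
#include <vector>

void save(const std::vector<std::vector<std::vector<long double> > > &training_data)
{
    std::ofstream ofs("training.dat", std::ios::binary);
    boost::archive::binary_oarchive oa(ofs);
    oa << training_data;   // recursively serializes the nested vectors
}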
As the other answers suggest, using "ios::in | ios::out | ios::binary" instead of "ios::in | ios::out" is correct. However, I remember reading that the C++ stream specification, while it has the binary option, was not really designed for binary files. If "ios::binary" doesn't help, you may need to use the C functions fopen(), fread(), fwrite(), and fclose() from stdio.h instead or, as another user suggests, the Boost.Serialization library.
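For reference, a rough, untested sketch of the stdio route for one inner vector might look like this. It keeps the same size-prefix scheme as the code in the question, but writes the long doubles themselves rather than truncating them to int; "out" is assumed to have been opened with fopen("output.bin", "wb").
#include <cstdio>
#include <vector>

void write_row(std::FILE *out, const std::vector<long double> &row)
{
    int sizeK = (int) row.size();
    std::fwrite(&sizeK, sizeof(int), 1, out);        // size prefix, as in the question
    if (sizeK > 0)
        std::fwrite(&row[0], sizeof(long double), row.size(), out);
}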
Related
I'm having trouble with a loop for a really basic console game that uses ASCII characters. I want to store the txt file (the map) in a 2D array and then output the 2D array to the console.
You can see two loops below: one reads the txt file into a 2D array and the other prints the 2D array to the screen.
void Level::load_level() {
    Level gameLevel;
    ifstream inFile;
    gameLevel.map;
    inFile.open("level1.txt");
    if (inFile.fail()) {
        perror("level1.txt");
        system("PAUSE");
    }
    for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 74; j++) {
            inFile >> map[i][j];
        }
    }
    for (int i = 0; i < 20; i++) {
        for (int j = 0; j < 74; j++) {
            printf("%c", map[i][j]);
        }
        printf("\n");
    }
    inFile.close();
}
So, here is what my txt file looks like:
And here is what my console displays:
It seems like the data in the 2D array isn't stored correctly. What should I do to keep one loop reading data from the file into a 2D array and another loop displaying the 2D array correctly? I don't know if I made that clear.
Thanks, and sorry for my lack of skill (I'm a beginner).
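One likely culprit (a guess, since the screenshots aren't visible here): operator>> skips whitespace by default, so any spaces in the map are silently dropped and every later character shifts. A rough sketch of reading whole lines with getline instead, assuming inFile and map are the ones from load_level() above and the map really is 20 rows of 74 characters:
// <string> included and the same using namespace std as the rest of the file
string line;
for (int i = 0; i < 20 && getline(inFile, line); i++) {
    for (int j = 0; j < 74 && j < (int) line.size(); j++) {
        map[i][j] = line[j];   // spaces are kept this time
    }
}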
I'm trying to assign values to a 2D vector. This is how I defined the vector; note that rows and columns are ints defined earlier:
vector<vector<int>> vec(rows, vector<int>(columns, 0));
I want to assign each char of a PBM file to this vector. The file only has '1' and '0', and this is how I'm reading it:
char i;
FILE* fp;
fp = fopen("file.pbm", "r");
This is how I'm assigning values to the vector:
for (int h=0; h<rows; h++){
    for (int j=0; j<columns; j++){
        while((i=fgetc(fp))!=EOF){
            vec[h][j] = i;
        }
    }
}
But when I try to print all of the vector's contents, it only contains '0':
for (int h=0; h<rows; h++){
    for (int j=0; j<columns; j++)
        cout << vec[h][j];
    cout << endl;
}
fclose(fp);
If anyone could tell me where I'm going wrong with this assignment, thanks!
for (int h=0; h<rows; h++){
    for (int j=0; j<columns; j++){
        while((i=fgetc(fp))!=EOF){
            vec[h][j] = i;
        }
    }
}
The while loop runs through the entire file without ever incrementing h or j, so you are reading the whole file into the first element, and you are attempting this (rows*columns) times.
You'll need to restructure your code to read the data in correctly.
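A rough, untested sketch of one way to restructure it, assuming the file is just a run of '0'/'1' digits with optional whitespace between them and no PBM header; fp, vec, rows and columns are the ones from the question:
#include <cctype>   // for isspace

int c;                                        // int, not char, so EOF can be detected
for (int h = 0; h < rows; h++) {
    for (int j = 0; j < columns; j++) {
        do {
            c = fgetc(fp);
        } while (c != EOF && isspace(c));     // skip spaces/newlines between digits
        if (c == EOF)
            break;                            // ran out of data early
        vec[h][j] = c - '0';                  // store the numeric value, not the ASCII code
    }
}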
In my code I'm modifying my array (int*) and then I want to compare it against the Matlab results.
Since my array is big (1200 x 1000 elements), this takes forever to load into Matlab.
I'm currently copying the printed output file into the Matlab command line...
for (int i = 0; i < _roiY1; i++)
{
    for (int j = 0; j < newWidth; j++)
    {
        channel_gr[i*newWidth + j] = clipLevel;
    }
}
ofstream myfile;
myfile.open("C:\\Users\\gdarmon\\Desktop\\OpenCVcliptop.txt");
for (int i = 0; i < newHeight; i++)
{
    for (int j = 0; j < newWidth; j++)
    {
        myfile << channel_gr[i * newWidth + j] << ", ";
    }
    myfile << ";" << endl;
}
Is there a faster way to get readable matrix data from C++ into Matlab?
The simplest answer is that it's much quicker to transfer the data in binary form, rather than - as suggested in the question - rendering to text and having Matlab parse it back to binary. You can achieve this by using fwrite() at the C/C++ end, and fread() at the Matlab end.
int* my_data = ...;
int my_data_count = ...;
FILE* fid = fopen("my_data_file", "wb");
fwrite((void*)my_data, sizeof(int), my_data_count, fid);
fclose(fid);
In Matlab:
fid = fopen('my_data_file', 'r');
my_data = fread(fid, inf, '*int32');
fclose(fid);
It's maybe worth noting that you can call C/C++ functions from within Matlab, so depending on what you are doing that may be an easier architecture (look up "mex files").
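For reference, a bare-bones mex gateway looks roughly like this; it's an untested sketch, the dimensions are placeholders, and the actual fill step is left as a comment because it depends on where the data comes from.
#include "mex.h"

// Allocates an int32 matrix that Matlab receives as the function's return value.
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    (void) nlhs; (void) nrhs; (void) prhs;              // unused in this sketch
    const mwSize rows = 1000, cols = 1200;              // assumed dimensions
    plhs[0] = mxCreateNumericMatrix(rows, cols, mxINT32_CLASS, mxREAL);
    int *out = (int *) mxGetData(plhs[0]);
    // fill out[] here; element (i,j) lives at out[i + j*rows] (Matlab is column-major)
    out[0] = 0;                                          // placeholder write
}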
Don't write the output as text.
Write your matrix into your output file the way Matlab likes to read: big array of binary.
ofstream myfile;
myfile.open("C:\\Users\\gdarmon\\Desktop\\OpenCVcliptop.txt", ofstream::app | ofstream::binary);
myfile.write((char*) channel_gr, newHeight*newWidth*sizeof(channel_gr[0]));
You may want to play some games on output to get the array ordered column-row rather than row-column, because Matlab stores matrices column-major. I remember orders-of-magnitude performance improvements when writing mex-file plug-ins for file readers, but it's been a while since I've done it.
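For example, one straightforward (untested) way to get column-major order with the same buffer is to swap the loop nesting and write one element at a time, so a plain fread plus reshape(newHeight, newWidth) on the Matlab side lines up without a transpose:
for (int j = 0; j < newWidth; j++)
    for (int i = 0; i < newHeight; i++)
        myfile.write((char*) &channel_gr[i * newWidth + j], sizeof(channel_gr[0]));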
I have 47 different files:
001_template.dat
...
047_template.dat
in a directory called /data. I need to compare each of these template files to three different query files, also in the directory. These are named:
001_AU01_query.dat
001_AU12_query.dat
001_AU17_query.dat.
I know how to get all of this to run, but I will have to cut and paste these 6 lines of code 46 more times and the program will get very long and confusing.
Is there a good way to loop over these files, perhaps by looping over the template files and doing three queries for every template? I already have a similarity function and a sort function defined, as well as inputFile. Here is the code I would like to convert (not homework; this is for a facial expression recognition project I have been working on):
int main()
{
    vector<float> temp01;
    vector<float> temp12;
    vector<float> temp17;
    temp01 = similar(inputFile("data/001_AU01_query.dat"), inputFile("data/001_template.dat"));
    sortAndOutput(temp01);
    temp12 = similar(inputFile("data/001_AU12_query.dat"), inputFile("data/001_template.dat"));
    sortAndOutput(temp12);
    temp17 = similar(inputFile("data/001_AU17_query.dat"), inputFile("data/001_template.dat"));
    sortAndOutput(temp17);
}
I would build the file names with sprintf inside the loop:
char data[100];
char templ[100];   // "template" is a keyword, so use another name
const char* datas[3] = {"%03d_AU01_query.dat", "%03d_AU12_query.dat", "%03d_AU17_query.dat"};
for (int i = 1; i <= 47; i++) {
    for (int j = 0; j < 3; j++) {
        sprintf(templ, "%03d_template.dat", i); // create the name of the template 1-47
        sprintf(data, datas[j], i);
        compare(templ, data);
    }
}
That should work as expected I think.
Use two arrays holding the names of the files and the templates:
const char* files[47] = {"file1", "file2", ...., "file47"};
const char* templates[3] = {"template1", "template2", "template3"};
and loop over them:
for (int i = 0; i < 47; i++) {
    for (int j = 0; j < 3; j++) {
        compare(files[i], templates[j]);
    }
}
void work()
{
    vector<float> temp;
    char data[100];
    char templates[100];
    const char* datas[3] = { "data/%03d_AU01_query.dat", "data/%03d_AU12_query.dat", "data/%03d_AU17_query.dat" };
    for (int i = 1; i < 48; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            sprintf_s(templates, "data/%03d_template.dat", i); // create the name of the template 1-47
            sprintf_s(data, datas[j], i);
            temp = similar(inputFile(data), inputFile(templates));
            sortAndOutput(temp);
        }
    }
}
I have been asked to sort a file in place using shell sort (and quicksort too, but I think that if I find a way to do one I will be able to do both). I have been thinking about what could help, but I can't find a way to do it. I have the algorithm for an array, but I can't think of a way to get it to work with a file.
Is there any way this can be done?
Edit:
With the help of the code posted by André Puel I was able to write some code that is working for the moment; here it is if you want to check it out:
#include <iostream>
#include <iomanip>
#include <fstream>
#include <cstdlib>
#include <sstream>
using namespace std;

int toNum(const string &s) {
    stringstream ss(s);
    int n;
    ss >> n;
    return n;
}

string toStr(int n) {
    stringstream ss;
    ss << n;
    string s;
    ss >> s;
    return string(5 - s.size(), ' ') + s;
}

int getNum(fstream &f, int pos) {
    f.seekg(pos*5);
    string s;
    for(int i = 0; i < 5; ++i) s += f.get();
    return toNum(s);
}

void putNum(fstream &f, int pos, int n) {
    f.seekp(pos*5);
    f.write(toStr(n).c_str(), 5);
}

int main() {
    fstream input("entrada1", fstream::in | fstream::out);
    string aux;
    getline(input, aux);
    int n = aux.size() / 5, temp, j;
    int gaps[] = {701, 301, 132, 57, 23, 10, 4, 1};
    int g = sizeof(gaps)/sizeof(gaps[0]);
    for(int k = 0; k < g; ++k) {
        int gap = gaps[k];   // use the gap value itself, not its index
        for(int i = gap; i < n; ++i) {
            temp = getNum(input, i);
            for(j = i; j >= gap and getNum(input, j - gap) > temp; j -= gap) {
                putNum(input, j, getNum(input, j - gap));
            }
            putNum(input, j, temp);
        }
    }
    input.close();
    return 0;
}
When you open a file in C++ you have two positions: the get pointer and the put pointer. They indicate where in the file you are reading and writing.
Using seekp you can set where you want to write, and using tellp you can find out where the next write will go. Every time you write something, the put pointer advances automatically.
The same goes for the get pointer; the functions are seekg and tellg.
Using these operations you can easily simulate an array. Let me show you some code:
#include <fstream>
#include <cassert>
#include <cstddef>

class FileArray {
public:
    FileArray(const char* path)
        // open for reading and writing in binary; the file must already exist
        : file(path, std::fstream::in | std::fstream::out | std::fstream::binary)
    {
        file.seekg(0, std::fstream::end);   // measure the file size
        size = file.tellg();
    }

    void write(unsigned pos, char data) {
        assert(pos < size);
        file.seekp(pos);
        file.put(data);
    }

    char read(unsigned pos) {
        assert(pos < size);
        file.seekg(pos);
        return file.get();
    }

private:
    std::fstream file;
    std::size_t size;
};
This is a naive way to deal with a file because you are assuming random access. Random access does work, but it may be slow: file streams are faster when you access data that are near each other (spatial locality).
Even so, it is a nice way to start dealing with your problem; you will gain experience with file I/O and will end up figuring out ways to improve performance for your specific problem. Let's take baby steps.
Another thing to note: when you perform a write, the data is handed to the fstream, which writes it to the file. I know the kernel will try to cache this and optimize the speed, but it would still be better to have some kind of cache layer of your own so you are not writing directly to disk for every element.
Finally, I assumed you are dealing with chars (because it is easier), but you can deal with other data types; you just need to be careful about the indexing and the size of the data type. For example, long long is typically 8 bytes: to access the first element of your file-array you read 8 bytes at position 8*0, and if you want the 10th element you read 8 bytes at position 8*10 and reconstruct the long long value from them.
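As a concrete (untested) illustration of that indexing with long long elements:
#include <fstream>
#include <cstddef>

long long read_at(std::fstream &f, std::size_t index) {
    long long value;
    f.seekg(index * sizeof(long long));       // byte offset = index * element size
    f.read(reinterpret_cast<char*>(&value), sizeof(value));
    return value;
}

void write_at(std::fstream &f, std::size_t index, long long value) {
    f.seekp(index * sizeof(long long));
    f.write(reinterpret_cast<const char*>(&value), sizeof(value));
}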