how to display text file in c++?

I want to display the text file in my C++ program, but nothing appears and the program just ends. I am using a struct here. I previously used this kind of method, but now I am not sure why it isn't working. I hope someone can help me. Thanks a lot.
struct Records{
int ID;
string desc;
string supplier;
double price;
int quantity;
int rop;
string category;
string uom;
}record[50];
void inventory() {
int ID, quantity, rop;
string desc, supplier, category, uom;
double price;
ifstream file("sample inventory.txt");
if (file.fail()) {
cout << "Error opening records file." <<endl;
exit(1);
}
int i = 0;
while(! file.eof()){
file >> ID >> desc >> supplier >> price >> quantity >> rop >> category >> uom;
record[i].ID = ID;
record[i].desc = desc;
record[i].supplier = supplier;
record[i].price = price;
record[i].quantity = quantity;
record[i].rop = rop;
record[i].category = category;
record[i].uom = uom;
i++;
}
for (int a = 0; a < 15; a++) {
cout << "\n\t";
cout.width(10); cout << left << record[a].ID;
cout.width(10); cout << left << record[a].desc;
cout.width(10); cout << left << record[a].supplier;
cout.width(10); cout << left << record[a].price;
cout.width(10); cout << left << record[a].quantity;
cout.width(10); cout << left << record[a].rop;
cout.width(10); cout << left << record[a].category;
cout.width(10); cout << left << record[a].uom << endl;
}
file.close();
}
Here is the txt file:

Here are a couple of things you should consider.
Declare the variables as you need them. Don’t declare them at the top of your function. It makes the code more readable.
Use the file's full path to avoid confusion. For instance "c:/temp/sample inventory.txt".
if ( ! file ) is shorter.
To read data in a loop, use the actual read as the loop condition: while( file >> ID >> ... ). This would have revealed the cause of your problem (see the sketch after this list).
Read about the setw manipulator.
file's destructor will close the stream - you don't need to call close()
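For illustration, a minimal sketch of such a read loop inside inventory() (assuming the header line has already been discarded and that every field can be read with >>):
int i = 0;
while (i < 50 && file >> ID >> desc >> supplier >> price
                      >> quantity >> rop >> category >> uom)
{
    record[i].ID = ID;
    record[i].desc = desc;
    // ... copy the remaining fields exactly as in the original code ...
    ++i;
}
// i now holds the number of records actually read; loop over i instead of a hard-coded 15 when printing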
Your file format consists of a header and data. You do not read the header; you try to read the data directly. You try to match the header against various data types: strings, integers, floats; but the header is made entirely of words. The failed conversion will invalidate the stream and all subsequent reading attempts will fail. So, first discard the header – you may use getline.
Some columns contain data consisting of more than one word. file >> supplier reads one word, not two or more, so you will get "Mongol", not "Mongol Inc.". Your data format needs a separator between columns; otherwise you won't be able to tell where a column ends. If you add a separator, you may again use getline to read the fields.
The CATEGORY column is empty. Trying to read it will result in reading from a different column. Adding a separator will also solve the empty category column problem.
This is how your first rows would look if you used a comma as separator:
ID,PROD DESC,SUPPLIER,PRICE,QTY,ROP,CATEGORY,UOM
001,Pencil,Mongol Inc.,8,200,5,,pcs
A different format solution would be to define a string as zero or more characters enclosed in quotes:
001 "Pencil" "Mongol Inc." 8 200 5 "" "pcs"
and take advantage of the quoted manipulator (note the empty category string):
const int max_records_count = 50;
Record records[max_records_count];   // Record has the same members as the struct in the question

// note: the quoted manipulator needs #include <iomanip> and C++14
istream& read_record(istream& is, Record& r) // returns the read record in r
{
    return is >> r.ID >> quoted(r.desc) >> quoted(r.supplier) >> r.price
              >> r.quantity >> r.rop >> quoted(r.category) >> quoted(r.uom);
}

istream& read_inventory(istream& is, int& i) // returns the number of read records in i
{
    //...
    for (i = 0; i < max_records_count && read_record(is, records[i]); ++i)
        ; // no operation
    return is;
}
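A minimal sketch of how these helpers might be used (assuming <fstream> and <iomanip> are included, a using namespace std, and a file that already uses the quoted format shown above; the file name is taken from the question):
int main()
{
    ifstream file("sample inventory.txt");
    string header;
    getline(file, header);              // discard the header line, if your file still has one
    int count = 0;
    read_inventory(file, count);
    for (int i = 0; i < count; ++i)     // print only the records that were actually read
        cout << setw(10) << left << records[i].ID
             << setw(15) << left << records[i].desc
             << setw(15) << left << records[i].supplier
             << setw(10) << left << records[i].price << '\n';
}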

Unfortunately your text file is not a typical CSV file delimited by some character such as a comma. The entries in the lines seem to be separated by tabs, but that is a guess on my part. Anyway, the structure of the source file makes it harder to read.
Additionally, the file has a header, and when the first line is read and the word "ID" is extracted into an int variable, the conversion fails. The failbit of the stream is set, and from then on every call to any iostream function for this stream does nothing; the stream ignores all further requests to read something.
An additional difficulty is that you have spaces inside data fields, but the extraction operator for formatted input >> stops at whitespace. So you might read only half of a field in a record.
Solution: you must first read the header line, then the data rows.
Next, you must know whether the file is really tab-separated. Sometimes tabs are converted to spaces; in that case, we would need to reconstruct the start position of each field in a record.
In any case, you need to read a complete line and then split it into parts.
For the first solution approach, I assume tab-separated fields.
One of many possible examples:
#include <iostream>
#include <string>
#include <fstream>
#include <sstream>
#include <vector>
#include <iomanip>
const std::string fileName{"r:\\sample inventory.txt"};
struct Record {
int ID;
std::string desc;
std::string supplier;
double price;
int quantity;
int rop;
std::string category;
std::string uom;
};
using Database = std::vector<Record>;
int main() {
// Open the source text file with inventory data and check if it could be opened
if (std::ifstream ifs{ fileName }; ifs) {
// Here we will store all data
Database database{};
// Read the first header line and throw it away
std::string line{};
std::string header{};
if (std::getline(ifs, header)) {
// Now read all lines containing record data
while (std::getline(ifs, line)) {
// Now we have read a line and can split it into substrings, assuming tab as the delimiter
// To be able to extract data from the line, we will put it into a std::istringstream
std::istringstream iss{ line };
// One Record
Record record{};
std::string field{};
// Read fields and put in record
if (std::getline(iss, field, '\t')) record.ID = std::stoi(field);
if (std::getline(iss, field, '\t')) record.desc = field;
if (std::getline(iss, field, '\t')) record.supplier = field;
if (std::getline(iss, field, '\t')) record.price = std::stod(field);
if (std::getline(iss, field, '\t')) record.quantity = std::stoi(field);
if (std::getline(iss, field, '\t')) record.rop = std::stoi(field);
if (std::getline(iss, field, '\t')) record.category = field;
if (std::getline(iss, field)) record.uom = field;
database.push_back(record);
}
// Now we have read the complete database
// Show some debug output.
std::cout << "\n\nDatabase:\n\n\n";
// Show all records
for (const Record& r : database)
std::cout << std::left << std::setw(7) << r.ID << std::setw(20) << r.desc
<< std::setw(20) << r.supplier << std::setw(8) << r.price << std::setw(7)
<< r.quantity << std::setw(8) << r.rop << std::setw(20) << r.category << std::setw(8) << r.uom << '\n';
}
}
else std::cerr << "\nError: Could not open source file '" << fileName << "'\n\n";
}
But to be honest, there are many assumptions. And tab handling is notoriously error prone.
So, let us take the next approach and extract the data according to its position in the header string. We will check where each header item starts and use this information later to split a complete line into substrings.
We will use a list of field descriptors and search for their start position and width in the header line.
Example:
#include <iostream>
#include <string>
#include <fstream>
#include <sstream>
#include <vector>
#include <iomanip>
#include <array>
const std::string fileName{"r:\\sample inventory.txt"};
struct Record {
int ID;
std::string desc;
std::string supplier;
double price;
int quantity;
int rop;
std::string category;
std::string uom;
};
constexpr size_t NumberOfFieldsInRecord = 8u;
using Database = std::vector<Record>;
int main() {
// Open the source text file with inventory data and check if it could be opened
if (std::ifstream ifs{ fileName }; ifs) {
// Here we will store all data
Database database{};
// Read the first header line (it will be analysed below)
std::string line{};
std::string header{};
if (std::getline(ifs, header)) {
// Analyse the header
// We have 8 elements in one record. We will store the positions of header items
std::array<size_t, NumberOfFieldsInRecord> startPosition{};
std::array<size_t, NumberOfFieldsInRecord> fieldWidth{};
const std::array<std::string, NumberOfFieldsInRecord> expectedHeaderNames{ "ID","PROD DESC","SUPPLIER","PRICE","QTY","ROP","CATEGORY","UOM"};
for (size_t k{}; k < NumberOfFieldsInRecord; ++k)
startPosition[k] = header.find(expectedHeaderNames[k]);
for (size_t k{ 1 }; k < NumberOfFieldsInRecord; ++k)
fieldWidth[k - 1] = startPosition[k] - startPosition[k - 1];
fieldWidth[NumberOfFieldsInRecord - 1] = header.length() - startPosition[NumberOfFieldsInRecord - 1];
// Now read all lines containing record data
while (std::getline(ifs, line)) {
// Now we have read a line and can split it into substrings, based on position and field width
// (no istringstream is needed here; the fields are taken directly from the line with substr)
// One Record
Record record{};
std::string field{};
// Read fields and put in record
field = line.substr(startPosition[0], fieldWidth[0]); record.ID = std::stoi(field);
field = line.substr(startPosition[1], fieldWidth[1]); record.desc = field;
field = line.substr(startPosition[2], fieldWidth[2]); record.supplier = field;
field = line.substr(startPosition[3], fieldWidth[3]); record.price = std::stod(field);
field = line.substr(startPosition[4], fieldWidth[4]); record.quantity = std::stoi(field);
field = line.substr(startPosition[5], fieldWidth[5]); record.rop = std::stoi(field);
field = line.substr(startPosition[6], fieldWidth[6]); record.category = field;
field = line.substr(startPosition[7], fieldWidth[7]); record.uom = field;
database.push_back(record);
}
// Now we have read the complete database
// Show some debug output.
std::cout << "\n\nDatabase:\n\n\n";
// Header
for (size_t k{}; k < NumberOfFieldsInRecord; ++k)
std::cout << std::left << std::setw(fieldWidth[k]) << expectedHeaderNames[k];
std::cout << '\n';
// Show all records
for (const Record& r : database)
std::cout << std::left << std::setw(fieldWidth[0]) << r.ID << std::setw(fieldWidth[1]) << r.desc
<< std::setw(fieldWidth[2]) << r.supplier << std::setw(fieldWidth[3]) << r.price << std::setw(fieldWidth[4])
<< r.quantity << std::setw(fieldWidth[5]) << r.rop << std::setw(fieldWidth[6]) << r.category << std::setw(fieldWidth[7]) << r.uom << '\n';
}
}
else std::cerr << "\nError: Could not open source file '" << fileName << "'\n\n";
}
But this is still not all.
We should wrap all functions belonging to a record into the struct Record, and do the same for the database. Especially, we want to overload the extraction and insertion operators. Then input and output will later be very simple.
We will save this for later . . .
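A rough preview of what such an extraction operator could look like (a sketch only, reusing the tab-separated splitting from the first example above):
std::istream& operator>>(std::istream& is, Record& r)
{
    std::string line{};
    if (std::getline(is, line)) {
        std::istringstream iss{ line };
        std::string field{};
        if (std::getline(iss, field, '\t')) r.ID = std::stoi(field);
        if (std::getline(iss, field, '\t')) r.desc = field;
        if (std::getline(iss, field, '\t')) r.supplier = field;
        if (std::getline(iss, field, '\t')) r.price = std::stod(field);
        if (std::getline(iss, field, '\t')) r.quantity = std::stoi(field);
        if (std::getline(iss, field, '\t')) r.rop = std::stoi(field);
        if (std::getline(iss, field, '\t')) r.category = field;
        if (std::getline(iss, field))       r.uom = field;
    }
    return is;
}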
If you can give more and better information regarding the structure of the source file, then I will update my answer.

Related

when reading a file using c++, How to ignore a line

I want to read a custom file structure and output the records, and I also want to ignore the lines which are not in the right format, like comments, titles, etc.
I've tried this code but it stops looping when it meets a line which is outside the structure.
Here is the txt file.
1001 Promod Dinal IT-K20 42 42
1002 Sahan Navod BM-K11 65 28
day_02
1003 Kaushani Dilinika BM-K12 69 49
1004 Fathima Sahana QS-K14 73 43
int main()
{
ifstream thefile;
thefile.open("GameZone.txt");
int id;
char fName[30];
char lName[30];
char stream[30];
int score;
int time;
if (!thefile.is_open()) {
cout << "cant open the file" << endl;
}
else {
while (!thefile.eof()) {
if (thefile >> id >> fName >> lName >> stream >> score >> time) {
cout << id << " ," << fName << " ," << lName << " ," << stream << " ," << score << " ," << time << endl;
}else if(!(thefile >> id >> fName >> lName >> stream >> score >> time)){
cout << "skip the row" << endl;
continue;
}
}
}
return 0;
}
Output
1001 ,Promod ,Dinal ,IT-K20 ,42 ,42
1002 ,Sahan ,Navod ,BM-K11 ,65 ,28
Do not try to parse fields directly from the file. Instead, read lines from the file and attempt to parse those lines. Use the following algorithm:
Read a line from the file.
If you were not able to read a line, stop, you are done.
Try to parse the line into fields.
If you were not able to parse the line, go to step 1.
Process the fields.
Go to step 1.
If you are still having difficulty with the implementation, a very simple approach would be to read each line, create a stringstream from the line to parse your values from, and then, depending on the result of reading from the stringstream, either output your values in your (rather strange " ,") CSV format or simply read the next line and try again.
You should be using std::string instead of char[] to hold your string data in C++. Either will work, but the former is much more user-friendly and flexible. In either case, you will want to coordinate all the differing types of data that make up one record as a struct. This has many benefits if you are actually doing something more than just dumping your data to stdout. For example, you can store all of your data in a std::vector of struct and then be able to further process your data (e.g. sort, push_back, or erase records) as needed, or pass it to other functions for further processing.
A simple struct using int and std::string could be:
struct record_t { /* simple struct to coordinate data in single record */
int id, score, time;
std::string fname, lname, stream;
};
The reading and outputting of records in your CSV format can then be as simple as using a temporary struct to parse each line into and, if the parse succeeds, outputting (or further using) the data as needed, e.g.
std::string line; /* string to hold line */
std:: ifstream fin (argv[1]); /* in stream for file */
while (getline (fin, line)) { /* read entire line into line */
std::stringstream ss (line); /* create stringstream from line */
record_t record; /* temp struct to read into */
if (ss >> record.id >> record.fname >> record.lname >>
record.stream >> record.score >> record.time)
/* if successful read from stringstream, output record */
std::cout << record.id << " ," << record.fname << " ,"
<< record.lname << " ," << record.stream << " ,"
<< record.score << " ," << record.time << '\n';
}
Putting it all together in a short example that takes the file to be read as the first argument to the program:
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
struct record_t { /* simple struct to coordinate data in single record */
int id, score, time;
std::string fname, lname, stream;
};
int main (int argc, char **argv) {
if (argc < 2) { /* validate at least 1 argument provided */
std::cerr << "error: filename required.\n";
return 1;
}
std::string line; /* string to hold line */
std:: ifstream fin (argv[1]); /* in stream for file */
while (getline (fin, line)) { /* read entire line into line */
std::stringstream ss (line); /* create stringstream from line */
record_t record; /* temp struct to read into */
if (ss >> record.id >> record.fname >> record.lname >>
record.stream >> record.score >> record.time)
/* if successful read from stringstream, output record */
std::cout << record.id << " ," << record.fname << " ,"
<< record.lname << " ," << record.stream << " ,"
<< record.score << " ," << record.time << '\n';
}
}
(note: do not hard-code filenames or use magic numbers in your code)
Example Use/Output
Output in your rather odd " ," csv format:
$ ./bin/readrecords dat/records.txt
1001 ,Promod ,Dinal ,IT-K20 ,42 ,42
1002 ,Sahan ,Navod ,BM-K11 ,65 ,28
1003 ,Kaushani ,Dilinika ,BM-K12 ,69 ,49
1004 ,Fathima ,Sahana ,QS-K14 ,73 ,43
To make things slightly more useful, you can, instead of simply outputting the records directly, store all records in a std::vector<record_t> (vector of struct). This then opens the possibility of further processing your data. See if you can understand the changes made below on how each record is stored in a vector and then a Range-based for loop is used to loop over each record held in the vector to output your information.
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
#include <vector>
struct record_t { /* simple struct to coordinate data in single record */
int id, score, time;
std::string fname, lname, stream;
};
int main (int argc, char **argv) {
if (argc < 2) { /* validate at least 1 argument provided */
std::cerr << "error: filename required.\n";
return 1;
}
std::string line; /* string to hold line */
std:: ifstream fin (argv[1]); /* in stream for file */
std::vector<record_t> records; /* vector of records */
while (getline (fin, line)) { /* read entire line into line */
std::stringstream ss (line); /* create stringstream from line */
record_t record; /* temp struct to read into */
if (ss >> record.id >> record.fname >> record.lname >>
record.stream >> record.score >> record.time)
records.push_back(record); /* if good read, add to vector */
}
if (records.size() > 0) /* validate vector contains records */
for (auto& r : records) /* loop over all records */
std::cout << r.id << " ," << r.fname << " ," << r.lname << " ,"
<< r.stream << " ," << r.score << " ," << r.time << '\n';
else /* if no records read, throw error */
std::cerr << "error: no records read from file.\n";
}
Look things over and let me know if you have any further questions.

readline in C++?

I have a text file that I need to read in to variables in my code. For example lets say the .txt file looks like:
John
Town
12
Mike
Village
22
where there is a pattern of name, then address, then age, for multiple people. I found that with the following code:
string line;
ifstream myfile ("example.txt");
if (myfile.is_open())
{
while ( getline (myfile,line) )
{
cout << line << '\n';
}
myfile.close();
}
I can print out each line of the text file, but how could I assign the text to a variable?
I remember in Java you could do something along the lines of
while(there is a next line){
name = something.readline();
address = something.readline();
age = something.readline();
//do something with variables i.e construct new object then
//re-loop to construct new object with next set of data
}
the trick was that after readline() was called it would then move down a line in the text file and the next variable would be assigned to the text below, and so on. How can I recreate this in C++?
When I do stuff like this, I like to structure my data into records and write a function to read each record rather like this:
// logically grouped data
struct record
{
std::string name;
std::string address;
unsigned age;
};
// function to read in one record
// returns std::istream& so that the while() loop can check
// the stream to make sure the read was successful.
// Takes record as a reference to pass the data back out
// of the function
std::istream& read(std::istream& is, record& r)
{
std::getline(is, r.name);
std::getline(is, r.address);
is >> r.age >> std::ws;
return is;
}
int main()
{
std::ifstream myfile("example.txt");
record r;
while(read(myfile, r)) // while the read was a success
{
// do something with record here
std::cout << " name: " << r.name << '\n';
std::cout << "address: " << r.address << '\n';
std::cout << " age: " << r.age << '\n';
std::cout << '\n';
}
}

Parsing a huge complicated CSV file using C++

I have a large CSV file which looks like this:
23456, The End is Near, A silly description that makes no sense, http://www.example.com, 45332, 5th July 1998 Sunday, 45.332
That's just one line of the CSV file. There are around 500k of these.
I want to parse this file using C++. The code I started out with is:
#include <iostream>
#include <fstream>
#include <string>
#include <sstream>
using namespace std;
int main()
{
// open the input csv file containing training data
ifstream inputFile("my.csv");
string line;
while (getline(inputFile, line, ','))
{
istringstream ss(line);
// declaring appropriate variables present in csv file
long unsigned id;
string url, title, description, datetaken;
float val1, val2;
ss >> id >> url >> title >> datetaken >> description >> val1 >> val2;
cout << url << endl;
}
inputFile.close();
}
The problem is that it's not printing out the correct values.
I suspect that it's not able to handle white spaces within a field. So what do you suggest I should do?
Thanks
In this example we have to parse the string using two getline calls. The first, getline(std::cin, line), gets a line of CSV text using the default newline delimiter. The second, getline(ss, field, ','), uses the comma as delimiter to separate the fields.
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
float get_float(const std::string& s) {
std::stringstream ss(s);
float ret;
ss >> ret;
return ret;
}
int get_int(const std::string& s) {
std::stringstream ss(s);
int ret;
ss >> ret;
return ret;
}
int main() {
std::string line;
while (getline(std::cin, line)) {
std::stringstream ss(line);
std::vector<std::string> v;
std::string field;
while(getline(ss, field, ',')) {
std::cout << " " << field;
v.push_back(field);
}
int id = get_int(v[0]);
float f = get_float(v[6]);
std::cout << v[3] << std::endl;
}
}
Using std::istream to read std::strings with the overloaded extraction operator is not going to work well here. The entire line is one string, so the stream won't pick up that there is a change in fields by default. A quick fix would be to split the line on commas and assign the values to the appropriate fields (instead of extracting them directly from a std::istringstream).
NOTE: That is in addition to jrok's point about std::getline
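A minimal sketch of that quick fix (assuming the seven-field layout from the question and that no field value contains a comma):
std::vector<std::string> fields;
std::string field;
std::istringstream fieldstream(line);           // line is one full line read with std::getline
while (std::getline(fieldstream, field, ','))
    fields.push_back(field);
// fields[0] is the id, fields[1] the title, fields[3] the url, and so on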
Within the stated constraints, I think I'd do something like this:
#include <locale>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
#include <iterator>
#include <algorithm> // for std::copy
// A ctype that classifies only comma and new-line as "white space":
struct field_reader : std::ctype<char> {
field_reader() : std::ctype<char>(get_table()) {}
static std::ctype_base::mask const* get_table() {
static std::vector<std::ctype_base::mask>
rc(table_size, std::ctype_base::mask());
rc[','] = std::ctype_base::space;
rc['\n'] = std::ctype_base::space;
return &rc[0];
}
};
// A struct to hold one record from the file:
struct record {
std::string key, name, desc, url, zip, date, number;
friend std::istream &operator>>(std::istream &is, record &r) {
return is >> r.key >> r.name >> r.desc >> r.url >> r.zip >> r.date >> r.number;
}
friend std::ostream &operator<<(std::ostream &os, record const &r) {
return os << "key: " << r.key
<< "\nname: " << r.name
<< "\ndesc: " << r.desc
<< "\nurl: " << r.url
<< "\nzip: " << r.zip
<< "\ndate: " << r.date
<< "\nnumber: " << r.number;
}
};
int main() {
std::stringstream input("23456, The End is Near, A silly description that makes no sense, http://www.example.com, 45332, 5th July 1998 Sunday, 45.332");
// use our ctype facet with the stream:
input.imbue(std::locale(std::locale(), new field_reader()));
// read in all our records:
std::istream_iterator<record> in(input), end;
std::vector<record> records{ in, end };
// show what we read:
std::copy(records.begin(), records.end(),
std::ostream_iterator<record>(std::cout, "\n"));
}
This is, beyond a doubt, longer than most of the others -- but it's all broken into small, mostly-reusable pieces. Once you have the other pieces in place, the code to read the data is trivial:
std::vector<record> records{ in, end };
One other point I find compelling: the first time the code compiled, it also ran correctly (and I find that quite routine for this style of programming).
I have just worked out this problem for myself and am willing to share! It may be a little overkill, but it shows a working example of how the Boost Tokenizer and vectors handle a big problem.
/*
* ALfred Haines Copyleft 2013
* convert csv to sql file
* csv2sql requires that each line is a unique record
*
* This example of file read and the Boost tokenizer
*
* In the spirit of COBOL I do not output until the end
* when all the print lines are output at once
* Special thanks to SBHacker for the code to handle linefeeds
*/
#include <sstream>
#include <boost/tokenizer.hpp>
#include <boost/iostreams/device/file.hpp>
#include <boost/iostreams/stream.hpp>
#include <boost/algorithm/string/replace.hpp>
#include <vector>
namespace io = boost::iostreams;
using boost::tokenizer;
using boost::escaped_list_separator;
typedef tokenizer<escaped_list_separator<char> > so_tokenizer;
using namespace std;
using namespace boost;
vector<string> parser( string );
int main()
{
vector<string> stuff ; // this is the data in a vector
string filename; // this is the input file
string c = ""; // this holds the print line
string sr ;
cout << "Enter filename: " ;
cin >> filename;
//filename = "drwho.csv";
int lastindex = filename.find_last_of("."); // find where the extension begins
string rawname = filename.substr(0, lastindex); // extract the raw name
stuff = parser( filename ); // this gets the data from the file
/** I ask if the user wants a new_index to be created */
cout << "\n\nMySql requires a unique ID field as a Primary Key \n" ;
cout << "If the first field is not unique (no dupicate entries) \nthan you should create a " ;
cout << "New index field for this data.\n" ;
cout << "Not Sure! try no first to maintain data integrity.\n" ;
string ni ;bool invalid_data = true;bool new_index = false ;
do {
cout<<"Should I create a New Index now? (y/n)"<<endl;
cin>>ni;
if ( ni == "y" || ni == "n" ) { invalid_data =false ; }
} while (invalid_data);
cout << "\n" ;
if (ni == "y" )
{
new_index = true ;
sr = rawname.c_str() ; sr.append("_id" ); // new_index field
}
// now make the sql code from the vector stuff
// Create table section
c.append("DROP TABLE IF EXISTS `");
c.append(rawname.c_str() );
c.append("`;");
c.append("\nCREATE TABLE IF NOT EXISTS `");
c.append(rawname.c_str() );
c.append( "` (");
c.append("\n");
if (new_index)
{
c.append( "`");
c.append(sr );
c.append( "` int(10) unsigned NOT NULL,");
c.append("\n");
}
string s = stuff[0];// it is assumed that line zero has fieldnames
int x =0 ; // used to determine if new index is printed
// boost tokenizer code from the Boost website -- tok holds the token
so_tokenizer tok(s, escaped_list_separator<char>('\\', ',', '\"'));
for(so_tokenizer::iterator beg=tok.begin(); beg!=tok.end(); ++beg)
{
x++; // keeps number of fields for later use to eliminate the comma on the last entry
if (x == 1 && new_index == false ) sr = static_cast<string> (*beg) ;
c.append( "`" );
c.append(*beg);
if (x == 1 && new_index == false )
{
c.append( "` int(10) unsigned NOT NULL,");
}
else
{
c.append("` text ,");
}
c.append("\n");
}
c.append("PRIMARY KEY (`");
c.append(sr );
c.append("`)" );
c.append("\n");
c.append( ") ENGINE=InnoDB DEFAULT CHARSET=latin1;");
c.append("\n");
c.append("\n");
// The Create table section is done
// Now make the Insert lines one per line is safer in case you need to split the sql file
for (int w=1; w < stuff.size(); ++w)
{
c.append("INSERT INTO `");
c.append(rawname.c_str() );
c.append("` VALUES ( ");
if (new_index)
{
string String = static_cast<ostringstream*>( &(ostringstream() << w) )->str();
c.append(String);
c.append(" , ");
}
int p = 1 ; // used to eliminate the comma on the last entry
// tokenizer code needs unique name -- stok holds this token
so_tokenizer stok(stuff[w], escaped_list_separator<char>('\\', ',', '\"'));
for(so_tokenizer::iterator beg=stok.begin(); beg!=stok.end(); ++beg)
{
c.append(" '");
string str = static_cast<string> (*beg) ;
boost::replace_all(str, "'", "\\'");
// boost::replace_all(str, "\n", " -- ");
c.append( str);
c.append("' ");
if ( p < x ) c.append(",") ;// we dont want a comma on the last entry
p++ ;
}
c.append( ");\n");
}
// now print the whole thing to an output file
string out_file = rawname.c_str() ;
out_file.append(".sql");
io::stream_buffer<io::file_sink> buf(out_file);
std::ostream out(&buf);
out << c ;
// let the user know that they are done
cout<< "Well if you got here then the data should be in the file " << out_file << "\n" ;
return 0;}
vector<string> parser( string filename )
{
typedef tokenizer< escaped_list_separator<char> > Tokenizer;
escaped_list_separator<char> sep('\\', ',', '\"');
vector<string> stuff ;
string data(filename);
ifstream in(filename.c_str());
string li;
string buffer;
bool inside_quotes(false);
size_t last_quote(0);
while (getline(in,buffer))
{
// --- deal with line breaks in quoted strings
last_quote = buffer.find_first_of('"');
while (last_quote != string::npos)
{
inside_quotes = !inside_quotes;
last_quote = buffer.find_first_of('"',last_quote+1);
}
li.append(buffer);
if (inside_quotes)
{
li.append("\n");
continue;
}
// ---
stuff.push_back(li);
li.clear(); // clear here, next check could fail
}
in.close();
//cout << stuff.size() << endl ;
return stuff ;
}
You are right to suspect that your code is not behaving as desired because of the whitespace within the field values.
If you indeed have "simple" CSV where no field may contain a comma within the field value, then I would step away from the stream operators and perhaps from C++ altogether. The example program in the question merely re-orders fields; there is no need to actually interpret or convert the values into their appropriate types (unless validation is also a goal). Reordering alone is very easy to accomplish with awk. For example, the following command would reverse 3 fields found in a simple CSV file.
cat infile | awk -F, '{ print $3","$2","$1 }' > outfile
If the goal is really to use this code snippet as a launchpad for bigger and better ideas, then I would tokenize the line by searching for commas. The std::string class has built-in methods to find the offsets of specific characters. You can make this approach as elegant or inelegant as you want. The most elegant approaches end up looking something like the boost tokenization code.
The quick-and-dirty approach is to know that your program has N fields and look for the positions of the corresponding N-1 commas. Once you have those positions, it is straightforward to invoke std::string::substr to extract the fields of interest.
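A rough sketch of that quick-and-dirty approach (assuming line holds one record and no field contains a comma):
std::vector<std::string> fields;
std::string::size_type start = 0, comma;
while ((comma = line.find(',', start)) != std::string::npos) {
    fields.push_back(line.substr(start, comma - start));
    start = comma + 1;
}
fields.push_back(line.substr(start));   // the last field, after the final comma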

how to manipulate the txt file using C++ STL Homework

I have a txt file that contains a name, id number, mobile number, and location on each comma-separated line.
example
Robby, 7890,7788992356, 123 westminister
tom, 8820, 77882345, 124 kingston road
My task is to retrieve
Look up all of an employee's information by name.
Look up all of an employee's information by ID.
Add the information of an employee.
Update the information of an employee.
SO far I have read the file and stored the information in a vector. Code is shown below.
For tasks
1) Look up all of an employee's information by name: I will iterate over the vector and print the information containing the name. I will be able to do that.
2) Similarly, in the text file I will look for the id and print the information about that.
BUT I am clueless about points 3 & 4.
I am posting my code below
void filter_text( vector<string> *words, string name)
{
vector<string>::iterator startIt = words->begin();
vector<string>::iterator endIt = words->end();
if( !name.size() )
std::cout << " no word to found for empty string ";
while( startIt != endIt)
{
string::size_type pos = 0;
while( (pos = (*startIt).find_first_of(name, pos) ) != string::npos)
std::cout << " the name is " << *startIt << endl;
startIt++;
}
}
int main()
{
// to read a text file
std::string file_name;
std::cout << " please enter the file name to parse" ;
std::cin >> file_name;
//open text file for input
ifstream infile(file_name.c_str(), ios::in) ;
if(! infile)
{
std::cerr <<" failed to open file\n";
exit(-1);
}
vector<string> *lines_of_text = new vector<string>;
string textline;
while(getline(infile, textline, '\n'))
{
std::cout <<" line text:" << textline <<std::endl;
lines_of_text->push_back(textline);
}
filter_text( lines_of_text, "tony");
return 0;
}
#include <string>
#include <iostream>
#include <vector>
#include <stdexcept>
#include <fstream>
struct bird {
std::string name;
int weight;
int height;
};
bird& find_bird_by_name(std::vector<bird>& birds, const std::string& name) {
for(unsigned int i=0; i<birds.size(); ++i) {
if (birds[i].name == name)
return birds[i];
}
throw std::runtime_error("BIRD NOT FOUND");
}
bird& find_bird_by_weight(std::vector<bird>& birds, int weight) {
for(unsigned int i=0; i<birds.size(); ++i) {
if (birds[i].weight< weight)
return birds[i];
}
throw std::runtime_error("BIRD NOT FOUND");
}
int main() {
std::ifstream infile("birds.txt");
char comma;
bird newbird;
std::vector<bird> birds;
//load in all the birds
while (infile >> newbird.name >> comma >> newbird.weight >> comma >> newbird.height)
birds.push_back(newbird);
//find bird by name
bird& namebird = find_bird_by_name(birds, "Crow");
std::cout << "found " << namebird.name << '\n';
//find bird by weight
bird& weightbird = find_bird_by_weight(birds, 10);
std::cout << "found " << weightbird.name << '\n';
//add a bird
std::cout << "Bird name: ";
std::cin >> newbird.name;
std::cout << "Bird weight: ";
std::cin >> newbird.weight;
std::cout << "Bird height: ";
std::cin >> newbird.height;
birds.push_back(newbird);
//update a bird
bird& editbird = find_bird_by_name(birds, "Raven");
editbird.weight = 1000000;
return 0;
}
Obviously not employees, because that would make your homework too easy.
So, first off, I don't think you should store the information in a vector of strings. This kind of task totally calls for the use of a
struct employee {
int id;
std::string name;
std::string address;
//... more info
};
And storing instances of employees in an
std::vector<employee>
You see, using your strategy of storing the lines, searching for "westminster" would net me Robbie, as his line of text does include this substring, but his name isn't westminster at all. Storing the data in a vector of employee structs would eliminate this problem, and it'd make the whole thing a lot more, well, structured.
Of course you'd need to actually parse the file to get the info into the vector. I'd suggest using a strategy like:
while(getline(infile, textline, '\n')) {
std::stringstream l(textline);
getline(l,oneEmp.name, ','); //extract his name using getline
l >> oneEmp.id; //extract his id
//extract other fields from the stringstream as neccessary
employees.push_back(oneEmp);
}
As for adding information: when the user enters the data, just store it in your employees vector; and when you need to update the file, you may simply overwrite the original data file with a new one by opening it for writing and dumping the data there (this is obviously a rather wasteful strategy, but it's fine for a school assignment, which I suppose this is).
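A minimal sketch of that rewrite step (assuming the employee struct and employees vector suggested above; the output file name here is only an example):
std::ofstream out("employees.txt");              // reopen/overwrite the data file
for (const employee& e : employees)
    out << e.name << ", " << e.id << ", " << e.address << '\n';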
Start by splitting the CSV line into separate fields and then populate a struct with this data
eg:
struct Employee
{
std::string name;
std::string id_number;
std::string mobilenumber;
std::string location;
};
std::vector<Employee> employees; // Note you don't need a pointer
Look at string methods find_first_of, substr and friends.
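For instance, a small sketch of filling that struct from one line with find and substr (illustrative only; it assumes exactly four fields and no commas inside them):
Employee parse_employee(const std::string& line)
{
    Employee e;
    std::string::size_type a = line.find(','),
                           b = line.find(',', a + 1),
                           c = line.find(',', b + 1);
    e.name         = line.substr(0, a);
    e.id_number    = line.substr(a + 1, b - a - 1);
    e.mobilenumber = line.substr(b + 1, c - b - 1);
    e.location     = line.substr(c + 1);
    return e;
}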

Example for file input to structure members?

I have the following structure:
struct productInfo
{
int item;
string details;
double cost;
};
I have a file that will input 10 different products that each contain an item, details, and cost. I have tried to input it using inFile.getline but it just doesn't work. Can anyone give me an example of how to do this? I would appreciate it.
Edit
The file contains 10 lines that look like this:
570314,SanDisk Sansa Clip 8 GB MP3 Player Black,55.99
Can you provide an example please.
Edit
Sorry guys, I am new to C++ and I don't really understand the suggestions. This is what I have tried.
void readFile(ifstream & inFile, productInfo products[])
{
inFile.ignore(LINE_LEN,'\n'); // The first line is not needed
for (int index = 0; index < 10; index++)
{
inFile.getline(products[index].item,SIZE,DELIMETER);
inFile.getline(products[index].details,SIZE,DELIMETER);
inFile.getline(products[index].cost,SIZE,DELIMETER);
}
}
This is another approach that uses fstream to read the file and getline() to read each line on the file. The parsing of the line itself was left out on purpose since other posts have already done that.
After each line is read and parsed into a productInfo, the application stores it in a vector, so all products can be accessed in memory.
#include <iostream>
#include <fstream>
#include <vector>
#include <iterator>
#include <string>
using namespace std;
struct productInfo
{
int item;
string details;
double cost;
};
int main()
{
vector<productInfo> product_list;
ifstream InFile("list.txt");
if (!InFile)
{
cerr << "Couldn´t open input file" << endl;
return -1;
}
string line;
while (getline(InFile, line))
{ // from here on, check the post: How to parse complex string with C++ ?
// https://stackoverflow.com/questions/2073054/how-to-parse-complex-string-with-c
// to know how to break the string using comma ',' as a token
cout << line << endl;
// productInfo new_product;
// new_product.item =
// new_product.details =
// new_product.cost =
// product_list.push_back(new_product);
}
// Loop the list printing each item
// for (int i = 0; i < product_list.size(); i++)
// cout << "Item #" << i << " number:" << product_list[i].item <<
// " details:" << product_list[i].details <<
// " cost:" << product_list[i].cost << endl;
}
EDIT: I decided to take a shot at parsing the line and wrote the code below (it additionally needs <sstream> and <cstring> for stringstream and strtok()). Some C++ folks might not like the strtok() method of handling things, but there it is.
string line;
while (getline(InFile, line))
{
if (line.empty())
break;
//cout << "***** Parsing: " << line << " *****" << endl;
productInfo new_product;
// My favorite parsing method: strtok()
char *tmp = strtok(const_cast<char*>(line.c_str()), ",");
stringstream ss_item(tmp);
ss_item >> new_product.item;
//cout << "item: " << tmp << endl;
//cout << "item: " << new_product.item << endl;
tmp = strtok(NULL, ",");
new_product.details += tmp;
//cout << "details: " << tmp << endl;
//cout << "details: " << new_product.details << endl;
tmp = strtok(NULL, " ");
stringstream ss_cost(tmp);
ss_cost >> new_product.cost;
//cout << "cost: " << tmp << endl;
//cout << "cost: " << new_product.cost << endl;
product_list.push_back(new_product);
}
It depends on what's in the file. If it's text, you can use the stream extraction operator (>>) on a file input stream:
int i;
infile >> i;
If it's binary, you can just read it into &your_struct.
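For a plain-old-data struct that would look roughly like the sketch below; note that it would not work for the productInfo struct above, because std::string members cannot be filled this way (rawProduct is a hypothetical fixed-size layout):
struct rawProduct { int item; char details[64]; double cost; };   // no std::string members
rawProduct p;
infile.read(reinterpret_cast<char*>(&p), sizeof p);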
You have to
0) Create a new instance of productInfo, pinfo;
1) read text (using getline) to the first comma (','), convert this string to an int, and put it into pinfo.item.
2) read text to the next comma and put it into pinfo.details;
3) read text to the endline, convert the string to a double, and put it into pinfo.cost.
Then just keep doing this until you reach the end of the file.
Here is how I would use getline. Note that I use it once to read from the input file, and then again to chop that line at ",".
istream& operator>>(istream& is, productInfo& pi)
{
string line;
getline(is, line); // fetch one line of input
stringstream sline(line);
string item;
getline(sline, item, ',');
stringstream(item) >> pi.item; // convert string to int
getline(sline, item, ',');
pi.details = item; // string: no conversion necessary
getline(sline, item);
stringstream(item) >> pi.cost; // convert string to double
return is;
}
// usage:
// productInfo pi; ifstream inFile ("inputfile.txt"); inFile >> pi;
N.b.: This program is buggy if the input is
99999,"The Best Knife, Ever!",16.95