Reading/writing several structs of unknown size to a file in C++

I want to make a book database to record the books I have read. So far, I have a structure for a book entry.
struct Entry
{
string title;
string author;
int pages;
};
As you can see, the title and the author variables are of undetermined size. I would like to store several structures within one file, and then to read all those structures when I want to display the database.
How would I read/write several of these structures from a file? Would I have to have predetermined sizes? Please provide an example.

You could easily store it in CSV format the following way:
title_1,author_1,pages_1,title_2,author_2,pages_2,title_3,auth...
When reading the file back, parse it according to your separators and you have your data back.
edit:
As Mark Setchell suggested, you shouldn't use ',' as a separator, because a comma might also be part of the title itself. Instead, use a rarer character that is unlikely to appear in book titles; some examples are ';', '|' or '#', or even unprintable characters.

Related

addition of data only in even position

I just wanted to ask something about adding up data.
So I have this .txt file that I want to read in using C++. That part is no problem; I can read it using fstream. Now that .txt file contains data of...
Number of monitored events
Event-1:Weight-1:Event-2:Weight-2:Event-3:Weight-3:Event-4:Weight-4:
Event-5:Weight-5: ....:
The information above has 4 pairs in each row, delimited by a ':'.
Now my question is: is it possible to add up the values of all the weights? I can't seem to work out how to read only the weight part, as it is all separated by the same delimiter.
It is possible. Break each line into tokens on the ':' delimiter, convert every second token to an integer, and then all you have to do is add them up.
If you want to know how to break a line into tokens, here is the link:
C++ Reading file Tokens

Include massive text file in C++ program

I have a comma delimited text file that has a few million entries. After every 23 entries there is a newline. I will add each full line as an instance of a vector, with the 23 fields as instances of a sub-vector. So, the first instance will be vec[0][0-22], followed by vec[1][0-22], etc.
This file is a part of my program and needs to be compiled with it. Meaning, I don't want to have to provide the file additionally and use ifstream to read the data from the separate file.
I already can sort the data using ifstream, but now I need to integrate the raw data into the program so that I can compile it all together.
I am unable to make this large comma-delimited-field text file into one long string and then separate it into fields because some of the fields have quotes within them, with commas between the quotes too.
example:
`19891656,PLANTAE,TRACHEOPHYTA,MAGNOLIOPSIDA,FABALES,FABACEAE,Zygia,ampla,(Benth.) Pittier,,,,,Pithecellobium amplum |Pithecolobium brevispicatum ,Jarendeua de Sapo,,,LC,,3.1,2012,stable,N
19891919,PLANTAE,TRACHEOPHYTA,MAGNOLIOPSIDA,FABALES,FABACEAE,Zygia,biflora,L.Rico,,,,,,,,,VU,B2ab(iii),3.1,2012,stable,N
2060,ANIMALIA,CHORDATA,MAMMALIA,CARNIVORA,OTARIIDAE,Arctocephalus,pusillus,"(Schreber, 1775)",,,,,Phoca pusilla,"Afro-Australian Fur Seal, Australian Fur Seal, Brown Fur Seal, Cape Fur Seal, South African Fur Seal",Arctocphale d'Afrique du Sud,,LC,,3.1,2015,increasing,N`
When my program runs it will source data from this mass of text, and it will not need to use ifstream with a path to an external file. How can I include this text file in my program? Is there a way to "include" text files? If I need to make a massive array of strings, how do I do this with quoted fields with commas between the quotes? I would be happy to clarify any part of this question which seems vague as I am really curious as to how I can make this work.
Technically this text file is a csv, but I am hesitant to include csv as a tag because I think people will think I am looking for a csv parsing solution.
You may want to write a script to convert each line of your data file into an initializer of a record struct, with a trailing comma after each line (after every line if you use a terminator entry (see below); otherwise after every line except the last). This script will be specific to your data type. Say,
12,Joe,,,YES -> MyType(12,"Joe",0,0,true),
Then #include the entire converted file in place of your data array/vector element initializers, for example:
MyType myData [] =
{
#include "my_data_file_converted"
MyType() //an optional terminal entry
};
Of course MyType should have constructor(s) accepting your initialization sequences.

C++ trying to read in malformed CSV with erroneous commas

I am trying to make a simple CSV file parser to transfer a large number of orders from an order system to an invoicing system. The issue is that the CSV which I am downloading has erroneous commas, sometimes present in the name field, and so this throws the whole process off.
The company INSISTS, which is really starting to piss me off, that they are simply copying data they receive into the CSV and so it's valid data.
Excel mostly seems to interpret this correctly, or at least puts the data in the right field; my program, however, doesn't. I opened the CSV in Notepad++ and there are no quotes around strings, just raw strings separated by commas.
This is currently how I am reading the file.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>
using namespace std;

using vstring = vector<string>;   // alias to save typing vector<string>

vstring explode(string const & s, char delim);   // defined below main()

int main()
{
    string t;
    getline(cin, t);
    string Output;
    string path = "in.csv";
    ifstream input(path);
    vstring readout;
    vstring contact, InvoiceNumber, InvoiceDate, DueDate, Description, Quantity, UnitAmount, AccountCode, TaxType, Currency, Allocator, test, Backup, AllocatorBackup;
    vector<int> read, add, total;
    if (input.is_open()) {
        for (string line; getline(input, line); ) {
            auto arr = explode(line, ',');
            contact.push_back(arr[7]); // Source site is the customer in this instance.
            InvoiceNumber.push_back(arr[0]); // OrderID will be the invoice number
            InvoiceDate.push_back(arr[1]); // Purchase date
            DueDate.push_back(arr[1]); // Same as order date
            Description.push_back(arr[0]);
            Quantity.push_back(arr[0]);
            UnitAmount.push_back(arr[10]); // The total
            AccountCode.push_back(arr[7]); // Will be set depending on other factors - but contains the site of purchase
            Currency.push_back(arr[11]); // EUR/GBP
            Allocator.push_back(arr[6]); // This will decide the VAT treatment normally.
            AllocatorBackup.push_back(arr[5]); // This will decide VAT treatment if the column is off by one.
            Backup.push_back(arr[12]);
            TaxType = Currency;
        }
    }
    return 0;
}

vstring explode(string const & s, char delim) {
    vstring result;
    istringstream q(s);
    for (string token; getline(q, token, delim); ) {
        result.push_back(move(token));
    }
    return result;
}
vstring is a typedef I created to save me typing vector<string> so often, so it's the same thing.
The issue is that when I come across one of the fields with a comma in it (normally the name field, which is [3]), it of course pushes everything back by one, so the account code becomes [8], etc. This is extremely troublesome, as in some cases it's difficult to tell whether or not I am dealing with correct data in the next field.
So two questions:
1) Is there any simple way in which I could detect this anomaly and correct for it that I've missed? I do, of course, try to check in my loop where I can whether valid data is where it's expected to be, but this is becoming messy and does not cope with more than one comma.
2) Is the company correct in telling me that it's "Expected behavior" to allow commas entered by a customer to creep into this CSV without being processed or have they completely misunderstood the CSV "standard"?
Retired Ninja mentioned in the comments that one approach would be to parse all fields on either side of the 'problem field' first, and then put the remaining data into the problem field. This is the best approach if you know which field might contain corruption. If you don't know which field could be corrupted, you still have options though!
You know:
The number of fields that should be present
Something about the type of data in each of those fields.
If you codify the types of the fields (implement classes for different data types, so your vectors of strings would become vectors of OrderIDs or Dates or Counts or....), you can test different concatenations (joining adjacent fields that are separated by a comma) and score them according to how many of the fields pass some data validation. You then choose the best scoring interpretation of the data. This would build some data validation into the process, and make everything a bit more robust.
'csv' is not that well defined. There is the standard way, where ',' separates the columns and '\n' the rows. Sometimes '"' is used to handle these symbols inside a field, but Excel includes them only if a control character is involved.
Here is the definition from Wikipedia.
RFC 4180 formalized CSV. It defines the MIME type "text/csv", and CSV files that follow its rules should be very widely portable. Among its requirements:
-MS-DOS-style lines that end with (CR/LF) characters (optional for the last line).
-An optional header record (there is no sure way to detect whether it is present, so care is required when importing).
-Each record "should" contain the same number of comma-separated fields.
-Any field may be quoted (with double quotes).
-Fields containing a line-break, double-quote or commas should be quoted. (If they are not, the file will likely be impossible to process correctly.)
-A (double) quote character in a field must be represented by two (double) quote characters.
Comma-separated values
Keep in mind that Excel has different settings depending on the system and the system language settings. It might be that their Excel is parsing it correctly, but somewhere else it isn't.
For example, in countries like Germany ';' is used to separate the columns. The decimal separators differ as well.
1.5 << english
1,5 << german
Same goes for the thousands separator.
1,000,000 << english
1.000.000 << german
or
1 000 000 << also german
Now, Excel also has different CSV export settings like .csv (separated values), .csv (Macintosh) and .csv (MS-DOS), so I guess there can be differences there too.
Now for your questions: in my opinion, they are not clearly wrong in what they are doing with their files. But you should think about discussing an (E)BNF with them. Here are some links:
BNF
EBNF
It is a grammar which you decide on, and with clear definitions the code should be no problem. I know customers can block something like this because they don't want the extra work, but it is simply the best solution. If you want '"' in your file, they should provide it somehow. I don't know how they copy their data, but it should also be done by some kind of program (I don't think they do this by hand?), so your code and their code should use the same (E)BNF, which you decide on together with them.

Fortran 90: reading a generic string with enclosed some "/" characters

Hi everybody, I've run into some problems reading unformatted character strings from a simple file. When the first / is found, everything after it is missed.
This is the example of the text I would like to read: after the first 18 character blocks that are fixed (from #Mod to Flow[kW]), there is a list of chemical species' names, that are variables (in this case 5) within the program I'm writing.
#Mod ID Mod Name Type C. #Coll MF[kg/s] Pres.[Pa] Pres.[bar] Temp.[K] Temp.[C] Ent[kJ/kg K] Power[kW] RPM[rad/s] Heat Flow[kW] METHANE ETHANE PROPANE NITROGEN H2O
I would like to skip, after some formal checks, the first 18 blocks, then read the chemical species. To do the former, I created a character array with dimension of 18, each with a length of 20.
character(20), dimension(18) :: chapp
Then I would like to associate the 18 blocks to the character array
read(1,*) (chapp(i),i=1,18)
...but this is the result: chapp(1) to chapp(7) hold the first 7 strings correctly, but this is chapp(8):
chapp(8) = 'MF[kg '
and from here on, everything is left blank!
How could I overcome this reading problem?
The problem is due to your using list-directed input (the * as the format). List-directed input is useful for quick and dirty input, but it has its limitations and quirks.
You stumbled across a quirk: A slash (/) in the input terminates assignment of values to the input list for the READ statement. This is exactly the behavior that you described above.
This is not a choice of the compiler writer, but is mandated by all relevant Fortran standards.
The solution is to use formatted input. There are several options for this:
If you know that your labels will always be in the same columns, you can use a format string like '(1X,A4,2X,A2,1X,A3,2X)' (this is not complete) to read in the individual labels. This is error-prone, and is also bad if the program that writes out the data changes its format for some reason or other, or if the labels are edited by hand.
If you can control the program that writes the labels, you can use tab characters to separate the individual labels (and also, later, the data). Read in the whole line, split it into tab-separated substrings using INDEX, and read in the individual fields using an (A) format. Don't use list-directed format, or you will get hit by the / quirk mentioned above. This has the advantage that your labels can also include spaces, and that the data can be imported from/to Excel rather easily. This is what I usually do in such cases.
Otherwise, you can read in the whole line and split on multiple spaces. A bit more complicated than splitting on single tab characters, but it may be the best option if you cannot control the data source. You cannot have labels containing spaces then.

How to parse text-based table in C++

I am trying to parse a table in the form of a text file using ifstream, and evaluating/manipulating each entry. However, I'm having trouble figuring out how to approach this because of omissions of particular items. Consider the following table:
NEW VER ID NAME
1 2a 4 "ITEM ONE" (2001)
1 7 "2 ITEM" (2002) {OCT}
1.1 10 "SOME ITEM 3" (2003)
1 12 "DIFFERENT ITEM 4" (2004)
1 a4 16 "ITEM5" (2005) {DEC}
As you can see, sometimes the "NEW" column has nothing in it. What I want to do is take note of the ID, the name, the year (in brackets), and note whether there are braces or not afterwards.
When I started doing this, I looked for a "split" function, but I realized that it would be a bit more complicated because of the aforementioned missing items and the titles becoming separated.
The one thing I can think of is reading each line word by word, keeping track of the latest number I saw. Once I hit a quotation mark, I note that the latest number I saw was an ID (with something like a split, the array position right before the quotation mark), then keep a record of everything until the next quote (the title), and finally start looking for brackets and braces for the other information. However, this seems really primitive and I'm looking for a better way to do it.
I'm doing this to sharpen my C++ skills and work with larger, existing datasets, so I'd like to use C++ if possible, but if another language (I'm looking at Perl or Python) makes this trivially easy, I could just learn how to interface a different language with C++. What I'm trying to do now is just sifting data anyways which will eventually become objects in C++, so I still have chances to improve my C++ skills.
EDIT: I also realize that this is possible to complete using only regex, but I'd like to try using different methods of file/string manipulation if possible.
If the column offsets are truly fixed (no tabs, just true space chars a la 0x20), I would read it a line at a time (std::getline) and break each line down using the fixed offsets into a set of four strings (std::string::substr).
Then postprocess each 4-tuple of strings as required.
I would not hard-code the offsets, but store them in a separate input file that describes the format of the input - like a table description in SQL Server or another DB.
Something like this:
Read the first line, find "ID", and store the index.
Read each data line using std::getline().
Create a substring from a data line, starting at the index you found "ID" in the header line. Use this to initialize a std::istringstream with.
Read the ID using iss >> an_int.
Search the first ". Search the second ". Search the ( and remember its index. Search the ) and remember that index, too. Create a substring from the characters in between those indexes and use it to initialize another std::istringstream with. Read the number from this stream.
Search for the braces.