The following is in a text file
To the north, is the entrance to Room 2.\nThere are six suspects in this room:\n\tAdam\n\tSofia\n\tLucas\n\tDaniel\n\tChris\n\tJack\n\tTiana.
This line is being read in and stored.
I am trying to use printw() to output this with the newlines and tabs, however it just prints out as-is, with the literal '\n' and '\t'. What are some possible solutions to this?
Try using the printf() function. From checking the documentation they appear to be the same function; I couldn't find any differences.
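If the file literally stores the two-character sequences \ + n and \ + t (rather than real newline and tab bytes), they will need to be expanded at run time before printing, since the compiler only translates escape sequences inside string literals. A minimal sketch of doing that yourself, assuming only \n and \t need handling:

#include <string>

// Replace the literal two-character sequences "\n" and "\t" with real
// newline and tab characters so that printw()/printf() render them.
std::string expand_escapes(const std::string& in)
{
    std::string out;
    for (std::size_t i = 0; i < in.size(); ++i)
    {
        if (in[i] == '\\' && i + 1 < in.size() && (in[i + 1] == 'n' || in[i + 1] == 't'))
        {
            out += (in[i + 1] == 'n') ? '\n' : '\t';
            ++i;                                    // skip the escaped character
        }
        else
            out += in[i];
    }
    return out;
}

Then print the result, e.g. printw("%s", expand_escapes(line).c_str());.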
I'm working on a crude console-based text editor in C++, kind of like nano. I've already figured out a basic way of inputting multiple lines of text and writing them to a file correctly (input ends when the user enters a code: //end). However, at the moment the user is unable to move upwards (using arrow keys) and edit lines that they have already entered. For some additional information, I'm doing this with a getline() loop, writing files with ofstream, and storing the user's text in a string vector with each element being an entered line. How might I implement the ability to work with a body of text in such a way?
For advanced use, you need access to the console API.
For a simpler version, look at the primitive visual editors.
Editing a line consists of moving to the line, printing out the content, and then letting them insert or delete characters on that line.
Look at sed or ed or even vi.
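As a line-oriented sketch of that idea (in the spirit of ed), assuming the text lives in a std::vector<std::string> as you describe: the commands below (p, e <n>, //end) are made up for illustration, and real arrow-key movement would need the console API or a library such as ncurses.

#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main()
{
    std::vector<std::string> lines;
    std::string input;

    std::cout << "Enter text; 'p' prints the buffer, 'e <n>' replaces line n, '//end' finishes.\n";
    while (std::getline(std::cin, input))
    {
        if (input == "//end")
            break;
        if (input == "p")                        // print the buffer with line numbers
        {
            for (std::size_t i = 0; i < lines.size(); ++i)
                std::cout << i + 1 << ": " << lines[i] << '\n';
        }
        else if (input.rfind("e ", 0) == 0)      // edit an existing line
        {
            std::size_t n = 0;
            std::istringstream(input.substr(2)) >> n;
            if (n >= 1 && n <= lines.size())
            {
                std::cout << n << "> ";
                std::getline(std::cin, lines[n - 1]);
            }
        }
        else
            lines.push_back(input);              // ordinary input appends a line
    }
    // The vector can then be written to a file with ofstream as before.
}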
I need to create a program that reads strings from two different files and writes these strings to a new file. The thing is, it must alternate between the two files, meaning it should write a line from one file, then a line from the other, and so on.
I'm having a problem with my code: it writes the first line of the first file, and then it writes all the lines from the second file.
Does anyone know how to solve this problem?
do {
getline(archivo1, sLinea);
archivoS << sLinea << endl;
getline(archivo2, sLinea2);
archivoS << sLinea2 << endl;
} while (!archivo1.eof() && !archivo2.eof());
The code looks correct and should work under normal circumstances. This might be a problem with the encoding of the second file, where the newline characters are not being recognised as such on your platform, which could result in the entire second file being interpreted as a single line by the C++ standard library.
Windows (CR+LF), Unix/Linux (LF), and Mac (CR) each have different conventions for newlines. Search about the carriage return and line feed characters across platforms to learn more about this topic.
To check whether this is the issue, try running the code on two separate copies of the first file and see if it produces the expected output.
If newline encoding is your issue, you will either need to convert the second file to use your platform's newline encoding (you can use a tool like Notepad++ to easily do this) or incorporate logic which controls for this into your program.
Check your second file. In all likelihood it does not contain the line delimiter "\n" after each line; there may be only one at the very end.
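Whichever of these it turns out to be, a more robust version of the copy loop checks the result of each getline() call directly instead of testing eof() afterwards. A minimal sketch (the file names are placeholders, and this version keeps copying from whichever file still has lines left):

#include <fstream>
#include <string>

int main()
{
    std::ifstream archivo1("a.txt"), archivo2("b.txt");   // placeholder names
    std::ofstream archivoS("salida.txt");
    std::string sLinea, sLinea2;

    while (true)
    {
        bool got1 = static_cast<bool>(std::getline(archivo1, sLinea));
        bool got2 = static_cast<bool>(std::getline(archivo2, sLinea2));
        if (!got1 && !got2)
            break;                           // both files exhausted
        if (got1) archivoS << sLinea  << '\n';
        if (got2) archivoS << sLinea2 << '\n';
    }
}

If the two files use different newline conventions, you may also want to strip a trailing '\r' from each line after getline().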
I have a data file that I am trying to read in, and the data is split into sections by a blank line. The data will be read in from a text file.
How do I make my code skip a blank line to read in the next piece of data? I am currently just in the planning stages of my application.
I'm a beginner so I'm not really sure how to go about this.
Can anyone advise a method on how to approach this?
I have just written it out and my code looks like this:
string ship2_id;
char ship2_journey_id[20];
float ship2_l;
int ship2_s;
getline(itinerary_file, ship2_id);
if (ship2_id == "")   // the line just read is blank
{
    itinerary_file.ignore(numeric_limits<streamsize>::max(), '\n');
}
else
    getline(itinerary_file, ship2_id);
cout << ship2_id << endl;
Yes,
stream.ignore(max_number_of_chars_to_be_skipped, '\n');
I usually just use 1ul<<30 or similar for the first parameter, but:
- this could be a DoS vector if the input is untrusted, since skipping that many characters can be slow
- the "pedantic" value would read std::numeric_limits<std::streamsize>::max() or similar
I don't know what you are using to read the file, but to search for a blank line, look for two "line breaks" together. Take into account that the line-break character is different on some operating systems. On Windows, by default, two characters are used together for a line break.
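For example, when reading with getline(), a minimal sketch of skipping blank (or CR-only) separator lines might look like this; the stream and file names are placeholders:

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::ifstream itinerary_file("itinerary.txt");     // placeholder file name
    std::string line;

    while (std::getline(itinerary_file, line))
    {
        if (!line.empty() && line.back() == '\r')      // tolerate Windows CR+LF endings
            line.pop_back();
        if (line.empty())                              // blank line: section separator, skip it
            continue;
        std::cout << "data: " << line << '\n';         // otherwise process the line
    }
}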
I'm trying to write a program that takes input data from two files for a season of some sport (e.g. football) and writes an output listing the rankings each week. In the input file with the scores for each game, every week is separated by a line of '-' characters. I have an if/else set up where the program peeks at the first character of each line. If it sees a character other than '-', it reads normally. However, when it reads '-', the program begins the output cycle.
The thing is, since this uses peek, I need to figure out how to get to the next line without creating new input and without causing a crash. All I can think of is using inStream.find( !'-' ); or inStream.seekg( !'-' );. Are there any other options I can use?
Also, for reference, the code is listed here: https://coderpad.io/475356. Look for line 80 for the problem area. Just don't make any edits please.
Thank you for your time.
P.S.: If anyone can find any other crashes, though, feel free to mention it.
How about just using ignore() to skip the line?
inStream.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
Make sure you have:
#include <limits>
If you prefer not to write std::, just put using std::numeric_limits; and using std::streamsize; at the top of your file, and then drop std:: from the expression above.
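Putting it together with peek(), the read loop might look something like this sketch (the stream name, file name, and output are placeholders for your own processing):

#include <fstream>
#include <iostream>
#include <limits>
#include <string>

int main()
{
    std::ifstream inStream("scores.txt");    // placeholder file name
    std::string line;

    while (inStream.peek() != EOF)
    {
        if (inStream.peek() == '-')
        {
            // Separator line: skip it entirely, then start the output cycle.
            inStream.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
            std::cout << "-- end of week, print rankings here --\n";
        }
        else
        {
            std::getline(inStream, line);    // ordinary score line
            std::cout << "game: " << line << '\n';
        }
    }
}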
I am writing a C++ program which reads lines of text from a .txt file. Unfortunately the text file is generated by a twenty-something year old UNIX program and it contains a lot of bizarre formatting characters.
The first few lines of the file are plain, English text and these are read with no problems. However, whenever a line contains one or more of these strange characters mixed in with the text, that entire line is read as characters and the data is lost.
The really confusing part is that if I manually delete the first couple of lines, so that the very first character in the file is one of these unusual characters, then everything in the file is read perfectly. The unusual characters just display as little ASCII squiggles (arrows, smiley faces, etc.), which is fine. It seems as though a decision is being made automatically, without my knowledge or consent, based on the first line read.
Based on some googling, I suspected that the issue might be with the locale, but according to the visual studio debugger, the locale property of the ifstream object is "C" in both scenarios.
The code which reads the data is as follows:
//Function to open file at location specified by inFilePath, load and process data
int OpenFile(const char* inFilePath)
{
string line;
ifstream codeFile;
//open text file
codeFile.open(inFilePath,ios::in);
//read file line by line; testing the stream after getline() avoids processing a stale line at EOF
while ( getline(codeFile,line) )
{
    //check non-zero length
    if (line != "")
        ProcessLine(&line[0]);
}
//close file
codeFile.close();
return 1;
}
If anyone has any suggestions as to what might be going on or how to fix it, they would be very welcome.
From reading about your issues it sounds like you are reading in binary data, which will cause getline() to throw out content or simply skip over the line.
You have a couple of choices:
1. If you simply need lines from the data file, you can first sanitise them by removing all non-printable characters (that is the "official" name for those weird ASCII characters). On UNIX a tool such as strings would help you with that process. You can of course also do this programmatically in your code, by simply reading in X amount of data, storing it in a string, and then removing those characters that fall outside the standard ASCII character range. This will most likely cause you to lose any Unicode that may be stored in the file.
2. You change your program to understand the format and basically write a parser that allows you to process the document in a more sane way.
If you can, I would suggest trying solution number 1, simply to see if the results are sane and can still be used. You mention that this is medical data; do you perchance know what file format this is? If you are trying to find out and have access to a UNIX/Linux machine, you can use the utility file and maybe it can give you a clue (worst case it will tell you it is simply data).
If possible, try getting a "clean" file that you can post the hex dump of, so that we can try to provide better help than what we are currently providing. By clean I mean a file that contains no personally identifying information.
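A programmatic version of option 1 could look like the sketch below; it assumes anything outside printable ASCII (plus newlines and tabs) can simply be dropped, which, as noted, will also discard any Unicode in the file:

#include <fstream>
#include <string>

// Read the whole file and keep only printable ASCII, '\n' and '\t',
// so the line structure survives while the strange bytes disappear.
std::string sanitise(const char* path)
{
    std::ifstream in(path, std::ios::in | std::ios::binary);
    std::string clean;
    char c;
    while (in.get(c))
    {
        if ((c >= 0x20 && c < 0x7F) || c == '\n' || c == '\t')
            clean += c;
    }
    return clean;
}

The cleaned string can then be split on '\n' and each piece passed to ProcessLine() as before.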
For number 2, open the file in binary mode. You mentioned using Windows; binary and non-binary files are handled differently by std::fstream objects there, whereas on UNIX systems this is not the case (on most systems, anyway; I'm sure I'll get a comment regarding the one system that doesn't match this description).
codeFile.open(inFilePath,ios::in);
would become
codeFile.open(inFilePath, ios::in | ios::binary);
Instead of getline() you will want to become intimately familiar with .read() which will allow unformatted operations on the ifstream.
Reading will be like this:
// This code has not been tested!
char input[1024];
codeFile.read(input, 1024);
int actual_read = codeFile.gcount();
// Here you can process input, up to a maximum of actual_read characters.
//ProcessLine() // We didn't necessarily read a line!
ProcessData(input, actual_read);
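If the file is larger than one buffer, the same read()/gcount() pattern can be repeated in a loop; a small untested sketch (the file name and ProcessData() are placeholders):

#include <fstream>

int main()
{
    std::ifstream codeFile("input.dat", std::ios::in | std::ios::binary);   // placeholder name
    char input[1024];

    while (codeFile)
    {
        codeFile.read(input, sizeof input);                  // fills as much of the buffer as it can
        std::streamsize actual_read = codeFile.gcount();     // how many bytes were actually read
        if (actual_read == 0)
            break;
        // ProcessData(input, actual_read);                  // handle this chunk
    }
}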
The other thing, as mentioned, is that you can change the locale for the current stream so that the separator it considers a newline changes; maybe this will fix your issue without requiring the unformatted operations:
imbue the stream with a new locale that only knows about the newline. This method may or may not let getline() function without issues.