I'm having some trouble replacing a portion of a file in binary mode. For some reason my seekp() line is not placing the file pointer at the desired position. Right now it's appending the new contents to the end of the file instead of replacing the desired portion.
long int pos;
bool found = false;
fstream file(fileName, ios::binary|ios::out|ios::in);
file.read(reinterpret_cast<char *>(&record), sizeof(Person));
while (!file.eof())
{
    if (record.getNumber() == number) {
        pos = file.tellg();
        found = true;
        break;
    }
}
// the record object is updated here
file.seekp(pos, ios::beg); // this is not placing the file pointer at the desired place
file.write(reinterpret_cast<const char *>(&record), sizeof(Person));
cout << "Record updated." << endl;
file.close();
Am I doing something wrong?
Thanks a lot in advance.
I don't see how your while() loop can work. In general, you should not test for eof() but instead test if a read operation worked.
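For example (a minimal sketch, reusing the record and Person names from your code), you can let the read itself control the loop:

while (file.read(reinterpret_cast<char *>(&record), sizeof(Person)))
{
    // examine record here; the loop stops as soon as a read fails
}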
The following code writes a record to a file (which must exist) and then overwrites it:
#include <iostream>
#include <fstream>

using namespace std;

struct P {
    int n;
};

int main() {
    fstream file( "afile.dat" , ios::binary|ios::out|ios::in);
    P p;
    p.n = 1;
    file.write( (char*)&p, sizeof(p) );
    p.n = 2;
    int pos = 0;
    file.seekp(pos, ios::beg);
    file.write( (char*)&p, sizeof(p) );
}
while (!file.eof())
{
    if (record.getNumber() == number) {
        pos = file.tellg();
        found = true;
        break;
    }
Here you're not updating number or record, so basically you go through the whole file and then write at "some" location (pos isn't initialized).
And Neil Butterworth is right (he posted while I typed); it looks like you omitted something.
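For illustration, here is a sketch (untested) of how the search-and-overwrite could look once the loop actually reads each record; it assumes the file, record, number and Person names from the question:

long pos = -1;
while (file.read(reinterpret_cast<char *>(&record), sizeof(Person)))
{
    if (record.getNumber() == number) {
        // tellg() is now just past the matching record, so step back one record
        pos = static_cast<long>(file.tellg()) - static_cast<long>(sizeof(Person));
        break;
    }
}
if (pos >= 0) {
    // ... update record here ...
    file.seekp(pos, ios::beg);
    file.write(reinterpret_cast<const char *>(&record), sizeof(Person));
}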
When I read from a file string by string, the >> operator gets the first string, but it starts with "i". Assume the first string is "street"; then it comes out as "istreet".
The other strings are okay. I tried different txt files and the result is the same: the first string starts with "i". What is the problem?
Here is my code:
#include <iostream>
#include <fstream>
#include <string>
#include <vector>
using namespace std;
int cube(int x){ return (x*x*x); }

int main(){
    int maxChar;
    int lineLength = 0;
    int cost = 0;
    cout << "Enter the max char per line... : ";
    cin >> maxChar;
    cout << endl << "Max char per line is : " << maxChar << endl;
    fstream inFile("bla.txt", ios::in);
    if (!inFile) {
        cerr << "Unable to open file datafile.txt";
        exit(1); // call system to stop
    }
    while(!inFile.eof()) {
        string word;
        inFile >> word;
        cout << word << endl;
        cout << word.length() << endl;
        if(word.length() + lineLength <= maxChar){
            lineLength += (word.length() + 1);
        }
        else {
            cost += cube(maxChar - (lineLength - 1));
            lineLength = (word.length() + 1);
        }
    }
}
You're seeing a UTF-8 Byte Order Mark (BOM). It was added by the application that created the file.
To detect and ignore the marker you could try this (untested) function:
bool SkipBOM(std::istream & in)
{
    char test[4] = {0};
    in.read(test, 3);
    if (strcmp(test, "\xEF\xBB\xBF") == 0)
        return true;
    in.seekg(0);
    return false;
}
With reference to the excellent answer by Mark Ransom above, adding this code skips the BOM (Byte Order Mark) on an existing stream. Call it after opening a file.
// Skips the Byte Order Mark (BOM) that defines UTF-8 in some text files.
void SkipBOM(std::ifstream &in)
{
    char test[3] = {0};
    in.read(test, 3);
    if ((unsigned char)test[0] == 0xEF &&
        (unsigned char)test[1] == 0xBB &&
        (unsigned char)test[2] == 0xBF)
    {
        return;
    }
    in.seekg(0);
}
To use:
ifstream in(path);
SkipBOM(in);
string line;
while (getline(in, line))
{
    // Process lines of input here.
}
Here are another two ideas.
If you are the one who creates the files, save their length along with them; when reading them back, just cut the prefix with this simple calculation: trueFileLength - savedFileLength = numOfBytesToCut (see the sketch below).
Create your own prefix when saving the files, and when reading, search for it and discard everything found before it.
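A rough sketch of the first idea (assuming you control the writer and store the payload length somewhere yourself; the ReadPayload name and file layout are made up for illustration):

#include <fstream>
#include <string>

// Sketch: the reader skips whatever prefix (BOM or otherwise) precedes the payload,
// using the length that was saved when the file was written.
std::string ReadPayload(const std::string& path, std::size_t savedLength)
{
    std::ifstream in(path, std::ios::binary);
    in.seekg(0, std::ios::end);
    std::size_t trueLength = static_cast<std::size_t>(in.tellg());
    std::size_t bytesToCut = trueLength - savedLength; // size of the prefix
    in.seekg(bytesToCut, std::ios::beg);               // skip the prefix
    std::string payload(savedLength, '\0');
    in.read(&payload[0], static_cast<std::streamsize>(savedLength));
    return payload;
}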
I'm having trouble reading a number list from a .txt file into a dynamic array of type double. The first number in the list is the number of numbers to add to the array. After the first number, the numbers in the list all have decimals.
My header file:
#include <iostream>
#ifndef SORT
#define SORT
class Sort{
private:
double i;
double* darray; // da array
double j;
double size;
public:
Sort();
~Sort();
std::string getFileName(int, char**);
bool checkFileName(std::string);
void letsDoIt(std::string);
void getArray(std::string);
};
#endif
main.cpp:
#include <stdio.h>
#include <stdlib.h>
#include "main.h"

int main(int argc, char** argv)
{
    Sort sort;
    std::string cheese = sort.getFileName(argc, argv); //cheese is the file name
    bool ean = sort.checkFileName(cheese); //pass in file name fo' da check
    sort.letsDoIt(cheese); //starts the whole thing up
    return 0;
}
impl.cpp:
#include <iostream>
#include <fstream>
#include <cstring>
#include <stdlib.h>
#include "main.h"

Sort::Sort(){
    darray[0];
    i = 0;
    j = 0;
    size = 0;
}

Sort::~Sort(){
    std::cout << "Destroyed" << std::endl;
}

std::string Sort::getFileName(int argc, char* argv[]){
    std::string fileIn = "";
    for(int i = 1; i < argc;) //argc is the number of arguments
    {
        fileIn += argv[i]; //argv is the array of arguments
        if(++i != argc)
            fileIn += " ";
    }
    return fileIn;
}

bool Sort::checkFileName(std::string userFile){
    if(userFile.empty()){
        std::cout << "No user input" << std::endl;
        return false;
    }
    else{
        std::ifstream tryread(userFile.c_str());
        if (tryread.is_open()){
            tryread.close();
            return true;
        }
        else{
            return false;
        }
    }
}

void Sort::letsDoIt(std::string file){
    getArray(file);
}

void Sort::getArray(std::string file){
    double n = 0;
    int count = 0;
    // create a file-reading object
    std::ifstream fin;
    fin.open(file.c_str()); // open a file
    fin >> n; //first line of the file is the number of numbers to collect to the array
    size = n;
    std::cout << "size: " << size << std::endl;
    darray = (double*)malloc(n * sizeof(double)); //allocate storage for the array
    // read each line of the file
    while (!fin.eof())
    {
        fin >> n;
        if (count == 0){ //if count is 0, don't add to array
            count++;
            std::cout << "count++" << std::endl;
        }
        else {
            darray[count - 1] = n; //array = line from file
            count++;
        }
        std::cout << std::endl;
    }
    free((void*) darray);
}
I have to use malloc, but I think I may be using it incorrectly. I've read other posts but I am still having trouble understanding what is going on.
Thanks for the help!
Your use of malloc() is fine. Your reading is not doing what you want it to do.
Say I have the input file:
3
1.2
2.3
3.7
My array would be:
[0]: 2.3
[1]: 3.7
[2]: 0
This is because you are reading in the value 1.2 as if you were rereading the number of values.
When you have this line:
fin >> n; //first line of the file is the number of numbers to collect to the array
You are reading in the count, in this case 3, and advancing where in the file you will read from next. You are then attempting to reread that value but are getting the first entry instead.
I believe that replacing your while() {...} with the code below will do what you are looking for.
while (count != size && fin >> n)
{
    darray[count++] = n; //array = line from file
    std::cout << n << std::endl;
}
This should give you the correct values in the array:
[0]: 1.2
[1]: 2.3
[2]: 3.7
You appear to be writing the next exploitable program. You are mistakenly trusting the first line of the file to determine your buffer size, then reading an unlimited amount of data from the remainder of the file into a buffer that is not unlimited. This allows an evil input file to trash some other memory in your program, possibly allowing the creator of that file to take control of your computer. Oh noes!
Here's what you need to do to fix it:
Remember how much memory you allocated (you'll need it in step #2). Have a variable alleged_size or array_length that is separate from the one you use to read the rest of the data.
Don't allow count to run past the end of the array. Your loop should look more like this:
while ((count < alleged_size) && (cin >> n))
This both prevents array overrun and decides whether to process data based on whether it was parsed successfully, not whether you reached the end-of-file at some useless point in the past.
The less problematic bug is the one #bentank noticed: you didn't realize that your position in the file is kept (it is already past the first line), so you shouldn't expect to hit that line again within the loop.
In addition to this, you probably want to deallocate the memory in your destructor. Right now you throw the data away immediately after parsing it. Wouldn't other functions like to party on that data too?
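A minimal sketch of what getArray and the destructor could look like with those fixes, keeping the malloc/free requirement (the alleged_size name follows the advice above; everything else reuses the class members from the question):

void Sort::getArray(std::string file){
    std::ifstream fin(file.c_str());
    int alleged_size = 0;
    fin >> alleged_size; // first value: how many numbers are supposed to follow
    darray = (double*)malloc(alleged_size * sizeof(double));
    int count = 0;
    double n = 0;
    while (count < alleged_size && fin >> n) // stop at the claimed size or on a failed read
        darray[count++] = n;
    size = count; // remember how many values were actually stored
}

Sort::~Sort(){
    free(darray); // safe as long as darray is set (or initialized to NULL) before destruction
    std::cout << "Destroyed" << std::endl;
}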
I am trying to build a program that copies text from one .txt file to another and then takes the first letter of each word in the text and switches it to an uppercase letter. So far, I have only managed to copy the text with no luck or idea on the uppercase part. Any tips or help would be greatly appreciated. This is what I have so far:
#include <fstream>
#include <cstdlib>

int main()
{
    std::ifstream fin("source.txt");
    std::ofstream fout("target.txt");
    fout << fin.rdbuf(); //sends the text string to the file "target.txt"
    system("pause");
    return 0;
}
Try this: read the file content into a string, process it, and then write it to the target file.
#include <fstream>
#include <sstream>
#include <string>
#include <locale>

using namespace std;

int main()
{
    std::ifstream fin("source.txt");
    std::ofstream fout("target.txt");
    // get pointer to associated buffer object
    std::filebuf* pbuf = fin.rdbuf();
    // get file size using buffer's members
    std::size_t size = pbuf->pubseekoff(0, fin.end, fin.in);
    pbuf->pubseekpos(0, fin.in);
    // allocate memory to contain file data
    char* buffer = new char[size];
    // get file data
    pbuf->sgetn(buffer, size);
    fin.close();
    locale loc;
    string fileBuffer(buffer, size); // construct with explicit length; buffer is not null-terminated
    delete[] buffer;
    stringstream ss;
    for (std::string::size_type i = 0; i < fileBuffer.length(); ++i){
        if (i == 0)
            ss << toupper(fileBuffer[i], loc);
        else if (isspace(fileBuffer[i], loc) && i + 1 < fileBuffer.length())
            ss << fileBuffer[i] << toupper(fileBuffer[++i], loc);
        else
            ss << fileBuffer[i];
    }
    string outString = ss.str();
    fout << outString;
    fout.close();
}
Instead of copying the entire file at once, you'll need to read part or all of it into a local "buffer" variable - perhaps using while (getline(in, my_string)), then you can simply iterate along the string capitalising letters that are either in position 0 or preceded by a non-letter (you can use std::isalpha and std::toupper), then stream the string to out. If you have a go at that and get stuck, append your new code to the question and someone's sure to help you out....
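For reference, a minimal (untested) sketch of that approach; it reads line by line, so it normalises line endings, and the in/out stream and file names are placeholders:

#include <fstream>
#include <string>
#include <cctype>

int main()
{
    std::ifstream in("source.txt");
    std::ofstream out("target.txt");
    std::string line;
    while (std::getline(in, line))
    {
        for (std::string::size_type i = 0; i < line.size(); ++i)
        {
            // capitalise a letter at the start of the line or right after a non-letter
            unsigned char c = static_cast<unsigned char>(line[i]);
            bool startOfWord = (i == 0) ||
                !std::isalpha(static_cast<unsigned char>(line[i - 1]));
            if (startOfWord && std::isalpha(c))
                line[i] = static_cast<char>(std::toupper(c));
        }
        out << line << '\n';
    }
}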
I think copying the whole file at once is not going to let you edit it. You can use get() and put() to process the file one character at a time, then figure out how to detect the start of a word and make it uppercase:
Something like this:
#include <fstream>

int main()
{
    std::ifstream fin("source.txt");
    std::ofstream fout("target.txt");
    char c;
    while(fin.get(c))
    {
        // figure out which chars are the start
        // of words (previous char was a space)
        // and then use std::toupper(c)
        fout.put(c);
    }
}
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <stdlib.h>

int main() {
    FILE* fpin;
    FILE* fpout;
    int counter = 0;
    int currentCharacter;           /* int, not char, so EOF can be detected reliably */
    int previousCharacter = ' ';

    fpin = fopen("source.txt", "r"); /* open for reading */
    if (fpin == NULL)
    {
        printf("Fail to open source.txt!\n");
        return 1;
    }
    fpout = fopen("target.txt", "w"); /* open for writing */
    if (fpout == NULL)
    {
        printf("Fail to open target.txt!\n");
        return 1;
    }
    /* read a character from source.txt until EOF */
    while((currentCharacter = fgetc(fpin)) != EOF)
    {
        /* find first letter of word */
        if(!isalpha(previousCharacter) && previousCharacter != '-' && isalpha(currentCharacter))
        {
            currentCharacter = toupper(currentCharacter); /* lowercase to uppercase */
            counter++; /* count number of words */
        }
        fputc(currentCharacter, fpout); /* put a character to target.txt */
        /* printf("%c",currentCharacter); */
        previousCharacter = currentCharacter; /* remember previous character */
    }
    printf("\nNumber of words = %d\n", counter);
    fclose(fpin); /* close source.txt */
    fclose(fpout); /* close target.txt */
    return 0;
}
I have seen many posts but didn't find anything like what I want.
I am getting the wrong output:
ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ...... // maybe this is the EOF character
It is going into an infinite loop.
My algorithm:
1. Go to the end of the file.
2. Decrease the position of the pointer by 1 and read character by character.
3. Exit if we have found our 10 lines or we reach the beginning of the file.
4. Now I will scan the full file till EOF and print the lines. // not implemented in the code
code:
#include<iostream>
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
#include<string.h>
using namespace std;
int main()
{
FILE *f1=fopen("input.txt","r");
FILE *f2=fopen("output.txt","w");
int i,j,pos;
int count=0;
char ch;
int begin=ftell(f1);
// GO TO END OF FILE
fseek(f1,0,SEEK_END);
int end = ftell(f1);
pos=ftell(f1);
while(count<10)
{
pos=ftell(f1);
// FILE IS LESS THAN 10 LINES
if(pos<begin)
break;
ch=fgetc(f1);
if(ch=='\n')
count++;
fputc(ch,f2);
fseek(f1,pos-1,end);
}
return 0;
}
UPD 1:
Changed code: it has just one error now. If the input has lines like
3enil
2enil
1enil
it prints 10 lines only
line1
line2
line3ÿine1
line2
line3ÿine1
line2
line3ÿine1
line2
line3ÿine1
line2
PS:
1. Working on Windows in Notepad++.
2. This is not homework.
3. Also, I want to do it without using any extra memory or the STL.
4. I am practicing to improve my basic knowledge, so please don't post about ready-made utilities (like tail -5, etc.).
5. Please help me improve my code.
Comments are in the code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in, *out;
    int count = 0;
    long int pos;
    char s[100];

    in = fopen("input.txt", "r");
    /* always check return of fopen */
    if (in == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }
    out = fopen("output.txt", "w");
    if (out == NULL) {
        perror("fopen");
        exit(EXIT_FAILURE);
    }
    fseek(in, 0, SEEK_END);
    pos = ftell(in);
    /* Don't write each char on output.txt, just search for '\n' */
    while (pos) {
        fseek(in, --pos, SEEK_SET); /* seek from begin */
        if (fgetc(in) == '\n') {
            if (count++ == 10) break;
        }
    }
    /* Write line by line, is faster than fputc for each char */
    while (fgets(s, sizeof(s), in) != NULL) {
        fprintf(out, "%s", s);
    }
    fclose(in);
    fclose(out);
    return 0;
}
There are a number of problems with your code. The most important one is that you never check that any of the functions succeeded. And saving the result of ftell in an int isn't a very good idea either. Then there's the test pos < begin; this can only occur if there was an error. And the fact that you're putting the result of fgetc in a char (which results in a loss of information). And the fact that the first read you do is at the end of file, so it will fail (and once a stream enters an error state, it stays there). And the fact that you can't reliably do arithmetic on the values returned by ftell (except under Unix) if the file was opened in text mode.

Oh, and there is no "EOF character"; 'ÿ' is a perfectly valid character (0xFF in Latin-1). Once you assign the return value of fgetc to a char, you've lost any possibility to test for end of file.

I might add that reading backwards one character at a time is extremely inefficient. The usual solution would be to allocate a sufficiently large buffer, then count the '\n' in it.
EDIT:
Just a quick bit of code to give the idea:
#include <algorithm>
#include <fstream>
#include <string>
#include <vector>

std::string
getLastLines( std::string const& filename, int lineCount )
{
    size_t const granularity = 100 * lineCount;
    std::ifstream source( filename.c_str(), std::ios_base::binary );
    source.seekg( 0, std::ios_base::end );
    size_t size = static_cast<size_t>( source.tellg() );
    std::vector<char> buffer;
    int newlineCount = 0;
    while ( source
            && buffer.size() != size
            && newlineCount < lineCount ) {
        buffer.resize( std::min( buffer.size() + granularity, size ) );
        source.seekg( -static_cast<std::streamoff>( buffer.size() ),
                      std::ios_base::end );
        source.read( buffer.data(), buffer.size() );
        newlineCount = std::count( buffer.begin(), buffer.end(), '\n' );
    }
    std::vector<char>::iterator start = buffer.begin();
    while ( newlineCount > lineCount ) {
        start = std::find( start, buffer.end(), '\n' ) + 1;
        --newlineCount;
    }
    std::vector<char>::iterator end = std::remove( start, buffer.end(), '\r' );
    return std::string( start, end );
}
This is a bit weak in the error handling; in particular, you probably want to distinguish between the inability to open a file and any other errors. (No other errors should occur, but you never know.)

Also, this is purely Windows, and it supposes that the actual file contains pure text and doesn't contain any '\r' that aren't part of a CRLF. (For Unix, just drop the next to the last line.)
This can be done very efficiently using a circular array.
No additional buffer is required.
void printlast_n_lines(char* fileName, int n){

    const int k = n;
    ifstream file(fileName);
    string l[k];
    int size = 0;

    while(file.good()){
        getline(file, l[size%k]); // this is just a circular array
        cout << l[size%k] << '\n';
        size++;
    }

    // start of the circular array & its size
    int start = size > k ? (size%k) : 0; // this gets the start of the last k lines
    int count = min(k, size);            // number of lines to print

    for(int i = 0; i < count; i++){
        cout << l[(start+i)%k] << '\n'; // start in the middle and wrap around until all count lines are covered
    }
}
Please provide feedback.
int end = ftell(f1);
pos=ftell(f1);
This tells you the last position in the file, i.e. EOF.
When you read there, you get the EOF error, and the pointer wants to move one space forward...
So I recommend decreasing the current position by one.
Or put fseek(f1, -2, SEEK_CUR) at the beginning of the while loop to make up for the read by one position and go one position back...
I believe you are using fseek wrong. Check man fseek on Google.
Try this:
fseek(f1, -2, SEEK_CUR);
// 1 to neutralize the change from fgetc
// and 1 to move backward
Also, at the beginning you should set the position to the last element:
fseek(f1, -1, SEEK_END).
You don't need the end variable.
You should check the return values of all functions (fgetc, fseek and ftell); it is good practice. I don't know whether this code will work with empty files or similar edge cases.
Use fseek(f1, -2, SEEK_CUR); to move back.
I wrote this code; it works, you can try it:
#include "stdio.h"
int main()
{
int count = 0;
char * fileName = "count.c";
char * outFileName = "out11.txt";
FILE * fpIn;
FILE * fpOut;
if((fpIn = fopen(fileName,"r")) == NULL )
printf(" file %s open error\n",fileName);
if((fpOut = fopen(outFileName,"w")) == NULL )
printf(" file %s open error\n",outFileName);
fseek(fpIn,0,SEEK_END);
while(count < 10)
{
fseek(fpIn,-2,SEEK_CUR);
if(ftell(fpIn)<0L)
break;
char now = fgetc(fpIn);
printf("%c",now);
fputc(now,fpOut);
if(now == '\n')
++count;
}
fclose(fpIn);
fclose(fpOut);
}
I would use two streams to print the last n lines of the file:
This runs in O(lines) runtime and O(lines) space.
#include <bits/stdc++.h>
using namespace std;

int main(){
    // read the last n lines of a file
    ifstream f("file.in");
    ifstream g("file.in");

    // move the f stream n lines down.
    int n;
    cin >> n;
    string line;
    for(int i = 0; i < n; ++i) getline(f, line);

    // move the f and g streams at the same pace.
    for(; getline(f, line); ){
        getline(g, line);
    }

    // g now has only the last n lines left to read.
    for(; getline(g, line); )
        cout << line << endl;
}
A solution with O(lines) runtime and O(N) space uses a queue:
ifstream fin("file.in");
int k;
cin >> k;
queue<string> Q;
string line;
for(; getline(fin, line); ){
if(Q.size() == k){
Q.pop();
}
Q.push(line);
}
while(!Q.empty()){
cout << Q.front() << endl;
Q.pop();
}
Here is the solution in C++.
#include <iostream>
#include <string>
#include <exception>
#include <cstdlib>

int main(int argc, char *argv[])
{
    auto& file = std::cin;
    int n = 5;
    if (argc > 1) {
        try {
            n = std::stoi(argv[1]);
        } catch (std::exception& e) {
            std::cout << "Error: argument must be an int" << std::endl;
            std::exit(EXIT_FAILURE);
        }
    }

    file.seekg(0, file.end);

    n = n + 1; // Add one so the loop stops at the newline above
    while (file.tellg() != 0 && n) {
        file.seekg(-1, file.cur);
        if (file.peek() == '\n')
            n--;
    }

    if (file.peek() == '\n') // If we stop in the middle we will be at a newline
        file.seekg(1, file.cur);

    std::string line;
    while (std::getline(file, line))
        std::cout << line << std::endl;

    std::exit(EXIT_SUCCESS);
}
Build:
$ g++ <SOURCE_NAME> -o last_n_lines
Run:
$ ./last_n_lines 10 < <SOME_FILE>