I am attempting to compile and run a test C program in Xcode. The program reads 5 characters from a text file and then closes it. It builds successfully, but when I run it, GDB reports Program received signal: "EXC_BAD_ACCESS" around fclose(in).
#include <iostream>
#include <unistd.h>

int main (int argc, const char * argv[])
{
    bool b;
    char inpath[PATH_MAX];
    printf("Enter input file path :\r\n");
    std::cin >> inpath;
    FILE *in = fopen(inpath, "r+w");
    char buf[5];
    fread(&buf,sizeof(buf),5,in);
    printf(buf);
    fclose(in);
    return 0;
}
What could be a cause of this?
Ah! sizeof(buf) will return 5, so you're asking fread for 5 elements of 5 bytes each, i.e. 25 bytes, into a 5-byte buffer. This overwrites automatic storage and clobbers in.
And, of course, note that printf(buf) will attempt to print a buffer with no terminating null, so it will print garbage beyond the end of what was read.
The line
fread(&buf,sizeof(buf),5,in);
is wrong: read the man page of fread carefully (and remember that sizeof(buf) is the size of the whole buf array, not the element size).
The line
printf(buf);
is wrong. Behavior is undefined if, for instance, buf contains %d.
You definitely should learn to use the debugger (and enable all warnings with your compiler).
fread(&buf,sizeof(buf),5,in);
this says that you want to read buf's worth of data 5 times over, which is not what you intend.
The second and third parameters tell fread the size of each element you want to read and the number of elements.
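Putting the answers together, here is a minimal corrected sketch of the program above; the "r" open mode, the std::string for the path, and the explicit null termination are choices of this sketch, not the only fix:
#include <cstdio>
#include <iostream>
#include <string>

int main()
{
    std::string inpath;
    std::printf("Enter input file path:\n");
    std::cin >> inpath;
    std::FILE *in = std::fopen(inpath.c_str(), "r"); // "r+w" is not a valid mode
    if (!in)
        return 1;
    char buf[6];                          // room for 5 characters plus '\0'
    size_t n = std::fread(buf, 1, 5, in); // element size 1, count 5: at most 5 bytes
    buf[n] = '\0';                        // terminate before printing
    std::printf("%s\n", buf);             // never pass raw input as a format string
    std::fclose(in);
    return 0;
}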
Related
I have found the following strange behaviour in Visual Studio 2015 when reading a file into a large array of bytes. The file I load is about 80 MB, so it is certainly large enough.
#include <cstdio>
#include <vector>

int main() {
    std::FILE* file;
    errno_t error = _wfopen_s(&file, L"/User/account/Desktop/file.data", L"r");
    const std::size_t n = 16384;
    std::vector<unsigned char> v(n);
    const std::size_t nb_bytes_read = std::fread(v.data(), sizeof(unsigned char), n, file);
    // At this point error = 0 and nb_bytes_read = 3473
}
So I ask std::fread for 16384 bytes and it gives me just 3473, even though the file is large enough. Should this be considered a bug? The standard does not seem to say so, but this behavior is very weird to me.
Try opening the file in binary mode, "rb", which is probably what you want anyway. Otherwise, on the Windows platform, the byte 0x1A terminates input. Also, line breaks like \r\n will be converted to \n, which may also result in fewer bytes read than requested.
According to this reference, fread() will only return fewer than the requested number of bytes if EOF was reached or an error occurred. You can check for those with feof() and ferror().
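Combining both answers, a minimal sketch (reusing the asker's path, which is taken from the question) that opens in binary mode and distinguishes end-of-file from a read error:
#include <cstdio>
#include <vector>

int main() {
    std::FILE* file;
    if (_wfopen_s(&file, L"/User/account/Desktop/file.data", L"rb") != 0) // "rb": binary mode
        return 1;
    std::vector<unsigned char> v(16384);
    const std::size_t n = std::fread(v.data(), 1, v.size(), file);
    if (n < v.size()) {                  // short read: find out why
        if (std::feof(file))
            std::printf("end of file after %zu bytes\n", n);
        if (std::ferror(file))
            std::printf("read error after %zu bytes\n", n);
    }
    std::fclose(file);
    return 0;
}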
I'm trying to open an exe file, take input from the user, and overwrite existing data of the same length at specific locations. My code does this, but I'm seeing data corruption in other parts of the file. This is my first time with C++; I've tried looking at everything I could to help myself, but I'm at a loss. The only thing I can think of is that it's related to the null terminator at the end of char test1[100]; (if I read the documentation right), but that doesn't tell me how to fix it. See the linked image for a hex-viewer comparison of the output and the original.
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *key;
    key = fopen("Testfile.exe", "r+b");
    char test1[100];
    char test2[100];
    printf("Test data to input:");
    fgets(test1, sizeof test1, stdin);
    printf("Second test data to input:");
    fgets(test2, sizeof test2, stdin);
    fseek(key, 24523, SEEK_SET); // file offset location to begin write
    fwrite(test1, 1, sizeof(test1), key);
    fseek(key, 24582, SEEK_SET); // file offset location to begin write
    fwrite(test2, 1, sizeof(test2), key);
    fseek(key, 24889, SEEK_SET); // file offset location to begin write
    fwrite(test2, 1, sizeof(test2), key);
    fclose(key);
    printf("Finished");
    return (0);
}
After my initial edits, I was still fighting a null terminator being written at the end of my string (which affected the operation of the edited exe). After a bit more reading, this is my final solution; it works as intended without any stray data being written. I used scanf("%10s") so that only the string itself is used, with no null terminator written to the file. Does anyone see anything majorly wrong here, or improvements to be made? Eventually I'd like to add length checking to ensure the user input is the proper length. Thanks for everyone's help.
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *key;
    key = fopen("test.exe", "r+b");
    char test1[10];
    char test2[32];
    printf("Input Test1 data:");
    scanf("%10s", test1); // only read 10 chars
    printf("Input test2 data:");
    scanf("%32s", test2); // only read 32 chars
    fseek(key, 24523, SEEK_SET); // file offset location to begin write
    fputs(test1, key);
    fseek(key, 24582, SEEK_SET); // file offset location to begin write
    fputs(test2, key);
    fseek(key, 24889, SEEK_SET); // file offset location to begin write
    fputs(test2, key);
    fclose(key);
    printf("Finished");
    return (0);
}
It looks like you're trying to write a string into the exe file, but you're actually writing the string padded with garbage values up to a length of 100 bytes.
If you just want to write the string, replace fwrite with fputs.
sizeof(array) gives the allocated size of the array (100 in this case), not the string length. String length is measured with strlen(), which doesn't include the terminating null character.
You have two problems.
First: you're writing 100 byte buffers which have not been initialized except via fgets()... everything not put in there by fgets() is whatever happened to be in memory (on the stack in this case).
Second: you're writing 100 bytes with each write, but your seeks are less than 100 bytes apart, so the second fwrite() in this snippet partially overwrites the first.
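A minimal sketch of both fixes together: strip the newline fgets leaves in the buffer, then write exactly strlen(test1) bytes so no uninitialized tail bytes reach the file (the filename and offset are the asker's):
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *key = fopen("Testfile.exe", "r+b");
    char test1[100];
    if (key && fgets(test1, sizeof test1, stdin) != NULL) {
        test1[strcspn(test1, "\n")] = '\0';   // drop the trailing newline
        fseek(key, 24523, SEEK_SET);
        fwrite(test1, 1, strlen(test1), key); // exactly the typed bytes, no padding
        fclose(key);
    }
    return 0;
}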
I am trying to simulate race conditions in writing to a file. This is what I am doing.
1. Open a.txt in append mode in process1.
2. Write "hello world" in process1.
3. Print process1's ftell, which is 11.
4. Put process1 to sleep.
5. Open a.txt again in append mode in process2.
6. Write "hello world" in process2 (this correctly appends to the end of the file).
7. Print process2's ftell, which is 22 (correct).
8. Write "bye world" in process2 (this correctly appends to the end of the file).
9. process2 quits.
10. process1 resumes and prints its ftell value, which is 11.
11. Write "bye world" in process1: since process1's ftell is 11, I assume this should overwrite the file.
However, the write of process1 is writing to the end of the file and there is no contention in writing between the processes.
I am opening the file with fopen("./a.txt", "a+").
Can anyone tell me why this happens, and how I can simulate a race condition when writing to the file?
The code of process1:
#include <iostream>
#include <fstream>
#include <string>
#include <stdio.h>
#include <time.h>
#include <unistd.h> // for sleep()

using namespace std;

int main()
{
    FILE *f1 = fopen("./a.txt", "a+");
    cout << "opened file1" << endl;
    string data("hello world");
    fwrite(data.c_str(), sizeof(char), data.size(), f1);
    fflush(f1);
    cout << "file1 tell " << ftell(f1) << endl;
    cout << "wrote file1" << endl;
    sleep(3);
    string data1("bye world");
    cout << "wrote file1 end" << endl;
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fwrite(data1.c_str(), sizeof(char), data1.size(), f1);
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fflush(f1);
    return 0;
}
In process2, I have commented out the sleep statement.
I am using the following script to run:
./process1 &
sleep 2
./process2 &
Thanks for your time.
The writer code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCKSIZE 1000000

int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "a+");
    char *block = malloc(BLOCKSIZE);
    if (argc < 2)
    {
        fprintf(stderr, "need argument\n");
        return 1;
    }
    memset(block, argv[1][0], BLOCKSIZE);
    for (int i = 0; i < 3000; i++)
    {
        fwrite(block, sizeof(char), BLOCKSIZE, f);
    }
    fclose(f);
}
The reader function:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCKSIZE 1000000

int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "r");
    int c;
    int oldc = 0;
    int rl = 0;
    while ((c = fgetc(f)) != EOF)
    {
        if (c != oldc)
        {
            if (rl)
            {
                printf("Got %d of %c\n", rl, oldc);
            }
            oldc = c;
            rl = 0;
        }
        rl++;
    }
    if (rl) // report the final run as well
    {
        printf("Got %d of %c\n", rl, oldc);
    }
    fclose(f);
}
I ran ./writefile A & ./writefile B, then ./readfile.
I got this:
Got 1000999424 of A
Got 999424 of B
Got 999424 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
As you can see, there are nice long runs of A and B, but they are not exactly 1000000 characters long, which is the size in which I wrote them. The whole file, after a trial run with a smaller size at the beginning, is just short of 7 GB.
For reference: Fedora Core 16, with my own compiled 3.7rc5 kernel, gcc 4.6.3, x86-64, and ext4 on top of LVM, AMD Phenom II quad-core processor, 16 GB of RAM.
Writing in append mode is an atomic seek-to-end-and-write operation. This is why it doesn't break.
Now... how to break it?
Try memory-mapping the file and writing into the mapped memory from the two processes. I'm pretty sure this will break it.
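A minimal sketch of that idea, assuming a POSIX system and a file already large enough to cover the mapped region; two processes running this (with different strings or offsets) write through a shared mapping with no append-mode serialization:
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("a.txt", O_RDWR);
    if (fd == -1)
        return 1;
    // MAP_SHARED makes the stores visible to every process mapping the file
    char *p = (char *)mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;
    memcpy(p, "hello world", 11); // unsynchronized stores from two processes
                                  // can interleave freely here
    munmap(p, 4096);
    close(fd);
    return 0;
}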
I'm pretty sure you can't RELY on this behaviour, but it may well work reliably on some systems. Writing to the same file from two different processes is likely to cause problems sooner or later if you "try hard enough". And Sod's law says that this will happen exactly when your boss is checking whether the software works, when your customer takes delivery of the system you've sold, when you are finalizing the report that took ages to produce, or at some other important time.
The behavior you're trying to break depends on which OS you're working on, since writing to a file is a system call.
As for why the first process doesn't overwrite what the second process wrote: because both processes opened the file in append mode, the file position is moved to the end of the file before each write, regardless of the value ftell reported.
Did you try to do the same with the standard open and write functions? Might be interesting as well.
EDIT: The C++ Reference doc explains the fopen append option here:
"append/update: Open a file for update (both for input and output) with all output operations writing data at the end of the file. Repositioning operations (fseek, fsetpos, rewind) affects the next input operations, but output operations move the position back to the end of file."
This explains the behavior you observed.
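To actually see the overwrite, here is a minimal sketch of process1 using the suggested open(2)/write(2) route without O_APPEND; since each process then keeps its own file offset, the second write lands at offset 11 and clobbers whatever process2 appended in the meantime:
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("./a.txt", O_WRONLY | O_CREAT, 0644); // note: no O_APPEND
    if (fd == -1)
        return 1;
    write(fd, "hello world", 11); // this process's offset is now 11
    sleep(3);                     // let process2 append meanwhile
    write(fd, "bye world", 9);    // writes at offset 11, overwriting process2's data
    close(fd);
    return 0;
}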
I am trying to use the system calls read() and write(). The following program creates a file and writes some data into it. Here is the code:
#include <fcntl.h>  // open(), O_CREAT, O_WRONLY
#include <unistd.h> // write()

int main()
{
    int fd;
    open("student", O_CREAT, (mode_t)0600);
    fd = open("student", O_WRONLY);
    char data[128] = "Hi nikhil, How are u?";
    write(fd, data, 128);
}
Upon execution of the above program, I got a file named student created with a size of 128 bytes. I then wrote a small program to read the data back from the file. Here is the code:
#include <fcntl.h>  // open(), O_WRONLY
#include <unistd.h> // read()
#include <iostream>
using namespace std;

int main()
{
    int fd = open("student", O_WRONLY);
    char data[128];
    read(fd, data, 128);
    cout << (char*)data << endl;
}
But the output I get is junk characters... why is this so?
Don't read from a file that you've opened in O_WRONLY mode!
Do yourself a favor and always check the return values of IO functions.
You should also always close file descriptors you've (successfully) opened. Might not matter for trivial code like this, but if you get into the habit of forgetting that, you'll end up writing code that leaks file descriptors, and that's a bad thing.
You're not checking whether read() returns an error. You should do so, because that's probably the case with the code in your question.
Since you're opening the file write-only in the first place, calling read() on it will result in an error. You should open the file for reading instead:
char data[128];
int fd = open("student", O_RDONLY);
if (fd != -1) {
    if (read(fd, data, sizeof(data)) != -1) {
        // Process data...
    }
    close(fd);
}
Well, one of the first things is that your data is not 128 bytes. Your data is the string "Hi nikhil, How are u?", which is far shorter than 128 bytes. But you're writing 128 bytes from the array to the file. The char array is explicitly initialized with only 22 bytes of data (21 characters plus the terminating null); the remaining 106 bytes are zero-filled, since in C the rest of an initialized array is set to zero. So you're writing over 100 bytes of padding you probably didn't intend to write.
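A minimal sketch of the writer with that fixed, writing only the bytes the string actually occupies (whether to include the terminating '\0' is a design choice; this sketch omits it):
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char data[128] = "Hi nikhil, How are u?";
    int fd = open("student", O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd != -1) {
        write(fd, data, strlen(data)); // 21 bytes of text, no padding
        close(fd);
    }
    return 0;
}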
I'm using the following code to try to read the results of a df command in Linux using popen.
#include <cstdio>   // popen(), fseek(), ftell(), fread()
#include <cstdlib>  // malloc(), free(), exit()
#include <iostream> // std I/O functions

int main(int argc, char** argv) {
    FILE* fp;
    char* buffer;
    long bufSize;
    size_t ret_code;

    fp = popen("df", "r");
    if (fp == NULL) { // head off errors reading the results
        std::cerr << "Could not execute command: df" << std::endl;
        exit(1);
    }

    // get the size of the results
    fseek(fp, 0, SEEK_END);
    bufSize = ftell(fp);
    rewind(fp);

    // allocate the memory to contain the results
    buffer = (char*)malloc(sizeof(char) * bufSize);
    if (buffer == NULL) {
        std::cerr << "Memory error." << std::endl;
        exit(2);
    }

    // read the results into the buffer
    ret_code = fread(buffer, 1, sizeof(buffer), fp);
    if (ret_code != bufSize) {
        std::cerr << "Error reading output." << std::endl;
        exit(3);
    }

    // print the results
    std::cout << buffer << std::endl;

    // clean up
    pclose(fp);
    free(buffer);
    return (EXIT_SUCCESS);
}
This code is giving me a "Memory error" with an exit status of '2', so I can see where it's failing, I just don't understand why.
I put this together from example code that I found on Ubuntu Forums and C++ Reference, so I'm not married to it. If anyone can suggest a better way to read the results of a system() call, I'm open to new ideas.
EDIT to the original: Okay, bufSize is coming up negative, and now I understand why. You can't randomly access a pipe, as I naively tried to do.
I can't be the first person to try to do this. Can someone give (or point me to) an example of how to read the results of a system() call into a variable in C++?
You're making this all too hard. popen(3) returns a regular old FILE * for a pipe, which delivers newline-terminated records. You can read it with very high efficiency by using fgets(3), like so in C:
#include <stdio.h>

char bfr[BUFSIZ];
FILE *fp;
// ...
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and return
}
// ...
while (fgets(bfr, BUFSIZ, fp) != NULL) {
    // process a line
}
In C++ it's even easier --
#include <cstdio>
#include <fstream>  // for ifstream
#include <iostream>
#include <string>

FILE *fp;
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and exit
}
// Note: constructing an ifstream from a file descriptor is a nonstandard
// extension (present in some older libstdc++ versions, not in standard C++).
std::ifstream ins(fileno(fp)); // ifstream ctor using a file descriptor
std::string s;
while (std::getline(ins, s)) { // loop on getline itself, not on eof()
    // do something
}
There's some more error handling there, but that's the idea. The point is that you treat the FILE * from popen just like any FILE *, and read it line by line.
Why would std::malloc() fail?
The obvious reason is "because std::ftell() returned a negative signed number, which was then treated as a huge unsigned number".
According to the documentation, std::ftell() returns -1 on failure. One obvious reason it would fail is that you cannot seek in a pipe or FIFO.
There is no escape; you cannot know the length of the command output without reading it, and you can only read it once. You have to read it in chunks, either growing your buffer as needed or parsing on the fly.
But, of course, you can simply avoid the whole issue by directly using the system call df probably uses to get its information: statvfs().
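For illustration, a minimal sketch of the statvfs() route (POSIX), which skips df and popen entirely; the "/" mount point is just an example:
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs vfs;
    if (statvfs("/", &vfs) == 0) {
        unsigned long long total = (unsigned long long)vfs.f_blocks * vfs.f_frsize;
        unsigned long long avail = (unsigned long long)vfs.f_bavail * vfs.f_frsize;
        printf("total: %llu bytes, available: %llu bytes\n", total, avail);
    }
    return 0;
}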
(A note on terminology: "system call" in Unix and Linux generally refers to calling a kernel function from user-space code. Referring to it as "the results of a system() call" or "the results of a system(3) call" would be clearer, but it would probably be better to just say "capturing the output of a process.")
Anyway, you can read a process's output just like you can read any other file. Specifically:
You can start the process using pipe(), fork(), and exec(). This gives you a file descriptor; you can then loop, read()ing from the file descriptor into a buffer, and close() the descriptor once you're done. This is the lowest-level option and gives you the most control; a sketch of this approach follows the list.
You can start the process using popen(), as you're doing. This gives you a file stream. In a loop, you can read from the stream into a temporary variable or buffer using fread(), fgets(), or fgetc(), as Zarawesome's answer demonstrates, then process that buffer or append it to a C++ string.
You can start the process using popen(), then use the nonstandard __gnu_cxx::stdio_filebuf to wrap that, then create an std::istream from the stdio_filebuf and treat it like any other C++ stream. This is the most C++-like approach. Here's part 1 and part 2 of an example of this approach.
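As promised for the first option, here is a minimal sketch of the pipe()/fork()/exec() route (POSIX; error handling abbreviated):
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];
    if (pipe(fds) == -1)
        return 1;
    pid_t pid = fork();
    if (pid == -1)
        return 1;
    if (pid == 0) {                  // child: run df with stdout on the pipe
        close(fds[0]);               // close the unused read end
        dup2(fds[1], STDOUT_FILENO); // redirect stdout into the pipe
        close(fds[1]);
        execlp("df", "df", (char *)NULL);
        _exit(127);                  // only reached if exec fails
    }
    close(fds[1]);                   // parent: close the unused write end
    char buf[4096];
    ssize_t n;
    while ((n = read(fds[0], buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout); // or append to a buffer instead
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return 0;
}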
I'm not sure you can fseek/ftell pipe streams like this.
Have you checked the value of bufSize? One reason malloc might be failing is an insanely sized buffer.
Thanks to everyone who took the time to answer. A co-worker pointed me to the ostringstream class. Here's some example code that does essentially what I was attempting to do in the original question.
#include <cstdio>   // popen, pclose, fread, feof, ferror
#include <iostream> // cout
#include <sstream>  // ostringstream

int main(int argc, char** argv) {
    FILE* stream = popen("df", "r");
    std::ostringstream output;
    while (!feof(stream) && !ferror(stream)) {
        char buf[128];
        int bytesRead = fread(buf, 1, 128, stream);
        output.write(buf, bytesRead);
    }
    pclose(stream); // close the pipe once it has been drained
    std::string result = output.str();
    std::cout << "<RESULT>" << std::endl << result << "</RESULT>" << std::endl;
    return (0);
}
To answer the question in the update:
char buffer[1024];
char *line = NULL;
while ((line = fgets(buffer, sizeof buffer, fp)) != NULL) {
    // parse one line of df's output here.
}
Would this be enough?
First thing to check is the value of bufSize - if that happens to be <= 0, chances are that malloc returns a NULL as you're trying to allocate a buffer of size 0 at that point.
Another workaround would be to ask malloc to provide you with a buffer of the size (bufSize + n) with n >= 1, which should work around this particular problem.
That aside, the code you posted is pure C, not C++, so including <iostream> is overdoing it a little.
Check your bufSize. ftell can return -1 on error, which can lead to malloc allocating nothing and buffer being NULL.
The reason ftell fails is the popen: you can't seek in pipes.
Pipes are not random access. They're sequential, which means that once you read a byte, the pipe is not going to send it to you again. Which means, obviously, you can't rewind it.
If you just want to output the data back to the user, you can just do something like:
// your file opening code
int c;
while ((c = getc(fp)) != EOF) // getc returns an int so EOF can be detected
{
    std::cout << (char)c;
}
This will pull bytes out of the df pipe, one by one, and pump them straight into the output.
Now if you want to access the df output as a whole, you can either pipe it into a file and read that file, or concatenate the output into a construct such as a C++ String.
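For the second option, a minimal sketch that collects the whole of df's output into a std::string, line by line:
#include <stdio.h>  // popen(), pclose(), fgets() (POSIX)
#include <iostream>
#include <string>

int main()
{
    std::string result;
    FILE *fp = popen("df", "r");
    if (fp != NULL) {
        char line[256];
        while (fgets(line, sizeof line, fp) != NULL)
            result += line; // append each line, newline included
        pclose(fp);
    }
    std::cout << result;
    return 0;
}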