Unable to correctly read bmp file - c++

I am trying to read certain information from a bmp file, namely the file type, i.e. the "BM" signature at the start of my bmp file. I start by opening the file, which succeeds. The first fread, however, fails. Why is this happening?
#include <stdio.h>
#include <string.h>

#define SIZE 1

int main(void)
{
    FILE* fd = NULL;
    char buff[2];
    unsigned int i = 0, size = 0, offset = 0;

    memset(buff, 0, sizeof(buff));

    fd = fopen("RIT.bmp", "r+");
    if (NULL == fd)
    {
        printf("\n fopen() Error!!!\n");
        return 1;
    }
    printf("\n File opened successfully\n");

    if (SIZE*2 != fread(buff, SIZE, 2, fd)) // to read the file type (i.e. B M)
    {
        printf("\n first fread() failed\n");
        return 1;
    }
    return 0;
}
Output
File opened successfully
first fread() failed
Press any key to continue . . .
Update
Yes, the file is empty due to some earlier error. That is why this error occurs.

Your file probably doesn't have enough data (2 bytes). The code gives the correct output when I checked with a file larger than 2 bytes; the same code fails for an empty file.

From the man page: "Upon successful completion, fread() shall return the number of elements successfully read [...]."
That would be 2, not SIZE*2.
Although, on second thought, SIZE is 1, so while the program is error-prone, it is not actually wrong. In that case, the second part of the sentence applies: "... which is less than nitems only if a read error or end-of-file is encountered." And as others said, check the global errno if the file is long enough. Maybe it's time for a new SSD.
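As an aside, here is a minimal sketch (my own, not from the question) showing how feof() and ferror() can tell a too-short or empty file apart from a genuine read error after a short fread():

// Sketch: distinguish "file too short" from "read error" after fread().
// Assumes the same RIT.bmp file name as in the question.
#include <stdio.h>

int main(void)
{
    char buff[2] = {0};
    FILE *fd = fopen("RIT.bmp", "rb");   /* binary mode for a bmp file */
    if (fd == NULL)
    {
        perror("fopen");
        return 1;
    }
    if (fread(buff, 1, 2, fd) != 2)
    {
        if (feof(fd))
            printf("File is shorter than 2 bytes (possibly empty).\n");
        else if (ferror(fd))
            perror("fread");             /* a genuine I/O error */
        fclose(fd);
        return 1;
    }
    printf("Signature: %c%c\n", buff[0], buff[1]);
    fclose(fd);
    return 0;
}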

zlib error -3 while decompressing archive: Incorrect data check

I am writing a C++ library that also decompresses zlib files. For all of the files, the last call to gzread() (or at least one of the last calls) gives error -3 (Z_DATA_ERROR) with message "incorrect data check". As I have not created the files myself I am not entirely sure what is wrong.
I found this answer, and if I run
gzip -dc < myfile.gz > myfile.decomp
on the command line, gzip prints
gzip: invalid compressed data--crc error
but the contents of myfile.decomp seem to be correct. The CRC error is still reported in this case, however, which may or may not be the same problem. My code, pasted below, should be straightforward, but I am not sure how to get the same behavior in code as on the command line.
How can I achieve the same behavior in code as on the command line?
std::vector<char> decompress(const std::string &path)
{
    gzFile inFileZ = gzopen(path.c_str(), "rb");
    if (inFileZ == NULL)
    {
        printf("Error: gzopen() failed for file %s.\n", path.c_str());
        return {};
    }
    constexpr size_t bufSize = 8192;
    char unzipBuffer[bufSize];
    int unzippedBytes = bufSize;
    std::vector<char> unzippedData;
    unzippedData.reserve(1048576); // 1 MiB is enough in most cases.
    while (unzippedBytes == bufSize)
    {
        unzippedBytes = gzread(inFileZ, unzipBuffer, bufSize);
        if (unzippedBytes == -1)
        {
            // Here the error is -3 / "incorrect data check" for (one of) the last block(s)
            // in the file. The bytes can be correctly decompressed, as demonstrated on the
            // command line, but how can this be achieved in code?
            int errnum;
            const char *err = gzerror(inFileZ, &errnum);
            printf("%s\n", err);
            break;
        }
        if (unzippedBytes > 0)
        {
            unzippedData.insert(unzippedData.end(), unzipBuffer, unzipBuffer + unzippedBytes);
        }
    }
    gzclose(inFileZ);
    return unzippedData;
}
First off, the whole point of the CRC is to detect corrupted data. If the CRC is bad, then you should be going back to where this file came from and getting the data not corrupted. If the CRC is bad, discard the input and report an error.
You are not clear on the "behavior" you are trying to reproduce, but if you're trying to recover as much data as possible from a corrupted gzip file, then you will need to use zlib's inflate functions to decompress the file. int ret = inflateInit2(&strm, 31); will initialize the zlib stream to process a gzip file.
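If the goal really is to salvage whatever can be decoded from such a file, a minimal sketch (my own, not the answerer's code) built on inflateInit2(&strm, 31) could look like the following; it keeps the bytes produced before the bad CRC instead of discarding them. The function name and buffer sizes are arbitrary.

// Sketch: decompress a gzip file with zlib's inflate interface so that data
// decoded before a bad trailer CRC is still returned. Error handling is brief.
#include <zlib.h>
#include <cstdio>
#include <vector>

std::vector<char> inflate_gzip(const char *path)
{
    std::vector<char> out;
    std::FILE *f = std::fopen(path, "rb");
    if (!f) return out;

    z_stream strm{};                       // zero-initialize the stream state
    if (inflateInit2(&strm, 31) != Z_OK) { // 31 = 15-bit window + gzip header
        std::fclose(f);
        return out;
    }

    unsigned char in[8192], buf[8192];
    int ret = Z_OK;
    bool failed = false;
    while (ret != Z_STREAM_END && !failed) {
        strm.avail_in = static_cast<uInt>(std::fread(in, 1, sizeof(in), f));
        if (strm.avail_in == 0) break;     // EOF (or read error) on the input file
        strm.next_in = in;
        do {                               // drain all output for this input chunk
            strm.avail_out = sizeof(buf);
            strm.next_out = buf;
            ret = inflate(&strm, Z_NO_FLUSH);
            // Keep whatever was produced, even if the call also reported an error.
            out.insert(out.end(), buf, buf + (sizeof(buf) - strm.avail_out));
            if (ret == Z_DATA_ERROR || ret == Z_NEED_DICT || ret == Z_MEM_ERROR) {
                std::fprintf(stderr, "inflate: %s\n", strm.msg ? strm.msg : "error");
                failed = true;             // e.g. the bad CRC at the end of the file
                break;
            }
        } while (strm.avail_out == 0);
    }
    inflateEnd(&strm);
    std::fclose(f);
    return out;
}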

Why in this case the stdin fd is not ready

According to the Linux Programmer's Manual, poll can wait for one of a set of file descriptors to become ready to perform I/O.
According to my understanding, if I add POLLIN to events, poll will return a positive integer when there is at least one fd that is ready to be read.
Consider the following code. I want the program to echo my input immediately after I type the character \n.
#include <poll.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

const int maxn = 1024;

int main() {
    char buffer[maxn];
    while (true) {
        struct pollfd pfd[1];
        std::memset(pfd, 0, sizeof pfd);
        pfd[0].fd = STDIN_FILENO;
        pfd[0].events = POLLIN;
        int ret = poll(pfd, 1, 1000);
        if (ret < 0) {
            // error
        }
        else if (ret == 0) {
            // timeout
        }
        else {
            if ((pfd[0].revents & POLLIN) == POLLIN) {
                int n;
                n = fscanf(stdin, "%s", buffer);
                if (n > 0) {
                    printf("data from stdin: %s\n", buffer);
                }
            } else if ((pfd[0].revents & POLLHUP) == POLLHUP) {
                break;
            }
        }
    }
}
When I type
aa bb cc dd
I thought fscanf hadn't retrieved all the data from stdin, because it only reads aa. So when the loop restarts, stdin's fd should still be ready. As a consequence, (pfd[0].revents & POLLIN) == POLLIN should still hold, so I expected to see the following output:
data from stdin: aa
data from stdin: bb
data from stdin: cc
data from stdin: dd
However, only the first line is actually printed. This surprised me; it looks similar to epoll's edge-triggered mode, yet poll is level-triggered.
So can you explain why this happens with fscanf?
Polling works at the file descriptor level while fscanf works at the higher file handle level.
At the higher level, the C runtime library is free to cache the input stream in such a way that it would affect what you can see at the lower level.
For example (and this is probably what's happening here), the first time you fscanf your word aa, the entire line is read from the file descriptor and cached, before that first word is handed back to you.
A subsequent fscanf (with no intervening poll) would first check the cache to get the next word and, if it weren't there, it would go back to the file descriptor to get more input.
Unfortunately, the fact that you're checking for a poll event before doing this is causing problems. As far as the file descriptor level goes, the entire line has been read by your first fscanf so no further input is available - poll will therefore wait until such information does become available.
You can see this in action if you change:
n = fscanf(stdin, "%s", buffer);
into:
n = read(STDIN_FILENO, buffer, 3);
and change the printf to:
printf("data from stdin: %*.*s\n", n, n, buffer);
In that case, you do get the output you expect as soon as you press the ENTER key:
data from stdin: aa
data from stdin: bb
data from stdin: cc
data from stdin: dd
Just keep in mind that the sample code reads up to three characters (like aa<space>) rather than a word. It's there more to illustrate what the problem is than to give you the solution (to match your question "Can you explain why this happens?").
The solution is not to mix descriptor and handle based I/O when the caching of the latter can affect the former.
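To make that concrete, here is a minimal, self-contained sketch (my own, not part of the original answer) that stays at the descriptor level throughout, using only poll() and read(); it echoes each chunk of input as soon as it arrives:

#include <poll.h>
#include <unistd.h>
#include <cstdio>

int main() {
    char buf[256];
    for (;;) {
        struct pollfd pfd = { STDIN_FILENO, POLLIN, 0 };
        int ret = poll(&pfd, 1, 1000);           // 1000 ms timeout
        if (ret <= 0) continue;                   // timeout or error: just retry
        if (pfd.revents & POLLIN) {
            ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
            if (n <= 0) break;                    // EOF or read error
            printf("data from stdin: %.*s", (int)n, buf);
        } else if (pfd.revents & POLLHUP) {
            break;
        }
    }
    return 0;
}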

Size error on read file

RESOLVED
I'm trying to make a simple file loader.
I aim to get the text from a shader file (plain text file) into a char* that I will compile later.
I've tried this function:
#include <cstdio>
#include <cstdlib>
#include <iostream>
using std::cout;
using std::endl;

char* load_shader(char* pURL)
{
    FILE *shaderFile;
    char* pShader;

    // File opening
    fopen_s(&shaderFile, pURL, "r");
    if (shaderFile == NULL)
        return "FILE_ER";

    // File size
    fseek(shaderFile, 0, SEEK_END);
    int lSize = ftell(shaderFile);
    rewind(shaderFile);

    // Allocating size to store the content
    pShader = (char*) malloc(sizeof(char) * lSize);
    if (pShader == NULL)
    {
        fputs("Memory error", stderr);
        return "MEM_ER";
    }

    // copy the file into the buffer:
    int result = fread(pShader, sizeof(char), lSize, shaderFile);
    if (result != lSize)
    {
        // size of file 106/113
        cout << "size of file " << result << "/" << lSize << endl;
        fputs("Reading error", stderr);
        return "READ_ER";
    }

    // Terminate
    fclose(shaderFile);
    return 0;
}
But as you can see in the code, I get a strange size difference at the end of the process, which makes my function crash.
I must say I'm quite a beginner in C, so I might have missed some subtleties regarding memory allocation, types, pointers...
How can I solve this size issue?
EDIT 1:
First, I shouldn't return 0 at the end but pShader; that seemed to be what crashed the program.
Then, I changed the type of result to size_t and added a terminating character to pShader, i.e. pShader[result] = '\0', after the read so I can display it correctly.
Finally, as @JamesKanze suggested, I turned fopen_s into fopen, as the former was not useful in my case.
First, for this sort of raw access, you're probably better off using the system level functions: CreateFile or open, ReadFile or read and CloseHandle or close, with GetFileSize or stat to get the size. Using FILE* or std::filebuf will only introduce an additional level of buffering and processing, for no gain in your case.
As to what you are seeing: there is no guarantee that an ftell will return anything exploitable as a numeric value; it could very well be just a magic cookie. On most current systems, it is a byte offset into the physical file, but on any non-Unix system, the offset into the physical file will not map directly to the logical file you are reading unless you open the file in binary mode. If you use "rb" to open the file, you'll probably see the same values. (Theoretically, you could get extra 0's at the end of the file, but practically, the OS's where that happened are either extinct, or only used on legacy mainframes.)
EDIT:
Since the answer stating this has been deleted: you should loop on the fread until it returns 0 (setting errno to 0 before each call, and checking it after the return to see whether the function returned because of an error or because it reached the end of file). Having said this: if you're on one of the usual Windows or Unix systems, and the file is local to the machine and not too big, fread will read it all in one go. The difference in size you are seeing (given the numerical values you posted) is almost certainly due to the fact that the two byte Windows line endings are being mapped to a single '\n' character. To avoid this, you must open in binary mode; alternatively, if you really are dealing with text (and want this mapping), you can just ignore the extra bytes in your buffer, setting the '\0' terminator after the last byte actually read.
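Putting that advice together, here is a minimal sketch (my own; the function name and error handling are arbitrary) that opens the file in binary mode, so ftell and fread agree on byte counts, and loops on fread until it returns 0:

// Sketch: read a whole (local, reasonably small) file into a buffer.
#include <cstdio>
#include <cstdlib>
#include <cerrno>

char *load_file(const char *path, size_t *outSize)
{
    FILE *f = std::fopen(path, "rb");          // binary mode: no CRLF mapping
    if (!f) return NULL;

    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    std::rewind(f);
    if (size < 0) { std::fclose(f); return NULL; }

    char *buf = (char *)std::malloc((size_t)size + 1);   // +1 for '\0'
    if (!buf) { std::fclose(f); return NULL; }

    size_t total = 0;
    for (;;) {
        errno = 0;
        size_t n = std::fread(buf + total, 1, (size_t)size - total, f);
        total += n;
        if (n == 0) break;                     // EOF or error; check ferror(f)/errno
    }
    buf[total] = '\0';                         // terminate for use as text
    std::fclose(f);
    if (outSize) *outSize = total;
    return buf;
}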

Read and write in c++

I am trying to use the system calls read() and write(). The following program creates a file and writes some data into it. Here is the code:
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main()
{
    int fd;
    open("student", O_CREAT, (mode_t)0600);
    fd = open("student", O_WRONLY);
    char data[128] = "Hi nikhil, How are u?";
    write(fd, data, 128);
}
Upon execution of the above program, I got a file named student, 128 bytes in size. I then wrote a small program to read the data back from the file. Here is the code:
#include <fcntl.h>
#include <unistd.h>
#include <iostream>
using namespace std;

int main()
{
    int fd = open("student", O_WRONLY);
    char data[128];
    read(fd, data, 128);
    cout << (char*)data << endl;
}
But the output I get is junk characters. Why is this so?
Don't read from a file that you've opened in O_WRONLY mode!
Do yourself a favor and always check the return values of IO functions.
You should also always close file descriptors you've (successfully) opened. Might not matter for trivial code like this, but if you get into the habit of forgetting that, you'll end up writing code that leaks file descriptors, and that's a bad thing.
You're not checking whether read() returns an error. You should do so, because that's probably the case with the code in your question.
Since you're opening the file write-only in the first place, calling read() on it will result in an error. You should open the file for reading instead:
char data[128];
int fd = open("student", O_RDONLY);
if (fd != -1) {
    if (read(fd, data, sizeof(data)) != -1) {
        // Process data...
    }
    close(fd);
}
Well, one of the first things is that your data is not 128 bytes. Your data is the string "Hi nikhil, How are u?", which is far shorter than 128 bytes, yet you write all 128 bytes of the array to the file. Everything after the initial 21-character string is just filler from the rest of the array, so the bytes beyond the string carry no meaningful data.
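Combining the fixes above, a minimal sketch (my own, not from the answers) that writes only the bytes actually in the string and then reopens the file read-only might look like this:

// Sketch: write the string's bytes, then reopen the file for reading.
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <iostream>

int main()
{
    const char data[] = "Hi nikhil, How are u?";

    int fd = open("student", O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd == -1) return 1;
    if (write(fd, data, strlen(data)) == -1) { close(fd); return 1; }
    close(fd);

    char buf[128] = {0};
    fd = open("student", O_RDONLY);            // read-only this time
    if (fd == -1) return 1;
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    close(fd);
    if (n == -1) return 1;

    std::cout << buf << std::endl;             // prints the original string
    return 0;
}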

How do I read the results of a system() call in C++?

I'm using the following code to try to read the results of a df command in Linux using popen.
#include <iostream> // file and std I/O functions
#include <cstdio>   // popen, fseek, ftell, fread
#include <cstdlib>  // malloc, free, exit

int main(int argc, char** argv) {
    FILE* fp;
    char * buffer;
    long bufSize;
    size_t ret_code;

    fp = popen("df", "r");
    if (fp == NULL) { // head off errors reading the results
        std::cerr << "Could not execute command: df" << std::endl;
        exit(1);
    }

    // get the size of the results
    fseek(fp, 0, SEEK_END);
    bufSize = ftell(fp);
    rewind(fp);

    // allocate the memory to contain the results
    buffer = (char*)malloc( sizeof(char) * bufSize );
    if (buffer == NULL) {
        std::cerr << "Memory error." << std::endl;
        exit(2);
    }

    // read the results into the buffer
    ret_code = fread(buffer, 1, sizeof(buffer), fp);
    if (ret_code != bufSize) {
        std::cerr << "Error reading output." << std::endl;
        exit(3);
    }

    // print the results
    std::cout << buffer << std::endl;

    // clean up
    pclose(fp);
    free(buffer);
    return (EXIT_SUCCESS);
}
This code is giving me a "Memory error" with an exit status of '2', so I can see where it's failing, I just don't understand why.
I put this together from example code that I found on Ubuntu Forums and C++ Reference, so I'm not married to it. If anyone can suggest a better way to read the results of a system() call, I'm open to new ideas.
EDIT to the original: Okay, bufSize is coming up negative, and now I understand why. You can't randomly access a pipe, as I naively tried to do.
I can't be the first person to try to do this. Can someone give (or point me to) an example of how to read the results of a system() call into a variable in C++?
You're making this all too hard. popen(3) returns a regular old FILE * for a standard pipe file, which is to say, newline terminated records. You can read it with very high efficiency by using fgets(3) like so in C:
#include <stdio.h>

char bfr[BUFSIZ];
FILE *fp;
// ...
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and return
}
// ...
while (fgets(bfr, BUFSIZ, fp) != NULL) {
    // process a line
}
In C++ it's even easier --
#include <cstdio>
#include <fstream>
#include <iostream>
#include <string>
using namespace std;

FILE *fp;
if ((fp = popen("/bin/df", "r")) == NULL) {
    // error processing and exit
}
// Note: constructing an ifstream from a file descriptor is a nonstandard
// extension, not part of standard C++; a later answer below shows a
// GCC-specific alternative using __gnu_cxx::stdio_filebuf.
ifstream ins(fileno(fp)); // ifstream ctor using a file descriptor
string s;
while (getline(ins, s)) {
    // do something
}
There's some more error handling there, but that's the idea. The point is that you treat the FILE * from popen just like any FILE *, and read it line by line.
Why would std::malloc() fail?
The obvious reason is "because std::ftell() returned a negative signed number, which was then treated as a huge unsigned number".
According to the documentation, std::ftell() returns -1 on failure. One obvious reason it would fail is that you cannot seek in a pipe or FIFO.
There is no escape; you cannot know the length of the command output without reading it, and you can only read it once. You have to read it in chunks, either growing your buffer as needed or parsing on the fly.
But, of course, you can simply avoid the whole issue by directly using the system call df probably uses to get its information: statvfs().
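As a rough illustration of that last suggestion (my own sketch, not from the answer), querying free space directly with statvfs() looks like this; the mount point "/" is just an example:

// Sketch: query filesystem statistics directly with statvfs() instead of
// parsing df output.
#include <sys/statvfs.h>
#include <cstdio>
#include <iostream>

int main()
{
    struct statvfs vfs;
    if (statvfs("/", &vfs) != 0) {
        std::perror("statvfs");
        return 1;
    }
    // f_frsize is the fragment size; f_bavail is the number of blocks
    // available to unprivileged users (what df reports as "Available").
    unsigned long long availableBytes =
        (unsigned long long)vfs.f_bavail * vfs.f_frsize;
    std::cout << "Available on /: " << availableBytes << " bytes\n";
    return 0;
}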
(A note on terminology: "system call" in Unix and Linux generally refers to calling a kernel function from user-space code. Referring to it as "the results of a system() call" or "the results of a system(3) call" would be clearer, but it would probably be better to just say "capturing the output of a process.")
Anyway, you can read a process's output just like you can read any other file. Specifically:
You can start the process using pipe(), fork(), and exec(). This gives you a file descriptor, then you can use a loop to read() from the file descriptor into a buffer and close() the file descriptor once you're done. This is the lowest level option and gives you the most control.
You can start the process using popen(), as you're doing. This gives you a file stream. In a loop, you can read from the stream into a temporary variable or buffer using fread(), fgets(), or fgetc(), as Zarawesome's answer demonstrates, then process that buffer or append it to a C++ string.
You can start the process using popen(), then use the nonstandard __gnu_cxx::stdio_filebuf to wrap that, then create an std::istream from the stdio_filebuf and treat it like any other C++ stream. This is the most C++-like approach. Here's part 1 and part 2 of an example of this approach.
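As a rough sketch of that third approach (my own code, assuming GCC's libstdc++, since stdio_filebuf is a GNU extension rather than standard C++):

// Sketch: wrap the FILE* from popen() in a GNU stdio_filebuf so the output
// can be consumed through an ordinary std::istream.
#include <ext/stdio_filebuf.h>  // GNU extension
#include <cstdio>
#include <istream>
#include <iostream>
#include <string>

int main()
{
    FILE *fp = popen("df", "r");
    if (fp == NULL) {
        std::cerr << "Could not execute command: df\n";
        return 1;
    }

    __gnu_cxx::stdio_filebuf<char> filebuf(fp, std::ios::in);
    std::istream is(&filebuf);           // behaves like any other input stream

    std::string line;
    while (std::getline(is, line)) {
        std::cout << line << '\n';       // or parse each line of df output
    }

    pclose(fp);
    return 0;
}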
I'm not sure you can fseek/ftell pipe streams like this.
Have you checked the value of bufSize? One reason malloc may be failing is an insanely sized buffer.
Thanks to everyone who took the time to answer. A co-worker pointed me to the ostringstream class. Here's some example code that does essentially what I was attempting to do in the original question.
#include <iostream> // cout
#include <sstream>  // ostringstream
#include <string>   // string
#include <cstdio>   // popen, fread, feof, ferror, pclose

int main(int argc, char** argv) {
    FILE* stream = popen( "df", "r" );
    std::ostringstream output;
    while( !feof( stream ) && !ferror( stream ))
    {
        char buf[128];
        int bytesRead = fread( buf, 1, 128, stream );
        output.write( buf, bytesRead );
    }
    pclose( stream ); // close the pipe once all output has been read
    std::string result = output.str();
    std::cout << "<RESULT>" << std::endl << result << "</RESULT>" << std::endl;
    return (0);
}
To answer the question in the update:
char buffer[1024];
char *line = NULL;
while ((line = fgets(buffer, sizeof buffer, fp)) != NULL) {
    // parse one line of df's output here.
}
Would this be enough?
The first thing to check is the value of bufSize - if that happens to be <= 0, chances are that malloc returns NULL, as you're trying to allocate a buffer of size 0 at that point.
Another workaround would be to ask malloc for a buffer of size (bufSize + n) with n >= 1, which should work around this particular problem.
That aside, the code you posted is pure C, not C++, so including <iostream> is overdoing it a little.
Check your bufSize. ftell can return -1 on error, and this can lead to malloc returning NULL, leaving buffer with a NULL value.
The reason ftell fails is the popen: you can't seek in pipes.
Pipes are not random access. They're sequential, which means that once you read a byte, the pipe is not going to send it to you again. Which means, obviously, you can't rewind it.
If you just want to output the data back to the user, you can just do something like:
// your file opening code
int c;
while ((c = getc(fp)) != EOF)
{
    std::cout << static_cast<char>(c);
}
This will pull bytes out of the df pipe, one by one, and pump them straight into the output.
Now if you want to access the df output as a whole, you can either pipe it into a file and read that file, or concatenate the output into a construct such as a std::string.