Save byte pointer to a file - C++

I'm using ProtonSDK (www.protonsdk.com) and I'm trying to use this function:
byte *pData = GetFileManager()->Get("texturefile.rttex", &fileSizeBytes, true, true);
I want to save the output (the decompressed RTPACK data) to a file, but I can't get it to work.
I'm fairly new to C++, but I have a background in PHP.

You can write it to a file with the C++ standard library:
http://www.cplusplus.com/doc/tutorial/files/
Or with the C standard library, as below.
To write the data pData points to:
#include <stdio.h>
#include <stdlib.h>
/* Proton SDK header */
/* header with byte type (not part of C standard library) */
int main(int argc, char** argv)
{
    int fileSizeBytes;
    FILE* fp;
    byte *pData = GetFileManager()->Get(GetSavePath()+"test.txt", &fileSizeBytes, true);
    fp = fopen("put_filename_here", "wb"); /* binary mode, since the data is raw bytes */
    if (fp == NULL) {
        fprintf(stderr, "Cannot open file!\n");
        exit(-1);
    }
    fwrite(pData, sizeof(byte), fileSizeBytes, fp);
    fclose(fp);
    return 0;
}
To write the pointer value pData itself to the file (usually not what you want):
fwrite(&pData, 1, sizeof(pData), fp);
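For the C++-style alternative mentioned at the top, a minimal sketch using std::ofstream might look like this (assuming pData and fileSizeBytes come from the Proton SDK call in the question; the byte typedef is an assumption):
#include <fstream>

typedef unsigned char byte;   // assumption: Proton's byte is an unsigned char typedef

// pData / fileSizeBytes come from GetFileManager()->Get(...) as in the question.
bool SaveToFile(const char* path, const byte* pData, int fileSizeBytes)
{
    std::ofstream out(path, std::ios::binary);  // binary mode for raw RTPACK data
    if (!out)
        return false;                           // could not open the file
    out.write(reinterpret_cast<const char*>(pData), fileSizeBytes);
    return out.good();
}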

Related

fwrite() fails without writing anything to existing file

Sorry for the basic question, but I'm having trouble getting fwrite() to work properly.
#include <iostream>
#include <fstream>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
using namespace std;
int main() {
    FILE* fd = NULL;
    fd = fopen("out","rw");
    if (fd == NULL) {
        printf("Open failed\n");
        return -1;
    }
    int error = 0;
    printf("Attempting write ... \n");
    char buff[] = {"hello?\n"};
    if( (error = fwrite(buff, 1, 7, fd)) != 7 ) {
        printf("fwrite() failed with code %d \n", error);
        return -1;
    }
    fclose(fd);
    return 0;
}
This code fails - fwrite() just returns 0 when it should return 7 for the seven 1-byte characters written to the file. The file does exist in the same directory; I've tried this with the full file path instead; I've chmod'd the output file out to 777 in case that was the issue (it wasn't); fread() and fseek() both work as expected, but I've taken them out for brevity.
What am I doing wrong here? Any help is appreciated.
fopen does not have an "rw" mode, and you should open the file in binary mode since you fwrite to it.
What you want is "wb", "w+b" or "r+b".
fd = fopen("out","rw");
"rw" is not one of the valid open modes.
See the fopen(3) manual page for more information.
You did not specify your platform or C library. On Linux, this fopen() call fails.
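Putting that together, a corrected version of the program above might look like this ("wb" used here; "w+b" or "r+b" would also work if you need to read the file back):
#include <stdio.h>

int main() {
    // "wb": write, binary mode -- the invalid "rw" mode was the problem
    FILE* fd = fopen("out", "wb");
    if (fd == NULL) {
        printf("Open failed\n");
        return -1;
    }
    char buff[] = "hello?\n";
    size_t written = fwrite(buff, 1, 7, fd);
    if (written != 7) {
        printf("fwrite() failed, wrote %zu bytes\n", written);
        fclose(fd);
        return -1;
    }
    fclose(fd);
    return 0;
}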

How to receive a JPEG image over serial port

So I am trying to send a JPEG image (4 KB) from a Raspberry Pi to my Mac wirelessly using XBee Series 1 radios. I have an image on the Raspberry Pi and can read it into binary format. I've used this binary data to save it into another image file, and it creates a copy of the image correctly. That tells me that I am reading it correctly. So I am trying to send that data over a serial port (to be transferred by the XBees) to my Mac. Side note: XBees can only transmit, I think, about 80 bytes of data per packet. I don't know how that affects what I'm doing, though.
My problem is, I do not know how to read the data and properly store it into a JPEG file itself. Most of the read() functions I have found require you to enter a length to read, and I don't know how to tell how long it is since it's just a serial stream coming in.
Here is my code to send the JPEG:
#include "xSerial.hpp"
#include <iostream>
#include <cstdlib>
using namespace std;
int copy_file( const char* srcfilename, const char* dstfilename );
int main(){
copy_file("tylerUseThisImage.jpeg", "copyImage.jpeg");
return 0;
}
int copy_file( const char* srcfilename, const char* dstfilename )
{
long len;
char* buf = NULL;
FILE* fp = NULL;
// Open the source file
fp = fopen( srcfilename, "rb" );
if (!fp) return 0;
// Get its length (in bytes)
if (fseek( fp, 0, SEEK_END ) != 0) // This should typically succeed
{ // (beware the 2Gb limitation, though)
fclose( fp );
return 0;
}
len = ftell( fp );
std::cout << len;
rewind( fp );
// Get a buffer big enough to hold it entirely
buf = (char*)malloc( len );
if (!buf)
{
fclose( fp );
return 0;
}
// Read the entire file into the buffer
if (!fread( buf, len, 1, fp ))
{
free( buf );
fclose( fp );
return 0;
}
fclose( fp );
// Open the destination file
fp = fopen( dstfilename, "wb" );
if (!fp)
{
free( buf );
return 0;
}
// this is where I send data in but over serial port.
//serialWrite() is just the standard write() being used
int fd;
fd = xserialOpen("/dev/ttyUSB0", 9600);
serialWrite(fd, buf, len);
//This is where the file gets copied to another file as a test
// Write the entire buffer to file
if (!fwrite( buf, len, 1, fp ))
{
free( buf );
fclose( fp );
return 0;
}
// All done -- return success
fclose( fp );
free( buf );
return 1;
}
On the receive side, I know I need to open up the serial port to read and use some sort of read(), but I don't know how that is done. The serial library I'm using has functions to check whether serial data is available and to return the number of characters available to read.
One question about the number of characters available to read: will that number grow as the serial stream comes in, or will it immediately give the entire length of the data to be read?
Finally, I know that after I open the serial port I need to read the data into a buffer and then write that buffer to a file, but I have not had any luck. This is what I have tried thus far.
// Loop, getting and printing characters
char temp;
bool readComplete = false;
int bytesRead = 0;
fp = fopen("copyImage11.jpeg", "rwb");
for (;;)
{
    if(xserialDataAvail(fd) > 0)
    {
        bytesRead = serialRead(fd, buf, len);
        readComplete = true;
    }
    if (readComplete)
    {
        if (!fwrite(buf, bytesRead, 1, fp))
        {
            free(buf);
            fclose(fp);
            return 0;
        }
        fclose(fp);
        free(buf);
        return 1;
    }
}
I don't get errors with my code; it just doesn't create the JPEG file correctly. Maybe I'm not transmitting it right, or maybe I'm not reading/writing to the file correctly. Any help would be appreciated. Thanks everyone, you rock!
If you are defining your own protocol, then you need a method for sending the length first.
I would recommend testing your code by sending short blocks of ASCII text to confirm your I/O. Once that is working you can use the ASCII to set up the transfer; i.e. send the length, and have your receiver ready for an expected block. A rough sketch of that idea follows.
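A rough, untested sketch of a length-prefixed transfer, reusing the function names from the question (their exact signatures in xSerial.hpp are an assumption on my part):
#include <cstdio>
#include <cstdlib>
#include <cstdint>

// Assumed signatures for the questioner's xSerial library; I'm only going by
// how they are used in the question, so treat these as placeholders.
int xserialOpen(const char* device, int baud);
int serialWrite(int fd, const char* buf, long len);
int serialRead(int fd, char* buf, long maxLen);

// Sender: write a 4-byte big-endian length header, then the payload.
void sendWithLength(int fd, const char* buf, uint32_t len)
{
    char header[4] = {
        (char)((len >> 24) & 0xFF), (char)((len >> 16) & 0xFF),
        (char)((len >> 8)  & 0xFF), (char)(len & 0xFF)
    };
    serialWrite(fd, header, 4);
    serialWrite(fd, buf, (long)len);
}

// Receiver: read the 4-byte header first, then keep reading until all
// 'len' bytes have arrived (the data trickles in small radio packets).
char* receiveWithLength(int fd, uint32_t* outLen)
{
    char header[4];
    uint32_t got = 0;
    while (got < 4) {                       // the header itself may arrive in pieces
        int n = serialRead(fd, header + got, 4 - got);
        if (n > 0) got += (uint32_t)n;
    }
    uint32_t len = ((uint32_t)(uint8_t)header[0] << 24) |
                   ((uint32_t)(uint8_t)header[1] << 16) |
                   ((uint32_t)(uint8_t)header[2] << 8)  |
                    (uint32_t)(uint8_t)header[3];

    char* buf = (char*)malloc(len);
    uint32_t received = 0;
    while (buf && received < len) {
        int n = serialRead(fd, buf + received, len - received);
        if (n > 0) received += (uint32_t)n;
    }
    *outLen = len;
    return buf;                             // caller fwrites this to the .jpeg and frees it
}
The receiver then opens the destination file with fopen("copyImage.jpeg", "wb") and writes exactly *outLen bytes, rather than guessing when the stream has ended.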

segmentation fault while copying a file

I have the simple code below, but when I compile and run it with GCC on Unix, I get a segmentation fault. Is it because of the file naming, or the copying of one file to the other? Any help appreciated.
#include <iostream>
#include <stdio.h>
using namespace std;
void copy(char *infile, char *outfile) {
    FILE *ifp; /* file pointer for the input file */
    FILE *ofp; /* file pointer for the output file */
    int c;     /* character read */
    /* open infile for reading */
    ifp = fopen (infile , "r" );
    /* open outfile for writing */
    ofp = fopen(outfile, "w");
    /* copy */
    while ( (c = fgetc(ifp)) != EOF) /* read a character */
        fputc (c, ofp);              /* write a character */
    /* close the files */
    fclose(ifp);
    fclose(ofp);
}
main()
{
    copy("A.txt","B.txt");
}
The code you have posted is correct as far as the copy logic goes.
ifp = fopen (infile , "r" ); // will return NULL if the file is not there
while ( (c = fgetc(ifp)) != EOF)
Here is a possibility: if you do not have an A.txt file in your current directory, you will get a segmentation fault.
If A.txt does not exist, the value of ifp will be NULL (0). Then this call will segfault:
fgetc(ifp)
So, change your code to check for NULL on the file opens (each file), for example:
ifp = fopen (infile , "r" );
if (ifp == NULL) {
    printf("Could not open %s\n", infile);
    exit(-2);
}
You may have to add this include also at the top of your file:
#include <stdlib.h>
Declare the function as copy(const char* infile, const char* outfile) to avoid unnecessary warnings about passing string literals.
Also, your files may not be in the current directory from which you are executing the code, so give the complete path to your files.
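Putting those suggestions together, a corrected version of the posted program might look like this (same copy logic, just with the NULL checks and const char* parameters added):
#include <stdio.h>
#include <stdlib.h>

void copy(const char *infile, const char *outfile) {
    FILE *ifp = fopen(infile, "r");     /* file pointer for the input file */
    if (ifp == NULL) {
        printf("Could not open %s\n", infile);
        exit(-2);
    }
    FILE *ofp = fopen(outfile, "w");    /* file pointer for the output file */
    if (ofp == NULL) {
        printf("Could not open %s\n", outfile);
        fclose(ifp);
        exit(-2);
    }
    int c;
    while ((c = fgetc(ifp)) != EOF)     /* read a character */
        fputc(c, ofp);                  /* write a character */
    fclose(ifp);
    fclose(ofp);
}

int main() {
    copy("A.txt", "B.txt");
    return 0;
}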

Rendering files from C++ Node.js addon

I would like to render files in node.js from a C++ addon.
I want to apply some file processing and render the output to the browser via node.js.
Here is my C++ code:
std::ifstream in(filename, std::ios::binary);
in.seekg (0, in.end);
int length = in.tellg();
in.seekg (0, in.beg);
char * buffer = new char [length];
in.read (buffer,length);
in.close();
return buffer;
Following is the V8 code to add bindings for node.js; here buffer is the output from the above C++ code.
Local<Function> cb = Local<Function>::Cast(args[1]);
const unsigned argc = 1;
Local<Value> argv[argc] = {Local<Value>::New(String::New(buffer))};
cb->Call(Context::GetCurrent()->Global(), argc, argv);
This code works well for normal text files. I'm having problems when reading text files that contain Unicode characters.
For example, the original text file:
test start
Billél
last
When receiving it in node, I get:
test start
Bill�l
last
Similarly, when reading JPG or PNG files, the output file is different from the original file.
Please help.
I was having problems with this as well. I found an implementation in the V8 examples from Google. The example that properly handles UTF-8 encoded files is here:
https://code.google.com/p/v8/source/browse/trunk/samples/shell.cc#218
I adapted the source to this:
const char* ReadFile(const char* fileName, int* fileSize)
{
    // reference to c-string version of file
    char *fileBuffer = 0;
    // attempt to open the file
    FILE* fd = fopen(fileName, "rb");
    // clear file size
    *fileSize = 0;
    // file was valid
    if(fd != 0)
    {
        // get size of file
        fseek(fd, 0, SEEK_END);
        *fileSize = ftell(fd);
        rewind(fd);
        // allocate file buffer for file contents
        fileBuffer = (char*)malloc(*fileSize + 1);
        fileBuffer[*fileSize] = 0;
        // copy file contents
        for (int charCount = 0; charCount < *fileSize;)
        {
            int charRead = static_cast<int>(fread(&fileBuffer[charCount], 1, *fileSize - charCount, fd));
            charCount += charRead;
        }
        // close the file
        fclose(fd);
    }
    return fileBuffer;
}
Also, make sure that when you create a V8 string you create a String::Utf8Value.
String::Utf8Value v8Utf8String(...);
Then, to use the String::Utf8Value as a char*, use the following function:
https://code.google.com/p/v8/source/browse/trunk/samples/shell.cc#91
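For reference, the helper behind that second link looks roughly like this (written against the older, pre-isolate V8 API that the question uses):
#include <v8.h>

// Extracts a C string from a V8 Utf8Value, with a fallback if conversion failed.
const char* ToCString(const v8::String::Utf8Value& value) {
    return *value ? *value : "<string conversion failed>";
}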

C++ fwrite doesn't write to text file, have no idea why?

I have this code that reads from a file, creates a new file, and writes the content from the source to the destination file. It reads the source into a buffer and creates the destination file, but fwrite doesn't write the content to the newly created file, and I have no idea why.
Here is the code. (I have to use _sopen here; it's part of legacy code.)
#include <stdio.h>
#include <stdlib.h>
#include <io.h>
#include <fcntl.h>
#include <string>
#include <share.h>
#include <sys\stat.h>
int main () {
    std::string szSource = "H:\\cpp\\test1.txt";
    FILE* pfFile;
    int iFileId = _sopen(szSource.c_str(),_O_RDONLY, _SH_DENYNO, _S_IREAD);
    if (iFileId >= 0)
        pfFile = fdopen(iFileId, "r");
    //read file content to buffer
    char * buffer;
    size_t result;
    long lSize;
    // obtain file size:
    fseek (pfFile , 0 , SEEK_END);
    lSize = ftell (pfFile);
    fseek(pfFile, 0, SEEK_SET);
    // buffer = (char*) malloc (sizeof(char)*lSize);
    buffer = (char*) malloc (sizeof(char)*lSize);
    if (buffer == NULL)
    {
        return false;
    }
    // copy the file into the buffer:
    result = fread (buffer,lSize,1,pfFile);
    std::string szdes = "H:\\cpp\\test_des.txt";
    FILE* pDesfFile;
    int iFileId2 = _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE);
    if (iFileId2 >= 0)
        pDesfFile = fdopen(iFileId2, "w+");
    size_t f = fwrite (buffer , 1, sizeof(buffer),pDesfFile );
    printf("Error code: %d\n",ferror(pDesfFile));
    fclose (pDesfFile);
    return 0;
}
You can make a main file and try it to see if it works for you.
Thanks
Change your code to the following and then report your results:
int main () {
    std::string szSource = "H:\\cpp\\test1.txt";
    int iFileId = _sopen(szSource.c_str(),_O_RDONLY, _SH_DENYNO, _S_IREAD);
    if (iFileId >= 0)
    {
        FILE* pfFile;
        if ((pfFile = fdopen(iFileId, "r")) != (FILE *)NULL)
        {
            //read file content to buffer
            char * buffer;
            size_t result;
            long lSize;
            // obtain file size:
            fseek (pfFile , 0 , SEEK_END);
            lSize = ftell (pfFile);
            fseek(pfFile, 0, SEEK_SET);
            if ((buffer = (char*) malloc (lSize)) == NULL)
                return false;
            // copy the file into the buffer:
            result = fread (buffer,(size_t)lSize,1,pfFile);
            fclose(pfFile);
            std::string szdes = "H:\\cpp\\test_des.txt";
            FILE* pDesfFile;
            int iFileId2 = _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE);
            if (iFileId2 >= 0)
            {
                if ((pDesfFile = fdopen(iFileId2, "w+")) != (FILE *)NULL)
                {
                    size_t f = fwrite (buffer, (size_t)lSize, 1, pDesfFile);
                    printf ("elements written <%d>\n", f);
                    if (f == 0)
                        printf("Error code: %d\n",ferror(pDesfFile));
                    fclose (pDesfFile);
                }
            }
        }
    }
    return 0;
}
[edit]
For other posters, to show the usage/results of fwrite - what is the output of the following?
#include <stdio.h>
int main (int argc, char **argv) {
    FILE *fp = fopen ("f.kdt", "w+");
    printf ("wrote %d\n", fwrite ("asdf", 4, 1, fp));
    fclose (fp);
}
[/edit]
sizeof(buffer) is the size of the pointer, i.e. 4 here, not the number of items in the buffer.
If buffer were an array, then sizeof(buffer) would potentially work, as it returns the number of bytes in the array.
The third parameter to fwrite is sizeof(buffer), which is 4 bytes (the size of a pointer). You need to pass the number of bytes to write instead (lSize).
Update: It also looks like you're missing the flag indicating the file should be Read/Write: _O_RDWR
This is working for me...
std::string szdes = "C:\\temp\\test_des.txt";
FILE* pDesfFile;
int iFileId2;
errno_t err = _sopen_s(&iFileId2, szdes.c_str(), _O_CREAT|_O_BINARY|_O_RDWR, _SH_DENYNO, _S_IREAD | _S_IWRITE);
if (iFileId2 >= 0)
    pDesfFile = _fdopen(iFileId2, "w+");
size_t f = fwrite (buffer , 1, lSize, pDesfFile );
fclose (pDesfFile);
Since I can't find info about _sopen, I can only look at man open. It reports:
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
Your call _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE); doesn't match either of those; you seem to have flags, 'something', and modes. What is _SH_DENYNO?
What is the result of man _sopen?
Finally, shouldn't you close the file descriptor from _sopen after you fclose the file pointer?
Your final lines should look like this, by the way:
if (iFileId2 >= 0)
{
    pDesfFile = fdopen(iFileId2, "w+");
    size_t f = fwrite (buffer , 1, sizeof(buffer),pDesfFile ); //<-- the f returns me 4
    fclose (pDesfFile);
}
As it stands, you write the file regardless of whether the fdopen after the O_CREAT succeeded. You also do the same thing at the top: you process the read (and the write) regardless of the success of the fdopen of the RDONLY file :(
You are using a mixture of C and C++. That is confusing.
The sizeof operator does not do what you expect it to do.
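A small illustration of the sizeof point:
#include <stdio.h>

int main() {
    char array[16];
    char *pointer = array;
    // sizeof(array) is 16 (the bytes in the array);
    // sizeof(pointer) is just the size of a pointer (4 or 8 depending on platform).
    printf("%zu %zu\n", sizeof(array), sizeof(pointer));
    return 0;
}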
Looks like #PJL and #jschroedl found the real problem, but also in general:
Documentation for fwrite states:
fwrite returns the number of full items actually written, which may be less than count if an error occurs. Also, if an error occurs, the file-position indicator cannot be determined.
So if the return value is less than the count passed, use ferror to find out what happened.
The ferror routine (implemented both as a function and as a macro) tests for a reading or writing error on the file associated with stream. If an error has occurred, the error indicator for the stream remains set until the stream is closed or rewound, or until clearerr is called against it.
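In practice that means checking fwrite's return value right after the call, for example:
#include <stdio.h>

int main() {
    FILE *fp = fopen("out.bin", "wb");
    if (fp == NULL) {
        perror("fopen");
        return 1;
    }
    const char data[] = "some bytes";
    size_t count = sizeof(data) - 1;          // bytes we intend to write
    size_t written = fwrite(data, 1, count, fp);
    if (written < count && ferror(fp))        // fewer items than requested?
        perror("fwrite");                     // report why the write failed
    fclose(fp);
    return 0;
}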