I have a server that needs to feed data from clients to a library; however, that library only supports reading files (it uses open to access the file).
Since the data can get pretty big, I'd rather not write it out to a temporary file, read it in with the library, and then delete the file afterwards. Instead, I would like something similar to a ramdisk: a file whose contents actually live in memory.
However, there can be multiple clients sending over large data, and I don't think constantly calling mount and umount to create a ramdisk for each client is efficient. Is there a way for me to expose an existing memory buffer as a file without writing to disk?
The library does not support taking a file descriptor or a FILE*. It will only accept a path, which it feeds directly to open.
I do have the library's source code and attempted to add a function that uses fmemopen; however, fmemopen returns a FILE* with no underlying file descriptor. The internals of the library work only with file descriptors, and it is too complex to change or extend to use FILE*.
I looked at mmap, but it appears to be no different from writing the data out to a file.
Using mount requires sudo access, and I'd prefer not to run the application as root.
bool IS_EXITING = false;

ssize_t getDataSize( int clientFD ) { /* ... */ }

void handleClient( int clientFD ) {
    // Read in messages to get the actual data size
    ssize_t dataSize = getDataSize( clientFD );
    auto* buffer = new char[ dataSize ];
    // Read in all the data from the client
    ssize_t bytesRead = 0;
    while( bytesRead < dataSize ) {
        ssize_t numRead = read( clientFD, buffer + bytesRead, dataSize - bytesRead );
        // Error handle if numRead is <= 0
        if ( numRead <= 0 ) { /* ... */ }
        bytesRead += numRead;
    }
    // Mount the buffer and get a file path... How to do this?
    std::string filePath = mountBuffer( buffer );
    // Library call to read the data
    readData( filePath );
    delete[] buffer;
}

void runServer( int socket ) {
    while( !IS_EXITING ) {
        auto clientFD = accept( socket, nullptr, nullptr );
        // Error handle if clientFD <= 0
        if ( clientFD <= 0 ) { /* ... */ }
        std::thread clientThread( handleClient, clientFD );
        clientThread.detach( );
    }
}
Use /dev/fd. Get the file descriptor of the socket, and append that to /dev/fd/ to get the filename.
If the data is in a memory buffer, you could create a thread that writes it into a pipe, then use the file descriptor of the read end of the pipe with /dev/fd.
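A minimal sketch of that pipe approach, assuming Linux's /dev/fd (or /proc/self/fd); the helper name pipeBufferPath and its signature are illustrative, not part of any library. One caveat: a pipe is not seekable, so this only works if the library reads the file strictly front to back.

#include <unistd.h>
#include <string>
#include <thread>

// Spawn a writer thread that feeds `buffer` into a pipe, and return a
// /dev/fd path for the read end. The caller must join `writer` and
// close readFD once the library is done with the file.
std::string pipeBufferPath( const char* buffer, ssize_t size,
                            std::thread& writer, int& readFD )
{
    int fds[ 2 ];
    if ( pipe( fds ) != 0 )
        return "";
    readFD = fds[ 0 ];
    writer = std::thread( [ fds, buffer, size ]( ) {
        ssize_t written = 0;
        while ( written < size ) {
            ssize_t n = write( fds[ 1 ], buffer + written, size - written );
            if ( n <= 0 )
                break; // reader closed early, or a write error
            written += n;
        }
        close( fds[ 1 ] ); // signals EOF to the reader
    } );
    return "/dev/fd/" + std::to_string( fds[ 0 ] );
}

In handleClient above, the hypothetical mountBuffer( buffer ) call would become something like pipeBufferPath( buffer, dataSize, writerThread, readFD ), followed by writerThread.join() and close( readFD ) after readData( filePath ) returns.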
So I am trying to send a JPEG image (4 KB) from a Raspberry Pi to my Mac wirelessly, using XBee Series 1 modules. I have an image on the Raspberry Pi and can read it into binary form. I've used that binary data to save a copy into another image file, and it creates a correct copy of the image; that tells me I am reading it correctly. So now I am trying to send that data over a serial port (to be transferred by the XBees) to my Mac. Side note: I think an XBee can only transmit about 80 bytes of data per packet, but I don't know how that affects what I'm doing.
My problem is that I do not know how to read the data on the receiving side and properly store it in a JPEG file. Most of the read() functions I have found require you to pass a length to read, and I don't know how long the data is, since it's just a serial stream coming in.
Here is my code to send the jpeg.
#include "xSerial.hpp"
#include <iostream>
#include <cstdlib>
using namespace std;
int copy_file( const char* srcfilename, const char* dstfilename );
int main()
{
    copy_file( "tylerUseThisImage.jpeg", "copyImage.jpeg" );
    return 0;
}
int copy_file( const char* srcfilename, const char* dstfilename )
{
    long len;
    char* buf = NULL;
    FILE* fp = NULL;

    // Open the source file
    fp = fopen( srcfilename, "rb" );
    if (!fp) return 0;

    // Get its length (in bytes)
    if (fseek( fp, 0, SEEK_END ) != 0)  // This should typically succeed
    {                                   // (beware the 2Gb limitation, though)
        fclose( fp );
        return 0;
    }
    len = ftell( fp );
    std::cout << len;
    rewind( fp );

    // Get a buffer big enough to hold it entirely
    buf = (char*)malloc( len );
    if (!buf)
    {
        fclose( fp );
        return 0;
    }

    // Read the entire file into the buffer
    if (!fread( buf, len, 1, fp ))
    {
        free( buf );
        fclose( fp );
        return 0;
    }
    fclose( fp );

    // Open the destination file
    fp = fopen( dstfilename, "wb" );
    if (!fp)
    {
        free( buf );
        return 0;
    }

    // This is where I send the data, but over the serial port.
    // serialWrite() is just the standard write() being used.
    int fd;
    fd = xserialOpen("/dev/ttyUSB0", 9600);
    serialWrite(fd, buf, len);

    // This is where the file gets copied to another file as a test.
    // Write the entire buffer to file
    if (!fwrite( buf, len, 1, fp ))
    {
        free( buf );
        fclose( fp );
        return 0;
    }

    // All done -- return success
    fclose( fp );
    free( buf );
    return 1;
}
On the receiving side, I know I need to open the serial port and use some sort of read(), but I don't know how that is done. The serial library I'm using has functions to check whether serial data is available and to return the number of characters available to read.
One question about the number of characters available to read: will that number grow as the serial stream comes in, or will it immediately report the entire length of the data to be read?
Finally, I know that after I open the serial port I need to read the data into a buffer and then write that buffer to a file, but I have not had any luck. This is what I have tried so far.
// Loop, getting and writing out characters
bool readComplete = false;
int bytesRead = 0;
fp = fopen("copyImage11.jpeg", "wb");  // "rwb" is not a valid mode string
for (;;)
{
    if (xserialDataAvail(fd) > 0)
    {
        bytesRead = serialRead(fd, buf, len);
        readComplete = true;
    }
    if (readComplete)
    {
        if (!fwrite(buf, bytesRead, 1, fp))
        {
            free(buf);
            fclose(fp);
            return 0;
        }
        fclose(fp);
        free(buf);
        return 1;
    }
}
I don't get errors with my code; it just doesn't create the JPEG file correctly. Maybe I'm not transmitting it right, or maybe I'm not reading/writing the file correctly. Any help would be appreciated. Thanks everyone, you rock!
If you are defining your own protocol, then you need a way to send the length first.
I would recommend testing your code by sending short blocks of ASCII text to confirm your I/O. Once that is working, you can use the ASCII channel to set up the transfer; i.e., send the length, and have your receiver ready for an expected block.
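To make that concrete, here is a minimal sketch of a length-prefixed receive loop built on the xSerial helpers from the question (serialRead, xserialDataAvail); their exact signatures are assumed, and the 4-byte big-endian header is an arbitrary protocol choice, not part of any XBee standard.

#include <cstdio>
#include <cstdint>

// Hypothetical receive side: read a 4-byte big-endian length header,
// then keep reading until that many payload bytes have arrived.
int receiveFile( int fd, const char* dstfilename )
{
    unsigned char hdr[4];
    int got = 0;
    while ( got < 4 ) {
        int n = serialRead( fd, (char*)hdr + got, 4 - got ); // assumed helper
        if ( n > 0 ) got += n;
    }
    uint32_t len = (uint32_t(hdr[0]) << 24) | (uint32_t(hdr[1]) << 16)
                 | (uint32_t(hdr[2]) << 8)  |  uint32_t(hdr[3]);

    FILE* fp = fopen( dstfilename, "wb" );
    if ( !fp ) return 0;

    char chunk[64]; // comfortably below the ~80-byte XBee packet limit
    uint32_t total = 0;
    while ( total < len ) {
        int n = serialRead( fd, chunk, sizeof chunk );
        if ( n <= 0 ) continue; // nothing available yet; real code should time out
        fwrite( chunk, 1, (size_t)n, fp );
        total += (uint32_t)n;
    }
    fclose( fp );
    return 1;
}

The sender writes the same 4-byte header before the image data. The key point is that the receiver never relies on a single read returning the whole file; it accumulates bytes until the announced length is reached. That also answers the question above: the available-byte count only reflects what has arrived so far, not the total to come.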
I am developing a distributed system in which a server distributes a huge task to clients, who process it and return the result.
The server has to accept huge files, with sizes on the order of 20 GB.
It has to split each file into smaller pieces and send the paths to the clients, who in turn scp the pieces and process them.
I am using read and write to perform the file splitting, and it is ridiculously slow.
Code
//fildes     - Source file handle
//offset     - The point from which the split is to be made
//buffersize - How much to split
//This function is called in a for loop
void chunkFile(int fildes, char* filePath, int client_id, unsigned long long* offset, int buffersize)
{
    unsigned char* buffer = (unsigned char*) malloc( buffersize );
    char* clientFileName = (char*) malloc( 1024 );
    /* prepare client file name; snprintf guards against overflow */
    snprintf( clientFileName, 1024, "%s%d.txt", filePath, client_id );

    ssize_t readcount = 0;
    if( (readcount = pread64( fildes, buffer, buffersize, *offset )) < 0 )
    {
        /* error reading file */
        printf("error reading file\n");
    }
    else
    {
        *offset = *offset + readcount;
        //printf("Read %zd bytes\n And offset becomes %llu\n", readcount, *offset);
        int clnfildes = open( clientFileName, O_CREAT | O_TRUNC | O_WRONLY, 0777 );
        if( clnfildes < 0 )
        {
            /* error opening client file */
        }
        else
        {
            if( write( clnfildes, buffer, readcount ) != readcount )
            {
                /* error writing client file */
            }
            close( clnfildes ); /* close on both success and failure */
        }
    }
    free( clientFileName ); /* was leaked in the original */
    free( buffer );
}
Is there any faster way to split files?
Is there any way a client can access its chunk of the file without using scp (read without transfer)?
I am using C++, but I am ready to use other languages if they can perform faster.
You can place the file within reach of a web server and then use curl from the clients:
curl --range 10000-20000 http://the.server.ip/file.dat > result
would fetch bytes 10000 through 20000 (HTTP ranges are inclusive, so that is 10001 bytes).
If the file is highly redundant and the network is slow, using compression could speed up the transfer a lot. For example, executing
nc -l -p 12345 | gunzip > chunk
on the client and then executing
dd skip=10000 count=10000 if=bigfile bs=1 | gzip | nc client.ip.address 12345
on the server, you can transfer a section of the file, gzip-compressing it on the fly, without creating any intermediate files.
EDIT
A single command to get a section of a file from a server using compression over the network is
ssh server 'dd skip=10000 count=10000 bs=1 if=bigfile | gzip' | gunzip > chunk
Is rsync over SSH with --partial an option (e.g. rsync --partial user@server:bigfile .)?
Then you might not need to split the files at all, since you can simply resume if the transfer is interrupted.
Are the file split sizes known in advance or are they split along some marker in the file?
You can deposit the file on an NFS share, and the clients can mount that share read-only. Each client can then open the file and use mmap() or pread() to read its own slice (piece) of the file. That way, only the part of the file a client actually needs is transferred to it.
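A minimal sketch of that client-side slice read, assuming a hypothetical mount point /mnt/shared and that each client already knows its own offset and size (error handling kept short):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <vector>

int main( )
{
    const char* path = "/mnt/shared/bigfile.dat"; // assumed NFS mount
    off_t  sliceOffset = 10000;                   // this client's slice
    size_t sliceSize   = 10000;

    int fd = open( path, O_RDONLY );
    if ( fd < 0 ) { perror( "open" ); return 1; }

    std::vector<char> slice( sliceSize );
    size_t total = 0;
    while ( total < sliceSize ) {
        ssize_t n = pread( fd, slice.data() + total,
                           sliceSize - total, sliceOffset + (off_t)total );
        if ( n <= 0 ) break; // EOF or read error
        total += (size_t)n;
    }
    close( fd );
    // ...process the `total` bytes in `slice`...
    return 0;
}

Since pread takes an explicit offset, multiple client threads or processes can read disjoint slices of the same file without seeking past each other.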
As the title says, I'm getting this error while trying to open a file for binary writing (the mode doesn't seem to matter).
My app uses libev to handle sockets (non-blocking, epoll backend), and while parsing client packets I want, at the point where I receive a file-upload message, to start writing the incoming data to disk.
I couldn't google anything about the EAGAIN (Resource temporarily unavailable) message in connection with opening files.
These are the methods I've tried:
fopen( ... ) returns EAGAIN
using ofstream/fstream's open(...), creating them on the heap (new), returns EAGAIN
using ofstream/fstream's open(...) statically, as a class member (ofstream m_ofFile;), works, but strangely the compiler generates code which calls the ofstream destructor and closes the file before exiting the class method I'm calling .open from. That contradicts my C++ knowledge, where the destructors of class-type members are called right before their owner's.
edit:
@Joachim:
You're right, I'm not actually getting this error (with method #1; I'm going to test method #2 again soon). The file opens fine and I get a regular FILE*. That happens in the Init(...) function of my class, but when I call OnFileChunk later on, m_hFile is 0 and therefore I can't write to it. Here is the complete class code:
class CFileTransferCS
{
    wstring m_wszfile;
    wstring m_wszLocalUserFolderPath;
    int     m_nChunkIndex;
    int     m_nWrittenBytes;
    int     m_nFileSize;
    FILE*   m_hFile;

    CFileTransferCS( const CFileTransferCS& c ) {}
    CFileTransferCS& operator=( const CFileTransferCS& c ) { return *this; }

public:
    CFileTransferCS( );
    CFileTransferCS( wstring file, uint32_t size );
    void OnFileChunk( char* FileChunk, int size );
    void Init( wstring file, uint32_t size );
    void SetLocalUserLocalPath( wstring path );
};
CFileTransferCS::CFileTransferCS( )
{
    m_hFile = NULL;
    m_wszLocalUserFolderPath = L"";
    m_nChunkIndex = 0;
    m_nWrittenBytes = 0;
}

CFileTransferCS::CFileTransferCS( wstring file, uint32_t size )
{
    m_nChunkIndex = 0;
    m_nWrittenBytes = 0;
    m_wszfile = file;
    m_nFileSize = size;
    wstring wszFullFilePath = m_wszLocalUserFolderPath + m_wszfile.substr( m_wszfile.find_last_of( L"\\" ) + 1 );
    // string fp = string( file.begin(), file.end() );
    string fp = "test.bin"; // for testing purposes
    this->m_hFile = fopen( fp.c_str(), "wb" );
    printf( "fp: %s hFile %p\n", fp.c_str(), (void*)this->m_hFile ); // everything's fine here...
    if( !this->m_hFile )
    {
        perror( "can't open file" );
    }
}
void CFileTransferCS::SetLocalUserLocalPath( wstring path )
{
    m_wszLocalUserFolderPath = path;
}

void CFileTransferCS::Init( wstring file, uint32_t size )
{
    // If a previous transfer session got interrupted for whatever reason,
    // close and delete the old file and open a new one
    if( this->m_hFile )
    {
        printf( "init CS transfer: deleting old file\n" );
        fclose( this->m_hFile );
        string fp = string( file.begin(), file.end() );
        if( remove( fp.c_str() ) )
        {
            // can't delete file...
        }
    }
    CFileTransferCS( file, size );
}
void CFileTransferCS::OnFileChunk( char* FileChunk, int size )
{
    for (;;)
    {
        printf( "ofc: hFile %p\n", (void*)this->m_hFile ); // m_hFile is 0 here...
        if( !this->m_hFile )
        {
            // m_pofFile->open( "kurac.txt", fstream::out );
            printf( "file not opened!\n" );
            break;
        }
        int nBytesWritten = fwrite( FileChunk, 1, size, this->m_hFile );
        if( !nBytesWritten )
        {
            perror( "file write!!\n" );
            break;
        }
        m_nWrittenBytes += size;
        if( m_nWrittenBytes == m_nFileSize )
        {
            fclose( m_hFile );
            printf( "file upload transfer finished!!!\n" );
        }
        break;
    }
    printf( "CFileTransferCS::OnFileChunk size: %d m_nWrittenBytes: %d m_nFileSize: %d\n", size, m_nWrittenBytes, m_nFileSize );
}
final edit:
I got it. Explicitly calling the CFileTransferCS( wstring file, uint32_t size ) constructor was the problem. Calling a constructor like that as a statement meant the this pointer inside it wasn't the original one (the one Init was using), so when I opened the file and saved the handle to m_hFile, I was doing it in some other object. (I'm not yet sure whether the CFileTransferCS(..) call allocated memory for a separate CFileTransferCS object or corrupted some other part of memory; I'll check it out with IDA later on.)
Thanks to everyone, and my apologies.
Regards, Mike
@MikeJackson's answer:
Explicitly calling the CFileTransferCS( wstring file, uint32_t size ) constructor caused the problem: the this pointer inside it wasn't the original one (the one Init was using), so the file was opened, and its handle stored, in some other object.
The fix was to remove the line: CFileTransferCS( file, size );
(No need to apologize, Mike; it looks like you did a great job hunting down the bug.)
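For reference, a minimal sketch of one way to rewrite Init so that it sets up the current object instead of constructing a throwaway temporary (member names taken from the class above):

void CFileTransferCS::Init( wstring file, uint32_t size )
{
    // Clean up any interrupted previous session, as before
    if( m_hFile )
    {
        fclose( m_hFile );
        string old( file.begin(), file.end() );
        remove( old.c_str() );
    }
    // Assign the members of *this* directly; the statement
    // CFileTransferCS( file, size ); only built a temporary object
    m_nChunkIndex = 0;
    m_nWrittenBytes = 0;
    m_wszfile = file;
    m_nFileSize = size;
    string fp( file.begin(), file.end() );
    m_hFile = fopen( fp.c_str(), "wb" ); // the handle now lands in this object
    if( !m_hFile )
        perror( "can't open file" );
}

A constructor cannot be re-run on an existing object, so plain member assignment like this is the straightforward fix.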
I am trying to write C++ code which reads data from a file (any type of file) and writes that binary data to a socket; the receiver must take the data and recreate the file, and I should see the same data in the same format. The problem is that the data is still binary and is written to the file as binary data!
If I test the code without sending over a network, it works well!
Any ideas?
Thanks in advance.
Note: I am using Ubuntu 11.10, if that affects this issue.
Here is the code, on the client side:
filer=fopen("a.doc","rb");
fseek (filer , 0 , SEEK_END);
long size;
size = ftell (filer);
rewind (filer);
buffer = (char*) malloc (sizeof(char)*size);
numr=fread(buffer,1,size,filer);
fclose(filer); //some socket code
char buffer2[size];
strcpy(buffer2 , buffer);
n = write(sockfd,buffer2,size);
and for the server side:
n = read(sock,buffer,length);
FILE * filew;
int numw;
filew=fopen("acopy.doc","wb");
numw=fwrite(buffer,1,len,filew);
fclose(filew);
The first thing is that you'll need to loop. Calls to read and write will not always transfer the full buffer. Disclaimer: I couldn't test this here.
Ex:
numr=fread(buffer,1,size,filer);
fclose(filer); //some socket code
char buffer2[size];
strcpy(buffer2 , buffer);
n = write(sockfd,buffer2,size);
to
char buffer2[size];
while ((numr = fread(buffer, 1, size, filer)) > 0)
{
    // Use memcpy, not strcpy: binary data may contain '\0' bytes,
    // which would truncate a strcpy.
    memcpy(buffer2, buffer, numr);
    ssize_t sent = 0;
    while (sent < (ssize_t)numr)
    {
        ssize_t n = write(sockfd, buffer2 + sent, numr - sent);
        if (n <= 0) break; // write error; real code should handle it
        sent += n;
    }
}
fclose(filer); //some socket code
filer = NULL;
Likewise on the server side
n = read(sock,buffer,length);
FILE * filew;
int numw;
filew=fopen("acopy.doc","wb");
numw=fwrite(buffer,1,len,filew);
fclose(filew);
to
FILE * filew;
filew = fopen("acopy.doc", "wb");
while ((n = read(sock, buffer, length)) > 0)
{
    // Loop until everything read from the socket has been written out
    ssize_t written = 0;
    while (written < n)
    {
        size_t numw = fwrite(buffer + written, 1, n - written, filew);
        if (numw == 0) break; // write error; real code should handle it
        written += numw;
    }
}
fclose(filew);
I am using inotify to monitor a local file, for example "/root/temp" using
inotify_add_watch(fd, "/root/temp", mask).
When this file is deleted, the program blocks in the read(fd, buf, bufSize) call. Even if I create a new /root/temp file, the program is still blocked in read. I am wondering whether inotify can detect that the monitored file has been created again, so that read can get something from fd and will not be blocked forever.
Here is my code:
uint32_t mask = IN_ALL_EVENTS;
int fd = inotify_init();
int wd = inotify_add_watch(fd, "/root/temp", mask);
char *buf = new char[1000];
int nbytes = read(fd, buf, 500);
I monitored all events.
The problem is that read is a blocking operation by default.
If you don't want it to block, use select or poll before read. For example:
struct pollfd pfd = { fd, POLLIN, 0 };
int ret = poll(&pfd, 1, 50); // timeout of 50ms
if (ret < 0) {
    fprintf(stderr, "poll failed: %s\n", strerror(errno));
} else if (ret == 0) {
    // Timeout with no events, move on.
} else {
    // Process the new event(s). The buffer must have room for at least one
    // event plus its variable-length name (NAME_MAX is in <limits.h>),
    // otherwise read() on an inotify fd fails with EINVAL.
    char buf[sizeof(struct inotify_event) + NAME_MAX + 1];
    int nbytes = read(fd, buf, sizeof(buf));
    // Do what you need...
}
Note: untested code.
In order to see a new file being created, you need to watch the directory, not the file. Watching the file will show you when it is deleted (IN_DELETE_SELF), but it will not spot a new file created with the same name, because the watch is tied to the old inode.
You should watch the directory for IN_CREATE | IN_MOVED_TO to see newly created files (or files moved in from another place).
Some editors and other tools (e.g. rsync) may create a file under a different name, then rename it.
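A minimal sketch of the directory-watch approach, assuming the file of interest is /root/temp as in the question (error handling omitted for brevity):

#include <sys/inotify.h>
#include <cstdio>
#include <cstring>
#include <unistd.h>

int main( )
{
    int fd = inotify_init();
    // Watch the parent directory: the events then carry the file's name,
    // so recreating "temp" shows up as IN_CREATE (or IN_MOVED_TO for a
    // rename into place).
    int wd = inotify_add_watch( fd, "/root",
                                IN_CREATE | IN_MOVED_TO | IN_DELETE );
    char buf[4096];
    ssize_t len = read( fd, buf, sizeof buf ); // blocks until events arrive
    for ( char* p = buf; p < buf + len; )
    {
        struct inotify_event* ev = (struct inotify_event*)p;
        if ( ev->len && strcmp( ev->name, "temp" ) == 0 )
            printf( "temp: event mask 0x%x\n", ev->mask );
        p += sizeof(struct inotify_event) + ev->len;
    }
    inotify_rm_watch( fd, wd );
    close( fd );
    return 0;
}

Combined with the poll loop above, this lets the program notice the deletion of /root/temp and then wait for its replacement without blocking forever on a stale watch.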