I am trying to write C++ code that reads from a file (of any type) and writes the file data (binary data) to a socket, so the receiver must take this data and create a file; I should see the same data in the same format. The problem is that the data stays binary and is written to the file as raw binary data!
If I test the code without sending it over the network, it works well!
Any ideas?
Thanks in advance.
Note: I am using Ubuntu 11.10, if that affects this issue.
Here is the code, on the client side:
filer=fopen("a.doc","rb");
fseek (filer , 0 , SEEK_END);
long size;
size = ftell (filer);
rewind (filer);
buffer = (char*) malloc (sizeof(char)*size);
numr=fread(buffer,1,size,filer);
fclose(filer); //some socket code
char buffer2[size];
strcpy(buffer2 , buffer);
n = write(sockfd,buffer2,size);
And for the server side:
n = read(sock,buffer,length);
FILE * filew;
int numw;
filew=fopen("acopy.doc","wb");
numw=fwrite(buffer,1,len,filew);
fclose(filew);
The first thing is that you'll need to loop: the calls to read and write will not always transfer the full buffer. (Disclaimer: I couldn't test this here.)
Ex:
numr=fread(buffer,1,size,filer);
fclose(filer); //some socket code
char buffer2[size];
strcpy(buffer2 , buffer);
n = write(sockfd,buffer2,size);
to
char buffer2[size];
while ((numr = fread(buffer, 1, size, filer)) != 0)
{
    memcpy(buffer2, buffer, numr);      // memcpy, not strcpy: binary data may contain 0 bytes
    ssize_t sent = 0;
    while (sent < (ssize_t)numr)        // write() may send less than asked for
    {
        n = write(sockfd, buffer2 + sent, numr - sent);
        if (n <= 0)
            break;                      // error handling goes here
        sent += n;
    }
}
fclose(filer); //some socket code
filer = NULL;
Likewise on the server side
n = read(sock,buffer,length);
FILE * filew;
int numw;
filew=fopen("acopy.doc","wb");
numw=fwrite(buffer,1,len,filew);
fclose(filew);
to
FILE * filew;
filew = fopen("acopy.doc", "wb");
while ((n = read(sock, buffer, length)) > 0)
{
    size_t written = 0;
    while (written < (size_t)n)         // fwrite() may also write less than asked for
    {
        size_t numw = fwrite(buffer + written, 1, n - written, filew);
        if (numw == 0)
            break;                      // error handling goes here
        written += numw;
    }
}
fclose(filew);
I have a server that needs to feed data from clients to a library; however, that library only supports reading files (it uses open to access the file).
Since the data can get pretty big, I'd rather not write it out to a temporary file, read it in with the library, and then delete it afterwards. Instead I would like to do something similar to a ramdisk, where there's a file whose content is actually in memory.
However, since there can be multiple clients sending over large data, I don't think constantly calling mount and umount to create a ramdisk for each client is efficient. Is there a way for me to mount an existing memory buffer as a file without writing to disk?
The library does not support taking in a file descriptor or a FILE*. It will only accept a path, which it feeds directly to open.
I do have the library's source code and attempted to add in a function that uses fmemopen; however, fmemopen returns a FILE* with no file descriptor. The internals of the library work only with file descriptors, and it is too complex to change/add support for FILE*.
I looked at mmap, but it appears to be no different from writing the data out to a file.
Using mount requires sudo access, and I prefer not to run the application as sudo.
bool IS_EXITING = false;
ssize_t getDataSize( int clientFD ) { /* ... */}
void handleClient( int clientFD ) {
// Read in messages to get actual data size
ssize_t dataSize = getDataSize( clientFD );
auto* buffer = new char[ dataSize ];
// Read in all the data from the client
ssize_t bytesRead = 0;
while( bytesRead < dataSize ) {
int numRead = read( clientFD, buffer + bytesRead, dataSize - bytesRead );
bytesRead += numRead;
// Error handle if numRead is <= 0
if ( numRead <= 0 ) { /* ... */ }
}
// Mount the buffer and get a file path... How to do this
std::string filePath = mountBuffer( buffer );
// Library call to read the data
readData( filePath );
delete[ ] buffer;
}
void runServer( int socket ) {
while( !IS_EXITING ) {
auto clientFD = accept( socket, nullptr, nullptr );
// Error handle if clientFD <= 0
if ( clientFD <= 0 ) { /* ... */ }
std::thread clientThread( handleClient, clientFD );
clientThread.detach( );
}
}
Use /dev/fd. Get the file descriptor of the socket, and append that to /dev/fd/ to get the filename.
If the data is in a memory buffer, you could create a thread that writes to a pipe. Use the file descriptor of the read end of the pipe with /dev/fd.
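As a rough sketch of the pipe variant (assuming the question's readData() as the library entry point and leaving most error handling out), it might look like this:
#include <unistd.h>
#include <string>
#include <thread>
void readData( const std::string& path );    // the library call from the question (signature assumed)
// Sketch: expose an in-memory buffer to a path-based library through a pipe.
void feedBufferToLibrary( const char* buffer, ssize_t dataSize ) {
    int fds[ 2 ];
    if ( pipe( fds ) != 0 ) { /* error handling */ return; }
    int readFd  = fds[ 0 ];
    int writeFd = fds[ 1 ];
    // Writer thread pushes the buffer into the pipe and closes the write end,
    // so the library sees EOF once everything has been consumed.
    std::thread writer( [ buffer, dataSize, writeFd ]( ) {
        ssize_t written = 0;
        while ( written < dataSize ) {
            ssize_t n = write( writeFd, buffer + written, dataSize - written );
            if ( n <= 0 ) break;             // error handling
            written += n;
        }
        close( writeFd );
    } );
    // The read end becomes an ordinary-looking path the library can open().
    std::string filePath = "/dev/fd/" + std::to_string( readFd );
    readData( filePath );
    writer.join( );
    close( readFd );
}
The separate writer thread matters because a pipe has limited capacity; without it, a buffer larger than the pipe buffer would block before the library ever starts reading.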
I cobbled together a simple C++ app that dumps HID keycodes from /dev/input/event[x] into a named pipe on Linux. It logs to the console fine but when I read the named pipe from my node.js app, it randomly misses data events.
Relevant C++ code:
int fd;
char * myfifo = "/tmp/testfifo";
mkfifo(myfifo, 0660);
fd = open(myfifo, O_WRONLY);
while (1){
value = ev[0].value;
if (value != ' ' && ev[1].value == 1 && ev[1].type == 1) {
string s = to_string(ev[1].code);
char const *sop = (s + "\n").c_str();
cout << sop;
write(fd, sop, sizeof(sop));
}
}
Relevant node.js code:
var fifo = '/tmp/testfifo';
var fd = fs.openSync(fifo, 'r+');
fs.createReadStream(null, {fd:fd}).on('data', function (d) {
console.log(d);
});
I'm guessing my method for reading the named pipe is flawed, since the C++ output looks good, but I know almost nothing about C++, so I'm not sure whether I'm flushing the pipe properly on the C++ side or whether there is some sort of read throttle I need to tweak on the node.js side. Any ideas?
A couple of errors:
The statement char const *sop = (s + "\n").c_str(); produces a dangling pointer, because the temporary string produced by (s + "\n") is destroyed after the statement has been evaluated.
write(fd, sop, sizeof(sop)); writes sizeof(char const*) bytes, whereas it should write strlen(sop) bytes.
A fix:
std::string sop = s + "\n";
write(fd, sop.data(), sop.size());
let's say I need to send, for instance, five images from a client to a server over a socket and that I want to do it at once (not sending one and waiting for an ACK).
Questions:
I'd like to know if there are some best practices or guidelines for delimiting the end of each one.
What would be the safest approach for detecting the delimiters and processing each image once in the server? (In C/C++ if possible)
Thanks in advance!
Since images are binary data, it would be difficult to come up with a delimiter that cannot be contained in the image (and that would ultimately confuse the receiving side).
I would advise you to create a header that is placed at the beginning of the transmission, or at the beginning of each image.
An example:
struct Header
{
uint32_t ImageLength;
// char ImageName[128];
} __attribute__((packed));
The sender should prepend this before each image and fill in the length correctly. The receiver would then know when the image ends and would expect another Header structure at that position.
The __attribute__((packed)) is a safety measure that makes sure the header has the same layout, with no padding, even if you compile the server and the client with different GCC versions. It's recommended in cases where structures are interpreted by different processes.
Data Stream:
Header
Image Data
Header
Image Data
Header
Image Data
...
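A sender sketch that produces the stream above might look like the following; the htonl() byte-order conversion is my own addition (so the receiving side would have to apply ntohl() accordingly), and sock is assumed to be a connected socket descriptor:
#include <arpa/inet.h>   // htonl
#include <unistd.h>      // write
#include <stdint.h>
// Sketch: send one image preceded by its Header; error handling kept minimal.
bool sendImage(int sock, const char *data, uint32_t len)
{
    Header hdr;
    hdr.ImageLength = htonl(len);            // fixed byte order; the receiver must ntohl()
    if (write(sock, &hdr, sizeof(hdr)) != (ssize_t)sizeof(hdr))
        return false;
    uint32_t sent = 0;
    while (sent < len)                       // sockets may accept short writes
    {
        ssize_t n = write(sock, data + sent, len - sent);
        if (n <= 0)
            return false;
        sent += n;
    }
    return true;
}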
You can use these functions to send files (from a client in Java) to a server (in C). The idea is to send 4 bytes indicating the file's size, followed by the file content; when all files have been sent, send 4 bytes (all set to zero) to indicate the end of the transfer.
// Compile with Microsoft Visual Studio 2008
// path, if not empty, must be ended with a path separator '/'
// for example: "C:/MyImages/"
int receiveFiles(SOCKET sck, const char *pathDir)
{
int fd;
long fSize=0;
char buffer[8 * 1024];
char filename[MAX_PATH];
int count=0;
// keep on receiving until we get the appropriate signal
// or the socket has an error
while (true)
{
if (recv(sck, buffer, 4, 0) != 4)
{
// socket is closed or has an error
// return what we've received so far
return count;
}
fSize = (int) ((buffer[0] & 0xff) << 24) |
(int) ((buffer[1] & 0xff) << 16) |
(int) ((buffer[2] & 0xff) << 8) |
(int) (buffer[3] & 0xff);
if (fSize == 0)
{
// received final signal
return count;
}
sprintf(filename, "%sIMAGE_%d.img", pathDir, count+1);
fd = _creat(filename, _S_IREAD | _S_IWRITE);
int iReads;
int iRet;
int iLeft=fSize;
while (iLeft > 0)
{
if (iLeft > sizeof(buffer)) iReads = sizeof(buffer);
else iReads=iLeft;
if ((iRet=recv(sck, buffer, iReads, 0)) <= 0)
{
_close(fd);
// you may delete the file or leave it to inspect
// _unlink(filename);
return count; // socket is closed or has an error
}
iLeft-=iRet;
_write(fd, buffer, iRet);
}
count++;
_close(fd);
}
}
The client part
/**
* Send a file to a connected socket.
* <p>
* First it sends the file size in 4 bytes, then the file's content.
* </p>
* <p>
* Note: File size is limited to a 32bit signed integer, 2GB
* </p>
*
* @param os
*            OutputStream of the connected socket
* @param fileName
*            The complete file's path of the image to send
* @throws Exception
* @see {@link receiveFile} for an example on how to receive the file from the other side.
*
*/
public void sendFile(OutputStream os, String fileName) throws Exception
{
// File to send
File myFile = new File(fileName);
int fSize = (int) myFile.length();
if (fSize == 0) return; // No empty files
if (fSize < myFile.length())
{
System.out.println("File is too big");
throw new IOException("File is too big.");
}
// Send the file's size
byte[] bSize = new byte[4];
bSize[0] = (byte) ((fSize & 0xff000000) >> 24);
bSize[1] = (byte) ((fSize & 0x00ff0000) >> 16);
bSize[2] = (byte) ((fSize & 0x0000ff00) >> 8);
bSize[3] = (byte) (fSize & 0x000000ff);
// 4 bytes containing the file size
os.write(bSize, 0, 4);
// In case of memory limitations set this to false
boolean noMemoryLimitation = true;
FileInputStream fis = new FileInputStream(myFile);
BufferedInputStream bis = new BufferedInputStream(fis);
try
{
if (noMemoryLimitation)
{
// Use to send the whole file in one chunk
byte[] outBuffer = new byte[fSize];
int bRead = bis.read(outBuffer, 0, outBuffer.length);
os.write(outBuffer, 0, bRead);
}
else
{
// Use to send in a small buffer, several chunks
int bRead = 0;
byte[] outBuffer = new byte[8 * 1024];
while ((bRead = bis.read(outBuffer, 0, outBuffer.length)) > 0)
{
os.write(outBuffer, 0, bRead);
}
}
os.flush();
}
finally
{
bis.close();
}
}
To send the files from the client:
try
{
// The file name must be a fully qualified path
sendFile(mySocket.getOutputStream(), "C:/MyImages/orange.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/lemmon.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/apple.png");
sendFile(mySocket.getOutputStream(), "C:/MyImages/papaya.png");
// send the end of the transmission
byte[] buff = new byte[4];
buff[0]=0x00;
buff[1]=0x00;
buff[2]=0x00;
buff[3]=0x00;
mySocket.getOutputStream().write(buff, 0, 4);
}
catch (Exception e)
{
e.printStackTrace();
}
If you cannot easily send a header containing the length, use a delimiter that is unlikely to occur in the data. If the images are not compressed and consist of bitmap-style data, maybe 0xFF/0xFFFF/0xFFFFFFFF, as fully-saturated luminance values are usually rare?
Use an escape sequence to eliminate any instances of the delimiter that turn up inside your data.
This does mean iterating over all the data at both ends, but depending on your data flows, and what is being done anyway, it may be a useful solution :(
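A sketch of the escape-sequence idea, where the delimiter and escape byte values are purely my own assumptions and not part of any particular image format:
#include <vector>
#include <cstddef>
// Byte-stuffing sketch: 0xFF ends an image, 0xFE escapes any literal 0xFF or 0xFE in the data.
std::vector<unsigned char> escapeImage(const unsigned char *data, std::size_t len)
{
    std::vector<unsigned char> out;
    out.reserve(len + len / 8 + 1);
    for (std::size_t i = 0; i < len; ++i)
    {
        if (data[i] == 0xFF || data[i] == 0xFE)
            out.push_back(0xFE);             // escape prefix before a literal delimiter/escape byte
        out.push_back(data[i]);
    }
    out.push_back(0xFF);                     // unescaped 0xFF marks the end of the image
    return out;
}
The receiver walks the stream the other way: a 0xFE means "take the next byte literally", and a bare 0xFF means the current image is complete.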
I have this code that basically reads from a file, creates a new file, and writes the content from the source to the destination file. It reads into the buffer and creates the file, but fwrite doesn't write the content to the newly created file, and I have no idea why.
Here is the code. (I have to use this approach with _sopen; it's part of legacy code.)
#include <stdio.h>
#include <stdlib.h>
#include <io.h>
#include <fcntl.h>
#include <string>
#include <share.h>
#include <sys\stat.h>
int main () {
std::string szSource = "H:\\cpp\\test1.txt";
FILE* pfFile;
int iFileId = _sopen(szSource.c_str(),_O_RDONLY, _SH_DENYNO, _S_IREAD);
if (iFileId >= 0)
pfFile = fdopen(iFileId, "r");
//read file content to buffer
char * buffer;
size_t result;
long lSize;
// obtain file size:
fseek (pfFile , 0 , SEEK_END);
lSize = ftell (pfFile);
fseek(pfFile, 0, SEEK_SET);
// buffer = (char*) malloc (sizeof(char)*lSize);
buffer = (char*) malloc (sizeof(char)*lSize);
if (buffer == NULL)
{
return false;
}
// copy the file into the buffer:
result = fread (buffer,lSize,1,pfFile);
std::string szdes = "H:\\cpp\\test_des.txt";
FILE* pDesfFile;
int iFileId2 = _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE);
if (iFileId2 >= 0)
pDesfFile = fdopen(iFileId2, "w+");
size_t f = fwrite (buffer , 1, sizeof(buffer),pDesfFile );
printf("Error code: %d\n",ferror(pDesfFile));
fclose (pDesfFile);
return 0;
}
You can build this as a main file and try it to see if it's working for you.
Thanks
Change your code to the following and then report your results:
int main () {
std::string szSource = "H:\\cpp\\test1.txt";
int iFileId = _sopen(szSource.c_str(),_O_RDONLY, _SH_DENYNO, _S_IREAD);
if (iFileId >= 0)
{
FILE* pfFile;
if ((pfFile = fdopen(iFileId, "r")) != (FILE *)NULL)
{
//read file content to buffer
char * buffer;
size_t result;
long lSize;
// obtain file size:
fseek (pfFile , 0 , SEEK_END);
lSize = ftell (pfFile);
fseek(pfFile, 0, SEEK_SET);
if ((buffer = (char*) malloc (lSize)) == NULL)
return false;
// copy the file into the buffer:
result = fread (buffer,(size_t)lSize,1,pfFile);
fclose(pfFile);
std::string szdes = "H:\\cpp\\test_des.txt";
FILE* pDesfFile;
int iFileId2 = _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE);
if (iFileId2 >= 0)
{
if ((pDesfFile = fdopen(iFileId2, "w+")) != (FILE *)NULL)
{
size_t f = fwrite (buffer, (size_t)lSize, 1, pDesfFile);
printf ("elements written <%d>\n", f);
if (f == 0)
printf("Error code: %d\n",ferror(pDesfFile));
fclose (pDesfFile);
}
}
}
}
return 0;
}
[edit]
for other posters, to show the usage/results of fwrite - what is the output of the following?
#include <stdio.h>
int main (int argc, char **argv) {
FILE *fp = fopen ("f.kdt", "w+");
printf ("wrote %d\n", fwrite ("asdf", 4, 1, fp));
fclose (fp);
}
[/edit]
sizeof(buffer) is the size of the pointer, i.e. 4 and not the number of items in the buffer
If buffer is an array then sizeof(buffer) would potentially work as it returns the number of bytes in the array.
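A quick illustration of the difference (not taken from the original code):
char arr[16];
char *ptr = arr;
sizeof(arr);   // 16: the whole array
sizeof(ptr);   // 4 (or 8): just the pointer, regardless of what it points to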
The third parameter to fwrite is sizeof(buffer) which is 4 bytes (a pointer). You need to pass in the number of bytes to write instead (lSize).
Update: It also looks like you're missing the flag indicating the file should be Read/Write: _O_RDWR
This is working for me...
std::string szdes = "C:\\temp\\test_des.txt";
FILE* pDesfFile;
int iFileId2;
errno_t err = _sopen_s(&iFileId2, szdes.c_str(), _O_CREAT|_O_BINARY|_O_RDWR, _SH_DENYNO, _S_IREAD | _S_IWRITE);
if (iFileId2 >= 0)
pDesfFile = _fdopen(iFileId2, "w+");
size_t f = fwrite (buffer , 1, lSize, pDesfFile );
fclose (pDesfFile);
Since I can't find info about _sopen, I can only look at man open. It reports:
int open(const char *pathname, int flags);
int open(const char *pathname, int flags, mode_t mode);
Your call _sopen(szdes.c_str(),_O_CREAT,_SH_DENYNO,_S_IREAD | _S_IWRITE); doesn't match either one of those; you seem to have flags, 'something', and modes. What is _SH_DENYNO?
What is the result of man _sopen?
Finally, shouldn't you close the file descriptor from _sopen after you fclose the file pointer?
Your final lines should look like this, btw :
if (iFileId2 >= 0)
{
pDesfFile = fdopen(iFileId2, "w+");
size_t f = fwrite (buffer , 1, sizeof(buffer),pDesfFile ); //<-- the f returns me 4
fclose (pDesfFile);
}
As it stands, you write the file regardless of whether the fdopen after the _O_CREAT succeeded. You do the same thing at the top: you process the read (and the write) regardless of whether the fdopen of the read-only file succeeded :(
You are using a mixture of C and C++. That is confusing.
The sizeof operator does not do what you expect it to do.
Looks like @PJL and @jschroedl found the real problem, but also in general:
Documentation for fwrite states:
fwrite returns the number of full items actually written, which may be less than count if an error occurs. Also, if an error occurs, the file-position indicator cannot be determined.
So if the return value is less than the count passed, use ferror to find out what happened.
The ferror routine (implemented both as a function and as a macro) tests for a reading or writing error on the file associated with stream. If an error has occurred, the error indicator for the stream remains set until the stream is closed or rewound, or until clearerr is called against it.
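For example, a check along these lines (reusing the question's variable names purely as an illustration):
size_t want = (size_t)lSize;
size_t got  = fwrite(buffer, 1, want, pDesfFile);
if (got < want && ferror(pDesfFile))
    fprintf(stderr, "fwrite failed after %u bytes\n", (unsigned)got);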
I'm writing a Qt-based client application. It connects to a remote server using QTcpSocket. Before sending any actual data it needs to send login info, which is zlib-compressed JSON.
As far as I can tell from the server sources, to make everything work I need to send 4 bytes containing the length of the uncompressed data, followed by X bytes of compressed data.
Decompression on the server side looks like this:
/* look at first 32 bits of buffer, which contains uncompressed len */
unc_len = le32toh(*((uint32_t *)buf));
if (unc_len > CLI_MAX_MSG)
return NULL;
/* alloc buffer for uncompressed data */
obj_unc = malloc(unc_len + 1);
if (!obj_unc)
return NULL;
/* decompress buffer (excluding first 32 bits) */
comp_p = buf + 4;
if (uncompress(obj_unc, &dest_len, comp_p, buflen - 4) != Z_OK)
goto out;
if (dest_len != unc_len)
goto out;
memcpy(obj_unc + unc_len, &zero, 1); /* null terminate */
I'm compressing the JSON using Qt's built-in zlib (I've just downloaded the headers and placed them in MinGW's include folder):
char json[] = "{\"version\":1,\"user\":\"test\"}";
char pass[] = "test";
std::auto_ptr<Bytef> message(new Bytef[ // allocate memory for:
sizeof(ubbp_header) // + msg header
+ sizeof(uLongf) // + uncompressed data size
+ strlen(json) // + compressed data itself
+ 64 // + reserve (if compressed size > uncompressed size)
+ SHA256_DIGEST_LENGTH]);//+ SHA256 digest
uLongf unc_len = strlen(json);
uLongf enc_len = strlen(json) + 64;
// header goes first, so server will determine that we want to login
Bytef* pHdr = message.get();
// after that: uncompressed data length and data itself
Bytef* pLen = pHdr + sizeof(ubbp_header);
Bytef* pDat = pLen + sizeof(uLongf);
// hash of compressed message updated with user pass
Bytef* pSha;
if (Z_OK != compress(pLen, &enc_len, (Bytef*)json, unc_len))
{
qDebug("Compression failed.");
return false;
}
Complete function code here: http://pastebin.com/hMY2C4n5
Even though the server correctly receives the uncompressed length, uncompress() returns Z_BUF_ERROR.
P.S.: I'm actually writing a client for pushpool to figure out how its binary protocol works. I've asked this question on the official bitcoin forum, but no luck there: http://forum.bitcoin.org/index.php?topic=24257.0
Turns out it was a server-side bug. More details in the bitcoin forum thread.