I want to write C++ code to get a file from server B via server A using passwordless sftp. The file on server B is in fact being copied (via sftp) from another server C. I was able to retrieve the file from server B; however, if the file was still being copied, I still got it (an incomplete file, since it was still being transferred from server C to server B). I want to add a check: if the file is still being copied, I should not fetch it via sftp, but wait until it has been completely transferred. As far as I know, the sftp prompt does not support many commands. Can somebody please give me some input on how I can achieve this?
A traditional way to do this is to transfer the (big) "paydata" file, and a (small / empty) "flag" file after that. You (on the receiving end) wait until the flag file exists. If it does, the transfer of the paydata file has finished; delete the flag file, and do whatever you do with the paydata file.
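For example, with libssh's SFTP API the receiving side can poll for the flag file before fetching the paydata file. A minimal sketch, assuming an already connected and authenticated ssh_session and placeholder file names:

#include <libssh/libssh.h>
#include <libssh/sftp.h>
#include <unistd.h>

// Sketch only: 'session' is assumed to be a connected, authenticated ssh_session;
// the remote paths are placeholders.
bool wait_for_flag_and_fetch(ssh_session session)
{
    sftp_session sftp = sftp_new(session);
    if (sftp == NULL || sftp_init(sftp) != SSH_OK)
        return false;

    // Poll until the small flag file shows up.
    // (A real implementation would use sftp_get_error() to distinguish
    // "no such file" from other failures instead of looping forever.)
    const char *flag_path = "/incoming/paydata.done";
    sftp_attributes attrs = NULL;
    while ((attrs = sftp_stat(sftp, flag_path)) == NULL)
        sleep(5);                     // flag not there yet: wait and retry
    sftp_attributes_free(attrs);

    // At this point the paydata file is complete: download it with your
    // existing code (e.g. sftp_open/sftp_read), then remove the flag so
    // the next transfer starts clean.
    sftp_unlink(sftp, flag_path);

    sftp_free(sftp);
    return true;
}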
I'm in the process of trying to copy an HDF5 binary file from a local machine to a remote computing blade. I am using libssh to copy the desired directory or files out after they are generated by my Qt application. Using libssh I am able to open an ssh_session, authenticate it, open a channel and send remote commands.
for (QStringList::iterator it = ipList.begin(); it != ipList.end(); ++it)
{
    // ssh_new() already returns an ssh_session; no 'new' needed.
    ssh_session my_session = ssh_new();
    QString ip_address = *it;
    ssh_options_set(my_session, SSH_OPTIONS_HOST, ip_address.toStdString().c_str());
    // Connect... Authenticate using public key...
    QString command = QString("rm -r %2; cp -r %1 %2; cp /local/file.txt /remote/file.txt").arg(local_dir, remote_dir);
    // Open channel and execute command
    execute_remote_command(my_session, command.toStdString().c_str());
    ssh_disconnect(my_session);
    ssh_free(my_session);
}
This command is executed for each individual computing blade. Between the calls I close the ssh session and open one to the next blade. The files make it out to the blades, but they appear to be corrupt. They are the exact same file size. I haven't figured out a way to compare the individual bytes to see just how corrupt they are; any tips there would be appreciated as well.
When I run the ssh copy commands from a separate test terminal program, the files arrive intact and are readable on the blades. The issue only seems to occur when the files are moved from within the Qt GUI program.
EDIT: Delving a little deeper into what is wrong, it appears that the file on the remote server is not the same size after all. It is missing a large portion of the bytes. On top of that, when I compare what is there byte by byte with the local version of the file, almost all of the bytes differ.
It turns out the answer was that the HDF5 writer wasn't being closed properly before the SSH commands were called. I fixed the problem by dynamically allocating the custom HDF5 writer class that someone else wrote and making sure to delete it before the SSH commands were called. Whoever wrote the HDF5 read/write class didn't handle file opening and closing properly and didn't provide functions to do so.
Below is an example of what I am talking about.
HDF5writer_class *hdf5_writer = new HDF5writer_class();
hdf5_writer->create_file("/local/machine/hdf5_file.h5");
// ... add the data to the file
delete hdf5_writer;   // deleting the writer releases the file before the copy starts
// Open SSH Session and run the copy commands
Long story short, make sure the file you are writing is closed and released for use before you try to copy it.
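If you prefer not to manage the allocation by hand, giving the writer its own scope achieves the same thing, assuming (as above) that destroying the writer is what closes and releases the file:

{
    // The writer lives only inside this block; its destructor runs at the
    // closing brace, so the file is released before any copy starts.
    HDF5writer_class hdf5_writer;
    hdf5_writer.create_file("/local/machine/hdf5_file.h5");
    // ... add the data to the file
}
// Now open the SSH session and run the copy commands.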
Well, this time I'm trying to write a program in C which recovers deleted files from a disk (it could be an external disk). I have an approach I used before on Linux: open the disk as a kind of file and scan for the headers and footers of every file on the disk. The point is, I'm not sure whether Windows allows opening a disk as a file. Basically I have the logic for how to develop this program, but I'm not sure how to implement it on Windows. Can anybody give me a hand with this?
The code I used on linux to open a disk as a file was:
Edit: That was just a sample to give you an idea of what I was doing, guys. The actual code I used was the following:
direccion = ui->linea->text().toLatin1().constData();
f = fopen(direccion,"rb");
I used Qt Creator on Linux, and the direccion variable held the file path of the disk, taken from a text field that was filled in by a button handler that opened a QFileDialog...
Could I use this on Windows as well?
Thank you beforehand.
"The code I used on linux to open a disk as a file was:"
File *f = fopen("E:\", "rb");
I seriously doubt you ever got this code working on any linux system (or windows either).
You'll need to escape the backslash path delimiter if it appears in a string literal:
FILE* f = fopen("E:\\", "rb");
// ^
Also, the path style you are using to access a particular disk is a Windows path style.
No Linux file system has a notion of drive letters, and the path delimiter used is '/', not '\\'.
To recover deleted files, you can't use fopen or fstream::open, because the file has been deleted and the call will simply fail. Check the return value of the function, or test the stream state.
The way to recover deleted files is:
1. Get the Master File Table as raw data.
2. Search for the record containing a string similar to the deleted filename.
3. Change the entry in the Master File Table to "undeleted".
4. Write the Master File Table back to the drive.
The above usually requires platform-specific APIs, which differ between Linux and Windows.
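On Windows, the platform-specific part starts with opening the volume itself for raw, read-only access via CreateFile on a \\.\ device path. A minimal sketch (the drive letter E: is a placeholder, and this typically requires administrator privileges):

#include <windows.h>
#include <stdio.h>

int main()
{
    // Open the E: volume for raw, read-only access.
    HANDLE hVolume = CreateFileA("\\\\.\\E:", GENERIC_READ,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hVolume == INVALID_HANDLE_VALUE)
    {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    // Reads on a volume handle must be sector aligned; 4096 is a safe multiple.
    unsigned char sector[4096];
    DWORD bytesRead = 0;
    if (ReadFile(hVolume, sector, sizeof(sector), &bytesRead, NULL))
    {
        // sector[] now holds the first bytes of the volume (the boot sector on
        // NTFS); from here you would locate and parse the Master File Table.
        printf("Read %lu bytes\n", bytesRead);
    }

    CloseHandle(hVolume);
    return 0;
}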
I am trying to get the content of a file using WinHTTP in C++. The file is an XML file and is generated by an executable on a server.
The code to init, connect, and read a file at the specified server address is working:
// Connect to internet.
m_hInternet = InternetOpen(L"HTTPRIP", INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
// Check if worked.
if( !m_hInternet )
    return;
// Connect to selected URL.
m_hUrl = InternetOpenUrlA(m_hInternet, strUrl.c_str(), NULL, 0, INTERNET_FLAG_PRAGMA_NOCACHE | INTERNET_FLAG_RESYNCHRONIZE, 0);
// Check if worked.
if( !m_hUrl )
    return;
if( InternetReadFile(m_hUrl, buf, BUFFER_SIZE, &bytesread) && bytesread != 0 )
{
    // Put into std::string.
    strData = std::string(buf, buf + bytesread);
}
Now I want to re-fetch the file (same address). The server updates the file at 50 Hz, and I want my code to read the file only if it has been updated by the server. Can InternetReadFile do that kind of thing, maybe with a flag? I didn't find anything on MSDN.
Thanks for your help.
There is no way in the HTTP protocol for you to do that directly, hence there is no such function in WinHTTP. If the file is relatively small, the easiest solution might be to download it and see whether it has changed. If the file is large, let the server process that writes the file also write a timestamp, checksum, or counter file next to it.
Your code would then download that small file, check whether it has changed, and only in that case download the original file.
Another solution would be to put a timestamp or similar data at the beginning of the XML file and stop downloading once you see that the timestamp (or checksum) has not been updated. (This has its own drawbacks, of course; you may have to write your own parser.)
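For the sidecar-file approach above, the check can reuse the same WinINet calls as in the question. A minimal sketch, assuming the server writes a hypothetical small stamp file next to the XML (the URL is a placeholder, and hInternet corresponds to m_hInternet from the code above):

#include <windows.h>
#include <wininet.h>
#include <string>

// Returns the content of the small "stamp" URL. The caller keeps the last
// returned value and only downloads the XML when the stamp has changed.
std::string ReadStampFile(HINTERNET hInternet)
{
    std::string stamp;
    HINTERNET hUrl = InternetOpenUrlA(hInternet,
                                      "http://server/data.stamp",   // placeholder URL
                                      NULL, 0,
                                      INTERNET_FLAG_PRAGMA_NOCACHE | INTERNET_FLAG_RELOAD,
                                      0);
    if (!hUrl)
        return stamp;

    char buf[256];
    DWORD bytesRead = 0;
    while (InternetReadFile(hUrl, buf, sizeof(buf), &bytesRead) && bytesRead != 0)
        stamp.append(buf, buf + bytesRead);      // e.g. a timestamp or counter string

    InternetCloseHandle(hUrl);
    return stamp;
}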
If the HTTP server has a page with information about this file (e.g. a timestamp), you may examine that page instead. It doesn't matter that the file is generated; the page may be generated too.
Since you know that the server updates the file at a (nearly) constant rate, your app may simply poll on a timer.
P.S. I doubt there is really much sense in reading a file 50 times every second.
I am trying to write a client/server program in C++ with Visual Studio 2008. So far the project does the following:
Run the webserver from the cmd prompt: webserver 8080
Open a web browser at localhost:8080
Open a local HTML file: localhost:8080/demo.html
But now... let's say the client requests a gif file; then the server should send the gif file.
If the client requests a txt file, the server should send the .txt file, and similarly for .html and .xbm files.
I don't know how to do it. Any help is greatly appreciated.
On UNIX systems you'd use the file command: it uses a set of known "magic numbers" to identify different file types, plus a few heuristics for the remaining files. Most file formats have some sort of identifier embedded, often in the first couple of bytes. Text files in particular normally don't have a magic number but consist only of printable characters (with UTF-8 and UTF-16 being popular, classifying text files has become a bit harder).
Once the file type is determined, you'd just set the corresponding HTTP header(s).
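For illustration, a minimal sketch of that kind of check: a magic-number test for GIF plus a printable-character heuristic for plain text (the function name is mine; a real server would recognize many more signatures):

#include <cstdio>
#include <cstring>
#include <cctype>

// Guesses a Content-Type from the file's leading bytes, or falls back to a
// generic binary type when nothing matches.
const char *SniffMimeType(const char *path)
{
    unsigned char header[64];
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;
    size_t n = fread(header, 1, sizeof(header), f);
    fclose(f);

    // GIF files start with the magic bytes "GIF87a" or "GIF89a".
    if (n >= 6 && (memcmp(header, "GIF87a", 6) == 0 ||
                   memcmp(header, "GIF89a", 6) == 0))
        return "image/gif";

    // Heuristic: if every byte read is printable or whitespace, call it text.
    for (size_t i = 0; i < n; ++i)
        if (!isprint(header[i]) && !isspace(header[i]))
            return "application/octet-stream";
    return "text/plain";
}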
okay, because we're in the same class, I'll give you a clue :)
In the header part, put some if-else like this:
if (strcmp(type, "html") == 0) {
    (void) sprintf(buff, "Content-Type:text/html\r\n");
    (void) send(conn, buff, strlen(buff), 0);
}
else if (strcmp(type, "gif") == 0) {
    (void) sprintf(buff, "Content-Type:image/gif\r\n");
    (void) send(conn, buff, strlen(buff), 0);
}
Got it? By the way, you need to get the extension from the path (e.g. with an endsWith-style check), compare the extension with the file type, and then send the right header. Test it with a gif file :) I have it working already :) Going to submit now. Remember to vote up for me :)
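Since C has no endsWith, a small helper built on strrchr does the same job (a sketch; the function name is mine, and you can extend the table for anything else you serve). Each branch of the if/else above then collapses to one sprintf using the returned string:

#include <string.h>

// Maps a request path to a Content-Type string based on its extension.
const char *ContentTypeForPath(const char *path)
{
    const char *ext = strrchr(path, '.');
    if (ext == NULL)
        return "application/octet-stream";   // no extension: generic binary
    ext++;                                    // skip the '.'

    if (strcmp(ext, "html") == 0) return "text/html";
    if (strcmp(ext, "txt")  == 0) return "text/plain";
    if (strcmp(ext, "gif")  == 0) return "image/gif";
    if (strcmp(ext, "xbm")  == 0) return "image/x-xbitmap";
    return "application/octet-stream";
}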
I am working on a NaCl plugin for Chrome and am trying to download a URL resource to a local file in Chrome's temporary cache, but without success.
Here is how I proceed:
// I indicate that I want the resource to be downloaded to a file
m_URLRequestInfo.SetStreamToFile( true );
// I open the request:
m_URLLoader.Open( m_URLRequestInfo, m_CCFactory.NewCallback( &MyClass::OnOpen ) );
...
// My callback (OnOpen) is eventually called.
// I then check to make sure the status code is 200 using this call:
m_URLLoader.GetResponseInfo().GetStatusCode()
// Then I ask to download the whole file:
m_URLLoader.FinishStreamingToFile( m_CCFactory.NewOptionalCallback( &MyClass::OnFileDownloaded ) );
...
// My other callback (OnFileDownloaded) gets eventually called,
// and again the status code is 200.
// Then I query the FileRef using this call:
pp::FileRef l_FileRef = m_URLLoader.GetResponseInfo().GetBodyAsFileRef();
The returned pp::FileRef seems to be fine, but pp::FileRef::GetFileSystemType() returns PP_FILESYSTEMTYPE_EXTERNAL, and then the call to pp::FileRef::GetPath() fails (it returns an UNDEFINED pp::Var).
So from this point, I am lost. I don't know what else I should do to get a valid pp::FileRef that points to a local file in the browser's cache. My final goal is to open this local file (an image file in my case) using a standard system file IO like fopen().
Thanks for any light!
Is there a reason you can't use the nacl_io library instead? With it you can write something like this:
#include <stdio.h>
#include <sys/mount.h>
// Mount an HTTP filesystem that reads files from "http://example.com/path/...".
mount("http://example.com/path/", "/mnt/http", "httpfs", 0, "");
// Performs a URL Request of "http://example.com/path/my_image.png".
FILE* file = fopen("/mnt/http/my_image.png", "r");
...
Take a look at the nacl_io demo in the SDK. It is located at $NACL_SDK_ROOT/examples/demo/nacl_io.
After reading the documentation more thoroughly and running more tests, I finally figured out what I was doing wrong.
When we call pp::URLLoader::FinishStreamingToFile, the file is downloaded into the browser's cache, but it cannot be opened or read using regular stdio services like fopen, fread, etc. We need to use the pp::FileIO class to open the obtained pp::FileRef and read the content of the file.
So here is what I did to successfully load and read a file that was downloaded for me by the browser. Basically, I kept using the C++ PPAPI services.
(1) upon callback from m_URLLoader->FinishStreamingToFile, we then call m_FileIO->Open to open the downloaded file using the obtained FileRef;
(2) upon callback from m_FileIO->Open, we then call m_FileIO->Query to obtain the size of the downloaded file (and some other file attributes);
(3) upon callback from pp::FileIO::Query, we then check the file attribute type (e.g. not a folder), allocate a memory buffer large enough to hold the whole file content, and start to call pp::FileIO::Read to obtain the file's content;
(4) upon callback from pp::FileIO::Read: if nResult is 0, we reached EOF and have finished reading the file content into our memory buffer; if nResult > 0, it is the number of bytes successfully read, and we call m_FileIO->Read again to continue reading, storing the bytes at the corresponding offset in our memory buffer; if nResult < 0, an error occurred and we must abort the reading process.
Many steps, and many callbacks to manage, but in the end this works smoothly.
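For reference, a condensed sketch of that callback chain with the C++ PPAPI classes. The member names I introduce here (m_FileRef, m_FileInfo, m_Buffer, m_Offset) and the exact callback split are mine, and error checking on the result codes is omitted for brevity:

#include "ppapi/cpp/file_io.h"
#include "ppapi/cpp/file_ref.h"
#include "ppapi/cpp/url_loader.h"
#include "ppapi/utility/completion_callback_factory.h"
#include <vector>

// Assumed members: pp::URLLoader m_URLLoader; pp::FileIO m_FileIO;
// pp::FileRef m_FileRef; PP_FileInfo m_FileInfo; std::vector<char> m_Buffer;
// int64_t m_Offset; pp::CompletionCallbackFactory<MyClass> m_CCFactory;

void MyClass::OnFileDownloaded(int32_t nResult) {
    // Step 1: open the downloaded file through its FileRef.
    m_FileRef = m_URLLoader.GetResponseInfo().GetBodyAsFileRef();
    m_FileIO.Open(m_FileRef, PP_FILEOPENFLAG_READ,
                  m_CCFactory.NewCallback(&MyClass::OnFileOpened));
}

void MyClass::OnFileOpened(int32_t nResult) {
    // Step 2: query the file attributes, mainly to learn its size.
    m_FileIO.Query(&m_FileInfo, m_CCFactory.NewCallback(&MyClass::OnFileQueried));
}

void MyClass::OnFileQueried(int32_t nResult) {
    // Step 3: allocate a buffer for the whole content and start reading.
    if (m_FileInfo.size <= 0)
        return;                               // nothing to read
    m_Buffer.resize(static_cast<size_t>(m_FileInfo.size));
    m_Offset = 0;
    m_FileIO.Read(m_Offset, &m_Buffer[0],
                  static_cast<int32_t>(m_Buffer.size()),
                  m_CCFactory.NewCallback(&MyClass::OnFileRead));
}

void MyClass::OnFileRead(int32_t nResult) {
    if (nResult > 0) {
        // nResult bytes were read; continue from the new offset.
        m_Offset += nResult;
        m_FileIO.Read(m_Offset, &m_Buffer[m_Offset],
                      static_cast<int32_t>(m_Buffer.size() - m_Offset),
                      m_CCFactory.NewCallback(&MyClass::OnFileRead));
    } else if (nResult == 0) {
        // EOF: m_Buffer now holds the complete file content.
    } else {
        // nResult < 0 is a pp::Error code: abort the read.
    }
}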