Copying binary files to a remote location in C++

I'm in the process of trying to copy an HDF5 binary file from a local machine to a remote computing blade. I am using libssh to copy the desired directory or files out after they are generated by my Qt application. Using libssh I am able to open an SSH session, authenticate it, open a channel, and send remote commands.
for (QStringList::iterator it = ipList.begin(); it != ipList.end(); ++it)
{
    ssh_session my_session = ssh_new(); // ssh_new() already returns the session handle
    QString ip_address = *it;
    ssh_options_set(my_session, SSH_OPTIONS_HOST, ip_address.toStdString().c_str());
    // Connect... Authenticate using public key...
    QString command = QString("rm -r %2; cp -r %1 %2; cp /local/file.txt /remote/file.txt").arg(local_dir, remote_dir);
    // Open channel and execute command
    execute_remote_command(my_session, command.toStdString().c_str());
    ssh_disconnect(my_session);
    ssh_free(my_session);
}
This command is executed for each individual computing blade; in between the calls I close the SSH session and open a new one to the next blade. The files make it out to the blades, but they appear to be corrupt. They are the exact same file size. I haven't figured out a way to compare the individual bytes to see just how corrupt they are; any tips there would be appreciated as well.
When I run my ssh copy commands in a separate test terminal program the files appear to make it intact and are readable on the blades. The issue only seems to occur when the files are moved from within the Qt GUI program.
EDIT: Delving a little deeper into what is wrong, it appears that the file on the remote server is not the same size after all. It is missing a large portion of the bytes. On top of that, when I compare what is there byte by byte with the local version of the file, almost all of the bytes differ.
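For the byte-comparison side question, here is a minimal sketch of a byte-by-byte diff; the file paths are placeholders, and on the shell "cmp -l" or "md5sum" does the same job:

#include <fstream>
#include <iostream>

int main()
{
    // Paths are placeholders for the local original and the fetched copy.
    std::ifstream a("local.h5", std::ios::binary);
    std::ifstream b("remote.h5", std::ios::binary);
    std::size_t offset = 0, diffs = 0;
    char ca = 0, cb = 0;
    while (a.get(ca) && b.get(cb)) {
        if (ca != cb)
            ++diffs;
        ++offset;
    }
    std::cout << diffs << " differing bytes in the first "
              << offset << " bytes compared\n";
    return 0;
}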

It turns out the answer was that the HDF5 writer wasn't being closed properly before the SSH commands were called. I fixed the problem by dynamically allocating the custom HDF5 class that someone else wrote and making sure to delete it before the SSH commands were called. Whoever wrote the HDF5 read and write class didn't handle file opening and closing properly and didn't provide functions to do so.
Below is an example of what I am talking about.
HDF5writer_class *hdf5_writer = new HDF5writer_class();
hdf5_writer->create_file("/local/machine/hdf5_file.h5");
// ... add the data to the file
delete hdf5_writer;
// Open SSH Session and run the copy commands
Long story short, make sure the file you are writing is closed and released for use before you try to copy it.
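If you can use C++14, a std::unique_ptr (or plain scoping) makes that release harder to forget; a sketch using the same HDF5writer_class from the example above:

#include <memory>

{
    // The writer is destroyed at the end of this scope, before any SSH command runs.
    auto hdf5_writer = std::make_unique<HDF5writer_class>();
    hdf5_writer->create_file("/local/machine/hdf5_file.h5");
    // ... add the data to the file
} // destructor runs here, releasing the file handle
// Open SSH session and run the copy commands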

Related

ARM Embedded Linux (AM335x), Text File Contents Deleted After Power Off

Kernel: 3.12.30-AM335x-PD15.2.1 (by PHYTEC)
My application requires editing a text file at run time and using its contents the next time it powers on. So I created a text file to which I write a simple text, "Disable" or "Enable", using a program I have written with Qt C++.
What I realized is that after the program writes the text, if I run the command "reboot" in bash and wait for the system to reboot before I power it off (by unplugging its cable), "cat TextFile.txt" yields "Enable" or "Disable", whichever the program last wrote.
However, if I don't reboot and instead power the system off right away, and then power it on again, the text file remains but its contents are gone, so "cat TextFile.txt" yields nothing.
I tried to do the same manually, using the methods below:
Method 1:
echo Disable > TextFile.txt
reboot
.....wait for it to reboot
cat TextFile.txt
The result is "Disable".
Method 2:
echo Disable > TextFile.txt
.. power off by plugging off the cable
.. power on the system
cat TextFile.txt
No resulting text.
I simply don't want to have to reboot the system for the files to be saved. I would be happy to execute commands from within my Qt C++ program to save everything without a reboot, but I do not know the operating system very well, so I do not know what I should do to achieve this.
This is my code, by the way:
QFile file(filename);
// Trying to open in WriteOnly and Text mode
if (!file.open(QFile::WriteOnly | QFile::Text))
{
    qDebug() << "Could not open file for writing";
}
// To write text, we use operator<<(),
// which is overloaded to take
// a QTextStream on the left
// and data types (including QString) on the right
QTextStream out(&file);
out << "Enable";
out.flush(); // flush QTextStream's own buffer into the QFile first
file.flush();
file.close();
As your experiment on the shell has shown, this is not strictly a C++ or Qt matter; the file is just not written to disk right away.
The system setup is likely using delayed writing to optimize disk access times, i.e. writing first into in-memory buffers and flushing to the actual disk every once in a while.
You might want to tune that if you have other programs that write files and expect power loss as a realistic scenario.
Now, for the Qt program in question, you could try using QSaveFile instead of QFile; its commit() asks the system to actually sync to disk.
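A sketch of that idea plus an explicit sync, assuming the same filename variable as in the question; QSaveFile writes into a temporary file and atomically replaces the target on commit(), while fsync() on the descriptor asks the kernel to push the data to the device:

#include <QFile>
#include <QSaveFile>
#include <QTextStream>
#include <unistd.h> // for fsync()

// Option 1: QSaveFile - the old file is only replaced once everything is written
QSaveFile save(filename);
if (save.open(QIODevice::WriteOnly | QIODevice::Text)) {
    QTextStream out(&save);
    out << "Enable";
    out.flush();
    save.commit();
}

// Option 2: plain QFile plus an explicit sync to the device
QFile file(filename);
if (file.open(QFile::WriteOnly | QFile::Text)) {
    file.write("Enable");
    file.flush();           // flush Qt's buffers to the OS
    ::fsync(file.handle()); // ask the kernel to flush to the physical disk
    file.close();
}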

Will File I/O In Current Working Directory Ever Fail?

On my home Linux laptop, I like to write wrapper programs and GUI helpers for things I use frequently. I don't like Bash scripting very much, though, so I do a lot of this in C++, which often requires me to use the system() function from <cstdlib>.
The system() command is awesome, but I wanted a way to call system() and receive the stdout/stderr; system() only returns the command's exit code. So, in a Bash script, one can do:
myVar=$(ls -a | grep 'search string')
echo $myVar
and echoing $myVar will output whatever the stdout was for the command. So I began writing a wrapper class that appends a pipe-to-file to the end of the command, opens the file, reads all of the piped stdout, and returns it as either one long string or a vector of strings. The intricacies of the class are not really relevant here (I don't think so, anyway), but the above example would be done like this:
SystemCommand systemCommand;
systemCommand.setCommand("ls -a | grep \'search string\' ");
systemCommand.execute();
std::cout << systemCommand.outputAsString() << std::endl;
Behind the scenes, when systemCommand.execute() is called, the class ensures that the command pipes all stdout/stderr to a randomly generated filename in the current working directory. So, for example, the above command would end up being
"ls -a | grep 'search string' >> 1452-24566.txt 2>&1".
The class then attempts to open and read from that file, using ifstream:
std::ifstream readFromFile;
readFromFile.open(_outputFilename);
if (readFromFile.is_open()) {
    // Read all contents of file into class member vector
    ...
    readFromFile.close();
    // Remove temporary file
    ...
} else {
    // Handle read failure
}
So here is my main question: will std::ifstream ever fail to open a recently created file in the current working directory? If so, what would be a way to make it more robust (specifically on Linux)?
A side/secondary question: Is there a very simplified way to achieve what I'm trying to achieve without using file pipes? Perhaps some stuff available in unistd.h? Thanks for your time.
So here is my main question: will std::ifstream ever fail to open a recently created file in the current working directory?
Yes.
Mount a USB thumb drive (or some other removable media)
cd to the mount
Execute your program. While it's executing, remove the drive.
Watch the IO error happen.
There are a ton of other reasons too: filesystem corruption, hitting the file descriptor limit, etc.
If so, what would be a way to make it more robust (specifically on Linux)?
Make temporary files in /tmp, whose entire purpose is to hold temporary files. Or don't create a file at all, and use pipes for communication instead (like what popen does, as harmic suggested). Even so, there are no guarantees; try to gracefully handle errors.
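A minimal sketch of the popen() route, which avoids the temporary file entirely; the function name runCommand is made up for illustration:

#include <cstdio>
#include <stdexcept>
#include <string>

std::string runCommand(const std::string &cmd)
{
    // "2>&1" folds stderr into the same pipe, mirroring the file-based version.
    FILE *pipe = popen((cmd + " 2>&1").c_str(), "r");
    if (!pipe)
        throw std::runtime_error("popen() failed");
    std::string output;
    char buffer[4096];
    while (fgets(buffer, sizeof(buffer), pipe) != nullptr)
        output += buffer;
    pclose(pipe);
    return output;
}

// Usage:
// std::string listing = runCommand("ls -a | grep 'search string'");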

Check status of file during sftp

I want to write C++ code to get a file from server B via server A using password-less SFTP. The file on server B is in fact being copied (via SFTP) from another server C. I was able to retrieve the file from server B; however, even while the file was still being copied, I was able to get it (an incomplete file, as it was still being transferred from server C to server B). I want to put in a check: if the file is still being copied, I should not fetch it, and should wait until it has been completely transferred. As far as I know, the sftp prompt does not support a lot of commands. Can somebody please give me some input on how I can achieve this?
A traditional way to do this is to transfer the (big) "paydata" file first, and then a (small or empty) "flag" file after it. On the receiving end, you wait until the flag file exists. If it does, the transfer of the paydata file has finished; delete the flag file, and do whatever you do with the paydata file.
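On the receiving side, the wait can be a simple polling loop; here is a sketch using C++17's std::filesystem, where the flag-file path is a placeholder:

#include <chrono>
#include <filesystem>
#include <thread>

// Block until the sender has dropped the flag file next to the payload.
void waitForFlag(const std::filesystem::path &flag)
{
    while (!std::filesystem::exists(flag))
        std::this_thread::sleep_for(std::chrono::seconds(1));
}

// Usage:
// waitForFlag("/incoming/paydata.done");
// ... now fetch the paydata file via sftp and delete the flag file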

Is it safe to use a QTemporaryFile with a QProcess?

I have to read a script from the user and call a QProcess, passing that script as a file.
For example, the user inserts this, say, Python script:
import sys
print(sys.copyright)
and I have to put that script in a file and call the Python interpreter on that file.
I thought of using a QTemporaryFile, because the file is only needed when launching the process, and I have no need to keep it open.
The question is: is it safe to open a QTemporaryFile, write something in it, pass that file to a process (which will continue indefinitely), and then destroy the temporary file? What if the process needs that file again? What if the process keeps the file open?
I reckon that, if it is kept open by the process, no problem will arise: the QTemporaryFile will probably unlink the path, but the data will still be accessible through the process's open file descriptor.
But what if the process tries to open the file again?
Here is a snippet as an example (written on the fly):
QString script = QInputDialog::getText(blah);
QTemporaryFile tmp;
if (tmp.open()) {
    tmp.write(script.toUtf8());
    tmp.flush(); // make sure the script is in the file before the interpreter reads it
    QStringList params;
    params << tmp.fileName();
    QProcess *proc = new QProcess;
    proc->start("/usr/bin/python3", params); // QProcess takes the program in start(), not the constructor
}
As I understand it, with the autoRemove flag (which is on by default), the file exists as long as the QTemporaryFile instance exists. Therefore, in the code you originally presented, when tmp goes out of scope, the file will be removed. Calling open / close on the object will not delete the file.
You could dynamically allocate the file with QTemporaryFile *pTmp = new QTemporaryFile and then delete it later, if you know when the Python script has finished with it, as in the sketch below.
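A sketch of that idea, tying the file's lifetime to the process; it assumes Qt 5.7+ for QOverload and reuses the script variable from the question:

QTemporaryFile *tmp = new QTemporaryFile;
if (tmp->open()) {
    tmp->write(script.toUtf8());
    tmp->flush(); // make sure the script is on disk before the interpreter starts
    QProcess *proc = new QProcess;
    // When the interpreter exits, delete the QTemporaryFile;
    // autoRemove then unlinks the file from disk.
    QObject::connect(proc,
                     QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                     [tmp, proc](int, QProcess::ExitStatus) {
                         delete tmp;
                         proc->deleteLater();
                     });
    proc->start("/usr/bin/python3", QStringList() << tmp->fileName());
}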
Ouch, I just noticed the autoRemove flag in QTemporaryFile. I guess this could be a solution: if it is set to false, the file will not be removed from the disk, so the process is free to reuse the file, I think.
Temporary files should be stored in the system's default location, so I guess the files are not removed until a reboot (at least, I believe Linux works this way).
This is just an idea, but I will wait for other answers or confirmations.

Receiving a Sharing Violation Opening a File Code 32

I have been trying the following piece of code, which does not work. What I am trying to do is start my exe (a simple dialog-based application I created using VC6.0) and then, from inside this running application, modify its own contents stored on the hard drive.
So there is a running copy of the exe, and from this running copy it will open the disk copy into a buffer. Once the file is loaded into the buffer, it will search for a string. Once the string is found, it will be replaced with another string, which may not be the same size as the original.
Right now I am having an issue of not being able to open the file on disk for reading/writing. GetLastError returns the following error: "ERROR_SHARING_VIOLATION: The process cannot access the file because it is being used by another process."
So what I did was rename the file on disk (essentially the same name except for the extension). Same sharing-violation error again. I am not sure why I am getting this sharing violation, error code 32. Any suggestions would be appreciated. I'll ask the second part of my question in another thread.
FILE *pFile = fopen("Test.exe", "rb");
if (pFile != NULL)
{
    // do something like search for a string
}
else
{
    // fopen failed.
    int value = GetLastError(); // returns 32
    exit(1);
}
Read the Windows part of the File Locking Wikipedia entry: you can't modify files that are currently executing.
You can rename and copy them, but you can't change them. So what you are trying to do is simply not possible. (Renaming the file doesn't unlock it at all; it's still the same file after the rename, so it's still not modifiable.)
You could, however, copy your executable, modify that copy, and then run the copy instead, as sketched below.
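A rough sketch of that copy-and-patch approach using Win32 calls; the patched file name is made up, and the actual search-and-replace is elided:

#include <windows.h>
#include <cstdio>

int patchCopyOfSelf()
{
    char self[MAX_PATH];
    // Path of the currently running executable.
    GetModuleFileNameA(NULL, self, MAX_PATH);

    // The copy is not locked by the loader, so it can be opened for writing.
    if (!CopyFileA(self, "Test_patched.exe", FALSE))
        return 1;

    FILE *pFile = fopen("Test_patched.exe", "r+b");
    if (pFile == NULL)
        return 1;
    // ... search the file for the string and write the replacement ...
    fclose(pFile);

    // Optionally launch the patched copy, e.g. with CreateProcessA.
    return 0;
}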