python win32com shell.SHFileOperation - any way to get the files that were actually deleted? - python-2.7

In the code I maintain I run across:
from win32com.shell import shell, shellcon
# ...
result, nAborted, mapping = shell.SHFileOperation(
    (parent, operation, source, target, flags, None, None))
In Python27\Lib\site-packages\win32comext\shell\ (note win32comext) I just have a shell.pyd binary.
What is the return value of shell.SHFileOperation for a deletion (operation=FO_DELETE in the call above)? Where is the code for shell.pyd?
Can I get the list of files actually deleted from this return value, or do I have to check manually afterwards?
EDIT: the accepted answer answers Q1 - having a look at the source of pywin32-219\com\win32comext\shell\src\shell.cpp, I see that static PyObject *PySHFileOperation() delegates to SHFileOperation, which does not seem to return any info on which files failed to be deleted - so I guess the answer to Q2 is "no".

The ActiveState Python help contains this description of SHFileOperation:
shell.SHFileOperation
int, int = SHFileOperation(operation)
Copies, moves, renames, or deletes a file system object.
Parameters
operation : SHFILEOPSTRUCT
Defines the operation to perform.
Return Value
The result is a tuple containing int result of the
function itself, and the result of the fAnyOperationsAborted member
after the operation. If Flags contains FOF_WANTMAPPINGHANDLE, returned
tuple will have a 3rd member containing a sequence of 2-tuples with
the old and new file names of renamed files. This will only have any
content if FOF_RENAMEONCOLLISION was specified, and some filename
conflicts actually occurred.
Source code can be downloaded here: http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/ (pywin32-219.zip)
Just unpack and go to .\pywin32-219\com\win32comext\shell\src\
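For what it's worth, looking at the underlying Win32 API confirms there is nothing more to extract. A minimal C++ sketch of a delete operation (the path is purely illustrative); the only outputs are the function's return code, fAnyOperationsAborted, and the optional hNameMappings handle:

#include <windows.h>
#include <shellapi.h>

int main()
{
    // pFrom must be double-null-terminated; this path is purely illustrative.
    wchar_t from[] = L"C:\\temp\\somefile.txt\0";

    SHFILEOPSTRUCTW op = {};
    op.wFunc  = FO_DELETE;
    op.pFrom  = from;
    op.fFlags = FOF_NOCONFIRMATION | FOF_SILENT;

    int result = SHFileOperationW(&op);

    // Besides the return code there is only op.fAnyOperationsAborted and,
    // when FOF_WANTMAPPINGHANDLE is set, op.hNameMappings - no per-file
    // list of what was (or was not) deleted.
    return result;
}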

Related

create a unique temporary directory [duplicate]

I'm trying to create a unique temporary directory in the system temp folder and have been reading about the security and file creation issues of tmpnam().
I've written the code below and was wondering whether it addresses these issues: is my use of the tmpnam() function correct, and is throwing a filesystem_error appropriate? Should I add checks for other things (e.g. temp_directory_path, which also throws an exception)?
// Unique temporary directory path in the system temporary directory path.
std::filesystem::path tmp_dir_path {std::filesystem::temp_directory_path() /= std::tmpnam(nullptr)};
// Attempt to create the directory.
if (std::filesystem::create_directories(tmp_dir_path)) {
    // Directory successfully created.
    return tmp_dir_path;
} else {
    // Directory could not be created.
    // (filesystem_error needs an error_code; io_error is a placeholder here.)
    throw std::filesystem::filesystem_error("directory could not be created",
                                            tmp_dir_path,
                                            std::make_error_code(std::errc::io_error));
}
From cppreference.com:
Although the names generated by std::tmpnam are difficult to guess, it is possible that a file with that name is created by another process between the moment std::tmpnam returns and the moment this program attempts to use the returned name to create a file.
The problem is not with how you use it, but the fact that you do.
For example, in your code sample, if a malicious user successfully guesses and creates the directory right between the first and second line, it can cause a denial of service (DoS) in your application, which might or might not be critical.
Instead, there is a way to do this without races on POSIX-compliant systems:
For files see mkstemp(3) for more information
For dirs see mkdtemp(3) for more information.
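For directories that boils down to something like the following minimal sketch (POSIX only; the "myapp." prefix is an arbitrary choice):

#include <cstdlib>       // mkdtemp (POSIX)
#include <filesystem>
#include <stdexcept>
#include <string>

std::filesystem::path make_temp_dir()
{
    // Build a template ending in "XXXXXX" inside the system temp directory.
    std::string tmpl = (std::filesystem::temp_directory_path() / "myapp.XXXXXX").string();

    // mkdtemp creates the directory atomically and fills in the X's in place,
    // so there is no window between choosing the name and creating it.
    if (mkdtemp(&tmpl[0]) == nullptr)
        throw std::runtime_error("mkdtemp failed");

    return std::filesystem::path(tmpl);
}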
Your code is fine. Because you try to create the directory, the OS will arbitrate between your process and another process trying to create the same name: if you win, you own the directory; if you lose, you get an error.
I wrote a similar function recently. Whether you throw an exception or not depends on how you want to use this function. You could, for example, simply return either an open or closed std::fstream and use std::fstream::is_open as a measure of success, or return an empty pathname on failure.
Looking up std::filesystem::create_directories, it will throw its own exception if you don't supply a std::error_code parameter, so you don't need to throw your own:
std::filesystem::path tmp_dir_path {std::filesystem::temp_directory_path() /= std::tmpnam(nullptr)};
// Attempt to create the directory.
std::filesystem::create_directories(tmp_dir_path);
// If that failed an exception will have been thrown,
// so there is no need to check or throw your own.
// Directory successfully created.
return tmp_dir_path;
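If you prefer to avoid exceptions entirely, create_directories also has an overload taking a std::error_code. A minimal sketch of that variant (the directory name is just an example), which also ties in with the empty-path-on-failure idea above:

#include <filesystem>
#include <system_error>

std::filesystem::path make_temp_dir_nothrow()
{
    std::error_code ec;
    std::filesystem::path tmp_dir_path =
        std::filesystem::temp_directory_path(ec) / "example_dir";
    if (ec)
        return {};    // could not determine the system temp directory

    // With an error_code argument, create_directories reports failure
    // through ec instead of throwing; it returns false if nothing was created.
    if (!std::filesystem::create_directories(tmp_dir_path, ec) || ec)
        return {};    // not created (or it already existed)

    return tmp_dir_path;
}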

C++: Rename instead of Delete & Copy when using Sync

Currently I have the following piece of code in my sync:
...
int index = file.find(remoteDir);
if (index >= 0) {
    file.erase(index, remoteDir.size());
    file.insert(index, localDir);
}
...
// Uses PUT command on the file
Now I want to do the following instead:
If a file is the same as before, except for a rename, don't use the PUT command, but use the Rename command instead
TL;DR: Is there a way to check whether a file is the same as before except for a rename that occurred? So a way to compare both files (with different names) to see if they are the same?
Check the md5sum; if it is different, the file has been modified.
The MD5 checksum of a renamed file remains the same; any change to the file's contents gives a different value.
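A minimal sketch of computing such a checksum (assuming OpenSSL is available; any MD5 implementation works the same way):

#include <openssl/md5.h>   // assumes OpenSSL is available
#include <fstream>
#include <iterator>
#include <string>
#include <vector>

// Read the whole file and return its MD5 digest as a hex string.
std::string md5_of_file(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<unsigned char> data((std::istreambuf_iterator<char>(in)),
                                    std::istreambuf_iterator<char>());

    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5(data.data(), data.size(), digest);

    std::string out;
    const char* hex = "0123456789abcdef";
    for (unsigned char c : digest) {
        out += hex[c >> 4];
        out += hex[c & 0x0f];
    }
    return out;
}

Two files with the same digest almost certainly have the same content, regardless of their names.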
I first tried to use Renjith's method with MD5, but I couldn't get it working (maybe it's because my C++ is for Windows instead of Linux, I don't know).
So instead I wrote my own function that does the following:
First, check whether the files are exactly the same size (if not, we can just return false instead of continuing).
If the sizes match, compare the files buffer by buffer in chunks of BUFFER_SIZE (in my case 1024). If every buffer matches, return true (see the sketch below).
PS: Make sure to close any open streams before returning. My mistake here was that I had the code to close one stream after the return statement (so it was never called), and therefore I got errno 13 when trying to rename the file.
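A minimal sketch of that comparison (BUFFER_SIZE of 1024 as described above; not the exact code, just the idea):

#include <algorithm>
#include <cstddef>
#include <filesystem>
#include <fstream>

bool files_identical(const std::filesystem::path& a, const std::filesystem::path& b)
{
    constexpr std::size_t BUFFER_SIZE = 1024;

    // Different sizes: the contents cannot be identical.
    if (std::filesystem::file_size(a) != std::filesystem::file_size(b))
        return false;

    std::ifstream fa(a, std::ios::binary);
    std::ifstream fb(b, std::ios::binary);
    if (!fa || !fb)
        return false;                     // could not open one of the files

    char bufA[BUFFER_SIZE];
    char bufB[BUFFER_SIZE];
    while (fa && fb) {
        fa.read(bufA, BUFFER_SIZE);
        fb.read(bufB, BUFFER_SIZE);
        if (fa.gcount() != fb.gcount() ||
            !std::equal(bufA, bufA + fa.gcount(), bufB))
            return false;                 // a block differs
    }

    // The streams close automatically when they go out of scope here,
    // so nothing is left open when we return (see the PS above).
    return true;
}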

Datalog require field `unlock'

While compiling an OCaml application I get the following error:
File "/tmp/ocamlpp466ee0", line 308, characters 34-233:
Error: Signature mismatch:
...
The field `unlock' is required but not provided
The field `lock' is required but not provided
Command exited with code 2.
My guess is that the error is related to the OCaml library Datalog (I've installed version 0.3 from here), because line 308 in the file /tmp/ocamlpp466ee0 is the first line of the following code:
module Logic = Datalog.Logic.Make(struct
  type t = atom
  let equal = eq_atom
  let hash = hash_atom
  let to_string a = Utils.sprintf "%a" pp_atom a
  let of_string s = atom_of_json (Json.from_string s)
end)
I would really appreciate it if someone could help me figure out what I am doing wrong.
Moreover, I would like to understand why the file /tmp/ocamlpp466ee0 is generated each time I run 'make'. I tried to work it out by reading the Makefile but did not succeed.
I think something has changed in the Datalog library: in some version > 0.3 the functor Datalog.Logic.Make requires a module argument with lock and unlock values declared. So it's a version problem.
About the temporary file: as you can see, its name consists of the literal ocaml, pp (which stands for preprocessor), and a number. Preprocessors in OCaml usually work this way: they read an input source file and write an output source file. That's why temporary files are created.

when to call _findclose?

Note: since the problem is solved, I've added comments to my original post.
According to http://msdn.microsoft.com/en-us/library/6tkkkc1y%28v=vs.90%29.aspx, it states:
You must call _findclose after you are finished using either the _findfirst or _findnext function (or any variants). This frees up resources used by these functions in your application.
--comment: it is vague, but what Microsoft is trying to say is: some users just need to find the first file (they don't need to call _findnext) and then call _findclose; other users call _findnext (they must have already called _findfirst) and, when finished, call _findclose. Actually, _findnext can be called multiple times, while _findclose is only responsible for one handle, which is created by _findfirst.
And the following is a piece of code that is widely used to list the files in a directory. --comment: it is correct.
For example, if there are 2 files and 1 directory in the directory, then:
.
..
ddd
file1.txt
file2.txt
_findfirst is called once; the handle's corresponding fileinfo is the system directory "." (is that right?)
--comment: no. The handle refers to a group of files and directories, and the fileinfo acts as the "cursor". (fileinfo always contains the "name" field; I bet the implementation of _findnext uses the name to find the next entry in the group of files and directories specified by the handle.)
_findnext is called 4 times (the first argument is always the handle corresponding to ".", is that right?)
--comment: yes and no. The first argument is always the same handle, but the handle does NOT correspond to any single fileinfo; it corresponds to a group of them.
My questions are:
Does "_findclose" be called ONCE is enough?
*--comment:* yes.
if _findnext will not change the handle value, how can it "remember" where to start to find the next file(or directory)? (sorry, maybe I was thinking in the "linked list" pattern.)
*--comment:* I bet is using fileinfo's name field. Just as in Windows Explorer, we sort the contents in a folder, given a file name, we can know their position in the list, so we can "find next".
Are there any harm to call _findclose more times than needed? (like crash or something)
*--comment:* a stupid question. Sorry!
Or is the following code wrong at all? If yes, what's the correct way to implement it?
--It is correct code.
// List the files in the directory
intptr_t file;
_finddata_t filedata;
file = _findfirst(desc.c_str(), &filedata);
if (file != -1)
{
    do
    {
        cout << filedata.name << endl;
        // Or put the file name in a vector here
    } while (_findnext(file, &filedata) == 0);
}
else
{
    cout << "No described files found" << endl;
}
_findclose(file);
I asked this because I've run into an issue where an application locks a directory, which then cannot be deleted while the process is alive. However, I can guarantee that _findclose is called on every return value from _findfirst. If I add _findclose after calling _findnext, that fixes the issue perfectly. Can you help me explain it?
--comment: pardon, don't use "guarantee" too easily. That's where the bug was.
Note: I don't have a problem understanding what a handle is - like opening a file, read/write/read/write..., then closing the file handle. I just find the documentation describing these three APIs vague.
--comment: I need to improve my English.
Thank you in advance.
Your calls to _findclose should match with your calls to _findfirst -- i.e., each time you call _findfirst, you should have a matching call to _findclose.
In the code above, since you have only one call to _findfirst, it's correct to have only one call to _findclose.
If you were doing a recursive search of subdirectories, then you'd end up with multiple calls to _findfirst as you descend the hierarchy, and matching calls to _findclose as you finish and ascend back up the hierarchy.
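A minimal sketch of what that recursive pattern looks like (Windows-specific; path handling simplified, and the function name is just an example):

#include <cstdint>
#include <io.h>
#include <iostream>
#include <string>

// Every _findfirst that succeeds is paired with exactly one _findclose.
void list_recursive(const std::string& dir)
{
    _finddata_t filedata;
    intptr_t handle = _findfirst((dir + "\\*").c_str(), &filedata);
    if (handle == -1)
        return;                                   // nothing found, nothing to close

    do
    {
        std::string name = filedata.name;
        if (name == "." || name == "..")
            continue;                             // skip the pseudo-entries

        if (filedata.attrib & _A_SUBDIR)
            list_recursive(dir + "\\" + name);    // descend: a new _findfirst inside
        else
            std::cout << dir << "\\" << name << std::endl;
    } while (_findnext(handle, &filedata) == 0);

    _findclose(handle);                           // matches the _findfirst above
}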
You only need to call _findclose once, when you are finished.
On Windows, a directory may be locked if it is the current directory of your process. Try calling _chdir.
If that doesn't work... are you opening any of the files in the directory you're searching? An open file may lock the directory as well.
It may be useful to let Process Explorer get a look at your app. It can tell you for sure what handle you have left open.

qsettings different results

I am using QSettings to try to figure out whether an INI file is valid (using status() to check). I made a purposely invalid INI file and loaded it. The first time the code is called it returns invalid, but every time after that it returns valid. Is this a bug in my code?
It's a Qt bug caused by some global state. Note that the difference in results happens whether or not you call delete on your QSettings object, which you should. Here's a brief summary of what happens on the first run:
The result code is set to NoError.
A global cache is checked to see if your file is present
Your file isn't present the first time, so it's parsed on qsettings.cpp line 1530 (Qt-4.6.2)
Parsing results in an error and the result code is set (see qsettings.cpp line 1552).
The error result code is returned.
And the second run is different:
The result code is set to NoError.
A global cache is checked, your file is present.
The file size and timestamp are checked to see if the file has changed (see qsettings.cpp line 1424).
The result code is returned, which happens to be NoError -- the file was assumed to have been parsed correctly.
I checked your code; you need to delete the QSettings object before returning.
Apart from that, your code uses the QSettings::QSettings(fileName, format) constructor to open an ini file. That call ends up in the function QConfFile::fromName (implemented in qsettings.cpp). As I read it (there are a few macros and such that I decided not to follow), the file is not re-opened if it is already open (i.e. you have not deleted the object since the last time). Thus the status will be OK the second time around.
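A minimal sketch of that pattern - let the QSettings object's lifetime end before the next check (the function name is just an example; note that, per the explanation above, Qt's internal cache may still treat an unchanged file as already parsed):

#include <QSettings>
#include <QString>

bool iniIsValid(const QString& path)
{
    // Stack allocation: the QSettings object is destroyed when we return,
    // so it is not kept alive (and holding the file) between calls.
    QSettings settings(path, QSettings::IniFormat);
    return settings.status() == QSettings::NoError;
}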