catch change in security permission of file without blocking - python-2.7

I want to catch a change in the security permissions of a file without blocking until the change is made, like an event that pops up when the change happens.
I also don't want to install any third-party modules or software.
The main requirement is that the solution use only the win32 modules or built-in modules.
I'm currently watching for the security change with this function:
import win32con, win32file

def security_watcher():
    specific_file = "specific.ini"
    path_to_watch = "."
    FILE_LIST_DIRECTORY = 0x0001
    hDir = win32file.CreateFile(path_to_watch,
                                FILE_LIST_DIRECTORY,
                                win32con.FILE_SHARE_READ |
                                win32con.FILE_SHARE_WRITE |
                                win32con.FILE_SHARE_DELETE,
                                None,
                                win32con.OPEN_EXISTING,
                                win32con.FILE_FLAG_BACKUP_SEMANTICS,
                                None)
    results = win32file.ReadDirectoryChangesW(hDir,
                                              1024,
                                              False,
                                              win32con.FILE_NOTIFY_CHANGE_SECURITY,
                                              None,
                                              None)
    print results
    for action, file_name in results:
        if file_name == specific_file:
            # wake another function to do something about that
            pass
Note: I need this to be non-blocking because I use this function in a GUI application and it freezes the GUI.

If you don't mind (or can't avoid) adding some threading overhead, you can launch a separate process or thread that waits on the blocking call to win32file.ReadDirectoryChangesW() that you already have. When it receives a change, it writes the result to a pipe shared with the main thread of your GUI.
Your GUI can periodically execute a non-blocking read at the appropriate point in your code (presumably, where you now call win32file.ReadDirectoryChangesW()). Do make sure to wait a bit between reads, or your app will spend 100% of its time on non-blocking reads.
You can see in this answer how to set up a non-blocking read on a pipe in an OS-independent way. The final bit will look like this:
from Queue import Empty  # q is the queue shared with the watcher thread

try:
    results = q.get_nowait()
except Empty:
    pass
else:
    for line in results.splitlines():
        # Parse and use your result format
        ...
        if file_name == specific_file:
            ...
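Putting the pieces together, here is a minimal sketch of the thread-plus-queue wiring, assuming hDir is the directory handle opened as in your code; the names watcher_thread and q are illustrative, not part of any API:

import threading
from Queue import Queue

import win32con, win32file

q = Queue()

def watcher_thread():
    # Blocks off the GUI thread; each batch of change notifications is pushed onto q.
    while True:
        results = win32file.ReadDirectoryChangesW(hDir,
                                                  1024,
                                                  False,
                                                  win32con.FILE_NOTIFY_CHANGE_SECURITY,
                                                  None,
                                                  None)
        q.put(results)

t = threading.Thread(target=watcher_thread)
t.daemon = True  # don't let the watcher keep the app alive on exit
t.start()

In this variant the queue carries the (action, file_name) tuples directly, so the GUI side can iterate them instead of calling splitlines(). Schedule the q.get_nowait() check on a timer (for example, Tkinter's after() or a wx.Timer) rather than in a tight loop, so the interface stays responsive.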


count all events in stream with babeltrace API

I have an LTTng trace which I am parsing using the babeltrace API. I was wondering if I could count all events in the trace (or stream) without iterating over them. What functions from the public API can I use to do that?
The very nature of CTF makes it impossible to count the event records of a given packet in constant time. The packet's context could include an event record count field somehow, but it's not specified, so generic tools would not use it.
Thus the only way to count events is to iterate the event records, unfortunately. The easiest way is to count the number of lines that the text format of the babeltrace(1) tool prints:
babeltrace /path/to/ctf/trace/directory | wc --lines
This works as long as there's one line per printed event record, which is the case unless an event record contains a string field which has a newline (currently not escaped in the text output).
You may also wish to consider discarded event records. They are not printed to the standard output by babeltrace(1), but the tool prints a message including the count to the standard error when they are detected.
There's no way with the current babeltrace(1) tool to only print the event records which belong to the packets of a given data stream. If you need this, what I suggest is that you remove all the data stream files except the one for which you need an event record count, and run the command above again.
Also consider the Babeltrace Python bindings, for example (not tested):
import babeltrace

def count_ctf_event_records(path):
    trace_collection = babeltrace.TraceCollection()
    trace_collection.add_trace(path, 'ctf')
    return sum(1 for event in trace_collection.events)

if __name__ == '__main__':
    import sys
    print(count_ctf_event_records(sys.argv[1]))
Saved as count.py, you can try this:
python3 count.py /path/to/ctf/trace/directory
Counting the event records of a specific data stream with the Python bindings is left as an exercise for the reader.
Having said this, I don't know if the Python bindings approach is faster than the babeltrace(1) one.

Program Seems to Terminate Early

I get the feeling this is one of those really simple problems where there's something I just don't understand about the language. But I'm trying to learn Elixir, and my program isn't running all the way through. I've got a minimal example here.
defmodule Foo do
  def run(0) do
    IO.puts("0")
  end

  def run(n) do
    IO.puts(to_string n)
    run(n - 1)
  end

  def go do
    run(100)
  end
end
# Foo.go
# spawn &Foo.go/0
Now, if I uncomment the Foo.go line at the bottom and run it with elixir minimal.exs, then I get the intended output, which is all of the numbers from 100 down to 0. If I uncomment only the spawn &Foo.go/0 line, I consistently get no output at all.
However, if I uncomment both lines and run the program, I get the numbers from 100 to 0 (from the first line), then the first few numbers (usually about 100 to 96 or so) before the program terminates for some reason. So I really don't know what's causing the process to terminate at a random point.
It's worth pointing out that the reason this confusion arose for me was that I was trying to use mix to compile a larger project, when the program seemed to get started, do a small part of its work, and then terminate because mix apparently stops running after a bit. So I'm not sure what the idiomatic way to run an Elixir program is either, given that mix seems to terminate it after a short while anyway.
spawn/1 will create a new process to run the function. While they are not the same, you can sort of think of an Erlang / Elixir process as a thread in most other languages.
So, when you start your program, the "main" process gets to doing some work. In your case, it creates a new process (let's call it "Process A") to output the numbers from 100 down to 0. However, the problem is that spawn/1 does not block, meaning that the "main" process will keep executing and not wait for "Process A" to return.
So what is happening is that your "main" process is completing execution which ends the entire program. This is normal for every language I have ever used.
If you wanted to spawn some work in a different process and make sure it finishes execution BEFORE ending your program, you have a couple different options.
You could use the Task module. Something along the lines of this should work.
task = Task.async(&Foo.go/0)
Task.await(task)
You could explicitly send and receive messages
defmodule Foo do
  def run(0, pid) do
    IO.puts("0")
    # This will send the message back to the "main" thread upon completion.
    send pid, {:done, self()}
  end

  def run(n, pid) do
    IO.puts(to_string n)
    run(n - 1, pid)
  end

  # We now pass along the pid of the "main" thread into the go function.
  def go(pid) do
    run(100, pid)
  end
end

# Use spawn/3 instead so we can pass in the "main" process pid.
pid = spawn(Foo, :go, [self()])

# This will block until it receives a message matching this pattern.
receive do
  # The ^ is the pin operator. It ensures that we match against the same pid as before.
  {:done, ^pid} -> :done
end
There are other ways of achieving this. Unfortunately, without knowing more about the problem you are trying to solve, I can only make basic suggestions.
With all of that said, mix will not arbitrarily stop your running program. For whatever reason, the "main" process must have finished execution. Also, mix is a build tool, not really the way you should be running your application (though you can). Again, without knowing what you are attempting to do or seeing your code, I cannot give you anything more than this.
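If you do want the VM to keep running after the main script finishes (for instance, while spawned processes are still working), I believe the --no-halt flag does that:

mix run --no-halt
elixir --no-halt minimal.exs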

How to close file descriptors in python?

I have the following code in python:
import os

class suppress_stdout_stderr(object):
    '''
    A context manager for doing a "deep suppression" of stdout and stderr in
    Python, i.e. will suppress all print, even if the print originates in a
    compiled C/Fortran sub-function.
    This will not suppress raised exceptions, since exceptions are printed
    to stderr just before a script exits, and after the context manager has
    exited (at least, I think that is why it lets exceptions through).
    '''
    def __init__(self):
        # Open a pair of null files
        self.null_fds = [os.open(os.devnull, os.O_RDWR) for x in range(2)]
        # Save the actual stdout (1) and stderr (2) file descriptors.
        self.save_fds = (os.dup(1), os.dup(2))

    def __enter__(self):
        # Assign the null pointers to stdout and stderr.
        os.dup2(self.null_fds[0], 1)
        os.dup2(self.null_fds[1], 2)

    def __exit__(self, *_):
        # Re-assign the real stdout/stderr back to (1) and (2)
        os.dup2(self.save_fds[0], 1)
        os.dup2(self.save_fds[1], 2)
        # Close the null files
        os.close(self.null_fds[0])
        os.close(self.null_fds[1])

for i in range(10**6):
    with suppress_stdout_stderr():
        print 'plop'
    if i % 50 == 0:
        print i
It fails at around iteration 5100 on OS X with OSError: [Errno 24] Too many open files. I'm wondering why, and whether there is a way to close the file descriptors. I'm looking for a context manager that suppresses stdout and stderr without leaking descriptors.
I executed your code on a Linux machine and got the same error but at a different number of iterations.
I added the following two lines to the __exit__(self, *_) method of your class:
    os.close(self.save_fds[0])
    os.close(self.save_fds[1])
With this change I do not get an error and the script completes successfully. I assume that the duplicated file descriptors stored in self.save_fds are kept open if you don't close them with os.close(), so every pass through the with block leaks two descriptors until you hit the "too many open files" error.
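For completeness, a sketch of the __exit__ with those two lines added:

def __exit__(self, *_):
    # Re-assign the real stdout/stderr back to (1) and (2)
    os.dup2(self.save_fds[0], 1)
    os.dup2(self.save_fds[1], 2)
    # Close the null files
    os.close(self.null_fds[0])
    os.close(self.null_fds[1])
    # Also close the saved copies of the original descriptors;
    # without this, every with block leaks two descriptors.
    os.close(self.save_fds[0])
    os.close(self.save_fds[1])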
Anyway my console printed "plop", but maybe this depends on my platform.
Let me know if it works :)

FindNextPrinterChangeNotification returns NULL for ppPrinterNotifyInfo

I'm stuck on a problem where I would like to ask for some help:
I have the task of printing some files of different types using ShellExecuteEx with the "print" verb, and I need to guarantee the print order of all files. Therefore I use FindFirstPrinterChangeNotification and FindNextPrinterChangeNotification to monitor the events PRINTER_CHANGE_ADD_JOB and PRINTER_CHANGE_DELETE_JOB in two different background threads, which I start before calling ShellExecuteEx, as I don't know anything about the application which will print the files. The only things I know are that I'm the only one printing and which file I print. My solution seems to work well: my program successfully recognizes the PRINTER_CHANGE_ADD_JOB event for my file, and I even verify that this event is issued for my file by checking the additional info given to me by specifying JOB_NOTIFY_FIELD_DOCUMENT.
The problem now is with the event PRINTER_CHANGE_DELETE_JOB, where I don't get any additional info about the print job, though my logic is exactly the same for both events: I've written one generic thread function which simply gets executed with the event it is used for. My thread recognizes the PRINTER_CHANGE_DELETE_JOB event, but on each call to FindNextPrinterChangeNotification when this event occurred I don't get any additional data in ppPrinterNotifyInfo. This works for the start event, though; I verified that using my logs and the debugger. But with PRINTER_CHANGE_DELETE_JOB the only thing I get is NULL.
I already searched the web and there are some similar questions, but most of them relate to VB or are simply unanswered. I'm using a C++ project, and as my code works for the ADD_JOB event I don't think I'm doing something completely wrong. But even MSDN doesn't mention this behavior, and I would really like to make sure that the DELETE_JOB event is the one for my document, which I can't do without any information about the print job. After I get the DELETE_JOB event my code doesn't even recognize other events, which is OK because the print job is done by then.
The following is what I think is the relevant notification code:
WORD jobNotifyFields[1] = {JOB_NOTIFY_FIELD_DOCUMENT};
PRINTER_NOTIFY_OPTIONS_TYPE pnot[1] = {JOB_NOTIFY_TYPE, 0, 0, 0, 1, jobNotifyFields};
PRINTER_NOTIFY_OPTIONS pno = {2, 0, 1, pnot};
HANDLE defaultPrinter = PrintWaiter::openDefaultPrinter();
HANDLE changeNotification = FindFirstPrinterChangeNotification(defaultPrinter,
                                                               threadArgs->event,
                                                               0, &pno);
[...]
DWORD waitResult = WAIT_FAILED;
while ((waitResult = WaitForSingleObject(changeNotification, threadArgs->wfsoTimeout)) == WAIT_OBJECT_0)
{
    LOG4CXX_DEBUG(logger, L"Some printer event detected in the thread waiting for event " << LogStringConv(threadArgs->event) << L".");
    [...]
    PPRINTER_NOTIFY_INFO notifyInfo = NULL;
    DWORD events = 0;
    FindNextPrinterChangeNotification(changeNotification, &events, NULL, (LPVOID*) &notifyInfo);
    if (!(events & threadArgs->event) || !notifyInfo || !notifyInfo->Count)
    {
        LOG4CXX_DEBUG(logger, L"Non-matching event " << LogStringConv(events) << L" ignored.");
        FreePrinterNotifyInfo(notifyInfo);
        continue;
    }
    [...]
I would really appreciate if anyone could give some hints on why I don't get any data regarding the print job. Thanks!
https://forums.embarcadero.com/thread.jspa?threadID=86657&stqc=true
Here's what I think is going on:
I observe two events in two different threads for the start and end of each print job. With some debugging and logging I recognized that FindNextPrinterChangeNotification doesn't always return only the two distinct events I've registered for, but also some 0 events in general: in those cases FindNextPrinterChangeNotification returns 0 for the events in pdwChange. If I print a simple text file using notepad.exe, I only get one event for the creation of the print job, with the value 256 in pdwChange and the data I need in notifyInfo to compare against my printed file name, and the comparison succeeds. If I print a PDF file using the current Acrobat Reader 11, I get two events: one has pdwChange of 256, but gives something like "local printdatafile" as the name of the started print job, which is obviously not the file I printed. The second event has a pdwChange of 0, but the name of the print job provided in notifyInfo is the file name I used to print. As I use FreePDF for testing purposes, I think the first printer event is something internal to my special setup.
The notifications for the deletion of a print job create 0 events, too. This time they are sent before FindNextPrinterChangeNotification returns 1024 in pdwChange, and very close in time after the start of the print job. In this case the exactly one generated 0 event contains notifyInfo with a document name that equals the file name I started printing. After the 0 event there's exactly one additional event with pdwChange of 1024, but without any data in notifyInfo.
I think Windows is using some mechanism which provides additional notifications for the same event as 0 events after the initial event has been fired with its real value, e.g. 256 for PRINTER_CHANGE_ADD_JOB. On the other hand, it seems that some 0 events are simply fired to provide data for an upcoming event which then gets the real value, e.g. 1024 for PRINTER_CHANGE_DELETE_JOB, but without any more data because that has already been delivered to the event consumer with a very early 0 event. Something like "Look, there's more for the last event." and "Look, something is going to happen with the data I'm providing now." Implementing such an approach, my prints now seem to work as expected.
Of course what I wrote doesn't fit to what is documented for FindNextPrinterChangeNotification, but it makes a bit of sense to me. ;-)
You're not checking for overflows or errors.
The documentation for FindNextPrinterChangeNotification says this:
If the PRINTER_NOTIFY_INFO_DISCARDED bit is set in the Flags member of the PRINTER_NOTIFY_INFO structure, an overflow or error occurred, and notifications may have been lost. In this case, no additional notifications will be sent until you make a second FindNextPrinterChangeNotification call that specifies PRINTER_NOTIFY_OPTIONS_REFRESH.
You need to check for that flag and do as described above, and you should also be checking the return code from FindNextPrinterChangeNotification.

ZMQ IOLoop instance write/read workflow

I am seeing some weird system behavior when using PyZMQ's IOLoop instance:
import time

import zmq
from zmq.eventloop import ioloop, zmqstream

# some_file (the special /proc file) and another_handler are defined elsewhere.

def main():
    context = zmq.Context()
    s = context.socket(zmq.REP)
    s.bind('tcp://*:12345')
    stream = zmqstream.ZMQStream(s)
    stream.on_recv(on_message)

    io_loop = ioloop.IOLoop.instance()
    io_loop.add_handler(some_file.fileno(), on_file_data_ready_read_and_then_write, io_loop.READ)
    io_loop.add_timeout(time.time() + 10, another_handler)
    io_loop.start()

def on_file_data_ready_read_and_then_write(fd, events):
    # Read content of the file and then write back
    some_file.read()
    print "Read content"
    some_file.write("blah")
    print "Wrote content"

def on_message(msg):
    # Do something...
    pass

if __name__ == '__main__':
    main()
Basically, the event loop listens on zmq port 12345 for JSON requests and reads content from a file when it is available (and when it is, manipulates the content and writes it back; the file is a special /proc/ kernel module file that was built for that).
Everything works well BUT, for some reason, when looking at the strace I see the following:
...
1. read(\23424) <--- Content read from file
2. write("read content")
3. write("Wrote content")
4. POLLING
5. write(\324324) # <---- THIS is the content that was sent using some_file.write()
...
So it seems the write to the file was not done in the order of the Python script: the write system call to that file was made AFTER the polling, even though it should have happened between lines 2 and 3.
Any ideas?
Looks like you're running into a caching problem. If some_file is a file-like object, you can try explicitly calling .flush() on it; the same goes for the ZMQ socket, which can hold messages for efficiency reasons as well.
As it stands, the file's contents are being flushed when the some_file reference is garbage collected.
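For example, a minimal sketch of the handler with an explicit flush (same hypothetical some_file object as in the question):

def on_file_data_ready_read_and_then_write(fd, events):
    some_file.read()
    print "Read content"
    some_file.write("blah")
    some_file.flush()  # push the buffered write to the OS now, not at garbage collection
    print "Wrote content"

With the flush in place, the write system call should show up in the strace between the two prints instead of after the next poll.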
Additional:
use the context manager logic that newer versions of Python provide with open()
with open("my_file") as some_file:
    some_file.write("blah")
As soon as the context finishes, some_file will automatically be flushed and closed.