How to call an exe on an insert event in a table - C++

On an insert event in a table, I need to fetch some data into a file using a C++ API and send that file to the client.
My current plan is to catch the "after insert" event using a SQL trigger and call the C++ exe from the trigger.
I have read in many places that it is not advisable to call an exe from a trigger. But I believe in my case it should not be a big issue, as my exe is not going to update anything; it will just fetch some data and generate a pipe-delimited file containing that data. Please let me know if this approach has any limitations.
Question:
What are the steps I should follow to call an exe from a trigger?
If I call my exe from the trigger, could it cause any issues in the database, such as a database hang?
Note: another approach that comes to mind is this:
We have our own C++ APIs that I can use to connect to the database, so I could write a daemon in C++ that checks the table continuously and generates the file whenever an insertion happens. The problem here is that my client does not want a daemon process, which needs constant monitoring and increases the maintenance work. They are suggesting an approach where the application runs only when the insertion event happens.
Please advise whether I should go with the trigger approach to call the exe, and let me know if there is a better approach.

I think a better approach would be to use a DBMS_SCHEDULER call in the trigger to create or schedule a job that will invoke your external application. This way you decouple your database operation from the external call, yet you can still start the program when necessary instead of polling the table.


Redirect stdout from Executable Custom Action to MSI log

I have a Custom Action that runs an executable within an MSI installer package. The exe is compiled as a console application and writes the necessary info to stdout.
1. I want that output redirected to the MSI log file.
2. I don't want the console to be shown during the installation.
For number 2 I suppose I can build for the Windows subsystem, which will not open a console at all. But then no output is shown even if I run the exe from a terminal (PowerShell/CMD).
For number 1 I thought of running the executable as a subprocess called from within a Custom Action DLL, but that is not possible since the exe is stored in the Binary table and won't be extracted when I need it. Moreover, it will have a random name.
The Custom Action's logic MUST be run as a separate process.
EDIT: Some colleagues wrote a free guide on installation testing. Maybe it will be useful in the future, to avoid such costly mistakes.
I don't think you can do it if you want to run the custom action as a separate process. I might be wrong. But I never tried this and it doesn't seem/sound possible.
Basically, the MSIEXEC process will own the handle of the log file created by the installation and I don't think you can share it with a separate process.
Why do you need to use a separate custom action process?
As a test, you could try to create an additional DLL custom action that runs asynchronously. The purpose of this custom action is simply to communicate with your EXE process and write into the log file any information you want to pass on from the EXE custom action. I never tried this approach, but if you have time to kill and really need the main logic to remain in the EXE custom action, you could give it a try.
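For what it's worth, the log-writing half of that idea is straightforward from a DLL custom action via MsiProcessMessage. Below is a minimal, untested sketch; the entry-point name, the log text, and how the DLL actually receives the EXE's output (pipe, temp file, etc.) are assumptions for illustration only.

    // Sketch of a DLL custom action that writes a line into the MSI log.
    // Link against msi.lib; "LogFromDll" is just an example entry-point name.
    #include <windows.h>
    #include <msi.h>
    #include <msiquery.h>

    extern "C" UINT __stdcall LogFromDll(MSIHANDLE hInstall)
    {
        // Field 0 of the record is the format string; [1] refers to field 1.
        PMSIHANDLE hRecord = MsiCreateRecord(1);
        MsiRecordSetString(hRecord, 0, TEXT("MyExeAction: [1]"));
        MsiRecordSetString(hRecord, 1, TEXT("output captured from the EXE process"));
        MsiProcessMessage(hInstall, INSTALLMESSAGE_INFO, hRecord);
        return ERROR_SUCCESS;
    }

Because only the custom action that holds the session handle can call MsiProcessMessage, the EXE would have to hand its output to this DLL (for example over a pipe) rather than write to the log directly.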

cfapi: CfDehydratePlaceholder seems to be stuck

My goal is that files can be hydrated or dehydrated on user request via the Explorer "Free up space" or "Always keep on this device" context menu entries. If I create a new placeholder file that is dehydrated from the beginning, everything works and I can hydrate it via the callback mechanism. But the other way around does not work for me. Inside Explorer the file is marked as unpinned and shown as syncing, but my application does not receive any callback from CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE or CF_CALLBACK_TYPE_NOTIFY_DEHYDRATE_COMPLETION. Then I tried to do it manually with CfDehydratePlaceholder, but I get exactly the same behaviour: nothing happens and the file remains in the syncing state. Even if I use CfSetInSyncState to set the state to CF_IN_SYNC_STATE_IN_SYNC, it stays in the syncing state.
Then I tried to implement a minimal example with the help of the Cloud Mirror sample, but I realized it shows the same behaviour: when I try to dehydrate a file, exactly the same thing happens there as well. From my perspective, it feels like cfapi expects an acknowledgement from the cloud service that it never gets.
But in OneDrive everything works as expected. What am I missing? Do I have to set some specific settings?
I had misunderstood the whole API; here is how I understand it now, to help other people who are struggling with it.
You have to register your sync root and connect your app to it. When connecting, you receive a CF_CONNECTION_KEY, which is needed to communicate with the virtual filesystem. Then you can add extended attributes to all files inside your sync root. The most important ones are custom attributes you can choose yourself to identify the file object in your app if needed, plus the PinState and SyncState.
The SyncState usually does not have to be changed by the app, apart from marking a file as synced after your app has processed it (you can do that at the moment you update your custom attributes), because when a file changes, the SyncState is updated automatically. The PinState declares which final state a file should have: for example, UNPINNED means the file should be dehydrated, and PINNED the opposite. It does not mean that the file necessarily has that state already.
My misunderstanding was that I assumed that unpinning a file would automatically dehydrate it, or that pinning a placeholder would send a request to the callback function I mentioned in my question. But this is not the case. Your app needs to find out via a file watcher (I can recommend my own FileWatcher project: https://github.com/neXenio/panoptes) that the attributes of specific files have changed, and then your app has to perform every step itself. As already mentioned, for dehydrating the app needs to call CfDehydratePlaceholder. For hydrating, you need to open a transfer session via CfGetTransferKey and then hydrate (send the data to the empty file) via CfExecute, which needs both the connection key and the transfer key.
Those are the basics. There is much more to tell, but I guess with this starting point everybody can figure out the rest.
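To make the dehydration step concrete, here is a rough, hedged sketch of the manual CfDehydratePlaceholder call once a watcher notices that a file was unpinned. The path is hypothetical, error handling is trimmed, and the offset/length semantics should be checked against the cfapi documentation for your SDK version.

    // Sketch: dehydrate a placeholder after a watcher reports it was unpinned.
    // Link against cldapi.lib; the path must live inside a registered sync root.
    #include <windows.h>
    #include <cfapi.h>
    #include <iostream>

    HRESULT DehydrateOnUnpin(PCWSTR path)
    {
        HANDLE fileHandle = nullptr;
        // Open the placeholder through cfapi so the platform coordinates access.
        HRESULT hr = CfOpenFileWithOplock(path, CF_OPEN_FILE_FLAG_EXCLUSIVE, &fileHandle);
        if (FAILED(hr))
            return hr;

        LARGE_INTEGER offset = {};  // start of the file
        LARGE_INTEGER length = {};
        length.QuadPart = -1;       // commonly used to mean "to end of file"

        hr = CfDehydratePlaceholder(fileHandle, offset, length,
                                    CF_DEHYDRATE_FLAG_NONE, nullptr);

        CfCloseHandle(fileHandle);
        return hr;
    }

    int wmain()
    {
        // Hypothetical file inside the sync root, purely for illustration.
        HRESULT hr = DehydrateOnUnpin(L"C:\\SyncRoot\\example.dat");
        std::wcout << L"CfDehydratePlaceholder returned 0x" << std::hex << hr << L"\n";
        return SUCCEEDED(hr) ? 0 : 1;
    }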

Django: Where to place an infinite loop

I am currently working on a project where I'd need to integrate a Django application with Mastodon, a federated Twitter-like service.
In order to interact with Mastodon, I use Mastodon.py package: https://mastodonpy.readthedocs.io/en/stable/#
I would need to monitor events occurring to a specific mastodon account, a bot account managed by the django application, using the streaming capabilities provided by the package: https://mastodonpy.readthedocs.io/en/stable/#streaming
So I would need to call one of these stream methods in an infinite loop. But I can't figure out where I should place it in Django. Is there a main loop somewhere where I could insert it?
You need to run this kind of thing in the background. There are many options you can choose from to set up background processing.
I find the following quite easy to set up and it might be a good start for you.
Django Background Tasks
Basically, you create a function/job which should be done in the background and annotate it with a special decorator to register it as a task.
You can then choose when it runs; in your case, you can run it repeatedly at a certain interval (no need for an "infinite" loop in your task).
It is a database-backed task queue, so you run a process which monitors your tasks and runs them at the chosen times. See the docs for details.
Maybe you can create a Django management command, place your infinite loop in there, and let supervisor handle the daemonization.
You can create a method to process whatever you want and call that method in a file such as urls.py (which is loaded only once when the server starts).
Infinite loops are not really recommended when working with Django, but if you cannot make it work with a method, a good solution would be to create a separate thread and run your infinite loop there.
This way the Django application stays active and unblocked, and you have the loop running and waiting for an event.
I honestly don't know whether this is a good solution performance- and speed-wise, but it does the job.

How to determine when files are done copying for further processing?

Alright, so to start: this is strictly for Windows, and I'd prefer to use C++ over .NET. I'm not opposed to boost::filesystem, although if it can be avoided in favor of the straight Windows API, I'd prefer that.
The scenario: an application on another machine, which I can't change, creates files in a particular directory on my machine, and I need to back them up and do some extra processing. Currently I've written a small application which sits and listens for change notifications on a target directory using the FindFirstChangeNotification and FindNextChangeNotification Windows APIs.
The problem is that while I get notified when files in the directory are created, modified, change size, etc., I only get a single notification and it does not tell me specifically which files changed. I've looked at ReadDirectoryChangesW as well, but it's the same story there, except that I can get slightly more specific information.
I can scan the directory and try to acquire locks or open the files to determine what specifically changed since the last notification and whether the files are available for further use. But when a large file is being copied, that isn't good enough: the file won't be ready to be manipulated, I won't get any further notifications after the first one, and so there is no way to tell when the copy has actually finished unless, after the first notification, I keep trying to acquire a lock until it succeeds.
The only other thing I can think of that would be less hackish would be some kind of end-token file, but since I don't have control over the application creating the files in the first place, I don't see how I'd go about doing that, and it's still not ideal.
Any suggestions?
This is a fairly common problem and one that doesn't have an easy answer. Acquiring locks is one of the best options when you cannot change the thing at the remote end. Another approach I have seen is to watch the file at intervals until its size doesn't change for an interval or two.
Other strategies include writing a zero-byte file as a trigger when the main file is complete, or writing to a temp directory and then moving the complete file to the real destination. But to be reliable, it must be the sender who controls this. As the receiver, you are constrained to watching the directory and waiting for the file to settle.
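A rough sketch of that size-settling idea, assuming you already know which path to watch (the poll interval and stability threshold are arbitrary):

    // Sketch: wait until a file's size stops changing for a couple of poll intervals.
    #include <windows.h>

    bool WaitForFileToSettle(const wchar_t* path, DWORD pollMs = 1000, int stableChecks = 2)
    {
        ULONGLONG lastSize = 0;
        bool haveLast = false;
        int stable = 0;

        while (stable < stableChecks)
        {
            Sleep(pollMs);

            WIN32_FILE_ATTRIBUTE_DATA info;
            if (!GetFileAttributesExW(path, GetFileExInfoStandard, &info))
                return false;  // file vanished or became inaccessible

            ULONGLONG size = (static_cast<ULONGLONG>(info.nFileSizeHigh) << 32)
                             | info.nFileSizeLow;

            stable = (haveLast && size == lastSize) ? stable + 1 : 0;
            lastSize = size;
            haveLast = true;
        }
        return true;
    }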
It looks like ReadDirectoryChangesW is going to be your best bet. For each file copy operation, you should receive FILE_ACTION_ADDED followed by a bunch of FILE_ACTION_MODIFIED notifications. On the last FILE_ACTION_MODIFIED notification, the file should no longer be locked by the copying process. So, if you try to acquire a lock after each FILE_ACTION_MODIFIED notification, the attempt should fail until the copy completes. It's not a particularly elegant solution, but there doesn't seem to be any notification for when a file copy completes.
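To illustrate the lock-probing part, here is a small sketch that tries to open the file with no sharing after each notification; a sharing violation means the copier still has it open (the retry policy is up to you):

    // Sketch: probe whether the copying process still has the file open.
    // Opening with no share flags fails with ERROR_SHARING_VIOLATION while the
    // copy is in progress and succeeds once the writer has closed the file.
    #include <windows.h>

    bool IsFileReadyForProcessing(const wchar_t* path)
    {
        HANDLE h = CreateFileW(path,
                               GENERIC_READ,
                               0,            // no sharing: fail if anyone has it open
                               nullptr,
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL,
                               nullptr);
        if (h == INVALID_HANDLE_VALUE)
            return false;  // typically ERROR_SHARING_VIOLATION while copying

        CloseHandle(h);
        return true;
    }

You would call this after each FILE_ACTION_MODIFIED notification (or on a timer) and only start the backup/processing once it returns true.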
You can process the data once the file is closed, right? So the task is to track when the file is closed. This can be done using a file system filter driver: you can write your own or you can use our CallbackFilter product.

Win32 C++ ReadDirectoryChangesW "creation" and "modification" of file difference detect?

Here is the problem: I monitor a directory using the Win32 ReadDirectoryChangesW function, and I need to distinguish between newly created files and modified files. But there are problems... as always :(
Cases:
I monitor the directory for new/modified files (FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_SIZE). Problem: after a file is created, both a new-file event and a modify event are triggered, but I need only one. How can I avoid that? When a file is modified, I get what I want :).
I monitor the directory only for new files (FILE_NOTIFY_CHANGE_FILE_NAME) - no problem.
I monitor the directory only for modified files (FILE_NOTIFY_CHANGE_SIZE). Problem: when a new file is created, a modify action is fired along with the file-creation event. How can I avoid that?
Of course, I have implemented some workarounds. But I want to know if there is any elegant way of handling the problems I described.
You should be catching FILE_NOTIFY_CHANGE_LAST_WRITE, not FILE_NOTIFY_CHANGE_SIZE, for a modified file. Files may be modified without the size changing.
You should also keep a queue of changes and the times they happened, and only process the queue after there have been no changes in the past 1-2 seconds. Some applications can do very strange things when creating or modifying files, and you'll most likely want to special-case popular applications if you plan on using this code in the wild.
ReadDirectoryChangesW isn't one of the friendliest Win32 API functions. You probably can't get around receiving two events on file creation; I'm not completely sure whether you'll get an extra modify for FILE_NOTIFY_CHANGE_LAST_WRITE on creation, but I think you probably will. Using the queue approach will allow you to easily throw out the extra event if it has the same timestamp as the creation event.
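As an illustration of that queue/settling idea, here is a hedged sketch of a watch loop that records the last event time per file name and only reports names that have been quiet for a couple of seconds. It uses a blocking ReadDirectoryChangesW call for brevity; a real implementation would typically use overlapped I/O so the pending queue can also be flushed while no new events arrive. The directory path is hypothetical.

    // Sketch: watch a directory with ReadDirectoryChangesW and debounce events,
    // reporting a file only after it has been quiet for `quietPeriod`.
    #include <windows.h>
    #include <chrono>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int wmain()
    {
        const wchar_t* directory = L"C:\\watched";  // hypothetical path
        const auto quietPeriod = std::chrono::seconds(2);

        HANDLE dir = CreateFileW(directory, FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 nullptr, OPEN_EXISTING,
                                 FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (dir == INVALID_HANDLE_VALUE)
            return 1;

        std::map<std::wstring, std::chrono::steady_clock::time_point> pending;
        std::vector<BYTE> buffer(64 * 1024);

        for (;;)
        {
            DWORD bytes = 0;
            if (ReadDirectoryChangesW(dir, buffer.data(), (DWORD)buffer.size(), FALSE,
                                      FILE_NOTIFY_CHANGE_FILE_NAME |
                                      FILE_NOTIFY_CHANGE_LAST_WRITE,
                                      &bytes, nullptr, nullptr) && bytes > 0)
            {
                auto now = std::chrono::steady_clock::now();
                auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buffer.data());
                for (;;)
                {
                    std::wstring name(info->FileName,
                                      info->FileNameLength / sizeof(WCHAR));
                    // FILE_ACTION_ADDED and FILE_ACTION_MODIFIED both just bump
                    // the timestamp, which collapses the duplicate creation events.
                    pending[name] = now;
                    if (info->NextEntryOffset == 0)
                        break;
                    info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                        reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
                }
            }

            // Report files that have been quiet long enough.
            auto now = std::chrono::steady_clock::now();
            for (auto it = pending.begin(); it != pending.end();)
            {
                if (now - it->second >= quietPeriod)
                {
                    std::wcout << L"Settled: " << it->first << L"\n";
                    it = pending.erase(it);
                }
                else
                {
                    ++it;
                }
            }
        }
    }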