Archive file after MFT transfer - websphere-mq-fte

We're using IBM MQFT for a file transfer between two systems, and we have a requirement to move the original file to a different location on the source file system once the transfer is complete.
From what I can see in the Knowledge Center MFT documentation, we may need to write a user exit program to do this. I guess I would set the source_file_disposition to "leave" and use the exit program to move the file.
Is that the correct way to go about it, or is there a simpler way to do this that I'm not seeing?
Regards

There are a few options for you.
You can run a command, a shell script, or an Ant script as part of the postSrc parameter of the fteCreateTransfer command. Once the transfer is complete, MFT will run the specified command to move the files from the source directory to another directory.
As you mentioned, a Java exit can be written to move the files after transfer completion.
You may want to refer to the sample Java exit and Ant script here.

Related

Is there such a thing as a posix lstatat call?

I am working on a FUSE file system, and I have a file descriptor to the directory prior to mounting the FUSE file system on top of it. I want to use that handle to read/write files with state information underneath the FUSE-mounted file system, and then be able to access that data the next time I mount it. So I cannot use the normal lstat call, since it won't see the files I want to access but the files FUSE exposes instead. What I need is the equivalent of fstatat that works for symbolic links, since fstatat apparently gives the stat info on the file the symbolic link points to, not the symbolic link itself. Yet I cannot find documentation for such a function. Does it exist? Am I thinking of an incorrect name?
There is no lstatat() function in POSIX; however, fstatat() takes a flags argument which can include AT_SYMLINK_NOFOLLOW, which may do what you're looking for.

How to run a batch file in Siebel eScript and execute a PL/SQL package through a batch file by passing a variable and getting the output

My requirement is to execute a PL/SQL package through Siebel eScript. For that, I am planning to write a batch file which can be invoked from the eScript.
In the batch file, I want to execute the package, but I am stuck at passing the input to the package and getting the output from it. Please help me with the code.
Thanks.
The quickest answer might be using the Clib Send Command method (Clib.system). This can be used to run commands on the Siebel server, on any OS, e.g.:
Clib.system("dir /p C:\\Backup");
So you could try invoking your batch file:
Clib.system("C:\\custom.bat arg1 arg2");
You will have to handle the variables in the batch (or .sh) file and invoke your PL/SQL from there.
The flip side is that there is no direct way of getting any output from the command line back to Siebel.
https://docs.oracle.com/cd/E14004_01/books/eScript/C_Language_Reference101.html#wp1008859
You can get the output back into Siebel indirectly by having the command pipe it to a text file and having Siebel process that file.
The only way to do this is to call the batch file with Clib.system and have it save its output into a file. You then need a business service or workflow to read the file and delete it.
This will work reliably if you are careful with the file naming to avoid concurrency issues.

How to implement MSBUILD file tracking feature (Tracker.exe) for a not native VC compiler (GCC) declared in .props files?

After searching for hours on the internet, I could not find any information or documentation about this. Does anyone know if there is a way to get this done?
It would be great to get a hint in the right direction.
Thanks in advance,
Alex
You have to write a task that uses file tracking for incremental builds.
The .NET API seems to be here: FileTracker Class
If I were you, I'd try to disassemble the Microsoft.Build.CPPTasks.Common.dll assembly - class Microsoft.Build.CPPTasks.TrackedVCToolTask - to get an idea of how it works.
So here's my off-the-cuff idea of how it might work:
I think Tracker.exe starts a child process (your tool, as a suspended process).
Then it patches the kernel32.dll WinAPI functions to track all read and write operations (so I think they patch CreateFile and CloseHandle).
Then it resumes the process.
After the process has finished, you get a list of the files the child process used.
Write the list of input files that were used to produce each output file into a log file.
The second time your task is invoked, you can optimize the build: because you now have the file mapping, you should be able to decide whether to call your tool for a given output or skip it. You can skip it when the output file's time-stamp is newer than all of the input files and no compilation setting has changed (project file time-stamp - or something more sophisticated).
File Tracking

Monitoring a directory for subdirectory complete creation and then launching another process, c++

So I have an idea that I would like to implement and it's as follows:
Monitor a specific directory.
Once a sub-directory is not only created but completed (i.e. a folder that's being downloaded or copied has just finished), the code calls a procedure or scheme to compress the folder.
I have a rough idea of implementing this using ReadDirectoryChangesW. However, my question is how to wait for changes and, when a change happens, wait until it is complete. The second question is how to identify the subfolder that has completed, so I can call the compression scheme and supply it as an argument.
Thank you.
Since it's labelled "winapi", just set the NTFS compression attribute on the subdirectory as soon as you see it. Any new files in that directory will be automatically compressed as they're created.

inotify : How to be notified of new files in the directory after they are complete in transfer?

A file is copied from machine1/dir1 to machine2/dir2. I have added an inotify watch on dir2 for any new files created. Now, if the file is large, it might take a few seconds for it to be fully written on the new machine. If I'm not wrong, inotify will notify the application as soon as it detects an event. So if the file's completeness has to be checked, how should it be done?
Save the downloaded file under a temporary filename (or in another directory) and rename it to the expected filename once the file has been transferred successfully.
Nginx, for example, uses this method to store cached data:
Cached data is first written to a temporary file which is then moved to its final location in the cache directory. A cheap and atomic rename syscall is performed instead of a full file copy, so it's better to use the same file system for both locations.
There's no way to answer this because it depends on the application environment and requirements. It might do to see that the file hasn't been modified for 60 seconds. It might require checking every few seconds. It depends.
Using IN_CLOSE_WRITE works if it's only an scp from one machine to another. Otherwise, it depends on the way the file is uploaded from one machine to the other. If it's a single open and close, IN_CLOSE_WRITE is the way to do it.
Both of the answers above make sense, depending on how the transfer is done.