IBM MQ v9 - Managed File Transfer - Initiate MFT once file is placed in folder

How can I initiate a file transfer as soon as a file is placed in a folder, using IBM MQ v9.0 Managed File Transfer? I can already initiate transfers with the methods below (tried and tested, working fine):
Transfer by an initiation file
Transfer by a schedule
Transfer by a monitor
A file monitor is fine, but the trigger file contains the details of the files to be transferred: when the trigger file is placed in the folder, the files specified in it are transferred.
I need a solution where, once a file is placed in the folder, the file itself is fetched and transferred.
As per the Purpose section of this IBM page, https://www.ibm.com/support/knowledgecenter/en/SSFKSJ_8.0.0/com.ibm.wmqfte.doc/create_monitor_cmd.htm:
"For example, you can use a resource monitor in the following way: An
external application puts one or more files in a known directory and
when processing is complete, the external application places a trigger
file in a monitored directory. The trigger file is then detected and a
defined file transfer starts, which copies the files from the known
directory to a destination agent".
That is, we have to place the set of files to be transferred and then place a (second) trigger file to initiate the transfer. My question is: is there a way to initiate the transfer without the second file, as soon as a file is placed in the transfer directory?
Any help is very much appreciated.
Regards
Yasothar
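
For context, a resource monitor can also trigger on the data files themselves (no second trigger file) by using a file-match pattern, with variable substitution passing each matched file into the transfer task. A minimal sketch, assuming hypothetical agents AGENT1/AGENT2 on queue managers QM1/QM2 and placeholder directories:

# Generate a transfer task template; the monitor substitutes ${FilePath} with
# the full path of each matched file (single quotes keep the shell from expanding it)
fteCreateTransfer -gt /tmp/task.xml -sa AGENT1 -sm QM1 -da AGENT2 -dm QM2 -dd /dest/dir '${FilePath}'
# Monitor the landing directory and fire on every file matching the pattern
fteCreateMonitor -ma AGENT1 -mm QM1 -md /incoming/dir -mn FILE_MON -mt /tmp/task.xml -tr "match,*.pdf" -pi 30 -pu seconds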

Related

Putting a TWS file dependency on an AWS S3 stored file

I have an ETL application which is supposed to migrate to AWS infrastructure. The scheduler used in my application is Tivoli Workload Scheduler (TWS), we want to keep using it on the cloud as well, and the schedules have file dependencies.
Now, when we move to AWS, the files to be watched will land in an S3 bucket. Can we put an OPEN dependency on files in S3? If yes, what would be the hostname (HOST#Filepath)?
If not, which services could serve the purpose? I have both time and file dependencies in my SCHEDULES.
E.g. a file might get uploaded to S3 at 1 AM. At 3 AM my schedule is triggered and looks for the file in the S3 bucket. If it is present, execution starts; if not, the job should wait as per the other parameters in TWS.
Any help or advice would be nice to have.
If I understand this correctly, the job triggered at 3 AM should identify all files uploaded within, e.g., the last 24 hours.
You can list the S3 objects and filter for everything uploaded within a specific period of time.
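For example, with the AWS CLI the listing can be filtered by LastModified; a sketch where the bucket, prefix and timestamp are placeholders:

# List keys uploaded after a given point in time (ISO timestamps compare lexically)
aws s3api list-objects-v2 --bucket my-bucket --prefix incoming/ --query "Contents[?LastModified>='2019-06-01T01:00:00Z'].Key" --output text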
A better solution would be to create an S3 upload trigger that sends a notification to SQS, and have your code inspect the queue depth (number of messages) there and process the files one by one. An additional benefit is the assurance that all items are processed, without having to worry about time overlaps.
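Roughly, after wiring an S3 event notification to an SQS queue, the depth check could look like this (the queue URL is a placeholder):

# Messages waiting roughly equals uploaded objects not yet processed
aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/uploads-queue --attribute-names ApproximateNumberOfMessages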

How to auto-refresh the remote list after uploading a local folder to a Storj bucket through FileZilla?

In the FileZilla client, when a local folder is dragged and dropped into a remote directory, which part of the FileZilla code recursively sends the commands to transfer (upload) all local files and sub-folders (within the selected local folder) to the remote end?
My main goal is to insert a command to either list or refresh the remote directory once the upload is complete. This is already done for the FTP and SFTP protocols, but I am not able to do the same for the Storj feature.
I have tried adding the "list" or refresh commands at the following points in the code:
at the end of the "put" command within /src/storj/fzstorj.cpp file
after the "Transfers finished" notification in void CQueueView::ActionAfter(bool warned) function in /src/interface/QueueView.cpp file
Reason: this notification is displayed when all files and subfolders of a selected local folder have been uploaded to a Storj bucket.
I also tried tracking the files that take part in the process, mainly those within the /src/engine/storj folder, e.g. file_transfer.cpp, which sends the "put" command through the int CStorjFileTransferOpData::Send() function.
This did not help much.
While checking what issues commands to the Storj engine, I observed it is done by calling void CCommandQueue::ProcessCommand(CCommand *pCommand, CCommandQueue::command_origin origin) in /src/interface/commandqueue.cpp.
The expected outcome is auto-refreshing of the Storj bucket or upload path once all desired files and sub-folders have been uploaded from the local end through the FileZilla client.
Any hint towards the solution would be of great help to me.
Thank You!

AWS s3 sync to upload if file does not exist in target

I have uploaded about 1,000,000 files from my local directory to s3 buckets/subfolders and some of them have failed.
I would like to use the 'sync' option to capture those that did not make it the first time. The s3 modified date is the date/time my file was uploaded (which differs from my source file date/times).
As I understand, sync will upload a file to the target if it does not exist, if the file date has changed, or if the size is different.
Can I modify the command line to NOT use the file date as a consideration for syncing? I ONLY want to copy a file if it does not exist.
aws s3 sync \\localserver\localshare\folder s3://mybucket/Folder1
aws s3 sync will compare the "last modified time".
For the objects in S3, there is only one timestamp, LastModified, which is when you uploaded the files.
Your local files (assuming a POSIX Linux file system) have three timestamps: last-access, last-modified and last-status-change. Only the last-modified time is used for the comparison.
Now suppose you uploaded 1M files and some of them failed. For the files that uploaded successfully, the S3 LastModified is not older than the local last-modified time, so another sync will not upload them again (sync will still validate whether those files are identical, and those validations will take considerably long for 1M objects).
In the meantime, you can use the aws s3 sync --size-only argument. It fits what you described, but be sure to check whether it is really what you need: in many cases files keep the same size even after being modified (intentionally or accidentally), and --size-only will skip such same-size files.
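Applied to your command, the sketch would be (with the same caveat about same-size modified files):

# Upload only objects that are missing or differ in size; timestamps are ignored
aws s3 sync \\localserver\localshare\folder s3://mybucket/Folder1 --size-only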

Does the Amazon S3 sync command upload the entire modified file again, or just the delta?

My system generates large log files continuously and I want to upload all of them to Amazon S3. I am planning to use the s3 sync command for this. My system appends logs to the same file until it is about 50 MB, and then it creates a new log file. I understand that the sync command will sync the modified local log file to the S3 bucket, but I don't want to upload the entire log file whenever the file changes, as the files are large and sending the same data again and again will consume my bandwidth.
So I am wondering: does the s3 sync command send the entire modified file, or just the delta?
The documentation implies that it copies whole updated files:
Recursively copies new and updated files
Plus, there would be no way to compute a delta without downloading the file from S3, which would effectively double the cost of an upload, since you'd pay both download and upload costs.
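If bandwidth is the main concern, one workaround is to sync only the rotated (closed) log files and skip the file currently being appended to. A sketch, assuming the active file is named current.log and the paths are placeholders:

# Rotated files never change again, so each one is uploaded exactly once
aws s3 sync /var/log/myapp s3://mybucket/logs --exclude "current.log"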

Limit MQFTE file transfer to one file at a time

I have an MQFTE setup where we receive files from an external vendor. The files get dumped on a server in the DMZ, and we have an MQFTE agent that picks the files up from that server and drops them on our server.
We receive files in "sets", i.e. each incoming file has an associated XML file that describes and contains metadata about it, e.g. applicationform.pdf and applicationform.xml. The final application stores the PDF file based on the data/metadata in the XML.
Since the trigger fires for each incoming file, we check in the trigger whether or not we've received both the XML file and the content file (e.g. the PDF).
However, I don't think this is the best approach, as it requires a lot of bookkeeping code to handle concurrency issues when both files arrive at the same time. Is there a way to:
1. Restrict the trigger so that it only fires when both files have arrived? From my research this does not seem possible.
2. Configure the agent on the server so that it only receives one file at a time? From the documentation it seems this can be achieved, but only on the agent initiating the transfer, not on the agent receiving it. The documentation hints at monitorMaxResourcesInPoll and the -bs parameter, but those would apply to the source agent, I guess. Since that agent is shared with multiple systems, this would impact them as well.
Also, I would appreciate any tips and suggestions or even alternative solutions to best meet the requirement.
I don't think there is a way to check for both files existing before the monitor triggers. What some users do is send all of the files they want to transfer, and then finally put a 'marker' file in the directory which the resource monitor looks for. Because the marker file is only written after all other files are ready to be sent, the monitor only transfers the files when they're all there.
In answer to 2), you could set maxDestinationTransfers to 1 on the destination agent to limit it to receiving a single transfer at a time. If a transfer contains multiple files, they will be transferred in sequence, so the destination is really only receiving one file at a time. monitorMaxResourcesInPoll simply limits the number of files the monitoring agent picks up from the source directory per monitor poll. You could set that to 1, but if you want to transfer the PDF and the XML file in the same transfer you'd need to set it to 2, so it's probably not the setting you want to use.
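For reference, maxDestinationTransfers is set in the destination agent's agent.properties file; a sketch (the agent typically needs a restart to pick up the change):

# agent.properties on the destination agent (value is an assumption for this scenario)
maxDestinationTransfers=1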