We have been transferring files from one folder to another using CNTFTEAgent. Sometimes the source file appears to be locked in the source folder (MQ Explorer says "file is not found" even though the file does exist), so the transfer ends up marked as "failed".
We decided to use "exits" to retry such failed transfers.
The last exit to fire is SourceTransferEndExit, but it does not contain information about the file or the filespace the file should be delivered to.
It does, however, contain a transfer ID. So my question is: how can we retry the transfer attempt using the Java API, knowing only the transfer ID? Is that possible at all?
The transfer ID can be found in TransferMetaInfo and looks like this:
com.ibm.wmqfte.TransferId => 414d5120434e54465445514d47522020ce34465321038a03
You can retry using Ant: the 'rcproperty' property holds the transfer result code. If the result code is not '0', retry the transfer; a sketch follows below.
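For illustration, a hedged sketch of what that could look like with the WMQFTE Ant tasks (the queue manager, agent, and path values below are placeholders, and the exact attribute names should be checked against your FTE version's documentation): fte:filecopy stores the transfer result code in the property named by rcproperty, and a non-zero code can be turned into a build failure that a retry mechanism picks up.

<project xmlns:fte="antlib:com.ibm.wmqfte.ant.taskdefs" default="copy">
  <target name="copy">
    <!-- Submit the transfer; the result code ends up in ${copy.rc} -->
    <fte:filecopy cmdqm="QM1@host@1414@SYSTEM.DEF.SVRCONN"
                  src="AGENT1@QM1:/export/data.txt"
                  dst="AGENT2@QM2:/import/data.txt"
                  rcproperty="copy.rc"/>
    <!-- Fail the build on a non-zero result so the transfer can be retried -->
    <fail message="Transfer failed with rc ${copy.rc}">
      <condition>
        <not><equals arg1="${copy.rc}" arg2="0"/></not>
      </condition>
    </fail>
  </target>
</project>

A wrapper such as Ant's retry task can then resubmit the failed target, bearing in mind that Ant properties are immutable, so each attempt may need a fresh rcproperty name or a separate Ant invocation.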
I use Boost.Log with this configuration:
[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs"
FileName="logs/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=All
When the date changes to the next day, my program crashes with this exception:
terminate called after throwing an instance of
'boost::filesystem::filesystem_error'
what(): boost::filesystem::last_write_time: No such file or directory: "/root/work/hy-trade/bin/debug/logs/quo.20181027T173106.Sat.00000.log.detail"
I checked my disk space and found that the free space was less than the MinFreeSpace value in the config, and that the file quo.20181027T173106.Sat.00000.log.detail does not exist.
How can I avoid this exception?
The Boost version is 1.67.
Thank you.
It looks like something had already deleted the log file before it was rotated. It may have been an external process, or Boost.Log itself.
With Boost.Log, this can happen if you have multiple file sinks that write log files into the same directory, which is also used as the target directory for the rotated files (i.e. the FileName parameter includes the path specified in the Target parameter, and there are multiple sinks that use that path). The problem is that, with ScanForFiles=All, the library scans the target directory for any files but does not update the file counter used for creating new files. This means that if the file "quo.20181027T173106.Sat.00000.log.detail" was present in that directory when your process started, it was considered an old file, even though your process would still be writing new logs to it. Then, when a file rotation happens and the storage limits are exceeded (e.g. MinFreeSpace is not satisfied), that file may be deleted. The rotation has to happen on another sink that also stores files in the same "logs" directory.
To solve the problem you can do one of the following:
Use ScanForFiles=Matching in your settings so that the file counter is updated after scanning. This will make sure that new log files have unique names and don't get deleted prematurely.
Write log files to a different directory from your target storage, i.e. specify FileName so that it doesn't point to the same directory as Target. Both fixes are sketched below.
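For instance, your sink settings with both suggestions applied might look like this (the directory names are just illustrative):

[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs/archive"
FileName="logs/active/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=Matching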
Also, you may want to add exception handling to avoid crashing in case of errors (which may still happen for whatever reason on filesystem operations); see the exception handling sections of the Boost.Log documentation for more info.
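As a minimal sketch of that last point (assuming the stock Boost.Log headers), you can install an exception suppressor on the logging core so that errors thrown while processing log records no longer terminate the process:

#include <boost/log/core.hpp>
#include <boost/log/utility/exception_handler.hpp>

// Swallow exceptions (e.g. filesystem errors during file rotation)
// thrown while log records are being processed.
boost::log::core::get()->set_exception_handler(
    boost::log::make_exception_suppressor());

Note that suppressing exceptions silently drops the affected log records; a custom handler passed to set_exception_handler can report them instead.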
I made a source file datatype change in the Source Analyzer. I realized that it had made the mapping invalid. I ran the mapping and it failed. I then reverted the change, validated the mapping, checked in the mapping, validated the workflow, and checked in the workflow.
Now I am getting the error:
Severity Timestamp Node Thread Message Code Message
INFO 7/23/2015 10:40:03 AM node01_CSADevelopment READER_1_4_1 FR_3055 Reading input filenames from the indirect file [<input_directory_folder>/<input_file>].
Severity Timestamp Node Thread Message Code Message
ERROR 7/23/2015 10:40:03 AM node01_CSADevelopment READER_1_4_1 FR_3000 Error opening file [<input_file_folder>/<header_of_the_input_file>]. Operating system error message [No such file or directory].
Here, "input_file" is the file I wanted to load and "header_of_the_input_file" is the header of that input file.
I don't understand why this is happening. I only made a small change and then reverted it.
The error is just saying that the filenames listed in the indirect file are not found. So you need to make sure all the source files are actually present in the "input_file_folder".
There is a property in the session to configure the source file as indirect. An indirect file contains a list of source filenames; Informatica reads all the files listed and loads their data (see the example below). If you think you have inadvertently made the source file indirect, you can change the option in the session properties (Mapping tab -> Source Qualifier).
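For illustration, an indirect file is just a plain-text list of source file paths, one per line (the names here are hypothetical):

/input_file_folder/orders_20150721.dat
/input_file_folder/orders_20150722.dat
/input_file_folder/orders_20150723.dat

That also explains the error message above: Informatica is treating the header row of your data file as one of these filenames and then fails to open a file by that name, which suggests the session is reading your data file as an indirect file.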
It does not have anything to do with the datatype change and reverting it.
I need my program to start a new log file on each execution. I want to use POCO, as I am already using this library in the code.
As I see it, I have two possibilities, but I do not know how to configure either of them using a channel in POCO:
Just starting a new file each time the program starts
Having the actual file name (not the rotated one, but the one currently being written) contain the timestamp of when it was created.
If I am not wrong, neither of these is possible with FileChannel. I guess I could write a new POCO channel but, obviously, I would prefer something that already works.
Does anybody have an idea for this? I tried to figure it out using two channels, but I do not see how.
Thank you.
FileChannel has a rotateOnOpen property. If you set it to true, the channel will create a new file every time it is opened; see the FileChannel documentation. If you do not have this property available, you are using an older version of POCO; in that case, you can simply open the FileChannel with a newly generated name every time your application starts:
std::string name = yourCustomNameGenFunc();
AutoPtr<FileChannel> pChannel = new FileChannel(name);
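For the rotateOnOpen route, a minimal sketch (assuming POCO 1.6 or later; the file name is just an example) could look like this:

#include "Poco/AutoPtr.h"
#include "Poco/FileChannel.h"
#include "Poco/Logger.h"

using Poco::AutoPtr;
using Poco::FileChannel;
using Poco::Logger;

int main()
{
    AutoPtr<FileChannel> pChannel(new FileChannel("app.log"));
    // Archive the existing file and start a fresh one every time
    // the channel is opened, i.e. on each program start.
    pChannel->setProperty("rotateOnOpen", "true");
    Logger::root().setChannel(pChannel);
    Logger::root().information("this run writes to a fresh log file");
    return 0;
}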
I'm using google glog. When I'm debugging the program, every time the process restarts a new log file is generated, identified by the new pid as the filename suffix, like this:
ied_config.20131220-142934.4524
ied_config.20131220-171227.3948
ied_config.20131225-170117.7032
ied_config.20131225-170131.5200
ied_config.20131225-171450.7820
ied_config.20131225-172336.4116
ied_config.20131225-172924.6120
ied_config.20131225-173000.2980
ied_config.20131225-173037.1960
ied_config.20131225-173144.7304
ied_config.20131226-095843.1012
smv_client.20131219-082120.7184
smv_client.20131219-162339.5508
smv_client.20131219-163055.6156
smv_client.20131219-163155.4892
smv_client.20131219-163206.5576
smv_client.20131219-163216.6340
smv_client.20131219-163236.6952
smv_client.20131219-163307.7940
smv_client.20131219-163317.4920
smv_client.20131219-163347.6556
smv_client.20131219-163408.5124
smv_client.20131219-163428.2644
smv_client.20131219-163448.6040
smv_client.20131219-163529.6948
smv_client.20131219-163539.1592
smv_client.20131219-163549.3776
smv_client.20131219-172949.5412
smv_client.20131219-173000.4180
smv_client.20131219-173010.7432
smv_client.20131220-170628.636
smv_client.20131220-170930.3904
smv_client.20131226-095841.1296
I want to join these logs into just one file per program, but I can't find the right configuration for glog. Any help?
If you want to change the name of log files, you may need to call SetLogDestination():
google::SetLogDestination(google::INFO, "/var/tmp/another_destination.INFO");
Since release 0.5 (May 8, 2021), glog can "allow a log file to be simply named 'foobar.log' with no appended string" (#124):

FLAGS_timestamp_in_logfile_name = false;  // set before the log file is created
google::SetLogDestination(google::INFO, "/var/tmp/another_destination.INFO");
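Putting it together, a minimal sketch (the destination path is just an example) that keeps the log file name stable across restarts:

#include <glog/logging.h>

int main(int argc, char* argv[]) {
    google::InitGoogleLogging(argv[0]);
    // Requires glog >= 0.5: drop the timestamp/pid suffix so every
    // run uses the same file name instead of creating a new file.
    FLAGS_timestamp_in_logfile_name = false;
    google::SetLogDestination(google::INFO, "/var/tmp/smv_client.log");
    LOG(INFO) << "log lines from this run";
    return 0;
}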
I'm trying to send some files (a zip and a Word doc) to a directory on a server using ftplib. I have the broad strokes sorted out:
import ftplib

session = ftplib.FTP('ftp.server', 'user', 'pass')
# Open both files in binary mode ('rb'); text mode can truncate or
# corrupt binary data such as zip and doc files.
filewpt = open('filename.zip', 'rb')
readfile = open('readme.doc', 'rb')
session.cwd('new/work/directory')
session.storbinary('STOR filename.zip', filewpt)
session.storbinary('STOR readme.doc', readfile)
print "filename.zip and readme.doc were sent to the folder on ftp"
readfile.close()
filewpt.close()
session.quit()
This may give someone else what they are after, but not me. I have been using FileZilla as a check to make sure the files were transferred. When I see they have made it to the server, both are way smaller than the originals, or even zero K in the case of the readme.doc file. I'm guessing this has something to do with the fact that I stored the files in 'binary transfer mode' <--- whatever that means.
This is where my problem lies. I have no idea (yet) what is meant by binary transfer mode. Is it simply that I have to use retrbinary to return the files to their original state?
Could someone please explain to me like I'm a two year old what has happened to my files? If there's any more info required, please let me know.
This is a fantastic resource. It solved most of my problems. I'm still trying to work out the intricacies of FTP, but I guess I will save that for another day. The link below builds a function to effortlessly upload files to an FTP server without the partial-upload problem that I've seen experienced by more than one Stack Exchanger.
http://effbot.org/librarybook/ftplib.htm