How to print logs into only one file with google glog - C++

I'm now using google glog. While debugging the program, every time the process restarts a new log file is generated, identified by the new PID as the filename's suffix, like this:
ied_config.20131220-142934.4524
ied_config.20131220-171227.3948
ied_config.20131225-170117.7032
ied_config.20131225-170131.5200
ied_config.20131225-171450.7820
ied_config.20131225-172336.4116
ied_config.20131225-172924.6120
ied_config.20131225-173000.2980
ied_config.20131225-173037.1960
ied_config.20131225-173144.7304
ied_config.20131226-095843.1012
smv_client.20131219-082120.7184
smv_client.20131219-162339.5508
smv_client.20131219-163055.6156
smv_client.20131219-163155.4892
smv_client.20131219-163206.5576
smv_client.20131219-163216.6340
smv_client.20131219-163236.6952
smv_client.20131219-163307.7940
smv_client.20131219-163317.4920
smv_client.20131219-163347.6556
smv_client.20131219-163408.5124
smv_client.20131219-163428.2644
smv_client.20131219-163448.6040
smv_client.20131219-163529.6948
smv_client.20131219-163539.1592
smv_client.20131219-163549.3776
smv_client.20131219-172949.5412
smv_client.20131219-173000.4180
smv_client.20131219-173010.7432
smv_client.20131220-170628.636
smv_client.20131220-170930.3904
smv_client.20131226-095841.1296
I want to combine these logs into just one file for each program, but I can't find the right configuration for glog. Any help?

If you want to change the name of log files, you may need to call SetLogDestination():
google::SetLogDestination(google::INFO, "/var/tmp/another_destination.INFO");

From release 0.5 on May 8, 2021:
Allow a log file to be simply named "foobar.log" with no appended string (#124)
google::SetLogDestination(google::INFO, "/var/tmp/another_destination.INFO");
FLAGS_timestamp_in_logfile_name = false;
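Putting it together, here is a minimal sketch assuming glog 0.5 or newer; the destination path and prefix are just examples, adjust them to your own setup:
#include <glog/logging.h>

int main(int argc, char* argv[]) {
    FLAGS_timestamp_in_logfile_name = false;   // requires glog >= 0.5
    google::InitGoogleLogging(argv[0]);
    // example destination; pick your own directory and prefix
    google::SetLogDestination(google::INFO, "/var/tmp/ied_config.INFO");
    LOG(INFO) << "all runs of this program now write to the same file";
    return 0;
}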

Related

Exception from Boost.Log when the date changes to the next day

I use Boost.Log with this config:
[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs"
FileName="logs/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=All
When the date changes to the next day, my program crashes with this exception:
terminate called after throwing an instance of
'boost::filesystem::filesystem_error'
what(): boost::filesystem::last_write_time: No such file or directory: "/root/work/hy-trade/bin/debug/logs/quo.20181027T173106.Sat.00000.log.detail"
I checked my disk space and found the free space was less than MinFreeSpace in the config, and the file quo.20181027T173106.Sat.00000.log.detail does not exist.
How can I avoid this exception?
The Boost version is 1.67.
thank you
It looks like someone had already deleted the log file before it was rotated. It may have been an external process, or Boost.Log.
With Boost.Log, this can happen if you have multiple file sinks that write log files into the same directory, which is also used as the target directory for the rotated files (i.e. the FileName parameter includes the path specified in the Target parameter, and there are multiple sinks that use that path). The problem is that, with ScanForFiles=All, the library scans the target directory for any files but does not update the file counter used for creating new files. This means that if the file "quo.20181027T173106.Sat.00000.log.detail" was present in that directory when your process started, it would be considered an old file, even if your process was still writing new logs to it. Then, when a file rotation happens and storage limits are exceeded (e.g. if MinFreeSpace is not satisfied), that file may be deleted. The rotation that deletes it would have to happen on another sink that also stores files in the same "logs" directory.
To solve the problem you can do one of the following (a config sketch follows the list):
Use ScanForFiles=Matching in your settings so that the file counter is updated after scanning. This will make sure that new log files have unique names and don't get deleted prematurely.
Write log files to a different directory from your target storage. I.e. specify FileName so that it doesn't point to the same directory as Target.
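For example, a sketch of the adjusted settings combining both suggestions; the "active" subdirectory is just an illustrative name for a write directory separate from the rotation target:
[Sinks.2]
Filter="%Severity% >= 2"
Destination=TextFile
AutoFlush=true
Format="[%TimeStamp%] [%ThreadID%] <%Severity%> %Message%"
Asynchronous=false
Target="logs"
FileName="active/quo.%Y%m%dT%H%M%S.%a.%5N.log.detail"
RotationTimePoint="00:00:00"
RotationSize=104857600
MinFreeSpace=4294967296
MaxSize=4294967296
ScanForFiles=Matching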
Also, you may want to add exception handling to avoid crashing in case of errors (which may still happen for whatever reason on filesystem operations). See here and here for more info (also, follow the links in those sections).
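As a rough sketch of that last point, Boost.Log lets you install a core-wide exception handler, for example one that simply suppresses logging exceptions (whether silently swallowing them is acceptable depends on your application):
#include <boost/log/core.hpp>
#include <boost/log/utility/exception_handler.hpp>

namespace logging = boost::log;

void init_log_exception_handling()
{
    // Swallow exceptions thrown from sinks (e.g. filesystem errors during rotation)
    // instead of letting them propagate and terminate the process.
    logging::core::get()->set_exception_handler(
        logging::make_exception_suppressor());
}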

FileName Port is not supported with connection or merge option

I need to create a CSV flat file and store it at a particular path on an FTP server.
The file name should be created dynamically with a timestamp. I have created the FileName port in Informatica and mapped it to an expression I created. When I ran the workflow, I got the error below:
Severity Timestamp Node Thread Message Code Message
ERROR 28-06-2017 07:31:19 PM node01_oktst93 WRITER_1_*_1 WRT_8419 Flat File Target [NewOrders] FileName Port is not supported with connection or merge option.
Please help me resolve this without deleting the FileName port.
Thanks
If your requirement is to create a dynamic file during each session run, please follow the steps below:
1) Connect the source qualifier to an expression transformation. In the expression transformation create an output port (call it File_Name) and assign it the expression 'FileNameXXX'||to_char(sessstarttime, 'YYYYMMDDHH24MISS')||'.csv'
2) Now connect the expression transformation to the target and connect the File_Name port of the expression transformation to the FileName port of the target file definition.
3) Create a workflow and run the workflow.
I have used sessstarttime, as it is constant throughout the session run. If you use sysdate instead, a new file will be created whenever a new transaction occurs during the session run.
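For example, for a session started at 2017-06-28 19:31:19, the expression above would produce a file named FileNameXXX20170628193119.csv.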
The FileName port option doesn't work with the FTP target option. If you are simply using a local flat file, please disable the "Append if Exists" option at the session level.
Please refer to the Informatica KB article below:
https://kb.informatica.com/solution/11/Pages/102937.aspx
Late answer but may help some.
Since the FileName port option doesn't work with the FTP target option, another way is to:
Create a variable in the workflow
Then create an assignment task in between
Then set the $variable to the full path, i.e.
'/path/to_drop/file/name_of_file_'||to_char(SYSDATE, 'YYYYMMDD')||'.csv'
Now use that $variable in your session under workflows.
Then add it in your mappings.

log4cpp and odd console output

I am trying to use log4cpp in my project, but every time I write a log message, in addition to the message going to the log file, I also get this output in the console window:
/var/log/ngis/mixer.log.3
Here is a copy of the relevant section of my log4cpp.properties file:
# This is for the rolling file Appender, this will not send log info
# to the syslog.
log4cpp.rootCategory=INFO, rootAppender
log4cpp.appender.rootAppender=RollingFileAppender
log4cpp.appender.rootAppender.fileName=/var/log/ngis/mixer.log
log4cpp.appender.rootAppender.maxFileSize=10MB
log4cpp.appender.rootAppender.maxBackupIndex=3
log4cpp.appender.rootAppender.BufferedIO=true
log4cpp.appender.rootAppender.BufferSize=100000
log4cpp.appender.rootAppender.layout=PatternLayout
log4cpp.appender.rootAppender.layout.ConversionPattern=%d[%p %c] %m%n
The data is going to the log file. I have tried creating a file called /var/log/ngis/mixer.log.3 but that does not help.
If I change .maxBackupIndex=3 to .maxBackupIndex=2, the message changes to:
/var/log/ngis/mixer.log.2

PoCo Logging. Log file name containing timestamp of creation OR New log file every time application starts

I need my program to start a new log file on each execution. I want to use PoCo, as I am already using this library in the code.
As I see it, I have two possibilities, but I do not know how to configure either of them using a channel in Poco:
Just starting a new file each time the program starts
The actual file name (not the rolled one, but the one currently being written) containing the timestamp of when it was created.
If I am not wrong, neither of these is possible using FileChannel. I guess I could write a new Poco channel but, obviously, I would prefer something that already works.
Does anybody have any idea how to do this? I tried to figure it out using two channels, but I do not see how.
thank you
FileChannel has a rotateOnOpen property. If you set it to true, it will create a new file every time the channel is opened; see the FileChannel documentation. If this property is not available, you are using an older version of Poco; in that case, you can simply open the FileChannel with a newly generated name every time your application starts:
std::string name = yourCustomNameGenFunc();
AutoPtr<FileChannel> pChannel = new FileChannel(name);
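As a minimal sketch of the rotateOnOpen approach (the file name and log message here are just examples):
#include "Poco/AutoPtr.h"
#include "Poco/FileChannel.h"
#include "Poco/Logger.h"

int main()
{
    Poco::AutoPtr<Poco::FileChannel> pChannel(new Poco::FileChannel("app.log"));
    // Rotate the existing file on open, so every run starts writing a fresh app.log
    pChannel->setProperty("rotateOnOpen", "true");
    Poco::Logger::root().setChannel(pChannel);
    Poco::Logger::root().information("new run started");
    return 0;
}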

How to retry file transfer using transferId (IBM MQ FTE)

We've been transferring files from one folder to another using CNTFTEAgent. But sometimes the source file appears to be locked in the source folder (MQ Explorer says "file is not found" although it does exist), so the transfer ends up as "failed".
We decided to use "exits" for retrying such failed transfers.
The last exit fired is SourceTransferEndExit, but it does not contain information about the file and the filespace the file should be put to.
It does contain the transferId, though. So my question is: how can I retry the transfer using the Java API, or is that possible at all if we know only the transferId?
Such information can be found in TransferMetaInfo and looks like this:
com.ibm.wmqfte.TransferId => 414d5120434e54465445514d47522020ce34465321038a03
You can retry using Ant: the 'rcproperty' property holds the transfer status. If the transfer status is not '0', then retry.