Consider the following program:
#include <syslog.h>

int main()
{
    /* LOG_PERROR copies each message to stderr; LOG_PID adds the pid. */
    openlog("test-app", LOG_CONS | LOG_NDELAY | LOG_PERROR | LOG_PID, LOG_USER);
    setlogmask(LOG_UPTO(LOG_DEBUG));
    syslog(LOG_DEBUG, "Testing!");
}
Note the use of LOG_PERROR, which emits messages to stderr as well as to whatever file syslogd is configured to write (in my case, /var/log/messages).
The console output is:
test-app[28072]: Testing!
The syslog file contains:
Apr 17 13:20:14 cat test-app[28072]: Testing!
Can I configure syslog so that the console output contains timestamps also? If so, how?
This is not doable using the syslog call on its own. The timestamp that you see in the log file is added only after the message reaches the daemon: it is the syslog daemon itself that applies the timestamp.
When you use the LOG_CONS and LOG_PERROR options, the message is written to the console or stderr without ever being timestamped, because the timestamping does not occur at the point of logging.
Essentially, the console and stderr copies produced by LOG_CONS and LOG_PERROR bypass the daemon entirely; you would need to modify the implementation of the syslog call itself to add your own timestamp. Thankfully, that's pretty easy, as the implementation of the syslog library call is in the glibc source, and it isn't terribly complicated.
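Alternatively, instead of patching glibc, you can drop LOG_PERROR and write the timestamped stderr copy yourself from a small wrapper. Below is a minimal sketch; the wrapper name log_with_ts is my own invention, the ident "test-app" is hardcoded to match the example, and the timestamp imitates the "Apr 17 13:20:14" style syslogd writes to the file:

#include <syslog.h>
#include <unistd.h>
#include <cstdarg>
#include <cstdio>
#include <ctime>

void log_with_ts(int priority, const char *fmt, ...)
{
    /* Format the caller's message first. */
    char msg[1024];
    va_list ap;
    va_start(ap, fmt);
    std::vsnprintf(msg, sizeof msg, fmt, ap);
    va_end(ap);

    /* Timestamp the stderr copy ourselves, syslogd-style. */
    char ts[32];
    std::time_t now = std::time(nullptr);
    std::strftime(ts, sizeof ts, "%b %e %H:%M:%S", std::localtime(&now));
    std::fprintf(stderr, "%s test-app[%d]: %s\n", ts, (int)getpid(), msg);

    /* The daemon still adds its own timestamp to the file copy. */
    syslog(priority, "%s", msg);
}

Open the log without LOG_PERROR and call log_with_ts(LOG_DEBUG, "Testing!") instead of syslog(LOG_DEBUG, "Testing!").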
I have a production job where I use two nodes (0 = master and 1 = slave) via OpenMPI, and all the threads on each node via OpenMP.
I submit the job on the master.
The job opens a disk file on the master to log some info. (I see the same file is opened on the slave as well during the run.)
I have statements like
write(lu,*) 'pid=',pid,' some text'
and
write(6, *) 'pid=',pid,' some text'
one after the other. (Unit 6 is stdout, i.e. the screen, in gfortran.)
I see on screen that both statements are printed one after the other (pid=0 and pid=1).
Strangely enough, most (not all) of the master's prints (pid=0) are absent from the log file.
This is puzzling, and I would like to learn the rule. I thought both master and slave shared the logfile.
I have a hostfile with two hosts, each requesting 32 threads (via the slots and max-slots settings), and I am running the following command as a script:
mpirun --hostfile hostfile --map-by node:PE=32 myexecutable
I would appreciate it if some expert could shed light on the issue.
Let us consider the IOU project. When we start the ExampleFlow, we provide some input to the flow; that input can be tracked in the log file, but we also want to display it in the node shell after the transaction.
For example, when a contract fails, the node shell only shows that there was some RPC error, but if you go to the log file you will see the actual cause, e.g. that the iouValue given was greater than 100. We want to display that same error message in the node shell. Can I do it? If yes, how; if no, why not?
To have full control over the response, you need to implement an RPC client or a webserver; below are samples for both:
RPC Client: https://github.com/corda/samples/tree/release-V4/cordapp-example/clients
Webserver: https://github.com/corda/samples/tree/release-V4/spring-webserver
I would like to collect the output of service.exe in my C++ program (Windows environment). In the simple case there is a cmd invocation like "service.exe > log.txt", but in my case the service is always shut down by another system call, "taskkill service.exe", which causes the output to be missing from log.txt.
How can I solve this? I tried the Windows _popen, but the result is the same as with a plain system call: no output in the file.
Example code:
#include <cstdlib>
#include <string>
#include <boost/filesystem.hpp>
#include <boost/thread.hpp>
#include <boost/chrono.hpp>

void terminate_thread_func();

void test_of_runtime()
{
    /* Build the path to the service */
    boost::filesystem::path work_path(boost::filesystem::current_path());
    /* Run the service and redirect its output to a log file */
    work_path += "\\service.exe > runtime_log.txt";
    /* Start another thread which terminates the service after 10 seconds */
    boost::thread terminate_thread(terminate_thread_func);
    terminate_thread.detach();
    std::system(work_path.string().c_str());
}

void terminate_thread_func()
{
    std::string terminate_command = "taskkill /F /T /IM service.exe";
    /* Wait 10 seconds */
    boost::this_thread::sleep_for(boost::chrono::seconds(10));
    /* Forcibly terminate the service */
    std::system(terminate_command.c_str());
}
You should avoid terminating any task with taskkill /F except in exceptional circumstances. The correct way is to send the running task a signal that it should stop, allowing it to do some cleanup and flush its buffers.
If service.exe has a message loop, you can use PostThreadMessage, provided you have access to the id of the thread that owns the message queue. You normally get it in the PROCESS_INFORMATION structure filled in by the CreateProcess call.
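A minimal sketch of that approach, assuming service.exe actually runs a message loop that exits on WM_QUIT; the ten-second delay mirrors the original code, and redirecting the child's stdout to a file would additionally require filling in STARTUPINFO's hStdOutput (omitted here):

#include <windows.h>

int main()
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    char cmd[] = "service.exe";

    if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        Sleep(10000);                                    /* let it run 10 s */
        /* Ask the service's main thread to quit instead of killing it. */
        PostThreadMessage(pi.dwThreadId, WM_QUIT, 0, 0);
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}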
If service.exe has no message loop (i.e. it is written as a console application), it could register a HandlerRoutine with SetConsoleCtrlHandler. You could then send a CTRL_BREAK_EVENT with the GenerateConsoleCtrlEvent function, or with taskkill without /F (still unsure about this last one). You will find here a usage example of a break handler.
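A minimal sketch of the handler side, assuming you can modify service.exe; the handler only raises a flag so the main loop can flush its output and exit cleanly:

#include <windows.h>
#include <cstdio>

static volatile LONG g_stop = 0;

BOOL WINAPI BreakHandler(DWORD ctrlType)
{
    if (ctrlType == CTRL_BREAK_EVENT || ctrlType == CTRL_C_EVENT) {
        InterlockedExchange(&g_stop, 1);  /* tell the main loop to finish */
        return TRUE;                      /* handled; no abrupt termination */
    }
    return FALSE;
}

int main()
{
    SetConsoleCtrlHandler(BreakHandler, TRUE);
    while (!g_stop) {
        std::printf("working...\n");
        std::fflush(stdout);    /* flush regularly so output survives a kill */
        Sleep(1000);
    }
    /* Flush and close any log files here, then exit normally. */
    return 0;
}

Note that GenerateConsoleCtrlEvent only reaches processes sharing the caller's console, and to target service.exe alone you would start it with the CREATE_NEW_PROCESS_GROUP flag and pass its process id as the process-group argument.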
If none of the above is an option, then service.exe must be ready to be forcibly killed at any time. That means it should not buffer its output; specifically, you should not start it with service.exe > log.txt and have it write to stdout, but instead give it the name of a log file, and the program should ideally log using the open/write/close pattern.
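A sketch of that open/write/close pattern; the file name and the helper log_line are illustrative. Each entry is appended and the file is closed again immediately, so a forced kill loses at most the entry currently being written:

#include <cstdarg>
#include <cstdio>

void log_line(const char *fmt, ...)
{
    std::FILE *f = std::fopen("runtime_log.txt", "a");  /* open for append */
    if (!f)
        return;
    va_list ap;
    va_start(ap, fmt);
    std::vfprintf(f, fmt, ap);
    va_end(ap);
    std::fputc('\n', f);
    std::fclose(f);  /* close: the data reaches the OS right away */
}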
Right now, in the non-verbose log, procmail is not logging who the recipient is.
It is just logging who the sender is, the subject, the date, and the delivery.
From info#essegisistemi.it Tue Apr 15 20:33:19 2014
Subject: ***** SPAM 7.3 ***** Foto
Folder: /usr/lib/dovecot/deliver -d christian -m Junk 132417
Where can I configure it to include the To and Cc headers in the logfile?
You simply assign the strings you want to log to the LOG pseudo-variable.
Usually you want to add a newline explicitly. Something like this:
NL="
"
TO=`formail -zxTo:`
CC=`formail -zxCc:`
LOG=" To: $TO$NL Cc: $CC$NL"
These will usually end up before the "log abstract" (what you have in your question). If you need full control over what gets logged, maybe set LOGABSTRACT=no and implement your own log abstract instead. (This is fairly tricky, though.)
Note also that logging could be asynchronous. The log abstract gets written all at once, but if you have many messages arriving at roughly the same time, you might want to add disambiguating information to the log entries, or (in desperation) force locking during logging so that no two messages can perform logging at the same time.
LOCKFILE=procmail.lock
# Critical section -- only one Procmail instance at a time can execute these recipes
LOG="fnord$NL"
:0w
| /usr/lib/dovecot/deliver -d "$USER" -m "$FOLDER"
# delivery succeeded, lockfile will be released
The disambiguating information could be just the process ID. In order to include it in the log abstract as well, you need to smuggle it in somehow. I'm not sure how to do that with Dovecot; is there an option you can pass in which will simply be ignored, but logged by Procmail?
TO=`formail -zxTo:`
LOG="[$$] To: $TO$NL"
CC=`formail -zxCc:`
LOG="[$$] Cc: $CC$NL"
# ... more recipes ...
# deliver
:0w
| deliver -d "$USER" -m "$FOLDER" -o ignore=$$
... should end up logging something like Folder: deliver -d you -m INBOX -o ignore=1742, where 1742 would be the process ID, so that you can find the same PID in the previous log entries.
I'm trying to use ActiveMQ-CPP with HornetQ. I'm using the ActiveMQ-CPP bundled example, but I'm having a hard time with it.
The producer works like a charm, but the consumer gives me the following message:
* BEGIN SERVER-SIDE STACK TRACE
Message: Queue /queue/exampleQueue does not exist
Exception Class
END SERVER-SIDE STACK TRACE *
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 768
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 774
FILE: activemq/core/ActiveMQSession.cpp, LINE: 350
FILE: activemq/core/ActiveMQSession.cpp, LINE: 281
Time to completion = 0.161 seconds.
The problem is that the queue does exist. The code works fine with ActiveMQ+OpenWire, but I'm not having the same luck with HornetQ+STOMP.
Any ideas?
Try setting the queue's address, as you defined it on HornetQ, as the destination.
Your queue is probably defined on HornetQ like this:
<queue name="exampleQueue">
    <address>jms.queue.exampleQueue</address>
</queue>
So, try to connect to this address via STOMP.
See the following frames according to the protocol:
Subscribing to the queue
SUBSCRIBE
destination:jms.queue.exampleQueue

^@
Sending a message
SEND
destination:jms.queue.exampleQueue

it works
^@
As soon as the message is sent, you'll get it on the session that subscribed to the queue:
MESSAGE
timestamp:1311355464983
redelivered:false
expires:0
subscription:subscription/jms.queue.exampleQueue
priority:0
message-id:523
destination:jms.queue.exampleQueue

it works
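On the ActiveMQ-CPP side, this corresponds to creating the destination with the full HornetQ address. Below is a minimal consumer sketch; the broker URI tcp://localhost:61613?wireFormat=stomp, an ActiveMQ-CPP build that still supports the STOMP wire format, and the assumption that the library passes the destination name through unmodified are all assumptions (see the edit below for why the last may not hold):

#include <activemq/library/ActiveMQCPP.h>
#include <activemq/core/ActiveMQConnectionFactory.h>
#include <cms/Connection.h>
#include <cms/Session.h>
#include <cms/Queue.h>
#include <cms/MessageConsumer.h>
#include <cms/TextMessage.h>
#include <iostream>
#include <memory>

int main()
{
    activemq::library::ActiveMQCPP::initializeLibrary();
    {
        activemq::core::ActiveMQConnectionFactory factory(
            "tcp://localhost:61613?wireFormat=stomp");  // assumed URI
        std::unique_ptr<cms::Connection> connection(factory.createConnection());
        connection->start();
        std::unique_ptr<cms::Session> session(
            connection->createSession(cms::Session::AUTO_ACKNOWLEDGE));
        // Use the full HornetQ address, not the bare queue name.
        std::unique_ptr<cms::Queue> queue(
            session->createQueue("jms.queue.exampleQueue"));
        std::unique_ptr<cms::MessageConsumer> consumer(
            session->createConsumer(queue.get()));
        std::unique_ptr<cms::Message> message(consumer->receive());
        if (const cms::TextMessage* text =
                dynamic_cast<const cms::TextMessage*>(message.get())) {
            std::cout << text->getText() << std::endl;
        }
        connection->close();
    }
    activemq::library::ActiveMQCPP::shutdownLibrary();
    return 0;
}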
-- EDIT
There's one point left I would like to add...
HornetQ doesn't conform to STOMP's naming standards (see http://community.jboss.org/message/594176), so there's a possibility that activemq-cpp follows the behavior of activemq-nms, which "normalizes" queue names to the STOMP standard "/queue/YourQueue" (and causes naming issues).
So, if that's the case, even if you change your destination name to 'jms.queue.exampleQueue', activemq-cpp could normalize it to '/queue/jms.queue.exampleQueue'.
In NMS+HornetQ there's no out-of-the-box way of avoiding this. The only choice is to edit NMS's source code and remove the part which normalizes queue names. Maybe the same fix applies to activemq-cpp.
HornetQ doesn't like the "/queue/" prefix for a SUBSCRIBE. I took that out of the ToStomp method in StompHelper and everything worked.