Reading through the documentation for Boost.Log, I see it explains fairly well how to "fan out" from one application into multiple files/sinks, and how to get multiple threads working together to log to one place, but is there any documentation on how to get multiple processes logging to a single log file?
What I imagine is that every process would log to its own "private" log file, but in addition, any messages above a certain severity would also go to a "common" log file. Is this possible with Boost.Log? Is there some configuration of the sinks that makes this easy?
I understand that I will likely have the same "timestamp out of order" problem described in the FAQ here, but that's OK, as long as the timestamps are correct I can work with that. This is all on one machine, so no remote filesystem problems either.
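For reference, here is roughly what the single-process version of that configuration looks like with Boost.Log's setup helpers (the file names are placeholders I made up); the open question is whether the "common" sink can be shared across processes:

    #include <boost/log/expressions.hpp>
    #include <boost/log/trivial.hpp>
    #include <boost/log/utility/setup/common_attributes.hpp>
    #include <boost/log/utility/setup/file.hpp>

    namespace logging = boost::log;
    namespace keywords = boost::log::keywords;

    void init_logging()
    {
        // Per-process "private" log; a real name would embed something
        // unique to the process, such as the PID.
        logging::add_file_log(keywords::file_name = "private_12345.log");

        // "Common" log that only receives warnings and above.
        logging::add_file_log(
            keywords::file_name = "common.log",
            keywords::filter =
                logging::trivial::severity >= logging::trivial::warning);

        logging::add_common_attributes(); // timestamps, etc.
    }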
My expectation is that the Boost.Log backends that directly write logfiles will keep those files open in between writing the log entries.
This will cause problems when multiple processes use the same log file, because most platforms either won't allow more than one process to have the file open for writing, or will let the writes interleave and corrupt one another's entries.
There are a few Boost.Log backends that can be used to have all the logging end up in one place.
These are the syslog and Windows event log backends. Of the two, the syslog backend is probably the easier to use.
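For example, here is a minimal sketch of a syslog sink with a severity threshold, assuming Boost.Log's trivial severity levels and the native syslog API (the warning threshold is just an illustration):

    #include <boost/log/core.hpp>
    #include <boost/log/expressions.hpp>
    #include <boost/log/sinks/sync_frontend.hpp>
    #include <boost/log/sinks/syslog_backend.hpp>
    #include <boost/log/trivial.hpp>
    #include <boost/smart_ptr/make_shared.hpp>

    namespace logging = boost::log;
    namespace sinks = boost::log::sinks;
    namespace keywords = boost::log::keywords;

    void init_syslog_sink()
    {
        // The syslog daemon serializes entries from all processes.
        auto backend = boost::make_shared<sinks::syslog_backend>(
            keywords::facility = sinks::syslog::user,
            keywords::use_impl = sinks::syslog::native);

        // Map Boost.Log severities onto syslog levels.
        sinks::syslog::custom_severity_mapping<logging::trivial::severity_level>
            mapping("Severity");
        mapping[logging::trivial::info] = sinks::syslog::info;
        mapping[logging::trivial::warning] = sinks::syslog::warning;
        mapping[logging::trivial::error] = sinks::syslog::error;
        backend->set_severity_mapper(mapping);

        auto sink = boost::make_shared<
            sinks::synchronous_sink<sinks::syslog_backend>>(backend);
        // Only forward messages at or above the chosen severity.
        sink->set_filter(
            logging::trivial::severity >= logging::trivial::warning);
        logging::core::get()->add_sink(sink);
    }

Each process can keep its own private file sink alongside this one; only the filtered records go to the shared syslog destination.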
Looking at the docs, all the logging paths specified for console-capture and requestlog are relative to jetty.base, normally $jetty.base/logs. That's OK for many purposes, but I really want logs to go into /var/log/jetty, just like those of a lot of other processes. I've tried setting this in console-capture as /var/log/jetty, but that just tries to save log files in $jetty.base/var/log/jetty, which isn't what I need.
Is there some way to do this? I'm looking for the simplest possible approaches to saving logs. This is the last thing I need to do before my Jetty installation is fully in production. Overall it's been great. This is all with Jetty 9, latest release, on Ubuntu.
Start by not using console-capture.
Your requirements have progressed beyond the limited scope of console-capture.
You'll want a formal logging framework; pick one, such as "logback" (which the Jetty devs recommend), java.util.logging, or log4j.
Use one of the logging-* modules to set up Jetty's server classpath to use that logging library.
Now configure that logging library (for example, if you are using "logback", the file ${jetty.base}/resources/logback.xml is what you configure).
Finally, configure your access logging to use slf4j.
Boom, all of your logging now goes to your logging library of choice, and its configuration can be used to slice, dice, roll over, and filter the logging in any way you want.
You can have it split into different output files, combine them into one, and roll on different rules (size, number of lines, duration, time, etc.).
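For example, if you go the logback route, a minimal ${jetty.base}/resources/logback.xml that writes to /var/log/jetty might look roughly like this (the paths, pattern, and retention policy here are illustrative, not Jetty defaults):

    <configuration>
      <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>/var/log/jetty/jetty.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <fileNamePattern>/var/log/jetty/jetty.%d{yyyy-MM-dd}.log</fileNamePattern>
          <maxHistory>30</maxHistory>
        </rollingPolicy>
        <encoder>
          <pattern>%d %level %logger - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="FILE"/>
      </root>
    </configuration>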
Definitely made some progress on this. For some reason, console redirect was taking the absolute path correctly while the request logger was not. For that, I made my configuration relative: ../../../var/log/jetty
This seems like a clunky way to do it, but it does seem to work. I'm still getting a failure on startup, but weirdly enough it runs fine and I don't see exceptions, so that's what I need to figure out next.
I am not able to understand why we use a library for logging when we can simply write out the error or the exception by appending to a file.
Logging is a powerful aid for understanding and debugging a program's run-time behavior. Logs capture and persist the important data and make it available for analysis at any point in time.
Both a logging framework (library) and simply writing logs out to a file do the minimum job of logging, but choosing between them depends on the size of your software.
For small software that doesn't perform any important or continuous task, you don't need a logging framework. But for a bigger one, you undoubtedly need a framework. Some advantages of a logging framework (illustrated in the sketch after this list):
Thread-safe
Asynchronous logging
Enable/disable logs without rebuilding the code
Set multiple log levels
Change the log level without rebuilding the code
Multiple output methods for logs (file, network, syslog, ...)
Log rotation (manage log files based on size, date, ...)
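As a small illustration of the level-related points, here is a minimal sketch using Boost.Log's trivial logger (any framework with severity filtering looks similar); in a real program the threshold would come from a config file or environment variable rather than being hard-coded, which is what makes it changeable without a rebuild:

    #include <boost/log/core.hpp>
    #include <boost/log/expressions.hpp>
    #include <boost/log/trivial.hpp>

    int main()
    {
        // Set the threshold at run time -- no rebuild required.
        boost::log::core::get()->set_filter(
            boost::log::trivial::severity >= boost::log::trivial::warning);

        BOOST_LOG_TRIVIAL(debug) << "filtered out";
        BOOST_LOG_TRIVIAL(warning) << "this one is written";
        return 0;
    }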
I'm working on a project which requires a special CentOS Apache server setup. All entries must be pre-processed, sorted by certain conditions, and then placed into separate log files.
Take these Apache requests, for instance:
http://logs.domain.com/log.gif?obj=test01|id=886655774|e=via|r=4524925241
http://logs.domain.com/log.gif?obj=test01|id=886655774|e=via|r=9746354562
For every entry, Apache must search through the request URL. If it finds the "obj" parameter, a log file with that value as its name must be created. In this case, one file with the name "test01" is created (if it doesn't already exist), containing these two entries.
I found this among others: http://httpd.apache.org/docs/2.0/mod/mod_rewrite.html and assume it could be done with some kind of regex, but I still need a further push in the right direction, please.
Apache does not support this sort of fine-grained log directing. However, you could certainly write a log reader which reads from the Apache logs and subsequently writes the appropriate sub-logs (a sketch of such a reader follows the list below).
There are three things to be aware of:
Apache log writes are not atomic, and Apache uses no file locking. That means that the program reading the log cannot be guaranteed that the last log entry is complete, and that there is no reliable real-time way to tell whether Apache is currently writing to the log.
The error logs can have multi-line entries, and only the first line would match your pattern. Additionally, many CGI scripts will write lines without the appropriate prefix. This is very difficult to control unless you write all the code yourself. The access log, at least, is pretty standard: 1 line = 1 entry, with prefix.
The reader must keep track of the current file atime, ctime, and size somewhere on disk, so that if it's stopped and restarted, it can continue where it left off. Additionally, it must correctly handle the switch-over when Apache begins writing a new log. (The old log is renamed.)
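To make the idea concrete, here is a minimal sketch of such a reader in C++, taking the access log on stdin (e.g. piped in via tail -f); the regex and file naming are assumptions based on the URLs above, and the offset tracking and rotation handling from points 1 and 3 are deliberately omitted:

    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <string>

    int main()
    {
        // Capture the value of the "obj" parameter from each entry.
        const std::regex obj_re(R"(obj=([A-Za-z0-9_-]+))");
        std::string line;
        while (std::getline(std::cin, line))
        {
            std::smatch m;
            if (std::regex_search(line, m, obj_re))
            {
                // Append the whole entry to a per-"obj" log file.
                std::ofstream out(m[1].str() + ".log", std::ios::app);
                out << line << '\n';
            }
        }
        return 0;
    }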
I have an executable that needs to process records in the database when the command to do so arrives. Right now, I am issuing commands via a TCP exchange, but I don't really like it because:
a) queue is not persistent between sessions
b) TCP port might get locked
The idea I have is to create a folder and place files in it whose names match the commands I want to issue, like:
1.23045.-1.1
2.999.-1.1
Then, after the command has been processed, the file will be deleted or moved to an Errors folder.
Is this viable or are there some unavoidable problems with this approach?
P.S. The process will be used on Linux system, so Antivirus problems are out of the question.
Yes, a few.
First, there are all the problems associated with using a filesystem. Antivirus programs are one (though I cannot see why that wouldn't apply to Linux - no delete locks?). Disk space, and maximum file and directory counts, are others. Then there are open-file limits and permissions...
Second, race conditions. If there are multiple consumers, more than one of them might see and start processing the command before the first one has [re]moved it (a common mitigation is sketched below).
There are also the issues of converting commands to filenames and vice versa, and coming up with different names for a single command that needs to be issued multiple times. (Though these are programming issues, not design ones; they'll merely annoy.)
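For the race condition in particular, a common mitigation on POSIX systems is to claim a command file with an atomic rename before processing it; a minimal sketch (the directory names are made up):

    #include <cstdio>
    #include <string>

    // rename() is atomic on POSIX, so at most one consumer can succeed
    // in moving a given command file into its own work directory.
    bool claim(const std::string& name)
    {
        const std::string src = "commands/" + name;
        const std::string dst = "inprogress/" + name;
        return std::rename(src.c_str(), dst.c_str()) == 0;
    }

A consumer that wins the rename processes the file and then deletes it or moves it to the Errors folder; a consumer that loses simply moves on to the next file.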
None of these may apply or be of great concern to you, in which case I say: Go ahead and do it. See what we've missed that Real Life will come up with.
I probably would use an MQ server for anything approaching "serious", though.
I have a server program that I am writing. In this program, I log a lot. Is it customary in logging (for a server) to overwrite the log of previous runs, to append to the file with some sort of new-run header, or to create a new log file (it won't be restarted too often)?
Which of these solutions is the way of doing things under Linux/Unix/MacOS?
Also, can anyone suggest a logging library for C++/C? I need one, regardless of the answer to the above question.
Take a look in /var/log/... you'll see that files are structured like:
serverlog
serverlog.1
serverlog.2
This is done by logrotate, which is called from a cron job. But everything is simply in chronological order within the files, so you should just append to the same log file each time and let logrotate split it up if needed.
You can also add a configuration file to /etc/logrotate.d/ to control how a particular log is rotated. Depending on how big your log files are, it might be a good idea to add an entry for your logs there. You can take a look at the other files in this directory to see the syntax; a minimal sketch follows.
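For instance, a file such as /etc/logrotate.d/myserver (the name, path, and rotation rules here are just an illustration) might contain:

    /var/log/myserver/serverlog {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }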
This is a rather complex issue. I don't think that there is a silver bullet that will kill all your concerns in one go.
The first step in deciding what policy to follow would be to set your requirements. Why is each entry logged? What is its purpose? In most cases this will result in some rather concrete facts, such as:
You need to be able to compare the current log with past logs. Even when an error message is self-evident, the process that led to it can be determined much faster by playing spot-the-difference, rather than puzzling through the server execution flow diagram - or, worse, its source code. This means that you need at least one log from a past run - overwriting blindly is a definite No.
You need to be able to find and parse the logs without going out of your way. That means using whatever facilities and policies are already established. On Linux it would mean using the syslog facility for important messages, to allow them to appear in the usual places.
There is also some good advice to heed:
Time is important. Not only because there's never enough of it, but also because log files without proper timestamps for each entry are practically useless. Make sure that each entry has a timestamp - most system-wide logging facilities will do that for you. Also make sure that the clocks on all your computers are as accurate as possible - using NTP is a good way to do that.
Log entries should be as self-contained as possible, with minimal cruft. You don't need to have a special header with colors, bells and whistles to announce that your server is starting - a simple MyServer (PID=XXX) starting at port YYYYY would be enough for grep (or the search function of any decent log viewer) to find.
You need to determine the granularity of each logging channel. Sending several GB of debugging log data to the system logging daemon is not a good idea. A good approach might be to use separate log files for each logging level and facility, so that e.g. user activity is not mixed up with low-level data that is only useful when debugging the code.
Make sure your log files are in one place, preferably separated from other applications. A directory with the name of your application is a good start.
Stay within the norm. Sure, you may have devised a nifty new log file naming scheme, but if it breaks the conventions in your system it could easily confuse even the most experienced operators. Most people will have to look through your more detailed logs in a critical situation - don't make it harder for them.
Use the system log handling facilities. E.g. on Linux that would mean appending to the same file and letting an external daemon like logrotate handle the log files. Not only would it be less work for you, it would also automatically keep your logs consistent with any system-wide logging policies.
Finally: always copy important data to the system log as well. Operators watch the system logs. Please, please, please don't make them have to look in other places, just to find out that your application is about to launch the ICBMs...
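On Linux, that last point can be as simple as using the POSIX syslog(3) API; a minimal sketch (the program name, facility, and messages are illustrative):

    #include <syslog.h>

    int main()
    {
        // Register with the system logger; LOG_PID adds the process ID.
        openlog("myserver", LOG_PID, LOG_DAEMON);
        syslog(LOG_INFO, "MyServer starting at port %d", 8080);
        syslog(LOG_ERR, "could not open data file: %s", "records.db");
        closelog();
        return 0;
    }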
https://stackoverflow.com/questions/696321/best-logging-framework-for-native-c
For the logging, I would suggest creating a new log file and cleaning it at a certain frequency to avoid it growing too large. Overwriting the logs of previous runs is usually a bad idea.