I have configured daily log rotation for Apache.
When a new day starts, for example at 00:00 today (07/31/2017), a new access.log file is created and the old access.log is renamed to access.log-31072017.
The problem: tomorrow, access.log will be rotated to access.log-01082017 (yes), a new access.log will be created (yes), but the access.log-31072017 file is lost (ouch).
Here is what I did:
vi /etc/logrotate.d/httpd
Insert at the end of the file:
/home/*/logs/*log {
missingok
notifempty
sharedscripts
delaycompress
postrotate
/bin/systemctl reload httpd.service > /dev/null 2>/dev/null || true
endscript
}
Then change the rotation config:
vi /etc/logrotate.conf
Change weekly to daily
Change rotate 4 to rotate 1
The log files are written under the /home/example.com/logs/ path.
How do I retain the files from previous days?
Thanks in advance.
Try changing the rotate value in /etc/logrotate.conf back to 4. Despite the comment in logrotate.conf, it is not the number of weeks the logs are kept but the number of times the files are rotated before they are deleted.
The manpage for logrotate.conf explains this more clearly:
rotate count
Log files are rotated count times before being removed or mailed to the address specified in a mail directive. If count is 0, old versions are removed rather than rotated. Default is 0.
Setting it to 4 should keep the old logs for four days.
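For reference, a minimal sketch of a per-site stanza that keeps four days of rotated logs regardless of the global setting; the path and the systemd reload command are the ones from the question:
/home/example.com/logs/*log {
    daily
    rotate 4
    missingok
    notifempty
    delaycompress
    sharedscripts
    postrotate
        /bin/systemctl reload httpd.service > /dev/null 2>&1 || true
    endscript
}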
I am trying to pull CPU and memory data from the Check_MK server's RRD files. From that, I am trying to find the MAX and average values for a particular host over a one-month period. For the maximum value, I fetched the data from the Check_MK server using the rrdtool fetch command, and I got the exact value when I compared the output with the Check_MK graph. But when I do the same for the average value, I get output that matches neither the Check_MK graph value nor the raw RRD file values. Kindly refer to the attached images, where I verified the average value manually by fetching the data; it shows the wrong output.
Hello @Steve Shipway,
Please find the requested data.
1) Structure of the RRD file: image attached.
2) We are not generating the graph from Check_MK. We dump and fetch the RRD data with:
rrdtool dump CPU_utilization.xml > /tmp/CPU_utilization1.xml
rrdtool fetch CPU_utilization_user.rrd MAX -r 6h -s <start date> -e <end date>
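For comparison, the corresponding fetch for the average series would presumably use the AVERAGE consolidation function with the same placeholders:
rrdtool fetch CPU_utilization_user.rrd AVERAGE -r 6h -s <start date> -e <end date>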
I want to graph the memory usage of certain processes on Windows using rrdtool, which I can then show on a simple web page.
So I got this far. I'm not quite sure I understand how it works; after following some tutorials I get blank output.
rrdtool create psrvr-mem.rrd --start 1023654125 --step 300 DS:mem:GAUGE:600:0:U RRA:AVERAGE:0.5:12:24 RRA:AVERAGE:0.5:288:31
The update script:
@ECHO OFF
:LOOP
FOR /F "tokens=*" %%g IN ('powershell "get-process psrvr | select -ExpandProperty PrivateMemorySize"') do (SET mem_usage=%%g)
rrdtool update psrvr-mem.rrd N:%mem_usage%
TIMEOUT /t 300
GOTO LOOP
And finally the graph:
rrdtool graph memory.png --start -1800 -a PNG -t "Memory" --vertical-label "" -w 1260 -h 400 -r DEF:mem=psrvr-mem.rrd:mem:AVERAGE AREA:mem#FF0000
Unfortunately, the result is an empty graph, and I'm not sure why. I also don't quite understand whether the start time is particularly critical or not.
-- edit --
Still not getting lines on the graph.
However, I did find one problem: there were no labels on any axis, because the font file was missing. It turns out the pre-compiled binaries downloadable from the rrdtool website are 15 years out of date for Windows. I found a newer binary; if I get this working I'll probably try to compile the latest revision myself. For now I'll settle for a build from 2013 that seems to work better.
After more messing about, this is my attempt at creating a 5-minute average:
rrdtool create psrvr-mem.rrd --start 1592774026 --step 60 DS:mem:GAUGE:120:0:U RRA:AVERAGE:0.5:1:5
Then I fill in some values using the update script above, and try to graph it again:
rrdtool graph memory.png --start=now-300 -a PNG -t "Memory" --vertical-label "Bytes" -w 800 -h 300 -r DEF:mem=psrvr-mem.rrd:mem:AVERAGE LINE:mem#FF0000
The output is a graph where the line isn't moving. It might be zero all the way.
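One quick check, not from the original post, is to read the raw values back out of the RRD to confirm the updates are landing at all; if this prints only NaNs, the problem is in the update step rather than the graph command:
rrdtool fetch psrvr-mem.rrd AVERAGE --start now-300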
I am reading the AWS documentation, and have a question regarding the behavior of my application.
1. I make a get_key request to a nonexistent key.
2. Then I put that object.
3. Then I make another get_key request to the same key.
Now, the documentation explains why I might get None at step 3: "The caveat is that if you make a HEAD or GET request to the key name (to find if the object exists) before creating the object, Amazon S3 provides eventual consistency for read-after-write." (from the docs)
What I am concerned about is why a list call on my bucket after step 2 lists the key (even though at step 3 I still get None for this same key).
for k in bucket.list(some_path):
    key = bucket.get_key(k.name)
Most of these work, but some randomly return None.
Shouldn't the list return the key only if it is available? And how can I make sure I eventually get the value? (Is there a notification system whereby I'm notified when step 3 will actually return my object, or could a timed retry work, say, waiting a couple of seconds and trying again?)
You are right; you could get nothing (a 404). As far as I know, Amazon does not define a maximum time for "S3 provides eventual consistency for read-after-write". What you know for sure is that after step 2 the file is saved in the S3 bucket. If you have versioning enabled on the bucket, the file will remain there as a version even if another write overwrites it.
I think you have a few options:
Option A: build a caching mechanism and keep the file in a cache until you get a 200 for the request. The cache could be a simple folder.
Option B: in step 3, loop until you get a 200, with a delay inside the loop (see the sketch after this list). You know it will happen eventually; the catch is that it can sometimes take a while, though for normal-sized files it is usually fast.
Option C: consider step 2 finished only after you get a 200.
I sometimes save huge database backups (500 GB-1 TB) to S3, and after the upload is done (it sometimes takes hours) there is a lag of 5-60 seconds until I get a 200 (my data center's network could be partly to blame). The delay in the S3 web console is even longer; sometimes I can filter for the file only after 5 minutes. For small files I have never noticed delays.
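A minimal sketch of Option B with boto, matching the bucket.list/get_key calls from the question; the retry count and delay are arbitrary placeholders:
import time

def wait_for_key(bucket, key_name, retries=10, delay=2.0):
    # bucket.get_key returns None until the object is readable
    for _ in range(retries):
        key = bucket.get_key(key_name)
        if key is not None:
            return key
        time.sleep(delay)  # back off before asking again
    raise RuntimeError("key %r still not visible after %d tries" % (key_name, retries))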
I have a problem: I configured daily log rotation for catalina.out on CentOS 7, but it is not rotated. If I force-run logrotate it rotates catalina.out, but it does not happen automatically on a daily basis.
The logrotate.d/tomcat config file:
/usr/local/tomcat7/logs/catalina.out
{
daily
rotate 30
missingok
compress
copytruncate
}
logrotate.conf:
# see "man logrotate" for details
# rotate log files daily
daily
# keep 4 weeks worth of backlogs
rotate 4
# create new (empty) log files after rotating old ones
create
# use date as a suffix of the rotated file
dateext
# uncomment this if you want your log files compressed
#compress
# RPM packages drop log rotation information into this directory
include /etc/logrotate.d
# no packages own wtmp and btmp -- we'll rotate them here
/var/log/wtmp {
monthly
create 0664 root utmp
minsize 1M
rotate 1
}
/var/log/btmp {
missingok
monthly
create 0600 root utmp
rotate 1
}
# system-specific logs may be also be configured here.
logrotate status/debug:
rotating pattern: /usr/local/tomcat7/logs/catalina.out
after 1 days (30 rotations)
empty log files are rotated, old logs are removed
considering log /usr/local/tomcat7/logs/catalina.out
log does not need rotating (log has been already rotated)
"/usr/local/tomcat7/logs/catalina.out" 2019-8-5-9:25:18
In its default state Tomcat uses log4j.
You should have a file at /etc/tomcat/log4j.properties which contains the configuration for the log management.
The default config is (taken from a test box):
log4j.rootLogger=debug, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=${catalina.home}/logs/tomcat.log
log4j.appender.R.MaxFileSize=10MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%p %t %c - %m%n
log4j.logger.org.apache.catalina=DEBUG, R
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=DEBUG, R
log4j.logger.org.apache.catalina.core=DEBUG, R
log4j.logger.org.apache.catalina.session=DEBUG, R
Based on that config the logs will automatically rotate when they grow to 10MB in size, and will keep a maximum of 10 old logs.
You can change these settings as you wish and there are a couple of good guides here and here which explain all the options, and show how to change to a rolling appender which may be more useful for your needs.
Also, log4j takes care of the rotation, but if you are doing something like tail -f catalina.out and the log rotates, you will need to re-run tail to continue watching it; otherwise it will just appear to stop midway (much like the other logs do).
Remember to remove any config you tried to apply via logrotate so things don't go wrong later on!
To have a daily rotation you need to use these settings:
DailyRollingFileAppender
DailyRollingFileAppender rotates log files based on a time frequency, allowing customization down to the minute. The date patterns allowed as part of the appender are as follows:
yyyy-MM Roll over to new log file beginning on first day of every month
yyyy-ww Roll over to new log file beginning on first day of every week
yyyy-MM-dd Roll over daily
yyyy-MM-dd-a Roll over on midday and midnight
yyyy-MM-dd-HH Roll over every hour
yyyy-MM-dd-HH-mm Roll over every minute
That would give a config of:
log4j.rootLogger=INFO, fileLogger
log4j.appender.fileLogger.layout=org.apache.log4j.PatternLayout
log4j.appender.fileLogger.layout.ConversionPattern=%d [%t] %-5p (%F:%L) - %m%n
log4j.appender.fileLogger.File=example.log
log4j.appender.fileLogger=org.apache.log4j.DailyRollingFileAppender
log4j.appender.fileLogger.datePattern='.'yyyy-MM-dd
I am having trouble with the settings in logrotate.conf.
I set this up once, but it didn't work as expected.
The main requirement is to rotate the log files, with compression, at an interval of 5 days:
/var/log/humble/access.log
{
daily
copytruncate
rotate 5
create 755 humble humble
dateext
compress
include /etc/logrotate.d/humble/
}
Even after doing this, compression stopped after a few days.
Your logrotate.conf file should include your file "humble":
include /etc/logrotate.d/humble
#End of Logrotate.conf
and then in your /etc/logrotate.d/humble:
/var/log/humble/access.log
{
daily
copytruncate
rotate 5
create 755 humble humble
dateext
compress
}
The number specified after rotate is how many rotated files are kept as backups. Here it is 5.
Also, you need to add a rule in the crontab for triggering logrotate every 5 days.
The crontab schedule for running it every 5 days is:
0 0 */5 * *
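A complete crontab entry also needs the command to run after the schedule; a minimal sketch, assuming logrotate is installed at /usr/sbin/logrotate and you want it to process the system-wide config:
0 0 */5 * * /usr/sbin/logrotate /etc/logrotate.conf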