Problem:
After deploying a microservice as a war via an AWS EBS Tomcat 7 container, I noticed that the log rotation, which occurs at the UTC day boundary, leaves a stale inode behind.
The log rotation is essentially a copy and truncate, which causes a stale file handle for rsyslog, which is listening for changes to catalina.out. What's the best way to prevent stale inode descriptors? Should I specify a rollover policy in logback.xml, in logrotate, or ...?
Output of sudo lsof /var/log/tomcat7/catalina.out (and sudo stat reports the latest inode):
rsyslogd 18970 root 2r REG 202,1 1250 134754 /var/log/tomcat7/catalina.out
but this doesn't match the inode in rsyslog's debug output:
4638.114765354:7fc839b8c700: stream checking for file change on '/var/log/tomcat7/catalina.out', inode 135952/135952file 7 read 0 bytes
Workaround
Stop Tomcat, remove catalina.out, then restart Tomcat. This allowed rsyslog to continue streaming new records.
However, after a few hours, rsyslog fails to stream newer log records to the rsyslog destination server. The rsyslog debug log contains the same inode as the output of stat and lsof. If you run
sudo stat /var/log/tomcat7/catalina.out
rsyslog starts streaming again.
Have you noticed rsyslog stop streaming intermittently outside of the log rollover use case?
Why would a sudo stat /var/log/tomcat7/catalina.out cause rsyslog to stream again?
I also have a problem with rsyslog not sending new records as soon as logrotate does its rotation on catalina.out. Issuing a stat doesn't quite cut it for me, as the problem is that Tomcat stops writing to catalina.out (!). After perusing various forums and blogs, I was able to address this through the following steps:
Make sure that $WorkDirectory is defined in the rsyslog configuration; this allows rsyslog to write the "state file" for catalina.out (or for any other log file(s) it watches, for that matter). See the sketch after these steps.
As noted here on Loggly's blog, you need to stop rsyslog, delete this state file, then restart rsyslog in a postrotate entry.
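For reference, a minimal rsyslog imfile snippet with $WorkDirectory set might look like the following; the file tag and state-file name are only illustrative, so adjust them to your setup:
$ModLoad imfile
$WorkDirectory /var/lib/rsyslog
$InputFileName /opt/tomcat/logs/catalina.out
$InputFileTag tomcat-catalina:
$InputFileStateFile catalina-out-state
$InputRunFileMonitor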
My logrotate setting for catalina.out (state files are in /var/lib/rsyslog):
/opt/tomcat/logs/catalina.out {
rotate 7
size 50M
notifempty
missingok
postrotate
service rsyslog stop
rm /var/lib/rsyslog/*
service rsyslog start
endscript
}
Related
I regularly create an image of a running instance without stopping it first. That has worked for years without any issues. Tonight, I created another image of the instance (without any changes to the virtual server settings except for a "sudo yum update -y") and noticed my ssh session was closed. It looked like it was rebooted after the image was created. Then the web console showed 1/2 status checks passed. I rebooted it a few times and the status remained the same. The log showed:
Setting hostname localhost.localdomain: [ OK ]
Setting up Logical Volume Management: [ 3.756261] random: lvm: uninitialized urandom read (4 bytes read)
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
[ OK ]
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda1
/: clean, 437670/1048576 files, 3117833/4193787 blocks
[/sbin/fsck.xfs (1) -- /mnt/pgsql-data] fsck.xfs -a /dev/xvdf
[/sbin/fsck.ext2 (2) -- /mnt/couchbase] fsck.ext2 -a /dev/xvdg
/sbin/fsck.xfs: XFS file system.
fsck.ext2: Bad magic number in super-block while trying to open /dev/xvdg
/dev/xvdg:
The superblock could not be read or does not describe a valid ext2/ext3/ext4
[ 3.811304] random: crng init done
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
[FAILED]
*** An error occurred during the file system check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
/dev/fd/9: line 2: plymouth: command not found
Give root password for maintenance
(or type Control-D to continue):
It looked like /dev/xvdg failed the disk check. I detached the volume from the instance and rebooted. I still couldn't ssh in. I re-attached it and rebooted. Now it says 2/2 status checks passed, but I still can't ssh back in and the log still shows the same issues with /dev/xvdg as above.
Any help would be appreciated. Thank you!
Thomas
I have a web application running on the Laravel 5.2 framework, with the session driver set to redis and the following AWS setup.
Instance-1: Running the web application, with the Redis configuration in the .env file as follows:
Redis-host: aws-private-ip-of-instance-2
Redis-password: NULL
Redis-port: 6379
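In a Laravel .env file, the settings above would look roughly like this (key names are taken from the Laravel defaults; the IP is a placeholder):
SESSION_DRIVER=redis
REDIS_HOST=<aws-private-ip-of-instance-2>
REDIS_PASSWORD=null
REDIS_PORT=6379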
Instance-2: Redis server running with the following configuration:
Bind aws-private-ip-of-instance-2 and 127.0.0.1
Working directory /var/lib/redis with 775 permissions, and the owner group is redis.
RDB snapshot name dump.rdb with 660 permissions, and the owner group is redis.
NOTE: In AWS, an inbound rule for port 6379 is configured for Instance-2.
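In redis.conf on Instance-2, the setup described above would correspond to roughly the following directives (the IP is a placeholder):
bind <aws-private-ip-of-instance-2> 127.0.0.1
dir /var/lib/redis
dbfilename dump.rdb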
Everything works fine until Redis tries to write the data to the RDB file. The following error shows on the front end:
MISCONF Redis is configured to save RDB snapshots, but is currently
not able to persist on disk. Commands that may modify the data set are
disabled. Please check Redis logs for details about the error.
Meanwhile, in the Redis server logs I got the following:
4873:M 23 Sep 10:08:15.028 * 1 changes in 900 seconds. Saving...
4873:M 23 Sep 10:08:15.028 * Background saving started by pid 7392
7392:C 23 Sep 10:08:15.028 # Failed opening .rdb for saving: Read-only file system
4873:M 23 Sep 10:08:15.128 # Background saving error
Things I have tried
Added vm.overcommit_memory = 1 to /etc/sysctl.conf, as suggested in the Redis administration blog.
Changed the path of the dump.rdb file to the tmp folder and changed its permissions to 777.
This other Stack Exchange thread might help, since you are using a custom /tmp dir for data:
The simple way to do this is to run systemctl edit redis. This will create an override drop-in file /etc/systemd/system/redis.service.d/override.conf, in which you can place your changes (and the proper section):
[Service]
ReadWriteDirectories=-/my/custom/data/dir
You may also create that directory and place files ending in .conf in it manually. But do not leave the directory empty, as this will disable the service.
In either case, run systemctl daemon-reload and you are ready to restart your service.
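In other words, something like this (the unit may be called redis-server on some distros):
sudo systemctl daemon-reload
sudo systemctl restart redis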
Many threads also point to filesystem inconsistency as the root cause. Since you are using EC2, check this AWS forums post:
To fix this, you will have to:
Stop the instance
Detach the root volume of your instance
Attach the volume as a data volume to any running Linux instance in the same availability zone
Perform a filesystem check (fsck) on the volume and fix the issues (see the example commands after this list)
Detach the volume and attach it back to your instance as its root volume
Boot the instance back up and verify that the volume mounts successfully
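On the helper instance, the check itself might look like the sketch below; the device name is an assumption, so confirm it with lsblk first:
sudo lsblk                      # confirm the device name of the attached volume, e.g. /dev/xvdh
sudo fsck.ext4 -f /dev/xvdh     # for an ext2/3/4 volume
sudo xfs_repair /dev/xvdh       # for an XFS volume, use xfs_repair instead of fsck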
As a last resort, terminate the instance if possible.
Hope it helps!
Well, it is very embarrassing to post an answer to my own question, which was a really stupid mistake. But I hope new folks here learn from my mistake too.
So the first thing I did was enable detailed logging for redis-server in the /etc/redis/redis.conf file by changing the loglevel option to debug.
Observing the logs, I realized that my Redis port 6379 was open to everyone on the internet.
From the logs I saw that someone else's server was connecting to my Redis server and making it a slave of theirs. And as my Redis server was configured so that slaves are read-only, when I tried to write to my Redis server it threw the read-only error.
After applying a firewall rule for the Redis server port, I have not encountered this issue anymore.
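In practice, locking this down can mean both restricting the AWS security group inbound rule for 6379 to the web instance only and hardening redis.conf; a rough sketch, with placeholder values:
# /etc/redis/redis.conf
bind <aws-private-ip-of-instance-2> 127.0.0.1   # listen only on the private interface
requirepass <a-long-random-password>            # require authentication from clients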
While registering a host to the Ambari server cluster, I am getting the following error:
"Host checks were skipped on 1 hosts that failed to register."
I'm trying to install HDP 2.5 on an AWS instance.
I have tried to follow the Hortonworks documentation:
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-installation/content/set_the_hostname.html
I have added the public IP address and public hostname to the /etc/hosts file and changed the hostname in the /etc/hostname file on the server and on the host. I rebooted both and the hostname got changed. Then I stopped iptables with
sudo service iptables stop
After doing everything, the host registration is still failing. Kindly help. I am stuck.
Background
From my experience with Ambari (Hortonworks), you have to explicitly set up your Hadoop nodes in each other's /etc/hosts file with the actual names/IPs that the Hadoop services will bind to. NOTE: hostnames should also be FQDNs - fully qualified domain names.
For example if you're setting up the hosts as:
node01.mydom.com (10.0.0.2)
node02.mydom.com (10.0.0.3)
node03.mydom.com (10.0.0.4)
These entries should be in all 3 servers' /etc/hosts, and these should be the names used when referencing them within Ambari's installation/setup wizards.
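Using the example names above, each node's /etc/hosts would contain entries like:
10.0.0.2   node01.mydom.com   node01
10.0.0.3   node02.mydom.com   node02
10.0.0.4   node03.mydom.com   node03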
If you do not pay special attention to this detail, Ambari's server will fail to find/manage any of the other nodes that you're telling it to manage.
hostname of ambari-agents
The other item to look at is the ambari-agents and what hostnames they think they're running as.
$ ps -eaf|grep ambari_agent
root 3282 1 0 Jul30 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start --expected-hostname=node01.mydom.com
root 3290 3282 1 Jul30 ? 08:24:29 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start --expected-hostname=node01.mydom.com
Debugging further
In the screen where you're attempting to register the other nodes as agents, there's a full log of what's happening and you can typically get the commands from this area and attempt to run them directly. I've done this on a number of occasions. The commands will often be python ... commands which you can then copy/paste from the logs and run on the Ambari server where you're attempting to run the install.
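When an agent refuses to register, it can also help to confirm on each node what the agent itself is configured with; the config path below is assumed from a default Ambari package install:
hostname -f                                            # should print the FQDN used in the Ambari wizard
grep hostname /etc/ambari-agent/conf/ambari-agent.ini  # the server hostname the agent reports to
sudo ambari-agent restart                              # re-register after fixing hostnames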
In my EC2 instance's syslog, I am seeing a lot of:
Aug 19 07:42:01 ip-172-31-0-40 CRON[6465]: (root) CMD (/var/awslogs/bin/awslogs-nanny.sh > /dev/null 2>&1)
type statements getting printed in the CloudWatch logs (more than a few dozen per second). Is there any way to turn this off and keep only minimal log messages?
That log line is generated by the cron daemon itself; it writes to syslog by default on some distros. How to change this behavior is also distribution-dependent, but for example the top answer for https://unix.stackexchange.com/questions/212355/where-is-my-logfile-of-crontab explains how to configure cron to have its own log file on Debian-based systems. This would stop the messages flooding your syslog.
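On an Ubuntu/Debian system running rsyslog, a sketch of that change (file name assumed from the stock rsyslog config) is to edit /etc/rsyslog.d/50-default.conf so cron gets its own file and is excluded from the catch-all:
cron.*                                  /var/log/cron.log
*.*;auth,authpriv.none,cron.none        -/var/log/syslog
and then restart rsyslog with sudo service rsyslog restart.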
I've set up OpsCenter on one of my Cassandra cluster nodes. After installation, when setting up my cluster, I tried installing the DataStax agent on all the cluster nodes via the UI, but it failed. So I had to install the agents manually.
After manually installing the agents, the node on which OpsCenter is installed is able to connect, but not the other nodes. It still says, "2 agents failed to connect". What could be the issue?
PS: My Cassandra cluster is set up on AWS on Ubuntu.
My agent.log file looks like this
ERROR [os-metrics-9] 2015-07-27 07:04:43,390 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-7] 2015-07-27 07:04:43,391 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-8] 2015-07-27 07:04:53,391 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [os-metrics-3] 2015-07-27 07:04:53,392 Long os-stats collector failed: Cannot run program "iostat": error=2, No such file or directory
ERROR [StompConnection receiver] 2015-07-27 07:05:02,946 failed connecting to **.**.**.**:61620:java.net.ConnectException: Connection timed out
You have to set stomp_interface in address.yaml, like:
stomp_interface: <ip-address>
After an agent restart, it should connect.
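A minimal way to apply that, assuming a standard package install where the service is named datastax-agent and the config lives at /var/lib/datastax-agent/conf/address.yaml:
echo "stomp_interface: <opscenter-ip>" | sudo tee -a /var/lib/datastax-agent/conf/address.yaml
sudo service datastax-agent restart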
As your agent has been able to connect from the same box where OpsCenter is installed, it sounds like:
You might not have configured your firewall properly. Please try disabling the firewall on all your boxes.
You may have multiple interfaces and the C* installation picked up an undesired interface. Run ifconfig or the ip command on all of your instances and check against the C* yaml.
About the iostat failure message: you have not installed the sysstat package. It seems you have not installed the dependencies as part of the DSE install.
The agent uses iostat to collect some information from disks. If it can't find it, you will get that error, but it just means some OS metrics will be missing (likely a lot of disk and CPU metrics).
These are some useful configurations that you should keep in mind when starting the agent manually in the conf/address.yaml file:
###A name for the node to use as a label throughout OpsCenter.
alias:
###Reachable IP address of the opscenterd machine. The connection made will be on stomp_port. Internal IP in this case
stomp_interface:
###Port for the agent's HTTP service (default: 61621).
#api_port: 61621
###The stomp_port used by opscenterd. == Must match with the 'incoming_port' in opscenter.conf
stomp_port: 61620
###The IP used to identify the node.
local_interface: 100.73.158.44
###The IP that the agent HTTP server listens on.
agent_rpc_interface:
###Host used to connect to local JMX server.
jmx_host: 100.73.158.44
###Whether or not to use SSL communication between the agent and opscenterd.
use_ssl: 1
To solve the "Cannot run program 'iostat'" error, do this:
sudo apt-get install sysstat