I want to copy the audit log file to a log server (CentOS 7).
When I put this in /etc/rsyslog.conf:
audit.* @logserver:514
I get the error:
rsyslogd: unknown facility name "audit" [v8.24.0-41.el7_7.2]
When I try to copy the audit log, as I do with the Apache log, I put:
$ModLoad imfile
$InputFileName /var/log/audit/audit.log
$InputFileTag tag_audit_log:
$InputFileStateFile audit_log
$InputFileSeverity info
$InputFileFacility local6
$InputRunFileMonitor
if $syslogtag == 'tag_audit_log' then @logserver:514
I get the error:
rsyslogd: imfile: on startup file '/var/log/audit/audit.log' does not
exist but is configured in static file monitor - this may indicate a
misconfiguration. If the file appears at a later time, it will
automatically be processed. Reason: Permission denied
[v8.24.0-41.el7_7.2]
The file exists:
# ls -lZ /var/log/audit/audit.log
-rw-------. root root system_u:object_r:auditd_log_t:s0 /var/log/audit/audit.log
How can I make it work?
Thanks
I found the solution. I followed these steps:
In /etc/rsyslog.conf I put:
*.info @remote.syslog.example.com
In /etc/audisp/plugins.d/syslog.conf:
active = yes
systemctl restart rsyslog
service auditd restart
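For reference, the stock /etc/audisp/plugins.d/syslog.conf on CentOS 7 looks roughly like this (a sketch from memory, so treat the exact values as assumptions; the optional second args token picks the syslog facility, e.g. LOG_LOCAL6, if you'd rather not forward all *.info traffic):
# /etc/audisp/plugins.d/syslog.conf
active = yes
direction = out
path = builtin_syslog
type = builtin
# first token = priority; optional second token = facility (assumed here)
args = LOG_INFO LOG_LOCAL6
format = string
With that facility set, the rsyslog side becomes local6.* @remote.syslog.example.com (@ forwards over UDP; use @@ for TCP).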
Configuration
I followed the steps in the links below to set up my GCP dynamic inventory.
https://docs.ansible.com/ansible/latest/scenario_guides/guide_gce.html
http://matthieure.me/2018/12/31/ansible_inventory_plugin.html
In short, these were the steps:
I installed the needed prerequisites:
$ pip install requests google-auth
I created a service account with sufficient privileges and set its credentials.
I added the below to the /etc/ansible/ansible.cfg file
[inventory]
enable_plugins = gcp_compute
I created a file called hosts.gcp.yml which holds the dynamic inventory setup (as shown below):
projects:
  - my-project-id
hostnames:
  - name
filters: []
auth_kind: serviceaccount
service_account_file: my/credentials_path.json
keyed_groups:
  - key: zone
and ran the command below, which worked fine:
macbook@MacBooks-MacBook-Pro Ansible % ansible-inventory --graph -i hosts.gcp.yml
@all:
  |--@_us_central1_a:
  |  |--test
  |--@ungrouped:
but when I ran the following command I got this error:
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -m ping
test | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname test: nodename nor servname provided, or not known",
    "unreachable": true
}
I then commented out the - name option from the hosts.gcp.yml file but got another error.
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -m ping
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: macbook@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}
This raises the following questions:
1- Is an SSH setup (creating users and copying SSH keys) needed on the host machines when using dynamic inventories? (I don't think so.)
2- Why is Ansible resorting to SSH even though a dynamic inventory is set? What if the host didn't expose SSH publicly or didn't have a public IP?
Your kind support is highly appreciated.
Thanks.
A more verbose output of the test
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -vvv -m ping
ansible [core 2.11.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/macbook/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.7.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/macbook/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
redirecting (type: inventory) ansible.builtin.gcp_compute to google.cloud.gcp_compute
Parsed /Users/macbook/xxxx/Projects/xxxx/Ansible/hosts.gcp.yml inventory source with ansible_collections.google.cloud.plugins.inventory.gcp_compute plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<34.132.201.8> ESTABLISH SSH CONNECTION FOR USER: None
<34.132.201.8> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/Users/macbook/.ansible/cp/026bb454d7 34.132.201.8 '/bin/sh -c '"'"'echo ~ && sleep 0'"'"''
<34.X.X.8> (255, b'', b'macbook@34.X.X.8: Permission denied (publickey).\r\n')
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: macbook@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}
macbook@MacBooks-MacBook-Pro Ansible % ansible -i hosts.gcp.yml all -u ansible -vvv -m ping
ansible [core 2.11.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/Users/macbook/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/4.7.0/libexec/lib/python3.9/site-packages/ansible
ansible collection location = /Users/macbook/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Oct 13 2021, 06:45:31) [Clang 13.0.0 (clang-1300.0.29.3)]
jinja version = 3.0.2
libyaml = True
Using /etc/ansible/ansible.cfg as config file
redirecting (type: inventory) ansible.builtin.gcp_compute to google.cloud.gcp_compute
Parsed /Users/macbook/xxxx/Projects/xxx/Ansible/hosts.gcp.yml inventory source with ansible_collections.google.cloud.plugins.inventory.gcp_compute plugin
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
META: ran handlers
<34.132.201.8> ESTABLISH SSH CONNECTION FOR USER: ansible
<34.132.201.8> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="ansible"' -o ConnectTimeout=10 -o ControlPath=/Users/macbook/.ansible/cp/46d2477dfb 34.132.201.8 '/bin/sh -c '"'"'echo ~ansible && sleep 0'"'"''
<34.X.X.8> (255, b'', b'ansible@34.X.X.8: Permission denied (publickey).\r\n')
34.X.X.8 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ansible@34.X.X.8: Permission denied (publickey).",
    "unreachable": true
}
Dynamic inventory is used only to collect data about your machines. If you want to get access to them, you still need SSH.
You must add your SSH public key to the VM's config and specify the username.
Add these lines to your ansible.cfg in the [defaults] section:
host_key_checking = false
remote_user = <username that you specify in VM's config>
private_key_file = <path to private ssh-key>
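Put together, a minimal sketch of that [defaults] section might look like this (the username and key path are placeholders for whatever you configured on the VM):
[defaults]
host_key_checking = false
# placeholder: the user whose public key you added to the VM
remote_user = ansible
# placeholder: the matching private key on your workstation
private_key_file = ~/.ssh/id_rsa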
Most probably Ansible can't establish an SSH connection to the hosts (listed in hosts.gcp.yml) because they don't recognize the SSH key of the machine that tries to ping them.
Since you're using a MacBook, it's clearly not a GCP VM. This means your GCP VMs don't have its public SSH key by default.
You can add your MacBook's key (found in ~/.ssh/id_rsa.pub) to the list of authorized keys that all GCP VMs will accept, without any further action on your side.
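One way to do that, assuming the gcloud CLI is installed and authenticated, is to add the key to the project-wide ssh-keys metadata (a sketch; note that add-metadata replaces the current ssh-keys value, so merge any existing entries into the file first):
# GCP expects project-wide key entries in the form USERNAME:KEY
echo "macbook:$(cat ~/.ssh/id_rsa.pub)" > gcp_keys.txt
# upload as project-wide metadata (overwrites the existing ssh-keys value)
gcloud compute project-info add-metadata --metadata-from-file ssh-keys=gcp_keys.txt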
As for the first question - it's clearly a DNS issue - however, I'm not versed enough in this tool, so you'd have to check whether you can ping all the VMs using their DNS names directly from your Mac's terminal. If so, the issue lies in the Ansible configuration - otherwise it's a DNS issue that prevents your computer from resolving your VMs' DNS names.
Additionally - ansible-inventory --graph -i /file/path works "offline" and will only show the structure of your inventory, regardless of whether the hosts exist or are reachable.
There are a couple of points in your question, one about inventory and one about connections.
Inventory
Your hosts.gcp.yml file is for a dynamic inventory plugin, as you said. What that means is that Ansible will run the GCP inventory plugin using the settings in that file, and the plugin will call GCP's API and generate a list of hosts to use as inventory. What the ansible-inventory command returns is what the ansible command will use also. In the example bit of output you pasted into your question, it looks like "test" is the only host it sees.
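A quick way to check exactly which hostnames (and host variables) the plugin returns is to dump the whole inventory as JSON:
ansible-inventory -i hosts.gcp.yml --list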
Connections
When you run the ansible command it will run the module against each host. It will first get the hostname returned by inventory, and then connect to that host using the transport type you specified. This is true even for the ping module. From the ping module's doc page: "This is NOT ICMP ping, this is just a trivial test module that requires Python on the remote-node." Meaning, it makes a connection.
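You can see this for yourself: the same ping module succeeds without SSH if you force a local connection (the trailing comma makes "localhost," a literal one-host inventory):
ansible all -i localhost, -c local -m ping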
Potential Gotchas
Is inventory returning the correct hostname for your environment?
What is the connection type you're using?
As for hostname, you set "hostnames" to "name" in your inventory file. Just be sure that's right. It might not be in your case.
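For example, the gcp_compute plugin also accepts IP-based hostnames; if "name" doesn't resolve from your machine, a sketch like this (the first entry that exists wins) may work better:
hostnames:
  - public_ip
  - name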
As for connection type, if you haven't configured it, then by default it will be "smart", which uses SSH. You can find what you're using by doing this:
ansible-config dump | grep DEFAULT_TRANSPORT
You can change the connection type with the --connection option to the ansible command, or any of the other ways ansible lets you specify config options. Connection type is set independently from inventory type. They are two separate steps. The connection type is set via config or the command line option and is not based on what inventory plugin you're using.
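For instance, forcing SSH with an explicit remote user and key from the command line might look like this (the key path is a placeholder; ~/.ssh/google_compute_engine is what gcloud typically generates):
ansible -i hosts.gcp.yml all -m ping --connection=ssh -u ansible --private-key ~/.ssh/google_compute_engine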
Your Problem
To resolve your problem, figure out what hostnames ansible-inventory is actually returning, and what connection type you're using. Then see if you can connect to that hostname using that connection type. If the hostname being returned is "test" and your connection type is "smart" or "ssh", then try actually connecting with ssh to "test". From the command line, literally do ssh test. If that succeeds, then ansible should successfully connect to that host when it's run. If that doesn't succeed, then you have to do whatever you need to do to fix it in order for ansible to run successfully. Likewise, if you set a connection plugin different from SSH, then you should try to connect to your host using whatever that connection method uses in order to ensure that those types of connections are actually working.
More info about all this can be found in ansible's user guide. See, for example, "Connecting to remote nodes".
Having this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
    && dnf install -y \
        openssh-clients \
        openvpn \
        slirp4netns \
    && dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Building the image with:
podman build -t peque/vpn .
If I try to run it with (note $(pwd), where the VPN configuration and credentials are stored):
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
I get the following error:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Any ideas on how I could fix this? I would not mind changing the base image if that could help (i.e., to Alpine or anything else, as long as it allows me to use openvpn for the connection).
System information
Using Podman 1.4.4 (rootless) and Fedora 30 distribution with kernel 5.1.19.
/dev/net/tun permissions
Running the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then, from the container, I can:
# ls -l /dev/ | grep net
drwxr-xr-x. 2 root root 60 Jul 23 07:31 net
I can also list /dev/net, but I get a "permission denied" error:
# ls -l /dev/net
ls: cannot access '/dev/net/tun': Permission denied
total 0
-????????? ? ? ? ? ? tun
Trying --privileged
If I try with --privileged:
podman run -v $(pwd):/vpn:Z --privileged --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then instead of the permission-denied error (errno=13), I get a no-such-file-or-directory error (errno=2):
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
I can effectively verify there is no /dev/net/ directory when using --privileged, even if I pass the --cap-add=NET_ADMIN --device=/dev/net/tun parameters.
Verbose log
This is the log I get when configuring the client with verb 3:
OpenVPN 2.4.7 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
library versions: OpenSSL 1.1.1c FIPS 28 May 2019, LZO 2.08
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:1194
Socket Buffers: R=[212992->212992] S=[212992->212992]
UDP link local (bound): [AF_INET][undef]:0
UDP link remote: [AF_INET]xx.xx.xx.xx:1194
TLS: Initial packet from [AF_INET]xx.xx.xx.xx:1194, sid=3ebc16fc 8cb6d6b1
WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
VERIFY OK: depth=1, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=internal-ca
VERIFY KU OK
Validating certificate extended key usage
++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
VERIFY EKU OK
VERIFY OK: depth=0, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=ovpn.server.address
Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
[ovpn.server.address] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:1194
SENT CONTROL [ovpn.server.address]: 'PUSH_REQUEST' (status=1)
PUSH: Received control message: 'PUSH_REPLY,route xx.xx.xx.xx 255.255.255.0,route xx.xx.xx.0 255.255.255.0,dhcp-option DOMAIN server.net,dhcp-option DNS xx.xx.xx.254,dhcp-option DNS xx.xx.xx.1,dhcp-option DNS xx.xx.xx.1,route-gateway xx.xx.xx.1,topology subnet,ping 10,ping-restart 60,ifconfig xx.xx.xx.24 255.255.255.0,peer-id 1'
OPTIONS IMPORT: timers and/or timeouts modified
OPTIONS IMPORT: --ifconfig/up options modified
OPTIONS IMPORT: route options modified
OPTIONS IMPORT: route-related options modified
OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
OPTIONS IMPORT: peer-id set
OPTIONS IMPORT: adjusting link_mtu to 1624
Outgoing Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Outgoing Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Incoming Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
ROUTE_GATEWAY xx.xx.xx.xx/255.255.255.0 IFACE=tap0 HWADDR=0a:38:ba:e6:4b:5f
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Exiting due to fatal error
Error number may change depending on whether I run the command with --privileged or not.
It turns out that you are blocked by SELinux: after running the client container and trying to access /dev/net/tun inside it, you will get the following AVC denial in the audit log:
type=AVC msg=audit(1563869264.270:833): avc: denied { getattr } for pid=11429 comm="ls" path="/dev/net/tun" dev="devtmpfs" ino=15236 scontext=system_u:system_r:container_t:s0:c502,c803 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=0
To allow your container to configure the tunnel while remaining not fully privileged and with SELinux enforcing, you need to customize the SELinux policy a bit. However, I did not find an easy way to do this properly.
Luckily, there is a tool called udica, which can generate SELinux policies from container configurations. It does not provide the desired policy on its own and requires some manual intervention, so I will describe how I got the openvpn container working step-by-step.
First, install the required tools:
$ sudo dnf install policycoreutils-python-utils policycoreutils udica
Create the container with required privileges, then generate the policy for this container:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --name ovpn peque/vpn
$ podman inspect ovpn | sudo udica -j - ovpn_container
Policy ovpn_container created!
Please load these modules using:
# semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
Restart the container with: "--security-opt label=type:ovpn_container.process" parameter
Here is the policy which was generated by udica:
$ cat ovpn_container.cil
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
)
Let's try this policy (note the --security-opt option, which tells podman to run the container in the newly created domain):
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Ugh. Here is the problem: the policy generated by udica still does not know about the specific requirements of our container, as they are not reflected in its configuration (well, it is probably possible to infer that you want to allow operations on tun_tap_device_t based on the fact that you requested --device /dev/net/tun, but...). So, we need to customize the policy by extending it with a few more statements.
Let's disable SELinux temporarily and run the container to collect the expected denials:
$ sudo setenforce 0
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
These are:
$ sudo grep denied /var/log/audit/audit.log
type=AVC msg=audit(1563889218.937:839): avc: denied { read write } for pid=3272 comm="openvpn" name="tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:840): avc: denied { open } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:841): avc: denied { ioctl } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 ioctlcmd=0x54ca scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.947:842): avc: denied { nlmsg_write } for pid=3273 comm="ip" scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tclass=netlink_route_socket permissive=1
Or more human-readable:
$ sudo grep denied /var/log/audit/audit.log | audit2allow
#============= ovpn_container.process ==============
allow ovpn_container.process self:netlink_route_socket nlmsg_write;
allow ovpn_container.process tun_tap_device_t:chr_file { ioctl open read write };
OK, let's modify the udica-generated policy by adding the advised allows to it (note that here I manually translated the syntax to CIL):
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
; This is our new stuff.
(allow process tun_tap_device_t ( chr_file ( ioctl open read write )))
(allow process self ( netlink_route_socket ( nlmsg_write )))
)
Now we turn SELinux back on, reload the module, and check that the container works correctly when we specify our custom domain:
$ sudo setenforce 1
$ sudo semodule -r ovpn_container
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
Initialization Sequence Completed
Finally, check that other containers still do not have these privileges:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Yay! We stay with SELinux on, and allow the tunnel configuration only to our specific container.
I am using Ansijet to automate running an Ansible playbook at the click of a button. The playbook stops running instances on AWS. When run manually from the command line, the playbook runs well and does its tasks, but when run through Ansijet's web interface, the following error is encountered:
Authentication or permission failure. In some cases, you may have been able to authenticate and did not have permissions on the remote directory. Consider changing the remote temp path in ansible.cfg to a path rooted in "/tmp". Failed command was: mkdir -p $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742 && echo $HOME/.ansible/tmp/ansible-tmp-1390414200.76-192986604554742, exited with result 1:
Following is the ansible.cfg configuration.
# some basic default values...
inventory = /etc/ansible/hosts
#library = /usr/share/my_modules/
remote_tmp = $HOME/.ansible/tmp/
pattern = *
forks = 5
poll_interval = 15
sudo_user = root
#ask_sudo_pass = True
#ask_pass = True
transport = smart
#remote_port = 22
module_lang = C
I tried changing the remote_tmp path to /home/ubuntu/.ansible/tmp, but I still get the same error.
By default, Ansible connects to remote servers as the same user it is running as. In the case of Ansijet, it will try to connect to remote servers as whatever user started Ansijet's node.js process. You can override this by specifying remote_user in a playbook or globally in the ansible.cfg file.
Ansible will try to create the temp directory if it doesn't already exist, but will be unable to if that user does not have a home directory or if their home directory permissions do not allow them write access.
I actually changed the temp directory in my ansible.cfg file to point to a location in /tmp which works around these sorts of issues.
remote_tmp = /tmp/.ansible-${USER}/tmp
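In context, that line lives under the [defaults] section of ansible.cfg:
[defaults]
remote_tmp = /tmp/.ansible-${USER}/tmp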
I faced the same problem a while ago and solved it like this. One possible cause is that the remote server's /tmp directory does not grant write permission. Run ls -ld /tmp and make sure its output looks something like this:
drwxrwxrwt 7 root root 20480 Feb 4 14:18 /tmp
Here root is the superuser and /tmp has mode 1777.
Also, for me simply
remote_tmp = /tmp
worked well.
Another check is to make sure $HOME is set in the shell you are trying to run under. Ansible runs commands via /bin/sh, not /bin/bash, so make sure $HOME is set in the sh shell.
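A quick way to verify, with the user and host as placeholders (the backslash stops your local shell from expanding $HOME before it reaches the remote sh):
ssh user@remote-host "sh -c 'echo \$HOME'"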
In my case I needed to log in to the server for the first time and change the default password.
Check the ansible user on the remote/client machine, as this error occurs when the ansible user's password has expired on the remote/client machine.
==========
'WARNING: Your password has expired.\nPassword change required but no TTY available.\n')
<*.*.*.*> Failed to connect to the host via ssh: WARNING: Your password has expired.
Password change required but no TTY available.
Actual error:
host_name | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to create temporary directory. In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo /tmp/ansible-$USER `\"&& mkdir /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 && echo ansible-tmp-1655256382.78-15189-162690599720687=\"` echo /tmp/ansible-$USER/ansible-tmp-1655256382.78-15189-162690599720687 `\" ), exited with result 1",
    "unreachable": true
}
===========
This happens mainly because there is no home directory on the remote server for the user.
The following steps resolved the issue for me:
Log into the remote server
Switch to root
If linux_user is the user that the host (in my case, Ansible) is trying to connect as, run the following commands:
mkdir /home/linux_user
chown linux_user:linux_user /home/linux_user
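Alternatively, if the user doesn't exist on the remote server yet, creating it with -m sets up the home directory in one step:
sudo useradd -m linux_user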
Problem:
After deploying a microservice as a WAR via the AWS EBS Tomcat 7 container, I noticed that the log rotation, which occurs at the UTC day boundary, leaves a stale inode file.
The log rotation is more of a copy-and-truncate, which causes a stale file handle for rsyslog, which is listening for changes to catalina.out. What's the best way to prevent stale inode descriptors? Should I specify a rollover policy in logback.xml, or logrotate, or ...?
Output of sudo lsof /var/log/tomcat7/catalina.out (sudo stat reports the latest inode):
rsyslogd 18970 root 2r REG 202,1 1250 134754 /var/log/tomcat7/catalina.out
but this doesn't match the log output of rsyslog in debug mode:
4638.114765354:7fc839b8c700: stream checking for file change on '/var/log/tomcat7/catalina.out', inode 135952/135952file 7 read 0 bytes
Workaround
Stop Tomcat, remove catalina.out, then restart Tomcat. This allowed rsyslog to continue streaming new records.
However, after a few hours, rsyslog fails to stream newer log records to the rsyslog destination server. The debug log of rsyslog contains the same inode as the output of stat and lsof. If you run
sudo stat /var/log/tomcat7/catalina.out
rsyslog starts streaming again.
Have you noticed rsyslog stop streaming intermittently outside of the log rollover use case?
Why would a sudo stat /var/log/tomcat7/catalina.out cause rsyslog to stream again?
I also have a problem with rsyslog not sending new records as soon as logrotate rotates catalina.out. Issuing a stat doesn't quite cut it for me, as the problem is that Tomcat stops writing to catalina.out (!). After perusing various forums and blogs, I was able to address this through the following steps:
Make sure that $WorkDirectory is defined in the rsyslog configuration; this allows rsyslog to write the "state file" for catalina.out (or for any other log file(s) it watches, for that matter) - see the sketch after these steps.
As noted on Loggly's blog, you need to stop rsyslog, delete this state file, then restart rsyslog in a postrotate entry.
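For reference, a minimal sketch of the rsyslog side using the legacy imfile directives (the paths and tag are assumptions; adjust to your layout):
# state files land in this directory
$WorkDirectory /var/lib/rsyslog
$ModLoad imfile
$InputFileName /opt/tomcat/logs/catalina.out
$InputFileTag tomcat_catalina:
$InputFileStateFile catalina_out
$InputFileSeverity info
$InputFileFacility local0
$InputRunFileMonitor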
my logrotate setting for catalina.out (state files are in /var/lib/rsyslog):
/opt/tomcat/logs/catalina.out {
    rotate 7
    size 50M
    notifempty
    missingok
    postrotate
        service rsyslog stop
        rm /var/lib/rsyslog/*
        service rsyslog start
    endscript
}
I'm using Ubuntu 13.04, and I'm running uWSGI using sudo service uwsgi start.
I've configured the log file in Django as /home/shwetanka/logs/mysite/mysite.log
But I'm getting this error:
ValueError: Unable to configure handler 'file': [Errno 13] Permission denied: '/home/shwetanka/logs/mysite/mysite.log'
How do I fix it? This should not happen when I run uWSGI as sudo.
You need to fix permissions with the chmod command, like this: chmod 775 /home/shwetanka/logs/mysite/mysite.log.
Take a look at the owner of the file with ls -l /home/shwetanka/logs/mysite/mysite.log and make it writable to uwsgi. If the file isn't owned by uwsgi, you'll have to use the chown command.
Take a look at the username under which your service is running with ps aux | grep 'uwsgi'.
If security isn't so important to you at the moment, use chmod 777 /home/shwetanka/logs/mysite/mysite.log and that's it. But that's not how this should be done.
The safest way to do this would be to check the owner and the group of the file and then change them if necessary and adjust the permissions accordingly.
Let's give an example.
If I have a file in /home/shwetanka/logs/mysite/mysite.log and the command ls -l /home/shwetanka/logs/mysite/mysite.log gives the following output:
-rw-rw-r-- 1 shwetanka shwetanka 1089 Aug 26 18:15 /home/shwetanka/logs/mysite/mysite.log
it means that the owner of the file is shwetanka and the group is also shwetanka. Now let's read the rwx bits. The first triplet relates to the file owner, so rw- means the file is readable and writable by the owner; the second triplet shows it is readable and writable by the group; and the last shows it is readable by others. You must make sure that the owner of the file is the service user that's trying to write to it, or that the file belongs to the service's group, or you'll get a permission-denied error.
Now if I have a user uwsgi that's used by the uWSGI service and want the above file to be writable by that service, I have to change the owner of the file, like this:
chown uwsgi /home/shwetanka/logs/mysite/mysite.log. Since the write bit for the owner (the first rwx group) is already set, that file will now be writable by the uWSGI service. For any further questions, please leave a comment.
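Combining both checks, a safe sequence might be (the group is an assumption; use whatever ps aux showed for your service):
# hand the file to the service user, keep it group-writable
sudo chown uwsgi:uwsgi /home/shwetanka/logs/mysite/mysite.log
sudo chmod 664 /home/shwetanka/logs/mysite/mysite.log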
Alternatively, you can set the umask option for uWSGI (http://uwsgi-docs.readthedocs.org/en/latest/Options.html#umask).
I had the same situation: I was running uWSGI via www-data and I used buildout. So the fix in my case looked like this:
[uwsgi]
recipe = buildout.recipe.uwsgi
xml-socket = /tmp/uwsgi.sock
xml-master = True
xml-chmod-socket = 666
xml-umask = 0002
xml-workers = 3
xml-env = ...
xml-wsgi-file = ...
After this, the log file permissions became 664, so members of the www-data group can also write to it.