We are trying to run a container from the ubi8-init image as a non-root user under RHEL 8 with podman. We enabled cgroups v2 globally by adding kernel parameters and checked the versions:
cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1
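For reference, kernel parameters like these can be added on RHEL 8 with grubby followed by a reboot; the invocation below is just an illustration:
# Example only: append the cgroup v2 parameters to every kernel entry, then reboot
$ sudo grubby --update-kernel=ALL --args="cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1"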
$ podman -v
podman version 2.0.5
$ podman info --debug
host:
arch: amd64
buildahVersion: 1.15.1
cgroupVersion: v2
Subuid and subgid are set:
bob:100000:65536
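To double-check that podman actually picks up this mapping, the user namespace can be inspected from a rootless session, for example:
# Sketch: print the UID mapping of the rootless user namespace
$ podman unshare cat /proc/self/uid_map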
Because of the following permission problem, we applied an ugly workaround:
Failed to create /user.slice/user-992.slice/session-371.scope/init.scope control group: Permission denied
$ chown -R 992 /sys/fs/cgroup/user.slice/user-992.slice/session-371.scope
Now we are able to run the container and jump into it via exec /bin/bash. The problem is that we get the following error when we try to copy something into the container using podman cp:
opening file `/sys/fs/cgroup/cgroup.freeze` for writing: Permission denied
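For illustration, a copy command of this form triggers the error above (the file path is only a placeholder):
$ podman cp ./example.txt ubi-init-test:/tmp/example.txt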
Sample output from commands without chown workaround:
# Trying with --cgroup-manager=systemd
$ podman run --name=ubi-init-test --cgroup-manager=systemd -it --rm --systemd=true ubi8-init
Error: writing file `/sys/fs/cgroup/user.slice/user-992.slice/user@992.service/cgroup.subtree_control`: No such file or directory: OCI runtime command not found error
# Trying with --cgroup-manager=cgroupfs
$ podman run --name=ubi-init-test --cgroup-manager=cgroupfs -it --rm --systemd=true ubi8-init
systemd 239 (239-41.el8_3) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Detected virtualization container-other.
Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux 8.3 (Ootpa)!
Set hostname to <b64ed4493a24>.
Initializing machine ID from random generator.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Failed to create /init.scope control group: Permission denied
Failed to allocate manager object: Permission denied
[!!!!!!] Failed to allocate manager object, freezing.
Freezing execution.
Something must be completely wrong, misconfigured, or buggy. Has anyone done this, or does anyone have advice regarding the issues we are running into?
I am trying to solve a similar issue.
I ran setsebool -P container_manage_cgroup true on top of adding the kernel parameters for cgroups v2, but it didn't help. Then I found this comment https://bbs.archlinux.org/viewtopic.php?pid=1895705#p1895705 and got a little bit further with --cgroup-manager=cgroupfs (I used podman unshare and then unset DBUS_SESSION_BUS_ADDRESS):
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1000/bus
$ podman unshare
$ export DBUS_SESSION_BUS_ADDRESS=
$ podman run --name=ubi-init-test --cgroup-manager=cgroupfs -it --rm --systemd=true ubi8-init
systemd 239 (239-41.el8_3.1) running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=legacy)
Detected virtualization container-other.
Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux 8.3 (Ootpa)!
Set hostname to <3caae9f73645>.
Initializing machine ID from random generator.
Failed to read AF_UNIX datagram queue length, ignoring: No such file or directory
Couldn't move remaining userspace processes, ignoring: Input/output error
[ OK ] Reached target Local File Systems.
[ OK ] Listening on Journal Socket.
[ OK ] Reached target Network is Online.
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Reached target Remote File Systems.
[ OK ] Reached target Slices.
Starting Rebuild Journal Catalog...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Reached target Paths.
[ OK ] Listening on initctl Compatibility Named Pipe.
[ OK ] Reached target Swap.
[ OK ] Listening on Process Core Dump Socket.
[ OK ] Listening on Journal Socket (/dev/log).
Starting Journal Service...
Starting Rebuild Dynamic Linker Cache...
Starting Create System Users...
[ OK ] Started Rebuild Journal Catalog.
[ OK ] Started Create System Users.
[ OK ] Started Rebuild Dynamic Linker Cache.
Starting Update is Completed...
[ OK ] Started Update is Completed.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Create Volatile Files and Directories.
Starting Update UTMP about System Boot/Shutdown...
[ OK ] Started Update UTMP about System Boot/Shutdown.
[ OK ] Reached target System Initialization.
[ OK ] Started dnf makecache --timer.
[ OK ] Listening on D-Bus System Message Bus Socket.
[ OK ] Reached target Sockets.
[ OK ] Started Daily Cleanup of Temporary Directories.
[ OK ] Reached target Timers.
[ OK ] Reached target Basic System.
Starting Permit User Sessions...
[ OK ] Started D-Bus System Message Bus.
[ OK ] Started Permit User Sessions.
[ OK ] Reached target Multi-User System.
Starting Update UTMP about System Runlevel Changes...
[ OK ] Started Update UTMP about System Runlevel Changes.
Several days ago I asked this question: Confuse about fail2ban behavior with firewallD in Centos 7.
It is a large text with several comments.
It seems something starts flushing iptables some hours after a fail2ban restart, and I can't figure out what it is.
A couple of months ago I moved a few Virtual Hosts from a dedicated server I used for more than 10 years to a Contabo VPS.
Everything went fine except the fail2ban jails. Prisoners escape. :)
My move was from CentOS 6 to CentOS 7 with Webmin/Virtualmin, LAMP and fail2ban; I left /etc/sysconfig/iptables behind and am now using firewalld.
As said, some hours after a fail2ban restart, and after some IPs have been successfully banned, something is flushing iptables, as @sebres suggested; the symptoms are "after effects" like
2019-12-05 16:55:20,856 fail2ban.action [1514]: ERROR iptables -w -n -L INPUT | grep -q 'f2b-proftpd[ \t]' -- stdout: ''
and "already banned" notices.
None of the changes I tried in default configurations changed that.
In the end I deleted the Webmin module for managing fail2ban and reinstalled the service.
I renamed /etc/fail2ban to keep a backup of the configuration.
rpm -qa | grep -i fail2ban
then
yum remove fail2ban-server
yum remove fail2ban-firewalld
yum install fail2ban-firewalld (also installs -server)
yum install fail2ban-systemd
then I copied the old jail.local to the new /etc/fail2ban directory:
[DEFAULT]
banaction = iptables-multiport
banaction_allports = iptables-allports
[sshd]
enabled = true
port = ssh
maxretry = 4
bantime = 7200
[ssh-ddos]
enabled = true
port = ssh,sftp
filter = sshd-ddos
[webmin-auth]
enabled = true
port = 10000
[proftpd]
enabled = true
bantime = -1
[postfix]
enabled = true
bantime = -1
[dovecot]
enabled = true
bantime = -1
[postfix-sasl]
enabled = true
bantime = -1
I also checked cron jobs to see if something can be flushing iptables in any way.
At the moment I am periodically running a script that manually rejects those "already banned" IPs once, using:
firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='xxx.xxx.xxx.xxx' reject"
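A minimal sketch of such a periodic script, assuming the offending IPs are collected in a plain text file (one per line; the path is hypothetical):
#!/bin/bash
# Re-apply a permanent reject rule for every IP listed in the file, then reload firewalld
while read -r ip; do
  firewall-cmd --permanent --add-rich-rule="rule family='ipv4' source address='${ip}' reject"
done < /root/already-banned.txt
firewall-cmd --reload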
So my question is: how can I find out what is flushing iptables?
UPDATE 1
After updating to the stable fail2ban 0.10 release it seemed the problems were gone, but after 5 days they started again.
Previously, with v0.9, the problems started a few hours after a restart.
UPDATE 2
Running fail2ban-client -d I got "Found no accessible config files for 'filter.d/sshd-ddos'". That's because I kept the old ssh-ddos config in jail.conf.
So, a sub-question is whether I'm right in simply making this change (at least there are no errors in fail2ban-client -d):
#filter = sshd-ddos
filter = sshd
mode = aggressive (as suggested by @sebres)
Here's the output of fail2ban-client -d
"No, the after effect is there because something is flushing rules, not vice versa"
I understand that; I'm not that fluent in English. I meant that it was a symptom that something is happening, hence the effect.
"So which banning action do you use really?"
Sorry for my poor knowledge on this matter. Is that what is included in the [DEFAULT] part of jail.local?
"(for example can you exclude some service from Contabo, installed or integrated in your VPS, that is doing that?)"
I asked them some time ago, but their answer was "...we are providing our customers with the basic installations..." - nothing more technical than that. They have several VPS offerings and I don't see other people complaining about this.
UPDATE 3
The actions in the first jail.local (from the fresh Webmin/Virtualmin install) were
action = firewallcmd-ipset[]
action_ = %(banaction)s[name=%(__name__)s, bantime="%(bantime)s", port="%(port)s", protocol="%(protocol)s", chain="%(chain)s"]
I changed that to
banaction = iptables-multiport
banaction_allports = iptables-allports
some time ago.
Now I have gone back to firewallcmd-ipset in [DEFAULT], and this is the fail2ban-client -d output.
I'll check fail2ban.log. ... After a few hours, the problems appeared again.
Regarding firewallD: Webmin has a section with the defined zones/rules and tools to manage them instead of having to write commands in a shell. Nothing more.
I cannot imagine fail2ban is to blame here, but to exclude it (or the possibility that some fail2ban action is broken on your side), we should take a look at your configuration...
So provide your whole (unmodified) configuration dump:
fail2ban-client -d
or at least a dump of all actions of all jails:
fail2ban-client -d | grep 'action'
now using firewalld
I am pretty sure I see iptables in your config and in the log excerpt (the error message).
So which banning action do you use really?
something is flushing iptables because of "after effects" like ...
No, the after effect is there because something is flushing rules, not vice versa.
This can be, for example, some "firewall" script which you apply to configure iptables (ports, default reject rules, etc.), a restart of some service due to a dependency (restarting or reloading iptables), some script mistakenly implementing port knocking, and many other things.
I don't think it will be simple to find if you don't have it under your control (for example, can you exclude some service from Contabo, installed or integrated in your VPS, that is doing that?).
UPDATE 1:
I don't see any error in your excerpt... to remove f2b-proftpd (or flush the INPUT chain) it would have to contain:
either <iptables> -D INPUT ... -j f2b-proftpd
or even <iptables> -F INPUT somewhere in the action parameters, except in actionstop (which is only intended to run on shutdown or restart).
But it appears only in actionstop, as expected.
All entries containing removal from the INPUT chain (actionstop only):
<iptables> -D INPUT -p tcp -m multiport --dports ssh -j f2b-sshd
<iptables> -D INPUT -p tcp -m multiport --dports 10000 -j f2b-webmin-auth
<iptables> -D INPUT -p tcp -m multiport --dports ftp,ftp-data,ftps,ftps-data -j f2b-proftpd
<iptables> -D INPUT -p tcp -m multiport --dports smtp,465,submission -j f2b-postfix
<iptables> -D INPUT -p tcp -m multiport --dports pop3,pop3s,imap,imaps,submission,465,sieve -j f2b-dovecot
<iptables> -D INPUT -p tcp -m multiport --dports smtp,465,submission,imap,imaps,pop3,pop3s -j f2b-postfix-sasl
<iptables> -D INPUT -p tcp -m multiport --dports ssh,sftp -j f2b-ssh-ddos
Thus we can exclude fail2ban as the culprit.
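If you want to re-check this on your own dump, a grep along these lines (just a sketch) lists only the INPUT-chain removals:
$ fail2ban-client -d | grep -e '-D INPUT'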
But you said "now using firewalld" - where? In your config dump I see iptables-multiport only:
['set', 'sshd', 'addaction', 'iptables-multiport']
['set', 'webmin-auth', 'addaction', 'iptables-multiport']
['set', 'proftpd', 'addaction', 'iptables-multiport']
['set', 'postfix', 'addaction', 'iptables-multiport']
['set', 'dovecot', 'addaction', 'iptables-multiport']
['set', 'postfix-sasl', 'addaction', 'iptables-multiport']
['set', 'ssh-ddos', 'addaction', 'iptables-multiport'],
So, for example, if firewalld rewrote the INPUT chain of iptables (removing the fail2ban entries), you would get exactly this issue.
So you had better find out which netfilter frontend your system really uses and use that as the banaction (or avoid the rewriting of fail2ban's rules, e.g. by adding that service to fail2ban's dependencies, etc.).
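For example, if firewalld really is the main netfilter frontend on the host, the default ban action in jail.local could point at the firewallcmd actions instead of iptables-multiport (action names may vary between fail2ban versions, so treat this as a sketch):
[DEFAULT]
banaction = firewallcmd-rich-rules
banaction_allports = firewallcmd-allports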
Having this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
&& dnf install -y \
openssh-clients \
openvpn \
slirp4netns \
&& dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Building the image with:
podman build -t peque/vpn .
If I try to run it with (note $(pwd), where the VPN configuration and credentials are stored):
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
I get the following error:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Any ideas on how I could fix this? I would not mind changing the base image if that could help (e.g. to Alpine or anything else, as long as it allows me to use openvpn for the connection).
System information
Using Podman 1.4.4 (rootless) and Fedora 30 distribution with kernel 5.1.19.
/dev/net/tun permissions
Running the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then, from the container, I can:
# ls -l /dev/ | grep net
drwxr-xr-x. 2 root root 60 Jul 23 07:31 net
I can also list /dev/net, but I get a "permission denied" error for tun:
# ls -l /dev/net
ls: cannot access '/dev/net/tun': Permission denied
total 0
-????????? ? ? ? ? ? tun
Trying --privileged
If I try with --privileged:
podman run -v $(pwd):/vpn:Z --privileged --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then instead of the permission-denied error (errno=13), I get a no-such-file-or-directory error (errno=2):
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
I can indeed verify that there is no /dev/net/ directory when using --privileged, even if I pass the --cap-add=NET_ADMIN --device=/dev/net/tun parameters.
Verbose log
This is the log I get when configuring the client with verb 3:
OpenVPN 2.4.7 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
library versions: OpenSSL 1.1.1c FIPS 28 May 2019, LZO 2.08
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:1194
Socket Buffers: R=[212992->212992] S=[212992->212992]
UDP link local (bound): [AF_INET][undef]:0
UDP link remote: [AF_INET]xx.xx.xx.xx:1194
TLS: Initial packet from [AF_INET]xx.xx.xx.xx:1194, sid=3ebc16fc 8cb6d6b1
WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
VERIFY OK: depth=1, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=internal-ca
VERIFY KU OK
Validating certificate extended key usage
++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
VERIFY EKU OK
VERIFY OK: depth=0, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=ovpn.server.address
Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
[ovpn.server.address] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:1194
SENT CONTROL [ovpn.server.address]: 'PUSH_REQUEST' (status=1)
PUSH: Received control message: 'PUSH_REPLY,route xx.xx.xx.xx 255.255.255.0,route xx.xx.xx.0 255.255.255.0,dhcp-option DOMAIN server.net,dhcp-option DNS xx.xx.xx.254,dhcp-option DNS xx.xx.xx.1,dhcp-option DNS xx.xx.xx.1,route-gateway xx.xx.xx.1,topology subnet,ping 10,ping-restart 60,ifconfig xx.xx.xx.24 255.255.255.0,peer-id 1'
OPTIONS IMPORT: timers and/or timeouts modified
OPTIONS IMPORT: --ifconfig/up options modified
OPTIONS IMPORT: route options modified
OPTIONS IMPORT: route-related options modified
OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
OPTIONS IMPORT: peer-id set
OPTIONS IMPORT: adjusting link_mtu to 1624
Outgoing Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Outgoing Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Incoming Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
ROUTE_GATEWAY xx.xx.xx.xx/255.255.255.0 IFACE=tap0 HWADDR=0a:38:ba:e6:4b:5f
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Exiting due to fatal error
Error number may change depending on whether I run the command with --privileged or not.
It turns out that you are blocked by SELinux: after running the client container and trying to access /dev/net/tun inside it, you will get the following AVC denial in the audit log:
type=AVC msg=audit(1563869264.270:833): avc: denied { getattr } for pid=11429 comm="ls" path="/dev/net/tun" dev="devtmpfs" ino=15236 scontext=system_u:system_r:container_t:s0:c502,c803 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=0
To allow your container to configure the tunnel while staying not fully privileged and keeping SELinux enforcing, you need to customize the SELinux policy a bit. However, I did not find an easy way to do this properly.
Luckily, there is a tool called udica, which can generate SELinux policies from container configurations. It does not provide the desired policy on its own and requires some manual intervention, so I will describe how I got the openvpn container working step-by-step.
First, install the required tools:
$ sudo dnf install policycoreutils-python-utils policycoreutils udica
Create the container with required privileges, then generate the policy for this container:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --name ovpn peque/vpn
$ podman inspect ovpn | sudo udica -j - ovpn_container
Policy ovpn_container created!
Please load these modules using:
# semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
Restart the container with: "--security-opt label=type:ovpn_container.process" parameter
Here is the policy which was generated by udica:
$ cat ovpn_container.cil
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
)
Let's try this policy (note the --security-opt option, which tells podman to run the container in the newly created domain):
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Ugh. Here is the problem: the policy generated by udica still does not know about the specific requirements of our container, as they are not reflected in its configuration (well, it could probably infer that you want to allow operations on tun_tap_device_t from the fact that you requested --device /dev/net/tun, but...). So, we need to customize the policy by extending it with a few more statements.
Let's disable SELinux temporarily and run the container to collect the expected denials:
$ sudo setenforce 0
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
These are:
$ sudo grep denied /var/log/audit/audit.log
type=AVC msg=audit(1563889218.937:839): avc: denied { read write } for pid=3272 comm="openvpn" name="tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:840): avc: denied { open } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:841): avc: denied { ioctl } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 ioctlcmd=0x54ca scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.947:842): avc: denied { nlmsg_write } for pid=3273 comm="ip" scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tclass=netlink_route_socket permissive=1
Or more human-readable:
$ sudo grep denied /var/log/audit/audit.log | audit2allow
#============= ovpn_container.process ==============
allow ovpn_container.process self:netlink_route_socket nlmsg_write;
allow ovpn_container.process tun_tap_device_t:chr_file { ioctl open read write };
OK, let's modify the udica-generated policy by adding the advised allows to it (note that here I manually translated the syntax to CIL):
(block ovpn_container
(blockinherit container)
(allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
(allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
(allow process default_t ( file ( getattr read write append ioctl lock map open create )))
(allow process default_t ( sock_file ( getattr read write append open )))
; This is our new stuff.
(allow process tun_tap_device_t ( chr_file ( ioctl open read write )))
(allow process self ( netlink_route_socket ( nlmsg_write )))
)
Now we re-enable SELinux, reload the module, and check that the container works correctly when we specify our custom domain:
$ sudo setenforce 1
$ sudo semodule -r ovpn_container
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
Initialization Sequence Completed
Finally, check that other containers still do not have these privileges:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Yay! We stay with SELinux on, and allow the tunnel configuration only to our specific container.
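If you later want to confirm that the custom module is still loaded, something like this should list it:
$ sudo semodule -l | grep ovpn_container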
I'm not sure why, but when I run:
ubuntu@ip-10-46-206-16:/etc/init.d$ sudo apache2 stop
Usage: apache2 [-D name] [-d directory] [-f file]
[-C "directive"] [-c "directive"]
[-k start|restart|graceful|graceful-stop|stop]
[-v] [-V] [-h] [-l] [-L] [-t] [-S] [-X]
Options:
-D name : define a name for use in <IfDefine name> directives
-d directory : specify an alternate initial ServerRoot
-f file : specify an alternate ServerConfigFile
-C "directive" : process directive before reading config files
-c "directive" : process directive after reading config files
-e level : show startup errors of level (see LogLevel)
-E file : log startup errors to file
-v : show version number
-V : show compile settings
-h : list available command line options (this page)
-l : list compiled in modules
-L : list available configuration directives
-t -D DUMP_VHOSTS : show parsed settings (currently only vhost settings)
-S : a synonym for -t -D DUMP_VHOSTS
-t -D DUMP_MODULES : show all loaded modules
-M : a synonym for -t -D DUMP_MODULES
-t : run syntax check for config files
-X : debug mode (only one worker, do not detach)
it doesn't seem to stop the server. I can still reach the IP and it returns the default page. Is there a reason why?
Try:
sudo service apache2 stop
See here:
https://help.ubuntu.com/community/ApacheMySQLPHP
The older way would be:
sudo /etc/init.d/apache2 stop
Note that when you do sudo apache2 stop, you are running apache2 from your PATH, not from the current folder (usually, . is not in the PATH). Try sudo ./apache2 stop for that.
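You can see which binary the shell actually resolves; on a typical Ubuntu install it is the server binary in /usr/sbin, not the init script:
$ type apache2
apache2 is /usr/sbin/apache2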
See here:
http://www.cyberciti.biz/faq/ubuntu-linux-start-restart-stop-apache-web-server/
This is not a programming question. I guess you have to look for answers here http://serverfault.com
You have to type the full path to the apache2 init script. Example:
sudo /etc/init.d/apache2 stop
Check: ubuntu@ip-10-46-206-16:/etc/init.d$ ./apache2 stop
This will solve your problem.
One thing is missing in the command; I encountered the same issue:
sudo apache2 stop
Should be
sudo service apache2 stop
Hope no one searches too far for the missing service part of the command :)
I'm testing out using memcached to cache django views. How can I tell if memcached is actually caching anything from the Linux command line?
You could use the official perl script:
memcached-tool 127.0.0.1:11211 stats
Or just use telnet and the stats command e.g.:
# telnet localhost [memcacheport]
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
stats
STAT pid 2239
STAT uptime 10228704
STAT time 1236714928
STAT version 1.2.3
STAT pointer_size 32
STAT rusage_user 2781.185813
STAT rusage_system 2187.764726
STAT curr_items 598669
STAT total_items 31363235
STAT bytes 37540884
STAT curr_connections 131
STAT total_connections 8666
STAT connection_structures 267
STAT cmd_get 27
STAT cmd_set 30694598
STAT get_hits 16
STAT get_misses 11
STAT evictions 0
STAT bytes_read 2346004016
STAT bytes_written 388732988
STAT limit_maxbytes 268435456
STAT threads 4
END
I know this question is old, but here is another useful approach for testing memcached with django:
As @Jacob mentioned, you can start memcached in very verbose mode (not as a daemon):
memcached -vv
To test your django cache config, you can use the low-level cache api.
First, start up the python interpreter and load your django project settings:
python manage.py shell
From the shell, you can use the low-level cache api to test your memcache server:
from django.core.cache import cache
cache.set('test', 'test value')
If your cache configuration is correct, you should see output in memcache similar to this:
<32 set :1:test 0 300 10
>32 STORED
Start memcached not as a daemon but in the foreground: just run memcached -vv for very verbose output. You will see when gets and sets come in to the memcached server.
A simple way to test whether memcache was working was to sneak a commented-out timestamp into every page served. If the timestamp stayed the same across multiple requests to a page, then the page was being cached by memcache.
In the Django settings, I also set up the cache mechanism to use a file cache on the filesystem (really slow), but after hitting the pages I could see actual cache files being placed in the file path, so I could confirm that caching was active in Django.
I used both of these steps to work out my caching problem. It turned out I did not have caching turned on correctly in Django. The newer method of activating caching is to use the 'django.middleware.cache.CacheMiddleware' middleware (not the variant with two middleware pieces that have to be the first/last entries in the middleware settings).
From the command line, try the command below:
echo stats | nc 127.0.0.1 11211
If it doesn't return anything, memcached isn't running. Otherwise it should return a bunch of stats, including uptime (and hit and miss counts).
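To pull out just those counters, you can filter the output, e.g.:
echo stats | nc 127.0.0.1 11211 | grep -E 'uptime|get_hits|get_misses'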
The reference article is here,
https://www.percona.com/blog/2008/11/26/a-quick-way-to-get-memcached-status/
To see changes every 2 seconds:
watch "echo stats | nc 127.0.0.1 11211"
In Bash, you can check the statistics of memcache by this command:
exec 3<>/dev/tcp/localhost/11211; printf "stats\nquit\n" >&3; cat <&3
To flush the cache, send the flush_all command (or use the memflush tool from libmemcached):
echo flush_all >/dev/tcp/localhost/11211
and check if the stats increased.
To dump all the cached objects, use memdump or memcdump command (part of memcached/libmemcached package):
memcdump --servers=localhost:11211
or:
memdump --servers=localhost:11211
If you're using PHP, to see whether the memcached extension is enabled, check with: php -i | grep memcached.
Tracing
To check what exactly the memcached process is doing, you can use network sniffers or debuggers (e.g. strace on Linux or dtrace/dtruss on Unix/OS X). See some examples below.
Strace
sudo strace -e read,write -fp $(pgrep memcached)
To format output in a better way, check: How to parse strace in shell into plain text?
Dtruss
Dtruss is a dtrace wrapper which is available on Unix systems. Run it as:
sudo dtruss -t read -fp $(pgrep memcached)
Tcpdump
sudo tcpdump -i lo0 -s1500 -w- -ln port 11211 | strings -10
Memcached can actually write to a logfile on its own, without having to resort to restarting it manually. The /etc/init.d/memcached init script (/usr/lib/systemd/system/memcached.service on EL7+; ugh) can call memcached with the options specified in /etc/memcached.conf (or /etc/sysconfig/memcached on EL5+). Among these options are verbosity and log file path.
In short, you just need to add (or uncomment) these two lines to that conf/sysconfig file...
-vv
logfile /path/to/log
...and restart the daemon with service memcached restart (EL3-7) or /etc/init.d/memcached restart (Debian/Ubuntu).
And then you can monitor this log in the traditional way, like tail -f /path/to/log, for example.
To extend Node's response: you can use socat UNIX-CONNECT:/var/run/memcached.sock STDIN to debug a unix socket.
Example:
$ socat UNIX-CONNECT:/var/run/memcached.sock STDIN
stats
STAT pid 931
STAT uptime 10
STAT time 1378574384
STAT version 1.4.13
STAT libevent 2.0.19-stable
STAT pointer_size 32
STAT rusage_user 0.000000
STAT rusage_system 0.015625
STAT curr_connections 1
STAT total_connections 2
STAT connection_structures 2
You can test memcached (or any server) with the script below:
lsof -i :11211 | grep 'LISTEN'>/dev/null 2>/dev/null;echo $?
If it returns 0, the server is actually running; if it returns 1, it is not. If you want a script that reports whether the server is running on some port, use the following:
lsof -i :11211 | grep 'LISTEN'>/dev/null 2>/dev/null;
if [ $? -eq 0 ]; then
echo "Your memcache server is running"
else
echo "No its not running"
fi
Can you use curl to fetch a page a few hundred times and time the results? You could also look at running a process on the server that simulates heavy CPU/disk load while doing this.
I wrote an expect script is-memcached-running that tests if memcached is running on a host/port combination (run as is-memcached-running localhost 11211):
#! /usr/bin/env expect
set timeout 1
set ip [lindex $argv 0]
set port [lindex $argv 1]
spawn telnet $ip $port
expect "Escape character is '^]'."
send stats\r
expect "END"
send quit\r
expect eof
If you run your system from a Makefile rule, you could make your startup depend on a make target that asserts it is up and running (or helps you get that state). It is verbose when the check fails to make it easy for us to debug failed ci runs, installs memcached when it's missing, and is brief and to the point otherwise:
#! /bin/bash
if [[ "$(type -P memcached)" ]]; then
echo 'memcached installed; checking if it is running'
memcached_debug=`mktemp memcache-check.XXXXX`
if is-memcached-running localhost 11211 >$memcached_debug 2>&1; then
echo 'Yep; memcached online'
else
cat $memcached_debug
echo
echo '****** Error: memcached is not running! ******'
if [[ "$OSTYPE" =~ ^darwin ]]; then
echo
echo 'Instructions to auto-spawn on login (or just start now) are shown'
echo 'at the end of a "brew install memcached" run (try now, if you did'
echo 'not do so already) or, if you did, after a "brew info memcached".'
echo
fi
exit 1
fi
rm -f $memcached_debug
else
echo memcached was not found on your system.
if [[ "$OSTYPE" =~ ^darwin ]]; then
brew install memcached
elif [[ "$OSTYPE" =~ ^linux ]]; then
sudo apt-get install memcached
else
exit 1
fi
fi
Following Aryashree's post, this helped me raise an error if memcached is not running locally:
import subprocess
port = 11211
res = subprocess.Popen(f"echo stats | nc 127.0.0.1 {port}",
                       shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, _ = res.communicate()  # wait for nc and collect its output (bytes)
if out:
    # the first stats line looks like "STAT pid <pid>"
    lines = out.decode().split('\r\n')
    pid = lines[0].split(' ')[-1]
    print(f"[MemCached] pid {pid} Running on port {port}")
else:
    raise RuntimeError(f"No Memcached is present on port {port}")
I'm using Mezzanine and the only answer that worked for me was Jacob's: stopping the daemon and running memcached -vv.
If you're using RHEL or CentOS 8:
To get memcached to log to /var/log/messages (quick, without rotation), see
https://serverfault.com/questions/208538/how-to-specify-the-log-file-for-memcached-on-rhel-centos/1054741#1054741