LLDB or GDB: break if any child/fork hits a breakpoint

I'm trying to figure out the following issue with podman in rootless mode:
$ podman run -it --rm docker.io/archlinux
Error: failed to mount overlay for metacopy check with "" options: permission denied
When I strace -f this I get the following:
$ strace -f podman run -it --rm docker.io/archlinux 2>&1 | grep EACCES
[pid 290534] mount("overlay", "/home/user/.local/share/containers/storage/overlay/metacopy-check3697694896/merged", "overlay", 0, "lowerdir=/home/user/.local/s"...) = -1 EACCES (Permission denied)
[pid 290534] openat(10, "work", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied)
I want to look at the permissions of the metacopy directory, but podman immediately deletes it, so I cannot look at it after podman has finished running.
I tried break mount in gdb, but it detaches from the child, so the breakpoint is never hit.
I also cannot simply follow every fork, as podman creates many children and I cannot sort through them by hand.
How can I tell gdb or lldb to attach to all processes and switch to the correct process when the breakpoint is hit?
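A minimal gdb sketch in this direction, untested against podman itself and assuming the EACCES really comes from a direct mount(2) call as the strace output suggests: keep every forked child attached and let a syscall catchpoint stop whichever process enters mount.
# mount-catch.gdb -- hypothetical gdb command file
# keep every forked child attached as a new inferior
set detach-on-fork off
# resume all inferiors together instead of only the current one
set schedule-multiple on
# a syscall catchpoint needs no symbols, unlike "break mount"
catch syscall mount
run
$ gdb -x mount-catch.gdb --args podman run -it --rm docker.io/archlinux
When the catchpoint fires, info inferiors shows which process stopped; while podman is held there, the metacopy-check directory under ~/.local/share/containers/storage/overlay/ can be inspected from another terminal.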

Cannot use vscode-cpptools in docker without root permission

I defined a simple Docker image here: https://github.com/htool-ddm/htool_testing_environments/blob/master/ubuntu/Dockerfile where I defined a user with
ARG USER=mpi
ENV USER ${USER}
ENV USER_HOME /home/${USER}
RUN useradd -s /bin/bash --user-group --system --create-home --no-log-init ${USER}
and when I used this image as a devcontainer with
"image": "pierremarchand/htool_testing_environments:ubuntu_gcc_openmpi",
"workspaceFolder": "/home/mpi",
"workspaceMount": "source=${localWorkspaceFolder},target=/home/mpi/,type=bind",
I get the following error:
[Error - 4:48:52 PM] cpptools client: couldn't create connection to server.
Launching server using command /home/mpi/.vscode-server/extensions/ms-vscode.cpptools-1.13.9-linux-x64/bin/cpptools failed. Error: spawn /home/mpi/.vscode-server/extensions/ms-vscode.cpptools-1.13.9-linux-x64/bin/cpptools EACCES
I guess this is a permission issue, because it works when running the devcontainer with root permission (using "remoteUser": "root"). Is there an issue in the way I defined my Docker image, or in the way I define my devcontainer?
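One guess rather than a confirmed fix: because the bind mount targets the whole of /home/mpi, the .vscode-server directory (and the cpptools binary inside it) ends up on the host filesystem, where host-side ownership or a noexec mount can leave it non-executable for the mpi user. A sketch that mounts the workspace into a subdirectory instead, so .vscode-server stays on the container filesystem; the workspace subdirectory and the explicit "remoteUser" are assumptions:
"image": "pierremarchand/htool_testing_environments:ubuntu_gcc_openmpi",
"remoteUser": "mpi",
"workspaceFolder": "/home/mpi/workspace",
"workspaceMount": "source=${localWorkspaceFolder},target=/home/mpi/workspace,type=bind",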

Trying to kill process with root user but it says: <kill: (15307): Permission denied>

I have a shell file <save_intraday.sh> that contains
docker-compose exec web python manage.py save_retail_trades_data
After executing this python script
import subprocess
subprocess.run('./save_intraday.sh', shell=True)
some django command in the docker container starts running and everything works perfectly.
After some time I capture every PID related to the command above
with ps aux | grep save_retail, then I execute sudo kill -9 <captured pid>.
The problem is that after executing sudo kill -9 <captured pid> it says kill: (15307): Permission denied (of course 15307 is just for the demo).
For example
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 59064 7.5 1.0 351076 130612 pts/0 Ssl+ 10:19 0:05 python manage.py save_retail_trades_data
My PID = 59064,
the command I execute to kill the process is sudo kill -9 59064,
and the output is kill: (59064): Permission denied.
As far as I know, root is the most privileged user, yet it still says permission denied. What I suspect is that the root user does not have permission to kill a process that is running inside the docker container.
I would appreciate any help solving this problem.
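For comparison, a hedged sketch of signalling the command from inside the container instead of killing the host-side PID (the web service name is taken from the compose command above; pkill being available in the image is an assumption):
$ docker-compose exec web pkill -f save_retail_trades_data
or simply stop the whole service:
$ docker-compose stop web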

Connect to VPN with Podman

Having this Dockerfile:
FROM fedora:30
ENV LANG C.UTF-8
RUN dnf upgrade -y \
    && dnf install -y \
        openssh-clients \
        openvpn \
        slirp4netns \
    && dnf clean all
CMD ["openvpn", "--config", "/vpn/ovpn.config", "--auth-user-pass", "/vpn/ovpn.auth"]
Building the image with:
podman build -t peque/vpn .
If I try to run it with (note $(pwd), where the VPN configuration and credentials are stored):
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
I get the following error:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Any ideas on how I could fix this? I would not mind changing the base image if that could help (e.g. Alpine or anything else, as long as it allows me to use openvpn for the connection).
System information
Using Podman 1.4.4 (rootless) and Fedora 30 distribution with kernel 5.1.19.
/dev/net/tun permissions
Running the container with:
podman run -v $(pwd):/vpn:Z --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then, from the container, I can:
# ls -l /dev/ | grep net
drwxr-xr-x. 2 root root 60 Jul 23 07:31 net
I can also list /dev/net, but I get a "permission denied" error:
# ls -l /dev/net
ls: cannot access '/dev/net/tun': Permission denied
total 0
-????????? ? ? ? ? ? tun
Trying --privileged
If I try with --privileged:
podman run -v $(pwd):/vpn:Z --privileged --cap-add=NET_ADMIN --device=/dev/net/tun -it peque/vpn
Then instead of the permission-denied error (errno=13), I get a no-such-file-or-directory error (errno=2):
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
I can effectively verify there is no /dev/net/ directory when using --privileged, even if I pass the --cap-add=NET_ADMIN --device=/dev/net/tun parameters.
Verbose log
This is the log I get when configuring the client with verb 3:
OpenVPN 2.4.7 x86_64-redhat-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Feb 20 2019
library versions: OpenSSL 1.1.1c FIPS 28 May 2019, LZO 2.08
Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
TCP/UDP: Preserving recently used remote address: [AF_INET]xx.xx.xx.xx:1194
Socket Buffers: R=[212992->212992] S=[212992->212992]
UDP link local (bound): [AF_INET][undef]:0
UDP link remote: [AF_INET]xx.xx.xx.xx:1194
TLS: Initial packet from [AF_INET]xx.xx.xx.xx:1194, sid=3ebc16fc 8cb6d6b1
WARNING: this configuration may cache passwords in memory -- use the auth-nocache option to prevent this
VERIFY OK: depth=1, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=internal-ca
VERIFY KU OK
Validating certificate extended key usage
++ Certificate has EKU (str) TLS Web Server Authentication, expects TLS Web Server Authentication
VERIFY EKU OK
VERIFY OK: depth=0, C=ES, ST=XXX, L=XXX, O=XXXXX, emailAddress=email@domain.com, CN=ovpn.server.address
Control Channel: TLSv1.2, cipher TLSv1.2 ECDHE-RSA-AES256-GCM-SHA384, 2048 bit RSA
[ovpn.server.address] Peer Connection Initiated with [AF_INET]xx.xx.xx.xx:1194
SENT CONTROL [ovpn.server.address]: 'PUSH_REQUEST' (status=1)
PUSH: Received control message: 'PUSH_REPLY,route xx.xx.xx.xx 255.255.255.0,route xx.xx.xx.0 255.255.255.0,dhcp-option DOMAIN server.net,dhcp-option DNS xx.xx.xx.254,dhcp-option DNS xx.xx.xx.1,dhcp-option DNS xx.xx.xx.1,route-gateway xx.xx.xx.1,topology subnet,ping 10,ping-restart 60,ifconfig xx.xx.xx.24 255.255.255.0,peer-id 1'
OPTIONS IMPORT: timers and/or timeouts modified
OPTIONS IMPORT: --ifconfig/up options modified
OPTIONS IMPORT: route options modified
OPTIONS IMPORT: route-related options modified
OPTIONS IMPORT: --ip-win32 and/or --dhcp-option options modified
OPTIONS IMPORT: peer-id set
OPTIONS IMPORT: adjusting link_mtu to 1624
Outgoing Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Outgoing Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
Incoming Data Channel: Cipher 'AES-128-CBC' initialized with 128 bit key
Incoming Data Channel: Using 160 bit message hash 'SHA1' for HMAC authentication
ROUTE_GATEWAY xx.xx.xx.xx/255.255.255.0 IFACE=tap0 HWADDR=0a:38:ba:e6:4b:5f
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2)
Exiting due to fatal error
Error number may change depending on whether I run the command with --privileged or not.
It turns out that you are blocked by SELinux: after running the client container and trying to access /dev/net/tun inside it, you will get the following AVC denial in the audit log:
type=AVC msg=audit(1563869264.270:833): avc: denied { getattr } for pid=11429 comm="ls" path="/dev/net/tun" dev="devtmpfs" ino=15236 scontext=system_u:system_r:container_t:s0:c502,c803 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=0
To allow your container to configure the tunnel while staying not fully privileged and keeping SELinux in enforcing mode, you need to customize the SELinux policy a bit. However, I did not find an easy way to do this properly.
Luckily, there is a tool called udica, which can generate SELinux policies from container configurations. It does not provide the desired policy on its own and requires some manual intervention, so I will describe how I got the openvpn container working step-by-step.
First, install the required tools:
$ sudo dnf install policycoreutils-python-utils policycoreutils udica
Create the container with required privileges, then generate the policy for this container:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --name ovpn peque/vpn
$ podman inspect ovpn | sudo udica -j - ovpn_container
Policy ovpn_container created!
Please load these modules using:
# semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
Restart the container with: "--security-opt label=type:ovpn_container.process" parameter
Here is the policy which was generated by udica:
$ cat ovpn_container.cil
(block ovpn_container
    (blockinherit container)
    (allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
    (allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
    (allow process default_t ( file ( getattr read write append ioctl lock map open create )))
    (allow process default_t ( sock_file ( getattr read write append open )))
)
Let's try this policy (note the --security-opt option, which tells podman to run the container in the newly created domain):
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Ugh. Here is the problem: the policy generated by udica still does not know about the specific requirements of our container, as they are not reflected in its configuration (well, it is probably possible to infer that you want to allow operations on tun_tap_device_t from the fact that you requested --device /dev/net/tun, but...). So we need to customize the policy by extending it with a few more statements.
Let's disable SELinux temporarily and run the container to collect the expected denials:
$ sudo setenforce 0
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
These are:
$ sudo grep denied /var/log/audit/audit.log
type=AVC msg=audit(1563889218.937:839): avc: denied { read write } for pid=3272 comm="openvpn" name="tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:840): avc: denied { open } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.937:841): avc: denied { ioctl } for pid=3272 comm="openvpn" path="/dev/net/tun" dev="devtmpfs" ino=15178 ioctlcmd=0x54ca scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:object_r:tun_tap_device_t:s0 tclass=chr_file permissive=1
type=AVC msg=audit(1563889218.947:842): avc: denied { nlmsg_write } for pid=3273 comm="ip" scontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tcontext=system_u:system_r:ovpn_container.process:s0:c138,c149 tclass=netlink_route_socket permissive=1
Or more human-readable:
$ sudo grep denied /var/log/audit/audit.log | audit2allow
#============= ovpn_container.process ==============
allow ovpn_container.process self:netlink_route_socket nlmsg_write;
allow ovpn_container.process tun_tap_device_t:chr_file { ioctl open read write };
OK, let's modify the udica-generated policy by adding the advised allows to it (note that I manually translated the syntax to CIL here):
(block ovpn_container
    (blockinherit container)
    (allow process process ( capability ( chown dac_override fsetid fowner mknod net_raw setgid setuid setfcap setpcap net_bind_service sys_chroot kill audit_write net_admin )))
    (allow process default_t ( dir ( open read getattr lock search ioctl add_name remove_name write )))
    (allow process default_t ( file ( getattr read write append ioctl lock map open create )))
    (allow process default_t ( sock_file ( getattr read write append open )))
    ; This is our new stuff.
    (allow process tun_tap_device_t ( chr_file ( ioctl open read write )))
    (allow process self ( netlink_route_socket ( nlmsg_write )))
)
Now we re-enable SELinux, reload the module, and check that the container works correctly when we specify our custom domain:
$ sudo setenforce 1
$ sudo semodule -r ovpn_container
$ sudo semodule -i ovpn_container.cil /usr/share/udica/templates/base_container.cil
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z --security-opt label=type:ovpn_container.process peque/vpn
<...>
Initialization Sequence Completed
Finally, check that other containers still do not have these privileges:
$ podman run -it --cap-add NET_ADMIN --device /dev/net/tun -v $PWD:/vpn:Z peque/vpn
<...>
ERROR: Cannot open TUN/TAP dev /dev/net/tun: Permission denied (errno=13)
Yay! SELinux stays on, and the tunnel configuration is allowed only for our specific container.

ssh tunnel script hangs forever on beanstalk deployment

I'm attempting to create an SSH tunnel when deploying an application to AWS Elastic Beanstalk. I want the tunnel to run as a background process that is always connected once the application deploys. The script hangs forever during deployment and I can't see why.
"/home/ec2-user/eclair-ssh-tunnel.sh":
mode: "000500" # u+rx
owner: root
group: root
content: |
cd /root
eval $(ssh-agent -s)
DISPLAY=":0.0" SSH_ASKPASS="./askpass_script" ssh-add eclair-test-key </dev/null
# we want this command to keep running in the backgriund
# so we add & at then end
nohup ssh -L 48682:localhost:8080 ubuntu#[host...] -N &
and here is the output I'm getting from /var/log/eb-activity.log:
[2019-06-14T14:53:23.268Z] INFO [15615] - [Application update suredbits-api-root-0.37.0-testnet-ssh-tunnel-fix-port-9#30/AppDeployStage1/AppDeployPostHook/01_eclair-ssh-tunnel.sh] : Starting activity...
The ssh tunnel is spawned, and I can find it by doing:
[ec2-user@ip-172-31-25-154 ~]$ ps aux | grep 48682
root 16047 0.0 0.0 175560 6704 ? S 14:53 0:00 ssh -L 48682:localhost:8080 ubuntu@ec2-34-221-186-19.us-west-2.compute.amazonaws.com -N
If I kill that process, the deployment continues as expected, which indicates that the bug is in the tunnel script. I can't seem to find out where though.
You need to add the -n option to ssh when running it in the background, so that it does not try to read from stdin.
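In other words, the last line of the script would become (host placeholder kept from the original):
nohup ssh -n -L 48682:localhost:8080 ubuntu@[host...] -N &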

How to increase ulimit in Docker in Elastic Beanstalk?

I would like to increase ulimit in Docker in Elastic Beanstalk to run some apps.
I know that I need to increase the ulimit of the Docker host and restart the Docker service, but I cannot find a way to do it.
I wrote the following .ebextensions/01limits.config, but it still does not increase the ulimit.
commands:
  01limits:
    command: echo -e "#commands\nroot soft nofile 65536\nroot hard nofile 65536\n* soft nofile 65536\n* hard nofile 65536" >> /etc/security/limits.conf
  02restartdocker:
    command: service docker restart
ADDED 2014-11-20 09:37 GMT
I also tried the following config file:
commands:
  01limits:
    command: echo -e "#commands\nroot soft nofile 65536\nroot hard nofile 65536\n* soft nofile 65536\n* hard nofile 65536" >> /etc/security/limits.conf
  02restartdocker:
    command: service docker stop && ulimit -a 65536 && service docker start
It successfully increased the ulimit, but showed the following error in the management console:
[Instance: i-xxxxxxxx Module: AWSEBAutoScalingGroup ConfigSet: null] Command failed on instance. Return code: 1 Output: [CMD-AppDeploy/AppDeployStage1/AppDeployEnactHook/00flip.sh] command failed with error code 1: /opt/elasticbeanstalk/hooks/appdeploy/enact/00flip.sh Stopping nginx: [ OK ]
Starting nginx: [ OK ]
Stopping current app container: 1c**********... Error response from daemon: Cannot destroy container 1c**********: Driver devicemapper failed to remove root filesystem 1c**************************************************************: Device is Busy 2014/11/20 09:06:36 Error: failed to remove one or more containers.
I am not sure this config is suitable.
This is way late, but here's a (hopefully) working solution to your issue:
initctl stop eb-docker && /sbin/service docker stop && ulimit -n 65536 && ulimit -c unlimited && export DMAP=$(df | grep /var/lib | awk '{print $1}') && if [[ $DMAP ]]; then umount $DMAP; fi && /sbin/service docker start && initctl start eb-docker
The explanation is as follows:
Stop docker container service
Stop docker service
Apply your ulimit settings (mine are different from yours)
Find offending devicemapper mount and unmount it (handles if none are mounted)
Start docker
Start docker container service
It's not an elegant solution, but such is the life of hacking around EB.
You might want to break it out into smaller components but the general idea is there.
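For reference, a hedged sketch of how that one-liner could sit in the existing .ebextensions approach; keeping everything in a single command block matters, because ulimit only affects the shell it runs in (the config key names here are assumptions):
commands:
  01_raise_docker_ulimits:
    command: |
      initctl stop eb-docker &&
      /sbin/service docker stop &&
      ulimit -n 65536 &&
      ulimit -c unlimited &&
      export DMAP=$(df | grep /var/lib | awk '{print $1}') &&
      if [[ $DMAP ]]; then umount $DMAP; fi &&
      /sbin/service docker start &&
      initctl start eb-docker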
I was running a version of apachectl that was trying to change the file limit, and it was failing and causing an error with the container.
But, eventually, I ran the ulimit command from inside the container:
root@22806b77a474:/home# ulimit
unlimited
It seems apachectl was trying to raise a limit that isn't actually there in the docker container. I set ULIMIT_MAX_FILES to something that didn't cause a problem:
RUN sed -i 's/ULIMIT_MAX_FILES="${APACHE_ULIMIT_MAX_FILES:-ulimit -n 8192}"/ULIMIT_MAX_FILES="ulimit -H -n"/' /usr/sbin/apachectl