Can't connect to PostgreSQL from a container on the same host - Django
I am using Docker to manage my Django apps, and I have the same configuration on my laptop and on DigitalOcean.
From my laptop I can connect to PostgreSQL through the Adminer image (https://hub.docker.com/_/adminer).
But if I try to connect to PostgreSQL from Adminer on localhost, I can't.
I can ping and find PostgreSQL from the Django container,
but I can't migrate my database from the Django scripts.
Funnily enough, I can run the migration against the DigitalOcean database from my laptop,
and I can see the updated database in my laptop's Adminer page.
So the issue is obviously a networking issue between the containers... but if I can ping the service, why can't Django access it?
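(For reference, ping only exercises ICMP, so a successful ping says nothing about whether TCP port 5432 itself is reachable. A minimal check from inside the django container — assuming the image ships bash and coreutils, and using the PostgreSQL container IP that appears in the inspect output below:)
# ICMP reachability only
docker exec django ping -c 1 172.17.0.4
# TCP check against the actual PostgreSQL port
docker exec django bash -c 'timeout 3 bash -c "</dev/tcp/172.17.0.4/5432" && echo "5432 open" || echo "5432 filtered/closed"'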
EDIT:
1° ip route :
ip route
default via 167.99.80.1 dev eth0 proto static
10.16.0.0/16 dev eth0 proto kernel scope link src 10.16.0.5
10.106.0.0/20 dev eth1 proto kernel scope link src 10.106.0.2
167.99.80.0/20 dev eth0 proto kernel scope link src 167.99.94.16
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev br-ec478ce025ee proto kernel scope link src 172.18.0.1 linkdown
2° iptables -S
root@docker-s-1vcpu-1gb-lon1-01:~# apt install iptables
Reading package lists... Done
Building dependency tree
Reading state information... Done
iptables is already the newest version (1.8.4-3ubuntu2).
iptables set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 34 not upgraded.
root@docker-s-1vcpu-1gb-lon1-01:~# iptables -S
-P INPUT DROP
-P FORWARD DROP
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION-STAGE-1
-N DOCKER-ISOLATION-STAGE-2
-N DOCKER-USER
-N ufw-after-forward
-N ufw-after-input
-N ufw-after-logging-forward
-N ufw-after-logging-input
-N ufw-after-logging-output
-N ufw-after-output
-N ufw-before-forward
-N ufw-before-input
-N ufw-before-logging-forward
-N ufw-before-logging-input
-N ufw-before-logging-output
-N ufw-before-output
-N ufw-logging-allow
-N ufw-logging-deny
-N ufw-not-local
-N ufw-reject-forward
-N ufw-reject-input
-N ufw-reject-output
-N ufw-skip-to-policy-forward
-N ufw-skip-to-policy-input
-N ufw-skip-to-policy-output
-N ufw-track-forward
-N ufw-track-input
-N ufw-track-output
-N ufw-user-forward
-N ufw-user-input
-N ufw-user-limit
-N ufw-user-limit-accept
-N ufw-user-logging-forward
-N ufw-user-logging-input
-N ufw-user-logging-output
-N ufw-user-output
-A INPUT -j ufw-before-logging-input
-A INPUT -j ufw-before-input
-A INPUT -j ufw-after-input
-A INPUT -j ufw-after-logging-input
-A INPUT -j ufw-reject-input
-A INPUT -j ufw-track-input
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-ec478ce025ee -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-ec478ce025ee -j DOCKER
-A FORWARD -i br-ec478ce025ee ! -o br-ec478ce025ee -j ACCEPT
-A FORWARD -i br-ec478ce025ee -o br-ec478ce025ee -j ACCEPT
-A FORWARD -j ufw-before-logging-forward
-A FORWARD -j ufw-before-forward
-A FORWARD -j ufw-after-forward
-A FORWARD -j ufw-after-logging-forward
-A FORWARD -j ufw-reject-forward
-A FORWARD -j ufw-track-forward
-A OUTPUT -j ufw-before-logging-output
-A OUTPUT -j ufw-before-output
-A OUTPUT -j ufw-after-output
-A OUTPUT -j ufw-after-logging-output
-A OUTPUT -j ufw-reject-output
-A OUTPUT -j ufw-track-output
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5432 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8000 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 9000 -j ACCEPT
-A DOCKER -d 172.17.0.5/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i br-ec478ce025ee ! -o br-ec478ce025ee -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o br-ec478ce025ee -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A ufw-after-input -p udp -m udp --dport 137 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 138 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 139 -j ufw-skip-to-policy-input
-A ufw-after-input -p tcp -m tcp --dport 445 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 67 -j ufw-skip-to-policy-input
-A ufw-after-input -p udp -m udp --dport 68 -j ufw-skip-to-policy-input
-A ufw-after-input -m addrtype --dst-type BROADCAST -j ufw-skip-to-policy-input
-A ufw-after-logging-input -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-before-forward -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-forward -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-forward -j ufw-user-forward
-A ufw-before-input -i lo -j ACCEPT
-A ufw-before-input -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-input -m conntrack --ctstate INVALID -j ufw-logging-deny
-A ufw-before-input -m conntrack --ctstate INVALID -j DROP
-A ufw-before-input -p icmp -m icmp --icmp-type 3 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 11 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 12 -j ACCEPT
-A ufw-before-input -p icmp -m icmp --icmp-type 8 -j ACCEPT
-A ufw-before-input -p udp -m udp --sport 67 --dport 68 -j ACCEPT
-A ufw-before-input -j ufw-not-local
-A ufw-before-input -d 224.0.0.251/32 -p udp -m udp --dport 5353 -j ACCEPT
-A ufw-before-input -d 239.255.255.250/32 -p udp -m udp --dport 1900 -j ACCEPT
-A ufw-before-input -j ufw-user-input
-A ufw-before-output -o lo -j ACCEPT
-A ufw-before-output -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A ufw-before-output -j ufw-user-output
-A ufw-logging-allow -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW ALLOW] "
-A ufw-logging-deny -m conntrack --ctstate INVALID -m limit --limit 3/min --limit-burst 10 -j RETURN
-A ufw-logging-deny -m limit --limit 3/min --limit-burst 10 -j LOG --log-prefix "[UFW BLOCK] "
-A ufw-not-local -m addrtype --dst-type LOCAL -j RETURN
-A ufw-not-local -m addrtype --dst-type MULTICAST -j RETURN
-A ufw-not-local -m addrtype --dst-type BROADCAST -j RETURN
-A ufw-not-local -m limit --limit 3/min --limit-burst 10 -j ufw-logging-deny
-A ufw-not-local -j DROP
-A ufw-skip-to-policy-forward -j ACCEPT
-A ufw-skip-to-policy-input -j DROP
-A ufw-skip-to-policy-output -j ACCEPT
-A ufw-track-forward -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-forward -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p tcp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-track-output -p udp -m conntrack --ctstate NEW -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -m recent --set --name DEFAULT --mask 255.255.255.255 --rsource
-A ufw-user-input -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -m recent --update --seconds 30 --hitcount 6 --name DEFAULT --mask 255.255.255.255 --rsource -j ufw-user-limit
-A ufw-user-input -p tcp -m tcp --dport 22 -j ufw-user-limit-accept
-A ufw-user-input -p tcp -m tcp --dport 2375 -j ACCEPT
-A ufw-user-input -p tcp -m tcp --dport 2376 -j ACCEPT
-A ufw-user-limit -m limit --limit 3/min -j LOG --log-prefix "[UFW LIMIT BLOCK] "
-A ufw-user-limit -j REJECT --reject-with icmp-port-unreachable
-A ufw-user-limit-accept -j ACCEPT
root@docker-s-1vcpu-1gb-lon1-01:~#
3° docker inspect
root@docker-s-1vcpu-1gb-lon1-01:~# docker inspect django | tail -n51
"NetworkSettings": {
"Bridge": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "56733"
}
],
"9000/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "9000"
}
]
},
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:02",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "8eddd72be1915a2d0f5eb1a4812271debc4e4eca103800ede3511f3f4c56ae98",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
}
}
}
]
root@docker-s-1vcpu-1gb-lon1-01:~# docker inspect nginx
[
{
"RepoTags": [
"nginx:latest"
],
"RepoDigests": [
],
"Parent": "",
"Comment": "",
"Created": "2020-11-18T07:48:35.319575714Z",
"Container": "7e8ca989e54001b9955974e36eb6d679ab4fe015066014645ef927fe88c326ec",
"ContainerConfig": {
"Hostname": "7e8ca989e540",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.4",
"NJS_VERSION=0.4.4",
"PKG_RELEASE=1~buster"
],
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"CMD [\"nginx\" \"-g\" \"daemon off;\"]"
],
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint#nginx.com>"
},
"StopSignal": "SIGTERM"
},
"DockerVersion": "19.03.12",
"Author": "",
"Config": {
"Hostname": "",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"80/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.4",
"NJS_VERSION=0.4.4",
"PKG_RELEASE=1~buster"
],
"Cmd": [
"nginx",
"-g",
"daemon off;"
],
"Volumes": null,
"WorkingDir": "",
"Entrypoint": [
"/docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {
"maintainer": "NGINX Docker Maintainers <docker-maint#nginx.com>"
},
"StopSignal": "SIGTERM"
},
"Architecture": "amd64",
"Os": "linux",
"Size": 132890123,
"VirtualSize": 132890123,
"GraphDriver": {
"Data": {
},
"Name": "overlay2"
},
"RootFS": {
"Type": "layers",
"Layers": [
]
},
"Metadata": {
"LastTagTime": "0001-01-01T00:00:00Z"
}
}
]
root@docker-s-1vcpu-1gb-lon1-01:~# docker inspect postgreSQL
[
{
"Id": "c0e06b4a1fa410d0344e7b40fbc7b78308f70638affa65266357c8346570bf4e",
"Created": "2020-11-25T11:54:28.352080019Z",
"Path": "docker-entrypoint.sh",
"Args": [
"-c",
"config_file=/etc/postgresql/postgresql.conf"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 437388,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-11-25T11:54:28.93246511Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"ResolvConfPath": "Name": "/postgreSQL",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "docker-default",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/root/babymusic_django_server/postgreSql/appdata:/var/lib/postgresql/data/pgdata",
"/root/babymusic_django_server/postgreSql/my-postgres.conf:/etc/postgresql/postgresql.conf"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"5432/tcp": [
{
"HostIp": "",
"HostPort": "5432"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DeviceRequests": null,
"KernelMemory": 0,
"KernelMemoryTCP": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": null,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/asound",
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"
],
"ReadonlyPaths": [
"/proc/bus",
"/proc/fs",
"/proc/irq",
"/proc/sys",
"/proc/sysrq-trigger"
]
},
"GraphDriver": {
"Data": {
"LowerDir":
},
"Name": "overlay2"
},
"Config": {
"Hostname": "c0e06b4a1fa4",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"5432/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"POSTGRES_USER=romain",
"POSTGRES_DB=baby_music",
"PGDATA=/var/lib/postgresql/data/pgdata",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/lib/postgresql/13/bin",
"GOSU_VERSION=1.12",
"LANG=en_US.utf8",
"PG_MAJOR=13",
"PG_VERSION=13.1-1.pgdg100+1"
],
"Cmd": [
"-c",
"config_file=/etc/postgresql/postgresql.conf"
],
"Image": "postgres:13",
"Volumes": {
"/var/lib/postgresql/data": {}
},
"WorkingDir": "",
"Entrypoint": [
"docker-entrypoint.sh"
],
"OnBuild": null,
"Labels": {},
"StopSignal": "SIGINT"
},
"NetworkSettings": {
"Bridge": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"5432/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "5432"
}
]
},
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:04",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "8eddd72be1915a2d0f5eb1a4812271debc4e4eca103800ede3511f3f4c56ae98",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.4",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:04",
"DriverOpts": null
}
}
}
}
]
root@docker-s-1vcpu-1gb-lon1-01:~#
root@docker-s-1vcpu-1gb-lon1-01:~# docker inspect PostGresqlAdmin
[
{
"Path": "entrypoint.sh",
"Args": [
"docker-php-entrypoint",
"php",
"-S",
"[::]:8080",
"-t",
"/var/www/html"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 454939,
"ExitCode": 0,
"Error": "",
"StartedAt": "2020-11-25T21:00:16.349310968Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "default",
"PortBindings": {
"8080/tcp": [
{
"HostIp": "",
"HostPort": "8081"
}
]
},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
},
"Mounts": [],
"Config": {
"Hostname": "4c76998dc75a",
"Domainname": "",
"User": "adminer",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"8080/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"NetworkSettings": {
"Bridge": "",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"8080/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "8081"
}
]
},
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"MacAddress": "02:42:ac:11:00:05",
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "8eddd72be1915a2d0f5eb1a4812271debc4e4eca103800ede3511f3f4c56ae98",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:05",
"DriverOpts": null
}
}
}
}
]
EDIT 2
Check IPv4 forwarding:
root@docker-s-1vcpu-1gb-lon1-01:~# sysctl net.ipv4.conf.all.forwarding
net.ipv4.conf.all.forwarding = 1
Set the FORWARD policy to ACCEPT:
root@docker-s-1vcpu-1gb-lon1-01:~# sudo iptables -P FORWARD ACCEPT
root@docker-s-1vcpu-1gb-lon1-01:~# iptables -S | grep FORWARD
-P FORWARD ACCEPT
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o br-ec478ce025ee -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-ec478ce025ee -j DOCKER
-A FORWARD -i br-ec478ce025ee ! -o br-ec478ce025ee -j ACCEPT
-A FORWARD -i br-ec478ce025ee -o br-ec478ce025ee -j ACCEPT
-A FORWARD -j ufw-before-logging-forward
-A FORWARD -j ufw-before-forward
-A FORWARD -j ufw-after-forward
-A FORWARD -j ufw-after-logging-forward
-A FORWARD -j ufw-reject-forward
-A FORWARD -j ufw-track-forward
Using the PostgreSQL container IP 172.17.0.4 to access the database:
it works!
Running with --net=host gives a warning and still does not give access to the db.
If you look at the nmap output you can see the port is reported as filtered. That means one of these:
The routing from the Docker network (typically 172.17.0.0/16) is not set up correctly;
each container is running its own separate network with an overlapping subnet, which prevents packets from routing back correctly;
or there is a packet filter (iptables) which prevents packets from reaching the destination.
What I need in addition to debug the issue is the route table (ip route), the packet filter output (iptables -S), and docker inspect for each container.
Update:
These are the potential problems that I see:
Fix the current setup:
You have -P FORWARD DROP in your iptables; this blocks the access. Use sudo iptables -P FORWARD ACCEPT to allow it.
Please also check sysctl net.ipv4.conf.all.forwarding; it should be set to 1. If it isn't, edit /etc/sysctl.conf to fix that and reload the settings with sysctl -p (both fixes are sketched below).
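(A minimal sketch of those two fixes as shell commands; it assumes /etc/sysctl.conf is where you keep such settings:)
sudo iptables -P FORWARD ACCEPT                                          # let forwarded traffic fall through to Docker's own rules
sudo sysctl -w net.ipv4.conf.all.forwarding=1                            # enable kernel forwarding immediately
echo 'net.ipv4.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.conf   # persist the setting across reboots
sudo sysctl -p                                                           # reload /etc/sysctl.conf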
OR
Alternatively, you can use the PostgreSQL container IP 172.17.0.4 to access the database.
Another option is to run PostgreSQL with --net=host; then you should be able to get around the iptables rules.
Alternatively, you can connect your app to the PostgreSQL container's network namespace by specifying --net=container:<postgresql_container_name> and use localhost to access the database.
You can also create a separate network in Docker and run all the containers there, so you are able to reach anything from anywhere without routing through your host IP (see the sketch after this list).
There are probably a few other ways to achieve this, but I leave it to you to figure them out :)
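(A sketch of the separate-network option, using the container names from the question; app_net is just an example name:)
docker network create app_net                   # user-defined bridge network
docker network connect app_net postgreSQL       # attach the already-running containers
docker network connect app_net django
docker network connect app_net PostGresqlAdmin
# On a user-defined network, containers resolve each other by name, so Django's database
# HOST can simply be the container name postgreSQL on port 5432, with no host IP involved.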
Update 2:
-P INPUT DROP is also an issue; use this to fix it: sudo iptables -P INPUT ACCEPT
If you choose the first option and fix your current settings, make sure the iptables changes are persisted, otherwise you'll lose them on reboot. Consult your Linux distro's manual to figure out how to do that.
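(The host appears to be Ubuntu, judging by the apt output above, so one common way to persist the rules is the iptables-persistent package — a sketch, assuming the current live rules are the ones you want to keep. Since ufw is also active on this host, its defaults in /etc/default/ufw, e.g. DEFAULT_FORWARD_POLICY, may be worth aligning as well:)
sudo apt install iptables-persistent       # offers to save the current IPv4/IPv6 rules during install
sudo netfilter-persistent save             # re-save after any later change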