ufw blocks Docker container from connecting to Postgres - Django

On Ubuntu 18.04 with ufw enabled, I run a Docker container that is supposed to connect a Django app to a locally installed PostgreSQL server.
Everything runs perfectly when ufw is disabled:
docker-compose -f docker-compose.prod.yml run --rm app sh -c 'python manage.py createsuperuser'
But with ufw enabled I get the following error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Operation timed out
Is the server running on host "host.docker.internal" (172.17.0.1) and accepting
TCP/IP connections on port 5432?
I have the following ufw rules:
$ sudo ufw status
Status: active
To                         Action      From
--                         ------      ----
Nginx Full                 ALLOW       Anywhere
OpenSSH                    ALLOW       Anywhere
20/tcp                     ALLOW       Anywhere
21/tcp                     ALLOW       Anywhere
990/tcp                    ALLOW       Anywhere
40000:50000/tcp            ALLOW       Anywhere
Nginx Full (v6)            ALLOW       Anywhere (v6)
OpenSSH (v6)               ALLOW       Anywhere (v6)
20/tcp (v6)                ALLOW       Anywhere (v6)
21/tcp (v6)                ALLOW       Anywhere (v6)
990/tcp (v6)               ALLOW       Anywhere (v6)
40000:50000/tcp (v6)       ALLOW       Anywhere (v6)
How do I configure ufw properly so that the container can connect to Postgres?

Your firewall is blocking the connection from the Docker container since it originates on another network.
To fix this you should enable access from that Docker network to your Postgres instance (I assume it's on port 5432).
When you use docker-compose up, a specific Docker network is created. You can list the networks with:
docker network ls
When you locate your network, use docker network inspect {network name} to get additional information about it. The information you are looking for is the network's gateway. The relevant portion of the output should look something like this:
...
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.18.0.0/16",
"Gateway": "172.18.0.1"
}
]
},
...
In this example, the IP you are looking for is 172.18.0.1.
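If you prefer to pull the gateway out directly, Docker's --format flag can extract it (a sketch; replace {network name} with the name from docker network ls):
docker network inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' {network name}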
So now you know which interface your container uses to connect to Postgres, and you can allow it in your firewall. Since you don't want to open your firewall to everybody, you can use something like this:
ufw allow in from 172.18.0.0/16
This allows the entire network rather than just specific IPs, which is useful since containers can change IPs when they are restarted.
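If you want to keep the rule as narrow as possible, ufw can also restrict it to the Postgres port alone (a minimal sketch, assuming Postgres listens on 5432 and the subnet matches your docker network inspect output):
sudo ufw allow in from 172.18.0.0/16 to any port 5432 proto tcp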

Related

EC2 docker container nginx port outside access issue

I am running nginx in a Docker container on an EC2 instance. I started nginx using docker run --name mynginx1 -p 80:80 -d nginx and can access it via curl http://localhost from inside the EC2 instance. But when I try to access it from outside through my browser, my request always times out. I have set the security rules on my EC2 instance to allow all traffic, all protocols, from any IP address for the purpose of testing.
I have verified that nginx is listening on all interfaces using ss -tuln | grep 80:
tcp LISTEN 0 4096 0.0.0.0:80 0.0.0.0:*
tcp LISTEN 0 4096 [::]:80 [::]:*
Any ideas?
Note: When I install nginx on EC2 directly and run it using sudo systemctl start nginx, I am able to go to http://<ec2_dns> and see the nginx welcome page. So I believe this is an issue specific to running docker containers on EC2 and not a problem with the instance security rules.
Edit 1: Subnet network ACLs inbound rules are as follows:
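A check worth running on the instance itself is whether Docker actually published the mapping to the host (mynginx1 is the container name from the docker run command above; the iptables check assumes Docker is managing iptables, which is its default behavior):
docker port mynginx1
sudo iptables -t nat -L DOCKER -n
If docker port prints 80/tcp -> 0.0.0.0:80, the mapping exists and the problem lies outside the instance.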

How to open port 80 on AWS EC2

I want to open port 80 to allow HTTP connections on my EC2 server. But when I enter "telnet xx.xx.xx.xx 80" in a terminal, the following is displayed:
"Trying xx.xx.xx.xx..."
telnet: Unable to connect to remote host: Connection timed out
In AWS I've opened port 80 by defining an Inbound Rule on the Security group (only one security group is defined for this EC2 server)
I'm using the Public IPv4 address to make a telnet connection
I noticed you have a fresh install -- fresh installs do not have software listening over HTTP by default.
If there is no application listening on a port, incoming packets to that port will simply be rejected by the computer's operating system. Ports can be "closed" through the use of a firewall, but you have disabled yours, so the ports are open yet unresponsive, which makes them appear closed.
If the port is enabled in the firewall in the terminal using
sudo apt-get install ufw
sudo ufw allow ssh
sudo ufw allow https
sudo ufw allow http
sudo reboot
and enabled in the AWS console as a rule, then the port is open and just not responsive, so it is seen as closed. By installing nginx or anything else that binds to port 80, external requests to that port will connect successfully, and the port will therefore be recognized as open. The reason SSH is recognized as open is that 1. it is allowed through the firewall, and 2. sshd is always listening (unlike port 80!).
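To see the same effect without installing a full web server, any process that binds port 80 will do; for a quick test (assuming python3 is available on the instance):
sudo python3 -m http.server 80
While that is running, an external port check should report 80 as open.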
Before installing nginx, even though the ports are allowed through the firewall:
sudo apt-get install nginx
sudo ufw allow 'Nginx HTTP'
sudo systemctl status nginx
(more nginx info)
After:
Simple port tester tool here

Connection refused error when using Stripe webhooks

I'm constantly getting a connection refused error when trying to receive webhooks.
(venv) alexa@main:/etc/nginx/sites-available$ stripe listen --forward-to localhost:5000/webhook/
go package net: built with netgo build tag; using Go's DNS resolver
> Ready! Your webhook signing secret is whsec_************************* (^C to quit)
2021-04-05 18:13:03 --> customer.subscription.updated [evt_1Icwv5HrsuAsSZROjKy4Z5CK]
2021-04-05 18:13:03 [ERROR] Failed to POST: Post "http://localhost:5000/webhook/": dial tcp 127.0.0.1:5000: connect: connection refused
The port is enabled through my firewall:
To                         Action      From
--                         ------      ----
5000                       ALLOW       Anywhere
5000/tcp                   ALLOW       Anywhere
22/tcp                     ALLOW       Anywhere
80/tcp                     ALLOW       Anywhere
443/tcp                    ALLOW       Anywhere
5555                       ALLOW       Anywhere
5000 (v6)                  ALLOW       Anywhere (v6)
5000/tcp (v6)              ALLOW       Anywhere (v6)
22/tcp (v6)                ALLOW       Anywhere (v6)
80/tcp (v6)                ALLOW       Anywhere (v6)
443/tcp (v6)               ALLOW       Anywhere (v6)
5555 (v6)                  ALLOW       Anywhere (v6)
My webapp is running on Ubuntu 20.10
After running curl -v -X POST http://localhost:5000/webhook/, as suggested by Justin Michael in the comments, I got the following:
* Trying 127.0.0.1:5000...
* TCP_NODELAY set
* connect to 127.0.0.1 port 5000 failed: Connection refused
* Failed to connect to localhost port 5000: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 5000: Connection refused
Based on your latest comment it sounds like you have Stripe CLI running on your local machine and you're trying to use it to forward Stripe Events to the code running on your Linode.
Stripe CLI is designed for local testing only, and while forwarding from your local machine to your Linode can probably work, it's not recommended.
The best approach here would be to set up an actual webhook endpoint in your Stripe Dashboard or create one using the Stripe API and point it to your Linode.
Alternatively you could install Stripe CLI on the Linode itself and forward locally there, but the actual webhook endpoint will be a better way to test as you'll get actual webhook endpoint behavior, such as retries.
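For reference, creating an endpoint through the Stripe API looks roughly like this (a sketch; the key, URL, and event name are placeholders to replace with your own values):
curl https://api.stripe.com/v1/webhook_endpoints \
  -u "sk_test_YOUR_SECRET_KEY:" \
  -d url="https://your-linode-domain.example/webhook/" \
  -d "enabled_events[]=customer.subscription.updated"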

Could not access my application on DigitalOcean through its public IP?

I'm facing an issue with a ufw firewall rule on an Ubuntu droplet in DigitalOcean. I already allow the port in from anywhere, but I still cannot access my application at the public IP on the port that I allowed. How do I set up the ufw firewall rule for my application in a secure way? Thank you.
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip
To                         Action      From
--                         ------      ----
22/tcp                     ALLOW IN    Anywhere
80/tcp                     ALLOW IN    Anywhere
443/tcp                    ALLOW IN    Anywhere
Anywhere                   ALLOW IN    134.238.123.131
8080                       ALLOW IN    Anywhere
3000                       ALLOW IN    Anywhere
22/tcp (v6)                ALLOW IN    Anywhere (v6)
80/tcp (v6)                ALLOW IN    Anywhere (v6)
443/tcp (v6)               ALLOW IN    Anywhere (v6)
8080 (v6)                  ALLOW IN    Anywhere (v6)
3000 (v6)                  ALLOW IN    Anywhere (v6)
The problem is that I cannot access my MERN stack application via the public IP 134.238.123.131:3000, even though I was able to access it before. It shows "site could not be reached" as below:
This site can’t be reached
104.248.153.121 took too long to respond.
Try:
Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_TIMED_OUT
ufw firewall rule status
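One check that often narrows this down is confirming, on the droplet itself, that the app is listening on all interfaces rather than only localhost (a diagnostic sketch; 3000 is the port from the rules above):
ss -tlnp | grep 3000
If the output shows 127.0.0.1:3000 rather than 0.0.0.0:3000, the firewall rules are fine and the app needs to bind 0.0.0.0.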

Deploying Docker to AWS Elastic Beanstalk -- how to forward port to host? (port binding)

I have a project set up with CircleCI that I am using to auto-deploy to Elastic Beanstalk. My EBS environment is a single container, auto-scaling, web environment. I am trying to run a service that listens on raw socket port 8080.
My Dockerfile:
FROM golang:1.4.2
...
EXPOSE 8080
My Dockerrun.aws.json.template:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "<bucket>",
        "Key": "<key>"
    },
    "Image": {
        "Name": "project/hello:<TAG>",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8080"
        }
    ]
}
I have made sure to expose port 8080 on the "role" assigned to my project environment.
I used the exact deployment script from the CircleCI tutorial linked above (except with changed names).
Within the EC2 instance that is running my EBS application, I can see that the Docker container has run successfully, except that Docker did not forward the exposed port to the host. I have encountered this in the past when I ran docker run ... without the -P flag.
Here is an example session after SSH-ing into the machine:
[ec2-user@ip-xxx-xx-xx-xx ~]$ sudo docker ps
CONTAINER ID   IMAGE                              COMMAND                CREATED      STATUS      PORTS      NAMES
a036bb061aea   aws_beanstalk/staging-app:latest   "/bin/sh -c 'go run    3 days ago   Up 3 days   8080/tcp   boring_hoover
[ec2-user@ip-xxx-xx-xx-xx ~]$ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
What I expect to see is a ->8080 mapping in the PORTS column, showing that the container port is forwarded onto the host.
When I do docker inspect on my container, I also see that these two configurations are not what I want:
"PortBindings": {},
"PublishAllPorts": false,
How can I trigger a port binding in my application?
Thanks in advance.
It turns out I had a misunderstanding of how Docker's networking stack works. When a port is exposed but not published, it is still reachable on the local interface through the Docker container's private IP address. You can obtain this IP address by checking docker inspect <container>.
Rather than doing curl localhost:8080 I could do curl <containerIP>:8080.
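For reference, the container's IP can be pulled out directly with a format template (<container> is the container ID or name from docker ps):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container>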
In my EBS deploy, nginx was automatically set up to forward (HTTP) traffic from port 80 to this internal private port as well.
I had the same problem in a Rails container (port 3000 using Puma). By default, rails server binds only localhost as the listening interface; I had to use the -b option to bind 0.0.0.0, and that solved the problem.
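The fix looks like this (the -p flag is optional if 3000 is already the default port):
rails server -b 0.0.0.0 -p 3000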
In React I did not have the same problem, because the npm serve package binds all interfaces by default.