AWS Port in Security Group but Can't Connect - amazon-web-services

I have a Security Group with inbound rules for ports 80, 443, 22, and 8089:
Port   Protocol  Source
22     tcp       0.0.0.0/0
8089   tcp       0.0.0.0/0
80     tcp       0.0.0.0/0
443    tcp       0.0.0.0/0
However, when I test the connection using a Python program I wrote:
import socket
import sys

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
p = sys.argv[1]
try:
    s.connect(('public-dns', int(p)))
    print('Port ' + str(p) + ' is reachable')
except socket.error as e:
    print('Error on connect: %s' % e)
s.close()
All ports are reachable except 8089:
python test.py 80
Port 80 is reachable
python test.py 22
Port 22 is reachable
python test.py 443
Port 443 is reachable
python test.py 8089
Error on connect: [Errno 61] Connection refused

You can connect successfully via localhost (127.0.0.1) but not externally because your server application is listening only on the loopback interface. That means only connections originating from the instance itself can reach that process.
To correct this, configure your application to listen either on the local IP address of the interface or on all interfaces (0.0.0.0).
This shows the wrong configuration (listening only on 127.0.0.1):
~ $ sudo netstat -tulpn | grep 9966
tcp 0 0 127.0.0.1:9966 0.0.0.0:* LISTEN 4961/python
Here it is working correctly (listening on all interfaces):
~ $ sudo netstat -tulpn | grep 9966
tcp 0 0 0.0.0.0:9966 0.0.0.0:* LISTEN 5205/python
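As a minimal sketch (not the asker's actual service), the only thing that has to change is the address the server socket binds to; 0.0.0.0 accepts external connections, 127.0.0.1 does not:

import socket

# Binding to all interfaces lets external clients connect.
# Binding to '127.0.0.1' instead reproduces the "connection refused" symptom
# for any client that is not on the instance itself.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(('0.0.0.0', 8089))   # 8089 is the port from the question
srv.listen(5)
print('Listening on %s:%d' % srv.getsockname())
conn, addr = srv.accept()     # blocks until the remote test script connects
print('Connection from %s:%d' % addr)
conn.close()
srv.close()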

Besides the AWS security group (which you appear to have configured correctly), also make sure that any firewall on the host itself allows the specified ports (see the quick checks below).
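For example, on a typical Linux host you can quickly check whether a host firewall is involved with something like the following (assuming iptables/ufw are in use; adjust for firewalld or nftables):
sudo iptables -L -n | grep 8089
sudo ufw status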

Related

502 Bad Gateway error on fastapi app hosted on EC2 instance + ELB

I have a FastAPI app hosted on an EC2 instance, with an ELB securing the endpoints using SSL.
The app runs using the following docker-compose.yml file:
version: '3.8'
services:
  fastapi:
    build: .
    ports:
      - 8000:8000
    command: uvicorn app.main:app --host 0.0.0.0 --reload
    volumes:
      - .:/kwept
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - redis
  worker:
    build: .
    command: celery worker --app=app.celery_worker.celery --loglevel=info --logfile=app/logs/celery.log
    volumes:
      - .:/kwept
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - fastapi
      - redis
  redis:
    image: redis:6-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
volumes:
  redis_data:
Until Friday evening, the ELB endpoint was working absolutely fine and I could use it. But since this morning I have suddenly started getting a 502 Bad Gateway error. I have made no changes to the code or to the settings on AWS.
The ELB listener settings on AWS:
The target group that is connected to the EC2 instance
When I log into the EC2 instance & check the logs of the docker container that is running the fastapi app, I see the following:
These logs show that the app is starting correctly
I have not configured any health checks specifically. I just have the default settings
Output of netstat -ntlp
I have the logs on the ELB:
http 2022-07-21T06:47:12.458060Z app/dianee-tools-elb/de7eb044e99165db 162.142.125.221:44698 172.31.31.173:443 -1 -1 -1 502 - 41 277 "GET http://18.197.14.70:80/ HTTP/1.1" "-" - - arn:aws:elasticloadbalancing:eu-central-1:xxxxxxxxxx:targetgroup/dianee-tools/da8a30452001c361 "Root=1-62d8f670-711975100c6d9d4038d73544" "-" "-" 0 2022-07-21T06:47:12.457000Z "forward" "-" "-" "172.31.31.173:443" "-" "-" "-"
http 2022-07-21T06:47:12.655734Z app/dianee-tools-elb/de7eb044e99165db 162.142.125.221:43836 172.31.31.173:443 -1 -1 -1 502 - 158 277 "GET http://18.197.14.70:80/ HTTP/1.1" "Mozilla/5.0 (compatible; CensysInspect/1.1; +https://about.censys.io/)" - - arn:aws:elasticloadbalancing:eu-central-1:xxxxxxxxxx:targetgroup/dianee-tools/da8a30452001c361 "Root=1-62d8f670-5ceb74c8530832f859038ef6" "-" "-" 0 2022-07-21T06:47:12.654000Z "forward" "-" "-" "172.31.31.173:443" "-" "-" "-"
http 2022-07-21T06:47:12.949509Z app/dianee-tools-elb/de7eb044e99165db 162.142.125.221:48556 - -1 -1 -1 400 - 0 272 "- http://dianee-tools-elb-yyyyyy.eu-central-1.elb.amazonaws.com:80- -" "-" - - - "-" "-" "-" - 2022-07-21T06:47:12.852000Z "-" "-" "-" "-" "-" "-" "-"
I see you are using the EC2 launch type. I suggest getting a shell in the container and curling localhost on port 8000; it should return your application page. After that, check the same from the instance itself, since you have mapped the container to port 8000. If this also works, try changing the target group port to 8000, which is the port your application listens on. If the same setup works on other resources, it could be that you are hitting a redirect. If this doesn't help, fetch the full logs using https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-logs-collector.html
If your application is working on port 8000, you need to modify the target group to perform its health check there. Once the target group port is changed to 8000, the health check should go through.
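Before changing anything, it is worth confirming from the instance that the app really answers on port 8000. A minimal check using only the Python standard library (the URL and port are assumptions based on the compose file above):

import urllib.request

# Hit the FastAPI container directly, bypassing the ELB.
# If this succeeds while the ELB returns 502, the target group port or
# health check configuration is the likely culprit.
try:
    with urllib.request.urlopen('http://127.0.0.1:8000/', timeout=5) as resp:
        print(resp.status, resp.read(200))
except Exception as exc:
    print('Backend not reachable:', exc)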
what is "502 Bad Gateway"?
The HyperText Transfer Protocol (HTTP) 502 Bad Gateway server error response code indicates that the server, while acting as a gateway or proxy, received an invalid response from the upstream server.
Default HTTP/HTTPS ports:
http - port 80
https - port 443
From the docker-compose.yml file you are exposing port 8000, which is neither of these, so it will not work as-is.
Possible solutions
Using NGINX
Install NGINX and add this server config:
server {
    listen 80;
    listen 443 ssl;
    # ssl on;
    # ssl_certificate /etc/nginx/ssl/server.crt;
    # ssl_certificate_key /etc/nginx/ssl/server.key;
    # server_name <DOMAIN/IP>;
    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}
Changing the published port to 80 or 443 in the docker-compose.yml file (see the sketch below)
My suggestion is to use NGINX.
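If you go with the port-change option instead, only the host side of the port mapping needs to change; a sketch of the relevant part of docker-compose.yml, assuming the ELB forwards plain HTTP to port 80 on the instance:

services:
  fastapi:
    build: .
    ports:
      - 80:8000   # publish the container's port 8000 on host port 80
    command: uvicorn app.main:app --host 0.0.0.0 --reload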
Make sure you've set the Keep-Alive timeout of your web server (in your case uvicorn) to something greater than the AWS ALB idle timeout, whose default is 60s. That way the service doesn't close the HTTP keep-alive connection before the ALB does.
For uvicorn it will be: uvicorn app.main:app --host 0.0.0.0 --timeout-keep-alive=65

Django with postgreSQL DB on Docker - django.db.utils.OperationalError: could not connect to server: Connection refused

I am following this tutorial on 'Dockerizing' Django + PostgreSQL + gunicorn + nginx.
Relevant info
Host machine OS is Ubuntu 20.04 LTS
Docker-desktop: Docker version 20.10.16, build aa7e414
This is my setup so far:
(relevant portions of) settings.py
import os
from pathlib import Path
from decouple import config
import dj_database_url
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config('DEBUG', cast=bool)
ALLOWED_HOSTS = config('ALLOWED_HOSTS', cast=lambda v: [s.strip() for s in v.split(',')])
# Database
# https://docs.djangoproject.com/en/3.2/ref/settings/#databases
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': config('DATABASE_NAME'),
        'USER': config('DB_USER_NAME'),
        'PASSWORD': config('DB_USER_PWD'),
        'HOST': config('DB_HOST'),
        'PORT': config('DB_PORT'),
    }
}
# required for db to work on Heroku
db_from_env = dj_database_url.config(conn_max_age=600)
DATABASES['default'].update(db_from_env)
my-django-proj/my-django-proj/.env
ALLOWED_HOSTS=0.0.0.0,localhost,127.0.0.1,www.example.com,example.com,example.herokuapp.com
# Docker compose seems to need this
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
DEBUG=True
DATABASE_NAME=thedbname
DB_USER_NAME=myusername
DB_USER_PWD=thepassword
DB_HOST=localhost
DB_PORT=5432
Dockerfile
# pull official base image
FROM python:3.9.6-alpine
# set work directory
WORKDIR /usr/src/app
#############################
# set environment variables #
#############################
# Prevents Python from writing pyc files to disc (equivalent to python -B option)
ENV PYTHONDONTWRITEBYTECODE 1
# Prevents Python from buffering stdout and stderr (equivalent to python -u option)
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# install dependencies
RUN pip install --upgrade pip
COPY ./my-django-proj/requirements.txt ./my-django-proj/
RUN pip install -r ./my-django-proj/requirements.txt
# copy project
COPY ./my-django-proj ./my-django-proj
compose.yml
services:
  web:
    build: .
    command: python my-django-proj/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./my-django-proj/:/usr/src/app/my-django-proj/
    ports:
      - 8000:8000
    env_file:
      - ./my-django-proj/my-django-proj/.env
    depends_on:
      - db
  db:
    image: postgres:14.3-alpine
    restart: always
    container_name: postgres14_3
    #user: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=thepassword
      - POSTGRES_DB=thedbname
volumes:
  postgres_data:
Output log from sudo docker compose up -d --build
postgres14_3 |
postgres14_3 | 2022-06-01 12:03:35.017 UTC [1] LOG: starting PostgreSQL 14.3 on x86_64-pc-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
postgres14_3 | 2022-06-01 12:03:35.017 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres14_3 | 2022-06-01 12:03:35.017 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres14_3 | 2022-06-01 12:03:35.022 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres14_3 | 2022-06-01 12:03:35.028 UTC [50] LOG: database system was shut down at 2022-06-01 12:03:34 UTC
postgres14_3 | 2022-06-01 12:03:35.032 UTC [1] LOG: database system is ready to accept connections
Migration attempt output by running: sudo docker compose exec web python my-django-proj/manage.py migrate --noinput
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 26, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Address not available
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
So I decided to do a bit of investigation on my host machine
output of sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 118855/docker-proxy
tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 1017/redis-server 1
tcp 0 0 127.0.0.1:11211 0.0.0.0:* LISTEN 934/memcached
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 754/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 997/sshd: /usr/sbin
tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 72912/cupsd
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN 118727/docker-proxy
tcp6 0 0 :::44543 :::* LISTEN 58417/code
tcp6 0 0 :::8000 :::* LISTEN 118862/docker-proxy
tcp6 0 0 ::1:6379 :::* LISTEN 1017/redis-server 1
tcp6 0 0 :::22 :::* LISTEN 997/sshd: /usr/sbin
tcp6 0 0 ::1:631 :::* LISTEN 72912/cupsd
tcp6 0 0 :::5432 :::* LISTEN 118734/docker-proxy
PG is obviously running on port 5432 (so the Django message is misleading - I think). I try to connect to the Db on the container as follows:
sudo docker compose exec db psql -U myusername thedbname
psql (14.3) Type "help" for help.
thedbname=#
So clearly Postgres is running in the container. What is causing the DB connection to be refused, and how do I fix it so that I can run my migrations and access the Django project at http://localhost:8000 on my local machine?
If you do not need to access Postgres from your host, you don't need the port mapping at all.
Containers on the same Compose network can talk to each other by service name, so in your case use db (the name of your Postgres service) instead of localhost.
To answer your exact question: to reach the host machine from a Docker container, you need to connect to host.docker.internal.
host.docker.internal points to your host's localhost.
See this SO post
For Linux you might still need to add:
extra_hosts:
  - "host.docker.internal:host-gateway"
to your docker-compose's web service.
I would still recommend using the first approach.
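Concretely, the first approach only requires pointing DB_HOST at the Compose service name so Django connects to the db container over the Compose network; a sketch of the .env change (the other values stay as in the question):

# my-django-proj/my-django-proj/.env
DB_HOST=db      # Compose service name of the Postgres container, instead of localhost
DB_PORT=5432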

connection to server at "localhost" , port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections?

I am doing a project using Django REST framework. It was working OK, but now I am getting this error:
connection to server at "localhost"(127.0.0.1), port 5432 failed:
Connection refused Is the server running on that host and accepting TCP/IP connections?
I know the problem is with PostgreSQL, because I've tried to connect with pgAdmin and it gives the same error.
I am using Ubuntu; when I checked, Ubuntu is not listening on port 5432.
postgresql.conf
# - Connection Settings -
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for >
# (change requires restart)
port = 5432 # (change requires restart)
max_connections = 100 # (change requires restart)
#superuser_reserved_connections = 3 # (change requires restart)
#unix_socket_directories = '/var/run/postgresql' # comma-separated list >
# (change requires restart)
When I run service postgresql status, it says the following:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor pr>
Active: active (exited) since Tue 2022-05-17 09:22:03 +05; 1h 9min ago
Process: 6270 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 6270 (code=exited, status=0/SUCCESS)
May 17 09:22:03 mirodil-vivobook systemd[1]: Starting PostgreSQL RDBMS...
May 17 09:22:03 mirodil-vivobook systemd[1]: Finished PostgreSQL RDBMS.
Here is the output of the ss -nlt:
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 511 127.0.0.1:40915 0.0.0.0:*
LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:*
LISTEN 0 5 127.0.0.1:631 0.0.0.0:*
LISTEN 0 4096 127.0.0.1:9050 0.0.0.0:*
LISTEN 0 5 127.0.0.1:39261 0.0.0.0:*
LISTEN 0 5 0.0.0.0:35587 0.0.0.0:*
LISTEN 0 128 0.0.0.0:25672 0.0.0.0:*
LISTEN 0 511 127.0.0.1:6379 0.0.0.0:*
LISTEN 0 511 *:80 *:*
LISTEN 0 4096 *:4369 *:*
LISTEN 0 5 [::1]:631 [::]:*
LISTEN 0 511 *:35711 *:*
LISTEN 0 128 *:5672 *:*
LISTEN 0 511 [::1]:6379 [::]:*
It seems something is blocking port 5432. How do I solve this problem?
postgresql.log
See this thread discussion:
PostgreSQL won't start: "server.key" has group or world access
The problem was that the /etc directory needs to be owned by root; in my case it was owned by another user. I changed the ownership back to root and opened port 5432, and now everything is working.
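As a side note on the diagnosis above: on Debian/Ubuntu the postgresql.service unit shown earlier is only an umbrella (its ExecStart is /bin/true), so it reports success even when the actual cluster failed to start. The cluster itself can be checked directly, for example (cluster name and version here are assumptions; adjust to your install):
pg_lsclusters
sudo systemctl status postgresql@14-main
tail -n 50 /var/log/postgresql/postgresql-14-main.log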

AWS EC2 Instance not showing access to Port 8000

I have setup an AWS EC2 Instance (g4dn.2xlarge). I wanted to setup a flask app on the same and run it using gunicorn and nginx on port 8000. Following all steps listed on multiple sites I did the following:
Updated Inbound Rules on my security group to allow HTTP:
Screenshot of Inbound Rules
Checked Outbound Rules:
Screenshot of Outbound Rules
Connected to the VM using SSH and ran sudo netstat -tulpn | grep LISTEN.
The output was:
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 786/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1058/sshd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 592/rpcbind
tcp6 0 0 :::22 :::* LISTEN 1058/sshd
tcp6 0 0 :::111 :::* LISTEN 592/rpcbind
tcp6 0 0 127.0.0.1:9200 :::* LISTEN 966/java
tcp6 0 0 ::1:9200 :::* LISTEN 966/java
tcp6 0 0 127.0.0.1:9300 :::* LISTEN 966/java
tcp6 0 0 ::1:9300 :::* LISTEN 966/java
Why is the system not showing port 8000 as listening? I even ran grep 8000 and it gave no results. What should I do?
You can change the Flask app port in the app.py file:
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Then configure your web server (e.g. nginx or apache) to proxy requests to the Flask port.
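Note that port 8000 will only appear in netstat once a process is actually bound to it. With gunicorn, as the question intends, that would be something along the lines of the following (a sketch assuming the Flask application object is named app in app.py):
gunicorn --bind 0.0.0.0:8000 app:app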

postgresql - django.db.utils.OperationalError: could not connect to server: Connection refused

Is the server running on host "host_name" (XX.XX.XX.XX)
and accepting TCP/IP connections on port 5432?
A typical error message while trying to set up a DB server, but I just cannot fix it.
my django db settings:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'db_pwd',
        'HOST': 'host_name',
        'PORT': '5432',
    }
}
I added to pg_hba.conf
host all all 0.0.0.0/0 md5
host all all ::/0 md5
I replaced in postgresql.conf:
listen_addresses = 'localhost' to listen_addresses = '*'
and did postgresql restart:
/etc/init.d/postgresql stop
/etc/init.d/postgresql start
but I am still getting the same error. What is interesting:
I can ping XX.XX.XX.XX from outside and it works, but I cannot telnet:
telnet XX.XX.XX.XX
Trying XX.XX.XX.XX...
telnet: connect to address XX.XX.XX.XX: Connection refused
telnet: Unable to connect to remote host
If I telnet the port 22 from outside, it works:
telnet XX.XX.XX.XX 22
Trying XX.XX.XX.XX...
Connected to server_name.
Escape character is '^]'.
SSH-2.0-OpenSSH_6.7p1 Debian-5+deb8u3
If I telnet the port 5432 from inside the db server, I get this:
telnet XX.XX.XX.XX 5432
Trying XX.XX.XX.XX...
Connected to XX.XX.XX.XX.
Escape character is '^]'.
same port from outside:
telnet XX.XX.XX.XX 5432
Trying XX.XX.XX.XX...
telnet: connect to address XX.XX.XX.XX: Connection refused
telnet: Unable to connect to remote host
nmap from inside:
Host is up (0.000020s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
5432/tcp open postgresql
nmap from outside:
Starting Nmap 7.60 ( https://nmap.org ) at 2018-01-24 07:01 CET
and no response.
It sounds like a firewall issue, but I don't know where to look. What am I doing wrong, and what could the issue be?
Any help is appreciated.
By the way, I can log in to PostgreSQL from inside the server and it works:
psql -h host_name -U user_name -d db_name
psql (9.4.15)
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.
db_name =>
The issue was, as I guessed, a firewall blocking these ports. I tried to work it out with the hosting company, but in the end I had to move the server to another hosting provider, and there it worked with the exact same settings.