Configuring hidden services for Tor in AWS

Can someone check what's wrong with this configuration?
AWS info:
EC2: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-45-generic x86_64)
Security Group:
HTTP TCP 80 0.0.0.0/0
SSH TCP 22 0.0.0.0/0
ubuntu@ip-172-31-58-168:~$ tor --version
Tor version 0.2.8.9 (git-cabd4ef300c6b3d6).
ubuntu@ip-172-31-58-168:~$ nginx -v
nginx version: nginx/1.10.2
ubuntu@ip-172-31-58-168:~$ sudo service tor status
● tor.service - Anonymizing overlay network for TCP (multi-instance-master)
Loaded: loaded (/lib/systemd/system/tor.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2016-10-20 10:03:51 ART; 1h 2min ago
Process: 667 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 667 (code=exited, status=0/SUCCESS)
Tasks: 0
Memory: 0B
CPU: 0
CGroup: /system.slice/tor.service
Oct 20 10:03:50 ip-172-31-58-168 systemd[1]: Starting Anonymizing overlay network for TCP (multi-instance-master)...
Oct 20 10:03:51 ip-172-31-58-168 systemd[1]: Started Anonymizing overlay network for TCP (multi-instance-master).
ubuntu@ip-172-31-58-168:~$ sudo service nginx status
● nginx.service - LSB: Stop/start nginx
Loaded: loaded (/etc/init.d/nginx; bad; vendor preset: enabled)
Active: active (running) since Thu 2016-10-20 10:04:23 ART; 1h 2min ago
Docs: man:systemd-sysv-generator(8)
Process: 1284 ExecStart=/etc/init.d/nginx start (code=exited, status=0/SUCCESS)
Tasks: 2
Memory: 2.6M
CPU: 14ms
CGroup: /system.slice/nginx.service
├─1332 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.con
└─1333 nginx: worker process
Oct 20 10:04:23 ip-172-31-58-168 systemd[1]: Starting LSB: Stop/start nginx...
Oct 20 10:04:23 ip-172-31-58-168 systemd[1]: Started LSB: Stop/start nginx.
torrc (Tor configuration file)
ubuntu@ip-172-31-58-168:~$ cat /etc/tor/torrc
HiddenServiceDir /var/lib/tor/sitio1
HiddenServicePort 80 127.0.0.1:81
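For reference, the torrc syntax here is `HiddenServicePort VIRTPORT TARGET`: the local tor daemon accepts connections addressed to the onion service's virtual port and forwards them to the target, so nothing outside the instance ever needs to reach port 81. Annotated sketch of the same two lines (the comments are mine):

```text
# HiddenServicePort VIRTPORT TARGET
# Requests to <onion-address>:80 are forwarded by the local tor daemon
# to 127.0.0.1:81, so the AWS security group never needs port 81 open.
HiddenServiceDir /var/lib/tor/sitio1
HiddenServicePort 80 127.0.0.1:81
```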
hostname and private_key files:
root@ip-172-31-58-168:/home/ubuntu# cat /var/lib/tor/sitio1/hostname
zptym3k5xi2dyngl.onion
root@ip-172-31-58-168:/home/ubuntu# cat /var/lib/tor/sitio1/private_key
-----BEGIN RSA PRIVATE KEY-----
MIICWwIBAAKBgQDPVfcNF7uaBTqgLZqfr9zOQKCXF4g5FsMFa+u8I46d4/UAgODD
w+DxpUf/wPM7ibSLPuuVU/WTq2+fMu8QXTX+AuMboca0REeSuxb+NUOQxpEBxKHy
vqKB6emRA3D6X1e2X1i2f/dC2kqa/8nkuTOw+nUJthGYHHlN5xlAyVl72QIDAQAB
AoGARIzDlcyW9iFsdLEfQlS+yGKNtebN3zIrYIuB8T5AVOudgYEazx7gLITc/S4q
PlalalallalalalalalalalalalalalalalalalalaxTsb3lKt1EAyF049lX9MKj
qPDLOyAFFW9SQq/HCe5stnQl1zLfRIbbhTX6esArvLnv7VECQQD9PsT8AnVvh9J4
ybzJr5M2KZxy90rGmeWCZLB0l3UHxX2AKOIWC9qekeAURqHRN9Ys9iWY4TgEQunN
vRK+4YM3AkEA0ZdZKsx/s1DDaaieSn4h7zez7bpXYCTnSGzYTelPaiRMrpo9Lmyu
3GFsW9zOWzJmHNxsSczxQWDeLx3t/FShbwJARtppApktgibeHC1VRJh694xs2T2X
DjnAnNrPA8/cTnBSzKijmMd4QyVNLF8Wpxputoelqueleeat84IS3JT7wQJAEICC
HMSNKWkqeZ81F1hnA5a3K/iH+KHvM9yeC0RbZFgHUZgDSSx1eBSTm4f/F18Yex0/
yW/BbwxZcgxBOKTRholaZHM2UZ3TG6DfdYxY/Pur9/rlbXMGhx1RJnbdWFkC1CnW
9D9i6oto0iWHS9c46o3phYDceWC9/tuh04NboXBsKg==
-----END RSA PRIVATE KEY-----
nginx site configuration file
root@ip-172-31-58-168:/home/ubuntu# cat /etc/nginx/conf.d/sitio1.onion
server {
    listen 81;
    server_name zptym3k5xi2dyngl.onion;
    root /directorio/carpeta/sitio1;
    index index.php index.html index.htm;
    access_log /directorio/de/los/logs/hidden-access.log;
    error_log /directorio/de/los/logs/hidden-error.log;
    error_page 404 /404.html;
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
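One tightening worth considering (an aside, not necessarily the cause of the failure): `listen 81;` binds port 81 on every interface, while the Tor tunnel only ever connects from loopback. Binding explicitly to loopback keeps the site reachable solely through Tor; a sketch of the changed directive:

```nginx
server {
    # Tor connects from 127.0.0.1 (HiddenServicePort 80 127.0.0.1:81),
    # so the vhost can be restricted to the loopback interface:
    listen 127.0.0.1:81;
    server_name zptym3k5xi2dyngl.onion;
    # ... rest of the configuration unchanged ...
}
```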
Finally, the syslog and the tor log:
root@ip-172-31-58-168:/home/ubuntu# cat /var/log/syslog
Oct 20 10:04:21 ip-172-31-58-168 systemd[1]: Starting Anonymizing overlay network for TCP...
Oct 20 10:04:22 ip-172-31-58-168 tor[1162]: Oct 20 10:04:22.078 [notice] Tor v0.2.8.9 (git-cabd4ef300c6b3d6) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.2g and Zlib 1.2.8.
Oct 20 10:04:22 ip-172-31-58-168 tor[1162]: Oct 20 10:04:22.079 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Oct 20 10:04:22 ip-172-31-58-168 tor[1162]: Oct 20 10:04:22.080 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Oct 20 10:04:22 ip-172-31-58-168 tor[1162]: Oct 20 10:04:22.080 [notice] Read configuration file "/etc/tor/torrc".
Oct 20 10:04:22 ip-172-31-58-168 tor[1162]: Configuration was valid
Oct 20 10:04:22 ip-172-31-58-168 tor[1168]: Oct 20 10:04:22.215 [notice] Tor v0.2.8.9 (git-cabd4ef300c6b3d6) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.2g and Zlib 1.2.8.
Oct 20 10:04:22 ip-172-31-58-168 tor[1168]: Oct 20 10:04:22.229 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Oct 20 10:04:22 ip-172-31-58-168 tor[1168]: Oct 20 10:04:22.229 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Oct 20 10:04:22 ip-172-31-58-168 tor[1168]: Oct 20 10:04:22.229 [notice] Read configuration file "/etc/tor/torrc".
Oct 20 10:04:22 ip-172-31-58-168 tor[1168]: Oct 20 10:04:22.241 [notice] Opening Socks listener on 127.0.0.1:9050
Oct 20 10:04:22 ip-172-31-58-168 systemd[1]: Started Anonymizing overlay network for TCP.
root@ip-172-31-58-168:/home/ubuntu# cat /var/log/tor/log
Oct 20 10:04:22.000 [notice] Tor 0.2.8.9 (git-cabd4ef300c6b3d6) opening log file.
Oct 20 10:04:22.215 [notice] Tor v0.2.8.9 (git-cabd4ef300c6b3d6) running on Linux with Libevent 2.0.21-stable, OpenSSL 1.0.2g and Zlib 1.2.8.
Oct 20 10:04:22.229 [notice] Tor can't help you if you use it wrong! Learn how to be safe at https://www.torproject.org/download/download#warning
Oct 20 10:04:22.229 [notice] Read configuration file "/usr/share/tor/tor-service-defaults-torrc".
Oct 20 10:04:22.229 [notice] Read configuration file "/etc/tor/torrc".
Oct 20 10:04:22.241 [notice] Opening Socks listener on 127.0.0.1:9050
Oct 20 10:04:22.000 [notice] Parsing GEOIP IPv4 file /usr/share/tor/geoip.
Oct 20 10:04:22.000 [notice] Parsing GEOIP IPv6 file /usr/share/tor/geoip6.
Oct 20 10:04:22.000 [notice] Bootstrapped 0%: Starting
Oct 20 10:04:22.000 [notice] Bootstrapped 80%: Connecting to the Tor network
Oct 20 10:04:22.000 [notice] Signaled readiness to systemd
Oct 20 10:04:23.000 [notice] Opening Socks listener on /var/run/tor/socks
Oct 20 10:04:23.000 [notice] Opening Control listener on /var/run/tor/control
Oct 20 10:04:24.000 [notice] Bootstrapped 85%: Finishing handshake with first hop
Oct 20 10:04:24.000 [notice] Bootstrapped 90%: Establishing a Tor circuit
Oct 20 10:04:24.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Oct 20 10:04:24.000 [notice] Bootstrapped 100%: Done
Traffic gets to my hidden service, but the browser shows "Unable to connect".
[screenshot: arm]
This exact configuration works on a server at my home.
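One thing I would check first: on Ubuntu, the stock /etc/nginx/nginx.conf includes only conf.d/*.conf (plus sites-enabled/*), so a file named sitio1.onion in conf.d may never be loaded at all, in which case nothing is listening on 127.0.0.1:81 for Tor to forward to. The glob behaviour is easy to demonstrate with a scratch directory and hypothetical filenames:

```shell
# conf.d/*.conf does not match a file named sitio1.onion:
d=$(mktemp -d)
touch "$d/sitio1.onion" "$d/sitio1.conf"
for f in "$d"/*.conf; do basename "$f"; done   # prints only: sitio1.conf
rm -rf "$d"
```

Renaming the file to sitio1.conf (or inspecting `nginx -T` output for the server block) would rule this in or out.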

Related

Django nginx gunicorn 502 bad gateway

I was deploying my Django project with nginx on a Fedora 33 server, but both the domain and localhost return 502 Bad Gateway. I ran diagnostics and found:
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2021-01-08 01:34:08 +04; 1min 17s ago
Process: 4348 ExecStart=/home/robot/venv/bin/gunicorn --workers 3
--bind unix:/home/robot/plagiat/plagiat.sock plagiat.wsgi:application (code=exited, status=203/EXEC)
Main PID: 4348 (code=exited, status=203/EXEC) CPU: 5ms
Jan 08 01:34:10 baku systemd[1]: Failed to start gunicorn daemon.
Jan 08 01:34:10 baku systemd[1]: gunicorn.service: Start request repeated too quickly.
Jan 08 01:34:10 baku systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Jan 08 01:34:10 baku systemd[1]: Failed to start gunicorn daemon.
Jan 08 01:34:11 baku systemd[1]: gunicorn.service: Start request repeated too quickly.
Jan 08 01:34:11 baku systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Jan 08 01:34:11 baku systemd[1]: Failed to start gunicorn daemon.
Jan 08 01:34:11 baku systemd[1]: gunicorn.service: Start request repeated too quickly.
Jan 08 01:34:11 baku systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Jan 08 01:34:11 baku systemd[1]: Failed to start gunicorn daemon.
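status=203/EXEC means systemd never got gunicorn running at all: the exec() of /home/robot/venv/bin/gunicorn failed (wrong path, missing execute bit, or no usable interpreter). A minimal sketch of the most common variant, using a throwaway file rather than the real path:

```shell
# A file with no execute bit cannot be exec()ed, which systemd
# reports as status=203/EXEC:
tmp=$(mktemp -d)
printf '#!/bin/sh\necho up\n' > "$tmp/gunicorn"
chmod 644 "$tmp/gunicorn"                      # readable, but not executable
"$tmp/gunicorn" 2>/dev/null || echo 'exec failed'
chmod 755 "$tmp/gunicorn" && "$tmp/gunicorn"   # prints: up
rm -rf "$tmp"
```

On the real box, `ls -l /home/robot/venv/bin/gunicorn` and running it directly as the service user would pin down which variant it is.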

Custom systemd service to run Gunicorn not working

I am trying to deploy my Django website to a Ubuntu server. I am following this tutorial: linuxhint.com/create_django_app_ubuntu/. However, the Gunicorn service doesn't work.
I have my site at /home/django/blog.
My Python 3.6 virtualenv's activate script is at /home/django/.venv/bin/activate (-rwxr-xr-x 1 django root 2207 Sep 21 14:07 activate).
The script for starting the server is at /home/django/bin/start-server.sh (-rwxr-xr-x 1 django root 69 Sep 21 15:50 start-server.sh), with the following content:
cd /home/django
source .venv/bin/activate
cd blog
gunicorn blog.wsgi
Running this script manually works just fine.
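One reason a script can work from an interactive shell yet fail under systemd with status=203/EXEC: the script shown has no `#!` line. A shell falls back to interpreting such a file itself, but systemd exec()s ExecStart directly, with no fallback. The difference is easy to demonstrate with a bare execv() on a scratch file (python3 is used only to issue the raw exec call):

```shell
tmp=$(mktemp -d)
printf 'echo started\n' > "$tmp/run.sh" && chmod +x "$tmp/run.sh"  # no shebang
# a shell runs it anyway (ENOEXEC fallback):
bash "$tmp/run.sh"                              # prints: started
# a bare exec(), as systemd performs, does not:
python3 -c 'import os, sys
try:
    os.execv(sys.argv[1], [sys.argv[1]])
except OSError as e:
    print("exec failed:", e.strerror)' "$tmp/run.sh"
rm -rf "$tmp"
```

Adding `#!/bin/bash` as the first line of start-server.sh achieves the same effect as the `/bin/bash` prefix in ExecStart described in the solution below is aimed at: ensuring bash interprets the script.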
The Gunicorn service is at /etc/systemd/system/gunicorn.service, with this content:
[Unit]
Description=Gunicorn
After=network.target
[Service]
Type=simple
User=django
ExecStart=/home/django/bin/start-server.sh
Restart=on-failure
[Install]
WantedBy=multi-user.target
Running systemctl status gunicorn.service gives this:
● gunicorn.service - Gunicorn
Loaded: loaded (/etc/systemd/system/gunicorn.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2020-09-21 16:15:17 UTC; 6s ago
Process: 1114 ExecStart=/home/django/bin/start-server.sh (code=exited, status=203/EXEC)
Main PID: 1114 (code=exited, status=203/EXEC)
Sep 21 16:15:17 example.com systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Sep 21 16:15:17 example.com systemd[1]: gunicorn.service: Service hold-off time over, scheduling restart.
Sep 21 16:15:17 example.com systemd[1]: gunicorn.service: Scheduled restart job, restart counter is at 5.
Sep 21 16:15:17 example.com systemd[1]: Stopped Gunicorn.
Sep 21 16:15:17 example.com systemd[1]: gunicorn.service: Start request repeated too quickly.
Sep 21 16:15:17 example.com systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Sep 21 16:15:17 example.com systemd[1]: Failed to start Gunicorn.
Sep 21 16:15:18 example.com systemd[1]: gunicorn.service: Start request repeated too quickly.
Sep 21 16:15:18 example.com systemd[1]: gunicorn.service: Failed with result 'exit-code'.
Sep 21 16:15:18 example.com systemd[1]: Failed to start Gunicorn.
Sep 21 14:22:36 example.com systemd[7906]: gunicorn.service: Failed to execute command: Permission denied
Sep 21 14:22:36 example.com systemd[7906]: gunicorn.service: Failed at step EXEC spawning /home/django/bin/start-server.sh: Permission denied
Sep 21 14:23:40 example.com systemd[7940]: gunicorn.service: Failed to execute command: Permission denied
Sep 21 14:23:40 example.com systemd[7940]: gunicorn.service: Failed at step EXEC spawning /home/django/bin/start-server.sh: Permission denied
Sep 21 14:24:47 example.com systemd[7958]: gunicorn.service: Failed to execute command: Permission denied
Sep 21 14:24:47 example.com systemd[7958]: gunicorn.service: Failed at step EXEC spawning /home/django/bin/start-server.sh: Permission denied
The journal keeps repeating "Permission denied" entries like these.
I ran chown -R django:django /home/django. Now, the output of ls -lah /home/django is:
total 32K
drwxr-xr-x 5 django django 4.0K Sep 21 14:19 .
drwxr-xr-x 3 root root 4.0K Sep 21 14:04 ..
-rw-r--r-- 1 django django 220 Apr 4 2018 .bash_logout
-rw-r--r-- 1 django django 3.7K Apr 4 2018 .bashrc
-rw-r--r-- 1 django django 807 Apr 4 2018 .profile
drwxr-xr-x 4 django django 4.0K Sep 21 14:07 .venv
drwxr-xr-x 2 django django 4.0K Sep 21 15:58 bin
drwxr-xr-x 3 django django 4.0K Sep 21 14:08 blog
Solution
Thanks to Dmitry Belaventsev, the solution to this is to change
ExecStart=/home/django/bin/start-server.sh
to
ExecStart=/bin/bash /home/django/bin/start-server.sh
in the file /etc/systemd/system/gunicorn.service.
Your systemd service is set up to execute the script on behalf of the django user. Meanwhile:
ls -lah /home/django
total 32K
drwxr-xr-x 5 django django 4.0K Sep 21 14:19 .
drwxr-xr-x 3 root root 4.0K Sep 21 14:04 ..
-rw-r--r-- 1 django django 220 Apr 4 2018 .bash_logout
-rw-r--r-- 1 django django 3.7K Apr 4 2018 .bashrc
-rw-r--r-- 1 django django 807 Apr 4 2018 .profile
drwxr-xr-x 4 django root 4.0K Sep 21 14:07 .venv
drwxr-xr-x 2 root root 4.0K Sep 21 15:58 bin
drwxr-xr-x 3 root root 4.0K Sep 21 14:08 blog
As you can see:
drwxr-xr-x 3 root root 4.0K Sep 21 14:04 ..
and
drwxr-xr-x 2 root root 4.0K Sep 21 15:58 bin
which means:
/home directory belongs to root:root
/home/django/bin belongs to root:root
To let systemd execute a bash script on behalf of the django user:
The script must be executable
All parent directories must have the execute (search) bit set
Those directories and the script must be accessible to the django user
The quickest solution:
chown -R django:django /home/django
You could also work with the group ownership and group permissions instead.

Why am I getting an error saying the nginx web server is down or busy?

Suddenly, my Django website has stopped serving over the internet, and I have no idea what changed.
When I open the website in a browser, I get a bunch of error messages (screenshot attached). The errors complain about the web server (nginx) hosting my website.
My environment:
Ubuntu 18
Gunicorn
Nginx
Website hosted on AWS (inbound/outbound rule screenshot attached)
I have checked sudo journalctl -u nginx.service:
Aug 15 04:15:39 primarySNS.schoolnskill.com systemd[1]: Starting A high performance web server and a reverse proxy server...
Aug 15 04:15:39 primarySNS.schoolnskill.com systemd[1]: Started A high performance web server and a reverse proxy server.
Aug 15 04:18:53 primarySNS.schoolnskill.com systemd[1]: Stopping A high performance web server and a reverse proxy server...
Aug 15 04:18:53 primarySNS.schoolnskill.com systemd[1]: Stopped A high performance web server and a reverse proxy server.
Aug 15 04:18:53 primarySNS.schoolnskill.com systemd[1]: Starting A high performance web server and a reverse proxy server...
Aug 15 04:18:53 primarySNS.schoolnskill.com systemd[1]: nginx.service: Failed to parse PID from file /run/nginx.pid: Invalid argument
Aug 15 04:18:53 primarySNS.schoolnskill.com systemd[1]: Started A high performance web server and a reverse proxy server.
I can see an "Invalid argument" line; not sure if that has anything to do with my situation.
I have also checked the nginx error log; it's 0 bytes:
-rw-r----- 1 xxx yyy 0 Aug 15 06:25 /var/log/nginx/error.log
The syslog does have some interesting entries:
Aug 15 06:25:01 primarySNS rsyslogd: [origin software="rsyslogd" swVersion="8.32.0" x-pid="920" x-info="http://www.rsyslog.com"] rsyslogd was HUPed
Aug 15 06:26:07 primarySNS systemd-timesyncd[600]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 15 06:26:50 primarySNS systemd-networkd[771]: eth0: Configured
Aug 15 06:26:50 primarySNS systemd-timesyncd[600]: Network configuration changed, trying to establish connection.
Aug 15 06:26:50 primarySNS systemd-timesyncd[600]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 15 06:35:01 primarySNS CRON[2678]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 06:45:01 primarySNS CRON[2693]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 06:50:54 primarySNS gunicorn[1432]: Not Found: /robots.txt
Aug 15 06:53:30 primarySNS gunicorn[1432]: Not Found: /profile1/
Aug 15 06:55:01 primarySNS CRON[2718]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 06:56:50 primarySNS systemd-networkd[771]: eth0: Configured
Aug 15 06:56:50 primarySNS systemd-timesyncd[600]: Network configuration changed, trying to establish connection.
Aug 15 06:56:50 primarySNS systemd-timesyncd[600]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 15 07:05:01 primarySNS CRON[2734]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 07:15:01 primarySNS CRON[2750]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 07:17:01 primarySNS CRON[2756]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
Aug 15 07:25:01 primarySNS CRON[2769]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 07:26:50 primarySNS systemd-networkd[771]: eth0: Configured
Aug 15 07:26:50 primarySNS systemd-timesyncd[600]: Network configuration changed, trying to establish connection.
Aug 15 07:26:50 primarySNS systemd-timesyncd[600]: Synchronized to time server 91.189.91.157:123 (ntp.ubuntu.com).
Aug 15 07:35:01 primarySNS CRON[2802]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 07:41:11 primarySNS systemd-resolved[784]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Aug 15 07:41:11 primarySNS systemd-resolved[784]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Aug 15 07:45:01 primarySNS CRON[2852]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: 2020-08-15 07:50:43 INFO Backing off health check to every 3600 seconds for 10800 seconds.
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: 2020-08-15 07:50:43 ERROR Health ping failed with error - EC2RoleRequestError: no EC2 instance role found
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: caused by: EC2MetadataError: failed to make EC2Metadata request
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: #011status code: 404, request id:
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: caused by: <?xml version="1.0" encoding="iso-8859-1"?>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: #011"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <head>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <title>404 - Not Found</title>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: </head>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <body>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: <h1>404 - Not Found</h1>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: </body>
Aug 15 07:50:43 primarySNS amazon-ssm-agent.amazon-ssm-agent[898]: </html>
Aug 15 07:52:26 primarySNS systemd-resolved[784]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Aug 15 07:52:26 primarySNS systemd-resolved[784]: Server returned error NXDOMAIN, mitigating potential DNS violation DVE-2018-0001, retrying transaction with reduced feature level UDP.
Aug 15 07:55:01 primarySNS CRON[2870]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Follow-up questions and their answers:
Are you using route53 as the DNS resolver?
Yes
Has your EC2 been stopped and started again, and if so, have you checked that the ip address is still the same?
Yes, but I have made sure that the new IP is updated in my Route 53
Is your ec2 in a public subnet? Can you reach google.com or 8.8.8.8 from the command line on it?
ping google.com
PING google.com (172.217.2.110) 56(84) bytes of data.
64 bytes from yyz10s05-in-f14.1e100.net (172.217.2.110): icmp_seq=1 ttl=112 time=1.31 ms
64 bytes from yyz10s05-in-f14.1e100.net (172.217.2.110): icmp_seq=2 ttl=112 time=1.29 ms
64 bytes from yyz10s05-in-f14.1e100.net (172.217.2.110): icmp_seq=3 ttl=112 time=1.33 ms
64 bytes from yyz10s05-in-f14.1e100.net (172.217.2.110): icmp_seq=4 ttl=112 time=1.33 ms
64 bytes from yyz10s05-in-f14.1e100.net (172.217.2.110): icmp_seq=5 ttl=112 time=1.34 ms
is nginx actually listening on the ec2? If you ssh to it, and curl -vvvv http://localhost/ do you actually get a response?
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 80 (#0)
> GET / HTTP/1.1
> Host: localhost
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.14.0 (Ubuntu)
< Date: Sat, 15 Aug 2020 13:56:06 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Fri, 10 Jul 2020 11:16:00 GMT
< Connection: keep-alive
< ETag: "5f084df0-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host localhost left intact
What happens when you run curl -vvvv http://(ec2.public.ip.address)/ ?
* Rebuilt URL to: http://<public_ip>/
* Trying <public_ip>...
* TCP_NODELAY set
* Connected to <public_ip> (<public_ip>) port 80 (#0)
> GET / HTTP/1.1
> Host: <public_ip>
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.14.0 (Ubuntu)
< Date: Sat, 15 Aug 2020 13:57:13 GMT
< Content-Type: text/html
< Content-Length: 612
< Last-Modified: Fri, 10 Jul 2020 11:16:00 GMT
< Connection: keep-alive
< ETag: "5f084df0-264"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host <public_ip> left intact
Is your site running at the path or virtual domain you think it is? Has your nginx config perhaps changed?
I didn't make any changes to my nginx configuration.
What happens if you run curl http://169.254.169.254/latest/meta-data - do you get a response?
curl http://169.254.169.254/latest/meta-data
ami-id
ami-launch-index
ami-manifest-path
block-device-mapping/
events/
hibernation/
hostname
identity-credentials/
instance-action
instance-id
instance-life-cycle
instance-type
local-hostname
local-ipv4
mac
metrics/
network/
placement/
profile
public-hostname
public-ipv4
public-keys/
reservation-id
security-groups
Based on the comments:
I went to the OP's website URL and the website is up and running, so there doesn't seem to be any issue with the EC2 instance or its settings.
Note that the site works only over HTTP, not HTTPS, so attempts to access it using https:// will fail. This could explain why it appeared unreachable when tested initially.
Without more information this is hard to debug. Things to check:
Are you using route53 as the DNS resolver? Has your EC2 been stopped and started again, and if so, have you checked that the ip address is still the same?
Is your ec2 in a public subnet? Can you reach google.com or 8.8.8.8 from the command line on it?
is nginx actually listening on the ec2? If you ssh to it, and curl -vvvv http://localhost/ do you actually get a response?
What happens when you run curl -vvvv http://(ec2.public.ip.address)/ ?
As above, what happens with curl -k -vvvv https://(ec2.public.ip.address)/ ?
Is your site running at the path or virtual domain you think it is? Has your nginx config perhaps changed?
What happens if you run curl http://169.254.169.254/latest/meta-data - do you get a response?
The ssm agent timeout is curious.
As an aside, your security group egress rules are unnecessarily complex. You can remove the http, https and ssh rules because your all-traffic rule already covers them.
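Since the site turned out to answer only on HTTP, serving HTTPS would need both a 443 inbound rule in the security group and a TLS listener in nginx. A minimal sketch (the server_name and certificate paths are placeholders, not taken from the question):

```nginx
server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/ssl/certs/example.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/example.key;
    # ... same root/proxy configuration as the port-80 server block ...
}
```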

Postgres 11 can't start after I changed its default data directory on CentOS

I changed the postgres data directory following these steps:
systemctl stop postgresql-11.service
cp -r /var/lib/pgsql/11/data /home/eshel/pgsql-11/
chown -R postgres:postgres /home/eshel/pgsql-11/
chmod 700 /home/eshel/pgsql-11/
vi /usr/lib/systemd/system/postgresql-11.service
Environment=PGDATA=/home/eshel/pgsql-11/data/
systemctl daemon-reload
systemctl start postgresql-11.service
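As an aside, editing the unit file under /usr/lib/systemd/system works, but a package update can overwrite it; a drop-in override is the more durable way to relocate PGDATA. A sketch using the same path as in the question:

```ini
# /etc/systemd/system/postgresql-11.service.d/override.conf
# (create it with: systemctl edit postgresql-11.service)
[Service]
Environment=PGDATA=/home/eshel/pgsql-11/data
```

followed by `systemctl daemon-reload` and a restart.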
Up to this point, postgresql-11 works normally.
Then I changed data_directory in the new location's postgresql.conf:
1. vim pgsql-11/data/postgresql.conf
data_directory = '/home/eshel/pgsql-11/data'
Now I can't start postgresql-11 and don't know how to fix it. The error follows:
[root@localhost 11]# systemctl start postgresql-11.service
Job for postgresql-11.service failed because the control process exited with error code. See "systemctl status postgresql-11.service" and "journalctl -xe" for details.
[root@localhost 11]# systemctl status postgresql-11.service
● postgresql-11.service - PostgreSQL 11 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-11.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2019-10-11 23:05:00 CST; 27s ago
Docs: https://www.postgresql.org/docs/11/static/
Process: 12758 ExecStartPre=/usr/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)
Main PID: 11582 (code=exited, status=0/SUCCESS)
Oct 11 23:05:00 localhost.localdomain systemd[1]: Starting PostgreSQL 11 database server...
Oct 11 23:05:00 localhost.localdomain systemd[1]: postgresql-11.service: control process exited, code=exited status=1
Oct 11 23:05:00 localhost.localdomain systemd[1]: Failed to start PostgreSQL 11 database server.
Oct 11 23:05:00 localhost.localdomain systemd[1]: Unit postgresql-11.service entered failed state.
Oct 11 23:05:00 localhost.localdomain systemd[1]: postgresql-11.service failed.
[root@localhost 11]# journalctl -xe
Oct 11 23:01:21 localhost.localdomain dbus[6461]: [system] Activating service name='org.freedesktop.problems' (using servicehe
Oct 11 23:01:21 localhost.localdomain dbus[6461]: [system] Successfully activated service 'org.freedesktop.problems'
Oct 11 23:04:51 localhost.localdomain nautilus-deskto[9318]: g_simple_action_set_enabled: assertion 'G_IS_SIMPLE_ACTION (simpl
Oct 11 23:05:00 localhost.localdomain polkitd[6478]: Registered Authentication Agent for unix-process:12752:1022414 (system bu
Oct 11 23:05:00 localhost.localdomain systemd[1]: Starting PostgreSQL 11 database server...
-- Subject: Unit postgresql-11.service has begun start-up
-- Defined-By: systemd
-- Unit postgresql-11.service has begun starting up.
Oct 11 23:05:00 localhost.localdomain postgresql-11-check-db-dir[12758]: "/home/eshel/pgsql-11/data/" is missing or empty.
Oct 11 23:05:00 localhost.localdomain postgresql-11-check-db-dir[12758]: Use "/usr/pgsql-11/bin/postgresql-11-setup initdb" to
Oct 11 23:05:00 localhost.localdomain postgresql-11-check-db-dir[12758]: See /usr/share/doc/postgresql11-11.4/README.rpm-dist
Oct 11 23:05:00 localhost.localdomain systemd[1]: postgresql-11.service: control process exited, code=exited status=1
Oct 11 23:05:00 localhost.localdomain systemd[1]: Failed to start PostgreSQL 11 database server.
-- Subject: Unit postgresql-11.service has failed
-- Defined-By: systemd
-- Unit postgresql-11.service has failed.
-- The result is failed.
Oct 11 23:05:00 localhost.localdomain systemd[1]: Unit postgresql-11.service entered failed state.
Oct 11 23:05:00 localhost.localdomain systemd[1]: postgresql-11.service failed.
Process: 10839 ExecStartPre=/usr/pgsql-11/bin/postgresql-11-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)
Postgres couldn't read the PGDATA directory. With someone else's help, we solved it as follows.
Before modification:
drwx------. 19 eshel eshel 4096 Oct 12 21:00 eshel
After modification:
[root@localhost eshel]# chmod 755 ../eshel/
[root@localhost eshel]# ll ../
drwxr-xr-x. 19 eshel eshel 4096 Oct 12 21:00 eshel
Then I started postgresql-11, and it's OK:
systemctl start postgresql-11

CentOS 7 postgresql not starting

I installed CentOS 7, 64-bit, and installed postgresql-9.4.
I initialized postgresql with the command
/usr/pgsql-9.4/bin/postgresql94-setup initdb
then used this command to start postgresql
systemctl start postgresql-9.4
and it showed an error like this:
service failed because the control process exited with error code. See "systemctl status postgresql-9.4.service" and "journalctl -xe" for details.
I ran
systemctl status postgresql-9.4.service
● postgresql-9.4.service - PostgreSQL 9.4 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.4.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2016-03-02 18:09:27 CST; 1min 59s ago
Process: 31266 ExecStart=/usr/pgsql-9.4/bin/pg_ctl start -D ${PGDATA} -s -w -t 300 (code=exited, status=1/FAILURE)
Process: 31261 ExecStartPre=/usr/pgsql-9.4/bin/postgresql94-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >LOG: could not bind IPv6 socket: Address already in use
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >HINT: Is another postmaster already running on...retry.
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >LOG: could not bind IPv4 socket: Address already in use
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >HINT: Is another postmaster already running on...retry.
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >WARNING: could not create listen socket for "localhost"
Mar 02 18:09:26 localhost.localdomain pg_ctl[31266]: < 2016-03-02 18:09:26.862 CST >FATAL: could not create any TCP/IP sockets
Mar 02 18:09:27 localhost.localdomain systemd[1]: postgresql-9.4.service: control process exited, code=exited status=1
Mar 02 18:09:27 localhost.localdomain systemd[1]: Failed to start PostgreSQL 9.4 database server.
Mar 02 18:09:27 localhost.localdomain systemd[1]: Unit postgresql-9.4.service entered failed state.
Mar 02 18:09:27 localhost.localdomain systemd[1]: postgresql-9.4.service failed.
1 - Install the psmisc package to get the killall command:
# yum install psmisc
2 - Kill the leftover postgres processes:
# killall -9 postgres
3 - Stop / start the service:
# systemctl stop postgresql-9.4
# systemctl start postgresql-9.4
4 - To check whether the service is running:
# ps -aux | grep postgres
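The FATAL line above boils down to EADDRINUSE: some process (an earlier postmaster, per the steps above) was still bound to PostgreSQL's port. The condition itself is easy to reproduce on a scratch port (python3 is used just to open two sockets):

```shell
python3 - <<'EOF'
import socket
a = socket.socket()
a.bind(("127.0.0.1", 0))        # grab a free ephemeral port
a.listen()
port = a.getsockname()[1]
b = socket.socket()
try:
    b.bind(("127.0.0.1", port)) # second bind to the same port
except OSError:
    print("Address already in use")
EOF
```

On a real box, `ss -ltnp | grep 5432` shows exactly which PID holds the port, which is gentler than reaching for killall -9.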