I've just signed up with Amazon's VPS service, Amazon Lightsail. After properly setting up my Django app, Gunicorn, and Nginx, it seems there's some problem with the traffic.
I set up the same Django app on two different VPSes using the same process, both running Ubuntu 18.
The IP isn't responding on AWS (13.124.94.92):
~# ping -c3 13.124.94.92
PING 13.124.94.92 (13.124.94.92) 56(84) bytes of data.
--- 13.124.94.92 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2044ms
The IP (5.63.152.4) is working perfectly on the other Ubuntu 18 VPS:
~# ping -c3 5.63.152.4
PING 5.63.152.4 (5.63.152.4) 56(84) bytes of data.
64 bytes from 5.63.152.4: icmp_seq=1 ttl=64 time=0.063 ms
64 bytes from 5.63.152.4: icmp_seq=2 ttl=64 time=0.108 ms
64 bytes from 5.63.152.4: icmp_seq=3 ttl=64 time=0.082 ms
--- 5.63.152.4 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2043ms
rtt min/avg/max/mdev = 0.063/0.084/0.108/0.019 ms
The first IP doesn't show the 'Welcome to nginx!' page, while the second one does (http://5.63.152.4/).
I'm not sure where to start debugging this. I did fiddle with iptables a bit while trying (and failing) to get mosh working. Please help! Thanks!
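For a first round of checks from the instance itself, something like this should narrow things down (a sketch using standard tools; it assumes SSH access to the box still works):
sudo ss -tlnp | grep -E ':80|:443'   # is nginx listening, and on which address?
curl -I http://localhost/            # does nginx answer locally?
sudo ufw status verbose              # host firewall rules
sudo iptables -L -n -v               # leftover manual iptables rules
If nginx answers locally but not remotely, the block is outside the box (a cloud firewall, as it turns out below).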
Additional Info:
Firewall active
~$ sudo ufw status
Status: active
To Action From
-- ------ ----
60000:61000/udp ALLOW Anywhere
Nginx Full ALLOW Anywhere
OpenSSH ALLOW Anywhere
mosh ALLOW Anywhere
5432/tcp ALLOW Anywhere
60000:61000/udp (v6) ALLOW Anywhere (v6)
Nginx Full (v6) ALLOW Anywhere (v6)
OpenSSH (v6) ALLOW Anywhere (v6)
mosh (v6) ALLOW Anywhere (v6)
5432/tcp (v6) ALLOW Anywhere (v6)
nginx configuration test
~$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
/etc/nginx/{sites-available, sites-enabled}/myproject
server {
    listen 80;
    server_name server_domain_or_IP;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /static/ {
        root /home/sammy/myprojectdir;
    }

    location /media/ {
        root /home/sammy/myprojectdir;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
gunicorn status
~$ sudo systemctl status gunicorn
● gunicorn.service - gunicorn daemon
Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2020-05-01 06:14:48 UTC; 1h 17min ago
Main PID: 14512 (gunicorn)
Tasks: 4 (limit: 517)
CGroup: /system.slice/gunicorn.service
...
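Since Gunicorn reports active (running), one way to take Nginx and the firewall out of the equation is to hit the Gunicorn socket directly (curl has supported --unix-socket since 7.40; the socket path is taken from the Nginx config above):
curl --unix-socket /run/gunicorn.sock http://localhost/
# any HTTP response here means Gunicorn is fine and the problem
# sits in front of it: Nginx, ufw, or the cloud provider's firewall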
For other newbies like me:
Problem solved. Amazon Lightsail has its own firewall, under the 'Networking' tab of the control panel, so there is no need to configure ufw yourself in Ubuntu. Until the relevant ports are opened there, the Nginx welcome page simply does not show. Once you have pointed your domain at the public IP of the Lightsail instance, you are all set.
My particular problem was that Django required SSL access because of settings I had made. Hence, after SSL was installed using certbot, the website became accessible.
PS: it's a good idea to close port 8000 before finishing off your deployment.
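As a sketch of those last two steps (the domain and instance name below are placeholders, not from the setup above):
# issue and install a certificate for the Nginx site
sudo certbot --nginx -d example.com

# close port 8000 again, this time in the Lightsail firewall
aws lightsail close-instance-public-ports \
    --instance-name my-lightsail-instance \
    --port-info fromPort=8000,toPort=8000,protocol=TCP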
Related
I am using an Azure VM to host my development API server, written in Django.
There, I cloned my codebase and installed everything from requirements.txt. Before executing the manage.py runserver 0.0.0.0:8000 command, I changed the firewall permission rule: sudo ufw allow 8000.
The sudo ufw status command shows the following result:
Status: active
To Action From
-- ------ ----
Nginx HTTP ALLOW Anywhere
8000 ALLOW Anywhere
OpenSSH ALLOW Anywhere
Nginx HTTP (v6) ALLOW Anywhere (v6)
8000 (v6) ALLOW Anywhere (v6)
OpenSSH (v6) ALLOW Anywhere (v6)
and then I ran python3 manage.py runserver 0.0.0.0:8000.
I should be able to see the default Django page at http://SERVER_IP_OR_DOMAIN:8000.
But nothing shows,
whereas if I hit http://SERVER_IP_OR_DOMAIN, I can see the NGINX landing page.
I am following this blog from DigitalOcean.
Here are my VM network rules:
How can I fix this?
I tried to reproduce the same setup in my environment and got results like those below.
To resolve this issue:
In your virtual machine's networking settings, make sure to add an inbound port rule with Destination port ranges set to 8000, like below:
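If you prefer the CLI to the portal, the equivalent rule can be added with az vm open-port (a sketch; the resource group and VM names are placeholders):
az vm open-port --resource-group myResourceGroup --name myVM --port 8000 --priority 1010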
I have installed Nginx on my EC2 machine.
The Ubuntu version is:
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS
Release:        20.04
Codename:       focal
I have installed it using the following command (note the package name is lowercase):
sudo apt install nginx
After that, I can see that the Nginx service is up and running:
nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2022-06-19 07:27:28 UTC; 9min ago
Docs: man:nginx(8)
Process: 4938 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Process: 4940 ExecStart=/usr/sbin/nginx -g daemon on; master_process on; (code=exited, status=0/SUCCESS)
Main PID: 4941 (nginx)
Tasks: 3 (limit: 1116)
Memory: 4.9M
CGroup: /system.slice/nginx.service
├─4941 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
├─5003 nginx: worker process
└─5004 nginx: worker process
Jun 19 07:27:28 ip-172-31-42-178 systemd[1]: Starting A high performance web server and a reverse proxy server...
Jun 19 07:27:28 ip-172-31-42-178 systemd[1]: Started A high performance web server and a reverse proxy server.
But when I access the IP of the instance, I get a 'site can't be reached' error.
Security group rules are added for ports 80 and 443.
SSH is active on port 22.
I checked /var/log/nginx, but it seems all the files are empty.
UPDATE
When I check the firewall using the command sudo ufw status, I see Status: inactive.
But I am not sure whether enabling ufw via sudo ufw allow 'Nginx HTTP' would impact the current security group settings.
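Since ufw is inactive, it is not what is blocking port 80, so a sensible next step is to confirm that Nginx answers locally; if it does, the problem lies outside the instance (a sketch):
sudo ss -tlnp | grep :80    # nginx should be listening on 0.0.0.0:80
curl -I http://localhost/   # expect HTTP/1.1 200 OK from the welcome page
# if both succeed, recheck the security group, the subnet's
# network ACL, and that you are using the instance's current public IP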
Solution
Always make sure you reserve your IPs when using a static IP (on EC2, an Elastic IP): a non-reserved public IP changes whenever the instance is stopped and started, so you may well be hitting a stale address.
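A sketch of reserving and attaching an Elastic IP with the AWS CLI (the instance ID is a placeholder):
# allocate a new Elastic IP and note the returned AllocationId
aws ec2 allocate-address --domain vpc

# associate it with the instance so the address survives stop/start
aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0123456789abcdef0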
Versions
VirtualBox Version: 6.0.0 (I think)
Vagrant Version: 2.2.3
CentosBox: "centos/7"
Nginx Version: 1.16.1
uWSGI Version: 2.0.18
Django Version: 2.2.1
Background
I have two Vagrant boxes running, a test and a production box. The only difference is IP and core count. I've set up both so I can SSH directly into the boxes, instead of having to SSH into the host machine and then run 'vagrant ssh'.
General Issue
The production box will randomly boot me out of SSH (Connection reset by IP port 22), and then I get Connection refused. If I SSH into the host machine and then run 'vagrant ssh', I can still get in and everything seems to be fine; I can even still ping other computers on the network. But I can't access the box from outside the host, and this goes for the nginx server as well ('IP refused to connect.' in Chrome).
The issue will occasionally fix itself in a couple of minutes, but the majority of the time it requires a 'vagrant destroy' and 'vagrant up --provision' to recreate the box. I also occasionally get booted out of the host machine as well as the test box, but I can still access both externally afterwards (even the nginx server on test). I'm working over a VPN, and I occasionally get booted out of that as well, but I can reconnect when I notice.
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Please don't change it unless you know what you're doing.
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.hostname = "DjangoProduction"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network", ip: "IP"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group:'vagrant', owner:'vagrant'
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |v|
v.name = "DjangoProduction"
# test has these two commented out
v.memory = 6000
v.cpus = 4
end
#
# View the documentation for the provider you are using for more
# information on available options.
## Keys
### For SSH directly into the Box
# Work Laptop Key
config.vm.provision "file", source: ".provision/keys/work.pub", destination: "~/.ssh/work.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/work.pub >> ~vagrant/.ssh/authorized_keys"
# Personal Laptop Key
config.vm.provision "file", source: ".provision/keys/msi.pub", destination: "~/.ssh/msi.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/msi.pub >> ~vagrant/.ssh/authorized_keys"
##
required_plugins = %w( vagrant-sshfs )
required_plugins.each do |plugin|
exec "vagrant plugin install #{plugin};vagrant #{ARGV.join(" ")}" unless Vagrant.has_plugin? plugin || ARGV[0] == 'plugin'
end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision :shell, path: ".provision/boot.sh"
end
boot.sh
# networking
sudo yum -y install net-tools
ifconfig eth1 IP netmask 255.255.252.0
route add -net 10.1.0.0 netmask 255.255.252.0 dev eth1
route add default gw 10.1.0.1
# I manually set the gateway so it can be accessed through the VPN
## install, reqs + drop things to places - gonna leave all that out
Error messages
Django
This issue started popping up earlier this week, with Django sending me error emails. It's always random URLs; there's no consistency.
OperationalError at /
(2003, 'Can\'t connect to MySQL server on \'external-ip\' (110 "Connection timed out")')
I used to get this email once every other day and paid it no attention, but currently it's sending me at least 20 a day and the site is almost unusable: it's either really slow or I get Chrome errors ('ERR_CONNECTION_TIMED_OUT', 'ERR_CONNECTION_REFUSED', or 'ERR_CONNECTION_RESET'). It will be fine for an hour and then everything hits the fan.
I originally thought it was an issue with the DB, uWSGI, or Django, but working with it yesterday I realized there was a correlation between the timeouts and getting kicked out of SSH.
Nginx server settings (I haven't changed nginx.conf)
upstream django {
    server unix:///vagrant/abcd.sock;
}

server {
    listen 8080;
    return 301 https://$host$request_uri;
}

server {
    charset utf-8;
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        uwsgi_pass django;
        include /vagrant/project/uwsgi_params;
        uwsgi_read_timeout 3600;
        uwsgi_ignore_client_abort on;
    }

    location /static {
        alias /vagrant/static;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /vagrant/templates/core;
    }
}
uWSGI command used
uwsgi --socket abcd.sock --module project.wsgi --chmod-socket=664 --master --processes 8 --threads 4 --buffer-size=65535 --lazy
Nginx Error Logs
Nothing.
Messages file
It only shows the '(110 "Connection timed out")' dump when it happens.
Can you test the behaviour with the line "config.vm.synced_folder..." commented out?
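That is, something along these lines in the Vagrantfile, followed by a reload (a sketch of the suggestion above):
# config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group:'vagrant', owner:'vagrant'
and then:
vagrant reload --provision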
I recently got a new PC and put my Fedora 23/Apache VM on it. I had the Django site serving successfully through this VM before switching PCs. Now when I run:
systemctl start httpd.service
I don't appear to get errors, but when I put the IP address (inet, non-loopback) in my web browser, nothing happens; no data is sent.
my /etc/hosts file:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
my httpd.conf file:
ServerName localhost
When I run 'hostname -f', I get "localhost".
Is there another IP address I need to add to any of the above files in order to get this working? I looked at other posts related to this, but couldn't figure out which IP address, if any, to add.
Help?
Actually, it looks like the firewall never got permanently disabled, so when I did:
systemctl stop firewalld
systemctl disable firewalld
it came up fine.
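A less drastic alternative, if you would rather keep firewalld running, is to open just the web ports (a sketch using the stock firewalld services):
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload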
I am a newbie to the whole website thing... I would really appreciate some help here.
What I want to do is host a Django project on a remote server (Red Hat, CentOS release 6.5).
I've been running tests of the project on the remote server using the development server on port 8000:
python manage.py runserver *.*.*.*:8000 --insecure
In this case, the website works fine and is accessible from other machines.
0 errors found
September 04, 2014 - 08:13:03
Django version 1.6.4, using settings 'mysite.settings'
Starting development server at http://*.*.*.*:8000/
Quit the server with CONTROL-C.
Now I want to put it in production, and I've chosen to use the Apache HTTP server and mod_wsgi. I have httpd and mod_wsgi installed and activated. I changed the httpd.conf configuration file to:
Listen *:80 (I've also tried Listen *:8000 and Listen (IP address):8000)
#DocumentRoot "/var/www/html"
DocumentRoot "/testsite" (I put a plain HTML file under the directory just for testing)
ServerName <here is the url of the site, with no port number>
However, when I try to open the webpage, I always get a 503 error:
Service Temporarily Unavailable
The server is temporarily unable to service your request due to maintenance downtime
or capacity problems. Please try again later.
Apache/2.2.3 (CentOS) Server at <site url> Port 80
I tried a couple of things. (1) Checked what's using port 80:
~# sudo lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
httpd 28732 root 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28734 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28735 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28736 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28737 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28738 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28739 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28740 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
httpd 28741 apache 4u IPv6 19802111 0t0 TCP *:http (LISTEN)
~# service httpd status
httpd (pid 28732) is running...
(2) Restarted the Apache server:
service httpd restart
Stopping httpd: [ OK ]
Starting httpd: [ OK ]
(3) Placed a plain .html file in /var/www/html/testsite, the DocumentRoot directory, for testing.
(4) Tried to run Django on different ports (such as 8008, 8001, and 80),
e.g. python manage.py runserver *.*.*.*:8008 --insecure
0 errors found
September 04, 2014 - 07:56:18
Django version 1.6.4, using settings 'mysite.settings'
Starting development server at http://*.*.*.*:8008/
Quit the server with CONTROL-C.
As shown above, in the terminal it looks like it's working, but I cannot access the website from remote machines even when using the development server. I tried different port numbers, but only port 8000 works. And yet, why can I open the webpage on localhost when I change the port number? e.g. 127.0.0.1:8008 or 127.0.0.1:8080 will work.
I guessed it could be a firewall setting, so I went to /etc/sysconfig/iptables and found that under the web section there was only one line:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8000 -j ACCEPT
Then I added another line for testing:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 8001 -j ACCEPT
Then I tried the development server again with port 8001. Again, it looks like it's working on the remote server, but it is not accessible from remote machines.
Sorry if I've made this confusing, or if I'm asking something really silly. I have three questions that I really don't understand. First of all, the 503 error really annoys me: even though Apache shows as running (restarting httpd succeeds), nothing actually displays. Second, when using the development server, why can I only use port 8000 and no other port? Finally, the 503 error message says Apache runs on Port 80 even after I changed the Listen port to 8000 in the configuration file; why is this?
Thanks ahead for any help!
If that is your only configuration, I don't see how Apache could be aware of your Django app running on port 8000. There is no indication that you are making Apache connect or proxy requests to the running Django instance.
What you need to do is either
configure mod_wsgi for Apache (a minimal sketch follows below)
or
configure FastCGI (fcgi) for Apache.
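A minimal mod_wsgi sketch for Apache 2.2 (the version your 503 page reports); the project name and paths here are assumptions for illustration, not taken from your setup:
LoadModule wsgi_module modules/mod_wsgi.so

# run the Django app in its own daemon process group
WSGIDaemonProcess mysite python-path=/testsite/mysite
WSGIProcessGroup mysite
WSGIScriptAlias / /testsite/mysite/mysite/wsgi.py

<Directory "/testsite/mysite/mysite">
    <Files wsgi.py>
        # Apache 2.2 syntax; on 2.4 this becomes "Require all granted"
        Order allow,deny
        Allow from all
    </Files>
</Directory>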
You are free to choose any port with the Django development server. You can configure the IP address and the port the development server listens on with command-line parameters.
You can make the Django development server listen on all IP addresses, including the server's public IP addresses, with:
python manage.py runserver 0.0.0.0:8000
Also, the Apache logs can be read in /var/log/httpd on CentOS (or a similar directory such as /var/log/apache2 on other distributions), and the error log should explain why you are getting the 503.
I suspect iptables is related to your problem in some way, or else the Django development server is somehow not listening on the public IP address. You can easily test this by temporarily disabling the iptables firewall on the server.
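On CentOS 6 that test, plus the step that is easy to forget after editing /etc/sysconfig/iptables by hand (the rules only take effect once the service is restarted), might look like this:
# temporarily disable the firewall and test from a remote machine
sudo service iptables stop
# ... then bring it back
sudo service iptables start
# after editing /etc/sysconfig/iptables, reload the rules
sudo service iptables restart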