FastAPI deployed on AWS Elastic Beanstalk gives 504 error

I deployed my FastAPI API on AWS Elastic Beanstalk. The root endpoint '/' works fine, but my API uses Telethon, and any endpoint that uses Telethon returns a 504 when I try to access it. Here is an example of a function that gives a 504:
client = TelegramClient(session_file, api_id, api_hash)

@app.get("/api/authenticated-account")
async def is_authenticated():
    try:
        await client.connect()
        isAuthenticated = await client.is_user_authorized()
        if isAuthenticated:
            me = await client.get_me()
            return {'mobile': me.phone}
        else:
            return False
    except Exception as e:
        return {"Exception": str(e)}
Once it reaches the line await client.connect(), it returns the 504 after around 60-90 seconds. Everything works fine on my local machine, where it takes less than a second. I tried to update nginx.conf as follows:
client_header_timeout 60;
client_body_timeout 60;
keepalive_timeout 60;
proxy_read_timeout 120;
gzip off;
gzip_comp_level 4;
but I still get a 504 on any endpoint that uses Telethon.
Any suggestions?
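For context, on Amazon Linux 2 Elastic Beanstalk platforms, nginx overrides like the ones above are usually shipped with the application bundle under .platform/nginx/conf.d/ rather than edited on the instance directly; a minimal sketch (file name is illustrative):
# .platform/nginx/conf.d/timeouts.conf  -- illustrative; included into the platform's http block
proxy_connect_timeout 120;
proxy_send_timeout 120;
proxy_read_timeout 120;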

Related

Some Flask routes work on HTTP but not on HTTPS

I have a Flask app that runs fine on my local machine with all routes working. However, when I run the same Flask app using gunicorn on an EC2 Ubuntu server, only the "/" route works and the others don't.
app.py
@app.route('/')
def index():
    return render_template("someForm.html")

@app.route('/someForm', methods=['GET', 'POST'])
def interestForm():
    args = request.args
    args = args.to_dict()
    emailID = args.get('email')
    # <Some python logic here>
    return render_template("someForm.html", somevariable1=value1, somevariable2=value2)

@app.route('/submit', methods=['GET'])
def submit():
    args = request.args
    args = args.to_dict()
    email = args.get('email')
    # <Some python backend logic that updates the form values of the corresponding email in the database>
    if updateTable(email, updateDic) == 1:
        return redirect("someURL")
    else:
        return render_template("someForm.html", error_message="Issue with updating")

if __name__ == '__main__':
    app.jinja_env.auto_reload = True
    app.config['TEMPLATES_AUTO_RELOAD'] = True
    app.run(debug=True, host='0.0.0.0', port=8000, threaded=True)
More Info on the deployment
I run the flask app using the command below
nohup gunicorn -b 0.0.0.0:8000 app:app &
Also configured the NGINX server to point to localhost:8000
Inside /etc/nginx/sites-available/default
upstream flaskhelloworld {
    server 127.0.0.1:8000;
}
# Some code above
location / {
    try_files $uri $uri/ =404;
    include proxy_params;
    proxy_set_header Host $host;
    proxy_pass http://flaskhelloworld;
}
# some code below
I have configured DNS using Cloudflare and issued an SSL certificate using certbot. All works fine in that I can access https://www.domain_name.com, but I can't access https://www.domain_name.com/someForm, nor can I access https://www.domain_name.com/someForm?email=kaushal@gmail.com through the browser (tested on Chrome and Edge).
Test cases and other checks
I tried to curl https://www.domain_name.com and it returns the HTML correctly. But when I curl https://www.domain_name.com/someForm, I get the result below:
<!doctype html>
<html lang=en>
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
Confusing case
Interestingly, when I use HTTP instead of HTTPS via Thunder Client or curl, it returns the proper results with a 200 status code.
curl http://domain_name.com/someForm?email=kaushal@gmail.com
(screenshot of the successful response: https://i.stack.imgur.com/3LF9m.png)
The only thing that works with HTTPS is the home route (i.e. "/").
Based on what you have said, I think all of your requests are going to /.
Here is a working configuration for HTTPS using certbot that will route things to the correct places:
server {
    server_name mywebsite.com www.mywebsite.com;

    location / {
        include proxy_params; # this is what I think you are missing
        proxy_pass http://unix:/home/user/project/myProject.sock; # this is from a gunicorn configuration but it could just be a proxy pass
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/mywebsite.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/mywebsite.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
    if ($host = mywebsite.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name mywebsite.com www.mywebsite.com;
    listen 80;
    return 404; # managed by Certbot
}
You should only write this part yourself:
location / {
    include proxy_params; # this is what I think you are missing
    proxy_pass http://unix:/home/user/project/myProject.sock; # this is from a gunicorn configuration but it could just be a proxy pass
}
and allow certbot to do the rest
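For reference, a typical certbot invocation that generates the managed lines above (assuming the nginx plugin is installed; the domain names are placeholders) looks like:
sudo certbot --nginx -d mywebsite.com -d www.mywebsite.com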

Nginx 502 Bad Gateway when deploying a Django project even though it works with uWSGI

The nginx web server is successfully installed and working, but I get a 502 Bad Gateway error.
When I check the socket file path, I can't see any file there.
/home/bilal/uwsgi/sites/testproject.ini ->
[uwsgi]
home = /home/bilal/my-first-project/venv
chdir = /home/bilal/my-first-project
wsgi-file = /home/bilal/my-first-project/projectname/wsgi.py
socket = /home/bilal/uwsgi/testproject.sock
vacuum = true
chown-socket = bilal:www-data
chmod-socket = 660
/etc/nginx/sites-available/testproject ->
server {
    listen 80;
    server_name <domain> <www.domain>;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/home/bilal/uwsgi/testproject.sock;
    }
}
When I try to connect to the server by IP address, I get the "Welcome to nginx" page, because the IP isn't listed in server_name. But when I try to connect to the server by domain, I get this error:
502 Bad Gateway
nginx/1.14.0 (Ubuntu)
I think this problem is related to the .sock file, but I don't know how to handle it.
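For what it's worth, the socket file is only created once uWSGI is actually started with that ini (assuming uWSGI is installed and run as a user that can write to /home/bilal/uwsgi/), e.g.:
uwsgi --ini /home/bilal/uwsgi/sites/testproject.ini
ls -l /home/bilal/uwsgi/testproject.sock  # the socket should now exist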

Nginx+uWSGi+Django long task Bad Gateway error

I have a long task in one of my Django views that takes 120-200 seconds to generate the response.
For this particular view, Nginx returns 502 Bad Gateway after 1 minute, with this error message in the logs:
[error] 7719#7719: *33 upstream prematurely closed connection while reading response header from upstream,
here are my Nginx configurations:
upstream DjangoServer {
    server 127.0.0.1:8000;
    keepalive 300;
}

location / {
    include proxy_params;
    proxy_pass http://DjangoServer;
    allow all;
    proxy_http_version 1.1;
    proxy_set_header X-Cluster-Client-Ip $remote_addr;
    client_max_body_size 20M;
    keepalive_timeout 300;
    proxy_connect_timeout 300;
    proxy_send_timeout 300;
    proxy_read_timeout 300;
    send_timeout 300;
}
And here are my uWSGI Configurations:
uid=www-data
gid=www-data
http=127.0.0.1:8000
http-keepalive=300
master=1
vacuum=1
workers=2
threads=5
log-5xx=1
Note:
The Nginx and uWSGI work fine for all other views.
Django development server can run the task with no problem.
After the Nginx 502 error, uWSGI keeps running in the background and completes the job (according to in-view print statements).
If I try to connect to uWSGI via the browser, after a while (less than 120s) it will say ERR_EMPTY_RESPONSE.
You can assume the task looks like this:
import time
from django.http import HttpResponse

def long_task_view(request):
    start_time = time.time()
    print(start_time)
    # doing stuff
    time.sleep(130)
    print(time.time() - start_time)
    return HttpResponse("The result")
You can try increasing Nginx's timeout:
vim /etc/nginx/nginx.conf
Add this in http:
http {
    ...
    fastcgi_read_timeout 300;
    ...
}
But the best practice is to hand long-running work off to an asynchronous process. I usually use Celery for async tasks.
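A minimal sketch of that approach (assuming a Celery broker such as Redis is available; module and task names are illustrative):
# tasks.py -- defines the Celery app and the slow task
from celery import Celery

celery_app = Celery("myproject", broker="redis://localhost:6379/0")

@celery_app.task
def long_task(data):
    # the slow work that used to run inside the view goes here
    ...

# views.py -- the view now enqueues the task and returns immediately
from django.http import HttpResponse
from .tasks import long_task

def long_task_view(request):
    long_task.delay({"user_id": request.user.id})
    return HttpResponse("Task started", status=202)
This keeps the HTTP request well under any proxy timeout, since the response no longer waits for the 120-200 second job.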

Setting up Django Channels

I have been trying to integrate Django channels into my existing Django application.
Here's my routing.py:
from channels.routing import route

channel_routing = [
    route('websocket.receive', 'chat.consumers.ws_echo', path=r'^/chat/$'),
]
Here's my consumers.py:
def ws_echo(message):
    message.reply_channel.send({
        'text': message.content['text'],
    })
I am trying to create a socket by doing this:
ws = new WebSocket('ws://' + window.location.host + '/chat/');
ws.onmessage = function(message) {
    alert(message.data);
};
ws.onopen = function() {
    ws.send('Hello, world');
};
When I run this code, I get the following error in my console:
WebSocket connection to 'ws://localhost:8000/chat/' failed: Error during WebSocket handshake: Unexpected response code: 404
On my server, I get the following error:
HTTP GET /chat/ 404
Based on the error, I think Django is serving a plain HTTP response rather than upgrading to a WebSocket connection.
Any help on this issue is greatly appreciated.
The issue with my setup was my nginx configuration. All I had to do was add the forwarding lines below, and that fixed the problem.
location /chat {  # Django Channels
    proxy_pass http://0.0.0.0:8001;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
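Note that this proxy assumes a Channels interface server is actually listening on port 8001; with the Channels 1.x style routing shown above that would typically be Daphne plus a worker, e.g. (module path is illustrative):
daphne -b 0.0.0.0 -p 8001 myproject.asgi:channel_layer
python manage.py runworker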

nginx location and Django auth

I'm trying to create an NGINX redirect based on a URL param in the query string. Basically, having:
http://localhost/redirect/?url=https://www.google.it/search?dcr=0&source=hp&q=django&oq=django
and
location /redirect/ {
    proxy_cache STATIC;
    # cache status code 200 responses for 10 minutes
    proxy_cache_valid 200 1d;
    proxy_cache_revalidate on;
    proxy_cache_min_uses 3;
    # use the cache if there's an error on the app server or it's updating from another request
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    # don't let two requests try to populate the cache at the same time
    proxy_cache_lock on;
    # Strip out query param "timestamp"
    if ($args ~ (.*)&timestamp=[^&]*(.*)) {
        set $args $1$2;
    }
    return 302 $arg_url$args;
}
Now, only Django-authenticated users (JWT/cookie) should be able to use the /redirect/?url= endpoint, so is it possible to implement a session/cookie check without opening a redirect proxy to the entire world?
I could do it at the Django level (https://github.com/mjumbewu/django-proxy/blob/master/proxy/views.py), but I suppose it's faster and less computationally expensive at the NGINX level.
Thanks,
D
Redirecting and proxying are different things; to get django-proxy-like functionality you need to use nginx's reverse proxy option instead of a redirect.
# django-proxy code fragment
response = requests.request(request.method, url, **requests_args)
proxy_response = HttpResponse(
    response.content,
    status=response.status_code)
Nginx config for reverse proxying & auth
server {
    listen 80;
    server_name yourdomain.com;

    location / {
        # use django for authenticating the request
        auth_request /django-app/;
        # a proxy to otherdomain
        proxy_pass http://otherdomain.com;
        proxy_set_header Host otherdomain.com;
    }

    location /django-app/ {
        internal;  # protect from public access
        proxy_pass http://django-app;
    }
}
The Django app should return a 200 status code for authenticated users and 401 otherwise; you can read more details about auth_request in the nginx auth_request module documentation.
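A minimal sketch of such an auth endpoint on the Django side (the view name and URL wiring are illustrative):
# views.py -- nginx's auth_request only looks at the status code: 2xx allows, 401/403 denies
from django.http import HttpResponse

def auth_check(request):
    if request.user.is_authenticated:
        return HttpResponse(status=200)
    return HttpResponse(status=401)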
Based on the previous answers (thanks!) this is the solution:
http {
    upstream app_api {
        # server 172.69.0.10:8000;
        server api:8000;
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response (in case the Unicorn master nukes a
        # single worker for timing out).
        # server unix:/var/www/gmb/run/gunicorn.sock fail_timeout=0;
    }

    server {
        location = /auth {
            proxy_pass http://app_api/api-auth/login/;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }

        location /redirect/ {
            auth_request /auth;
            proxy_cache STATIC;
            # cache status code 200 responses for 10 minutes
            proxy_cache_valid 200 1d;
            proxy_cache_revalidate on;
            proxy_cache_min_uses 3;
            # use the cache if there's an error on the app server or it's updating from another request
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
            # don't let two requests try to populate the cache at the same time
            proxy_cache_lock on;
            # Strip out query param "timestamp"
            if ($args ~ (.*)&timestamp=[^&]*(.*)) {
                set $args $1$2;
            }
            return 302 $arg_url$args;
        }
    }
}