Django/Gunicorn Logging Redis exceptions

Background
I noticed that when Django is misconfigured for its Redis connection, for example when transport encryption is enabled but the auth token is incorrect, the application fails silently with nothing in the logs other than NGINX reporting the following:
2023/02/13 05:17:44 [info] 35#35: *248 epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream, client: *.*.*.*, server: _, request: "GET /sup/pleasant/ HTTP/1.1", upstream: "http://127.0.0.1:8000/sup/pleasant/", host: "myhost.com"
The application logs do not report anything until I refresh the page a few times (F5). At this point, the container locks up for some reason and stops responding to ALB health checks, and is terminated by the Fargate service.
Interestingly, when I turn the Redis server off and have Django Redis settings set to "IGNORE_EXCEPTIONS": False, then connection errors are reported in the application logs.
django_redis.exceptions.ConnectionInterrupted: Redis ConnectionError: Connection closed by server.
NOTE: Just to make it clear, even with "IGNORE_EXCEPTIONS": False, nothing is reported when the Redis server is online but the Django/Redis configuration details are misconfigured.
Redis is being used for channels and caching.
I've tested with DJANGO_REDIS_LOG_IGNORED_EXCEPTIONS but it hasn't had an impact.
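For reference, here is a minimal sketch of the settings fragment I'm describing (the Redis URL and logger name below are placeholders, not my real values):

# settings.py -- django-redis cache options discussed above
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "rediss://my-redis-host:6379/0",  # hypothetical TLS endpoint
        "OPTIONS": {
            "IGNORE_EXCEPTIONS": False,  # raise connection errors instead of swallowing them
        },
    }
}
# Only consulted when IGNORE_EXCEPTIONS is True: logs the exceptions it swallows.
DJANGO_REDIS_LOG_IGNORED_EXCEPTIONS = True
DJANGO_REDIS_LOGGER = "django_redis"  # optional logger name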
Question
How do I enable exception logging for Redis connection issues other than the server being offline?
Also, why does the container stop responding to health checks?

Related

Istio-proxy did not receive request from host

I have an Express.js client app and an Express.js server app with almost the same Istio configuration. The client cannot be accessed through its host URL, while the server works fine. Curling the client's host URL just waits forever, and I cannot find any related traffic log in the istio-proxy of the client pod. This is very confusing. What could be the possible reason for this problem?
Running istioctl analyze on the live cluster does not give any helpful information.

Getting "curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)"

I have a Django app that returns a large JSON response when an API is called.
The problem is that when I request the data, the response is truncated, which crashes the frontend.
I'm using Cloudflare for DNS and SSL, along with the caching and performance features it provides.
I tried curling the API and got the following error from curl:
curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
I tried disabling Cloudflare, but that didn't work. On my localhost, however, everything works fine.
HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
Closing connection 0
TLSv1.2 (OUT), TLS alert, Client hello (1): curl: (92) HTTP/2 stream 1 was not closed cleanly: INTERNAL_ERROR (err 2)
The JSON should be fetched in its entirety without being truncated.
I got the same error with an application behind an AWS Application Load Balancer, using the command:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION"
15:14:30 curl: (92) HTTP/2 stream 1 was not closed cleanly: PROTOCOL_ERROR (err 1)
I had to force the use of HTTP/1.1 with the argument --http1.1
So the final command is:
curl "https://console.aws.example/api/xxx" -b "SESSION=$SESSION" --http1.1
I had this issue with AWS's Application Load Balancer (ALB). The problem was that I had Apache configured to use http2, but behind an ALB. The ALB supports http2 by default:
Application Load Balancers provide native support for HTTP/2 with HTTPS listeners. You can send up to 128 requests in parallel using one HTTP/2 connection. The load balancer converts these to individual HTTP/1.1 requests and distributes them across the healthy targets in the target group. Because HTTP/2 uses front-end connections more efficiently, you might notice fewer connections between clients and the load balancer. You can’t use the server-push feature of HTTP/2.
So, curl was using HTTP/2 to connect to the ALB, which was then converting it into an HTTP/1.1 request. Apache was adding headers to the response asking the client to upgrade to HTTP/2, which the ALB just passed back to the client, and curl read them as invalid since it was already using an HTTP/2 connection. I solved the problem by disabling HTTP/2 on my Apache instance. Since it will always be behind an ALB, and the ALB is never going to use HTTP/2 toward the target, there is no point in having it.
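For illustration, one way to disable it (a sketch assuming Apache 2.4.17+ with mod_http2; the directive goes in the main config or the relevant VirtualHost):

# Advertise only HTTP/1.1 so Apache stops asking clients to upgrade
# to HTTP/2 behind the ALB.
Protocols http/1.1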
Fix or Remove the Content-Length header in your HTTP request.
I was trying to connect to an AWS Gateway when this issue occurred to me. I was able to get the correct response using POSTMAN but if I copied over the same headers to curl, it would give me this error.
What finally worked for me was removing the Content-Length header, since the length curl calculated for the request didn't match the value copied over from POSTMAN.
Since I was only testing the API, this was fine in my case, but I wouldn't suggest removing this header in production. If this is happening to you in a codebase, check that the length is being calculated correctly.
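To illustrate: rather than copying POSTMAN's Content-Length across, just let curl compute it from the body. The URL and payload here are placeholders:

# curl derives Content-Length from the -d payload automatically;
# don't pass a copied, stale value alongside it.
curl -X POST "https://api.example.com/endpoint" \
     -H "Content-Type: application/json" \
     -d '{"key": "value"}'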
With Nginx, you can experience this error in curl by having two http2 virtual hosts listening on the same server name. Running a check on your nginx config file will throw a warning letting you know that something isn't right. Fixing/removing the duplicate listing fixes this problem.
# nginx -t
nginx: [warn] conflicting server name "example.com" on 0.0.0.0:443, ignored
nginx: [warn] conflicting server name "example.com" on [::]:443, ignored
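For illustration, the kind of duplicate that triggers those warnings looks roughly like this (hypothetical config, certificates and locations omitted):

server {
    listen 443 ssl http2;
    server_name example.com;
    # ... ssl_certificate, locations ...
}

server {
    listen 443 ssl http2;   # duplicate of the block above -- merge or remove it
    server_name example.com;
    # ... ssl_certificate, locations ...
}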

OperationalError, connecting to another Django/DB project

I'm trying to connect to the database of another project (project-B) that also uses Django. How do I resolve the following error?
Here's the error from Django debug:
could not connect to server: Connection refused
    Is the server running on host "111.222.333.444" and accepting
    TCP/IP connections on port 5432?
Here's the firewall from project-B
Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To      Action      From
--      ------      ----
5432    ALLOW IN    111.222.333.444
I also added the host to ALLOWED_HOSTS:
ALLOWED_HOSTS = ['111.222.333.444']
Other than that, I have not modified anything from project-B.
The firewall only allows requests coming from 111.222.333.444 (project-B's own address), which effectively blocks every host except same-host connections. Note that ALLOWED_HOSTS has no bearing on database connections; what matters is that the firewall rule permits the address of the server that is connecting.
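A sketch of a corrected rule on project-B's server (the connecting server's address below is a placeholder):

# Allow Postgres from the *connecting* application server,
# not from project-B's own address:
sudo ufw allow from <app-server-ip> to any port 5432 proto tcp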

Mezzanine contact form produces "upstream prematurely closed" error

My website uses Mezzanine 4.2.3 with Django-Oscar 1.5.2 and Django 1.10.8, running on Ubuntu 16.04 on DigitalOcean. When I use the Mezzanine contact form on the demo page created with createdb, running from my own computer, it successfully sends out emails. But when I test it on my DigitalOcean droplet running Ubuntu 16.04, I get 502 Bad Gateway.
The nginx error log records this error: *13 upstream prematurely closed connection while reading response header from upstream, client: [an IP I can't identify], server: [my website url], request: "POST /contact/ HTTP/1.1", upstream: "http://unix:/home/my-django-app/my-django-app.sock:/contact/", host: "[my website url]", referrer: "[my website url]/contact/". The number varies between *1, *7, and *13, but the text is the same.
I googled this and found various possible solutions:
Increasing the timeout for the nginx proxy_pass. This involved adding proxy_connect_timeout 75s; and proxy_read_timeout 300s; to the nginx config, and then adding --timeout 300 to gunicorn (see the sketch after this list). This produced an actual timeout error: *21 upstream timed out (110: Connection timed out) while reading response header from upstream.
Uncommenting precedence ::ffff:0:0/96 100 in /etc/gai.conf.
Allowing port 587 in UFW. This shouldn't matter because if I'm using gmail, then this should be a port on Google's side of things, right? I'm only doing this because I see various solutions (most unresolved) talking about the need to unblock this port.
Making nginx listen on port 587: server {listen 80; listen 587; ... listen 443 ssl; ...}.
With nginx listening on port 587, sudo netstat -tulnp | grep 587 shows:
tcp 0 0 0.0.0.0:587 0.0.0.0:* LISTEN 12815/nginx -g daem
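For reference, here is a sketch of where the timeout settings from the first attempt live (the socket path is taken from the nginx error above; the project module name is a placeholder):

# nginx site config -- proxy timeouts for the gunicorn upstream
location / {
    proxy_pass http://unix:/home/my-django-app/my-django-app.sock;
    proxy_connect_timeout 75s;
    proxy_read_timeout 300s;
}

# matching gunicorn worker timeout
gunicorn myproject.wsgi:application --timeout 300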
My email settings seem fine:
EMAIL_USE_TLS = True
EMAIL_HOST = "smtp.gmail.com"
EMAIL_HOST_USER = "!#%%&&*%^#$^*%#gmail.com"
EMAIL_HOST_PASSWORD = "^*#^##$%&#$%%#$"
EMAIL_PORT = 587
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
I tried SSL with port 465 too. It worked with my local copy but not on the server. Same error message of 502.
I think "upstream" means gunicorn, so I set an error log for it, but all it recorded were status codes 200 and 302 when the page loaded. It didn't log anything when 502 happened.
I'm out of ideas. What am I missing?
Update 3 June 2018:
$ telnet smtp.gmail.com 587
Trying 108.177.96.109...
Trying 108.177.96.108...
Trying 2a00:1450:4013:c01::6c...
telnet: Unable to connect to remote host: Network is unreachable
Tried this with 465 and 25 too. Does this mean DigitalOcean is blocking the connection? There's precedent.
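For what it's worth, the same reachability test can be run from Python, which takes nginx and gunicorn out of the picture entirely (host and port as in the settings above):

import smtplib

# Attempts a TCP connect plus STARTTLS against Gmail's submission port.
# Raises OSError/TimeoutError if the provider blocks outbound SMTP.
with smtplib.SMTP("smtp.gmail.com", 587, timeout=10) as conn:
    conn.ehlo()
    conn.starttls()
    print("SMTP reachable, STARTTLS OK")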
Yes, DigitalOcean blocks SMTP. Their reply to my email:
To assist with the restriction of SMTP services on your account, can you please let us know the following:
- Your name.
- What business or individual you are going to send mail on behalf of, as well as their website (if one exists).
- What kind of mail you're going to be sending (password resets, newsletters, marketing mail, transactional mail such as order confirmations).
- If you're sending on behalf of a business or an individual that is not yourself, what is your relationship to that business or individual.
Also, as we are a US based company, I'd like to make sure you understand that we require all users of our network to follow both the requirements of the CAN-SPAM act (https://www.ftc.gov/tips-advice/business-center/guidance/can-spam-act-compliance-guide-business) in regards to any non-transactional mail sent to any subscriber anywhere in the world, as well as the CASL (http://fightspam.gc.ca/eic/site/030.nsf/eng/home) for any email you send to any subscribers in Canada.
Additionally, there are additional restrictions on sending email to users in Europe created by both the EU itself and its member countries, and we would recommend that you investigate and follow all relevant guidelines for the countries of any European subscribers you may have.
I answered them and they replied:
Thank you for the information you have provided. We've reviewed the information and have removed the SMTP block from your account.
Just to reiterate - we require our subscribers to follow the CAN-SPAM act for all email, and the CASL for any email sent to a subscriber in Canada. If you do not, and we receive complaints of violations, we can revoke access to SMTP at our discretion with no further warning.

How to close a Jetty HttpConnection after the web service method is executed

While configuring an embedded Jetty server (Jetty 9.3.8), I am adding a connection listener to the server connector to keep track of the opening and closing of Jetty HttpConnections.
The Jetty thread [qtp..........] that is serving the current request opens an HttpConnection. After finishing the current request, how do I tell Jetty to close this HttpConnection? I do see all opened connections from different clients being closed in the listener callback, but only some time after the requests have been served.
I need to close the connection once I finish with the request, that is, when I am done with a particular client.
Closing of the connection is the responsibility of the HTTP spec and protocol.
Note: Be aware that connection close is HTTP version specific, following different semantics for HTTP/1.0 vs HTTP/1.1 vs HTTP/2
In general, connection open/close handling is negotiated between the HTTP client and the HTTP server, and needs to follow those rules. Having the server arbitrarily close the connection based on some non-HTTP-spec behavior is ripe for abuse and will cause problems with intermediaries (such as proxies, routers, load balancers, and caching servers).
Try setting this header in the request: Connection: close
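For example, with curl (the URL is a placeholder), the client can request per-request teardown like this:

# Ask the server to close the connection after this response
# (HTTP/1.1 semantics).
curl -H "Connection: close" "http://localhost:8080/my-service"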