Sentry does not work when I deploy the project on the server - Django

I have a Django project with Sentry configured.
When I run the project locally, I can see errors in my Sentry panel, but when I push the project to the server and run it, I can't see the errors in the Sentry panel.
This is my config code:
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.celery import CeleryIntegration
sentry_sdk.init(
dsn="https://********#****.ingest.sentry.io/*****",
integrations=[DjangoIntegration(), CeleryIntegration()],
# Set traces_sample_rate to 1.0 to capture 100%
# of transactions for performance monitoring.
# We recommend adjusting this value in production.
traces_sample_rate=1.0,
# If you wish to associate users to errors (assuming you are using
# django.contrib.auth) you may enable sending PII data.
send_default_pii=True
)
I also dockerized the project, and I had a problem with Gunicorn that I was able to fix, but Sentry still doesn't work when I run the project on the server.

In some cases people say that setting the Sentry parameter send_default_pii=False helps, or debug=True.
I have a similar issue: Sentry works when uWSGI is started by hand, but does not work when the process is started by Fabric.
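If it helps to diagnose, one option is to turn on the SDK's own diagnostic logging on the server. This is only a sketch of the same init call with debug enabled, not a confirmed fix:

import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration
from sentry_sdk.integrations.celery import CeleryIntegration

sentry_sdk.init(
    dsn="https://********#****.ingest.sentry.io/*****",  # same DSN as above
    integrations=[DjangoIntegration(), CeleryIntegration()],
    traces_sample_rate=1.0,
    send_default_pii=True,
    # Print SDK activity (event sending, transport/DSN errors) to stderr,
    # so it shows up in the Docker / Gunicorn logs on the server.
    debug=True,
)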

Related

Internal server error running Django on Heroku when accessing from the browser

I think this is a simple fix, but I've deployed quite a few Django apps to Heroku and I still can't figure out what's going on.
Accessing https://dundjeon-finder.herokuapp.com/ gives me a 500 error when using the browser/curl, but if I shell into the app using heroku run ./manage.py shell I can render the views no problem. My logs aren't telling me anything (just that the response is 500) despite DEBUG being set to True, and Sentry isn't receiving an error (it has previously when the database env variable was set badly), so I'm assuming it's something to do with the way the request works.
The repo is public; any help would be much appreciated! The settings file is here.
Well, it was because of using ASGI instead of WSGI. I'm not sure why that caused the errors, but I will do some more searching.
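For comparison, a Heroku Procfile that points Gunicorn at the WSGI entry point instead of the ASGI one would look roughly like this (myproject is a placeholder module name, not taken from the repo):

web: gunicorn myproject.wsgi --log-file -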

What is an efficient way to develop Airflow plugins? (without restarting the webserver for each change)

I am looking for an efficient way to develop plugins within Airflow.
Current behavior: I change something in Python files e.g. test_plugin.py, reload the page in browser and nothing happens until I restart the webserver. This is most annoying and time consuming.
Desired behavior: I change something in Python files and the change is reflected after reloading the app in the browser.
As Airflow is based on Flask, and in Flask the desired behavior is achievable by running it in debug mode (export FLASK_DEBUG=1, then start the Flask app): can I achieve the Flask behavior somehow in Airflow?
It turns out that this was indeed a bug with the Airflow CLI's webserver --debug mode; future versions will have the expected behavior.
Issue: https://issues.apache.org/jira/browse/AIRFLOW-5867
PR: https://github.com/apache/airflow/pull/6517
In order to run Airflow with live reloading, run the following command (Airflow 1.10.7+):
$ airflow webserver --debug
In contrast to the code modification suggested by @herrjeh42 below, make sure that your configuration does not include unit_test_mode = True, in order to enable reloading.
Cheers!
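For clarity, the relevant bit of airflow.cfg would then look something like this (a sketch, assuming the setting sits in its usual [core] section):

[core]
# Must not be True, otherwise the --debug reloader stays disabled.
unit_test_mode = False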
You can force reloading of the Python code by starting the Airflow webserver in debug & reload mode. As of Airflow 1.10.5, I had to modify airflow/bin/cli.py (in my opinion the line is buggy).
old:
app.run(debug=True, use_reloader=False if app.config['TESTING'] else True,
new:
app.run(debug=True, use_reloader=True if json.loads(app.config['TESTING'].lower()) else False,
Change in airflow.cfg
unit_test_mode = True
Start the webserver with
airflow webserver -d

Django PostgreSQL Heroku: Operational Error - 'FATAL too many connections for role "usename"'

I am running a web application using Django and Django Rest Framework on Heroku with a postgresql and redis datastore. I am on the free postgresql tier which is limited to 20 connections.
This hasn't been an issue in the past, but recently I started using Django Channels 2.0 and the Daphne server (switched my Procfile from gunicorn to daphne as in this tutorial), and now I have been running into all sorts of weird problems.
The most critical is that connections to the database are being left open, so as the app runs, the number of connections keeps increasing until it reaches 20 and gives me the following error message: Operational Error - 'FATAL too many connections for role "usename"'
Then I have to manually go to the shell and type heroku pg:killall each time. This is obviously not a feasible solution, and this is production, so my users can't access the site and get 500 errors. I would really appreciate any help.
I have tried:
Adding this to my different views in different places:
from django.db import connections

connections.close_all()

# or, closing each configured connection explicitly:
for alias in connections:
    connections[alias].close()
I also tried doing SELECT * FROM pg_stat_activity and saw a bunch of rows, but have no idea what to make of them.
We figured out what the problem is. I assume that you are using dj_database_url as in the Heroku manual. All you have to do is drop conn_max_age:
db_from_env = dj_database_url.config()
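For context, a sketch of the before/after in settings.py (the conn_max_age value shown is only illustrative):

import dj_database_url

# Before: connections were kept alive between requests
# db_from_env = dj_database_url.config(conn_max_age=500)

# After: no conn_max_age, so Django falls back to its default of 0 and
# closes the database connection at the end of every request.
db_from_env = dj_database_url.config()
DATABASES['default'].update(db_from_env)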
Here is the solution:
Nowadays Heroku provides the django_heroku package, which handles the default Django-on-Heroku app configuration. When you call django_heroku.settings(locals()) at the end of your settings.py, the database CONN_MAX_AGE setting is set to 600 seconds, whereas Django's default is 0, which means all database connections are closed after each request completes. If you don't override CONN_MAX_AGE after calling django_heroku.settings(locals()), it stays at 600, which means DB connections remain alive for 600 seconds, causing this trouble.
Put these lines at the end of your settings.py; the override must come after the Heroku config call:
django_heroku.settings(locals())
DATABASES['default']['CONN_MAX_AGE'] = 0
I think I may have solved it.
One of the changes I made was modifying how I closed my connections.
The key is to close old connections before and after various view functions.
from django.db import close_old_connections
from django.views.decorators.csrf import csrf_exempt
from rest_framework.decorators import api_view


@csrf_exempt
@api_view(['GET'])
def search(request):
    close_old_connections()
    # do stuff
    close_old_connections()

Django HTML5 cache in Django 1.7

I've got some problems implementing the third-party Django package django-html5-appcache.
The documentation specifies that the migrate command must be executed, but when I execute the command:
python manage.py migrate html5_appcache
it outputs:
"No migrations to apply"
However, I decided to complete the installation steps anyway, but when testing it the manifest file appears to be empty (according to the docs, URLs are supposed to be autodiscovered):
CACHE MANIFEST
# version: $0$
# date: $-$
NETWORK:
*
And Chrome Console Outputs:
Creating Application Cache with manifest http://127.0.0.1:8000/manifest.appcache
127.0.0.1/:1 Application Cache Checking event
127.0.0.1/:1 Application Cache Downloading event
127.0.0.1/:1 Application Cache Progress event (0 of 0)
127.0.0.1/:1 Application Cache Cached event
I'm using Django 1.7.
Does anybody have experience with this Django package?
I suspect that you're supposed to put the following code in your urls.py to get autodiscover.
Enable appcache discovery by adding the lines below in urls.py:
import html5_appcache
html5_appcache.autodiscover()
(Source: documentation here: https://django-html5-appcache.readthedocs.org/en/latest/installation.html)
Also, python manage.py migrate commands are used to alter the DB structure and should not affect your urls.py.

WSGI loads settings of the wrong project: how to debug?

I have two Django-based web applications on the same server.
One of them I'll call CORRECT_PROJECT and the other one WRONG_PROJECT.
CORRECT_PROJECT is installed using a virtual environment and uses a different version of Django (1.4). There's a very strange problem: sometimes, usually after a logout or an email confirmation (but sometimes it looks just random!), the server returns a 500 internal server error and the error log says:
"Could not import settings 'WRONG_PROJECT.settings' (Is it on sys.path?): No module named WRONG_PROJECT.settings, referer: CORRECT_PROJECT/URL"
That is, by loading CORRECT_PROJECT, sometimes the system (WSGI? Apache? Django?) tries to load the settings from WRONG_PROJECT.
By hitting refresh aggressively the error disappears.
What could be wrong? How can I debug?
CORRECT_PROJECT uses WSGI in daemon mode.
Solution
Use daemon mode: http://modwsgi.readthedocs.org/en/latest/configuration-directives/WSGIDaemonProcess.html
You are using wsgi.py from Django 1.4. That will not work when hosting multiple web apps in the same process.
Best solution is to use daemon mode and delegate each to a distinct daemon process group.
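For illustration, a minimal Apache/mod_wsgi configuration along those lines might look like this (paths and process-group names are placeholders, not taken from the question):

# Each project gets its own daemon process group, so their settings
# modules can never leak into each other's interpreter.
WSGIDaemonProcess correct_project python-path=/srv/CORRECT_PROJECT
WSGIDaemonProcess wrong_project python-path=/srv/WRONG_PROJECT

WSGIScriptAlias /correct /srv/CORRECT_PROJECT/CORRECT_PROJECT/wsgi.py
<Location /correct>
    WSGIProcessGroup correct_project
</Location>

WSGIScriptAlias /wrong /srv/WRONG_PROJECT/WRONG_PROJECT/wsgi.py
<Location /wrong>
    WSGIProcessGroup wrong_project
</Location>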
If you can't do that, change the wsgi.py files of both so they do not use:
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
but instead use:
os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"
Change mysite.settings as necessary.
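A minimal wsgi.py sketch with that change applied (mysite is the same placeholder name used above):

# mysite/wsgi.py
import os

from django.core.wsgi import get_wsgi_application

# Force the settings module instead of using setdefault(), so a value
# left over from another app hosted in the same process cannot win.
os.environ["DJANGO_SETTINGS_MODULE"] = "mysite.settings"

application = get_wsgi_application()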