Django deployment error: django.core.exceptions.ImproperlyConfigured

Hey, I have a Django application that works fine locally, but it's not working when hosted on the web; it shows the error below:
django.core.exceptions.ImproperlyConfigured: Error loading pyodbc module: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/site/wwwroot/antenv/lib/python3.7/site-packages/pyodbc.cpython-37m-x86_64-linux-gnu.so)
Did I miss anything at the time of hosting?

Assuming you got this issue during deployment via an Azure DevOps pipeline, you could specify an exact version of Python (including the minor version) in the UsePythonVersion task.
For the supported Python versions, you can check the software installed on the hosted agent image:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#software
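For illustration, a minimal pipeline step pinning the interpreter might look like this (the task and its inputs are the standard Azure DevOps ones; '3.7' is an example version spec, so use whichever 3.7.x release the agent image actually lists):
steps:
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.7'   # pin the minor (or exact patch) version instead of a bare '3'
    addToPath: true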
Also, you could try the solution from the following issue: add the deadsnakes repo, install Python 3.7, and symlink python to python3.7 (sketched below the link):
https://github.com/actions/virtual-environments/issues/2634#issuecomment-775808754
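Roughly, that workaround looks like this on an Ubuntu agent (a sketch of the steps described in the linked comment, not a verified recipe):
# add the deadsnakes PPA and install Python 3.7
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install -y python3.7 python3.7-venv
# point "python" at the 3.7 interpreter
sudo ln -sf /usr/bin/python3.7 /usr/local/bin/python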

Related

/opt/alt/python39/bin/lswsgi: No such file or directory

I have a shared cPanel host with the LiteSpeed web server, and I want to deploy a Django application on it. After creating a Python application inside cPanel (I have not yet deployed the application to the host), I try loading the website, and instead of displaying the Django version I get 503 Unavailable!!
Also, inside the stderr.log file there is the following error:
/usr/local/lsws/fcgi-bin/lswsgi_wrapper: line 9: /opt/alt/python39/bin/lswsgi: No such file or directory
I'm creating the application with Python 3.9.
But it works when I create it with Python 3.8, and shows the following message when I load the page:
It works!
Python 3.8.6
The issue is most likely caused by the missing Python 3.9 WSGI package. On out-of-date versions of LiteSpeed, the package needs to be installed manually.
To work around this, first ensure that LiteSpeed is up to date; it must be at least version 5.4.10 for this to work. Once that is confirmed, execute the following script from LiteSpeed. It will pull the required Python Selector packages:
/usr/local/lsws/admin/misc/enable_ruby_python_selector.sh
Refer to cPanel support.
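For reference, a quick way to confirm the server version before running the selector script (assuming a standard LiteSpeed Enterprise install path; the -v flag simply prints version info):
/usr/local/lsws/bin/lshttpd -v
sudo /usr/local/lsws/admin/misc/enable_ruby_python_selector.sh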

Using google-cloud-tasks with Python 2.7 on App Engine

I'm working on migrating a Python 2.7/1st Gen/GAE app to Python 3/2nd Gen/GAE.
My current step is replacing google.appengine.ext.deferred with the Python Client for Cloud Tasks API.
Here is where I'm at:
Still using Python 2.7
Latest updates with gcloud components update
Following the Python Client docs, I added google-cloud-tasks==1.5.0 to my requirements.txt
google-cloud-tasks needs grpcio, so I added the following to the libraries section of app.yaml:
- name: grpcio
  version: latest
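For reference, the relevant part of my app.yaml now looks roughly like this (the surrounding lines are the standard first-generation Python 2.7 boilerplate):
runtime: python27
api_version: 1
threadsafe: true
libraries:
- name: grpcio
  version: latest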
Now dev_appserver.py is giving me the error ImportError: No module named enum
I'm not finding much documentation online, so I'm wondering... Is it possible to use google-cloud-tasks with Python 2.7 on App Engine?
If so, how do I fix the last error above?

Python thread error: thread is running with limited resources on GCE VM instance

We have our code built in Python, running on Google Compute Engine. The code processes data files from Cloud Storage into BigQuery, using a pool of 8 worker threads. It has been tested successfully in several environments, but in one environment it keeps giving this error:
{'status': 'Service Running with limited resources - one or more worker threads have been terminated', 'deadthreads': 7, 'threadpoolsize': 8, 'alivethreads': 1}
The second and all remaining threads die after that.
Can anyone help with the above error message?
The potential reason for the issue was that the code was not compatible with the latest version of the google-auth package. On VM spin-up the default installed version was google-auth 1.4.1, whereas the other environments had google-auth 1.3.0.
We downgraded this package to 1.3.0, and also downgraded the grpcio package from 1.9.1 to 1.8.6, to bring the environment in sync with the tested environment.
The threading issue is resolved now.
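For the record, pinning both packages back to the tested versions is a single command (version numbers exactly as above):
pip install google-auth==1.3.0 grpcio==1.8.6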

Python/Django Elastic Beanstalk now failing on deploy

I'm working on a project that I haven't touched in about 4 months. Previously everything on deploy was working fine, but now I'm getting an error when trying to deploy an update.
Failed to pull Docker image amazon/aws-eb-python:3.4.2-onbuild-3.5.1: Pulling repository amazon/aws-eb-python time="2016-01-17T01:40:45Z" level="fatal" msg="Could not reach any registry endpoint" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
In the eb-activity log, it further states [CMD-AppDeploy/AppDeployStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: Pulling repository amazon/aws-eb-python before repeating what was shown in the UI.
The original was using a Preconfigured Docker 64bit Debian jessie v1.3.1 running Python 3.4. I've tried upgrading to the latest, which is version 2.0.6, but it never completes (no need to get into the specifics of that error; it's a separate issue, and I'd like to stay on 1.3.1 if possible). I've also tried upgrading to the latest 1.x, but it has the same result as upgrading to 2.0.6.
Any ideas, or anything else I should be looking at for clues?
Docker Hub has deprecated pulls from Docker clients on versions 1.5 and earlier. Make sure that your Docker client version is newer than 1.5. See https://blog.docker.com/2015/10/docker-hub-deprecation-1-5/ for more information.
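To check which client version the instance is actually running, the standard flag works on any Docker release:
docker --version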

Django Analytical Google Analytics Display Advertising working on development and staging but not production

Running Django 1.6 and Analytical 0.16.0
I have the following in my settings.py
GOOGLE_ANALYTICS_PROPERTY_ID = env_var('GOOGLE_ANALYTICS_PROPERTY_ID')
GOOGLE_ANALYTICS_DISPLAY_ADVERTISING = True
and the Google Analytics code shows up as expected when I run the site locally and on the staging server (i.e. it serves the DoubleClick dc.js analytics script). However, when running on production it still shows the default Google Analytics ga.js script.
It isn't affected by DEBUG being on or off, and as far as I can tell the settings and environment are the same on the production and staging servers (both running on Heroku). Can anyone offer an explanation of why this might be the case?
Edit: SOLVED. It turns out I was still running Analytical 0.15.0 on the production server. I had wrongly assumed that Heroku automatically installed the latest version if no version was specified in the pip requirements.
Check that Heroku is running the same version of each package:
heroku run pip freeze
It turned out Heroku was still running an old version of django-analytical because the version number wasn't specified in the pip requirements file. Heroku won't upgrade an already-installed package unless explicitly told to. Changing requirements.txt to the following solved it:
django-analytical==0.16.0
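To confirm the pinned version actually lands on the production dyno, a quick check over the freeze output:
heroku run pip freeze | grep django-analytical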