Why is my Django app failing on Azure with a UUID "invalid syntax" error?

My Django app runs fine locally on macOS Catalina with Python 3.8.2 and Django 3.0.5. I am deploying it to Azure as a WebApp from GitHub, selecting Python 3.8.
I have provisioned the Postgres DB, the Storage Account and the WebApp. The build process is successful.
The WebApp fails at startup with:
File "/antenv/lib/python3.8/site-packages/django/db/models/fields/__init__.py", line 6, in <module>
import uuid
File "/antenv/lib/python3.8/site-packages/uuid.py", line 138
if not 0 <= time_low < 1<<32L:
^
SyntaxError: invalid syntax
I have verified that the uuid package is not within my requirements.txt file.
DB environment variables are set up.
Collectstatic successfully replicated my static data.
The WebApp is running with Docker.
Any help on this is greatly appreciated.
EDIT
Rebuilding the virtual environment, regenerating the requirements.txt file, and redeploying solved the issue.

Don't leave uuid == 1.30 (or any other version of uuid) in your requirements.txt. If you want to use the uuid library, a simple import uuid is enough, because it is built into Python 3 already. This solution works for me; I am using Python 3.9.7.
My Azure error log, with uuid == 1.30 included in requirements.txt, showed this same failure. Hope this helps.
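If you want to check which uuid module your app actually imports, a minimal sketch like the following (nothing in it is Azure-specific) shows whether the standard library or a stray copy in site-packages wins:
import uuid
# The stdlib module lives under .../lib/python3.x/uuid.py;
# the obsolete PyPI package would sit under site-packages instead.
print(uuid.__file__)
print(uuid.uuid4())  # works out of the box on Python 3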

This is python-2.x syntax. In python-2.x there were two types of integer values: int and long, where long carried an L suffix. An int had a fixed range; a long had an arbitrary range: it could represent numbers as large as available memory allowed.
One could mark a literal as a long, not an int, with the L suffix. For example, in python-2.x one can write:
>>> type(1)
<type 'int'>
>>> type(1L)
<type 'long'>
In python-3.x, the two were merged into int, and an int can represent arbitrarily large numbers, so there is no longer any need for such a suffix. You are thus using a library designed for python-2.x with an interpreter that speaks python-3.x.
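For comparison, the same experiment in python-3.x (the exact error message varies a little between interpreter versions):
>>> type(1)
<class 'int'>
>>> type(1 << 64)  # arbitrarily large values are still plain int
<class 'int'>
>>> 1L
  File "<stdin>", line 1
    1L
     ^
SyntaxError: invalid syntax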
I would advise against using this (version of this) library. Check whether there is a release for python-3.x, or try to find an alternative. python-2.x has not been supported since January 1, 2020, so it is also not a good idea to keep developing for python-2.x. Furthermore, python-2.x and python-3.x differ in quite a large number of areas; python-3.x is not just an "extended" language. The way map and filter work, for example, is different. Therefore you had better not try to "fix" this issue yourself: a new one will likely pop up, or worse, stay hidden under the radar.

Rebuilding the virtual environment from scratch, regenerating the requirements.txt file and then redeploying resolved the issue.

Related

Can't find pvlib.pvsystem.Array

I am using pvlib-python to model a series of photovoltaic installations.
I have been running the normal pvlib-python procedural code just fine (as described in the intro tutorial).
I am now trying to extend my model to be able to cope with several arrays of panels facing in different directions etc., but connected to the same inverter. For this I thought the easiest way would be to use pvlib.pvsystem.Array to create a list of Array objects that I can then pass to the pvlib.pvsystem.PVSystem class (as described here).
My issue now is that I can't find pvsystem.Array at all; e.g. I'm just getting:
AttributeError: module 'pvlib.pvsystem' has no attribute 'Array'
when I try to create an instance of Array using:
from pvlib import pvsystem
module_parameters = {'pdc0': 5000, 'gamma_pdc': -0.004}
array_one = pvsystem.Array(module_parameters=module_parameters)
array_two = pvsystem.Array(module_parameters=module_parameters)
system_two_arrays = pvsystem.PVSystem(arrays=[array_one, array_two],
                                      inverter_parameters=inverter_parameters)
as described in the examples in the PVSystem and Arrays page.
I am using pvlib-python=0.8.1, installed in my conda env using conda install -c conda-forge pvlib-python.
I am quite confused by this, since I can clearly see all the documentation on pvsystem.Array on Read the Docs and the source code on pvlib's GitHub.
When I look at the code in my conda env it doesn't have Array under pvsystem (nor does it show up if I list the module contents using dir(pvlib.pvsystem)), so something is wrong with the installation, but I simply can't figure out what. I've tried reinstalling pvlib and using different installation methods, but I always hit the same issue.
Am I missing something really obvious here?
Kind regards and thank you,
This feature is not present in the current stable version (0.8.1). If you want to use it already, you could download the latest source as a zip file and install it, or clone the pvlib git repository on your computer.
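To confirm which version you actually have and whether the class exists yet, a quick check like this should do (the git URL below is pvlib's public repository):
import pvlib
from pvlib import pvsystem
print(pvlib.__version__)           # 0.8.1 in the conda env above
print(hasattr(pvsystem, 'Array'))  # False on 0.8.1, True once the class ships
# To try it before the next release, install straight from the repository:
#   pip install git+https://github.com/pvlib/pvlib-python.git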

'TypeError at /api/chunked_upload/ Unicode-objects must be encoded before hashing' error when using botocore in Django project

I have hit a dead end with this problem. My code works perfectly in development but when I deploy my project and configure DigitalOcean Spaces & S3 bucket I get the following error when uploading media:
TypeError at /api/chunked_upload/
Unicode-objects must be encoded before hashing
I'm using django-chunked-upload and it doesn't play well with botocore.
I'm using Python 3.7
My code is taken from this demo: https://github.com/juliomalegria/django-chunked-upload-demo
Any help would be massively appreciated.
This library was implemented for Python 2, so there might be a couple of things that don't work out of the box with Python 3.
The issue that you're facing is one of them, since files in Python 3 are read directly as Unicode (py3's str is py2's unicode). The md5 hashing is the part of the code triggering this exception (this line) because it doesn't expect Unicode strings.
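You can reproduce the exception in isolation; on Python 3, hashlib accepts only bytes:
import hashlib
hashlib.md5(b'some chunk')                 # fine: bytes
hashlib.md5('some chunk'.encode('utf-8'))  # fine: encoded first
# hashlib.md5('some chunk')  # TypeError: Unicode-objects must be encoded before hashing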
If you have created your own model inheriting from AbstractChunkedUpload, you can override the md5 property to encode the chunks before updating the hash. See this other SO question on how to solve this specific issue.
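As a rough illustration, the override could encode text chunks before feeding them to the hash. This is a minimal sketch mirroring the shape of the library's md5 property, not an exact drop-in; the _md5 caching attribute and the chunk handling are assumptions based on the description above:
import hashlib
from chunked_upload.models import AbstractChunkedUpload

class MyChunkedUpload(AbstractChunkedUpload):
    @property
    def md5(self):
        if getattr(self, '_md5', None) is None:
            md5 = hashlib.md5()
            for chunk in self.file.chunks():
                # Encode Unicode chunks; bytes pass through untouched.
                if isinstance(chunk, str):
                    chunk = chunk.encode('utf-8')
                md5.update(chunk)
            self._md5 = md5.hexdigest()
        return self._md5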
Hopefully this helped!
Disclaimer: I'm the creator of this library. However, I haven't maintained it in a long time to the point that it might be no longer usable.

Jenkins: automatic tool installers missing JSON

Normally, in ${JENKINS_HOME}/updates/ there are several JSON files for automatically installing various tools. Namely, the one I need is hudson.tasks.Maven.MavenInstaller. Two others are suddenly missing as well: the ones for Ant and for the JDK.
The end result is that my build fails because it can't automatically install Maven from Apache (as detailed here).
I am deploying Jenkins to AWS. What's strange is that I have an AMI (image) that previously worked fine but suddenly encounters this problem. I've banged my head against this one extensively with no solution.
Looks like you can find the JSON that I'm failing to download here:
http://mirrors.jenkins-ci.org/updates/current/updates/
Except the JSON there is prepended with "downloadService.post()", indicating that hudson.model.DownloadService is probably doing something (other hints point to that, as well).
Any ideas?
EDIT: Actually, it looks like the last AMI that worked does, in fact, still work.
I should mention: the project creates a Jenkins AMI via Chef and Packer.
Found the answer to this about a week after posting. It turns out the issue was on the Jenkins update center's side of things, which suddenly changed to a smaller RSA key:
https://issues.jenkins-ci.org/browse/JENKINS-31089
At the time, the workaround was this:
sed -i 's/jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 1024/jdk.certpath.disabledAlgorithms=MD2, RSA keySize < 512/' /usr/lib/jvm/jre/lib/security/java.security
This allowed Java to grab the updates even though the update center was using a smaller RSA key.

How can I use a doctrine connection to import a SQL file?

I have an app that needs to import a .sql file. I can import the file from the command line with mysql -u my_user -pMyPassword db_name < import.sql, but I'd like to move this into my app. I have some things that need to be done before the import and others after, so right now I have to break it into 3 steps. The closest thing to a solution I've found was to get the connection (Doctrine\DBAL\Connection) and use exec(), but it throws syntax errors even though my source file is correct. I'm guessing it's trying to escape things and double-escaping the SQL. The file was generated with mysqldump.
With Symfony using Doctrine, you can do it with:
php app/dev_console doctrine:database:import import.sql
You can use the DBAL import command to get the SQL executed. This is less performant than using the mysql command directly, though, since it loads the entire file into memory.
Otherwise, I'd suggest you write your own Symfony console command.
In my case it was:
php bin/console doctrine:database:import my_sql_file.sql
Status September 2021:
I am rather wary of the code in Doctrine\Bundle\DoctrineBundle\Command\Proxy\ImportDoctrineCommand, which triggers a deprecation warning in its execute function; it is not good programming practice to ignore deprecation warnings. Calling dbal:run-sql would not be efficient enough because of the overhead.
Alternatively, you could call e.g. mysql at the operating-system level. This causes problems in multi-user environments, for example because the database password has to be specified on the command line. In addition, the server must allow this; exec() is switched off for security reasons in many environments, especially with low-cost providers. Furthermore, this approach would not be database-agnostic, and that abstraction is one of the most outstanding features of Doctrine.
Therefore, I recommend reading in the SQL data yourself and executing it line by line (see the entityManager->createNativeQuery() function). You then have better options for reacting to possible errors.

Error Saving geodjango PointField

I have a geo model with a PointField property. Everything works perfectly locally, but when I try to save an instance on the server, I get the following error:
django.db.utils.DatabaseError: invalid byte sequence for encoding "UTF8": 0x00
I dug into the source and found that the values are being serialized differently; specifically, that value isn't being escaped before the query is executed on the server. It looks like the escaping is being done by psycopg2.Binary.getquoted() and sure enough, it doesn't return the correct value on the server.
On my machine:
from psycopg2 import Binary
Binary('\0').getquoted() # > "'\\\\000'::bytea"
On the server:
from psycopg2 import Binary
Binary('\0').getquoted() # > "'\\000'::bytea"
Okay, that explains why it thinks I'm trying to insert a null byte. (Because I am.) So now I know enough about what's going wrong to find a similar report by Jonathan S. on the django-users group, but, like Jonathan, I don't know whether this is a bug or a configuration error.
Can somebody point me in the right direction?
Here's some info about the setups:
            My computer   Server
OS          OSX 10.7      CentOS 5.5
Python      2.7           2.6
Django      1.3           1.3
Postgres    9.0.4         9.1.1
PostGIS     1.5.2         1.5.3-2.rhel5
GEOS        3.3.0         3.3.0-1.rhel5
Finally managed to figure it out.
The difference, as documented in this ticket, is that Postgres 9.1 has standard_conforming_strings on by default. That wouldn't be a problem, really, except that Django's adapter has a bug that basically ignores the setting. A patch was submitted and it's working for me.
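If you want to verify the setting on each machine, a quick query does it (the connection string here is a placeholder):
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # placeholder DSN
cur = conn.cursor()
cur.execute("SHOW standard_conforming_strings;")
print(cur.fetchone()[0])  # 'on' by default from Postgres 9.1, 'off' before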
For those unwilling or unable to apply the patch or upgrade, you can just use this database adapter instead.