I just upgraded Ubuntu from 18.04 to 20.04, and ALL my Django projects (tens of them) stopped working.
One of the problems is related to psycopg2 when running pip:
For example, there is "psycopg2==2.7.3.1" in my "requirements.txt" file, and running "pip install -r requirements.txt" resulted in errors when building the wheel for psycopg2.
Changing "psycopg2==2.7.3.1" to "psycopg2-binary" solved the problem.
So, is such a change necessary for all projects running on Ubuntu 20.04?
Other error examples from various projects when running the server:
RuntimeError: __class__ not set defining 'AbstractBaseUser' as <class 'django.contrib.auth.base_user.AbstractBaseUser'>. Was __classcell__ propagated to type.__new__?
SyntaxError: Generator expression must be parenthesized (widgets.py, line 151)
AssertionError: SRE module mismatch
ModuleNotFoundError: No module named 'decimal'
... etc.
How do I fix these problems? This has been giving me a headache for weeks.
Your problem with psycopg2 is apparently due to an incompatibility between Python 3.8 and older versions of psycopg2 (Issue #854).
The problem has been fixed, but it was not backported to the release that you are using. It has however been backported to the binary psycopg2 releases in various Linux distros.
So ... yes ... you are going to have to make some changes for all of your Django projects that use psycopg2 to get them to run on vanilla Ubuntu 20.04.
However, it looks like there are at least 3 ways to "fix" this (a sketch of the first two options follows the list):
Change to the "binary" package as you have done.
Change the psycopg2 version in your "requirements.txt" to 2.8.6 or later [1].
Build, install and use python 3.7 or earlier. (Probably a bad idea in this case.)
[1] - Or maybe 2.8.0_BETA2 or later, because it looks like the fix has been backported that far in the source code repo. It depends on whether the patched releases were uploaded to PyPI ... which I didn't check. But all things being equal, updating to the most recent compatible version is preferable.
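For example, a minimal sketch of the first two options in requirements.txt (the 2.8.6 pin here is just the earliest release I would reach for; adjust to whatever your projects actually need):

# Option 1: switch to the pre-built binary package
psycopg2-binary==2.8.6

# Option 2: stay on the source package, but use a release that supports Python 3.8
psycopg2==2.8.6

Then reinstall as usual:

pip install -r requirements.txt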
Related
I cannot install gensim==3.5.0 in my Elastic Beanstalk environment (Python 3.4). I get an error that gensim needs Python >= 3.5 to run.
This was not a problem until a mid-day deployment today, which made only project code changes, nothing related to Elastic Beanstalk, requirements, or settings.
At the same time, I'm successfully running the same version in another identical environment. That means the same pip, same Python version, same required dependencies.
I tried lowering the gensim requirement to gensim==0.13.4, which officially supports Python 3.4, but I get the same error.
EDIT: I managed to make things work by installing gensim==0.10.0 and then redeploying with gensim==3.5.0. I still don't know the cause of the issue and the solution is not really a solution, so I'm still interested in insights.
Note that currently (July 2019), Python 3.4 itself no longer receives fix releases. Per https://www.python.org/downloads/release/python-3410/:
Release Date: March 18, 2019
Python 3.4 has reached end-of-life. Python 3.4.10 is the final release of 3.4.
Python 3.4.10 was released on March 18th, 2019.
Python 3.4.10 is the final release in the Python 3.4 series. As of this release, the 3.4 branch has been retired, no further changes to 3.4 will be accepted, and no new releases will be made. This is standard Python policy; Python releases get five years of support and are then retired.
If you're still using Python 3.4, you should consider upgrading to the current version. Newer versions of Python have many new features, performance improvements, and bug fixes, which should all serve to enhance your Python programming experience.
That said, if you're truly getting an error about gensim's required Python version, rolling back to whatever version of gensim you were previously using successfully with Python 3.4 should work. (That might not need to go as far back as gensim-0.13.4.1, which is almost 2.5 years old, but if you're sure that's the version that was working for you, you could use that version.)
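As a rough sketch (the exact version to pin is an assumption; substitute whatever was actually working in your environment before):

pip install "gensim==0.13.4.1"

or, equivalently, pin that version in the requirements file your Elastic Beanstalk deployment installs from.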
You should edit your question to show exactly what installation commands you've run and exactly which message is received in response to which step, to more clearly indicate what's been tried, where the error is arising, and why a simple attempt to install an older version might be getting a similar error message.
I am running Anaconda on OS X 10.11.6. I am not certain of the precise Anaconda version that I previously had, but it was about 4-6 months old, I believe, and it was running Python 2.7.11. I wanted to update both Python (to 2.7.12) and Anaconda (while I was at it), so I used the standard procedure of
conda update conda
conda update anaconda
This has worked swimmingly for me in the past. This time, however, it is taking me back to an earlier version of python (2.7.10) when I do this. From running conda --version I see that I have conda 4.1.11, which is the latest, as I understand it. However, when I run conda update anaconda I get a display saying:
anaconda 2.3.0 np19py27_0
Similarly, with python --version I get:
Python 2.7.10 :: Anaconda 2.3.0 (x86_64)
In my folder ~/anaconda/bin/ (which folder I also have in my PATH, such that calls to python direct here) I have an alias named python which says that it was just updated today (which is when I ran the conda update, etc.). But it just points to a file, python2.7, which is in ~/anaconda/ and hasn't been modified since May 2015.
I figure if I did a complete uninstall and reinstall of anaconda, I could presumably clear this up. I'd rather avoid that if possible though, since it would mean reinstalling all the rest of my other python packages, etc.
I also saw this SO post: Anaconda not updating to latest, but when I try:
conda install anaconda=4.1.1
I get the following error:
Fetching package metadata .......
Solving package specifications: ....
The following specifications were found to be in conflict:
- anaconda 4.1.1*
- gevent-websocket -> gevent 0.13.7|0.13.8|1.0|1.0.1|1.0.2|1.1.0
- gevent-websocket -> python 2.6*
Use "conda info <package>" to see the dependencies for each package.
Update: I ended up just wiping my old installation and installing a fresh version of Anaconda. It was a bit of a pain, but it seemed like it would be less work than trying to track down what was happening with this bug. Still, I'd be delighted by any solutions people have to this issue, for future reference for me and others who encounter it.
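For anyone hitting the same thing, one less drastic option to try before wiping everything (a sketch only; the environment name and the exact Python version pinned below are assumptions) is to leave the root installation alone and create a fresh environment pinned to the Python you actually want:

conda create -n py2712 python=2.7.12 anaconda   # new env with the pinned Python plus the anaconda metapackage
source activate py2712                          # activate the new environment
python --version                                # should now report Python 2.7.12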
I have CentOS 6.4 64-bit installed on my VPS. When I try to install openssl-devel, it gives me the issue shown below:
Error: Multilib version problems found. This often means that the root
cause is something else and multilib version checking is just
pointing out that there is a problem. Eg.:
1. You have an upgrade for openssl which is missing some
dependency that another package requires. Yum is trying to
solve this by installing an older version of openssl of the
different architecture. If you exclude the bad architecture
yum will tell you what the root cause is (which package
requires what). You can try redoing the upgrade with
--exclude openssl.otherarch ... this should give you an error
message showing the root cause of the problem.
2. You have multiple architectures of openssl installed, but
yum can only see an upgrade for one of those arcitectures.
If you don't want/need both architectures anymore then you
can remove the one with the missing update and everything
will work.
3. You have duplicate versions of openssl installed already.
You can use "yum check" to get yum show these errors.
...you can also use --setopt=protected_multilib=false to remove
this checking, however this is almost never the correct thing to
do as something else is very likely to go wrong (often causing
much more problems).
Protected multilib versions: openssl-1.0.0-27.el6_4.2.i686 != openssl-1.0.1e-16.el6_5.4.x86_64
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
Can anyone help me with this?
Finally sorted it out.
Here is the problem:
I installed Python 2.7 first and then openssl-devel. Whenever I accessed any https repository, it gave me an error.
Here are the steps to solve the issue (a rough sketch of the commands follows the list):
1. Install openssl
2. Install openssl-devel
3. Reinstall Python 2.7
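A sketch of those steps, assuming Python 2.7 was originally built from source on this box (the source directory and exact version below are assumptions; use whatever tarball you originally built from):

sudo yum install openssl
sudo yum install openssl-devel
cd /usr/src/Python-2.7.6      # wherever the Python source tree lives
./configure
make
sudo make altinstall          # altinstall keeps the system python intact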
I'm taking a class in OpenGL. Everything must be compiled and run on a machine that has the OpenGL 3.1 and GLUT 3 libraries, so I need to make sure that is what I have.
I have a fresh copy of Ubuntu 12.04 installed so nothing extra outside of the basic installation.
Any help with setting me up?
Since you wrote that you're taking an OpenGL course, I'll assume that you need the development files. Then try simply this:
sudo apt-get install freeglut3-dev
The OpenGL development files should be installed as a dependency of GLUT (the corresponding (virtual) package is gl-dev and one possible package is libgl1-mesa-dev).
Regarding the version of OpenGL this will get you, it will depend on both your hardware and the software drivers installed on your machine. Use glxinfo (from the mesa-utils package) to find out: the supported version should be in the OpenGL version string.
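For example (a quick sketch; the version reported will of course depend on your GPU and driver):

sudo apt-get install mesa-utils
glxinfo | grep "OpenGL version"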
I had been using an outdated Mac operating system (10.5.8), but recently updated to 10.8. However, Django isn't being found anymore.
Operations such as:
python manage.py runserver
that worked before now return:
Traceback (most recent call last):
File "manage.py", line 8, in <module>
from django.core.management import execute_from_command_line
ImportError: No module named django.core.management
I read another post here where the user suggested checking if django was available using python -c 'import django'. Django is not available using that command, and I tried to modify my PYTHONPATH to point to where the django package was (in my Downloads folder), but that didn't work either.
Anyway, I'm confused as to why it worked before but not now. Maybe it's because this version of OS X uses a different version of Python?
PS I am not using a virtualenv. Thanks for any ideas!
I think when you upgraded to Mac OS X 10.8 your site-packages/ libraries were removed, and it sounds like you just need to install Django again: run pip install django.
Besides, using virtualenv is good practice to follow. In that particular case your virtualenv would have contained all the necessary packages, and you would only have had to reinstall virtualenv instead of a bunch of libraries.
When you upgraded OSX, you almost certainly got a new default version of Python. 10.5 had Python 2.5, but 10.8 has 2.7. Libraries are installed for a specific version, so you'll just need to reinstall with the new version.
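As a quick sketch of what that reinstall looks like (whether pip is already present on the new system Python, and which Django version your project expects, are assumptions here):

python --version                                 # should now report Python 2.7.x
sudo easy_install pip                            # only if pip is missing on the new Python
sudo pip install django                          # or pin the version your project was written for
python -c 'import django; print(django.get_version())'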