fbprophet failed to build wheel in google cloud function - google-cloud-platform

I am hoping to use fbprophet in a Cloud Function running the Python 3.7 environment, but the build fails with the following error.
Build failed: `pip_download_wheels` had stderr output:
ERROR: Command errored out with exit status 1:
command: /opt/python3.7/bin/python3.7 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py'"'"'; __file__='"'"'/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-3_5khs54
cwd: /tmp/pip-wheel-srnqu7b5/fbprophet/
Complete output (40 lines):
running bdist_wheel
running build
running build_py
creating build
creating build/lib
creating build/lib/fbprophet
creating build/lib/fbprophet/stan_model
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 148, in <module>
"""
File "/opt/python3.7/lib/python3.7/site-packages/setuptools/__init__.py", line 140, in setup
return distutils.core.setup(**attrs)
File "/opt/python3.7/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/python3.7/lib/python3.7/site-packages/wheel/bdist_wheel.py", line 202, in run
self.run_command('build')
File "/opt/python3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/opt/python3.7/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/opt/python3.7/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/opt/python3.7/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 48, in run
build_models(target_dir)
File "/tmp/pip-wheel-srnqu7b5/fbprophet/setup.py", line 36, in build_models
from fbprophet.models import StanBackendEnum
File "/tmp/pip-wheel-srnqu7b5/fbprophet/fbprophet/__init__.py", line 8, in <module>
from fbprophet.forecaster import Prophet
File "/tmp/pip-wheel-srnqu7b5/fbprophet/fbprophet/forecaster.py", line 14, in <module>
import numpy as np
ModuleNotFoundError: No module named 'numpy'
----------------------------------------
ERROR: Failed building wheel for fbprophet
ERROR: Failed to build one or more wheels
error: `pip_download_wheels` returned code: 1; Error ID: 618AA8E7
This is what my requirements.txt file looks like:
cython
pystan
numpy
pandas==1.0.3
google-cloud-storage==1.29.0
fbprophet
geopy==1.22.0
google-cloud-bigquery==1.25.0
Everything works perfectly fine locally in a Python 3.7 virtual environment in a Jupyter notebook. I would appreciate any help, because I've spent almost an entire day trying to fix this, to no avail.

I'm having similar issues to this. My goal is to deploy a function that, when passed some input, will feed this into a Prophet model in order to make a prediction before passing the prediction to another part of my system.
As far as I can tell, there are a few things that make this complicated.
First, there is the issue of build dependencies that #mgoya mentioned in the comment above. This manifests both when installing Prophet and when installing Pystan (a dependency of Prophet). In my cloudbuild.yaml I'm attempting to circumvent this by installing the dependencies in sequence, like this:
steps:
- name: 'docker.io/library/python:3.9'
entrypoint: /bin/sh
# Run pip install and pytest in the same build step
# (pip packages won't be preserved in future steps!)
args: [-c, 'pip uninstall pystan; pip install convertdate==2.1.2 lunarcalendar==0.0.9 holidays==0.10.3 pgen tqdm cython pandas numpy setuptools; pip install pystan==2.19.1.1; pip install -r requirements.txt']
timeout: 1200s
This first uninstalls Pystan (if it exists locally), then installs the build dependencies for Pystan, then installs Pystan itself (a build dependency of Prophet), and finally installs the contents of requirements.txt, which includes Prophet.
Note the semicolons. I ran into issues when using double ampersands instead (i.e. pip install ... && pip install ..., etc.). My best guess is that && stops the chain as soon as any command fails (for example, pip uninstall pystan exits non-zero when Pystan isn't installed yet), whereas ; runs each command regardless of the previous one's exit status. Using semicolons seems to help, but this is anecdotal evidence at best.
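For what it's worth, that difference is easy to demonstrate in a shell; a minimal illustration (the echo messages are just placeholders):
false && echo "never printed - && stops as soon as a command fails"
false ; echo "printed - ; moves on regardless of the previous exit status"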
You could also try using python -m pip install ... rather than pip install, though I'm not familiar enough with Python to tell you the difference off the top of my head.
Secondly, there are the memory requirements for installing Pystan. I've read that Pystan needs around 4GB of memory to build. The default machine_type for Cloud Build does not have that much, so you can try increasing the size of the build machine by changing its type.
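If it helps, the machine type can be set in cloudbuild.yaml; a minimal sketch (whether E2_HIGHCPU_8 is big enough for Pystan is an assumption on my part):
options:
  machineType: 'E2_HIGHCPU_8'  # a larger build VM than the default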
Finally, there are the resources allocated for function execution. Again, I've read that Prophet / Pystan need around 2GB of memory just to run one of the models, so if your Cloud Function doesn't have that much, it may run into memory issues when trying to execute. In my experience so far, memory issues are not that transparent within Google Cloud.
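The function's memory is set at deploy time; a hedged example, where the function name, runtime, and trigger are placeholders:
gcloud functions deploy predict --runtime python37 --trigger-http --memory 2048MB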
--
My current thinking (and perhaps what I'd recommend to anyone else reading this post) is to consider whether Cloud Functions is the right tool for this, given the weight of the dependencies. Pystan and Prophet are rather special-case dependencies, given their build and runtime resource requirements.
What I've opted to do is build (locally) a container with these dependencies baked in and push it to Google's Container Registry. My plan from here is to use that container as the base image for a Cloud Run application, which is significantly easier to deploy. This has obvious drawbacks, but if my base image changes infrequently (which it will) I think this approach will be fine. Unfortunately, this model ("bring your own container") is not supported by Cloud Functions - it's designed so that all you bring is code.
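For reference, the base image I'm describing is nothing exotic; a rough sketch only, where the Python version, the package pins, and the need for build-essential are assumptions that may need adjusting:
# Dockerfile for a base image with the heavy dependencies baked in
FROM python:3.9-slim
# pystan compiles C++ extensions at install time, so it needs a compiler toolchain
RUN apt-get update && apt-get install -y --no-install-recommends build-essential && \
    pip install --no-cache-dir numpy cython && \
    pip install --no-cache-dir pystan==2.19.1.1 && \
    pip install --no-cache-dir fbprophet && \
    rm -rf /var/lib/apt/lists/*
Something like docker build -t gcr.io/YOUR_PROJECT/prophet-base . followed by docker push gcr.io/YOUR_PROJECT/prophet-base gets it into Container Registry, and the Cloud Run service's own Dockerfile can then start FROM that image.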

Related

cling on Jupyter on Windows: Kernel cannot start

Background: I am trying to install the cling C++ interpreter here. I am on Windows and have had Anaconda running well; Jupyter notebook is also working fine with the existing Python kernels. The installation process was smooth on the surface, but there is a kernel error once I try to open a Jupyter notebook with the installed kernel.
(In the end I hope to be able to use C++ with Jupyter notebook, so if anyone has had any success, please share your experience. On that note, while xeus-cling is reportedly not usable on Windows, this cling kernel appears to be a separate thing.)
The installation: Here is what I have done:
Download the binary cling_2019-11-28_arm64.tar.bz2 (is this correct for Windows?) from
https://root.cern.ch/download/cling/
Extract and place in Program Files folder
Following the instruction in here, add C:\Program Files\cling_2019-11-28_arm64\bin to the PATH variable
Activate base Anaconda environment
cd .../share/cling/Jupyter/kernel
pip install -e .
jupyter-kernelspec install --user cling-cpp11
Everything seems to be fine up to here - no warnings or errors.
The error: Then I load up my Jupyter notebook and try to run the cpp11 kernel, but it is unable to start with a long error traceback, the first/last items of which read:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\web.py", line 1699, in _execute
result = await result
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\gen.py", line 736, in run
yielded = self.gen.throw(*exc_info) # type: ignore
File "C:\ProgramData\Anaconda3\lib\site-packages\notebook\services\sessions\handlers.py", line 73, in post
type=mtype))
... (omitted) ...
File "C:\ProgramData\Anaconda3\lib\site-packages\jupyter_client\launcher.py", line 138, in launch_kernel
proc = Popen(cmd, **kwargs)
File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 775, in __init__
restore_signals, start_new_session)
File "C:\ProgramData\Anaconda3\lib\subprocess.py", line 1178, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified
And on the cmd the following:
[E 14:39:14.265 NotebookApp] Failed to run command:
['jupyter-cling-kernel', '-f', 'path\\to\\jupyter\\runtime\\kernel-..(random string here)..json', '--std=c++11']
The troubleshooting (1): ... appearing to suggest that it is unable to locate jupyter-cling-kernel. But I do have a file named jupyter-cling-kernel in the .../Anaconda3/Scripts folder, and this folder is also in my PATH variable. Opening it, I discovered it is a Python file with only a few lines, which looks like it corresponds to the command above.
#!C:\ProgramData\Anaconda3\python.exe
# EASY-INSTALL-DEV-SCRIPT: 'clingkernel==0.0.2','jupyter-cling-kernel'
__requires__ = 'clingkernel==0.0.2'
__import__('pkg_resources').require('clingkernel==0.0.2')
__file__ = 'C:\\Program Files\\cling_2019-11-28_arm64\\share\\cling\\Jupyter\\kernel\\scripts\\jupyter-cling-kernel'
with open(__file__) as f:
exec(compile(f.read(), __file__, 'exec'))
So then I modified my kernel.json file, adding the absolute Python path (so that it knows to run it with Python) and the absolute path of jupyter-cling-kernel (originally it was just "argv": ["jupyter-cling-kernel", "-f", ...]).
{
"display_name": "C++11",
"argv": [
**"C:\\ProgramData\\Anaconda3\\python.exe",
"C:\\ProgramData\\Anaconda3\\Scripts\\jupyter-cling-kernel",**
"-f",
"{connection_file}",
"--std=c++11"
],
"language": "C++"
}
The troubleshooting (2): ... which indeed appears to be the right direction - at least it is running something - but now there is another error:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\Scripts\jupyter-cling-kernel", line 7, in <module>
exec(compile(f.read(), __file__, 'exec'))
File "C:\Program Files\cling_2019-11-28_arm64\share\cling\Jupyter\kernel\scripts\jupyter-cling-kernel", line 3, in <module>
from clingkernel import main
File "c:\program files\cling_2019-11-28_arm64\share\cling\jupyter\kernel\clingkernel.py", line 24, in <module>
from fcntl import fcntl, F_GETFL, F_SETFL
ModuleNotFoundError: No module named 'fcntl'
Now, with some googling, this fcntl module appears to be something that is not available on Windows. So at this point I am wondering: have I downloaded the wrong binary, should I modify this clingkernel.py file, or do I need to do some compilation myself?
Again, if any of you knows how to get C++ running in Jupyter (on Windows), I'd appreciate it if you could share your experience. Thanks.
With Windows 10 + WSL, we can install xeus-cling for C++ on Windows
The steps include:
Enable Ubuntu on WSL
Install Miniconda
Setup Conda, Jupyter Notebook, Xeus-Cling
This cling notebook with the cpp environment can be made to run from a desktop shortcut. Steps are documented on C/C++ Jupyter Notebook using xeus-cling - Windows WSL Setup
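For reference, the conda side of those steps is roughly the following; the environment name is arbitrary and this is a sketch rather than the exact commands from that write-up:
conda create -n cling-cpp -c conda-forge xeus-cling jupyter
conda activate cling-cpp
jupyter notebook    # then pick the C++11 / C++14 / C++17 kernel that xeus-cling registers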
The cling interpreter has been packaged for conda-forge.
You can simply run
conda install cling -c conda-forge
and then run cling. However, unfortunately, the Jupyter kernel is not included with that build, and the Windows build has some issues with IO operations which I am currently investigating.
Maybe try restarting the kernel by pressing 0 (zero, not the letter o) twice.

Can't install gensim with pip

I am completely new to Python. I am trying to install gensim, but it's not installing. I am using a Mac.
Below is the output I am getting in terminal:
Requirement already up-to-date: six>=1.5.0 in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/six-1.10.0-py2.7.egg (from gensim)
Requirement already up-to-date: boto>=2.32 in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages (from smart-open>=1.2.1->gensim)
Requirement already up-to-date: bz2file in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages (from smart-open>=1.2.1->gensim)
Requirement already up-to-date: requests in /Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages (from smart-open>=1.2.1->gensim)
Installing collected packages: numpy, scipy, gensim
Found existing installation: numpy 1.8.0rc1
DEPRECATION: Uninstalling a distutils installed project (numpy) has been deprecated and will be removed in a future version. This is due to the fact that uninstalling a distutils project will only partially uninstall the project.
Uninstalling numpy-1.8.0rc1:
Exception:
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/basecommand.py", line 215, in main
status = self.run(options, args)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/commands/install.py", line 342, in run
prefix=options.prefix_path,
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_set.py", line 778, in install
requirement.uninstall(auto_confirm=True)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_install.py", line 754, in uninstall
paths_to_remove.remove(auto_confirm)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/req/req_uninstall.py", line 115, in remove
renames(path, new_path)
File "/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip/utils/__init__.py", line 267, in renames
shutil.move(old, new)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 302, in move
copy2(src, real_dst)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 131, in copy2
copystat(src, dst)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/shutil.py", line 103, in copystat
os.chflags(dst, st.st_flags)
OSError: [Errno 1] Operation not permitted: '/tmp/pip-lCcMmh-uninstall/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy-1.8.0rc1-py2.7.egg-info'
The system Python is managed by the OS vendor (Apple, Canonical, etc.) and is meant to be used by other OS built-in tools. Having pip mess things up in the vendored Python is dangerous, and the SIP protection since El Capitan makes it harder to break these things, for good reasons.
Either installing a separate Python distribution, or using pip with the --user option, circumvents the need for elevated root access (so no sudo is required). You may still have to add the user's bin folder to your $PATH.
Use this:
pip install --user gensim
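If the user bin folder isn't on your PATH after a --user install, Python can tell you where it is; a small sketch (the exact path varies by Python version and OS):
python -m site --user-base                              # prints the user base directory
export PATH="$(python -m site --user-base)/bin:$PATH"   # its bin/ subfolder holds user-installed scripts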

Adding libraries to django nonrel

I have a project in Django which I'm trying to port to Django-nonrel so that I can upload it to Google App Engine. I've installed django-nonrel and the other required libraries by going through http://djangoappengine.readthedocs.org/en/latest/installation.html,
namely: django-nonrel
djangoappengine
djangotoolbox
django-autoload
django-dbindexer
that is by downloading their zip files and placing them in my app directory.
So, my app directory is:
<app>/autoload
<app>/dbindexer
<app>/django
<app>/djangoappengine
<app>/djangotoolbox
I also have django in my project directory and have started the project by:
PYTHONPATH=. python django/bin/django-admin.py startproject \
--name=app.yaml --template=djangoappengine/conf/project_template app
If I add an external library with pip and add it to INSTALLED_APPS in my app's settings.py, it is not recognised by django-nonrel, which is fairly obvious considering that django-nonrel is not installed on my system. It gives me the following error:
Traceback (most recent call last):
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/module.py", line 1390, in _warmup
request_type=instance.READY_REQUEST)
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/module.py", line 884, in _handle_request
environ, wrapped_start_response)
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/request_rewriter.py", line 314, in _rewriter_middleware
response_body = iter(application(environ, wrapped_start_response))
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/module.py", line 1297, in _handle_script_request
request_type)
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/module.py", line 1262, in _handle_instance_request
request_type)
File "/usr/local/google_appengine/google/appengine/tools/devappserver2/instance.py", line 371, in handle
raise CannotAcceptRequests('Instance has been quit')
CannotAcceptRequests: Instance has been quit
(nonrel)apurva@apurva-HP-ProBook-6470b:~/project/flogin$ python manage.py runserver
INFO 2015-08-11 16:06:54,606 sdk_update_checker.py:229] Checking for updates to the SDK.
INFO 2015-08-11 16:06:55,511 sdk_update_checker.py:257] The SDK is up to date.
INFO 2015-08-11 16:06:55,633 api_server.py:205] Starting API server at: http://localhost:60055
INFO 2015-08-11 16:06:55,847 dispatcher.py:197] Starting module "default" running at: http://127.0.0.1:8080
INFO 2015-08-11 16:06:55,847 admin_server.py:118] Starting admin server at: http://localhost:8000
INFO 2015-08-11 16:06:58,966 __init__.py:52] Validating models...
ERROR 2015-08-11 16:06:59,045 wsgi.py:263]
Traceback (most recent call last):
File "/usr/local/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/usr/local/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/usr/local/google_appengine/google/appengine/runtime/wsgi.py", line 96, in LoadObject
__import__(cumulative_path)
File "/home/apurva/project/flogin/djangoappengine/main/__init__.py", line 66, in <module>
validate_models()
File "/home/apurva/project/flogin/djangoappengine/main/__init__.py", line 55, in validate_models
num_errors = get_validation_errors(s, None)
File "/home/apurva/project/flogin/django/core/management/validation.py", line 34, in get_validation_errors
for (app_name, error) in get_app_errors().items():
File "/home/apurva/project/flogin/django/db/models/loading.py", line 196, in get_app_errors
self._populate()
File "/home/apurva/project/flogin/django/db/models/loading.py", line 75, in _populate
self.load_app(app_name, True)
File "/home/apurva/project/flogin/django/db/models/loading.py", line 97, in load_app
app_module = import_module(app_name)
File "/home/apurva/project/flogin/django/utils/importlib.py", line 42, in import_module
__import__(name)
ImportError: No module named oauth2_provider
However, I'm unsure how to add external libraries to my project so that django-nonrel recognises them. I've also tried Google's documentation on how to do this, i.e.
Adding Third-party Packages to the Application
You can add any third-party library to your application, as long as it
is implemented in "pure Python" (no C extensions) and otherwise
functions in the App Engine runtime environment. The easiest way to
manage this is with a ./lib directory.
Create a directory named lib in your application root directory:
mkdir lib
To tell your app how to find libraries in this directory, create (or modify) a file named appengine_config.py in the root of your project, then add these lines:
from google.appengine.ext import vendor
# Add any libraries installed in the "lib" folder.
vendor.add('lib')
Use pip with the -t lib flag to install libraries in this directory:
$ pip install -t lib gcloud
Note: pip version 6.0.0 or higher is required for vendor to work properly.
Tip: the appengine_config.py above assumes that the current working directory is where the lib folder is located. In some cases, such as unit tests, the current working directory can be different. To avoid errors, you can explicitly pass in the full path to the lib folder using:
vendor.add(os.path.join(os.path.dirname(os.path.realpath(__file__)), 'lib'))
This didn't work either.
So I had a very similar dilemma. Here is how I solved it:
I followed Google's instructions noted above, using pip and a ./lib directory. Make sure you have an updated version of pip:
sudo pip install --upgrade pip
Then, because of pkg_resources issues, I did this:
pip install -t lib setuptools
That was necessary, I am just not sure if that was the right place to install setuptools or not. It certainly worked, though.
Then, I launched the local development server like this, in the project directory:
PYTHONPATH=lib ./manage.py runserver
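Putting it together for the missing module in the traceback above - assuming oauth2_provider comes from django-oauth-toolkit, which is a guess on my part - the sequence would look something like:
pip install -t lib django-oauth-toolkit
PYTHONPATH=lib ./manage.py runserver
with 'oauth2_provider' listed in INSTALLED_APPS and the appengine_config.py / vendor.add('lib') snippet from the documentation quoted in the question.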
I hope that works for you!

How to install and run the reindent.py

I have downloaded Reindent-0.1.0 and am trying to use it for automated indentation.
I don't know how to install and run these commands. When I try the command below, I get the following error.
command:
C:\Python26\Scripts\Reindent-0.1.0>Python setup.py
C:\Python26\Scripts\Reindent-0.1.0>Pyth
Traceback (most recent call last):
File "setup.py", line 5, in <module>
from setuptools import setup
ImportError: No module named setuptools
I don't understand setuptools - where it is and how to put it in place.
Please note the files in my Reindent-0.1.0 folder:
Reindent.egg-info
PKG-INFO
README
reindent
setup.cfg
setup.py
Also, how can I run the reindent commands? For example, once I have installed reindent, if I want to do a dry run, how should I write the command?
If I write it like this, will it be correct?
C:\ProjFolder\ApplicationDevelopment\GUI>reindent -d Test.py
Some real examples of "-d (--dryrun) Dry run" and "-r (--recurse) Recurse" would be helpful!
Also, which path should I run the command from in DOS: my application's running directory, C:\Python26\Scripts\Reindent-0.1.0, or the application development folder?
If you get the error "no module X" when you try to run some code, that code has a dependency on module X. When you run setup.py and it says there is no module named "setuptools", it is telling you that setup.py requires the module "setuptools". Since you don't have "setuptools" installed on your machine, you get the error.
The fix is simple: install the setuptools module. Here's one of several places on the internet that shows you how to install setuptools: https://pythonhosted.org/an_example_pypi_project/setuptools.html
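To touch on the usage questions as well: a rough sketch of the whole flow, assuming setup.py registers a reindent script on your PATH (otherwise call the script by its full path with python):
C:\Python26\Scripts\Reindent-0.1.0> python setup.py install
C:\ProjFolder\ApplicationDevelopment\GUI> reindent -d Test.py
C:\ProjFolder\ApplicationDevelopment\GUI> reindent -d -r .
Here -d (--dryrun) reports what would change without rewriting any file, and -r (--recurse) descends into subdirectories; you can run the command from any directory, since the paths you pass (Test.py, .) are resolved relative to where you run it.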

Errors while installing python packages

I'm not able to install Python packages with either pip or easy_install. Some absurd kind of error keeps popping up; kindly help me rectify it.
I get the same errors when using python setup.py install.
Error while installing django-memcached
C:\Users\Praful\Desktop\django-redis-master>easy_install django-memcached
Traceback (most recent call last):
File "C:\Python27\Scripts\easy_install-script.py", line 9, in <module>
load_entry_point('distribute==0.6.27', 'console_scripts', 'easy_install')()
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\setuptools\command\easy_install.py", line 1915, in main
with_ei_usage(lambda:
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\setuptools\command\easy_install.py", line 1896, in with_ei_usage
return f()
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\setuptools\command\easy_install.py", line 1919, in <lambda>
distclass=DistributionWithoutHelpCommands, **kw
File "C:\Python27\lib\distutils\core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\setuptools\dist.py", line 222, in __init__
for ep in pkg_resources.iter_entry_points('distutils.setup_keywords'):
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 486, in iter_entry_points
entries = dist.get_entry_map(group)
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 2315, in get_entry_map
self._get_metadata('entry_points.txt'), self
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 2101, in parse_map
raise ValueError("Entry points must be listed in groups")
ValueError: Entry points must be listed in groups
Error while installing python-memcache
C:\Users\Praful\Desktop\mem>python setup.py install
Traceback (most recent call last):
File "setup.py", line 24, in <module>
"Topic :: Software Development :: Libraries :: Python Modules",
File "C:\Python27\lib\distutils\core.py", line 112, in setup
_setup_distribution = dist = klass(attrs)
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\setuptools\dist.py", line 222, in __init__
for ep in pkg_resources.iter_entry_points('distutils.setup_keywords'):
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 486, in iter_entry_points
entries = dist.get_entry_map(group)
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 2315, in get_entry_map
self._get_metadata('entry_points.txt'), self
File "C:\Python27\lib\site-packages\distribute-0.6.27-py2.7.egg\pkg_resources.py", line 2101, in parse_map
raise ValueError("Entry points must be listed in groups")
ValueError: Entry points must be listed in groups
1. Find get_entry_map(self, group=None) in python\Lib\site-packages\pkg_resources\__init__.py and insert print self.egg_info right after the def line.
2. Run python setup.py and look at the last egg_info printed - that is the broken package.
3. Remember its name so you can install it again later, then delete the broken package's folder and its broken_package-version.dist-info folder. Repeat step 2 until the error disappears.
4. Remove the change from step 1.
5. Install the broken package(s) again: python setup.py install 'broken_package'
This error happened to me when installing any package. My solution was to go to the file explorer, type %appdata% in the path bar, go to the Python folder, and delete everything inside.
I found the same problem to be caused by a misformatted entry_points.txt file in one installed egg of mine.
It can be quite hard to track down which one it is if there are many.
I managed to find that little ba##!"d by creating and running a setup.py for a dummy package:
setup.py
from setuptools import setup, find_packages

setup(
    name = "IWillFindYou",
    version = "0.1",
    packages = find_packages()
)
Running this in debug mode will point to this line in pkg_resources.py:
def parse_map(cls, data, dist=None):
[...]
raise ValueError("Entry points must be listed in groups")
If you go back to the stack trace, you will see that parse_map is called here:
def get_entry_map(self, group=None):
[...]
ep_map = self._ep_map = EntryPoint.parse_map(
self._get_metadata('entry_points.txt'), self
)
Evaluating self.egg_info will point out the evil egg, so you can take a look at its entry_points.txt file.
If you are not handy with a debugger, you can try placing print self.egg_info in get_entry_map and looking at the last one printed.
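If editing pkg_resources feels invasive, a self-contained script can do the same hunt; this is a sketch of an alternative, not the exact approach above:
# find_broken_egg.py - iterate over installed distributions and parse each entry_points.txt
import pkg_resources

for dist in pkg_resources.working_set:
    try:
        dist.get_entry_map()  # parses entry_points.txt, raising ValueError when it is malformed
    except ValueError:
        print("Broken entry_points.txt in: %s (%s)" % (dist.project_name, dist.location))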
My Resolution Approach
Platform: windows 10, ConEmu-Maximus5
Delete the virtual environment automatically created by the poetry install command (a command-based alternative is sketched after these steps).
Windows users can find the virtual environment folder at the path below:
C:\Users\YOUR_PC_USERNAME\AppData\Local\pypoetry\Cache\virtualenvs
(I don't know the Linux path.)
Close the terminal / command prompt
Open the terminal / command prompt again and navigate to the project folder
Re-run poetry install
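A command-based alternative to deleting that folder by hand, assuming a reasonably recent Poetry:
poetry env list                          # shows the virtualenv(s) Poetry created for this project
poetry env remove <name-shown-by-env-list>
poetry install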
I hope it helps...
How I encountered the error
It was my first time using Poetry. While running poetry install, the process got interrupted; running the command again produced the error.
Could be a problem with distribute. I'd recommend re-installing Python.