The directory is not empty: '.elasticbeanstalk\\app_versions' Windows 10

I am switching to a new computer with a fresh install of Windows 10 Pro and am having a very strange issue with the EB CLI. I am not able to run 'eb deploy' using Windows PowerShell; I get the following error:
ERROR: OSError - [WinError 145] The directory is not empty: '.elasticbeanstalk\\app_versions'
I have uninstalled/reinstalled Python along with the EB CLI, but with the same result.
Note: I am able to run all other EB commands like eb ssh or eb logs with no issues.
One observation I was able to make while watching the '.elasticbeanstalk' folder: I see the 'app_versions' folder being created, along with the application ZIP inside it. Once the command fails, the ZIP file remains in the 'app_versions' folder for about 10 to 15 seconds before it is removed. I checked S3 and the ZIP file is uploaded...
I have reviewed this other Stack Overflow issue: AWS Elastic Beanstalk deploy not working
I do not have Google Drive, Dropbox, or OneDrive running on the directory I am working in. Just to be safe I paused OneDrive, but still nothing.
Please, any help would be amazing!
UPDATE:
Ran eb deploy --debug
There is no error until AFTER the upload is completed; I confirmed this by checking the S3 bucket and seeing the latest upload.
2019-02-04 14:50:06,522 (INFO) eb : Traceback (most recent call last):
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\core\ebrun.py", line 62, in run_app
app.run()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\cement\core\foundation.py", line 797, in run
return_val = self.controller._dispatch()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\cement\core\controller.py", line 472, in _dispatch
return func()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\cement\core\controller.py", line 478, in _dispatch
return func()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\core\abstractcontroller.py", line 94, in default
self.do_command()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\controllers\deploy.py", line 78, in do_command
staged=self.staged, timeout=self.timeout, source=self.source)
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\operations\deployops.py", line 59, in deploy
build_config=build_config
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\operations\commonops.py", line 538, in create_app_version
fileoperations.delete_app_versions()
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\core\fileoperations.py", line 432, in delete_app_versions
delete_directory(app_version_folder)
File "C:\Users\winng\AppData\Roaming\Python\Python37\site-packages\ebcli\core\fileoperations.py", line 425, in delete_directory
shutil.rmtree(location)
File "c:\users\winng\appdata\local\programs\python\python37\lib\shutil.py", line 513, in rmtree
return _rmtree_unsafe(path, onerror)
File "c:\users\winng\appdata\local\programs\python\python37\lib\shutil.py", line 401, in _rmtree_unsafe
onerror(os.rmdir, path, sys.exc_info())
File "c:\users\winng\appdata\local\programs\python\python37\lib\shutil.py", line 399, in _rmtree_unsafe
os.rmdir(path)
OSError: [WinError 145] The directory is not empty: '.elasticbeanstalk\\app_versions'
2019-02-04 14:50:06,526 (INFO) eb : OSError - [WinError 145] The directory is not empty: '.elasticbeanstalk\\app_versions'
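For reference, the failing call in the traceback is a plain shutil.rmtree on the app_versions folder, which raises WinError 145 when some other process (an antivirus scanner, the search indexer, a sync client) still holds a handle on the freshly written ZIP. Below is a minimal, hypothetical sketch of a retry-based delete that works around such transient locks; the function name, attempt count, and delay are my own assumptions, not anything the EB CLI does itself:
import os
import shutil
import time

def delete_directory_with_retries(path, attempts=5, delay=0.5):
    # Try to remove a directory tree, retrying briefly when Windows reports
    # it as non-empty because another process still holds a handle on a file
    # inside it (WinError 145).
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # give the scanner/indexer time to release the file

# Example: clear the EB CLI staging folder before deploying again.
if os.path.isdir('.elasticbeanstalk/app_versions'):
    delete_directory_with_retries('.elasticbeanstalk/app_versions')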

Related

"ImportError: No module named idlelib" when running Google Dataflow worker

I have a Python 2.7 script I run locally to launch an Apache Beam / Google Dataflow job (SDK 2.12.0). The job takes a CSV file from a Google Storage bucket, processes it, and then creates an entity in Google Datastore for each row. The script ran fine for years... but now it is failing:
INFO:root:2019-05-15T22:07:11.481Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:root:2019-05-15T21:47:13.370Z: JOB_MESSAGE_ERROR: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 280, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 827, in _import_module
return __import__(import_name)
ImportError: No module named idlelib
I believe this error is happening at the worker level (not locally). I don't make reference to idlelib in my script. To make sure it wasn't me, I have installed updates for all google-cloud packages, apache-beam[gcp], etc. locally, just in case. When I tried importing idlelib into my script I got the same error. Any suggestions?
It had been fine for years and started failing from the SDK 2.12.0 release.
What was the last release that this script succeeded on? 2.11?
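One thing worth checking (a guess, not a confirmed fix): the traceback comes from dill.load_session, i.e. the worker is unpickling the launcher's __main__ module, so anything imported at the top level of the launch script gets re-imported on the worker. Beam's SetupOptions has a save_main_session flag that controls this behaviour; a minimal sketch of turning it off, assuming the job is launched with standard PipelineOptions:
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions, SetupOptions

options = PipelineOptions()  # plus your project/runner/staging arguments
# With save_main_session=False the worker no longer replays the launcher's
# top-level imports, so a stray dependency on idlelib cannot break startup.
# Whether this actually resolves the idlelib error here is only a guess.
options.view_as(SetupOptions).save_main_session = False

with beam.Pipeline(options=options) as p:
    pass  # build the CSV -> Datastore pipeline here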

KeyError: 'opsworkscm' when attempting to use the AWS CLI

When attempting to use the AWS CLI for the EC2 instance I'm working with, I receive the following error.
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ aws
Traceback (most recent call last):
File "/usr/bin/aws", line 27, in <module>
sys.exit(main())
File "/usr/bin/aws", line 23, in main
return awscli.clidriver.main()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 54, in main
return driver.main()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 186, in main
command_table = self._get_command_table()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 96, in _get_command_table
self._command_table = self._build_command_table()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 116, in _build_command_table
command_object=self)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/session.py", line 680, in emit
return self._events.emit(event_name, **kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/usr/lib/python2.7/dist-packages/awscli/customizations/opsworkscm.py", line 21, in alias_opsworks_cm
alias_command(command_table, 'opsworkscm', 'opsworks-cm')
File "/usr/lib/python2.7/dist-packages/awscli/customizations/utils.py", line 71, in alias_command
current = command_table[existing_name]
KeyError: 'opsworkscm'
I am not quite sure why this is happening. I am working with other EC2 instances set up similarly to this one that work, but I am not sure what difference may be causing this error.
I ran across this issue in the aws-cli GH repo. I ran sudo pip install awscli and it updated botocore to version 1.4.86, which fixed my issue.
Issue in aws-cli GH repo
I was using Ubuntu Xenial and needed awscli newer than 1.4.38, so I was using awscli from Ubuntu Zesty.
As with pip, you need to upgrade python3-botocore as well, so this worked for me:
apt-get install awscli python3-botocore
(from the Zesty repository).
Your /usr/bin/aws must be an old executable.
Run whereis aws. You will get a list of aws executables.
Find the most recent one by running each of them with --version.
Remove the corrupted executable. In your case sudo rm /usr/bin/aws
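If it helps to see what is on PATH before deleting anything, here is a small, hypothetical diagnostic (not part of the AWS CLI) that lists every aws executable found on PATH together with the version it reports:
import os
import subprocess

# Walk PATH and report every 'aws' executable and the version it prints.
for directory in os.environ.get('PATH', '').split(os.pathsep):
    candidate = os.path.join(directory, 'aws')
    if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
        try:
            version = subprocess.check_output([candidate, '--version'],
                                              stderr=subprocess.STDOUT).strip()
        except subprocess.CalledProcessError as exc:
            version = exc.output.strip()
        print(candidate, version)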

webapp2 application not running locally

I am starting out with webapp2. I have created an application in the following directory.
/home/github_projects/hellowebapp2
But when I try to fire up a server using:
/usr/lib/google-cloud-sdk/bin/dev_appserver.py github_projects/hellowebapp2
I get the following error:
This action requires the installation of components: [app-engine-python]
You cannot perform this action because this Cloud SDK installation is
managed by an external package manager. If you would like to get the
latest version, please see our main download page at:
https://cloud.google.com/sdk/
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/bin/dev_appserver.py", line 35, in <module>
main()
File "/usr/lib/google-cloud-sdk/bin/dev_appserver.py", line 22, in main
command=__file__)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 189, in EnsureInstalledAndRestart
return manager._EnsureInstalledAndRestart(components, msg, command)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 1139, in _EnsureInstalledAndRestart
restart_args=restart_args):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 660, in Install
restart_args=restart_args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 690, in Update
self._EnsureNotDisabled()
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 357, in _EnsureNotDisabled
'The component manager is disabled for this installation')
googlecloudsdk.core.updater.update_manager.UpdaterDisableError: The component manager is disabled for this installation
P.S. I have already installed the SDK from https://cloud.google.com/sdk/docs/#deb
OK, I solved this by installing the specific Python package from:
https://cloud.google.com/sdk/downloads#apt-get
sudo apt-get install google-cloud-sdk-app-engine-python

Python pikascript.py fails from command prompt

I have a script in Python which is used to connect to a RabbitMQ server and consume messages. When I try to run the script from the command prompt as "./pikascript.py" I get the proper output, but when I try to execute the same script as "python pikascript.py" I get the following error:
WARNING:pika.adapters.base_connection:Connection to 16.125.72.210:5671 failed: [Errno 1] _ssl.c:503: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
Traceback (most recent call last):
File "pikascript.py", line 39, in <module>
ssl=True, ssl_options=ssl_options))
File "build\bdist.win-amd64\egg\pika\adapters\blocking_connection.py", line 130, in __init__
File "build\bdist.win-amd64\egg\pika\adapters\base_connection.py", line 72, in __init__
File "build\bdist.win-amd64\egg\pika\connection.py", line 600, in __init__
File "build\bdist.win-amd64\egg\pika\adapters\blocking_connection.py", line 230, in connect
File "build\bdist.win-amd64\egg\pika\adapters\blocking_connection.py", line 301, in _adapter_connect
pika.exceptions.AMQPConnectionError: Connection to 16.125.72.210:5671 failed: [Errno 1] _ssl.c:503: error:1407742E:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert protocol version
I gave the proper path in the environment variables. Are there any dependencies needed to run the pika libraries? Could someone please help me out.
When I ran the script from the command line as "./pikascript.py" it used the Python at "C:\Python\python.exe", but when I ran the same script as "python pikascript.py" it picked up another Python installation on the same machine, where setuptools and the pika library are not installed properly.
So I started executing the script as "C:\Python\python.exe pikascript.py" and the script runs without any error.
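A quick way to confirm which interpreter and which pika installation a given invocation actually picks up (just a diagnostic sketch, nothing specific to this script):
import sys
print(sys.executable)    # the interpreter actually running the script
print(sys.version)

import pika
print(pika.__file__)     # where the pika package was imported from
print(pika.__version__)  # which pika version that is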

Connection reset by peer when using s3, boto, django-storage for static files

I'm trying to switch to Amazon S3 to host the static files for our Django project. I am using django, boto, django-storage and django-compressor. When I run collectstatic on my dev server, I get the error
socket.error: [Errno 104] Connection reset by peer
The total size of all of my static files is 74 MB, which doesn't seem too large. Has anyone seen this before, or have any debugging tips?
Here is the full trace.
Traceback (most recent call last):
File "./manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 443, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 382, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 196, in run_from_argv
self.execute(*args, **options.__dict__)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 232, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 371, in handle
return self.handle_noargs(**options)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 163, in handle_noargs
collected = self.collect()
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 113, in collect
handler(path, prefixed_path, storage)
File "/usr/local/lib/python2.7/dist-packages/django/contrib/staticfiles/management/commands/collectstatic.py", line 303, in copy_file
self.storage.save(prefixed_path, source_file)
File "/usr/local/lib/python2.7/dist-packages/django/core/files/storage.py", line 45, in save
name = self._save(name, content)
File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 392, in _save
self._save_content(key, content, headers=headers)
File "/usr/local/lib/python2.7/dist-packages/storages/backends/s3boto.py", line 403, in _save_content
rewind=True, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1222, in set_contents_from_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 714, in send_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 890, in _send_file_internal
query_args=query_args
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 547, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 966, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 927, in _mexe
raise e
socket.error: [Errno 104] Connection reset by peer
UPDATE: I don't have an answer for how to debug this error, but later it just stopped happening, which makes me think it may have to do with something on S3.
tl;dr
If your bucket is not in the default region, you need to tell boto which region to connect to, e.g. if your bucket is in us-west-2 you need to add the following line to settings.py:
AWS_S3_HOST = 's3-us-west-2.amazonaws.com'
Long explanation:
It's not a permission problem and you should not set your bucket permissions to 'Authenticated users'.
This problem happens if you create your bucket in a region which is not the default one (in my case I was using us-west-2).
If you don't use the default region and you don't tell boto in which region your bucket resides, boto will connect to the default region and S3 will reply with a 307 redirect to the region where the bucket belongs.
Unfortunately, due to this bug in boto:
https://github.com/boto/boto/issues/2207
if the 307 reply arrives before boto has finished uploading the file, boto won't see the redirect and will keep uploading to the default region.
Eventually S3 closes the socket resulting into a 'Connection reset by peer'.
It's a kind of race condition which depends on the size of the object being uploaded and the speed of your internet connection, which explains why it happens randomly.
There are two possible reasons why the OP stopped seeing the error after some time:
- he later created a new bucket in the default region and the problem went away by itself.
- he started uploading only small files, which are fast enough to be fully uploaded by the time S3 replies with the 307.
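For completeness, a minimal settings.py sketch for the boto-based s3boto backend with the bucket's regional endpoint set explicitly; the setting names match django-storages' S3BotoStorage, but the credentials and bucket name are placeholders:
# settings.py (sketch; placeholders for credentials and bucket)
STATICFILES_STORAGE = 'storages.backends.s3boto.S3BotoStorage'

AWS_ACCESS_KEY_ID = 'YOUR-ACCESS-KEY-ID'
AWS_SECRET_ACCESS_KEY = 'YOUR-SECRET-ACCESS-KEY'
AWS_STORAGE_BUCKET_NAME = 'your-bucket-name'

# Point boto at the bucket's own region so uploads are not silently
# redirected (HTTP 307) back from the default region.
AWS_S3_HOST = 's3-us-west-2.amazonaws.com'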
This issue sometimes occurs when you create a new bucket for the first time; you may have to wait some minutes or hours before you start uploading. I don't know why S3 behaves like that. To confirm it, try creating a new bucket and pointing your Django storage at it: you will see the connection reset by peer error when you try to upload anything from your Django project, but if you wait a couple of minutes or hours and try again, it will work. Repeat the same steps and see.
I just had this issue trying to set up a second S3 bucket to use for testing/development, and what helped was deploying an older version of the application.
I have no clue why that would help, but for those of you reading this well after the fact (like me, a couple of hours ago), it's worth trying to deploy a different application version.
You have to set your bucket permissions to Authenticated Users List + Upload/Delete, or you can create a specific user in the IAM section of Amazon and set up the access rights only for that specific user.
This helped me some time ago: Setup S3 for Django