AWS command line tools broken :(

I tried to install awscli after ebcli, and they both broke. Currently, if I type aws s3 ls, it just hangs with no response, and if I try to use eb, I get this error:
Traceback (most recent call last):
File "/usr/local/bin/eb", line 11, in <module>
load_entry_point('awsebcli==3.8.4', 'console_scripts', 'eb')()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 565, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2631, in load_entry_point
return ep.load()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2291, in load
return self.resolve()
File "/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py", line 2297, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/local/lib/python2.7/dist-packages/ebcli/core/ebcore.py", line 43, in <module>
from . import ebglobals, base, io, hooks
File "/usr/local/lib/python2.7/dist-packages/ebcli/core/base.py", line 19, in <module>
from ebcli import __version__
ImportError: cannot import name __version__
I basically need command-line tools for S3 and Elastic Beanstalk, but apparently I have no luck, and will be spending my entire day googling and combing through error messages to try to fix this :(
I'm on Ubuntu 14.04 on a Thinkpad.

It is quite common for different Python libraries to install over each other, causing problems like this.
A popular fix is to use the virtualenv tool to create isolated Python environments.
The AWS documentation for awsebcli has a page showing how: Install the EB CLI in a Virtual Environment
Alternatively, keep using the AWS Command-Line Interface (CLI) since it works across all AWS services, rather than using service-specific command sets like awsebcli (which pre-date the CLI).
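As a rough sketch (the environment name below is illustrative, not from the question), the virtualenv route looks like this:
virtualenv ~/eb-env
. ~/eb-env/bin/activate
pip install awsebcli
eb --version
Because the virtual environment has its own site-packages, awsebcli's dependencies can no longer collide with whatever the system-wide awscli install pulled in.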

Related

"ImportError: No module named idlelib" when running Google Dataflow worker

I have a python 2.7 script I run locally to launch an Apache Beam / Google Dataflow job (SDK 2.12.0). The job takes a csv file from a Google storage bucket, processes it, and then creates an entity in Google Datastore for each row. The script ran fine for years... but now it is failing:
INFO:root:2019-05-15T22:07:11.481Z: JOB_MESSAGE_DETAILED: Workers have started successfully.
INFO:root:2019-05-15T21:47:13.370Z: JOB_MESSAGE_ERROR: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 773, in run
self._load_main_session(self.local_staging_directory)
File "/usr/local/lib/python2.7/dist-packages/dataflow_worker/batchworker.py", line 489, in _load_main_session
pickler.load_session(session_file)
File "/usr/local/lib/python2.7/dist-packages/apache_beam/internal/pickler.py", line 280, in load_session
return dill.load_session(file_path)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 410, in load_session
module = unpickler.load()
File "/usr/lib/python2.7/pickle.py", line 864, in load
dispatch[key](self)
File "/usr/lib/python2.7/pickle.py", line 1139, in load_reduce
value = func(*args)
File "/usr/local/lib/python2.7/dist-packages/dill/_dill.py", line 827, in _import_module
return __import__(import_name)
ImportError: No module named idlelib
I believe this error is happening at the worker level (not locally). I don't make reference to it in my script. To make sure it wasn't me, I installed updates for all google-cloud packages, apache-beam[gcp], etc. locally, just in case. When I tried importing idlelib in my script, I got the same error. Any suggestions?
It had been fine for years and started failing with the SDK 2.12.0 release.
What was the last release that this script succeeded on? 2.11?
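One quick way to test that hypothesis (the 2.11.0 version number comes from the comment above; pinning is a diagnostic step, not a confirmed fix) is to pin the SDK locally and relaunch the job:
pip install apache-beam[gcp]==2.11.0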

AWS DataPipeline Machine Learning AMI tensorflow issues

I'm running the AWS Machine Learning AMI on an EC2 instance. I've confirmed that, from the terminal, both python and jupyter can run
import tensorflow as tf
and that
python pytest.py
(which contains the above tensorflow import) runs with no issues.
I'm now trying to automate my script using DataPipeline along with TaskRunner. The bash command in DataPipeline is, again, just:
python pytest.py
However, I immediately get the following error:
Traceback (most recent call last):
File "pytest.py", line 1, in <module>
import tensorflow as tf
File "/usr/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/usr/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 72, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 61, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 28, in <module>
_pywrap_tensorflow = swig_import_helper()
File "/usr/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow', fp, pathname, description)
ImportError: libcudart.so.7.5: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/get_started/os_setup.md#import_error for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.
It seems like AWS DataPipeline (or TaskRunner?) uses a different environment setup, because, again, I have no issues running the script through an SSH terminal to the instance. I found a few posts which suggested adding cuda to the LD_LIBRARY_PATH, but the AMI instance already has it:
echo $LD_LIBRARY_PATH
/home/ec2-user/src/torch/install/lib:/home/ec2-user/src/cntk/bindings/python/cntk/libs:/usr/local/cuda/lib64:/usr/local/lib:/usr/lib:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/mpi/lib:/home/ec2-user/src/mxnet/mklml_lnx_2017.0.1.20161005/lib:
which clearly contains the cuda library path that tensorflow needs.
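If the difference really is the shell environment (an assumption here: Task Runner typically executes the pipeline's bash command in a non-interactive shell that never sources the login profile where LD_LIBRARY_PATH is exported), one workaround is to set the variable inline in the command itself; the cuda path below is taken from the echo output above:
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && python pytest.py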

KeyError: 'opsworkscm' when attempting to use the AWS CLI

When attempting to use the AWS CLI for the EC2 instance I'm working with, I receive the following error.
[ec2-user@ip-xxx-xxx-xxx-xxx ~]$ aws
Traceback (most recent call last):
File "/usr/bin/aws", line 27, in <module>
sys.exit(main())
File "/usr/bin/aws", line 23, in main
return awscli.clidriver.main()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 54, in main
return driver.main()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 186, in main
command_table = self._get_command_table()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 96, in _get_command_table
self._command_table = self._build_command_table()
File "/usr/lib/python2.7/dist-packages/awscli/clidriver.py", line 116, in _build_command_table
command_object=self)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/session.py", line 680, in emit
return self._events.emit(event_name, **kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/hooks.py", line 226, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python2.7/site-packages/botocore-1.4.8-py2.7.egg/botocore/hooks.py", line 209, in _emit
response = handler(**kwargs)
File "/usr/lib/python2.7/dist-packages/awscli/customizations/opsworkscm.py", line 21, in alias_opsworks_cm
alias_command(command_table, 'opsworkscm', 'opsworks-cm')
File "/usr/lib/python2.7/dist-packages/awscli/customizations/utils.py", line 71, in alias_command
current = command_table[existing_name]
KeyError: 'opsworkscm'
I am not quite sure why this is happening. I am working with other EC2 instances set up similarly to this one that work, but I am not sure what difference may be causing this error.
I ran across this issue in the aws-cli GH repo. I ran sudo pip install awscli and it updated botocore to version 1.4.86, which fixed my issue.
I was using Ubuntu Xenial and needed awscli newer than 1.4.38, so I used the awscli package from Ubuntu Zesty.
As with pip, you also need to upgrade python3-botocore, so this worked for me:
apt-get install awscli python3-botocore
(from zesty repository).
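If both repositories are configured, apt can be pointed at the newer release explicitly; a sketch, assuming zesty is already listed in your sources:
sudo apt-get install -t zesty awscli python3-botocore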
Your /usr/bin/aws must be an old executable.
Run whereis aws. You will get a list of aws executables.
Check each one with --version to find the most recent.
Remove the outdated executable. In your case: sudo rm /usr/bin/aws
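Put together, that procedure looks roughly like this (the /usr/local/bin path is a placeholder for wherever whereis finds the other copies):
whereis aws
/usr/bin/aws --version
/usr/local/bin/aws --version   # placeholder path; check each location whereis reported
sudo rm /usr/bin/aws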

webapp2 application not running locally

I am starting with webapp2. I have created an application in the following directory.
/home/github_projects/hellowebapp2
But when I try to fire up a server using:
/usr/lib/google-cloud-sdk/bin/dev_appserver.py github_projects/hellowebapp2
I get the following error:
This action requires the installation of components: [app-engine-python]
You cannot perform this action because this Cloud SDK installation is
managed by an external package manager. If you would like to get the
latest version, please see our main download page at:
https://cloud.google.com/sdk/
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/bin/dev_appserver.py", line 35, in <module>
main()
File "/usr/lib/google-cloud-sdk/bin/dev_appserver.py", line 22, in main
command=__file__)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 189, in EnsureInstalledAndRestart
return manager._EnsureInstalledAndRestart(components, msg, command)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 1139, in _EnsureInstalledAndRestart
restart_args=restart_args):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 660, in Install
restart_args=restart_args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 690, in Update
self._EnsureNotDisabled()
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/updater/update_manager.py", line 357, in _EnsureNotDisabled
'The component manager is disabled for this installation')
googlecloudsdk.core.updater.update_manager.UpdaterDisableError: The component manager is disabled for this installation
P.S. I have already installed the SDK from https://cloud.google.com/sdk/docs/#deb
OK, I solved this by installing the specific Python package from:
https://cloud.google.com/sdk/downloads#apt-get
sudo apt-get install google-cloud-sdk-app-engine-python
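With that component installed, the original command from the question should start the local development server:
/usr/lib/google-cloud-sdk/bin/dev_appserver.py github_projects/hellowebapp2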

boto3 throws an error when packaged as an RPM

I am using boto3 in my project, and when I package it as an RPM it raises an error while initializing the EC2 client.
<class 'botocore.exceptions.DataNotFoundError'>:Unable to load data for: _endpoints. Traceback -Traceback (most recent call last):
File "roboClientLib/boto/awsDRLib.py", line 186, in _get_ec2_client
File "boto3/__init__.py", line 79, in client
File "boto3/session.py", line 200, in client
File "botocore/session.py", line 789, in create_client
File "botocore/session.py", line 682, in get_component
File "botocore/session.py", line 809, in get_component
File "botocore/session.py", line 179, in <lambda>
File "botocore/session.py", line 475, in get_data
File "botocore/loaders.py", line 119, in _wrapper
File "botocore/loaders.py", line 377, in load_data
DataNotFoundError: Unable to load data for: _endpoints
Can anyone help me here? Probably boto3 requires some runtime resolution that it is not able to do inside the RPM.
I tried setting LD_LIBRARY_PATH in /etc/environment, which is not working:
export LD_LIBRARY_PATH="/usr/lib/python2.6/site-packages/boto3:/usr/lib/python2.6/site-packages/boto3-1.2.3.dist-info:/usr/lib/python2.6/site-packages/botocore:
I faced the same issue:
botocore.exceptions.DataNotFoundError: Unable to load data for: ec2/2016-04-01/service-2
I figured out that the corresponding data directory was missing. Updating botocore by running the following solved my issue:
pip install --upgrade botocore
Botocore depends on a set of service definition files that it uses to generate clients on the fly. Boto3 further depends on another set of files that it uses to generate resource clients. You will need to include these in any install of boto3 or botocore; the files need to be located in the 'data' folder at the root of the respective library.
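A quick way to check whether the packaged install actually shipped those files (a diagnostic sketch of mine; the data directory location is the standard botocore layout, and the site-packages path is taken from the question's LD_LIBRARY_PATH):
python -c "import botocore, os; print(os.path.join(os.path.dirname(botocore.__file__), 'data'))"
ls /usr/lib/python2.6/site-packages/botocore/data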
I faced a similar issue, which was due to an old version of botocore. Once I updated it, it started working.
Please consider using the command below:
pip install --upgrade botocore
Also, please ensure you have set up a boto configuration profile.
Boto searches for credentials in the following order (a short example of the environment-variable option follows the list).
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
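For example, the environment-variable option amounts to exporting the two standard keys before running your code (the values below are the placeholders used in AWS documentation, not real credentials):
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY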