When executing the following command in the macOS terminal I got an error: aws configure
I couldn't really find anything helpful online, and I am a newbie to macOS and to AWS. Can somebody please help me fix it?
The same thing happens with other commands like aws --version, but commands like which aws work normally.
Traceback (most recent call last):
File "botocore/configloader.py", line 149, in raw_config_parse
File "configparser.py", line 696, in read
File "configparser.py", line 1091, in _read
configparser.DuplicateOptionError: While reading from '/Users/sj-auteon/.aws/credentials' [line 4]: option 'aws_access_key_id' in section 'default' already exists
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "aws", line 27, in <module>
File "aws", line 23, in main
File "awscli/clidriver.py", line 90, in main
File "awscli/clidriver.py", line 99, in create_clidriver
File "botocore/session.py", line 361, in full_config
File "botocore/configloader.py", line 152, in raw_config_parse
botocore.exceptions.ConfigParseError: Unable to parse config file: /Users/sj-auteon/.aws/credentials
[831] Failed to execute script aws
Based on the comments, the solution was to delete the existing .aws/credentials file and create a new one using the aws configure command.
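The first traceback shows the mechanism: botocore reads ~/.aws/credentials with Python's configparser, which (in its default strict mode) rejects a key repeated within one section. A minimal reproduction, with made-up key values:

```python
import configparser

# A credentials file with aws_access_key_id duplicated in [default],
# mirroring the broken file from the question (values are placeholders):
bad = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = example-secret
"""

good = """[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = example-secret
"""

parser = configparser.ConfigParser()
try:
    parser.read_string(bad)
except configparser.DuplicateOptionError as e:
    print("duplicate option:", e.option)  # duplicate option: aws_access_key_id

# The same parser accepts the file once the duplicate line is removed:
parser.read_string(good)
print(parser["default"]["aws_access_key_id"])  # AKIAEXAMPLE
```

So rather than recreating the file, simply deleting the duplicated aws_access_key_id line would also have fixed it.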
I'm trying to use SAM Accelerate as recommended by AWS. However, the sam sync command is failing
PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: jsonpickle==2.1.0
The requirement for jsonpickle is included in the requirements.txt file, and it's installed locally.
foo#bar:~/sam-project$ pip freeze | grep jsonpickle
jsonpickle==2.1.0
The exact same error occurs when I use sam build, but I'm able to use sam build -u to build in a container and make the build work. Unfortunately, that doesn't seem to be an option for sam sync.
I have found a few occurrences of similar issues, but none of them address the root cause, and thus I am unsure of how to fix this.
Full output
foo#bar:~/sam-project$ sam sync --watch
The SAM CLI will use the AWS Lambda, Amazon API Gateway, and AWS StepFunctions APIs to upload your code without
performing a CloudFormation deployment. This will cause drift in your CloudFormation stack.
**The sync command should only be used against a development stack**.
Confirm that you are synchronizing a development stack.
Enter Y to proceed with the command, or enter N to cancel:
[Y/n]: y
Queued infra sync. Waiting for in progress code syncs to complete...
Starting infra sync.
Manifest file is changed (new hash: 1719a58de4024a0928ae0e3ddf42ac82) or dependency folder (.aws-sam/deps/ce2e5caa-e309-401a-8ab1-425d3c3e399d) is missing for (CoreLayer), downloading dependencies and copying/building source
Building layer 'CoreLayer'
Running PythonPipBuilder:CleanUp
Clean up action: .aws-sam/deps/ce2e5caa-e309-401a-8ab1-425d3c3e399d does not exist and will be skipped.
Running PythonPipBuilder:ResolveDependencies
Build Failed
Failed to sync infra. Code sync is paused until template/stack is fixed.
Traceback (most recent call last):
File "aws_lambda_builders/workflows/python_pip/actions.py", line 54, in execute
File "aws_lambda_builders/workflows/python_pip/packager.py", line 156, in build_dependencies
File "aws_lambda_builders/workflows/python_pip/packager.py", line 258, in build_site_packages
File "aws_lambda_builders/workflows/python_pip/packager.py", line 282, in _download_dependencies
File "aws_lambda_builders/workflows/python_pip/packager.py", line 365, in _download_all_dependencies
File "aws_lambda_builders/workflows/python_pip/packager.py", line 717, in download_all_dependencies
aws_lambda_builders.workflows.python_pip.packager.NoSuchPackageError: Could not satisfy the requirement: jsonpickle==2.1.0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "aws_lambda_builders/workflow.py", line 301, in run
File "aws_lambda_builders/workflows/python_pip/actions.py", line 57, in execute
aws_lambda_builders.actions.ActionFailedError: Could not satisfy the requirement: jsonpickle==2.1.0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "samcli/lib/build/app_builder.py", line 760, in _build_function_in_process
File "aws_lambda_builders/builder.py", line 164, in build
File "aws_lambda_builders/workflow.py", line 95, in wrapper
File "aws_lambda_builders/workflow.py", line 308, in run
aws_lambda_builders.exceptions.WorkflowFailedError: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: jsonpickle==2.1.0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "samcli/commands/build/build_context.py", line 248, in run
File "samcli/lib/build/app_builder.py", line 221, in build
File "samcli/lib/build/build_strategy.py", line 358, in build
File "samcli/lib/build/build_strategy.py", line 78, in build
File "samcli/lib/build/build_strategy.py", line 361, in _build_layers
File "samcli/lib/build/build_strategy.py", line 380, in _run_builds_async
File "samcli/lib/utils/async_utils.py", line 131, in run_async
File "samcli/lib/utils/async_utils.py", line 90, in run_given_tasks_async
File "asyncio/base_events.py", line 587, in run_until_complete
File "samcli/lib/utils/async_utils.py", line 58, in _run_given_tasks_async
File "concurrent/futures/thread.py", line 57, in run
File "samcli/lib/build/build_strategy.py", line 388, in build_single_layer_definition
File "samcli/lib/build/build_strategy.py", line 546, in build_single_layer_definition
File "samcli/lib/build/build_strategy.py", line 430, in build_single_layer_definition
File "samcli/lib/build/build_strategy.py", line 218, in build_single_layer_definition
File "samcli/lib/build/app_builder.py", line 552, in _build_layer
File "samcli/lib/build/app_builder.py", line 763, in _build_function_in_process
samcli.lib.build.exceptions.BuildError: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: jsonpickle==2.1.0
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "samcli/lib/sync/watch_manager.py", line 190, in _execute_infra_sync
File "samcli/lib/sync/watch_manager.py", line 142, in _execute_infra_context
File "samcli/commands/build/build_context.py", line 308, in run
samcli.commands.exceptions.UserException: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: jsonpickle==2.1.0
samcli.commands.exceptions.UserException: PythonPipBuilder:ResolveDependencies - Could not satisfy the requirement: jsonpickle==2.1.0
Unfortunately no one was able to assist here, so I opened an issue on GitHub.
Eventually the cause became clear, and it's not actually an issue with SAM. The problem is that I use an AWS BuildArtifact feed, so any sam build or sam sync action will try to pull packages from that feed. However, the token for that feed expires after 12 hours.
The issue remains open and the SAM team is investigating the error that is displayed; hopefully they will implement a change that surfaces the underlying error message, which would have made diagnosing this issue a whole lot easier.
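If the private feed in question is AWS CodeArtifact (an assumption on my part; the domain and repository names below are purely illustrative), one workaround is to refresh pip's auth token before each build, since the token expires after 12 hours:

```shell
# Hypothetical domain/repository names; re-run whenever the token expires
aws codeartifact login --tool pip --domain my-domain --repository my-repo
sam sync --watch
```

With a fresh token, pip can again resolve jsonpickle==2.1.0 from the feed instead of failing with the opaque "Could not satisfy the requirement" error.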
After updating gcloud from version 290.0.1 to version 306.0.0, I'm getting an error when I run a gsutil cp command:
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil.py", line 122, in RunMain
import gslib.__main__
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 53, in <module>
import boto
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/__init__.py", line 1216, in <module>
boto.plugin.load_plugins(config)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/plugin.py", line 93, in load_plugins
_import_module(file)
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/vendored/boto/boto/plugin.py", line 75, in _import_module
return imp.load_module(name, file, filename, data)
File "/usr/lib/python3.6/imp.py", line 235, in load_module
return load_source(name, filename, file)
File "/usr/lib/python3.6/imp.py", line 172, in load_source
module = _load(spec)
File "/usr/share/google/boto/boto_plugins/compute_auth.py", line 18, in <module>
import urllib2
ModuleNotFoundError: No module named 'urllib2'
Following the downgrade instructions at https://cloud.google.com/sdk/docs/downloads-apt-get#downgrading_cloud_sdk_versions temporarily fixes the issue:
sudo apt-get update && sudo apt-get install google-cloud-sdk=290.0.1-0
But I'd like to know how to get this working with the latest version.
I installed version 306.0.0 and ran a gsutil cp command, but I didn't face the issue. For this reason, checking for causes of the error ModuleNotFoundError: No module named 'urllib2', it seems that they are always related to a Python library that isn't working correctly, as you can check in these two examples here and here.
However, from further searches, this plugin is usually used within Compute Engine and in startup scripts for VMs with Python, more specifically in relation to the file compute_auth.py, which appears in the traceback; you can check here for more information about this file.
Considering that, the new version of the Cloud SDK brings some updates to Compute Engine that could be causing the error. In case you are indeed using Python within your applications, I would give the solution from this case here a try, which is to update the file compute_auth.py, changing the line import urllib2 to import urllib.request as urllib2.
In case this doesn't fix it, raising a bug within Google's Issue Tracker would be the best option for further investigation.
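For context on that one-line patch: Python 3 removed the urllib2 module, splitting it into urllib.request and urllib.error, so a plugin written for Python 2 fails exactly as shown. A small compatibility shim (which is all the suggested edit amounts to) keeps the old name working on both versions:

```python
# urllib2 exists only on Python 2; on Python 3 its functionality lives in
# urllib.request. Aliasing the new module under the old name lets the rest
# of the plugin code keep calling the same entry points.
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # Python 3 replacement

# The alias exposes the functions legacy code typically calls, e.g. urlopen:
print(hasattr(urllib2, "urlopen"))  # True
```

The try/except form is slightly more robust than unconditionally rewriting the import, since it leaves genuine Python 2 environments untouched.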
I had a similar case. In my case, Travis CI/CD was giving the error below. What I did was add the script below to the before_script section of my .travis.yml file.
Error:
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil", line 21, in <module>
gsutil.RunMain()
File "/usr/lib/google-cloud-sdk/platform/gsutil/gsutil.py", line 121, in RunMain
import gslib.__main__
File "/usr/lib/google-cloud-sdk/platform/gsutil/gslib/__main__.py", line 83, in <module>
import httplib2
ModuleNotFoundError: No module named 'httplib2'
error Command failed with exit code 1
Fix:
before_script:
- pip install httplib2 crcmod
I tried installing Apache Atlas on a single EC2 node, but it fails to start:
wget http://www-eu.apache.org/dist/atlas/1.0.0/apache-atlas-1.0.0-sources.tar.gz
tar xvfz apache-atlas-1.0.0-sources.tar.gz
cd apache-atlas-sources-1.0.0/
export MAVEN_OPTS="-Xms2g -Xmx2g"
mvn clean -DskipTests package -Pdist,embedded-hbase-solr
python atlas_start.py
/tmp/apache-atlas-sources-1.0.0/distro/src/conf/atlas-env.sh: line 59: MANAGE_LOCAL_HBASE=${hbase.embedded}: bad substitution
/tmp/apache-atlas-sources-1.0.0/distro/src/conf/atlas-env.sh: line 62: MANAGE_LOCAL_SOLR=${solr.embedded}: bad substitution
/tmp/apache-atlas-sources-1.0.0/distro/src/conf/atlas-env.sh: line 65: MANAGE_EMBEDDED_CASSANDRA=${cassandra.embedded}: bad substitution
/tmp/apache-atlas-sources-1.0.0/distro/src/conf/atlas-env.sh: line 68: MANAGE_LOCAL_ELASTICSEARCH=${elasticsearch.managed}: bad substitution
Exception: [Errno 2] No such file or directory
Traceback (most recent call last):
File "atlas_start.py", line 163, in <module>
returncode = main()
File "atlas_start.py", line 73, in main
mc.expandWebApp(atlas_home)
File "/tmp/apache-atlas-sources-1.0.0/distro/src/bin/atlas_config.py", line 160, in expandWebApp
jar(atlasWarPath)
File "/tmp/apache-atlas-sources-1.0.0/distro/src/bin/atlas_config.py", line 213, in jar
process = runProcess(commandline)
File "/tmp/apache-atlas-sources-1.0.0/distro/src/bin/atlas_config.py", line 249, in runProcess
p = subprocess.Popen(commandline, stdout=stdoutFile, stderr=stderrFile, shell=shell)
File "/usr/lib64/python2.7/subprocess.py", line 390, in __init__
errread, errwrite)
File "/usr/lib64/python2.7/subprocess.py", line 1025, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
How can I install Apache Atlas on a single AWS EC2 instance?
Thanks.
I agree you should check the script. However, the notes are not very clear: you need to have configured it too. That means defining whether you are using a pre-built ZooKeeper installation or not, but more importantly, Atlas by default uses HBase as its store. You MUST have HDFS available too, and change the config to point to the HDFS NameNode (usually on port 9000).
Hope this helps.
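On the "bad substitution" lines specifically: ${hbase.embedded} is a Maven filtering placeholder, not shell syntax, and bash rejects a dot inside a parameter expansion. Those messages suggest atlas_start.py is being run against the unfiltered atlas-env.sh from the source tree rather than the one produced by the build. The shell error is easy to reproduce in isolation:

```shell
# A dot is not legal inside ${...} in bash, so this fails with
# "bad substitution" -- the same message atlas-env.sh prints.
bash -c 'echo ${hbase.embedded}' 2>&1 | grep -o 'bad substitution'
```

In other words, run Atlas from the built distribution under distro/target (where Maven has substituted the placeholders), not from distro/src.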
Please check whether JAVA_HOME is initialized and has the correct value. Initializing it with a valid value solved the issue for me.
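This matches the traceback: atlas_config.py's runProcess() calls subprocess.Popen on the jar tool, and if JAVA_HOME is unset and no jar binary is on PATH, Popen raises exactly "[Errno 2] No such file or directory". A minimal sketch of that failure mode (using a deliberately nonexistent command name):

```python
import subprocess

# Launching a command that does not exist raises OSError with errno 2
# (ENOENT) -- the same error that ends the Atlas traceback above.
try:
    subprocess.Popen(["no-such-jar-binary"])
except OSError as e:
    print(e.errno)  # 2
```

The error message unhelpfully omits which file is missing, which is why it looks like a mysterious installation failure rather than a missing java/jar executable.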
I'm trying to get a Google app up and running on my local machine; however, I'm facing an issue when running the setup scripts. The script errors out and tells me that there is no module time, and it seems to be breaking in the google-cloud-sdk....
Things I've tried:
Importing time in Python (it works)
Trying this to no avail: https://apple.stackexchange.com/questions/96308/python-installation-messed-up
Traceback (most recent call last):
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/_python_runtime.py", line 83, in <module>
_run_file(__file__, globals())
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/_python_runtime.py", line 79, in _run_file
execfile(_PATHS.script_file(script_name), globals_)
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 175, in <module>
main()
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime.py", line 155, in main
sandbox.enable_sandbox(config)
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 183, in enable_sandbox
__import__('%s.threading' % dist27.__name__)
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/threading.py", line 13, in <module>
from time import time as _time, sleep as _sleep
File "/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/sandbox.py", line 984, in load_module
raise ImportError('No module named %s' % fullname)
ImportError: No module named time
Here is my current $PATH:
/Users/kennethryan/Projects/go-edu-store/y/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
Seeing as this is an inactive, year-old issue, updating the Google Cloud tools to their latest versions by running gcloud components update should fix this.
Also, ensuring that you are using the Python installation provided by gcloud, and that there are no conflicting CLOUDSDK_PYTHON environment variables, should prevent this.
If this issue is seen again in the future, it is recommended to report it directly to the Google Public Issue Tracker so that it can be properly triaged by the gcloud engineering team.
In my case, I resolved this problem by setting
export PYTHONPATH=$PYTHONPATH:/usr/lib64/python2.7/lib-dynload/
which is the directory where the timemodule.so file is located.
I have a web app that I developed using the Bottle micro framework.
However, it crashes a lot, and always suddenly, without any action (without even using the web app). So I reviewed the log files and found the following errors (I have no idea what causes them):
Traceback (most recent call last):
File "/home/hamoud/lib/python2.7/bottle.py", line 2699, in run
server.run(app)
File "/home/hamoud/lib/python2.7/bottle.py", line 2385, in run
srv = make_server(self.host, self.port, handler, **self.options)
File "/usr/local/lib/python2.7/wsgiref/simple_server.py", line 144, in make_server
server = server_class((host, port), handler_class)
File "/usr/local/lib/python2.7/SocketServer.py", line 419, in __init__
self.server_bind()
File "/usr/local/lib/python2.7/wsgiref/simple_server.py", line 48, in server_bind
HTTPServer.server_bind(self)
File "/usr/local/lib/python2.7/BaseHTTPServer.py", line 108, in server_bind
SocketServer.TCPServer.server_bind(self)
File "/usr/local/lib/python2.7/SocketServer.py", line 430, in server_bind
self.socket.bind(self.server_address)
File "/usr/local/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
TypeError: 'int' object is not callable
and
Traceback (most recent call last):
File "interface.py", line 29, in <module>
run(host="localhost", port=32471, reloader=True, debug=True)
File "/home/hamoud/lib/python2.7/bottle.py", line 2657, in run
os.utime(lockfile, None) # I am alive!
OSError: [Errno 2] No such file or directory: '/tmp/bottle.gQmJc8.lock'
However, the second error doesn't crash the application (the application continues working), but the first one requires manual work (running the app again).
I could schedule a cron job to restart the application when it crashes, but I'd like to know what the actual problem in my app is.
A few ideas come to mind:
Could there be another program on your machine (e.g., a cron job) that is deleting files from /tmp?
Are you using the latest version of Bottle? (From the line number in your stacktrace, it looks like you might not be.)
If nothing else works, try running without reloader=True (or use reloader=False). I looked at the Bottle code, and that change should at least work around the problem, even though we don't know the cause (yet).
Hope that helps.
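To illustrate the first idea: with reloader=True, Bottle's parent process periodically touches a lockfile in /tmp (the "# I am alive!" line in your second traceback). If something cleans /tmp out from under it, that touch fails with exactly the OSError you saw. A small sketch of the mechanism (the lockfile name here is made up):

```python
import os
import tempfile

# Create a stand-in for Bottle's reloader lockfile.
lockfile = os.path.join(tempfile.gettempdir(), "bottle.demo.lock")
open(lockfile, "w").close()

os.utime(lockfile, None)   # succeeds while the file exists ("I am alive!")

os.remove(lockfile)        # simulate tmpwatch/a cron job cleaning /tmp
try:
    os.utime(lockfile, None)
except OSError as e:
    print(e.errno)  # 2 -- "[Errno 2] No such file or directory", as in the log
```

That is why disabling the reloader should work around the crash even before the /tmp cleaner is identified.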