I am using aws-cli version 2.8.8.
I connect to AWS using LDAP, and authentication succeeds.
If I run aws s3 ls, I get results.
However, when I run aws dynamodb list-tables, nothing is displayed. The same goes for aws ec2 describe-instances: no response.
When I run the same command in debug mode, I can see an exception in awscli.clidriver:
Exception details
2022-11-03 12:27:55,762 - MainThread - awscli.clidriver - DEBUG - Exception caught in main()
Traceback (most recent call last):
File "awscli/clidriver.py", line 458, in main
File "awscli/clidriver.py", line 593, in __call__
File "awscli/clidriver.py", line 769, in __call__
My team members use the same CLI version and account, and they can access all the data. The issue is specific to my Mac terminal.
I tried searching for this issue online, but no one has reported it. It is probably something in my terminal setup, but I am not able to identify the root cause.
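For reference, the failing call is run with the AWS CLI's standard debug flag, which is what produces the traceback above:

aws dynamodb list-tables --debug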
I have a script that deploys a GCP Cloud Function using the following command:
gcloud beta functions deploy pipeline-helper --set-env-vars PROPFILE_BUCKET=${my_bucket},PROPFILE_PATH=${some_property} --source https://source.developers.google.com/projects/{PROJECT}/repos/{REPO}/fixed-aliases/1.0.1/paths/ --entry-point onFlagFileCreation --runtime nodejs6 --trigger-resource ${my_bucket} --trigger-event google.storage.object.finalize --region europe-west1 --memory 1G --timeout 300s
That worked for a few days, the last successful run being December 4th. Then, when launched on December 27th, the command failed with the following output (with the debug option added):
Deploying function (may take a while - up to 2 minutes)...
..failed.
DEBUG: (gcloud.beta.functions.deploy) OperationError: code=13, message=Failed to retrieve function source code
Traceback (most recent call last):
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 841, in Execute
resources = calliope_command.Run(cli=self, args=args)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 770, in Run
resources = command_instance.Run(args)
File "/usr/lib/google-cloud-sdk/lib/surface/functions/deploy.py", line 203, in Run
return _Run(args, track=self.ReleaseTrack(), enable_env_vars=True)
File "/usr/lib/google-cloud-sdk/lib/surface/functions/deploy.py", line 157, in _Run
return api_util.PatchFunction(function, updated_fields)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 308, in CatchHTTPErrorRaiseHTTPExceptionFn
return func(*args, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/util.py", line 364, in PatchFunction
operations.Wait(op, messages, client, _DEPLOY_WAIT_NOTICE)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 126, in Wait
_WaitForOperation(client, request, notice)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 101, in _WaitForOperation
sleep_ms=SLEEP_MS)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/core/util/retry.py", line 219, in RetryOnResult
result = func(*args, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/api_lib/functions/operations.py", line 65, in _GetOperationStatus
raise exceptions.FunctionsError(OperationErrorToString(op.error))
FunctionsError: OperationError: code=13, message=Failed to retrieve function source code
ERROR: (gcloud.beta.functions.deploy) OperationError: code=13, message=Failed to retrieve function source code
Build step 'Execute shell' marked build as failure
Finished: FAILURE
My problem relates to the use of the --source option of this command when it points to a Google Cloud Source Repositories URL (it works with a GCS bucket or a local directory).
I tried using the minimal valid source repository URL https://source.developers.google.com/projects/PROJECT/repos/REPO as mentioned in the official documentation, with no success (same error).
After that, I cloned the official Google Cloud Functions "hello world" sample to Cloud Source Repositories and tried to deploy it with an equivalent command, with no more success (same error). However, I was able to deploy it from a zip uploaded to a GCS bucket in my project, or from a local directory, but not from Cloud Source Repositories.
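For comparison, this is the shape of the GCS-based deploy that does work; the bucket and zip names below are placeholders, not my real values:

gcloud beta functions deploy pipeline-helper --source gs://my-bucket/function-source.zip --entry-point onFlagFileCreation --runtime nodejs6 --trigger-resource my-bucket --trigger-event google.storage.object.finalize --region europe-west1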
The account used to deploy the function (xxx-compute@developer.gserviceaccount.com) has the following roles:
Stackdriver Debugger Agent
Cloud Functions Developer
Cloud Functions Service Agent
Editor
Service Account User
Source Repository Writer
Cloud Source Repositories Service Agent
Storage Object Creator
Storage Object Viewer
Any help would be greatly appreciated.
As mentioned in my last comment to @Raj, the problem was due to a bug in GCP that has since been fixed. The support people were kind and responsive.
Everything is working as expected now!
I used to work at a company and had previously set up my gcloud with gcloud init or gcloud auth login (I don't recall which one). We were using Google Container Engine (GKE).
I've since left the company and been removed from the permissions on that project.
Today I wanted to set up a brand new App Engine app for myself, unrelated to the previous company.
Why can't I run any commands without getting the error below? gcloud init, gcloud auth login, and even gcloud --help or gcloud config list all display errors. It seems to be trying to run gcloud container clusters get-credentials against my previous company's project, but I'm not typing that command at all, and I'm in a different zone and interested in a different project. Where is my gcloud config getting these defaults?
Is this a case where I need to delete my .config/gcloud folder? That seems like a rather extreme solution just to log in to a different project.
Traceback (most recent call last):
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 65, in <module>
main()
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/gcloud.py", line 61, in main
sys.exit(googlecloudsdk.gcloud_main.main())
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 130, in main
gcloud_cli = CreateCLI([])
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/gcloud_main.py", line 119, in CreateCLI
generated_cli = loader.Generate()
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 329, in Generate
cli = self.__MakeCLI(top_group)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 517, in __MakeCLI
log.AddFileLogging(self.__logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 676, in AddFileLogging
_log_manager.AddLogsDir(logs_dir=logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 365, in AddLogsDir
self._CleanUpLogs(logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 386, in _CleanUpLogs
self._CleanLogsDir(logs_dir)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/core/log.py", line 412, in _CleanLogsDir
os.remove(log_file_path)
OSError: [Errno 13] Permission denied: '/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log'
And the log file:
/Users/terence/.config/gcloud/logs/2017.07.27/19.07.37.248117.log
2017-07-27 19:07:37,252 DEBUG root Loaded Command Group: ['gcloud', 'container']
2017-07-27 19:07:37,253 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters']
2017-07-27 19:07:37,254 DEBUG root Loaded Command Group: ['gcloud', 'container', 'clusters', 'get_credentials']
2017-07-27 19:07:37,330 DEBUG root Running [gcloud.container.clusters.get-credentials] with arguments: [--project: "REMOVED_PROJECT", --zone: "DIFFERENT_ZONE", NAME: "REMOVED_CLUSTER_NAME"]
2017-07-27 19:07:37,331 INFO ___FILE_ONLY___ Fetching cluster endpoint and auth data.
2017-07-27 19:07:37,591 DEBUG root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
Traceback (most recent call last):
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 712, in Execute
resources = args.calliope_command.Run(cli=self, args=args)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 871, in Run
resources = command_instance.Run(args)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/surface/container/clusters/get_credentials.py", line 69, in Run
cluster = adapter.GetCluster(cluster_ref)
File "/usr/local/Caskroom/google-cloud-sdk/latest/google-cloud-sdk/lib/googlecloudsdk/api_lib/container/api_adapter.py", line 213, in GetCluster
raise api_error
HttpException: ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
2017-07-27 19:07:37,596 ERROR root (gcloud.container.clusters.get-credentials) ResponseError: code=403, message=Required "container.clusters.get" permission for "projects/REMOVED_PROJECT/zones/DIFFERENT_ZONE/clusters/REMOVED_CLUSTER_NAME".
I had to delete my .config/gcloud folder to make this work, although I don't believe that is a good "solution".
Okay, so I'm not sure if things have changed, but I ran into a similar issue. Please try this before nuking your configuration.
gcloud supports multiple accounts, and you can see which account is active by running gcloud auth list:
ACTIVE  ACCOUNT
*       Work-Email@company.com
        Personal-Email@gmail.com
If you are not on the correct one, you can run
$ gcloud config set account Personal-Email@gmail.com
and it will switch to the correct account. Running gcloud auth list again should now show ACTIVE next to your personal account.
If you haven't authenticated with your personal account yet, you'll need to log in first: run gcloud auth login Personal-Email@gmail.com, follow the flow from there, and then return to the steps above.
Make sure to set the project (gcloud config set project PROJECT_ID) and whatever else you may need when switching.
From there, I found it's STILL possible that you might not be authenticated correctly. You may need to restart your terminal session for this; for me, simply doing a source ~/.bash_profile was sufficient. (Perhaps I needed to do this to refresh the GOOGLE_APPLICATION_CREDENTIALS environment variable, but I'm not sure.)
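A quick way to check whether that variable is even set in the current shell, and to reload the profile in place instead of restarting the terminal:

echo "$GOOGLE_APPLICATION_CREDENTIALS"
source ~/.bash_profile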
Hope this helps. Try this before nuking your configuration.
Instead of deleting the whole .config/gcloud folder, rename or delete just the .config/gcloud/logs folder and try again.
This solution worked for me :)
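A minimal sketch of the rename, assuming the default config location in your home directory (adjust the path if yours differs):

mv ~/.config/gcloud/logs ~/.config/gcloud/logs.bak
gcloud config list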
I went through the quickstart located here: https://boto3.readthedocs.io/en/latest/guide/quickstart.html
I installed the AWS CLI and configured it with my valid keys. I've double-checked ~/.aws/credentials and ~/.aws/config.
At this point I should be able to run my Python script with python bin/process_sqs_messages.py at the command prompt. The script looks like this:
__author__ = 'chris'
import boto3
sqs = boto3.client('sqs')
# List SQS queues
response = sqs.list_queues()
print(response['QueueUrls'])
I get the following error:
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the ListQueues operation: No account found for the given parameters
Full stack trace:
Traceback (most recent call last):
File "bin/process_sqs_messages.py", line 12, in <module>
response = client.list_queues()
File "/Users/xxxx/.environments/xxxx_env/lib/python3.6/site-packages/botocore/client.py", line 310, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/xxxxx/.environments/xxxxx_env/lib/python3.6/site-packages/botocore/client.py", line 599, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the ListQueues operation: No account found for the given parameters
My guess is that I'm missing a session token, but I'm not sure, and if I am, where/how do I get one? The sample doesn't mention it at all.
I just created a new user and magically it all worked again. My credentials must have been invalidated somewhere, even though the old user still existed and that user's credentials matched what was configured in the AWS CLI.
In my case, I created new users/keys but it didn't work, and I double-checked the following:
My AWS keys were correct.
The policy/permissions were right.
The machine time was correct as well.
It turned out my old terminal session was causing the problem (I hadn't closed it in weeks). After closing it and relaunching the terminal, everything worked perfectly fine.
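If you hit the same InvalidClientTokenId, a quick way to see which identity (if any) a given shell's credentials resolve to is an STS call; sts.get_caller_identity is a standard boto3 API and needs no special permissions:

import boto3

# Prints the account ID, ARN, and access key ID of the active credentials.
# An InvalidClientTokenId here confirms the keys themselves are the problem.
sts = boto3.client('sts')
print(sts.get_caller_identity())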
I am using boto3 in my project, and when I package it as an RPM, initializing the EC2 client raises this error:
<class 'botocore.exceptions.DataNotFoundError'>: Unable to load data for: _endpoints
Traceback (most recent call last):
File "roboClientLib/boto/awsDRLib.py", line 186, in _get_ec2_client
File "boto3/__init__.py", line 79, in client
File "boto3/session.py", line 200, in client
File "botocore/session.py", line 789, in create_client
File "botocore/session.py", line 682, in get_component
File "botocore/session.py", line 809, in get_component
File "botocore/session.py", line 179, in <lambda>
File "botocore/session.py", line 475, in get_data
File "botocore/loaders.py", line 119, in _wrapper
File "botocore/loaders.py", line 377, in load_data
DataNotFoundError: Unable to load data for: _endpoints
Can anyone help me here? boto3 probably needs to resolve some data files at runtime that it cannot find inside the RPM.
I tried setting LD_LIBRARY_PATH in /etc/environment, which did not work:
export LD_LIBRARY_PATH="/usr/lib/python2.6/site-packages/boto3:/usr/lib/python2.6/site-packages/boto3-1.2.3.dist-info:/usr/lib/python2.6/site-packages/botocore:
I faced the same issue:
botocore.exceptions.DataNotFoundError: Unable to load data for: ec2/2016-04-01/service-2
I figured out that the corresponding data directory was missing. Updating botocore by running the following solved my issue:
pip install --upgrade botocore
Botocore depends on a set of service definition files that it uses to generate clients on the fly, and boto3 depends on a further set of files that it uses to generate resource clients. You will need to include these in any install of boto3 or botocore: the files must end up in the data folder at the root of the respective library.
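A small check, runnable from inside the packaged environment, that those data folders actually made it into the RPM (it only inspects the installed layout, using the paths the imported modules report):

import os
import boto3
import botocore

# Each library ships its JSON models in a 'data' directory next to its code;
# if this prints False, the packaging dropped the data files.
for mod in (botocore, boto3):
    data_dir = os.path.join(os.path.dirname(mod.__file__), 'data')
    print(mod.__name__, data_dir, os.path.isdir(data_dir))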
I faced a similar issue that was due to an old version of botocore; once I updated it, it started working.
Please consider using the command below.
pip install --upgrade botocore
Also, please ensure you have set up a boto configuration profile.
Boto3 searches for credentials in the following order (the first two options are illustrated in the sketch after this list):
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
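For illustration, the first two options look like this; the key values below are placeholders (avoid hardcoding real keys in source code):

import boto3

# 1. Credentials passed directly to the client take precedence over all
#    other sources. 2. The same keyword arguments work on a Session.
client = boto3.client(
    'sqs',
    region_name='us-east-1',
    aws_access_key_id='AKIA-PLACEHOLDER',
    aws_secret_access_key='PLACEHOLDER-SECRET',
)
session = boto3.Session(
    aws_access_key_id='AKIA-PLACEHOLDER',
    aws_secret_access_key='PLACEHOLDER-SECRET',
)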
I have tried both of the commands below, and I set the environment variables before launching the scripts, but I am hit with an "AWS was not able to validate the provided access credentials" error. I don't think there is an issue with the keys.
I would appreciate any help fixing this.
I am on an Ubuntu t2.micro instance.
https://spark.apache.org/docs/latest/ec2-scripts.html
export AWS_SECRET_ACCESS_KEY=
export AWS_ACCESS_KEY_ID=
./spark-ec2 -k admin-key1 -i /home/ubuntu/admin-key1.pem -s 3 launch my-spark-cluster
./spark-ec2 --key-pair=admin-key1 --identity-file=/home/ubuntu/admin-key1.pem --region=ap-southeast-2 --zone=ap-southeast-2a launch my-spark-cluster
AuthFailure
AWS was not able to validate the provided access credentials
Traceback (most recent call last):
File "./spark_ec2.py", line 1465, in <module>
main()
File "./spark_ec2.py", line 1457, in main
real_main()
File "./spark_ec2.py", line 1277, in real_main
opts.zone = random.choice(conn.get_all_zones()).name
File "/cskmohan/spark-1.4.1/ec2/lib/boto-2.34.0/boto/ec2/connection.py", line 1759, in get_all_zones
[('item', Zone)], verb='POST')
File "/cskmohan/spark-1.4.1/ec2/lib/boto-2.34.0/boto/connection.py", line 1182, in get_list
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 401 Unauthorized
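To isolate whether the keys themselves are the problem, one option is to call the same API the script dies in (get_all_zones, per the traceback) directly with boto 2, the library spark-ec2 bundles; the region below matches the one passed to the script:

import boto.ec2

# Picks up AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment,
# the same way spark-ec2 does.
conn = boto.ec2.connect_to_region('ap-southeast-2')
print(conn.get_all_zones())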