Chef aws client - amazon-web-services

I can't quite figure out how to use this aws cookbook. My goal is to download a file from my S3 bucket. According to the documentation, I've put this content in my recipe:
aws = data_bag_item('aws', 'dev')
aws_s3_file '/tmp/authz.war' do
  bucket 'living-artifacts-dev'
  remote_path '/authz/authz.war'
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'eu-central-1'
end
All values are populated correctly and I've also tested them with the AWS CLI. Nevertheless, chef-client reports this error:
=========================================================================
Error executing action `create` on resource 'aws_s3_file[/tmp/authz.war]'
=========================================================================
Net::HTTPServerException
------------------------
remote_file[/tmp/authz.war] (/var/chef/cache/cookbooks/aws/providers/s3_file.rb line 40) had an error: Net::HTTPServerException: 403 "Forbidden"
How could I debug this?
EDIT
I've tested it using the AWS command-line client. I first set the credentials with aws configure and provided the requested values. This command:
aws s3 cp s3://living-artifacts-dev/authz/authz.war authz.war
completes successfully and the file is downloaded.
EDIT
More detailed error message:
==> default: * aws_s3_file[/tmp/authz.war] action create
==> default:
==> default: * chef_gem[aws-sdk] action install
==> default: [2017-03-03T11:25:16+00:00] INFO: chef_gem[aws-sdk] installed aws-sdk at ~> 2.2
==> default:
==> default: - install version ~> 2.2 of package aws-sdk
==> default: [2017-03-03T11:25:16+00:00] INFO: Remote and local files do not match, running create operation.
==> default: * chef_gem[aws-sdk] action install (up to date)
==> default: * remote_file[/tmp/authz.war] action create
==> default: [2017-03-03T11:25:16+00:00] INFO: HTTP Request Returned 403 Forbidden:
==> default: [2017-03-03T11:25:16+00:00] WARN: remote_file[/tmp/authz.war] cannot be downloaded from https://living-artifacts-dev.s3.eu-central-1.amazonaws.com/authz/authz.war?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=sFo6JjohgYi%2BYi4Ut7pTy9EGVDCG89IROX%2Bw7ERR%2F20170303%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20170303T112516Z&X-Amz-Expires=300&X-Amz-SignedHeaders=host&X-Amz-Signature=f3c2b371ad4e1fe24745459adf0463c708e0363a139b598b04e40789c43ded7d: 403 "Forbidden"

Remove the leading slash from remote_path '/authz/authz.war' so that it reads remote_path 'authz/authz.war'.
Here is the example from the AWS cookbook documentation:
aws_s3_file '/tmp/foo' do
  bucket 'i_haz_an_s3_buckit'
  remote_path 'path/in/s3/bukket/to/foo'
  aws_access_key aws['aws_access_key_id']
  aws_secret_access_key aws['aws_secret_access_key']
  region 'us-west-1'
end

You have a forbidden error:
403 "Forbidden"
If your system is running on AWS, you need to ensure it has an appropriate IAM policy attached that grants at least read access to the bucket, and specifically to the object you need.
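As a rough check of which identity the node actually uses, you can run the following on the node itself (a sketch; it assumes the AWS CLI is installed and, for the metadata call, that IMDSv1 is enabled):
# which credentials does the node resolve to?
aws sts get-caller-identity
# if the node relies on an instance profile, this lists the attached role name
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/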

Related

Why did I get "Error loading state: Failed to open state file" (GCP)?

I am new to GCP. I added a bucket:
gsutil mb -p chris02 gs://chris02-state-bucket
When I try to initialize the project:
terraform init
Initializing the backend...
Error loading state: Failed to open state file at gs://chris02-state-bucket/m3/gcs_state/default.tfstate: googleapi: got HTTP response code 403 with body: <?xml version='1.0' encoding='UTF-8'?><Error><Code>UserProjectAccountProblem</Code><Message>User project billing account not in good standing.</Message><Details>The billing account for the owning project is disabled in state absent</Details></Error>
These are the bucket permissions.
What else should I check?
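The error body points at billing ("User project billing account not in good standing") rather than at bucket IAM, so one thing worth checking is whether billing is still enabled on the owning project. A sketch, assuming the gcloud beta billing commands are available and using the names from the question:
# is billing still linked and enabled for the project?
gcloud beta billing projects describe chris02
# can the bucket be read outside of Terraform at all?
gsutil ls gs://chris02-state-bucket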

tensorboard with AWS s3 bucket

On Linux, with tensorflow/tensorboard 2.9.1, I'm trying to run TensorBoard with a log folder on an S3 bucket. I'm properly authenticated: aws s3 ls works. I'm working behind a proxy.
But when I'm running:
AWS_LOG_LEVEL=1 AWS_DEFAULT_PROFILE=myprofile AWS_REGION=eu-west-1 tensorboard --logdir=s3://mybucket/tensorboard-output/
I'm having the following issue:
2022-07-11 10:01:18.596111: E tensorflow/c/logging.cc:40] Curl returned error code 77 - Problem with the SSL CA cert (path? access rights?)
2022-07-11 10:01:18.596228: E tensorflow/c/logging.cc:40] HTTP response code: -1
Resolved remote host IP address:
Request ID:
Exception name:
Error message: curlCode: 77, Problem with the SSL CA cert (path? access rights?)
0 response headers:
2022-07-11 10:01:18.596256: W tensorflow/c/logging.cc:37] If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2022-07-11 10:01:18.596267: W tensorflow/c/logging.cc:37] Request failed, now waiting 0 ms before attempting again.
It seems to be a curl certificate issue. I already tried setting the environment variable CURL_CA_BUNDLE to the right ca.crt, but that doesn't help; it seems this environment variable isn't being checked...
Any ideas? Thanks in advance!
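One way to narrow down whether the CA bundle itself or the proxy is the problem is to hit the bucket endpoint with plain curl, outside of TensorBoard. A sketch with a placeholder CA path and the bucket name from the question:
# does the custom CA bundle validate the S3 endpoint when passed explicitly?
curl -v --cacert /path/to/ca.crt -o /dev/null https://mybucket.s3.eu-west-1.amazonaws.com/
# compare against the system trust store
curl -v -o /dev/null https://mybucket.s3.eu-west-1.amazonaws.com/
If the first call succeeds and the second fails, the proxy is most likely re-signing TLS and the SDK isn't being pointed at the corporate CA.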

Configure AWS toolkit for Visual Studio code

Trying to install and configure the AWS Toolkit for Visual Studio Code.
The command Command Palette -> Create Credentials profile creates two files:
credentials file content
[default]
aws_access_key_id = XXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXX
config file content
[default]
region = eu-central-1
output = text
Choosing Command Palette -> AWS profile -> profile:default generates an error:
2022-02-03 10:03:51 [ERROR]: log level: info
2022-02-03 10:03:52 [INFO]: Retrieving AWS endpoint data
2022-02-03 10:03:52 [INFO]: OS: Windows_NT x64 10.0.19043
2022-02-03 10:03:52 [INFO]: Visual Studio Code Extension Host Version: 1.63.2
2022-02-03 10:03:52 [INFO]: AWS Toolkit Version: 1.35.0
2022-02-03 10:03:52 [INFO]: telemetry cache not found: 'c:\Users\g\AppData\Roaming\Code\User\globalStorage\amazonwebservices.aws-toolkit-vscode\telemetryCache'
2022-02-03 10:04:18 [ERROR]: Error getting AccountId: [InvalidClientTokenId: The security token included in the request is invalid.
at constructor.h (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:5:9005)
at constructor.callListeners (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:21079)
at constructor.emit (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:20788)
at constructor.emitEvent (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:6641)
at constructor.e (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:2227)
at U.runTo (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:18:1767)
at c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:18:1979
at constructor.<anonymous> (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:2438)
at constructor.<anonymous> (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:6696)
at constructor.callListeners (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:21183)
at constructor.emit (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:20788)
at constructor.emitEvent (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:6641)
at constructor.e (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:2227)
at U.runTo (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:18:1767)
at c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:18:1979
at constructor.<anonymous> (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:2438)
at constructor.<anonymous> (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:6696)
at constructor.callListeners (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:21183)
at e (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:6:20964)
at IncomingMessage.<anonymous> (c:\Users\g\.vscode\extensions\amazonwebservices.aws-toolkit-vscode-1.35.0\dist\extension.js:1:209012)
at IncomingMessage.emit (events.js:327:22)
at IncomingMessage.EventEmitter.emit (domain.js:467:12)
at endReadableNT (internal/streams/readable.js:1327:12)
at processTicksAndRejections (internal/process/task_queues.js:80:21)] {
code: 'InvalidClientTokenId',
time: 2022-02-03T08:04:18.158Z,
requestId: '00c18899-6f97-40c1-9788-b2156b350ebb',
statusCode: 403,
retryable: false,
retryDelay: 83.95345343935642
}
2022-02-03 10:04:18 [ERROR]: login: failed to connect with "profile:default": Could not determine Account Id for credentials
How do I connect the AWS Toolkit to my VS Code?
You'll need to get the access key and secret key from AWS and insert them in place of the XXXXXXX placeholders.
You can get this information in the AWS console under IAM -> Access Management -> Users -> Select your user -> Security credentials -> Access Keys.
You will find the Access Key ID there, but the Secret Access Key is only shown once, when the key is created. You may have it stored somewhere, or you can create another access key pair and use that.
I have done this and I can connect with the AWS Toolkit fine.
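If the keys are in place and you still get InvalidClientTokenId, a quick sanity check is to resolve the same profile outside VS Code (a sketch; it assumes the AWS CLI is installed):
aws sts get-caller-identity --profile default
If this also fails with InvalidClientTokenId, the keys themselves are wrong or deactivated rather than anything in the Toolkit.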
AWS Toolkit Config
A couple of things I tried to make it work:
Ensure that the credentials file is in C:\Users\UserName\.aws\credentials
Was prompted that the default region for this profile was xxx. Changed it accordingly.
Restarted VS Code (yeah, I know) :D
Chose the profile and it opened okay.
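As a related check, the AWS CLI can show which files a profile actually resolves to (again assuming the CLI is installed):
aws configure list --profile default
The Location column tells you whether each value came from the shared credentials file, the config file, or an environment variable.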

Jenkins CloudFormation plugin gives Invalid Client Id error

I am trying to launch a CloudFormation stack via the Jenkins CloudFormation plugin from a template stored in Git, but I receive an "Invalid Client Id" error even though I provide the proper access key and secret key.
In addition, an appropriate IAM role is attached to the EC2 instance on which Jenkins is running, and the instance metadata is accessible to the jenkins user.
The error comes up regardless of whether I pass secretKey/accessKey in the Jenkins configuration or not.
Can someone please point out where this is going wrong?
Error
Building in workspace /apps/jenkins/.jenkins/workspace/Cloudformation_Test
> /usr/bin/git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
> /usr/bin/git config remote.origin.url https://xxxx.git # timeout=10
Fetching upstream changes from https://xxxx.git
> /usr/bin/git --version # timeout=10
using GIT_ASKPASS to set credentials Gitlab user webadmdeamon to perform CICD with Jenkins
> /usr/bin/git fetch --tags --progress https://xxx.get +refs/heads/*:refs/remotes/origin/*
> /usr/bin/git rev-parse refs/remotes/origin/master^{commit} # timeout=10
> /usr/bin/git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 827b91075eb0ae5901b641a7588b9b5769ad2ce7 (refs/remotes/origin/master)
> /usr/bin/git config core.sparsecheckout # timeout=10
> /usr/bin/git checkout -f 827b91075eb0ae5901b641a7588b9b5769ad2ce7
Commit message: "Add new file"
> /usr/bin/git rev-list --no-walk 827b91075eb0ae5901b641a7588b9b5769ad2ce7 # timeout=10
Determining to create or update Cloud Formation stack: JenkinsCloudformationTest
Stack not found: JenkinsCloudformationTest. Reason: Detailed Message: The security token included in the request is invalid. (Service: AmazonCloudFormation; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: be71618c-3027-11e9-8d00-45421bf87ce0)
Status Code: 403
Error Code: InvalidClientTokenId
Creating Cloud Formation stack: JenkinsCloudformationTest
Failed to create stack: JenkinsCloudformationTest. Reason: Detailed Message: The security token included in the request is invalid. (Service: AmazonCloudFormation; Status Code: 403; Error Code: InvalidClientTokenId; Request ID: be73364d-3027-11e9-8d00-45421bf87ce0)
Status Code: 403
Error Code: InvalidClientTokenId
Finished: FAILURE
EDIT
I am able to create a stack using the AWS CLI on the same EC2 instance and with the same user.
The log shows that your issue is authentication-related:
Reason: Detailed Message: The security token included in the request is invalid.
(Service: AmazonCloudFormation; Status Code: 403; Error Code: InvalidClientTokenId; Request
ID: be71618c-3027-11e9-8d00-45421bf87ce0)
Status Code: 403
Error Code: InvalidClientTokenId
The problem could be either a bug in the Jenkins plugin or (more likely) a problem with the keys you are providing to the plugin.
The source code for the plugin (code ref), meanwhile, appears to indicate that the plugin always tries to use the access keys you provide. If you leave the key fields blank I guess it tries empty strings as the keys. Thus, the IAM role attached to the instance is probably not relevant.
Note that the error you receive InvalidClientTokenId is documented here:
InvalidClientTokenId
The X.509 certificate or AWS access key ID provided does not exist in our records.
HTTP Status Code: 403
Now, you mention in your update that:
I am able to create a stack using aws cli in the same ec2 instance and with the same user.
So first, try that again, and then have a look in CloudTrail: filter by Event name = CreateStack and inspect the user identity and access key recorded for the call.
Is it really the same user and access key?
I suspect you're going to find that it isn't, and the fix will be to provide the correct access keys. If not, let me know and we can consider other possibilities.
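A sketch of that comparison from the command line, assuming the AWS CLI is available on the Jenkins instance:
# the identity the instance/CLI credentials resolve to
aws sts get-caller-identity
# recent CreateStack calls recorded by CloudTrail, including the user identity and access key used
aws cloudtrail lookup-events --lookup-attributes AttributeKey=EventName,AttributeValue=CreateStack --max-results 5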

Cannot create image with packer in Google Cloud due to service account error

I have created a service account in Google Cloud and installed the Cloud SDK on my Mac.
Whenever I run my Packer template, it complains about an account that I have no idea where it is coming from.
Packer template:
{
  "builders": [
    {
      "type": "googlecompute",
      "account_file": "/Users/Joe/Downloads/account.json",
      "project_id": "rare-truck-123456",
      "source_image": "centos-7-v20180129",
      "zone": "us-west1-a",
      "ssh_username": "centos"
    }
  ]
}
Error:
➜ packer git:(master) ✗ packer build release_google_image.json
googlecompute output will be in this color.
==> googlecompute: Checking image does not exist...
==> googlecompute: Creating temporary SSH key for instance...
==> googlecompute: Using image: centos-7-v20180129
==> googlecompute: Creating instance...
googlecompute: Loading zone: us-west1-a
googlecompute: Loading machine type: n1-standard-1
googlecompute: Loading network: default
googlecompute: Requesting instance creation...
googlecompute: Waiting for creation operation to complete...
==> googlecompute: Error creating instance: 1 error(s) occurred:
==> googlecompute:
==> googlecompute: * The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
Build 'googlecompute' errored: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Some builds didn't complete successfully and had errors:
--> googlecompute: Error creating instance: 1 error(s) occurred:
* The resource '123412341234-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found.
==> Builds finished but no artifacts were created.
Why is it trying to use 123412341234-compute@developer.gserviceaccount.com?
I created a service account with Compute Admin v1 permissions under my project in Google Cloud, downloaded its JSON key file, and renamed it to accounts.json. The name of that service account is different (release-builder@rare-truck-123456.iam.gserviceaccount.com), but Packer seems to ignore it and go after some strange account.
Even the CLI command gcloud info reports the right service account:
Google Cloud SDK [188.0.1]
Platform: [Mac OS X, x86_64] ('Darwin', 'Alexs-MacBook-Pro.local', '16.7.0', 'Darwin Kernel Version 16.7.0: Thu Jan 11 22:59:40 PST 2018; root:xnu-3789.73.8~1/RELEASE_X86_64', 'x86_64', 'i386')
Python Version: [2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]]
Python Location: [/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/Resources/Python.app/Contents/MacOS/Python]
Site Packages: [Disabled]
Installation Root: [/Users/Joe/Downloads/google-cloud-sdk]
Installed Components:
core: [2018.02.08]
gsutil: [4.28]
bq: [2.0.28]
System PATH: [/Users/Joe/Downloads/google-cloud-sdk/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin]
Python PATH: [/Users/Joe/Downloads/google-cloud-sdk/lib/third_party:/Users/Joe/Downloads/google-cloud-sdk/lib:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python27.zip:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old:/usr/local/Cellar/python/2.7.14/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [False]
Installation Properties: [/Users/Joe/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/Joe/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/Joe/.config/gcloud/configurations/config_default]
Account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
Project: [rare-truck-123456]
Current Properties:
[core]
project: [rare-truck-123456]
account: [release-builder@rare-truck-123456.iam.gserviceaccount.com]
disable_usage_reporting: [True]
[compute]
region: [us-west1]
zone: [us-west1-a]
Logs Directory: [/Users/Joe/.config/gcloud/logs]
Last Log File: [/Users/Joe/.config/gcloud/logs/2018.02.09/15.51.18.911677.log]
git: [git version 2.14.3 (Apple Git-98)]
ssh: [OpenSSH_7.4p1, LibreSSL 2.5.0]
Google Compute Engine instances run as a default service account for better integration with the Google Cloud Platform.
As you can read in the documentation [1]: "(...) If the user will be managing virtual machine instances that are configured to run as a service account, you must also grant the roles/iam.serviceAccountActor role."
You need to add the "Service Account User" role to your service account in order to be able to create Google Compute Engine instances.
[1] https://cloud.google.com/iam/docs/understanding-roles#compute_name_short_roles
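A sketch of how that grant might look with gcloud, assuming the identity Packer authenticates as is the release-builder service account from the question:
gcloud projects add-iam-policy-binding rare-truck-123456 \
  --member="serviceAccount:release-builder@rare-truck-123456.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
Depending on your setup it may be enough to grant this role only on the Compute Engine default service account rather than project-wide.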