I'm trying to run Terraform on an AWS EC2 instance that is set up with an instance profile. However, Terraform doesn't seem to be implicitly using the instance profile, and I'm getting an "access denied" error whenever it tries to access my S3 remote state.
From the docs, I can't tell whether I'm required to specify an AWS_METADATA_URL, or whether there's anything else I explicitly need to do to make this work.
Per the Terraform docs:
EC2 Role If you're running Terraform from an EC2 instance with IAM
Instance Profile using IAM Role, Terraform will just ask the metadata
API endpoint for credentials.
This is a preferred approach over any other when running in EC2 as you
can avoid hard coding credentials. Instead these are leased on-the-fly
by Terraform which reduces the chance of leakage.
You can provide the custom metadata API endpoint via the
AWS_METADATA_URL variable which expects the endpoint URL, including
the version, and defaults to http://169.254.169.254:80/latest
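Before assuming the provider is ignoring the profile, it can help to confirm that the instance-profile credentials are actually reachable from the instance and which principal the SDK default chain resolves to. A minimal sketch (assumes IMDSv1 is reachable and the AWS CLI is installed; ROLE_NAME is whatever your instance profile exposes):
# Show the role name attached to the instance profile
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Fetch the temporary credentials for that role (substitute the name printed above)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE_NAME
# Confirm which identity API calls will be made as
aws sts get-caller-identity
If both calls succeed, Terraform should pick up the same credentials without any AWS_METADATA_URL override.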
Here's an example of what I'm trying to run:
# main.tf
provider "aws" {
  region = "${var.region}"
}

terraform {
  backend "s3" {}
}

module "core" {
  // ....
}
# init.sh
terraform init -force-copy -input=false \
-backend-config="bucket=$TERRAFORM_STATE_BUCKET" \
-backend-config="key=$ENVIRONMENT/$SERVICE" \
-backend-config="region=$REGION" \
-upgrade=true
# AWS policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
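Since this identity policy is wide open, the 403 may well be coming from somewhere other than the role itself, for example an S3 bucket policy, an organization SCP, or a KMS key policy if the state bucket is encrypted. A hedged way to inspect the bucket side, reusing the bucket variable from the init script:
# Show any bucket policy attached to the state bucket
aws s3api get-bucket-policy --bucket "$TERRAFORM_STATE_BUCKET"
# Show default encryption settings, in case a KMS key policy is involved
aws s3api get-bucket-encryption --bucket "$TERRAFORM_STATE_BUCKET"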
UPDATE
It seems the S3 ListObjects call is failing in Terraform, even though my policies should allow it to go through:
-----------------------------------------------------
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] DEBUG: Response s3/ListObjects Details:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 403 Forbidden
Connection: close
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Tue, 20 Feb 2018 21:09:36 GMT
Server: AmazonS3
X-Amz-Id-2: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=
X-Amz-Request-Id: FE6B77C5C74BCFFF
-----------------------------------------------------
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>FE6B77C5C74BCFFF</RequestId><HostId>OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=</HostId></Error>
2018/02/20 21:09:37 [DEBUG] [aws-sdk-go] DEBUG: Validate Response s3/ListObjects failed, not retrying, error AccessDenied: Access Denied
status code: 403, request id: FE6B77C5C74BCFFF, host id: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=
2018/02/20 21:09:37 [DEBUG] plugin: waiting for all plugin processes to complete...
Error inspecting state in "s3": AccessDenied: Access Denied
status code: 403, request id: FE6B77C5C74BCFFF, host id: OVK5E3d5R+Jgj3if5lxAXkwuERPZWsJNFJ7NeMYFbSrhQ/h4FfpV4z2mlgXFKT1Hg7lsqJ/jE6Q=
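To narrow down whether Terraform is even using the instance profile, the same ListObjects call can be reproduced outside Terraform from the instance; if this also returns 403, the problem is with the bucket or role rather than with credential discovery. A sketch, reusing the variables from the init script:
# Reproduce the backend's ListObjects call with whatever credentials the CLI resolves
aws s3 ls "s3://$TERRAFORM_STATE_BUCKET/$ENVIRONMENT/$SERVICE" --region "$REGION"
# Double-check which role the call is actually made as
aws sts get-caller-identity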
Related
I'm going mad over a Fluent Bit DaemonSet, installed via Helm in EKS in AWS account yyyyyyy, that is unable to send data to Kinesis Firehose in AWS account xxxxxxxxxx.
The error suggests there is no OIDC provider for my EKS cluster in IAM, but that's false, the provider exists! Can you help?
fluent bit logs:
[2022/06/29 15:22:34] [debug] [output:kinesis_firehose:kinesis_firehose.0] firehose:PutRecordBatch: events=157, payload=71245 bytes
[2022/06/29 15:22:34] [debug] [output:kinesis_firehose:kinesis_firehose.0] Sending log records to delivery stream kinesis_backend
[2022/06/29 15:22:34] [debug] [http_client] not using http_proxy for header
[2022/06/29 15:22:34] [debug] [aws_credentials] Requesting credentials from the EC2 provider..
[2022/06/29 15:22:34] [debug] [input:tail:tail.0] inode=19100461 events: IN_MODIFY
[2022/06/29 15:22:34] [debug] [input chunk] update output instances with new chunk size diff=693
[2022/06/29 15:22:34] [debug] [input:tail:tail.0] inode=19100461 events: IN_MODIFY
[2022/06/29 15:22:34] [debug] [http_client] server firehose.eu-west-1.amazonaws.com:443 will close connection #74
[2022/06/29 15:22:34] [debug] [aws_client] firehose.eu-west-1.amazonaws.com: http_do=0, HTTP Status: 400
[2022/06/29 15:22:34] [error] [aws_client] auth error, refreshing creds
[2022/06/29 15:22:34] [debug] [aws_credentials] Refresh called on the env provider
[2022/06/29 15:22:34] [debug] [aws_credentials] Refresh called on the profile provider
[2022/06/29 15:22:34] [debug] [aws_credentials] Reading shared config file.
[2022/06/29 15:22:34] [debug] [aws_credentials] Shared config file /root/.aws/config does not exist
[2022/06/29 15:22:34] [debug] [aws_credentials] Reading shared credentials file.
[2022/06/29 15:22:34] [error] [aws_credentials] Shared credentials file /root/.aws/credentials does not exist
[2022/06/29 15:22:34] [debug] [aws_credentials] Refresh called on the EKS provider
[2022/06/29 15:22:34] [debug] [aws_credentials] Calling STS..
[2022/06/29 15:22:34] [debug] [http_client] not using http_proxy for header
[2022/06/29 15:22:34] [debug] [http_client] server sts.eu-west-1.amazonaws.com:443 will close connection #74
[2022/06/29 15:22:34] [debug] [aws_client] sts.eu-west-1.amazonaws.com: http_do=0, HTTP Status: 400
[2022/06/29 15:22:34] [debug] [aws_client] Unable to parse API response- response is not valid JSON.
[2022/06/29 15:22:34] [debug] [aws_credentials] STS raw response:
<ErrorResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
<Error>
<Type>Sender</Type>
<Code>InvalidIdentityToken</Code>
<Message>No OpenIDConnect provider found in your account for https://oidc.eks.eu-west-1.amazonaws.com/id/AAAAAAAAAAAAAAAAAA</Message>
</Error>
<RequestId>c517249d-c018-43c3-a712-d0e5080ded86</RequestId>
</ErrorResponse>
fluent-bit service account in namespace newrelic (created by fluentbit Helm chart)
kubectl -n newrelic describe sa fluent-bit
Name: fluent-bit
Namespace: newrelic
Labels: app.kubernetes.io/instance=fluent-bit
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=fluent-bit
app.kubernetes.io/version=1.9.4
helm.sh/chart=fluent-bit-0.20.2
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::xxxxxxxxxx:role/kinesis-write
meta.helm.sh/release-name: fluent-bit
meta.helm.sh/release-namespace: newrelic
Policy permissions attached to role arn:aws:iam::xxxxxxxxxx:role/kinesis-write
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:region:xxxxxxxxxx:deliverystream/kinesis-backend"
    }
  ]
}
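Two side observations, both hedged since the names above may simply be redacted differently: the logs refer to a delivery stream called kinesis_backend while the policy resource ends in deliverystream/kinesis-backend, and the ARN contains the literal word "region" where the real region (eu-west-1 here) would normally go. If either of those is real rather than redaction, Firehose calls would still be denied once the STS problem is solved. The stream's exact name and region can be confirmed with:
# Verify the delivery stream's exact name, region and account
aws firehose describe-delivery-stream --delivery-stream-name kinesis_backend --region eu-west-1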
Trust relationship of role arn:aws:iam::xxxxxxxxxx:role/kinesis-write (I included the OIDC provider of my EKS cluster in account yyyyyyyyyy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::yyyyyyyyy:oidc-provider/oidc.eks.eu-west-1.amazonaws.com/id/AAAAAAAAAAAAAAAAAA"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:AssumeRoleWithWebIdentity"
      ],
      "Condition": {
        "StringEquals": {
          "oidc.eks.eu-west-1.amazonaws.com/id/AAAAAAAAAAAAAAAAAA:sub": "system:serviceaccount:newrelic:fluent-bit"
        }
      }
    }
  ]
}
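One thing the InvalidIdentityToken message actually says is that no OpenID Connect provider was found for that issuer URL in the account STS was called against, i.e. account xxxxxxxxxx that owns the kinesis-write role, not account yyyyyyyyyy that owns the cluster. For cross-account IRSA, my understanding is that the cluster's OIDC issuer also has to be registered as an identity provider in the role's account. A hedged check/creation sketch with the AWS CLI, run with credentials for account xxxxxxxxxx (the thumbprint value is illustrative):
# List OIDC providers registered in the role's account
aws iam list-open-id-connect-providers
# If the EKS issuer is missing, register it (the URL must match the cluster's issuer exactly)
aws iam create-open-id-connect-provider \
  --url https://oidc.eks.eu-west-1.amazonaws.com/id/AAAAAAAAAAAAAAAAAA \
  --client-id-list sts.amazonaws.com \
  --thumbprint-list 0123456789abcdef0123456789abcdef01234567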
I would like to access AWS SQS with short-lived credentials from an Apache Beam pipeline.
In AWS IAM I have created a role with the following trust relationship:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:sts::xxxxxx:assumed-role/gcp_role/gcp-project-session-name",
    "Service": "sqs.amazonaws.com"
  },
  "Action": "sts:AssumeRole"
},
With this role I am able to access SQS from my local machine.
I used AWS BasicSessionCredentials as follows:
BasicSessionCredentials refreshedAWSCredentials = new BasicSessionCredentials(
        refreshedCredentials.getAccessKeyId(),
        refreshedCredentials.getSecretAccessKey(),
        refreshedCredentials.getSessionToken());

AWSSecurityTokenService service = AWSSecurityTokenServiceClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(refreshedAWSCredentials))
        .withRegion(options.getAwsRegion()).build();
I add the credentials object to the pipeline options:
options.setAwsSessionToken(refreshedAWSCredentials.getSessionToken());
options.setAwsCredentialsProvider(new AWSStaticCredentialsProvider(refreshedAWSCredentials));
return Pipeline.create(options);
At the end I always run into the following error:
Caused by: org.apache.beam.sdk.util.UserCodeException: com.amazonaws.services.sqs.model.AmazonSQSException:
The security token included in the request is invalid. (Service: AmazonSQS; Status Code: 403; Error Code:
InvalidClientTokenId; Request ID: 501e9869-ea58-5e80-9ec1-c1exxxx; Proxy: null
I assume that the AWSStaticCredentialsProvider does not know about the session token. That's why I set up an STSAssumeRoleSessionCredentialsProvider, which should work with temporary credentials:
STSAssumeRoleSessionCredentialsProvider stsSessionProvider = new STSAssumeRoleSessionCredentialsProvider
        .Builder(awsRoleArn, awsRoleSession)
        .withStsClient(service)
        .build();
This is the associated pipeline code
p.apply(SqsIO.read().withQueueUrl(options.getSourceQueueUrl())
        .withMaxNumRecords(options.getNumberOfRecords()))
        .apply(ParDo.of(new SqsMessageToJson()))
        .apply(TextIO.write()
                .to(options.getDestinationBucketUrl() + "/purchase_intent/")
                .withSuffix(".json"));
Even when I use the above provider, which also worked locally, I get the same exception shown above. So I am wondering how to set up SqsIO with temporary credentials.
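It might be worth first confirming, from the environment where the pipeline actually runs, that the role can be assumed and that the resulting temporary triple is accepted by SQS; an InvalidClientTokenId often means the session token never reaches the service alongside the access key. A hedged shell sketch (role ARN, region and queue URL are placeholders, not values from the question):
# Assume the role and capture the temporary credential triple
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::xxxxxx:role/ROLE_TO_ASSUME \
  --role-session-name beam-sqs-test \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
# If this succeeds, the credentials are fine and the question becomes how they reach the SqsIO workers
aws sqs receive-message --queue-url https://sqs.REGION.amazonaws.com/xxxxxx/QUEUE_NAME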
I decided to automate the creation of Google Cloud projects using Terraform.
One resource that Terraform will create during the run is a new GSuite user. This is done using terraform-provider-gsuite. So I set everything up (service account, domain-wide delegation, etc.) and it all works fine when I run the Terraform steps from my command line.
Next, instead of relying on my command line, I decided to have a Cloud Build trigger that would execute Terraform init-plan-apply. As you all know, Cloud Build runs under the identity of the GCB service account. This means we need to give that SA the permissions that Terraform might need during the execution. So far so good.
So I run the build, and I see that the only resource Terraform is not able to create is the GSuite user. Digging through the logs I found these two requests (and their responses):
GET /admin/directory/v1/users?alt=json&customer=my_customer&prettyPrint=false&query=email%3Alolloso-admin%40codedby.pm HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
HTTP/2.0 400 Bad Request
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
"error": {
"code": 400,
"message": "Invalid Input",
"errors": [
{
"domain": "global",
"reason": "invalid"
}
]
}
}
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
"changePasswordAtNextLogin": true,
"externalIds": [],
"includeInGlobalAddressList": true,
"name": {
"familyName": "********",
"givenName": "*******"
},
"orgUnitPath": "/",
"password": "********",
"primaryEmail": "*********",
"sshPublicKeys": []
}
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sun, 28 Feb 2021 12:58:25 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
"error": {
"code": 403,
"message": "Request had insufficient authentication scopes.",
"errors": [
{
"message": "Insufficient Permission",
"domain": "global",
"reason": "insufficientPermissions"
}
],
"status": "PERMISSION_DENIED"
}
}
I think this is the API complaining that the Cloud Build Service Account does not have enough rights to access the Directory API. And here is where the situation gets wild.
To do so, I thought of granting domain-wide delegation to the Cloud Build SA. But that SA is special and I could not find a way to grant it.
I then tried to give the serviceAccountUser role to the Cloud Build SA on my SA (the one which has domain-wide delegation), but I did not manage to succeed. In fact the build still throws the same insufficient-permission error.
I then tried to use my SA (with domain-wide delegation) as a custom Cloud Build service account. Also there, no luck.
Is it even possible from a Cloud Build to access certain resources for which normally one would use domain-wide delegation?
Thanks
UPDATE 1 (using custom build service account)
As per John's comment, I tried to use a user-specified service account to execute my build. The necessary setup info has been taken from the official guide.
This is my cloudbuild.yaml file
steps:
- id: 'tf init'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform init
- id: 'tf plan'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform plan
- id: 'tf apply'
  name: 'hashicorp/terraform'
  entrypoint: 'sh'
  args:
  - '-c'
  - |
    terraform apply -auto-approve
logsBucket: 'gs://tf-project-creator-cloudbuild-logs'
serviceAccount: 'projects/tf-project-creator/serviceAccounts/sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com'
options:
  env:
  - 'TF_LOG=DEBUG'
where sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com is the service account which has domain-wide delegation on my Google Workspace.
I then executed the build manually
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json; gcloud builds submit --config cloudbuild.yaml
specifying the JSON private key of the same SA as above.
I would have expected the build to pass, but I still get the same error as above:
POST /admin/directory/v1/users?alt=json&prettyPrint=false HTTP/1.1
Host: www.googleapis.com
User-Agent: google-api-go-client/0.5 (linux amd64) Terraform/0.14.7
Content-Length: 276
Content-Type: application/json
X-Goog-Api-Client: gl-go/1.15.6 gdcl/20200514
Accept-Encoding: gzip
{
"changePasswordAtNextLogin": true,
"externalIds": [],
"includeInGlobalAddressList": true,
"name": {
"familyName": "REDACTED",
"givenName": "REDACTED"
},
"orgUnitPath": "/",
"organizations": [],
"password": "REDACTED",
"primaryEmail": "REDACTED",
"sshPublicKeys": []
}
-----------------------------------------------------
2021/03/06 17:26:19 [DEBUG] Google API Response Details:
---[ RESPONSE ]--------------------------------------
HTTP/2.0 403 Forbidden
Cache-Control: private
Content-Type: application/json; charset=UTF-8
Date: Sat, 06 Mar 2021 17:26:19 GMT
Server: ESF
Vary: Origin
Vary: X-Origin
Vary: Referer
Www-Authenticate: Bearer realm="https://accounts.google.com/", error="insufficient_scope", scope="https://www.googleapis.com/auth/admin.directory.user https://www.googleapis.com/auth/directory.user"
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Xss-Protection: 0
{
"error": {
"code": 403,
"message": "Request had insufficient authentication scopes.",
"errors": [
{
"message": "Insufficient Permission",
"domain": "global",
"reason": "insufficientPermissions"
}
],
"status": "PERMISSION_DENIED"
}
}
Is there anything I am missing?
UPDATE 2 (check on active identity when submitting a build)
As deviavir pointed out in their comment, I tried:
- enabling "Service Accounts" in the GCB settings, but as suspected it did not work;
- double checking the active identity while submitting the build. One of the limitations of using a custom build SA is that the build must be manually triggered. So using gcloud, that means
gcloud builds submit --config cloudbuild.yaml
Until now, when executing this command, I have always preceded it by setting the GOOGLE_APPLICATION_CREDENTIALS variable like this
export GOOGLE_APPLICATION_CREDENTIALS=.secrets/sa-terraform-project-creator.json
The specified private key is the key of my build SA (the one with domain-wide delegation). While doing that, I was always logged in to gcloud with another account (the Owner of the project), which does not have the domain-wide delegation permission. But I thought that by setting GOOGLE_APPLICATION_CREDENTIALS, gcloud would pick up those credentials. I still think that is the case, but I then tried to submit the build while logged in to gcloud with that same build SA.
So I did
gcloud auth activate-service-account sa-terraform-project-creator@tf-project-creator.iam.gserviceaccount.com --key-file='.secrets/sa-terraform-project-creator.json'
and right after
gcloud builds submit --config cloudbuild.yaml
Yet again, I hit the same permission problem when accessing the Directory API.
As deviavir suspected, I am starting to think that during the execution of the build, the call to the Directory API is made with the wrong credentials.
Is there a way to log the identity used while executing certain Terraform plugin API calls? That would help a lot.
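One low-tech way to see which identity a step actually runs as (just a sketch, not an official debugging feature): add a step before the Terraform ones, e.g. using the gcr.io/cloud-builders/gcloud image, that prints the active account and the identity exposed by the build's metadata server, since that is what application default credentials resolve to inside the build:
# Shell body of an extra diagnostic build step
gcloud auth list
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
If that prints the custom build SA and the Directory API still rejects the call with insufficient scopes, the missing piece is probably not the identity but the domain-wide delegation flow itself: tokens minted from the metadata server carry only Cloud Platform scopes, while the gsuite provider typically needs to sign a key-based JWT and impersonate a Workspace admin user to obtain Directory scopes.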
I have an AWS Elasticsearch domain in the eu-west-1 region and have taken a snapshot to a subfolder of an S3 bucket, also in the same region.
I have also deployed a second AWS Elasticsearch domain in another AWS region, eu-west-2.
I added S3 bucket replication between the buckets, but when I try to register the repository on the eu-west-2 AWS ES domain, I get the following error:
500
{"error":{"root_cause":[{"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists"}],"type":"blob_store_exception","reason":"Failed to check if blob [master.dat] exists","caused_by":{"type":"amazon_s3_exception","reason":"Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: 14F0571DFF522922; S3 Extended Request ID: U1OnlKPOkfCNFzoV9HC5WBHJ+kfhAZDMOG0j0DzY5+jwaRFJvHkyzBacilA4FdIqDHDYWPCrywU=)"}},"status":500}
This is the code I am using to register the repository on the new cluster (taken from https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-managedomains-snapshots.html#es-managedomains-snapshot-restore):
import boto3
import requests
from requests_aws4auth import AWS4Auth
host = 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/' # include https:// and trailing /
region = 'eu-west-2' # e.g. us-west-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
# Register repository
path = '_snapshot/es-elk-prod' # the Elasticsearch API endpoint
url = host + path
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
headers = {"Content-Type": "application/json"}
r = requests.put(url, auth=awsauth, json=payload, headers=headers)
print(r.status_code)
print(r.text)
From the logs, I get:
curl -X GET 'https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-mw-elk-prod/_all?pretty'
{
"error" : {
"root_cause" : [
{
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
}
],
"type" : "amazon_s3_exception",
"reason" : "Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 72A59132E2830D81; S3 Extended Request ID: o0XalToNp19HDJKSOVxmna71hx3LkwoSFEobm3HQGH1HEzxOrAtYHg+asnKxJ03iGSDDhUz5GUI=)"
},
"status" : 500
}
The ARN is able to access the S3 bucket, as it is the same ARN I use to snapshot the eu-west-2 domain to S3. Since the eu-west-1 snapshot is stored in a subfolder of the S3 bucket, I added a path to the payload, such that:
payload = {
    "type": "s3",
    "settings": {
        "bucket": "es-prod-eu-west-2",
        "path": "es-elk-prod",
        "region": "eu-west-2",
        "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
}
but this didn't work either.
What is the correct way to restore a snapshot created in one AWS region into another AWS region?
Any advice is much appreciated.
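Before looking further, it might be worth confirming that the snapshot role itself can see the replicated objects in the eu-west-2 bucket, since "Failed to check if blob [master.dat] exists" suggests the domain reaches the bucket but gets a 403 on the object check, and also that the identity signing the registration request has iam:PassRole on that role. A hedged sketch (the role ARN is a placeholder for the redacted one above):
# Assume the snapshot role that the ES domain is given
aws sts assume-role \
  --role-arn arn:aws:iam::1234567:role/EsProd-SNAPSHOT-ROLE \
  --role-session-name es-snapshot-check
# After exporting the returned AccessKeyId/SecretAccessKey/SessionToken:
aws s3 ls s3://es-prod-eu-west-2/es-elk-prod/ --region eu-west-2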
I've had similar, but not identical, error messages along the lines of "The bucket is in this region: eu-west-1. Please use this region to retry the request" when moving from eu-west-1 to us-west-2.
According to Amazon's documentation (under "Migrating data to a different domain") you will need to specify an endpoint rather than a region:
If you encounter this error, try replacing "region": "us-east-2" with "endpoint": "s3.amazonaws.com" in the PUT statement and retry the request.
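Applied to the payload from the question, that guidance amounts to swapping the region setting for an explicit endpoint. A sketch of the adjusted registration call (values copied from the question; if the domain enforces IAM signing, this JSON body belongs in the existing signed Python request rather than a bare curl):
curl -X PUT "https://search-**es-elk-prod**.eu-west-2.es.amazonaws.com/_snapshot/es-elk-prod" \
  -H 'Content-Type: application/json' \
  -d '{
    "type": "s3",
    "settings": {
      "bucket": "es-prod-eu-west-2",
      "endpoint": "s3.amazonaws.com",
      "role_arn": "arn:aws:iam::1234567:role/EsProd-***-snapshotS3role-***"
    }
  }'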
I have an IAM user called server that uses s3cmd to back up to S3.
s3cmd sync /path/to/file-to-send.bak s3://my-bucket-name/
Which gives:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
The same user can send email via SES so I know that the access_key and secret_key are correct.
I have also attached the AmazonS3FullAccess policy to the IAM user and clicked on Simulate policy. I added all of the Amazon S3 actions and then clicked Run simulation. All of the actions were allowed, so it seems that S3 thinks I should have access. The policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
The only way I can get access is to use the root account's access_key and secret_key. I cannot get any IAM user to be able to log in.
Using s3cmd --debug gives:
DEBUG: Response: {'status': 403, 'headers': {'x-amz-bucket-region': 'eu-west-1', 'x-amz-id-2': 'XXX', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'XXX', 'date': 'Tue, 30 Aug 2016 09:10:52 GMT', 'content-type': 'application/xml'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXX</AWSAccessKeyId><StringToSign>GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/XXX/</StringToSign><SignatureProvided>XXX</SignatureProvided><StringToSignBytes>XXX</StringToSignBytes><RequestId>490BE76ECEABF4B3</RequestId><HostId>XXX</HostId></Error>'}
DEBUG: ConnMan.put(): connection put back to pool (https://XXX.s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
Where I have replaced anything sensitive looking with XXX.
Have I missed something in the permissions setup?
Explicitly use the correct IAM access key and secret key with s3cmd, i.e.
s3cmd --access_key=75674745756 --secret_key=F6AFHDGFTFJGHGH sync /path/to/file-to-send.bak s3://my-bucket-name/
The error shown indicates an incorrect access key and/or secret key.
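To separate a bad key pair from an s3cmd-specific signing problem, the same credentials can also be tested with the AWS CLI (a sketch, using the example key values from the command above):
# If this works, the keys are valid and the problem is in .s3cfg/signing;
# if it fails too, regenerate the access key for the IAM user
AWS_ACCESS_KEY_ID=75674745756 \
AWS_SECRET_ACCESS_KEY=F6AFHDGFTFJGHGH \
aws s3 ls s3://my-bucket-name/
SignatureDoesNotMatch can also show up when the secret key was pasted with stray whitespace or got mangled in the config file, so re-running s3cmd --configure and re-entering the key is a cheap second check.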